BACKGROUND OF THE INVENTION
1. Field of the Invention
[0001] The present invention relates generally to improvements in aircraft passenger information
systems and, more particularly, pertains to a new audio information system for the
passengers of an aircraft. Still more specifically, the invention provides means for
generating informational messages which are initially created on a ground-based computer
system and transmitted up to an aircraft in flight to be converted from digital computer
data to audio words and sentences and broadcast in multiple languages via the cabin
audio system to the passengers.
2. Description of the Prior Art
[0002] A wide variety of information systems exist for providing audio messages to a listening
audience. For entirely automatic systems, that is, systems which do not require an
operator, audio messages have traditionally been prerecorded prior to broadcast. Such
information systems are incapable of producing audio messages reciting real time
information. To remedy this, various prior art audio
information systems have been developed which utilize a voice synthesizer device to
convert real time digital information into spoken words or phrases. Unfortunately,
the resulting audio messages are often metallic- or artificial-sounding.
[0003] A particular application for an audio information system for automatically providing
spoken messages is in the aircraft and air transportation arena. General information
systems relating to aircraft abound in the prior art. Such general systems are utilized
for a variety of purposes, such as tracking and analyzing information relating to
air traffic control, displaying information on flights to provide for advanced planning
and scheduling, and monitoring ground traffic at an airport. Other than U.S. Patent
No. 4,975,696 (Salter, Jr. et al.) and copending U.S. application Serial No. 07/763,370
(Pitts), such systems are typically used for administering aircraft traffic.
[0004] In U.S. Patent No. 4,975,696, an electronics package connecting the airborne electronics
of a passenger aircraft to the passenger visual display system of the aircraft was
disclosed. The electronics package provides passengers with a variety of real-time
video displays of flight information, such as ground speed, outside air temperature,
or altitude. Other information displayed by the electronics package includes a map
of the area over which the aircraft flies, as well as destination information, such
as a chart of the destination terminal including aircraft gates, baggage claims areas,
and connecting flight information listings.
[0005] The electronics system of U.S. Patent application Serial No. 07/763,370 displays
flight information with the flight information automatically tailored to the phases
of flight of the aircraft.
[0006] Although the electronics systems of U.S. Patent No. 4,975,696 and U.S. application
Serial No. 07/763,370 provide much useful information in video displays, the systems
do not provide the information over audio channels. Furthermore, as noted above, existing
systems which do provide information over audio channels in other applications have
not successfully provided natural-sounding, automatically-generated spoken messages
incorporating real time information.
OBJECTIVES AND SUMMARY OF THE INVENTION
[0007] Accordingly, it is an object of the present invention to provide a flight information
system wherein the system provides real-time flight information such as speed, altitude,
and passing points of interest, destination airport terminal information such as connecting
flights and gates, and other useful information, over an audio system to passengers
in an aircraft.
[0008] It is another object of the present invention to provide an information system which
automatically generates spoken messages in a natural-sounding voice.
[0009] In accordance with these objectives, the invention provides an information system
for generating spoken audio messages incorporating real-time, i.e. "variable," input
data by assembling digitized spoken words corresponding to the input data into complete
messages or sentences. Each sentence to be assembled includes a framework of fixed
digitized words and phrases, into which variable digitized words are inserted. The
particular digitized variable words which correspond to the specific input data are
retrieved from digital computer memory. All anticipated input parameters are stored
as digitized spoken words such that, during operation of the system, appropriate spoken
words corresponding to the input data can be retrieved and inserted into the framework
of the sentence. In this manner, a complete natural-sounding spoken message which
conveys the input data is automatically generated for broadcast.
[0010] More specifically, the system includes a memory means for storing digitized spoken
words, a receiver for receiving input data, and a data processor. The data processor
means includes a retrieval means for retrieving selected digitized words corresponding
to the input data and a message assembly means for assembling the retrieved words
into audio messages.
[0011] Some of the digitized spoken words are stored in a variety of different inflection
forms. The data processor means includes means for selecting digitized forms of the
words having the proper inflection for inclusion in the spoken sentence, such that
a natural-sounding spoken sentence is achieved.
[0012] The various digitized words and phrases may be recorded in a variety of languages,
such that a spoken message may be generated in any of a variety of different languages.
[0013] In accordance with a preferred embodiment, the audio information system is mounted
aboard a passenger aircraft for automatically generating informative messages for
broadcast to the passengers of the aircraft. The system includes a receiver for receiving
flight information from the on-board navigation systems of the aircraft and from ground-based
transmitters. The input flight information, such as the location of the aircraft or
the travel time to destination, is automatically communicated to the passengers in
the form of natural-sounding spoken sentences. The system may also generate audio
messages identifying points of interest in the vicinity of the aircraft.
[0014] In one embodiment, the system generates spoken messages describing destination terminal
information received from a ground-based transmitter including connecting gates and
baggage claim areas. The system assembles audio messages incorporating the destination
terminal information received from the ground and broadcasts the assembled messages
to the passengers. The system is alternatively configured to simultaneously provide
the destination terminal information in both video and audio form.
[0015] In another embodiment, the invention provides audio messages to aircraft passengers
wherein the messages are tailored to the phases of flight of the aircraft. In accordance
with this embodiment, the system includes data processor means utilizing received
flight information for determining a current phase of the flight plan and for inputting
information corresponding to the current phase of the flight plan to the audio system
for broadcast to the passengers. In this manner, a wide variety of informative spoken
messages may be automatically provided to the passengers, with the content of the
messages tailored to the various phases of flight of the aircraft. For example, the
system may automatically generate one set of spoken messages during the takeoff phase
of the flight of the aircraft, and a separate set of messages during the en route
cruise phase of the aircraft. As with the previously-described embodiments, the messages
are automatically generated by the system in response to input flight information
which is received by the system.
BRIEF DESCRIPTION OF THE DRAWINGS
[0016] Other objects and many of the attendant advantages of this invention will become
apparent as the invention becomes better understood by reference to the following
detailed description when considered in conjunction with the accompanying drawings,
in which like reference numerals designate like parts throughout the figures thereof,
and wherein:
Figure 1 is a flow chart representing a method in accordance with the invention for
assembling sentences from digitized words;
Figure 2 is a flow chart representing a method for selecting words of proper inflection
for use in assembling sentences having numbers spoken in a natural-sounding voice;
Figure 3 is a block diagram, somewhat in pictorial form, of an aircraft passenger
information system in accordance with a preferred embodiment of the present invention;
Figure 4 is a block diagram of the data processor of Figure 3;
Figure 5 is a representation of a screen that may be displayed by the system of the
present invention while corresponding audio messages are broadcast;
Figure 6 is another representation of a screen that may be displayed by the system
of the present invention while corresponding audio messages are broadcast;
Figure 7 provides a flow chart of an alternative embodiment of the invention wherein
audio messages conveying flight information such as points of interest are generated;
Figure 8 is a representation of a video display screen that may be displayed by the
system of Figure 7 while corresponding audio messages are broadcast; and
Figure 9 is a representation of another video display screen that may be displayed
by the system of Figure 7 while corresponding audio messages are broadcast.
DESCRIPTION OF THE PREFERRED EMBODIMENTS
[0017] The following description is provided to enable any person skilled in the art to
make and use the invention and sets forth the best modes contemplated by the inventor
of carrying out his invention. Various modifications, however, will remain readily
apparent to those skilled in the art, since the generic principles of the present
invention have been defined herein specifically to provide an audio information system
for receiving real time data and for generating natural-sounding spoken messages reciting
the real time data.
[0018] Referring to Figure 1, a spoken message assembler system 200 is illustrated. Message
assembler 200 receives input information in the form of digital alphanumeric data
and generates natural-sounding spoken sentences which recite the received data for
output to a listening audience through a speaker system, perhaps a public address
(PA) system. To this end, message assembler 200 includes hundreds or thousands of
digitized words and phrases covering all anticipated words which may be required to
create sentences reciting the input data. The words and phrases are prerecorded from
a human voice in a digitized format and stored in computer ROM. Message assembler
200 assembles sentences by retrieving appropriate digitized words and phrases and
assembling the words and phrases into proper syntactic sentences. Preferably, some
of the words and phrases are stored in a number of digitized forms, each having a
different inflection, such that the assembled sentence has proper inflection in accordance
with natural speech.
[0019] In this manner, input information in the form of digital data can be communicated
to a listening audience in the form of natural-sounding spoken sentences. The input
data is received and the spoken sentences are generated and broadcast entirely automatically
without the need for a human operator or human speaker.
[0020] In a preferred embodiment, discussed in detail below, the spoken message assembler
is employed within an audio/video information system for use in the passenger compartment
of an aircraft. In that embodiment, the message assembler receives flight information
such as ground speed, outside air temperature, destination terminal, connecting gate,
or baggage claim area information. The message assembler then constructs natural-sounding
sentences for broadcasting the flight information to the passengers in the aircraft.
The spoken messages may be broadcast over a public address system of the aircraft
for all passengers to hear, or may be broadcast over individual passenger headphone
sets. Also, as will be described below, the spoken message assembler may be configured
to generate sentences in a variety of different languages for either sequential broadcast
or simultaneous broadcast over multiple channels.
[0021] The spoken message assembler of the system thus provides a wide range of useful
information to the passengers, while freeing the flight crew from having
to provide the information to the passengers. As will be described below, the system
may additionally include a video display system for simultaneously displaying the
flight information over a video screen or the like.
[0022] Although advantageously implemented within an information system for passenger aircraft,
the message assembler of the invention is ideally suited for any application benefitting
from the automatic communication of input data to a listening audience.
[0023] Figure 1 provides a flow chart illustrating the operation of message assembler 200.
Initially, at 202, the message assembler receives an input sentence over a data line
201 in a digital alphanumeric format suitable for input and manipulation by a computer
or similar data processing device. The data is received within a sentence format having
specific data fields. For example, one data field of the input sentence may provide
the time of day. Within that data field, an alphanumeric sequence is received which
provides the time of day, e.g., "12:32PM." A separate data field may provide a destination
city for an aircraft flight, e.g., "Los Angeles." Message assembler 200 may be preprogrammed
to receive any of a number of suitable data formats. Any format is suitable so long
as the variable data is received within preselected fields such that the message assembler
can determine the type of data contained within the received message.
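As a concrete illustration of such a format, the record below is parsed into named fields so that the assembler can identify the data type of each value. The delimiter and field names here are hypothetical, not taken from the specification; any layout with preselected fields would serve equally well.

```python
def parse_input(record):
    """Split a delimited record such as 'TIME=12:32PM|DEST=Los Angeles'
    into named fields, so each value's data type is known from its field."""
    return dict(field.split("=", 1) for field in record.split("|"))
```

For example, `parse_input("TIME=12:32PM|DEST=Los Angeles")` yields a dictionary mapping the hypothetical field names `TIME` and `DEST` to their values.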
[0024] For each type of data, message assembler 200 stores all possible instances of the
data type in a digitized spoken form in a mass storage device 211. For the example
of destination cities, the message assembler stores the names of all cities that the
airline flies into or out of in digitized spoken form. Thus, the message assembler
stores the words "New York," "Los Angeles," "Chicago," etc. in ROM.
[0025] For data types requiring numbers, such as the time of day, message assembler 200
stores all necessary component numbers in digitized form. To recite the time "12:10,"
message assembler 200 retrieves and combines the words "twelve" and "ten." To recite
the time "1:57," message assembler 200 retrieves and combines the words "one," "fifty,"
and "seven." To handle any input time of day, message assembler 200 need only store
the component numbers 0-19 and the tens values 20 through 50 in digitized form. The
numbers 1-9 are assembled either as "one" or "oh-one," etc., to allow the handling
of both hour and minute values between 1 and 9.
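The decomposition just described can be sketched as follows. The function and table names are illustrative, teen values are included as single stored words (the example above retrieves "twelve" as one word), and the on-the-hour case and the AM/PM suffix are left aside:

```python
DIGITS = ["zero", "one", "two", "three", "four",
          "five", "six", "seven", "eight", "nine"]
TEENS = ["ten", "eleven", "twelve", "thirteen", "fourteen",
         "fifteen", "sixteen", "seventeen", "eighteen", "nineteen"]
TENS = {20: "twenty", 30: "thirty", 40: "forty", 50: "fifty"}

def number_words(n):
    """Decompose a value 0-59 into its component spoken words."""
    if n < 10:
        return [DIGITS[n]]
    if n < 20:
        return [TEENS[n - 10]]
    tens, ones = divmod(n, 10)
    words = [TENS[tens * 10]]
    if ones:
        words.append(DIGITS[ones])
    return words

def speak_time(hhmm):
    """'12:10' -> ['twelve', 'ten']; minutes 1-9 are spoken as 'oh'
    plus the digit, so '1:05' -> ['one', 'oh', 'five']."""
    hh, mm = hhmm.split(":")
    words = number_words(int(hh))
    minutes = int(mm)
    if 0 < minutes < 10:
        words += ["oh", DIGITS[minutes]]
    elif minutes >= 10:
        words += number_words(minutes)
    # minutes == 0 ("on the hour") is left unhandled in this sketch
    return words
```

Each returned word would index a prerecorded digitized recording in ROM, rather than being synthesized phonetically.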
[0026] In this manner, the message assembler stores the various possible instances of the
various possible data types that may be received within an incoming message. The specific
data fields that are employed and the specific instances of the data stored for each
data field are configurable parameters of the system. Although a digitized data base
can be constructed to provide for almost any type of information, the system is preferably
employed where a limited number of types of information must be conveyed to a listening
audience, especially where each type of information has a fairly limited range of
possible instances. In such case, the total number of digitized spoken words that
must be stored in ROM is fairly limited. A system requiring a greater number of digitized
words may be implemented using a computer with a greater amount of ROM.
[0027] An exemplary input sentence format received by the system, at step 202, is provided
in Table I.

TABLE I
(airline name) flight (flight number) will depart (city name) from terminal
(terminal name) gate (gate number) at (departure time) (AM or PM)
[0028] The exemplary sentence format of Table I provides the departure gate number and departure
time for particular departing flights. Thus, for each flight departing from the destination
terminal, the input sentence of Table I provides a framework for communicating the
departing flight's airline name and flight number and the departing flight's gate
number and departure time, along with the destination city and destination airport
terminal.
[0029] An input sentence includes a framework of fixed words interlaced with variable words
(shown in parentheses) in Table I. In the input sentence shown in Table I, the fixed
words are "flight," "will depart," "from," "terminal," "gate," and "at." The variable
data for inclusion within the sentence include the airline name, the flight number,
the city name, the terminal name, the gate number, the departure time, and either
"AM" or "PM" appended to the departure time. Each unit of the sentence, comprising
either a single fixed or variable word or a fixed or variable phrase, is denoted by
a position number. For example, the variable "airline name" is identified as position
1. The fixed word "flight" is identified as position 2. In this manner, each fixed
or variable data unit within the input sentence is represented by a unique number.
[0030] At 206, the system examines the first position within the input sentence, initially
position 1. At 208, the system determines whether position 1 corresponds to a fixed
word or a variable word. Continuing with the example of Table I, position 1 requires
a variable word. Accordingly, the system proceeds to step 210 to retrieve the digitized
variable word from the data base of the system, which corresponds to the input airline
name to be included at position 1.
[0031] The data base of variable digitized words is set up to include the names of currently
operating airlines, with the names digitized from a recording of the spoken airline
name. Thus, the data base may include, for example, "ABC Airlines" or "XYZ Airlines"
in digitized form. To retrieve the digitized spoken name of the proper airline, the
system examines the received message for an alphanumeric representation of the airline
name, then, based on the alphanumeric, retrieves the corresponding digitized spoken
name from the system's data base. Once retrieved, the digitized data providing the
spoken airline name is immediately broadcast to the passengers. Alternatively, the
digitized spoken airline name may be transferred to a temporary memory unit (not shown
in Figure 1) of the system for subsequent broadcast. In Figure 1, the broadcast step
is identified by reference numeral 212.
[0032] At step 214, the system determines whether the final position of the sentence format
has been processed. If not, the system increments a position pointer and returns along
flow line 216 to process the next position within the sentence format. Thus, in the
example of Table I, the system returns to process position 2. At step 208, the system
determines that position 2 requires a fixed digitized word. Hence, the system proceeds
to step 218 to retrieve the fixed word designated by the sentence format. In this
case, the fixed word is "flight." Hence, the system retrieves digitized data presenting
the spoken word "flight" from the data base and broadcasts the retrieved word.
[0033] Again, the system returns along data flow line 216 to process a new position within
the sentence format. In the example of Table I, the next position, position 3, calls
for a variable word setting forth the flight number. Accordingly, the system proceeds
to step 210, wherein the system retrieves the digitized data setting forth the spoken
flight number corresponding to the alphanumeric flight number designation received
in the input message. Thus, if the flight number received in the input message is
represented by the alphanumeric sequence "1059," the system retrieves digitized data
providing the spoken words "ten," "fifty," and "nine." To this end, the system maintains
a "number" data base which stores spoken numbers for use with any data type requiring
numbers. Exemplary data types such as flight number, gate number, baggage claim area,
departure time, etc. thereby share a common data base. Thus, the digitized spoken
words "ten," "fifty," and "nine" are retrieved in circumstances requiring that the
number "1059" be spoken, such as if the departing gate number is "1059," the departure
time is "10:59," or the baggage claim area is "1059." As will be described below,
the numbers are preferably stored in a variety of different styles and inflections
to allow natural-sounding numbers to be recited in any circumstances.
[0034] Once the digitized words "ten," "fifty," and "nine" are retrieved from memory and
broadcast, the system proceeds to the next position wherein the system retrieves the
fixed digitized words "will depart." Execution continues, during which time the system
processes each successive position within the sentence format. At each position, the
appropriate variable or fixed digitized words are retrieved from the data base memory
and immediately broadcast. Execution proceeds at a sufficient speed such that the
words are broadcast one after the other in close succession to produce a natural-sounding
sentence.
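The position-by-position loop of steps 206 through 218 can be sketched as follows. The format list mirrors the fixed and variable positions described for Table I, and the dictionary standing in for the system's data base maps keys to placeholder strings rather than to digitized audio; all identifier names are assumptions for illustration.

```python
# Each tuple is one position of the sentence format: a fixed framework
# word, or a variable slot to be filled from the input data fields.
SENTENCE_FORMAT = [
    ("variable", "airline"),         # position 1
    ("fixed",    "flight"),          # position 2
    ("variable", "flight_number"),   # position 3
    ("fixed",    "will depart"),
    ("variable", "city"),
    ("fixed",    "from terminal"),
    ("variable", "terminal"),
    ("fixed",    "gate"),
    ("variable", "gate_number"),
    ("fixed",    "at"),
    ("variable", "departure_time"),
]

def assemble(sentence_format, fields, variable_db):
    """Process each position (steps 206-208); emit the fixed word
    (step 218) or retrieve the variable word matching the input
    data (step 210). Step 212 would broadcast each word in turn."""
    retrieved = []
    for kind, key in sentence_format:
        if kind == "fixed":
            retrieved.append(key)                       # fixed framework word
        else:
            retrieved.append(variable_db[fields[key]])  # digitized variable word
    return retrieved
```

Only the format list changes from sentence to sentence; the retrieval loop itself is the same for every message.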
[0035] The assembled sentence is thereby "spoken" in the same manner in which a conventional
compact disc system broadcasts words or music; that is, the digitized words are "played"
in succession. Appropriate pauses may be included between words within the sentence
to ensure a natural sentence flow.
[0036] Continuing with the example of Table I, the resulting "spoken" sentence might be
"XYZ Airlines Flight ten fifty-nine will depart Chicago from Terminal One, gate twenty-three
at twelve forty-seven PM." The sentence is broadcast by means described below to the
passengers in the aircraft, who thereby hear a natural-sounding sentence as if spoken
by a member of the flight crew. By assembling the sentence from digitized words and
phrases, rather than by using a voice synthesizer wherein words are created by phonetically
"sounding out" individual syllables or words, a more natural-sounding sentence is
achieved.
[0037] At step 222, the system returns to step 202 to receive and process a new message.
The new message may provide the departing flight information for a different airline
flight. Typically, an incoming message will provide the departing flight information
for many connecting flights, perhaps 10-20 such flights. Thus, the system will reexecute
the steps shown in Figure 1 a number of times to process the input data corresponding
to each of the connecting flights, to thereby generate sentences reciting all of the
connecting gate information.
[0038] In an alternative embodiment, the retrieved words are stored in a temporary memory
for later broadcast. Such a system might include parallel processing capability such
that, while a first sentence is being broadcast from temporary memory, a second sentence
is being assembled.
[0039] Once all of the information within a particular incoming message is processed to
generate one or more spoken sentences, the system waits to receive a new message.
The new message may set forth different types of information within a different sentence
format. Typically, the system will receive numerous input sentence formats to allow
the system to broadcast a wide variety of natural-sounding sentences conveying a wide
variety of possible input data.
[0040] Also, although generally described with respect to an exemplary flight information
system for providing flight information to passengers of an aircraft, the message
assembler shown in Figure 1 is advantageously employed in any environment where variable
input information must be communicated to a listening audience over an audio system.
In particular, the system is advantageously employed wherever input data to be broadcast
falls within a finite number of data types, each having a range of anticipated values
which may be stored in digitized spoken form in a data base.
[0041] With reference to Figure 2, a method by which the invention provides spoken numbers
of proper style and inflection will now be described.
[0042] A natural-sounding sentence is composed of words of differing inflections. Automatically-generated
sentences which do not use the proper inflection for component words may sound artificial
or metallic. Accordingly, to assemble a natural-sounding sentence from digitized words,
the proper inflection for the component words is preferably determined.
[0043] Generally, it has been found that three broad forms of inflection are necessary
to achieve natural-sounding sentences incorporating numbers. The three forms of inflection
are falling, rapidly rising, and slowly rising. A word spoken at the end of a sentence
generally has a falling inflection. A word spoken in the middle of a sentence generally
has a rapidly rising inflection if it is closely followed by another word. A word
spoken in the middle of a sentence generally has a slowly rising inflection if it
is not followed closely by another word. In accordance with the invention, at least
a portion of the words used in assembling sentences are stored in three different
digitized forms corresponding to the three inflection forms. Thus, a version of the
word having the proper inflection can be retrieved, depending upon the location of
the word within the sentence. In a possible embodiment, all words in the data base
of digitized words are recorded under all three different inflections.
[0044] In a preferred embodiment, only "number" words, i.e., words used to recite numeric
strings, are stored under all three inflection forms. It has been found that input
sentence formats may be selected wherein all other words need be stored under only
one inflection to achieve sufficiently natural-sounding sentences. For example, the
word "and" need only be stored under the slowly rising inflection form because the
word "and" will always appear in mid-sentence not followed closely by another word.
[0045] Numbers are stored under all three inflections, since numbers may appear in a variety
of positions within a sentence or at the end of a sentence. For example, the number
string "1024" may appear in the middle of a sentence followed closely by another word:
"Flight 1024A will depart from gate 15." Alternatively, the number string "1024" may
appear in the middle of a sentence not followed closely by another word: "Flight 1024
will depart from gate 15." Finally, the string "1024" may appear at the end of a sentence:
"Flight 15 will depart from gate 1024." Thus, all numbers are stored under all three
inflection forms such that the proper inflection form can be retrieved depending upon
the position of the number within the sentence.
[0046] In the example just described, the numeric string "1024" is actually composed of
three component numbers: "ten," "twenty," and "four." The system processes the inflection
of each of the individual component words separately. In this example, the word "ten"
is followed closely by the word "twenty" and the word "twenty" is followed closely
by the word "four." Accordingly, the words "ten" and "twenty" both have a rapidly
rising inflection, regardless of the position of "1024" in the sentence. In this example,
only the word "four" will have a slowly rising, rapidly rising, or falling inflection,
depending upon the location of the number "1024" within the sentence.
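The inflection rule just described can be sketched as follows. The context labels are assumptions about how the surrounding sentence position would be encoded; every component word except the last is closely followed by the next component, so only the final word's form depends on the string's position in the sentence:

```python
FALLING, RAPID_RISE, SLOW_RISE = "falling", "rapidly rising", "slowly rising"

def number_inflections(component_words, context):
    """Assign an inflection form to each component word of a numeric
    string. context describes what follows the whole string:
    'end_of_sentence', 'followed_closely', or 'not_followed_closely'."""
    # All but the last component are closely followed by the next one,
    # so they always take the rapidly rising form.
    forms = [RAPID_RISE] * (len(component_words) - 1)
    last = {"end_of_sentence":     FALLING,
            "followed_closely":    RAPID_RISE,
            "not_followed_closely": SLOW_RISE}[context]
    return forms + [last]
```

Thus for "1024" the words "ten" and "twenty" always receive the rapidly rising form, while "four" receives whichever form the string's sentence position dictates.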
[0047] The system also selects a proper style for reciting numbers. The system characterizes
numbers according to one of two general numeric styles. In the first, "short" style,
the words "hundred" or "thousand" are not spoken. For example, in the short style,
the number "1024" is spoken as "ten twenty-four." In a "long" numeric style, the words
"hundred" or "thousand" are inserted. For example, the number "1024" is recited
as "one thousand twenty-four."
[0048] When embodied within an information system for a passenger aircraft, the short style
is used for reciting gate numbers, flight numbers, baggage claim areas, and the like.
The long style is used for reciting altitudes, distances, temperatures, and the like.
Thus, "flight 1024" is recited as "flight ten twenty-four," whereas "1024 feet" is
recited as "one thousand twenty-four feet."
[0049] During assembly of sentences incorporating numbers, the message assembler determines
the proper numeric style and retrieves the digitized words appropriate to the selected
numeric style. Thus, in the example, to recite a "flight 1024," the system retrieves
the individual words "flight," "ten," "twenty," and "four" from the digitized word
data base for playback in succession. To recite "1024 feet," the system retrieves
the individual digitized words "one," "thousand," "twenty," "four," and "feet."
[0050] A method by which the invention accounts for numeric style and numeric inflection
to generate natural-sounding spoken numbers is shown in Figure 2. The steps of Figure
2 are executed as a part of the execution of step 210 of Figure 1. However, the steps
of Figure 2 are executed only for processing alphanumeric strings which include numbers.
Thus, other variable words, such as destination cities, i.e., "Los Angeles," are not
processed using the procedure of Figure 2.
[0051] For alphanumeric strings with numbers, the system, at step 250, initially extracts
all numeric strings from the input alphanumeric character string. Thus, for input
string "1024A," the system extracts "1024." As another example, for the string "10B24,"
the system extracts the number strings "10" and "24." Thus, an input character string
may contain one or more numeric strings. For each extracted numeric string, the system,
at step 252, determines the proper numeric style for the numeric string. Thus, if
the numeric string is "1024," the system determines whether this should be recited
in the long style or the short style. This determination is made from an examination
of the data type of the input character string. For each numeric data type, the system
stores an indicator of the corresponding style. For example, if the data type is a
"flight number," then the short style is used. If the data type for the input character
string is an altitude, then the long style is selected. The proper data type may be
determined from the location of the character string within the input data block.
Alternatively, the data block may include headers immediately prior to each data type,
designating the data type.
[0052] Once the proper numeric style is determined, the system, at step 254, parses the
numeric string into its component numbers according to the selected numeric style.
Thus, "1024" is parsed as "1000" and "24" for the long numeric style, and "10" and
"24" for the short numeric style.
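Steps 250 through 254 might be sketched as follows. The style table mirrors the examples given for the aircraft embodiment; the function names are illustrative, and the long-style parse handles only values up to the thousands shown in the examples:

```python
import re

# Data types recited in the short style versus the long style,
# per the examples given for the aircraft embodiment.
STYLE_BY_TYPE = {
    "flight_number": "short", "gate_number": "short", "baggage_claim": "short",
    "altitude": "long", "distance": "long", "temperature": "long",
}

def extract_numeric_strings(alphanumeric):
    """Step 250: extract every numeric run, e.g. '10B24' -> ['10', '24']."""
    return re.findall(r"\d+", alphanumeric)

def parse_numeric(numeric, style):
    """Step 254: split a numeric string into component numbers
    according to the selected numeric style."""
    if style == "long":
        # '1024' -> ['1000', '24'] ("one thousand twenty-four")
        n = int(numeric)
        parts = []
        if n >= 1000:
            parts.append(str(n // 1000 * 1000))
            n %= 1000
        if n or not parts:
            parts.append(str(n))
        return parts
    # Short style: pair digits from the right,
    # '1024' -> ['10', '24'] ("ten twenty-four").
    parts = []
    while numeric:
        parts.insert(0, numeric[-2:])
        numeric = numeric[:-2]
    return parts
```

Each parsed component would then be expanded into its spoken word equivalent and tagged with an inflection form before retrieval from the digitized word data base.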
[0053] Next, at step 256, the system assembles a word equivalent of the alphanumeric string
which includes any parsed numeric strings, as well as any letters or other characters.
Once a word equivalent of the alphanumeric string is assembled in sequential order,
the system, at step 258, determines the inflection of all component numbers included
within the word equivalent of the alphanumeric string. To this end, the system examines
each "number" word within the string to determine whether the word is positioned in
the middle of the string or at the end of the string. If in the middle, then the rapidly
rising inflection form is chosen. If the "number" word occurs at the end of the string,
then the system must determine what words, if any, follow the alphanumeric string.
If the alphanumeric string constitutes the final portion of a sentence, a "number"
word at the end of the string therefore falls at the end of the sentence. Hence, the
falling inflection is chosen. If, on the other hand, the alphanumeric string is positioned
in the middle of a sentence, then a "number" word falling at the end of the string
will be assigned the slowly rising inflection.
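The three-way inflection rule of this paragraph reduces to a small decision function. Function and inflection names are assumptions; the rule itself follows the text.

```python
# Sketch of the inflection selection of paragraph [0053].

def choose_inflection(position_in_string, string_is_sentence_final):
    """
    position_in_string: "middle" or "end" -- where the "number" word
        falls within the alphanumeric string.
    string_is_sentence_final: True if the alphanumeric string forms
        the final portion of the sentence.
    """
    if position_in_string == "middle":
        return "rapidly rising"
    # The "number" word ends the string:
    if string_is_sentence_final:
        return "falling"        # word also falls at the end of the sentence
    return "slowly rising"      # the sentence continues past the string
```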
[0054] Once the proper inflection form for each component number is determined at step 258,
the system is ready to retrieve the digitized spoken words corresponding to all components
of the word equivalent of the alphanumeric string. This retrieval is accomplished
at step 260. Processing continues at step 212 of Figure 1, which operates to broadcast
the retrieved words. As the sentence is broadcast to the passengers, numbers recited
within the sentence are thereby spoken in the proper style and with the proper inflection.
[0055] The system shown in Figures 1 and 2 may be configured to assemble sentences in any
of a variety of languages. To handle various languages, the data base of digitized
words must include the necessary foreign words and phrases. Also, each different language
has different sentence formats. For example, for a German sentence, the sentence format
may have the fixed verb of the sentence at the end of the sentence format, rather
than near the beginning of the sentence format as commonly found in English sentences.
[0056] Each alternative language may be handled by a separate microprocessor device. Alternatively,
a single microprocessor device may sequentially process all languages.
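Per-language sentence formats of the kind described in [0055] can be modeled as templates with slots for the variable words. The slot names and the German wording are illustrative assumptions; the data base of digitized words described above would supply the spoken equivalents.

```python
# Hypothetical per-language sentence formats, per paragraph [0055].

SENTENCE_FORMATS = {
    # Verb ("arriving") near the beginning, as is common in English.
    "en": "{airline} flight {number} arriving in {city} at {time}",
    # Separable verb particle ("an") at the end of the sentence format,
    # illustrating the German word order noted in the text.
    "de": "{airline} Flug {number} kommt um {time} in {city} an",
}

def format_sentence(language, **slots):
    """Fill the fixed sentence format of the given language."""
    return SENTENCE_FORMATS[language].format(**slots)
```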
[0057] In accordance with a preferred embodiment shown in the remaining figures, the spoken
message assembler described above is implemented within an on-board flight information
system for providing flight information to airline passengers. In a first embodiment,
the system provides connecting gate and baggage claim area information. In a second
embodiment, the system provides flight information such as air speed, altitude, and
information regarding points of interest over which the aircraft travels. This information
may be tailored to the various phases of flight of the aircraft.
[0058] The heart of the system, a data processor 13, receives messages containing flight
information over a data bus 59 from various systems of the aircraft. Examples of such
systems include an ACARS receiver 19, a navigation system 15, an aircraft air data
system 17, and a maintenance computer 21. Each of these systems, from which information
is received, is entirely conventional and will not be described in detail. Data processor
13 may be connected to any one or a multiple of these systems depending on the type
of information desired to be displayed to the passengers of the aircraft. Data processor
13 may be controlled by a control unit 22, which includes various means for allowing
for manual activation of the data processor and control over the functions of the
data processor.
[0059] Data processor 13 generates audio messages using the message assembler described
above and transmits the audio messages in the form of audio signals over an audio
link line 91 to an audio selector unit 92 that routes the audio signal to a plurality
of conventional audio systems. For example, the audio signals may be transmitted over
a link line 93 to a public address speaker 95 in the passenger compartment of the
aircraft or over link line 97 to a plurality of individual passenger headphone sets
96 via individual multichannel selectors 94.
[0060] The data processor may also generate video display screens which set forth the data
incorporated in the audio messages. The video display screens are output as a video
signal and transmitted over a video link line 31 to a conventional video selector
unit 29 that routes the video signal to a plurality of conventional video display
systems. For example, the video signal may be transmitted over link lines 39 to a
preview monitor 33, or over link lines 43 to a video monitor 37, or over link lines
41 to a video projector 35, which projects the sequences of video screens received
onto a video screen 45.
[0061] Message assembler 200 and its data base of digitized words and phrases are components
of data processor 13 and, hence, are not shown separately in Figure 3.
[0062] It should be understood that this particular illustration of an aircraft audio/video
display system is only set forth as an example of one of many such systems that may
be utilized and, therefore, should not be considered as limiting the present invention.
[0063] The first embodiment, wherein connecting gate and baggage claim area information
is processed, will now be described with particular reference to Figures 3-6. In Figure
3, a conventional ACARS/AIRCOM/SITA receiver 19 is shown. This receiver receives connecting
gate and baggage claim area information from an airline central computer 47 via a
transmitting antenna 51 over carrier waves 53. A link line 49 connects airline computer
47 to transmitting antenna 51. However, any transmitter receiver system could be used,
including a satellite communication system, and this invention is not limited to the
ACARS system referred to herein.
[0064] Destination airport information may also be entered into the system via an optional
data entry terminal (not shown).
[0065] Assuming that the ground base station and the aircraft are communicating over an
ACARS/AIRCOM/SITA communication system, information transmitted from ground base computer
47 is received by the ACARS/AIRCOM/SITA receiver 19. The data is output from the ACARS/AIRCOM/SITA
receiver 19 to the data processor 13 in a format such as described in ARINC characteristic
597, 724, or 724B.
[0066] In order for the data processor 13 to promptly process the information received,
the data is assumed to be in a specific fixed format when it is received from ACARS
receiver 19. The format illustrated in Table II is an example of a possible format
for up-linked data:

[0067] The data format contains strings of characters which are utilized by data processor
13 to generate audio messages and optional video displays. Exemplary strings are the
flight number string "966," the destination airport string "Frankfurt," the arrival
gate string "17," and the baggage claim area string "C." For audio messages, relevant
data is extracted from the strings and incorporated into audio messages via message
assembler 200. For video displays, these strings are used both to retrieve an airport
chart representing the destination airport, and for direct inclusion in video displays.
[0068] From information contained within the exemplary data block of Table II, the following
spoken audio messages may automatically be generated:
"Lufthansa flight nine six six arriving in Frankfurt at eleven forty five A M,
terminal A, gate number seventeen, baggage claim area C."
"Air France flight eight forty one will be departing for Paris from terminal A
gate ten at twelve fifteen."
"Lufthansa flight five oh two will be departing for Hamburg from terminal B gate
five at twelve thirty."
"Swissair flight sixty five will be departing for Zurich from terminal B gate two
at twelve thirty five."
[0069] To generate these spoken word audio messages, the data processor utilizes the message
assembler, described above, to extract relevant data and to assemble messages reciting
the data.
[0070] To generate the message "Lufthansa flight nine six six arriving in Frankfurt at eleven
forty five A M, terminal A, gate number seventeen, baggage claim area C," the message assembler
extracts the variable data "Lufthansa," "966," "Frankfurt," "11:45," "A," "17," and
"C" for incorporation into a sentence having fixed words "flight," "arriving in,"
"at," "terminal," "gate number," and "baggage claim area." The message processor retrieves
spoken word equivalents of the alphanumeric data extracted from the message in the
manner described above. The numbers "966," "11:45," and "17" contained within the
flight number, arrival time, and arrival gate may be processed according to the inflection
and style manipulation procedure described above with reference to Figure 2.
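The interleaving of fixed and variable words described in this paragraph can be sketched as follows. The function name and argument order are illustrative; the fixed-word list comes from the text.

```python
# Sketch of the fixed/variable word assembly of paragraph [0070].

FIXED_WORDS = ["flight", "arriving in", "at", "terminal",
               "gate number", "baggage claim area"]

def assemble_arrival_message(airline, flight, city, time,
                             terminal, gate, claim_area):
    """Interleave fixed sentence words with the extracted variable data."""
    variables = [airline, flight, city, time, terminal, gate, claim_area]
    parts = [variables[0]]
    for fixed, variable in zip(FIXED_WORDS, variables[1:]):
        parts.extend([fixed, variable])
    return " ".join(parts)
```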
[0071] To generate the connecting flight information messages, the message assembler extracts
the various fixed and variable words from the input message, retrieves spoken word
equivalents for these alphanumeric values, and broadcasts the spoken word equivalents
in succession to produce complete sentences.
[0072] A total of four different audio messages are thereby generated from the data contained
within the data block of Table II. The four messages are generated by executing the
steps of Figure 1 a total of four times. Once completed, the system waits until a
new input message is received.
[0073] An extremely wide range of spoken messages can be generated providing a wide variety
of useful information. For example, input messages may provide flight information
such as altitude, ground speed, outside air temperature, time or distance to destination,
time or distance from destination, etc. Also, weather-related messages may be received
and processed, such as messages describing the temperature and weather conditions
at the destination airport. Alternatively, weather conditions within the vicinity
of the aircraft may be described, including wind speed, visibility, ceiling, etc.
Marine-related messages may also be provided, specifying, for example, the surf,
tide, and marine visibility.
[0074] In general, any input message can be processed so long as each of the component words
for inclusion in the sentence is stored in the digitized memory of the system. Thus,
a wide variety of custom messages may be typed into a ground-based computer, then
transmitted to the aircraft for conversion to a spoken audio message. The variety
of possible messages is limited only by the number of digitized words stored in the
digitized memory of the system. Accordingly, by providing a system with a larger vocabulary
of digitized words, a wider range of audio messages can be generated.
[0075] The system may also generate an optional video display for presentation to the passengers
while the audio messages are simultaneously provided over the speaker system. To this
end, the system may extract the above-described flight information from the input
message of Table II and format the information for a textual display. Alternatively,
rather than providing a simple textual display, the system may retrieve a map of the
destination terminal and provide icons or the like identifying the locations of the
various arrival and departure gates on the map.
[0076] Data processor 13 operates on the information it receives in a manner illustrated
by the flowchart of Figure 4. The input to data processor 13 is from a digital data
bus input port on an interrupt basis, 181. Whenever there is information to be received,
the data processor interrupts whatever it is doing to read the new data. At 183, processor
13 reads the input message containing the connecting gate data from the bus until
a completed message, 185, is received. The processor keeps returning to the interrupt,
187, until an end of message is received.
[0077] After receiving an end of message, the alphanumeric strings providing the fixed and
variable words are extracted, at 189, from the input message. At 90, the extracted
alphanumeric strings are output to message assembler 200 for generation of audio messages
based on data contained within the fixed and variable alphanumeric strings.
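The read loop of Figure 4 accumulates bus data until a complete message is present. The following is a minimal sketch; the end-of-message marker and the modeling of each bus interrupt as one iteration are assumptions, and a real ARINC bus interface would replace the iterator.

```python
# Sketch of the read loop of paragraphs [0076]-[0077].

END_OF_MESSAGE = "\x03"   # assumed end-of-message marker

def read_message(bus_words):
    """Accumulate bus words until an end of message is received."""
    buffer = []
    for word in bus_words:          # each iteration models one interrupt
        if word == END_OF_MESSAGE:  # completed message received
            return "".join(buffer)
        buffer.append(word)
    return None                     # message still incomplete
```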
[0078] The thus-generated audio message is output to the passenger audio system, at 194,
via a link line 101 to an audio broadcast system 103 (Figure 3). The audio messages
may be broadcast over a public address speaker system within the passenger cabin or
may be broadcast over a conventional multichannel individual headphone system to the
passengers. Alternatively, the message assembler may provide the audio messages in
a variety of languages, each language either being provided over a separate audio
channel or broadcast sequentially over a single channel. Background music may be provided
to accompany the audio messages.
[0079] For the optional video display, the extracted connecting gate information is arranged
into its predetermined page format, at 91, for display. A terminal chart signifying
the destination airport specified in the input message is retrieved, at 93, from a
data storage unit. An aircraft symbol is positioned at the arrival gate on the terminal
chart and the arrival gate and baggage claim area information is written on the terminal
chart for display. The terminal chart, along with its information, is output as a
video signal to the video display according to a specified sequence, at 195. The terminal
chart is displayed, at 197, for a period of typically 10 to 60 seconds. Upon that
display time being over, portions of the alphanumeric text containing the connecting
gate information are displayed in a suitable format, at 199, for a specified period
of time. Preferably, the duration of the video displays is synchronized with the duration
of the audio message which is simultaneously broadcast.
[0080] If multiple pages of terminal charts or connecting gate information are to be displayed,
the pages are cycled onto the display. The entire process is continually repeated.
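The display sequencing of paragraphs [0079] and [0080], terminal chart first, then pages of connecting-gate text, each held for a dwell time and cycled continuously, can be sketched as a generator. Names and the generator structure are illustrative; only the ordering and the 10-to-60-second dwell come from the text.

```python
# Sketch of the cycling display sequence of paragraphs [0079]-[0080].
import itertools

def display_cycle(chart_page, text_pages, dwell_seconds=30):
    """Yield (page, dwell) pairs in the repeating display order."""
    sequence = [chart_page] + list(text_pages)
    for page in itertools.cycle(sequence):
        yield page, dwell_seconds
```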
[0081] Upon the aircraft approaching its destination, a display, such as an exemplary display
illustrated in Figure 5, may be presented to the passengers while audio messages reciting
the displayed information are simultaneously broadcast.
[0082] In order to familiarize the passengers with the layout of the terminal and all the
gates of the terminal, as well as the baggage claim areas, a display shown in Figure
6 may be provided to the passengers while an audio message reciting the baggage claim
area is simultaneously broadcast. As can be seen, the terminal chart of Figure 6 illustrates
all the gates and terminal buildings for a particular airport, along with baggage
claim areas. In addition, the aircraft symbol is located next to the arrival gate.
[0083] The connecting gate information may be processed to produce audio messages and video
displays immediately after the information is received over the ACARS system, or the
information may be stored until the aircraft begins its approach to its destination.
[0084] The audio portion may be provided as a stand-alone system with no video display generation
hardware or software required. In such case, only the audio messages are generated
and broadcast. All of the information provided in a combined audio/video system is
provided in a stand-alone audio system, with the exception that graphic displays such
as flight plan maps and destination airport charts are not provided.
[0085] The stand-alone audio system is ideally suited for aircraft not possessing passenger
video display systems. In such aircraft, the stand-alone audio system merely interfaces
with a conventional multichannel passenger audio broadcast system, and provides flight
information, as described above, through the passenger audio system.
[0086] Referring to Figures 7-9, an alternative system for providing flight information
to the passengers in the aircraft passenger compartment is illustrated. The alternative
system may tailor the information to various phases of the flight.
[0087] An alternative data processor 13' utilizes the received flight information and determines
a current phase of the flight of the aircraft, i.e., the system determines whether
the aircraft is in "en route cruise," "descent," etc. Once the current phase of the
flight has been determined, data processor 13' generates audio messages and optional
sequences of video display screens tailored to the current phase of the flight for
presentation to the passengers of the aircraft. For example, if the aircraft is in
an "en route cruise" phase, data processor 13' may generate an audio message reciting
the ground speed and outside air temperature and simultaneously generate a video display
screen for displaying the same information. If the aircraft is in a "descent" phase,
data processor 13' may generate a sequence of audio messages reciting the time to
destination and the distance to destination and simultaneously generate a video display
screen presenting the same information.
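The phase-tailored selection described in this paragraph amounts to a dispatch from flight phase to the information recited. The phase names follow the text; the field lists are illustrative assumptions.

```python
# Illustrative dispatch from flight phase to recited fields, per [0087].

PHASE_FIELDS = {
    "en route cruise": ["ground speed", "outside air temperature"],
    "descent": ["time to destination", "distance to destination"],
}

def fields_for_phase(phase):
    """Return the flight-information fields recited during a phase."""
    return PHASE_FIELDS.get(phase, [])
```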
[0088] Each audio message provides useful information appropriate to the current phase of
the flight plan. For example, during power on, preflight, engine start, and taxi out,
various digitized audio messages may be provided which welcome passengers aboard the
aircraft, describe the aircraft and, in particular, provide safety instructions to
the passengers.
[0089] During flight phases such as takeoff, climb, and en route cruise, various audio messages
may be generated which indicate points of interest over which the aircraft is flying
or recite flight information received via message handler 63'. For example, if an
input message is received providing ground speed, outside air temperature, time to
destination, and altitude, an audio message may be generated by message assembler
200 reciting the information. A video display screen such as shown in Figure 8 may
be simultaneously provided. If the aircraft has approached a point of interest, an
audio message may be assembled and broadcast to the passengers indicating the proximity of
the aircraft to the point of interest. A video display screen such as the one shown
in Figure 9 may be simultaneously provided.
[0090] Thus, message assembler 200 may generate an audio voice message such as: "The current
ground speed is 574 miles per hour. The current outside air temperature is minus 67
degrees Fahrenheit." The audio message is then broadcast to the passengers.
[0091] Data processor 13' includes: a message handler 63' for receiving flight information
messages; a flight information processor 65' for determining the current flight phase
and for generating audio messages and video display sequences corresponding to the
current flight phase or point of interest; and a data storage unit 69' for maintaining
flight information and digitized data.
[0092] Message handler 63' receives flight phase information as encoded messages over data
bus 59'. As each new flight information message is received, message handler 63' generates
a software interrupt. Flight information processor 65' responds to the software interrupt
to retrieve the latest flight information from message handler 63'. Once retrieved,
flight information processor 65' stores the flight information in a flight information
block 104' in data storage unit 69'.
[0093] In addition to maintaining digitized words and phrases for use in assembling audio
messages, storage unit 69' also maintains specific sequences of graphic displays 120'.
Storage unit 69' also maintains "range" tables 114', which allow flight information
processor 65' to determine the current phase of the flight plan. For example, for
the "en route cruise" phase, range table 114' may define an altitude range of at least
25,000 feet such that, if the received flight information includes the current altitude
of the aircraft, and the current altitude is greater than 25,000 feet, flight information
processor 65' can thereby determine that the current phase of the flight plan is the "en route
cruise" phase and generate audio messages and optional video displays appropriate
to the "en route cruise" phase of the flight plan.
[0094] Range tables 114' also include points of interest along the flight route of the aircraft.
For each point of interest, range tables 114' provide the location of the point of
interest and a "minimum range distance" for the point of interest. If the received
flight information includes the location of the aircraft, flight information processor
65' determines whether the aircraft is located within the minimum range associated
with any of the points of interest. Thus, once the aircraft has reached the vicinity
of a point of interest, the system automatically generates audio messages and optional
video display screens informing the passengers of the approaching point of interest.
[0095] The audio message may recite the name of the point of interest and the distance and
travel time to the point of interest and the relative location of the point of interest
to the aircraft, i.e., "left" or "right." The audio messages may be provided in a
variety of languages, with each language broadcast on a different audio channel.
[0096] Alternatively, digitized monologues describing the points of interest may be accessed
from a mass storage device for playback while the aircraft is in the vicinity of the
point of interest. In such an embodiment, the message assembler need not be used to
assemble audio messages. Rather, fixed digitized monologues are simply broadcast.
These may be accompanied by background music.
[0097] The optional video screens may provide, for example, the name of the point of interest,
the distance and travel time to the point of interest, and a map including the point
of interest, with the flight route of the aircraft superimposed thereon.
[0098] Considering points of interest in greater detail, periodically, flight information
processor 65' compares the current location of the aircraft with the location of points
of interest in the data base tables and determines whether the aircraft has reached
the vicinity of a point of interest. As can be seen from an exemplary range table
114' provided in Table III, range table 114' can include points of interest such as
cities and, for each point of interest, include the location in latitude and longitude
and a minimum range distance.
Table III
POINTS OF INTEREST

Item   | Latitude   | Longitude   | Minimum Range
-------|------------|-------------|--------------
City A | 45 degrees | 112 degrees | 100 miles
City B | 47 degrees | 114 degrees | 10 miles
City C | 35 degrees | 110 degrees | 5 miles
[0099] Thus, for example, city A is represented as having a particular location and a minimum
range distance of 100 miles, whereas city B has a different location and a minimum
range distance of 10 miles. Flight information processor 65' includes an algorithm
for comparing the current location of the aircraft to the location of each city and
for calculating the distance between the aircraft and the city. Once the distance
to the city is calculated, flight information processor 65' determines whether the
distance is greater than or less than the minimum range specified for that city.
[0100] Taking city A as an example, if the aircraft is 200 miles from city A, flight information
processor 65' will determine that the aircraft has not yet reached the vicinity of
city A, whereas, if the distance is 90 miles, flight information processor 65' can
determine that the aircraft has reached the vicinity of city A and initiate the sequence
of displays, previously described, informing the passengers. The algorithm for calculating
the distance between the aircraft and each
point of interest, based on the latitudes and longitudes, is conventional in nature
and will not be described further. The algorithm may take considerable processing
time and, hence, is only executed periodically. For example, the point-of-interest
table is only accessed after a certain number of miles of flight or after a certain
amount of time has passed.
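Since the text calls the latitude/longitude distance algorithm conventional, the proximity test of paragraphs [0098] through [0100] can be sketched with a standard great-circle (haversine) formula. The table values follow Table III; the assignment of positive coordinates to north and west, and all names, are assumptions.

```python
# Sketch of the point-of-interest proximity test of paragraphs [0098]-[0100].
import math

POINTS_OF_INTEREST = [
    # (name, latitude, longitude, minimum range in miles)
    ("City A", 45.0, 112.0, 100.0),
    ("City B", 47.0, 114.0, 10.0),
    ("City C", 35.0, 110.0, 5.0),
]

EARTH_RADIUS_MILES = 3959.0

def great_circle_miles(lat1, lon1, lat2, lon2):
    """Conventional haversine distance between two lat/lon points."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2)
    return 2 * EARTH_RADIUS_MILES * math.asin(math.sqrt(a))

def points_in_vicinity(aircraft_lat, aircraft_lon):
    """Return each point of interest within its minimum range distance."""
    return [name for name, lat, lon, rng in POINTS_OF_INTEREST
            if great_circle_miles(aircraft_lat, aircraft_lon,
                                  lat, lon) <= rng]
```

As the text notes, this comparison would be executed only periodically, for example after a certain number of miles of flight or after a certain amount of time has passed.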
[0101] Range table 114' may include the location of a wide variety of points of interest,
including cities, landforms, the equator, the International Date Line, and the North
and South Poles.
[0102] What has been described is a spoken message assembler for generating natural-sounding
spoken sentences conveying input data. As a specific application, the message assembler
has been described in combination with a flight information system for aircraft passengers
that provides useful information to the passengers en route to their destination.
The system connects into a conventional passenger audio broadcast system. In one embodiment,
the system provides destination terminal information such as connecting gates and
baggage claim areas and flight information. In another embodiment, the flight information
is tailored to the current phase of the flight plan of the aircraft. For example,
messages describing points of interest are generated as the aircraft reaches the vicinity
of the points of interest. The systems can be combined to provide both types of information.
In such a combined system, the destination terminal information may be automatically
presented once the aircraft reaches the "approach" phase of the flight. The system
may also provide the information in video form over a video display system.
[0103] Various modifications are contemplated, and they obviously will be resorted to by
those skilled in the art without departing from the spirit and scope of the invention
as hereinafter defined by the appended claims, as only a preferred embodiment of the
invention has been disclosed.
1. An audio information system for generating audio messages for a listening audience,
the system having a receiver for receiving input data, said system comprising:
memory means for storing digitized spoken words, with individual digitized words
corresponding to individual units of the received input data;
data processor means for generating complete audio messages based on said input
data, said data processor means including retrieval means for retrieving, from said
memory means, selected digitized words which correspond to the units of received input
data; and
message assembly means for assembling the retrieved selected words into complete
audio messages which convey the information contained in the input data in natural-sounding
sentences.
2. The audio information system of Claim 1, wherein said input data includes connecting
flight information data including one or more of flight numbers, destination terminals,
gate numbers, baggage claim area numbers, and arrival and departure times, and wherein
said memory means stores digitized spoken words corresponding to said connecting flight
information, such that said complete audio messages provide a recitation of the flight
information in a natural-sounding sentence.
3. The audio information system of Claim 1 or claim 2, wherein at least some of said
digitized spoken words are stored in a plurality of inflection forms, each form having
a different vocal inflection, and wherein said data processor further includes:
means for determining a proper vocal inflection form for said words, said proper
inflection being determined by the relative placement of said words in said audio
message; and
means for selecting said proper inflection form of said selected digitized words
for inclusion in said complete audio message.
4. The audio information system of any preceding claim, wherein at least some of said
digitized words are stored in a plurality of forms, each form being a different language
version of said word, and wherein said data processor further includes means for retrieving
and assembling words of matching languages.
5. The audio information system of Claim 4, wherein said data processor assembles a plurality
of messages conveying the same input data, said messages being in different languages.
6. The audio information system of Claim 5, wherein said system includes means for outputting
said plurality of messages of different languages in sequential order through a single
output channel.
7. The audio information system of Claim 5, wherein said system includes means for outputting
said plurality of messages of different languages simultaneously over a plurality
of separate output channels.
8. The audio information system of any preceding claim, wherein the digitized spoken
words are maintained in digital form on a mass storage device.
9. The audio information system of any preceding claim, wherein said system is mounted
aboard a passenger aircraft and includes means for broadcasting said complete audio
messages to passengers within said aircraft.
10. The audio information system of Claim 9, further including a receiver for receiving
flight information identifying the location of the aircraft, and wherein said memory
means also stores the names and locations of a plurality of points of interest in
digital form;
said data processor means further including means for determining a current point
of interest by:
comparing the location of the aircraft with the locations of points of interest
stored by the memory means to identify, out of the plurality of points of interest,
a point of interest in the vicinity of the current location of the aircraft;
retrieving digitized words identifying the name and relative location of the point
of interest in the vicinity of the aircraft; and
assembling a complete audio message providing the name and relative location of
the point of interest such that, as points of interest are reached during the flight
of the aircraft, the system automatically broadcasts an audio message identifying
the point of interest to the passengers.
11. The audio information system of Claim 9, wherein the aircraft follows a flight plan
having a plurality of phases, and wherein said data processor means further includes:
means for determining a current phase of the flight plan;
means for selectively retrieving flight information from said input data, said
selected flight information being selected according to the determined current phase
of flight, said selected flight information being used by said message assembly means
for generating said audio message such that, as each phase of the flight plan is reached,
the system assembles and broadcasts an audio message reciting useful flight information
tailored to the current phase of the flight plan to the passengers.
12. The audio information system of Claim 11 wherein said data processor means also retrieves
a sequence of video display information corresponding to the determined current phase
of flight and inputs the retrieved sequence of video display information to a video
display system for display to the passengers, such that, as each phase of the flight
plan is reached, the system displays a sequence of video displays tailored to the
current phase of the flight plan to the passengers along with the audio messages.
13. The audio information system of Claim 11, wherein the memory means further includes
a table means for storing a range of flight information corresponding to each phase
of the flight plan and wherein the data processor determines the current phase of
the flight plan by determining a phase having a range corresponding to the received
flight information.
14. An audio information system for automatically generating audio messages for a listening
audience, said audio messages having preselected sentence formats, said system comprising:
receiving means for receiving input data including one or more fixed units of data
and one or more variable units of data;
memory means for storing digitized spoken words including fixed words corresponding
to portions of said preselected sentence formats and variable words corresponding
to said variable units of data, with each variable word being a digitized spoken equivalent
of a corresponding unit of data;
data processor means for generating complete audio messages based on the input
data, said data processor means including:
means for determining a sentence format corresponding to the input data;
means for retrieving digitized fixed words corresponding to the sentence format;
and
means for retrieving digitized variable words corresponding to the variable units
of data within said input data; and
message assembly means for assembling said retrieved fixed and variable words into
complete audio messages, such that audio messages are generated which convey the input
data in natural-sounding sentences.
15. An audio information system for providing terminal and gate information to aircraft
passengers in an aircraft comprising:
a receiver for receiving destination airport terminal information regarding one
or more of connecting flight numbers, departure times, departure gates and destinations,
and baggage claim areas from a ground-based transmitter;
memory means for storing a plurality of digitized words corresponding to said destination
terminal information;
audio message assembly means for creating audio messages incorporating said destination
airport terminal information by selectively retrieving and assembling said digitized
words; and
means for inputting said audio messages to said audio system for broadcast to the
passengers.
16. The audio information system of Claim 15, wherein said memory means further stores
data for a plurality of airport charts representative of destination airport terminals;
with
said receiver receiving information regarding flight numbers and destination airports
from a ground-based transmitter; and
data processor means utilizing the received flight numbers and airport information
to retrieve the data for the airport chart of the destination airport terminal from
said memory means and inputting the data to a video display system for display.