(11)EP 2 176 812 B1

(12)EUROPEAN PATENT SPECIFICATION

(45)Mention of the grant of the patent:
13.04.2016 Bulletin 2016/15

(21)Application number: 08827781.9

(22)Date of filing:  15.08.2008
(51)International Patent Classification (IPC): 
G06K 9/00(2006.01)
G01D 7/02(2006.01)
G01D 3/08(2006.01)
B64D 45/00(2006.01)
G01P 1/08(2006.01)
B64D 47/08(2006.01)
G07C 5/08(2006.01)
(86)International application number:
PCT/US2008/073327
(87)International publication number:
WO 2009/026156 (26.02.2009 Gazette  2009/09)

(54)

SYSTEM FOR OPTICAL RECOGNITION, INTERPRETATION, AND DIGITIZATION OF HUMAN READABLE INSTRUMENTS, ANNUNCIATORS, AND CONTROLS

SYSTEM ZUR OPTISCHEN ERKENNUNG, INTERPRETATION UND DIGITALISIERUNG VON VOM MENSCHEN LESBAREN INSTRUMENTEN, ANZEIGEN UND STEUERUNGEN

SYSTÈME POUR LA RECONNAISSANCE OPTIQUE, L'INTERPRÉTATION ET LA NUMÉRISATION D'INSTRUMENTS, D'ANNONCIATEURS ET DE COMMANDES LISIBLES PAR L'ÊTRE HUMAIN


(84)Designated Contracting States:
AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MT NL NO PL PT RO SE SI SK TR

(30)Priority: 17.08.2007 US 956675 P

(43)Date of publication of application:
21.04.2010 Bulletin 2010/16

(73)Proprietor: Bell Helicopter Textron Inc.
Fort Worth, TX 76101 (US)

(72)Inventors:
  • LEWIS, George, Steven
    Alvarado, TX 76009 (US)
  • CLANTON, Carson, Blaine
    Arlington, TX 76016 (US)

(74)Representative: Lawrence, John et al
Barker Brettell LLP 100 Hagley Road Edgbaston
Birmingham B16 8QQ (GB)


(56)References cited: 
WO-A2-2006/011141
US-A1- 2003 218 070
US-A1- 2005 069 207
US-A1- 2006 098 874
US-A1- 2007 146 689
US-A1- 2001 005 218
US-A1- 2004 208 372
US-A1- 2005 165 517
US-A1- 2006 228 102
  
      
    Note: Within nine months from the publication of the mention of the grant of the European patent, any person may give notice to the European Patent Office of opposition to the European patent granted. Notice of opposition shall be filed in a written reasoned statement. It shall not be deemed to have been filed until the opposition fee has been paid. (Art. 99(1) European Patent Convention).


    Description

    Technical Field



    [0001] The present invention relates in general to optical recognition, interpretation, and digitization of human readable instruments, annunciators, and controls.

    Description of the Prior Art



    [0002] Instrumentation systems use electrical/electronic analog to digital systems to sense the physical state of specified electromechanical systems, digitize the sensor data, and store the digitized data for subsequent analysis. Such systems, however, are costly to produce, operate and maintain. Moreover, such systems undesirably increase the weight of weight-sensitive systems.

    [0003] There are ways of sensing the physical state of electromechanical systems well known in the art; however, considerable shortcomings remain.

    [0004] WO2006/011141 A2 describes a system and method for acquiring data from an instrument panel or the like by obtaining images of the panel and optically identifying readings of the instruments of the image.

    Summary



    [0005] Aspects of the invention are set out according to the appended independent claims. Embodiments of the invention are set out according to the appended dependent claims.

    Brief Description of the Drawings



    [0006] The novel features believed characteristic of the invention are set forth in the appended claims. However, the invention itself, as well as a preferred mode of use, and further objectives and advantages thereof, will best be understood by reference to the following detailed description when read in conjunction with the accompanying drawings, in which the leftmost significant digit(s) in the reference numerals denote(s) the first figure in which the respective reference numerals appear, wherein:

    Figure 1 is a view of an illustrative embodiment of a rotorcraft instrument panel;

    Figure 2 is an enlarged view of an illustrative embodiment of an annunciator panel of the instrument panel of Figure 1;

    Figure 3 is an enlarged view of an illustrative embodiment of a torque indicator of the instrument panel of Figure 1;

    Figure 4 is an enlarged view of an illustrative embodiment of an airspeed indicator of the instrument panel of Figure 1;

    Figure 5 is an enlarged view of an illustrative embodiment of an instrument of the instrument panel of Figure 1 having two separate segmented light-emitting diode gauges;

    Figure 6 is an enlarged view of an illustrative embodiment of an instrument of the instrument panel of Figure 1 having two separate needle gauges;

    Figure 7 is an enlarged view of an illustrative embodiment of an attitude indicator of the instrument panel of Figure 1;

    Figure 8 is an enlarged view of an illustrative embodiment of a horizontal situation indicator of the instrument panel of Figure 1;

    Figure 9 is a perspective view of an illustrative embodiment of a rotorcraft instrument panel and cockpit;

    Figure 10 is a graphical representation of an exemplary embodiment of a system for optical recognition, interpretation, and digitization of human readable instruments, annunciators, and/or controls;

    Figure 11 is a block diagram of an illustrative process for controlling the overall optical recognition, interpretation, and digitization process of the system of Figure 10;

    Figure 12 is a block diagram depicting an illustrative process of determining a value of an instrument or indicator as it would be interpreted by a human operator;

    Figure 13 is a block diagram depicting an illustrative image registration process whereby the actual position and orientation of an image relative to an expected position and orientation is calculated;

    Figure 14 is a block diagram of an illustrative target segmentation process which determines a state of an instrument by comparing optical properties of a target's foreground and background images; and

    Figure 15 is a block diagram of an illustrative pattern matching technique used for determining control states or instrument readings.



    [0007] While the invention is susceptible to various modifications and alternative forms, specific embodiments thereof have been shown by way of example in the drawings and are herein described in detail. It should be understood, however, that the description herein of specific embodiments is not intended to limit the invention to the particular forms disclosed, but on the contrary, the intention is to cover all modifications, equivalents, and alternatives falling within the spirit and scope of the invention as defined by the appended claims.

    Description of the Preferred Embodiment



    [0008] Illustrative embodiments of the invention are described below. In the interest of clarity, not all features of an actual implementation are described in this specification. It will of course be appreciated that in the development of any such actual embodiment, numerous implementation-specific decisions must be made to achieve the developer's specific goals, such as compliance with system-related and business-related constraints, which will vary from one implementation to another. Moreover, it will be appreciated that such a development effort might be complex and time-consuming but would nevertheless be a routine undertaking for those of ordinary skill in the art having the benefit of this disclosure.

    [0009] In the specification, reference may be made to the spatial relationships between various components and to the spatial orientation of various aspects of components as the devices are depicted in the attached drawings. However, as will be recognized by those skilled in the art after a complete reading of the present application, the devices, members, apparatuses, etc. described herein may be positioned in any desired orientation. Thus, the use of terms such as "above," "below," "upper," "lower," or other like terms to describe a spatial relationship between various components or to describe the spatial orientation of aspects of such components should be understood to describe a relative relationship between the components or a spatial orientation of aspects of such components, respectively, as the device described herein may be oriented in any desired direction.

    [0010] The present invention represents a system for optical recognition, interpretation, and digitization of human-readable instruments, annunciators, controls, and/or the like.

    [0011] For the purposes of this disclosure, the term "instrument panel" means an apparatus including one or more instruments, annunciators, controls, and/or the like. Collectively, instruments, annunciators, controls and/or the like are "objects" of the instrument panel. Such objects, however, are not limited to instrument panel objects but include instruments, annunciators, controls, and/or the like of any suitable equipment. For example, an instrument may take on the form of a gauge having any suitable shape or size and having a mechanical indicator or needle that moves across the face of a fixed background to indicate the value of a parameter or condition under measurement. In another example, an instrument may include one or more selectively-illuminated segments that display the value of a parameter or condition under measurement. An exemplary annunciator may include one or more selectively-illuminated segments, such that the illumination status, e.g., on or off, bright or dim, color, etc., of one or more of the segments corresponds to the value of a parameter or condition under measurement. A control, for example, refers to any apparatus that may be used to alter the state of the control itself and/or another apparatus. Examples of such controls include, but are not limited to, switches, knobs, buttons, levers, pedals, wheels, actuators, or other such mechanisms that may be manipulated by either a human operator or another apparatus.

    [0012] As used herein, the terms "camera" and "image acquisition sensor" refer to an apparatus that may be used to produce a digitized, computer-readable representation of a visual scene. Examples of such cameras or image acquisition sensors include, but are not limited to, digital cameras, optical scanners, radar systems, ultrasonic scanners, thermal scanners, electronic scanners, and profilometric devices.

    [0013] For the purposes of this disclosure, the term "foreground" means image elements of interest during interpretation of a visual target. For example, a needle of a gauge may be considered as foreground in certain operations. As used herein, the term "background" means image elements that are of little or no interest during the interpretation of a visual target. For example, markings on a dial face of a gauge may be considered as background in certain operations.

    [0014] It should be noted that the system and method of the present invention may be used in many diverse implementations, as is discussed in greater detail herein. The system is described in detail herein in relation to a rotorcraft instrument panel, although the scope of the present invention is not so limited. Rather, the system and method of the present invention may be used in relation to any human-readable instrument or instruments, irrespective of the type of equipment with which the instrument or instruments are associated.

    [0015] Figure 1 depicts a rotorcraft instrument panel 101 including one or more instruments, annunciators, and the like, of various styles. In the illustrated embodiment, an annunciator panel 103, also shown in Figure 2, includes one or more discrete annunciator segments, such as an annunciator segment 105, with each segment acting as an indicator for a specific, discrete condition. Examples of conditions indicated by such annunciator segments include, but are not limited to, engine fire, low fuel, battery hot, generator failure, and the like.

    [0016] As discussed above, rotorcraft instrument panel 101 includes one or more gauges, such as a gauge 107. Referring now to Figure 3, gauge 107 includes a generally circular, graduated indicator or dial 301 on its face in the illustrated embodiment. The particular embodiment of gauge 107 illustrated in Figure 3 employs segmented light-emitting diodes 303 that are illuminated to indicate the value of the parameter or condition being measured. Note that, in this case, the values on dial 301 are non-linear, i.e., a given angular displacement has different values depending upon which region of dial 301 the angular displacement is imposed. For example, the angular displacement in a region between values of 0 and 4, generally at 305, is about the same as the angular displacement in a region between values of 5 and 7, generally at 307. In other words, the gauge has about twice the resolution between values of 5 and 7 as the resolution between values of 0 and 4. The particular embodiment of gauge 107 provides an indication of engine torque. Gauge 107 further includes a five-character alphanumeric display 309 for displaying a numeric representation of a condition or parameter being measured or system status information during diagnostic procedures. In the illustrated embodiment, display 309 provides a numeric readout of engine torque or system status information during diagnostic procedures.
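
    The non-linear mapping described above can be captured, for example, as a value map pairing discrete dial positions with indicated values. The following is a minimal sketch in Python, assuming an illustrative calibration in which values 0 through 4 and values 5 through 7 each span roughly the same arc; the specific angles and values are assumptions for illustration, not data taken from the embodiment.

        # Hypothetical value map for a non-linear dial: needle angles (degrees
        # clockwise from the 12 o'clock reference) paired with indicated values.
        import numpy as np

        dial_angles = np.array([0.0, 30.0, 60.0, 90.0, 120.0, 150.0, 210.0, 270.0])
        dial_values = np.array([0.0,  1.0,  2.0,  3.0,   4.0,   5.0,   6.0,   7.0])

        def reading_from_angle(angle_deg):
            """Piecewise-linear interpolation from needle angle to indicated value."""
            return float(np.interp(angle_deg, dial_angles, dial_values))

        print(reading_from_angle(180.0))  # 5.5 under this assumed calibration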

    [0017] Referring to Figures 1 and 4, the illustrated embodiment of rotorcraft instrument panel 101 further includes a gauge 109 having a circular, graduated indicator or dial 401 on its face. Gauge 109 employs a needle 403 to indicate a value of a condition or parameter being measured.

    [0018] Referring now to Figures 1 and 5, the illustrated embodiment of rotorcraft instrument panel 101 further includes an instrument 111 having two separate segmented light-emitting diode gauges 501 and 503. In the illustrated embodiment, gauge 501 indicates fuel pressure in pounds per square inch and gauge 503 indicates generator output in amperes.

    [0019] Referring to Figures 1 and 6, the illustrated embodiment of rotorcraft instrument panel 101 further includes an instrument 113 having two separate needle gauges 601 and 603. Gauge 601 employs a needle 605 and gauge 603 includes a needle 607. Needles 605 and 607 move independently. In the illustrated embodiment, needle 605 indicates main rotor rotational speed as a percentage of normal rotor rotational speed for flight operations in relation to an outer scale 609. Needle 607 indicates engine turbine rotational speed as a percentage of normal engine turbine speed for flight operations in relation to an inner scale 611.

    [0020] Referring now to Figures 1 and 7, the illustrated embodiment of rotorcraft instrument panel 101 further includes an instrument 115 that combines two graduated scale gauges 701 and 703 and a discrete annunciator 705 into one instrument. The particular instrument 115 depicted in Figures 1 and 7 is known as an "attitude indicator." In the illustrated embodiment, the bank angle or roll attitude of the rotorcraft is indicated using gauge 701 by motion of an outer dial 707 with respect to a fixed pointer 709. The measured value is read from a mark on outer dial 707 corresponding with pointer 709. The region in the center of the instrument, i.e., gauge 703, uses a graduated scale 711 on a movable card 713 to indicate rotorcraft pitch attitude corresponding to a fixed pointer 715. Annunciator 705, known as a "barber pole," is a discrete annunciator indicating that instrument 115 is invalid or non-operational whenever annunciator 705 is visible to the human eye.

    [0021] Still referring to Figure 7, instrument 115 further includes a control in the form of a knob 719 that allows arbitrary setting of a zero pitch attitude reference. Instrument 115 also includes a control in the form of a knob 721 to cage gyroscopes operably associated with instrument 115.

    [0022] Referring now to Figures 1 and 8, the illustrated embodiment of rotorcraft instrument panel 101 further includes an instrument 117 that combines a plurality of gauges, markers, scales, pointers, and the like, along with a plurality of discrete annunciators. In the illustrated embodiment, instrument 117 takes on the form of a horizontal situation indicator. The aircraft heading is displayed on a rotating azimuth or compass card 801 under a lubber line 803. A course deviation bar 805 operates with a fixed navigational reference receiver, such as a very high-frequency omni-directional range/localizer (VOR/LOC) navigation receiver, a very high-frequency omni-directional range tactical air navigation (VORTAC) receiver, or an instrument landing system (ILS) receiver to indicate either left or right deviations from the course that is selected with a course select pointer 807. Course deviation bar 805 moves left or right to indicate deviation from a center of scale 809. The desired course is selected by rotating course select pointer 807 with respect to compass card 801 by means of a course set knob 811. A fixed aircraft symbol 813 and course deviation bar 805 display the aircraft relative to the selected course as though the pilot were above the aircraft looking down. A glide slope deviation pointer, which is not shown in Figure 8 as the pointer is only visible when a valid glide slope signal is being received, indicates the relationship of the aircraft to the glide slope. When the glide slope deviation pointer is above the center position of a scale 815, the aircraft is above the glide slope and an increased rate of descent is required.

    [0023] Still referring to Figures 1 and 8, annunciator 817 indicates, when visible to the human eye, that the navigation functions of instrument 117 are inoperative. Annunciator 819 indicates, when visible to the human eye, that the heading functions of instrument 117 are inoperative. Annunciator 821 indicates a heading reference set by an operator of the rotorcraft using a control knob 823 of instrument 117.

    [0024] Referring to Figure 1, the illustrated embodiment of instrument panel 101 further includes a guarded toggle switch 119, which is a control to open or close a fuel valve of the rotorcraft. Instrument panel 101 further includes a push button switch 121 that is used to control an operational mode, i.e., automatic or manual, of a full authority digital engine control (FADEC). A face of button 121 illuminates as an annunciator of the currently selected FADEC mode.

    [0025] Figure 1 also shows numerous other instruments, annunciators, and controls present, visible to the human eye, and available to a crew during rotorcraft operations. From a single image represented in Figure 1, the system described herein recognizes, interprets, and produces digital time histories for a plurality of annunciators, instruments, and controls using a single image acquisition sensor rather than sensors corresponding to each of the annunciators, instruments, and controls. In the implementation illustrated in Figure 1, the system described herein recognizes, interprets, and produces digital time histories for 57 annunciators, 26 instruments, and seven controls using a single image acquisition sensor rather than 90 independent sensors required using conventional instrumentation. Accordingly, the cost, weight, and complexity of the system described herein are significantly less than for conventional sensing systems.

    [0026] The examples described in the preceding paragraphs are characteristic of the types of devices and apparatus to which the invention described herein can be applied; however, neither the listed items nor the helicopter context are in any way exhaustive as to the opportunities for application of this invention. For example, the present invention may be used with or incorporated into various types of equipment, systems, devices, and the like other than with or in rotorcraft.

    [0027] Figure 9 is a perspective view of a portion of a cockpit of a rotorcraft as captured by a digital cockpit video recorder (not shown in Figure 9), i.e., an image acquisition sensor. Visible in Figure 9 is a collective head 901 disposed at a free end of a collective lever (not shown in Figure 9). Collective head 901 is used to control a thrust magnitude of a main rotor of the rotorcraft by varying an incidence angle of the main rotor blades of the rotorcraft. In practical use, raising collective head 901 generally increases thrust, while lowering collective head 901 generally decreases thrust. Also visible in Figure 9 is a cyclic grip 903. Movement of cyclic grip 903 causes cyclical pitch variations of the main rotor blades of the rotorcraft, resulting in redirection of main rotor thrust, to control direction of flight. Anti-torque or tail rotor pedals, indicated collectively as 905, are also visible in Figure 9. Movement of pedals 905 causes a change in tail rotor thrust of the rotorcraft, resulting in a change in yaw attitude. Collective head 901, cyclic grip 903, and tail rotor pedals 905 are further examples of controls having positions that may be processed by the system described herein.

    [0028] According to the system described herein, a digital representation of an image of the instruments, annunciators, and/or controls is interpreted using computerized means in a way corresponding to interpretation by human observation. In the case of annunciators, the state of the annunciator defines a quality or state being measured. Changes in chromaticity and/or intensity between annunciated states can be determined by the present system. In the case of various types of gauges, every possible discrete value for a gauge maps to a discrete position on a digitized image of that gauge. By using the present system to determine the position of the variable features of a gauge, e.g., a needle, the value indicated by the gauge at the time of image capture can be determined. Regarding a needle gauge, for example, if analysis of a digitized image determines that the needle is located at a position on a digitized image that maps to 100 knots of airspeed, the system can store 100 knots as the airspeed value at the time of image capture. Thus, the invention described herein can be used in place of more expensive, more complex, and bulkier conventional wired or even wireless instrumentation systems.

    [0029] Figure 10 depicts an exemplary embodiment of a system 1001 for optical recognition, interpretation, and digitization of human-readable instruments, annunciators, and controls. The objects labeled as 1003 represent objects to be recognized, interpreted, and digitized. For example, in a rotorcraft implementation, objects 1003 represent objects in a rotorcraft cockpit, such as the instruments, annunciators, and controls described herein. System 1001 comprises an image acquisition sensor 1005, such as a digital camera that captures images of a rotorcraft instrument panel and the surrounding environment. In the illustrated embodiment, image acquisition sensor 1005 combines the image capture and encoding steps in one device, although the scope of the present invention is not so limited. Rather, in an alternative embodiment, image acquisition sensor 1005 captures the image and another device is used to encode the image into a digital representation. System 1001 further comprises a logic processing unit 1007, which, in the illustrated embodiment, is a digital computer executing software to decode and interpret the digitally-encoded image. Logic processing unit 1007 stores the states, preferably as time-based states, of one or more instruments, annunciators, and controls of interest represented in the image to a data storage device 1009. In some embodiments, logic processing unit 1007 creates pseudo-parameters to augment information available to the crew of the rotorcraft during flight. Pseudo-parameters are data items produced by combining two or more other parameters mathematically, lexically, and/or logically. For example, a vertical speed pseudo-parameter could be produced by taking the time derivative of altitude. Wind velocity, i.e., wind speed and direction, could be derived from satellite-based global positioning system data, airspeed, and heading. A downwind approach to land could be inferred from vertical speed, e.g., sink rate, low airspeed, and wind direction, while at low altitude. In some embodiments, system 1001 further includes a ground data processor 1011, e.g., a general-purpose computer using custom software. It should be noted that the components of system 1001 depicted in the drawings and/or the functions performed by the components may be combined, separated, and/or redistributed depending upon the particular implementation of system 1001.
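
    As a minimal sketch of one pseudo-parameter mentioned above, a vertical speed time history could be produced by differentiating optically interpreted altitude readings with respect to time; the sample times and altitudes below are assumed purely for illustration.

        import numpy as np

        t_sec = np.array([0.0, 1.0, 2.0, 3.0, 4.0])                   # sample times (s)
        alt_ft = np.array([1000.0, 1010.0, 1025.0, 1045.0, 1070.0])   # interpreted altitude (ft)

        # Vertical speed pseudo-parameter: time derivative of altitude, in ft/min.
        vertical_speed_fpm = np.gradient(alt_ft, t_sec) * 60.0
        print(vertical_speed_fpm)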

    [0030] Figure 11 depicts a block diagram representing an illustrative embodiment of a method for optically recognizing, interpreting, and digitizing human-readable instruments, annunciators, and controls. In one implementation, the method is embodied in software encoded in media that can be read and executed by a computing device. Upon initialization of the method (block 1101), a configuration file is read to provide a definition of the image to be interpreted (block 1103). The file contains information including, but not limited to, the number, type, and location of all objects of interest, i.e., instruments, annunciators, and/or controls, to be interpreted; a definition of a scan region for each object; value maps associated with discrete locations within the scan region in the digitized image for use with threshold crossing techniques, as is described in greater detail herein; a mathematical representation of a reference region of the image for use in image registration; a mathematical representation of reference patterns used to determine states for objects interpreted using pattern matching techniques; chromaticity and intensity information for use in interpreting illumination status for lighted annunciators; chromaticity and intensity information for use in interpreting background versus foreground information; and/or other parameters needed by the method to ensure efficient operation, such as startup parameters, preferences, and the like.
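
    By way of illustration only, a single entry of such a configuration file might resemble the following sketch; the field names and numeric values are assumptions and are not prescribed by this description.

        # Hypothetical configuration entry for one needle gauge interpreted with
        # threshold crossing techniques; all names and numbers are illustrative.
        airspeed_gauge_config = {
            "name": "airspeed",
            "type": "needle_gauge",
            "scan_region": {"row": 210, "col": 340, "height": 96, "width": 96},
            "value_map": [                      # (row, col) within the scan region -> knots
                {"row": 90, "col": 48, "value": 0},
                {"row": 48, "col": 6,  "value": 60},
                {"row": 6,  "col": 48, "value": 120},
                {"row": 48, "col": 90, "value": 180},
            ],
            "foreground": {"min_intensity": 180},   # chromaticity/intensity hint
        }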

    [0031] It should be noted that the configuration file stores information for the processing of images to eliminate redundant processing. Human-readable instruments, annunciators, and controls can, however, be recognized without the configuration file, and the scope of the present invention encompasses such an embodiment. In such an embodiment, the registration process is omitted. This embodiment is particularly useful in implementations wherein computer processing capacity is not an issue in providing timely, real-time analysis.

    [0032] In certain embodiments, system 1001 includes one or more computer programs encoded in media that can be read and executed by a computing system to facilitate the creation and maintenance of the configuration file using both automated and user-interactive methods. In such embodiments, previously defined objects can be automatically recognized using pattern matching techniques, and manual techniques allow an operator to define the configuration using custom tools in a graphical user interface (GUI).

    [0033] Still referring to Figure 11, the method assumes a piecewise continuous stream of images for analysis and opens an output file (block 1105), such as in data storage device 1009 of Figure 10, to which a series of values as interpreted for each subsequent image can be written. The output file effectively contains a time history of all parameters interpreted and analyzed according to the definitions read from the configuration file.

    [0034] The method enters a conditional loop 1107 and tests for the presence of a digital image representation to be interpreted (block 1109). If the image is present, the program decodes the image (block 1111) as necessary to create an indexable representation of the image. The decoding process may produce the indexable representation in any one or more possible forms according to the characteristics of the image being processed. In one embodiment, the image is a two-dimensional pixel bitmap or raster stored (block 1113) as a two-dimensional array in computer memory. System 1001, however, is equally applicable to three-dimensional images. Gauges are then read from the raster (block 1115), as is discussed in greater detail with regard to Figure 12. The value of each gauge is then written to the output file (block 1117). If the image is not present (block 1109), stored variables and the like are cleaned up and the method ends.
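
    A minimal sketch of this outer loop is given below; the helper functions next_image, decode_to_raster, and read_gauges are placeholders standing in for the corresponding blocks of Figure 11 and are not defined by this description.

        def process_image_stream(next_image, decode_to_raster, read_gauges, out_path):
            with open(out_path, "w") as out_file:            # block 1105: open output file
                while True:
                    image = next_image()                     # block 1109: image present?
                    if image is None:
                        break                                # clean up and end
                    raster = decode_to_raster(image)         # blocks 1111/1113: 2-D array
                    values = read_gauges(raster)             # block 1115 (see Figure 12)
                    out_file.write(",".join(str(v) for v in values) + "\n")  # block 1117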

    [0035] Figure 12 depicts one particular embodiment of reading the gauges from the raster (block 1115) in Figure 11. While the method depicted in Figure 12 is described regarding two-dimensional techniques, the scope of the present invention is not so limited. The first step is to "register" the image (block 1201) to ensure proper alignment between the coordinate system of the current raster and the coordinate system used in the creation of the configuration file. Figure 13 shows that the first step in the registration process (block 1201) is to calculate a Fourier transform of the current rasterized image (block 1301). In this embodiment, fast Fourier transform (FFT) techniques are used. The reference region read from the configuration file is a conjugate of an FFT of a reference image. The next step is to element-wise multiply the reference FFT conjugate with the FFT of the current image (block 1303). The resulting product is then normalized element-wise such that all FFT values range between -1 and 1 (block 1305). The normalization process is not absolutely necessary but aids in overcoming errors due to image variations such as differences in lighting and/or optical noise. The next step is to compute the inverse FFT (IFFT) of the normalized product (block 1307). The final step is to find the row and column indices of the maximum value in the array from the IFFT (block 1309). The process is then repeated using polar coordinates to correct rotational differences. The resultant row and column indices are applied as offsets to row and column indices read from the configuration file (block 1311) and the process returns to the next step in Figure 12 (block 1313).
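
    The translational portion of this registration process can be sketched as follows, assuming NumPy conventions; in practice the reference conjugate would be read from the configuration file rather than recomputed here, and the rotational correction in polar coordinates is omitted from the sketch.

        import numpy as np

        def register_translation(current, reference):
            """Return (row_offset, col_offset) of the current raster relative to the reference."""
            cur_fft = np.fft.fft2(current)                        # block 1301
            ref_conj = np.conj(np.fft.fft2(reference))            # normally stored in the configuration file
            product = ref_conj * cur_fft                          # block 1303
            product /= np.maximum(np.abs(product), 1e-12)         # block 1305: element-wise normalization
            correlation = np.real(np.fft.ifft2(product))          # block 1307
            row, col = np.unravel_index(np.argmax(correlation), correlation.shape)  # block 1309
            if row > current.shape[0] // 2:                       # wrap large shifts to negative offsets
                row -= current.shape[0]
            if col > current.shape[1] // 2:
                col -= current.shape[1]
            return int(row), int(col)                             # offsets applied per block 1311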

    [0036] It should be noted that the process of registration can be accomplished in other ways, which are encompassed within the scope of the present invention. For example, the region to be registered does not have to be a rectangular array. A single row of the raster can be used to obtain horizontal or lateral image shifts and a single column of the raster can be used to obtain vertical shifts. While the use of a single row and column of the raster may fail to determine the image shift when the shift involves both vertical and horizontal translation, the use of a single row and column of the raster may be sufficient or even desirable in certain implementations. Other ways of registering the image are contemplated by the present invention. Regarding embodiments of the present invention wherein registration is used, the manner in which registration is accomplished is immaterial to the practice of the invention.

    [0037] The overall outcome of the registration process may be referred to as "image stabilization," inasmuch as image shifts due to changes in the relative position and orientation between the image acquisition sensor and the instrument panel can, in this case, be corrected for translational and/or rotational errors.

    [0038] Referring back to Figure 12, once registration is complete, the coordinates of each target region defined by the configuration file are adjusted by adding the row and column offsets from the registration process (block 1203). The next step is to enter a loop 1205 iterating through each threshold-crossing-type object defined in the configuration file. Once inside the loop, each pixel in the target region of the registered, current image is scanned (block 1207). A median filter is then applied to each pixel (block 1209). The value produced by the median filter is then assigned to the pixel. The median filter is used to reduce the effects of noise in the acquired image.
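
    A minimal sketch of this scanning and filtering step is given below, assuming a 3x3 neighbourhood (the description does not fix a kernel size) and the SciPy median filter.

        import numpy as np
        from scipy.ndimage import median_filter

        def filter_target_region(raster, row, col, height, width):
            """Extract the target region (block 1207) and median-filter it (block 1209)."""
            region = raster[row:row + height, col:col + width]
            return median_filter(region, size=3)   # assumed 3x3 neighbourhood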

    [0039] Figure 14 presents the process flow for segmenting the target region into foreground and background (block 1211 in Figure 12). The first step is to compute a threshold value (block 1401) for the entire target region using the results of the median filter step. The present invention contemplates many ways for determining a threshold value, but a midpoint calculation between the minimum and maximum values for the target region has been tested and shown to be acceptable.

    [0040] Threshold crossing detection is one method of normalizing the image to account for lighting variations. The present method is capable of operating under a variety of lighting conditions. In some cases, objects are illuminated by ambient light from the sun, the moon, or area lighting. In other cases, objects are illuminated by artificial backlighting or directed face lights. The color and intensity will vary widely. For example, night flying generally uses low-intensity red artificial lighting. Other techniques, including but not limited to grayscale analysis and/or negative image analysis, are also options for handling variation in chromaticity and/or intensity of light.

    [0041] Once the threshold value has been established, the intensity of each pixel in the target region is retrieved (block 1403), compared to the threshold (block 1405), and the pixel is classified as either foreground (block 1407) or background (block 1409). Once all pixels have been scanned (block 1411), the process returns (block 1413) to the next step in Figure 12.
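
    The segmentation of Figure 14 can be sketched as follows, using the midpoint threshold described above; the boolean result marks foreground pixels as True and background pixels as False.

        def segment_target(filtered_region):
            """Threshold the median-filtered target region into foreground/background."""
            threshold = (float(filtered_region.min()) + float(filtered_region.max())) / 2.0  # block 1401
            return filtered_region > threshold      # blocks 1403-1409: per-pixel comparison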

    [0042] Referring back to Figure 12, the next step is to pick the most likely candidate from among the foreground pixels which correspond to the location of the instrument's variable feature (block 1213). The methods for determining the most likely candidate can be as varied as the types of objects being interpreted. For example, a needle position might be determined by calculating the centroid of a cluster of foreground pixels. A segmented LED gauge value might be interpreted by determining the location at which there is an abrupt and non-reversing change from foreground to background pixels in the target region. An annunciator state might be interpreted by detecting the presence of one or more foreground pixels in the target region, with the number dependent upon such factors as the quality of the image, the desired confidence level, or other measures.
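
    One of the candidate-selection strategies mentioned above, taking the centroid of the foreground pixels as the needle position, can be sketched as follows; the assumed pivot at the centre of the target region and the angle convention are illustrative choices only.

        import numpy as np

        def needle_angle_deg(foreground_mask):
            """Centroid of foreground pixels, expressed as a needle angle about the region centre."""
            rows, cols = np.nonzero(foreground_mask)
            if rows.size == 0:
                return None                                   # no foreground candidate found
            centre_r = (foreground_mask.shape[0] - 1) / 2.0
            centre_c = (foreground_mask.shape[1] - 1) / 2.0
            dr = rows.mean() - centre_r                       # centroid offset from the pivot
            dc = cols.mean() - centre_c
            return float(np.degrees(np.arctan2(dc, -dr)))     # 0 deg straight up, clockwise positive

    The resulting angle could then be converted to an indicated value through a value map of the kind sketched earlier.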

    [0043] Figure 15 depicts image interpretation using pattern matching techniques, which is one embodiment of block 1213 in Figure 12. The techniques described herein are essentially a repeat of the image registration process using a stored mathematical representation of a reference pattern to determine the state of an object (block 1501). The image raster is windowed around the control to obtain a smaller raster (block 1503) and the registration process is conducted over the smaller region of the image (block 1505). The phase correlation result is statistically analyzed to determine the probability of match of the reference pattern (block 1507) and the process returns (block 1509) to the next step in Figure 12. Numerous other pattern matching variations well known in the literature could likewise be employed. Pattern matching techniques might be a preferred method for determining states of particular knobs, switches, and/or levers not well suited for threshold crossing techniques.
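
    A minimal sketch of this windowed matching step is given below; the peak-to-mean score used to judge the match is an assumed statistic, since the description leaves the exact statistical test open.

        import numpy as np

        def pattern_match_score(raster, reference_pattern, row, col):
            """Phase-correlate a stored reference pattern against a window of the raster."""
            h, w = reference_pattern.shape
            window = raster[row:row + h, col:col + w]                    # block 1503: window the raster
            product = np.conj(np.fft.fft2(reference_pattern)) * np.fft.fft2(window)
            product /= np.maximum(np.abs(product), 1e-12)
            correlation = np.real(np.fft.ifft2(product))                 # block 1505: registration over the window
            return float(correlation.max() / (np.abs(correlation).mean() + 1e-12))  # block 1507: match statistic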

    [0044] Threshold crossing and pattern matching techniques can also be used, either singly or in combination, to interpret the position of primary flight controls such as the collective to which collective head 901 is attached, cyclic grip 903, or tail rotor pedals 905, all shown in Figure 9.

    [0045] Referring again to Figure 12, once the values and/or states of all instruments, annunciators, and/or controls have been interpreted for the current image (block 1215), the values are stored in an output file (block 1117 of Figure 11), preferably stored on removable media. Both crashworthy and non-crashworthy removable media implementations are contemplated by the present invention. The entire process is then repeated for the next image.

    [0046] Various methods for data quality evaluation are possible using this invention. Threshold crossing detection is an effective method for detecting when an object is occluded from the camera view, such as by a pilot's arm as he reaches in front of the instrument panel to press a button or turn a knob. The minimum, maximum, and threshold values calculated from a median filtered image of a bare arm or a shirt sleeve will be very close, allowing the system to interpret that the target region is occluded. Redundant measurements are also used to assess the quality of the interpreted value. For example, the roll angle from the attitude indicator 115 (shown in Figures 1 and 7) can be interpreted from target regions scanned on both the left and right sides of the rotating outer dial 707. In addition to multiple target regions, multi-directional scans provide redundancy. For example, scanning left-to-right followed by a right-to-left scan may detect blurriness or multiple threshold crossings, in which case the image may be "de-blurred" using Fourier optics techniques. If multiple threshold crossings are detected, various expert and/or statistical wild point editing techniques may be employed. Interpretations made using pattern matching techniques can check for multiple states, e.g., up/down, on/off, etc. Data from one source can also be compared and correlated with data from other sources, including other optically interpreted sources, to assess data quality. Suspect data can be tagged by such means as sentinel values or data quality bits.
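
    The occlusion test described above can be sketched as follows; the contrast margin and the sentinel value are assumptions chosen only to illustrate the idea of tagging suspect data.

        OCCLUSION_CONTRAST = 10      # assumed minimum usable contrast in the target region
        SENTINEL = -9999.0           # assumed sentinel value marking suspect data

        def reading_or_sentinel(filtered_region, interpret):
            """Return the interpreted value, or a sentinel if the region appears occluded."""
            if float(filtered_region.max()) - float(filtered_region.min()) < OCCLUSION_CONTRAST:
                return SENTINEL      # minimum, maximum, and threshold nearly equal: occluded
            return interpret(filtered_region)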

    [0047] In some cases, data interpreted from one source may be used in conjunction with data from other sources (optical or otherwise) for various purposes. One such case would be estimating wind speed and direction using optically interpreted airspeed and heading data in conjunction with ground speed and ground track data from a wired Global Positioning System (GPS) sensor. Wind information thus derived could be stored as a time history "pseudo-item" as well as displayed to the pilot in real time during flight.
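
    As a minimal sketch of this wind pseudo-item, the wind vector can be taken as the difference between the ground-velocity vector (GPS ground speed and track) and the air-velocity vector (optically interpreted airspeed and heading); the numeric inputs in the example are assumed.

        import math

        def wind_from(airspeed_kt, heading_deg, ground_speed_kt, track_deg):
            """Wind speed (kt) and direction-from (deg) from air and ground velocity vectors."""
            air_n = airspeed_kt * math.cos(math.radians(heading_deg))
            air_e = airspeed_kt * math.sin(math.radians(heading_deg))
            gnd_n = ground_speed_kt * math.cos(math.radians(track_deg))
            gnd_e = ground_speed_kt * math.sin(math.radians(track_deg))
            wind_n, wind_e = gnd_n - air_n, gnd_e - air_e
            speed = math.hypot(wind_n, wind_e)
            direction_to = (math.degrees(math.atan2(wind_e, wind_n)) + 360.0) % 360.0
            return speed, (direction_to + 180.0) % 360.0    # meteorological "from" convention

        print(wind_from(100.0, 90.0, 110.0, 95.0))          # assumed example inputs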

    [0048] The present invention provides significant advantages, including: (1) providing a cost-effective means for monitoring human-readable instruments, annunciators, and controls in an aircraft, such as a rotorcraft; (2) providing a cost-effective means for monitoring human-readable instruments, annunciators, and controls associated with any type of equipment or machinery, such as industrial processing machinery, material handling equipment, machine tools, or the like; (3) providing a lower complexity means for monitoring the state of aircraft systems, equipment, machinery, or the like; and (4) providing a lower weight means for monitoring the state of aircraft systems, equipment, machinery, or the like.

    [0049] The particular embodiments disclosed above are illustrative only, as the invention may be modified and practiced in different but equivalent manners apparent to those skilled in the art having the benefit of the teachings herein. Furthermore, no limitations are intended to the details of construction or design herein shown, other than as described in the claims below. It is therefore evident that the particular embodiments disclosed above may be altered or modified and all such variations are considered within the scope of the invention. Accordingly, the protection sought herein is as set forth in the claims below. It is apparent that an invention with significant advantages has been described and illustrated. Although the present invention is shown in a limited number of forms, it is not limited to just these forms, but is amenable to various changes and modifications.


    Claims

    1. A system (1001) for optically recognizing, interpreting, and digitizing human readable instruments, annunciators, and controls (1003), comprising:

    an image acquisition sensor (1005) operable to capture images of at least one of the instruments, annunciators, and controls;

    a configuration file to provide a definition of a target region of the images captured by the image acquisition sensor;

    a logic processing unit (1007) operable to decode the images captured by the image acquisition sensor (1005) and interpret the decoded images to determine a state of the at least one of the instruments, annunciators, and controls;

    wherein the human readable instruments, annunciators, and controls are operably associated with a machine, and configured to assist a human in operation of the machine;

    wherein the logic processing unit (1007) is configured to detect when the target region is occluded from the image acquisition sensor view in the captured images, tag suspect data by sentinel values if occlusion of the at least one of the instruments, annunciators, and controls was detected, and write the state of the at least one of the instruments, annunciators, and controls to an output file.


     
    2. The system of claim 1, wherein the logic processing unit (1007) is operable to store the state of the at least one of the instruments, annunciators, and controls.
     
    3. The system of claim 1, wherein the logic processing unit (1007) is operable to store time-based states of the at least one of the instruments, annunciators, and controls.
     
    4. The system of claim 1, wherein the logic processing unit (1007) creates a pseudo-parameter from the state of the at least one of the instruments, annunciators, and controls.
     
    5. The system of claim 1, further comprising:

    a ground data processor (1011) operably associated with the logic processing unit.


     
    6. The system of claim 1, wherein the human readable instruments, annunciators, and controls are objects in an aircraft cockpit.
     
    7. The system of claim 6, wherein the aircraft is a rotorcraft.
     
    8. A method for optically recognizing, interpreting, and digitizing human readable instruments, annunciators, and controls comprising:

    reading a configuration file to provide a definition of a target region of an image to be interpreted of at least one of the instruments, annunciators, and controls (1103) to be interpreted;

    opening an output file (1105);

    determining whether a captured image to be interpreted exists (1109);

    if the captured image exists:

    decoding the captured image (1111);

    detecting when the target region is occluded from the image acquisition sensor view in the captured image;

    determining a state of the at least one of the instruments, annunciators, and controls (1115) based upon the decoded captured image and the definition in the configuration file and tagging suspect data by sentinel values if occlusion of the at least one of the instruments, annunciators and controls was detected; and

    writing the state of the at least one of the instruments, annunciators, and controls (1117) to the output file;
    wherein the human readable instruments, annunciators, and controls are operably associated with a machine, and configured to assist a human in operation of the machine.
     
    9. The method of claim 8, wherein the configuration file includes one or more of:

    a number of the at least one of the instruments, annunciators, and controls to be interpreted;

    types of the at least one of the instruments, annunciators, and controls to be interpreted;

    locations of the at least one of the instruments, annunciators, and controls to be interpreted;

    a definition of a scan region of the at least one of the instruments, annunciators, and controls to be interpreted;

    value maps associated with the locations of the at least one of the instruments, annunciators, and controls to be interpreted;

    a mathematical representation of a reference region of the image for use in image registration;

    a mathematical representation of reference patterns used to determine states of the at least one of the instruments, annunciators, and controls to be interpreted;

    chromaticity and intensity information for use in interpreting illumination status for an annunciator;

    chromaticity and intensity information for use in interpreting background versus foreground information of the image; and

    startup parameters and preferences.


     
    10. The method of claim 8, wherein the output file includes a time history of parameters interpreted and analyzed based upon definitions read from the configuration file.
     
    11. The method of claim 8, wherein decoding the image results in a two-dimensional pixel bitmap or raster stored as a two-dimensional array.
     
    12. The method of claim 8, wherein determining the state of the at least one of the instruments, annunciators, and controls is accomplished by:

    registering the captured image with the definition of the target region of the image to be interpreted;

    adjusting coordinates of the target region defined by the configuration file based upon registering the captured image;

    scanning each pixel of the registered, captured image;

    applying a median filter to each pixel of the registered, captured image;

    segmenting the target region into foreground and background; and

    picking foreground pixels corresponding to a variable feature of the at least one of the instruments, annunciators, and controls to be interpreted.


     
    13. The method of claim 12, wherein registering the captured image is accomplished by:

    computing a Fourier transform of the captured image;

    element-wise multiplying a reference Fourier transform conjugate from the configuration file with the Fourier transform of the captured image;

    computing an inverse Fourier transform of the multiplied reference Fourier transform conjugate and the Fourier transform of the captured image;

    finding row and column indices of a maximum value in an array resulting from the inverse Fourier transform; and

    storing the row and column indices.


     
    14. The method of claim 13, further comprising:

    element-wise normalizing the multiplied reference Fourier transform conjugate and the Fourier transform of the captured image prior to computing the inverse Fourier transform.


     
    15. The method of claim 12, wherein segmenting the target region into foreground and background is accomplished by:

    computing a threshold value for the target region based upon results of applying the median filter to each pixel of the registered, captured image;

    retrieving an intensity of each pixel in the target area of the captured image;

    comparing the intensity of each pixel to the threshold value; and

    classifying each pixel as foreground or background.


     
    16. The method of claim 12, wherein picking foreground pixels corresponding to a variable feature of the at least one of the instruments, annunciators, and controls to be interpreted is accomplished by:

    windowing the captured image around the target area;

    registering the windowed area of the captured image with the definition of the target region of the image to be interpreted; and

    determining the state of the at least one of the instruments, annunciators, and controls to be interpreted based on the result of registering the windowed area.


     
    17. Software for optically recognizing, interpreting, and digitizing human readable instruments, annunciators, and controls, the software being embodied in computer-readable media and when executed operable to:

    read a configuration file to provide a definition of a target region of an image to be interpreted of at least one of the instruments, annunciator, and controls (1103) to be interpreted;

    open an output file (1105);

    determine whether a captured image to be interpreted exists (1109);

    if the captured image exists:

    decode the captured images (1111);

    detect when the target region is occluded from the image acquisition sensor view in the captured image;

    determine a state of the at least one of the instruments, annunciators, and controls (1115) based upon the decoded captured image and the definition in the configuration file and tagging suspect data by sentinel values if occlusion of the at least one of the instruments, annunciators and controls was detected; and

    write the state of the at least one of the instruments, annunciators, and controls to the output file (1117);

    wherein the human readable instruments, annunciators, and controls are operably associated with a machine, and configured to assist a human in operation of the machine.


     
    18. The software of claim 17, wherein the configuration file includes one or more of:

    a number of the at least one of the instruments, annunciators, and controls to be interpreted;

    types of the at least one of the instruments, annunciators, and controls to be interpreted;

    locations of the at least one of the instruments, annunciators, and controls to be interpreted;

    a definition of a scan region of the at least one of the instruments, annunciators, and controls to be interpreted;

    value maps associated with the locations of the at least one of the instruments, annunciators, and controls to be interpreted;

    a mathematical representation of a reference region of the image for use in image registration;

    a mathematical representation of reference patterns used to determine states of the at least one of the instruments, annunciators, and controls to be interpreted;

    chromaticity and intensity information for use in interpreting illumination status for an annunciator;

    chromaticity and intensity information for use in interpreting background versus foreground information of the image; and

    startup parameters and preferences.


     
    19. The software of claim 17, wherein the output file includes a time history of parameters interpreted and analyzed based upon definitions read from the configuration file.
     
    20. The software of claim 17, wherein the software, when executed, is operable to decode the image into a two-dimensional pixel bitmap or raster stored as a two-dimensional array.
     
    21. The software of claim 17, wherein the software, when executed, determines the state of the at least one of the instruments, annunciators, and controls by:

    registering the captured image with the definition of the target region of the image to be interpreted;

    adjusting coordinates of the target region defined by the configuration file based upon registering the captured image;

    scanning each pixel of the registered, captured image;

    applying a median filter to each pixel of the registered, captured image;

    segmenting the target region into foreground and background; and

    picking foreground pixels corresponding to a variable feature of the at least one of the instruments, annunciators, and controls to be interpreted.


     
    22. The software of claim 21, wherein the software, when executed, registers the captured image by:

    computing a Fourier transform of the captured image;

    element-wise multiplying a reference Fourier transform conjugate from the configuration file with the Fourier transform of the captured image;

    computing an inverse Fourier transform of the multiplied reference Fourier transform conjugate and the Fourier transform of the captured image;

    finding row and column indices of a maximum value in an array resulting from the inverse Fourier transform; and

    storing the row and column indices.


     
    23. The software of claim 22, wherein the software, when executed, registers the captured image by:

    element-wise normalizing the multiplied reference Fourier transform conjugate and the Fourier transform of the captured image prior to computing the inverse Fourier transform.


     
    24. The software of claim 21, wherein the software, when executed, segments the target region into foreground and background by:

    computing a threshold value for the target region based upon results of applying the median filter to each pixel of the registered, captured image;

    retrieving an intensity of each pixel in the target area of the captured image;

    comparing the intensity of each pixel to the threshold value; and

    classifying each pixel as foreground or background.
     
    25. The software of claim 21, wherein the software, when executed, picks foreground pixels corresponding to a variable feature of the at least one of the instruments, annunciators, and controls to be interpreted by:

    windowing the captured image around the target area;

    registering the windowed area of the captured image with the definition of the target region of the image to be interpreted; and

    determining the state of the at least one of the instruments, annunciators, and controls to be interpreted based on the result of registering the windowed area.


     


    Ansprüche

    1. Ein System (1001) zur optischen Erkennung, Interpretation und Digitalisierung von vom Menschen lesbaren Instrumenten, Anzeigen und Steuerungen (1003), das Folgendes aufweist:

    einen Bildaufnahmesensor (1005), der Bilder von mindestens entweder den Instrumenten, Anzeigen oder Steuerungen aufnehmen kann;

    eine Konfigurationsdatei, um eine Definition eines Zielbereichs der vom Bildaufnahmesensor erfassten Bilder zu liefern;

    eine logische Verarbeitungseinheit (1007), die die vom Bildaufnahmesensor (1005) aufgenommenen Bilder dekodieren und die dekodierten Bilder interpretieren kann, um einen Zustand entweder der Instrumente, Anzeigen oder Steuerungen zu bestimmen;

    wobei die vom Menschen lesbaren Instrumente, Anzeigen und Steuerungen bedienbar mit einer Maschine verbunden und so gestaltet sind, dass sie einen Menschen beim Maschinenbetrieb unterstützen;

    wobei die logische Verarbeitungseinheit (1007) so gestaltet ist, dass sie erfasst, wenn der Blick des Bildaufnahmesensors in den aufgenommenen Bildern auf den Zielbereich verdeckt ist, dass sie verdächtige Daten durch Markierungswerte kennzeichnet, wenn eine Verdeckung mindestens entweder der Instrumente, Anzeigen oder Steuerungen erkannt wurde, und dass sie den Zustand entweder der Instrumente, Anzeigen oder Steuerungen in eine Ausgabedatei schreibt.


     
    2. Das System gemäß Anspruch 1, wobei die logische Verarbeitungseinheit (1007) den Zustand entweder der Instrumente, Anzeigen oder Steuerungen speichern kann.
     
    3. Das System gemäß Anspruch 1, wobei die logische Verarbeitungseinheit (1007) zeitabhängige Zustände entweder der Instrumente, Anzeigen oder Steuerungen speichern kann.
     
    4. Das System gemäß Anspruch 1, wobei die logische Verarbeitungseinheit (1007) einen Pseudo-Parameter bezüglich des Zustandes entweder der Instrumente, Anzeigen oder Steuerungen erstellt.
     
    5. Das System gemäß Anspruch 1, das darüberhinaus Folgendes aufweist:

    einen Bodenaufnahmedaten-Prozessor (1011), der bedienbar mit der logischen Verarbeitungseinheit verbunden ist.


     
    6. Das System gemäß Anspruch 1, wobei die vom Menschen lesbaren Instrumente, Anzeigen und Steuerungen Gegenstände in einem Fluggerät-Cockpit sind.
     
    7. Das System gemäß Anspruch 6, wobei das Fluggerät ein Drehflügler ist.
     
    8. A method for optical recognition, interpretation, and digitization of human readable instruments, annunciators, and controls, comprising:

    reading a configuration file to provide a definition of a target area of an image to be interpreted of at least one of the instruments, annunciators, and controls (1103) to be interpreted;

    opening an output file (1105);

    determining whether a captured image to be interpreted exists (1109);

    if the captured image exists:

    decoding the captured image (1111);

    detecting when the target area is occluded from the view of the image acquisition sensor in the captured image;

    determining a state of the at least one of the instruments, annunciators, and controls (1115) based upon the decoded captured image and the definition in the configuration file, and flagging suspect data with sentinel values if occlusion of the at least one of the instruments, annunciators, and controls has been detected; and

    writing the state of the at least one of the instruments, annunciators, and controls (1117) to the output file;

    wherein the human readable instruments, annunciators, and controls are operably associated with a machine and configured to aid a human in the operation of the machine.


     
    9. The method according to Claim 8, wherein the configuration file includes one or more of:

    a number of the at least one of the instruments, annunciators, and controls to be interpreted;

    types of the at least one of the instruments, annunciators, and controls to be interpreted;

    locations of the at least one of the instruments, annunciators, and controls to be interpreted;

    a definition of a scan area of the at least one of the instruments, annunciators, and controls to be interpreted;

    value maps associated with the locations of the at least one of the instruments, annunciators, and controls to be interpreted;

    a mathematical representation of a reference area of the image for use in image registration;

    a mathematical representation of reference patterns used to determine states of the at least one of the instruments, annunciators, and controls to be interpreted;

    chromaticity and intensity information for use in interpreting an illumination status of an annunciator;

    chromaticity and intensity information for use in interpreting background versus foreground information of the image; and

    startup parameters and preferences.
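    Purely as an illustration of how the configuration elements listed above might be organized, the sketch below groups them per instrument as a Python dictionary. Every key, value, and file name here is an assumed example and not the configuration format of the patented system.

# Hypothetical configuration expressed as a Python dictionary (assumed example).
CONFIG = {
    "startup": {"frame_rate_hz": 5, "log_level": "info"},       # startup parameters and preferences
    "instruments": [
        {
            "name": "torque_gauge",                             # an instrument to be interpreted
            "type": "round_dial",                               # type of the instrument
            "location_px": (412, 310),                          # location in the captured image
            "scan_area_px": (380, 280, 460, 360),               # scan-area definition (left, top, right, bottom)
            "value_map": {0.0: "0 %", 270.0: "100 %"},          # value map: needle angle to reading
            "reference_fft_file": "torque_ref_fft.npy",         # reference-area representation for registration
            "reference_patterns_file": "torque_patterns.npy",   # reference patterns for state determination
            "annunciator_chromaticity": None,                   # chromaticity/intensity for illumination status
            "foreground_chromaticity": {"min_intensity": 180},  # background vs. foreground interpretation
        },
    ],
}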


     
    10. The method according to Claim 8, wherein the output file includes a time history of parameters interpreted and analyzed based upon definitions read from the configuration file.

    11. The method according to Claim 8, wherein decoding the image results in a two-dimensional pixel bitmap or raster that is stored as a two-dimensional array.
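    For example, a decoded frame can be held as a two-dimensional array of pixel intensities. The sketch below is only one possible realization; it assumes the Pillow and NumPy libraries and a grayscale conversion, neither of which is mandated by the claims.

# Minimal sketch: decode a captured image into a two-dimensional pixel array.
import numpy as np
from PIL import Image


def decode_image(path):
    """Return the captured image as a 2-D array of grayscale intensities."""
    with Image.open(path) as img:
        gray = img.convert("L")          # collapse colour channels to intensity
        return np.asarray(gray)          # 2-D array, shape (rows, columns)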
     
    12. The method according to Claim 8, wherein determining the state of the at least one of the instruments, annunciators, and controls is accomplished by:

    registering the captured image with the definition of the target area of the image to be interpreted;

    adjusting coordinates of the target area defined by the configuration file based upon the registration of the captured image;

    scanning each pixel of the registered captured image;

    applying a rank-order filter to each pixel of the registered captured image;

    segmenting the target area into a foreground and a background; and

    capturing foreground pixels corresponding to a variable characteristic of the at least one of the instruments, annunciators, and controls to be interpreted.


     
    13. The method according to Claim 12, wherein registering the captured image is accomplished by:

    computing a Fourier transform of the captured image;

    element-wise multiplying a reference Fourier transform conjugate from the configuration file with the Fourier transform of the captured image;

    computing an inverse Fourier transform of the multiplied reference Fourier transform conjugate and Fourier transform of the captured image;

    finding row and column indices of a maximum value in an array resulting from the inverse Fourier transform; and

    storing the row and column indices.
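    The steps of this claim correspond to cross-correlation computed in the frequency domain. A minimal NumPy sketch of those steps is given below; it assumes the reference conjugate spectrum has already been loaded from the configuration file and that both arrays have the same shape.

# Sketch of frequency-domain registration: multiply the captured image's
# spectrum by a stored reference conjugate spectrum, invert, and locate the
# correlation peak.  Array names and shapes are assumptions for illustration.
import numpy as np


def register(captured, reference_conj_fft):
    """Return (row, col) indices of the correlation peak."""
    img_fft = np.fft.fft2(captured)            # Fourier transform of the captured image
    product = reference_conj_fft * img_fft     # element-wise multiply by the reference conjugate
    correlation = np.fft.ifft2(product)        # inverse Fourier transform
    peak = np.argmax(np.abs(correlation))      # position of the maximum value
    row, col = np.unravel_index(peak, correlation.shape)
    return row, col                            # stored row and column indices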


     
    14. The method according to Claim 13, further comprising:

    element-wise normalizing the multiplied reference Fourier transform conjugate and Fourier transform of the captured image prior to computing the inverse Fourier transform.
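    Normalizing the element-wise product before the inverse transform turns the cross-correlation of the preceding sketch into a phase correlation, which is less sensitive to illumination differences. Under the same assumptions as above (names and the small stabilizing epsilon are illustrative only), the extra step could look like this:

import numpy as np


def normalized_product(reference_conj_fft, img_fft, eps=1e-12):
    """Element-wise normalize the spectral product prior to the inverse FFT."""
    product = reference_conj_fft * img_fft
    return product / (np.abs(product) + eps)   # unit magnitude, phase information only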


     
    15. The method according to Claim 12, wherein segmenting the target area into a foreground and a background is accomplished by:

    computing a threshold value for the target area based upon results from applying the rank-order filter to each pixel of the registered captured image;

    retrieving an intensity of each pixel in the target area of the captured image;

    comparing the intensity of each pixel to the threshold value; and

    classifying each pixel as foreground or background.
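    One hedged way to realize these four steps is sketched below. It assumes the target area is a two-dimensional NumPy array of intensities and uses a median filter (one kind of rank-order filter) from SciPy; the particular rank, window size, and threshold rule are assumptions for the example, not the claimed method.

# Sketch: split the target area into foreground and background by comparing
# each pixel's intensity against a threshold derived from rank-order filtering.
from scipy.ndimage import median_filter   # a median filter is one rank-order filter


def segment(target_area, rank_size=3):
    """Return a boolean mask: True where a pixel is classified as foreground."""
    filtered = median_filter(target_area, size=rank_size)   # rank-order filtering per pixel
    threshold = filtered.mean()                              # assumed threshold rule
    intensity = target_area.astype(float)                    # intensity of each pixel
    return intensity > threshold                             # foreground vs. background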


     
    16. The method according to Claim 12, wherein capturing foreground pixels corresponding to a variable characteristic of the at least one of the instruments, annunciators, and controls to be interpreted is accomplished by:

    windowing the captured image about the target area;

    registering the windowed area of the captured image with the definition of the target area of the image to be interpreted; and

    determining the state of the at least one of the instruments, annunciators, and controls to be interpreted based upon the result of registering the windowed area.
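    A minimal sketch of the windowing step follows. It reuses a frequency-domain registration like the one outlined after Claim 13 and assumes that the window bounds come from the configuration file and that the stored reference spectrum has the same shape as the window; how the resulting offset maps to an instrument state is left open.

# Sketch: crop (window) the captured image about the target area, register the
# window against the reference, and return the offset used to read off a state.
import numpy as np


def window_and_register(captured, reference_conj_fft, bounds):
    """bounds = (top, bottom, left, right) pixel limits of the target area."""
    top, bottom, left, right = bounds
    window = captured[top:bottom, left:right]        # windowed area of the captured image
    corr = np.fft.ifft2(reference_conj_fft * np.fft.fft2(window))
    row, col = np.unravel_index(np.argmax(np.abs(corr)), corr.shape)
    return row, col                                   # offsets used to determine the state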


     
    17. Software for optical recognition, interpretation, and digitization of human readable instruments, annunciators, and controls, the software embodied in computer-readable media and, when executed, operable to:

    read a configuration file to provide a definition of a target area of an image to be interpreted of at least one of the instruments, annunciators, and controls (1103) to be interpreted;

    open an output file (1105);

    determine whether a captured image to be interpreted exists (1109);

    if the captured image exists:

    decode the captured image (1111);

    detect when the target area is occluded from the view of the image acquisition sensor in the captured image;

    determine a state of the at least one of the instruments, annunciators, and controls (1115) based upon the decoded captured image and the definition in the configuration file, and flag suspect data with sentinel values if occlusion of the at least one of the instruments, annunciators, and controls has been detected; and

    write the state of the at least one of the instruments, annunciators, and controls (1117) to the output file;

    wherein the human readable instruments, annunciators, and controls are operably associated with a machine and configured to aid a human in the operation of the machine.


     
    18. The software according to Claim 17, wherein the configuration file includes one or more of:

    a number of the at least one of the instruments, annunciators, and controls to be interpreted;

    types of the at least one of the instruments, annunciators, and controls to be interpreted;

    locations of the at least one of the instruments, annunciators, and controls to be interpreted;

    a definition of a scan area of the at least one of the instruments, annunciators, and controls to be interpreted;

    value maps associated with the locations of the at least one of the instruments, annunciators, and controls to be interpreted;

    a mathematical representation of a reference area of the image for use in image registration;

    a mathematical representation of reference patterns used to determine states of the at least one of the instruments, annunciators, and controls to be interpreted;

    chromaticity and intensity information for use in interpreting an illumination status of an annunciator;

    chromaticity and intensity information for use in interpreting background versus foreground information of the image; and

    startup parameters and preferences.

    19. The software according to Claim 17, wherein the output file includes a time history of parameters interpreted and analyzed based upon definitions read from the configuration file.

    20. The software according to Claim 17, wherein the software, when executed, is operable to decode image results into a two-dimensional pixel bitmap or raster that is stored as a two-dimensional array.

    21. The software according to Claim 17, wherein the software, when executed, determines the state of the at least one of the instruments, annunciators, and controls by:

    registering the captured image with the definition of the target area of the image to be interpreted;

    adjusting coordinates of the target area defined by the configuration file based upon the registration of the captured image;

    scanning each pixel of the registered captured image;

    applying a rank-order filter to each pixel of the registered captured image;

    segmenting the target area into a foreground and a background; and

    capturing foreground pixels corresponding to a variable characteristic of the at least one of the instruments, annunciators, and controls to be interpreted.


     
    22. The software according to Claim 21, wherein the software, when executed, registers the captured image by:

    computing a Fourier transform of the captured image;

    element-wise multiplying a reference Fourier transform conjugate from the configuration file with the Fourier transform of the captured image;

    computing an inverse Fourier transform of the multiplied reference Fourier transform conjugate and Fourier transform of the captured image;

    finding row and column indices of a maximum value in an array resulting from the inverse Fourier transform; and

    storing the row and column indices.

    23. The software according to Claim 22, wherein the software, when executed, registers the captured image by:

    element-wise normalizing the multiplied reference Fourier transform conjugate and Fourier transform of the captured image prior to computing the inverse Fourier transform.

    24. The software according to Claim 21, wherein the software, when executed, segments the target area into a foreground and a background by:

    computing a threshold value for the target area based upon results from applying the rank-order filter to each pixel of the registered captured image;

    retrieving an intensity of each pixel in the target area of the captured image;

    comparing the intensity of each pixel to the threshold value; and

    classifying each pixel as foreground or background.

    25. The software according to Claim 21, wherein the software, when executed, captures foreground pixels corresponding to a variable characteristic of the at least one of the instruments, annunciators, and controls to be interpreted by:

    windowing the captured image about the target area;

    registering the windowed area of the captured image with the definition of the target area of the image to be interpreted; and

    determining the state of the at least one of the instruments, annunciators, and controls to be interpreted based upon the result of registering the windowed area.


     


    Claims

    1. A system (1001) for optical recognition, interpretation, and digitization of human readable instruments, annunciators, and controls (1003), comprising:

    an image acquisition sensor (1005) operable to capture images of at least one of the instruments, annunciators, and controls;

    a configuration file for providing a definition of a target area of the images captured by the image acquisition sensor;

    a logic processing unit (1007) operable to decode the images captured by the image acquisition sensor (1005) and to interpret the decoded images to determine a state of the at least one of the instruments, annunciators, and controls;

    wherein the human readable instruments, annunciators, and controls are operably associated with a machine and configured to aid a human in the operation of the machine;

    wherein the logic processing unit (1007) is configured to detect when the target area is occluded from the view of the image acquisition sensor in the captured images, to flag suspect data with sentinel values if occlusion of the at least one of the instruments, annunciators, and controls has been detected, and to write the state of the at least one of the instruments, annunciators, and controls to an output file.

    2. The system according to Claim 1, wherein the logic processing unit (1007) is operable to store the state of the at least one of the instruments, annunciators, and controls.

    3. The system according to Claim 1, wherein the logic processing unit (1007) is operable to store time-based states of the at least one of the instruments, annunciators, and controls.

    4. The system according to Claim 1, wherein the logic processing unit (1007) creates a pseudo-parameter from the state of the at least one of the instruments, annunciators, and controls.

    5. The system according to Claim 1, further comprising:

    a ground data processor (1011) operably associated with the logic processing unit.

    6. The system according to Claim 1, wherein the human readable instruments, annunciators, and controls are objects in an aircraft cockpit.

    7. The system according to Claim 6, wherein the aircraft is a rotorcraft.
     
    8. A method for optical recognition, interpretation, and digitization of human readable instruments, annunciators, and controls, comprising:

    reading a configuration file to provide a definition of a target area of an image to be interpreted of at least one of the instruments, annunciators, and controls (1103) to be interpreted;

    opening an output file (1105);

    determining whether a captured image to be interpreted exists (1109);

    if the captured image exists:

    decoding the captured image (1111);

    detecting when the target area is occluded from the view of the image acquisition sensor in the captured image;

    determining a state of the at least one of the instruments, annunciators, and controls (1115) based upon the decoded captured image and the definition in the configuration file, and flagging suspect data with sentinel values if occlusion of the at least one of the instruments, annunciators, and controls has been detected; and

    writing the state of the at least one of the instruments, annunciators, and controls (1117) to the output file;

    wherein the human readable instruments, annunciators, and controls are operably associated with a machine and configured to aid a human in the operation of the machine.


     
    9. The method according to Claim 8, wherein the configuration file includes one or more of:

    a number of the at least one of the instruments, annunciators, and controls to be interpreted;

    types of the at least one of the instruments, annunciators, and controls to be interpreted;

    locations of the at least one of the instruments, annunciators, and controls to be interpreted;

    a definition of a scan area of the at least one of the instruments, annunciators, and controls to be interpreted;

    value maps associated with the locations of the at least one of the instruments, annunciators, and controls to be interpreted;

    a mathematical representation of a reference area of the image for use in image registration;

    a mathematical representation of reference patterns used to determine states of the at least one of the instruments, annunciators, and controls to be interpreted;

    chromaticity and intensity information for use in interpreting an illumination status of an annunciator;

    chromaticity and intensity information for use in interpreting background versus foreground information of the image; and

    startup parameters and preferences.

    10. The method according to Claim 8, wherein the output file includes a time history of parameters interpreted and analyzed based upon definitions read from the configuration file.

    11. The method according to Claim 8, wherein decoding the image results in a two-dimensional pixel bitmap or raster that is stored as a two-dimensional array.

    12. The method according to Claim 8, wherein determining the state of the at least one of the instruments, annunciators, and controls is accomplished by:

    registering the captured image with the definition of the target area of the image to be interpreted;

    adjusting coordinates of the target area defined by the configuration file based upon the registration of the captured image;

    scanning each pixel of the registered captured image;

    applying a rank-order filter to each pixel of the registered captured image;

    segmenting the target area into a foreground and a background; and

    capturing foreground pixels corresponding to a variable characteristic of the at least one of the instruments, annunciators, and controls to be interpreted.


     
    13. The method according to Claim 12, wherein registering the captured image is accomplished by:

    computing a Fourier transform of the captured image;

    element-wise multiplying a reference Fourier transform conjugate from the configuration file with the Fourier transform of the captured image;

    computing an inverse Fourier transform of the multiplied reference Fourier transform conjugate and Fourier transform of the captured image;

    finding row and column indices of a maximum value in an array resulting from the inverse Fourier transform; and

    storing the row and column indices.

    14. The method according to Claim 13, further comprising:

    element-wise normalizing the multiplied reference Fourier transform conjugate and Fourier transform of the captured image prior to computing the inverse Fourier transform.

    15. The method according to Claim 12, wherein segmenting the target area into a foreground and a background is accomplished by:

    computing a threshold value for the target area based upon results from applying the rank-order filter to each pixel of the registered captured image;

    retrieving an intensity of each pixel in the target area of the captured image;

    comparing the intensity of each pixel to the threshold value; and

    classifying each pixel as foreground or background.

    16. The method according to Claim 12, wherein capturing foreground pixels corresponding to a variable characteristic of the at least one of the instruments, annunciators, and controls to be interpreted is accomplished by:

    windowing the captured image about the target area;

    registering the windowed area of the captured image with the definition of the target area of the image to be interpreted; and

    determining the state of the at least one of the instruments, annunciators, and controls to be interpreted based upon the result of registering the windowed area.


     
    17. Software for optical recognition, interpretation, and digitization of human readable instruments, annunciators, and controls, the software embodied in computer-readable media and, when executed, operable to:

    read a configuration file to provide a definition of a target area of an image to be interpreted of at least one of the instruments, annunciators, and controls (1103) to be interpreted;

    open an output file (1105);

    determine whether a captured image to be interpreted exists (1109);

    if the captured image exists:

    decode the captured image (1111);

    detect when the target area is occluded from the view of the image acquisition sensor in the captured image;

    determine a state of the at least one of the instruments, annunciators, and controls (1115) based upon the decoded captured image and the definition in the configuration file, and flag suspect data with sentinel values if occlusion of the at least one of the instruments, annunciators, and controls has been detected; and

    write the state of the at least one of the instruments, annunciators, and controls (1117) to the output file;

    wherein the human readable instruments, annunciators, and controls are operably associated with a machine and configured to aid a human in the operation of the machine.

    18. The software according to Claim 17, wherein the configuration file includes one or more of:

    a number of the at least one of the instruments, annunciators, and controls to be interpreted;

    types of the at least one of the instruments, annunciators, and controls to be interpreted;

    locations of the at least one of the instruments, annunciators, and controls to be interpreted;

    a definition of a scan area of the at least one of the instruments, annunciators, and controls to be interpreted;

    value maps associated with the locations of the at least one of the instruments, annunciators, and controls to be interpreted;

    a mathematical representation of a reference area of the image for use in image registration;

    a mathematical representation of reference patterns used to determine states of the at least one of the instruments, annunciators, and controls to be interpreted;

    chromaticity and intensity information for use in interpreting an illumination status of an annunciator;

    chromaticity and intensity information for use in interpreting background versus foreground information of the image; and

    startup parameters and preferences.


     
    19. The software according to Claim 17, wherein the output file includes a time history of parameters interpreted and analyzed based upon definitions read from the configuration file.

    20. The software according to Claim 17, wherein the software, when executed, is operable to decode image results into a two-dimensional pixel bitmap or raster that is stored as a two-dimensional array.

    21. The software according to Claim 17, wherein the software, when executed, determines the state of the at least one of the instruments, annunciators, and controls by:

    registering the captured image with the definition of the target area of the image to be interpreted;

    adjusting coordinates of the target area defined by the configuration file based upon the registration of the captured image;

    scanning each pixel of the registered captured image;

    applying a rank-order filter to each pixel of the registered captured image;

    segmenting the target area into a foreground and a background; and

    capturing foreground pixels corresponding to a variable characteristic of the at least one of the instruments, annunciators, and controls to be interpreted.

    22. The software according to Claim 21, wherein the software, when executed, registers the captured image by:

    computing a Fourier transform of the captured image;

    element-wise multiplying a reference Fourier transform conjugate from the configuration file with the Fourier transform of the captured image;

    computing an inverse Fourier transform of the multiplied reference Fourier transform conjugate and Fourier transform of the captured image;

    finding row and column indices of a maximum value in an array resulting from the inverse Fourier transform; and

    storing the row and column indices.

    23. The software according to Claim 22, wherein the software, when executed, registers the captured image by:

    element-wise normalizing the multiplied reference Fourier transform conjugate and Fourier transform of the captured image prior to computing the inverse Fourier transform.

    24. The software according to Claim 21, wherein the software, when executed, segments the target area into a foreground and a background by:

    computing a threshold value for the target area based upon results from applying the rank-order filter to each pixel of the registered captured image;

    retrieving an intensity of each pixel in the target area of the captured image;

    comparing the intensity of each pixel to the threshold value; and

    classifying each pixel as foreground or background.

    25. The software according to Claim 21, wherein the software, when executed, captures foreground pixels corresponding to a variable characteristic of the at least one of the instruments, annunciators, and controls to be interpreted by:

    windowing the captured image about the target area;

    registering the windowed area of the captured image with the definition of the target area of the image to be interpreted; and

    determining the state of the at least one of the instruments, annunciators, and controls to be interpreted based upon the result of registering the windowed area.


     




    Drawing









































    Cited references

    REFERENCES CITED IN THE DESCRIPTION



    This list of references cited by the applicant is for the reader's convenience only. It does not form part of the European patent document. Even though great care has been taken in compiling the references, errors or omissions cannot be excluded and the EPO disclaims all liability in this regard.

    Patent documents cited in the description