FIELD OF THE INVENTION
[0001] The present invention relates generally to optical devices, and more particularly
to an optical sensor utilizing background reconstruction image processing techniques
in order to provide a much higher level of resolution of a localized object within
a background scene.
BACKGROUND OF THE INVENTION
[0002] Optical sensors are devices which for decades have been used to detect and record
optical images. Various types of optical sensors have been developed which operate in
the Ultraviolet, Infrared, and Visible bands.
Examples of such devices include Weather Sensors, Terrain Mapping Sensors, Surveillance
Sensors, Medical Probes, Telescopes and Television Cameras.
[0003] An optical sensor typically includes an optical system and one or more detectors.
The optical system portion is made up of various combinations of lenses, mirrors and
filters used to focus light onto a focal plane located at the image plane of the optical
system. The detectors, which are located at the image plane, are used to convert the light
received from the optical system into electrical signals. Some types of optical sensors
use film rather than detectors to record the images. In this case, the grain size
of the film is analogous to the detectors described above.
[0004] An important performance characteristic of optical sensors is their "spatial resolution"
which is the size of the smallest object that can be resolved in the image or, equivalently,
the ability to differentiate between closely spaced objects. If the optical system
is free from optical aberrations (which means being "well corrected") the spatial
resolution is ultimately limited by either diffraction effects or the size of the
detector.
[0005] Diffraction is a well known characteristic of light which describes how light passes
through an aperture of an optical system. Diffraction causes the light passing through
an aperture to spread out so that the point light sources of a scene end up as a pattern
of light (known as a diffraction pattern) diffused across the image. For a well corrected,
unobscured optical system known as a diffraction limited system, the diffraction pattern
includes a very bright central spot, surrounded by somewhat fainter bright and dark
rings which gradually fade away as the distance from the central spot increases.
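For a circular aperture of diameter D operating at wavelength λ, for example, the angular radius of the bright central spot (the Airy disk) is approximately 1.22λ/D, so the diffraction blur grows as the aperture shrinks.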
[0006] An optical sensor that is designed to be diffraction limited usually has a very well
corrected optical system and detectors sized so that the central spot of the diffraction
pattern just fits within the active area of the detector. With conventional sensors,
making the detectors smaller does not improve resolution and considerably increases
the cost due to the expense of the extra detectors and the associated electronics.
[0007] The size of the aperture used in the optical system determines the amount of resolution
lost to diffraction effects. In applications such as camera lenses and telescope objectives,
the aperture size is normally expressed as an f-number which is the ratio of the effective
focal length to the size of the clear aperture. In applications such as microscope
objectives, the aperture size is normally expressed as a numerical aperture (NA) which
is the index of refraction times the sine of the half angle of the cone of illumination.
For a given focal length, a higher f-number corresponds to a smaller aperture, while
a higher numerical aperture corresponds to a larger aperture.
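As a minimal numeric sketch of these two aperture measures (the values are purely illustrative and are not taken from any embodiment described herein):

    import math

    def f_number(effective_focal_length_mm, clear_aperture_mm):
        # f-number = effective focal length / clear aperture diameter
        return effective_focal_length_mm / clear_aperture_mm

    def numerical_aperture(n, half_angle_deg):
        # NA = index of refraction * sin(half angle of the illumination cone)
        return n * math.sin(math.radians(half_angle_deg))

    print(f_number(500.0, 50.0))          # a 500 mm lens with a 50 mm aperture is f/10
    print(numerical_aperture(1.0, 30.0))  # a 30 degree half angle in air gives NA = 0.5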
[0008] A basic limitation of conventional optical sensors is the aperture size required
for a given level of resolution. Higher resolution images require larger apertures.
In many situations the use of such a system is very costly. This is because using
a larger aperture requires a significantly larger optical system.
[0009] The cost for larger systems which have apertures with diameters greater than one
foot is typically proportional to the diameter of the aperture raised to a power of
"x". The variable "x" usually ranges from 2.1 to 2.9 depending on a number of other
particulars associated with the sensor such as its wave band, field of regard, and
field of view.
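As a worked example of this scaling (illustrative numbers only), doubling the aperture diameter multiplies the cost by 2 raised to the power "x":

    # Cost grows as (aperture diameter) ** x, with x between 2.1 and 2.9.
    for x in (2.1, 2.5, 2.9):
        print(f"x = {x}: doubling the aperture multiplies cost by {2.0 ** x:.1f}")
    # x = 2.1 -> 4.3, x = 2.5 -> 5.7, x = 2.9 -> 7.5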
[0010] The size of the optical sensor is particularly relevant for systems that fly on some
type of platform, either in space or in the air. Under such conditions the sensor
must be light weight, strong, and capable of surviving the rigors of the flight environment.
Thus the cost of going to a larger optical system can be in the hundreds of millions
of dollars for some of the larger and more sophisticated sensors. Practical considerations,
such as the amount of weight the host rocket, plane, balloon, or other vehicle can
accommodate, or the amount of space available, may also limit the size of the sensor.
These practical considerations can prevent a larger system from being implemented
no matter how large the budget.
[0011] A number of optical imaging techniques have been developed to increase spatial resolution.
One such technique is known as sub-pixel resolution. In sub-pixel resolution the optical
system is limited in spatial resolution not by diffraction but by the size of the
detectors or pixels. In this case, the diffraction pattern of the aperture is much
smaller than the detectors, so the detectors do not record all the resolution inherent
in the optical system's image. Sub-pixel resolution attempts to reconstruct an image
that includes the higher resolution not recorded by the detectors. This technique
does not require hardware or system operation changes in order to work. Examples of
sub-pixel resolution techniques are disclosed in an article by J.B. Abbiss et al. in
ADVANCED SIGNAL PROCESSING ALGORITHMS, ARCHITECTURES AND IMPLEMENTATIONS II, The
International Society for Optical Engineering, Volume 1566, p. 363 (1991).
[0012] Another example is the use of "thinned aperture" systems where, for example, a widely-spaced
pattern of small holes is used as a substitute for the complete aperture. However,
even "thinned apertures" are limited in resolution by diffraction theory and by the
outer diameter of the widely-spaced pattern of small holes. Note that current electro-optical
systems are sometimes designed so that the size of their detector matches the size
of the diffraction blur of their optics.
[0014] The previously described techniques have a number of drawbacks with regard to optical
sensors. First, only one of these techniques is directed toward a diffraction limited
device. In addition, these techniques often produce systems of equations which cannot
be solved due to the practical constraints on computing power. Furthermore, none of
the previously described techniques specify either the types of detectors or other
system parameters which are used along with these techniques. In "A PARALLEL IMPLEMENTATION
OF A MODIFIED RICHARDSON-LUCY ALGORITHM FOR IMAGE DE-BLURRING" by A.G. Al-Bakkar et al.,
International Journal of Infrared and Millimeter Waves, Vol. 18, No. 3, pages 555 to 575
(March 1997) (ISSN: 0195-9271), a process of reconstructing an image from a blurred version of the same image is
disclosed. In "NON-LINEAR TECHNIQUES FOR IMAGE RESTORATION" by S.M.T. Matthews and A.H. Lettington,
International Symposium on Signal Processing and Its Applications, ISSPA, pp. 443 to
446 (1996), the filtering of noisy background data to obtain noise suppressed data is disclosed.
[0015] It is, therefore, an object of the present invention to provide an apparatus and
method for increasing the resolution of an optical sensor without using a substantially
larger aperture.
SUMMARY OF THE INVENTION
[0016] The invention as described in independent claims 1 and 7 provides an apparatus
and method for improving the spatial resolution of an object using a background reconstruction
approach wherein a localized object containing high spatial frequencies is assumed
to exist inside a background scene containing primarily low and/or very high spatial
frequencies compared to the spatial frequencies of the localized object. The imaging
system cannot pass these high spatial frequencies (neither the high frequencies of
the object, nor the very high frequencies of the background). The background image's
low spatial frequencies are used to reconstruct the background scene in which the
localized object is situated. Using this reconstructed background and the space limited
nature of the localized object (i.e. it is only present in part of the scene, not
the entire scene), the high spatial frequencies that did not pass through an optical
system are restored, reconstructing a detailed image of the localized object.
[0017] An improvement comprises filtering the noisy blurred background data of the same
scene to obtain noise suppressed data; applying estimations of point spread functions
associated with the noise suppressed data and optical system to estimates of the noise
suppressed data to obtain a reconstructed background image (Ir(x)); and low pass filtering
the noisy blurred scene data containing the object to be reconstructed (D1), and using
the reconstructed background image (Ir(x)) to eliminate the background data from the
image data to obtain a reconstructed image of an object with increased spatial resolution.
BRIEF DESCRIPTION OF THE DRAWINGS
[0018] The above objects, further features and advantages of the present invention are described
in detail below in conjunction with the drawings, of which:
FIGURE 1 is a general block diagram of the optical sensor according to the present
invention;
FIGURE 2 is a diagram of a linear detector array tailored to the present invention;
FIGURE 3 is a diagram of a matrix detector array tailored to the present invention;
FIGURE 4 is a diagram illustrating the size of an individual detector tailored to
the present invention;
FIGURE 5 is a diagram illustrating how the detector grid is sized with respect to
the central diffraction lobe;
FIGURE 6 is a diagram of a multi-linear detector array tailored to the present invention;
FIGURE 7 is a diagram of another version of a multi-linear detector array tailored
to the present invention;
FIGURE 8 is a diagram illustrating the operation of a beam splitter as part of an optical
system tailored to the present invention;
FIGURE 9A-B represents an image scene and a flow diagram of the non-linear background
reconstruction technique of the image scene according to the present invention;
FIGURE 10 is a diagram illustrating the Richardson-Lucy background reconstruction
portion of the non-linear image processing technique according to the present invention;
FIGURE 11 is a diagram illustrating the Object Reconstruction and background subtraction
portion of the non-linear image processing according to the present invention;
FIGURE 12 is a schematic illustrating the Fourier transform characteristic of binotf;
FIGURE 13 is a schematic of the relationship of binmap to the object of interest within
a window of a particular scene according to the present invention;
FIGURES 14A-D show the Fourier analysis of simulated bar targets;
FIGURES 15A-C show simulated images associated with a thinned aperture optical system
configuration using the non-linear method;
FIGURES 16A-C show the super-resolution of an image of a figure taken from a CCD camera
and reconstructed using the non-linear method according to the invention;
FIGURES 16D-F show the Fourier image analysis of the image of FIGs 16A-C;
FIGURES 17A-B show the Fourier analysis of superimposed truth, blurred, and reconstructed
images as a function of frequency for SNR values of 50 and 100, respectively; and
FIGURES 18a,b are flow diagrams of the linear algebra background reconstruction technique
according to the present invention.
DETAILED DESCRIPTION OF THE INVENTION
[0019] The present invention is directed to an apparatus and method of superresolution of
an image for achieving resolutions beyond the diffraction limit. This is accomplished
by background reconstruction, wherein a localized object containing high spatial frequencies
is assumed to exist inside a background scene containing primarily low and/or very
high spatial frequencies compared to the spatial frequencies of the localized object.
The point spread function (PSF) of an optical system causes the background data to
blend with and flow over objects of interest, thereby contaminating the boundaries
of these objects and making it exceedingly difficult to distinguish these objects
from the background data. However, if one knows or can derive the background, then
the object may be disentangled from the background such that there will exist uncontaminated
boundaries between the object and the background scene. Another way of describing
the effect of a PSF is to note that the spatial frequencies associated with the object
of interest which lie beyond the optical system cutoff frequency are lost. In the
background reconstruction approach, a localized object such as a tractor having high
spatial frequencies is super-resolved inside a background scene containing primarily
low and/or ultra-high spatial frequencies such as a cornfield. Typically, the imaging
system cannot pass either the high spatial frequencies of the object or the ultra-high
spatial frequencies of the background. This invention uses the background image's
low spatial frequencies to reconstruct the background scene in which the localized
object is situated. This reconstructed background and the space limited nature of
the localized object (that is, the object is present in only part of the scene rather
than the entire scene) can be used to restore the high spatial frequencies that did
not pass through the optical system, thereby reconstructing a detailed image of the
localized object.
[0020] In accordance with alternative embodiments of the present invention, both linear
and nonlinear methods for reconstructing localized objects and backgrounds to produce
super-resolved images are described.
[0021] Referring to FIGURE 1, there is shown a general block diagram of an optical sensor
accommodating the present invention. The sensor (10) as in conventional devices includes
an optical system (12) and detectors (14). The optical system (12) which includes
various combinations of lenses, mirrors and filters depending on the type of application
is used to focus light onto a focal plane where the detectors (14) are located. The
optical system (12) also includes a predetermined aperture size corresponding to a
particular numerical aperture (NA) which, in conventional diffraction-limited devices,
limits the amount of spatial resolution that is attainable. This is, as previously
described, due to diffraction blurring effects.
[0022] The optical system (12) can be described by an optical transfer function (OTF) which
represents the complete image forming system, and which can be used to characterize
that system.
[0023] The detectors (14) convert the light received from the optical system (12) into the
electrical signals which become the data used to generate images. In conventional
sensors the detectors are configured in a linear array for Scanning systems or in
a matrix array for Staring systems. In Scanning systems, the detector linear array
is swept in a direction perpendicular to the length of the array, generating data one
scan line at a time with each line corresponding to one line of the image. In Staring
systems the matrix array is not moved and generates all of the imaging data simultaneously.
Thus each detector of the matrix array corresponds to one pixel of the image. It is
intended that the detectors (14) of the present invention will be configured as a
linear array or a matrix array depending on the type of system being used.
[0024] The detectors (14) take many different forms depending on the wavelength of light
used by the present invention. For example, in the ultraviolet and X-Ray ranges such
detectors as semitransparent photocathodes and opaque photocathodes can be used. In
the visible range such detectors as vacuum phototubes, photomultipliers, photoconductors,
and photodiodes can be used. In the infrared range, such detectors as photoconductors,
photodiodes, pyroelectric, photon drag and Golay cell devices can be used.
[0025] In the present invention various elements of the sensor (10) must be optimized to
be used with a particular image-processing technique. The type of optimization depends
on the image-processing technique. As will be described in detail later, the present
invention includes two alternative image-processing techniques. Each of these two
super-resolution methods may be used with the sensor configurations as described below.
[0026] In Case one, the sensor (10) must include detectors (14) that have an "instantaneous
field of view" that is equal to or less than the desired level of spatial resolution.
If, for example, the required resolution is one meter or less then the "instantaneous
field of view" of the detectors must be one meter or less (even though the central
lobe of the diffraction pattern is much larger). This makes the pixel size of the
image produced by the sensor (10) smaller than the central diffraction lobe. (Note
that such a configuration adds additional cost to the sensors. However, for large
systems the increase in cost is less than the cost of a larger aperture.)
[0027] The sensor (10) can obey this rule in one of two ways. One way is to use a larger
number of smaller detectors (14). In conventional sensors the number of detectors used varies anywhere
from one to millions depending on the application.
[0028] In one embodiment of the present invention at least five times more detectors (14)
than normal are required to achieve the desired resolution. A diagram of a linear
detector array to be used with the present invention is shown in FIGURE 2, while a
diagram of a matrix detector array to be used with the present invention is shown
in FIGURE 3. The number of detectors (14) included in these arrays (28), (30) depends
on the application. However, as previously pointed out, to achieve the higher resolution
these arrays (28), (30) will include at least five times more detectors (14) than
conventional sensors for a given application.
[0029] In conventional sensors, the size of the individual detector is never smaller than
the size of the central diffraction lobe. This is because utilizing smaller detectors
serves no purpose since the resolution is limited by the optical aperture. In the
present invention, the size of the individual detectors (14) must be smaller than
the size of the central diffraction lobe (18), as shown in FIGURE 4.
[0030] Another way of configuring the sensor (10) according to Case one is again to use
a larger number of detectors (14), but instead of using smaller detectors configure
the optical system (12) so that more than one detector is spread across the central
diffraction lobe. This allows conventional size detectors (14) to be used. Again, the
number of detectors (14) used must be five or more times the number required in conventional
sensors. In order to configure the optical system (12) as described above, the back
focal length must be adjusted so that five or more detectors (14) spread across the
central diffraction lobe (18), as shown in FIGURE 5.
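The following sketch estimates the maximum detector pitch implied by this five-detector rule; it assumes a circular aperture, for which the central diffraction lobe at the focal plane has a diameter of approximately 2.44 times the wavelength times the f-number, and the function name and example values are illustrative rather than part of any disclosed embodiment:

    def max_detector_pitch_um(wavelength_um, f_number, detectors_across_lobe=5):
        # central lobe diameter for a circular aperture: ~2.44 * wavelength * f-number
        lobe_diameter_um = 2.44 * wavelength_um * f_number
        return lobe_diameter_um / detectors_across_lobe

    # Example: 0.5 um (visible) light at f/10 gives a 12.2 um central lobe,
    # so the detector pitch must be about 2.4 um or finer.
    print(max_detector_pitch_um(0.5, 10.0))  # 2.44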
[0031] In Case two, multiple images of the same scene are taken, displaced from one another
by known sub-pixel distances. In Scanning systems, it is difficult to generate such multiple
image data by viewing the object at different times, because the optical system assumes
that the object remains stationary while being scanned. The solution is to use detectors (14) configured
in a multi-linear array as shown in FIGURE 6. The multi-linear array (20) includes
a number of individual linear arrays (22) shifted parallel to each other by a distance
(d) which is a fraction of a pixel. When the array (20) is swept each linear array
(22) generates imaging data corresponding to one scan line of each of the images.
Thus each linear array (22) creates one of the images being produced. In the preferred
configuration, the array (20) includes ten or more individual linear arrays (22) which
are capable of producing ten or more different images.
[0032] Now two more examples of this technique are discussed. These examples are labeled
Case three and Case four.
[0033] In Case three, the sensor (10) is configured to take images of objects in multiple
color bands. In Scanning systems, this is accomplished by utilizing a multi-linear
array, as shown in FIGURE 7. The multi-linear array (24) also includes a number of
individual linear arrays (22) arranged in a parallel configuration shifted parallel
to each other by a distance (d) which is a fraction of a pixel. A color filter (26)
is disposed over each of the linear arrays (22). The color filters (26) are configured
to pass only a particular portion of the color spectrum which may include multiple
wavelengths of visible light. When the array (24) is swept, each linear array (22)
produces images of the same object in different color bands. The filters (26) are
fabricated by depositing optical coatings on transparent substrates, which are then
placed over each of the linear arrays (22). This process is well known.
[0034] In Staring systems, the multiple color-band data is created by incorporating a beam
splitter in the optical system (12) and using multiple detector arrays. Such a configuration
is illustrated in FIGURE 8. The beam splitter (28) splits the incoming light (32)
into multiple beams (34); Figure 8 shows the basic idea using a two-beam system. Due
to the operation of the beam splitter (28) each light beam (34) includes a different
part of the color spectrum which may include one or more different bands of visible
or infrared light. Each light beam is directed to one of the detector arrays (30)
producing images of the same object in different color bands.
[0035] In Case four, the sensor (10) is a combination of the three previously described
cases. This is accomplished by combining the principles discussed above with regard
to Cases one, two or three. In all of these cases the sensor must be designed to have
a signal to noise ratio which is as high as possible. This is done either by increasing
the integration time of the detectors (14) or by slowing down the scan speed as much
as possible for scanning systems. For Case two, the system's design, or its operation
mode, or both, are changed in order to take the required multiple images in a known
pattern displaced by a known distance that is not a multiple of a pixel, but rather
is a multiple of a pixel plus a known fraction of a pixel.
[0036] Referring back to FIGURE 1, coupled to the detectors (14) is a processor (16) which
processes the image data to achieve the higher resolution. This is done by recovering
"lost" information from the image data. Even though the diffraction blur destroys
the required spatial resolution, some of the "lost" spatial information still exists
spread across the focal plane. The small-size detectors described above sample at a
5 to 10 times higher rate than is customary in these sorts of optical systems; in
conjunction with the processing described below, this enables much of the "lost"
information to be recovered, restoring the image to a higher level of resolution than
classical diffraction theory would allow.
[0037] The processor (16) uses one of two image processing techniques: a Non-linear Reconstruction
method using a modified Richardson-Lucy Enhancement technique, and a background reconstruction
approach using a linear algebra technique.
[0038] One reasonable extension of the previously described imaging techniques is to use
phase retrieval or wave front phase information to reconstruct the image and thus
achieve higher resolution. Another reasonable extension of the previously described
technique is to use prior knowledge of the background scene to help resolve objects
that have recently moved into the scene. The processor (16) in addition to using one
of the above described primary data processing techniques, also uses other techniques
to further process the imaging data. This further processing is accomplished by standard
image enhancement techniques which can be used to improve the reconstructed image.
Such techniques include, but are not limited to, edge sharpening, contrast stretching
or other contrast enhancement techniques.
[0039] The Non-linear Background Reconstruction method using a modified Richardson-Lucy
Enhancement Technique is described as follows. In Figure 9A, there is shown a scene
(S) comprising a localized object (O) such as a tractor within a noisy blurred background
(B). In Figure 9B, input data D2 (Block 20), representing the noisy blurred background
data, is input into module 30 to remove the noise from D2 using a modified version
of the method of sieves. Note that the input data D1 and D2 indicated in block 20
have been sampled at the Nyquist rate (5 times the customary image sampling rate)
preferably at twice the Nyquist rate (ten times the customary sampling rate) to obtain
robust input data.
[0040] The modified method of sieves removes noise by averaging adjacent pixels of the noisy
blurred background data D2 of the same scene together, using two and three pixel wide
point spread functions. Array h0 in equation (9a) of module 30 represents the optical
system PSF, which is the Fourier transform of the OTF input.
[0041] The output of module 30 thus provides separate pictures: the optical system image
I(x) and the modified background data images D3(x) and D4(x). As shown in block 40,
new point spread functions h3T and h4T have been constructed to account for the combined
effect of the method of sieves and the optical system. These new point spread functions
use both the two and three pixel wide point spread functions, h3 and h4, previously
identified, as well as the optical system PSF h0, as shown in equation 9(d) and (e),
to arrive at the new PSF's.
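A one-dimensional sketch of this construction follows. Because equations 9(d) and 9(e) are not reproduced here, the sketch assumes that the combined point spread functions are formed by convolving h0 with h3 and h4 (the usual composition of cascaded blurs); the kernel values are illustrative:

    import numpy as np

    h3 = np.array([0.5, 0.5])                      # two pixel wide averaging kernel
    h4 = np.array([1.0, 1.0, 1.0]) / 3.0           # three pixel wide averaging kernel
    h0 = np.array([0.05, 0.25, 0.40, 0.25, 0.05])  # illustrative optical system PSF

    h3T = np.convolve(h0, h3)  # combined PSF for the D3 channel, cf. eq. 9(d)
    h4T = np.convolve(h0, h4)  # combined PSF for the D4 channel, cf. eq. 9(e)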
[0042] As shown in block 50, the Richardson-Lucy method is then used to reconstruct the
background scene data, D2, with the reconstructed data defined to be Ir(x). Figure
10 shows an exploded view of the processing steps for reconstructing the background
scene to obtain the reconstructed background Ir(x).
[0043] In Figure 10 the noise suppressed data D2 from block 20 of Figure 9B is used as
the first estimate of the true background scene, In(x) (Module 100). As shown in module
110, this estimate of the true background scene is then blurred using the combined
method of sieves and optical system point spread functions to obtain two picture representations
I3(x) and I4(x), where I3(x) is given by equation 10(a) and I4(x) is given by equation
10(b). Two new arrays, as shown in module 120, are then created by dividing pixel
by pixel the noisy scene data, D3(x) and D4(x), by the blurred estimate of the true
background scene, I3(x) and I4(x), as shown in equations 10(c) and (d) respectively.
The new arrays T3(x) and T4(x) are then correlated with the combined method of sieves
and optical system PSF's and the result is multiplied pixel by pixel with the current
estimate of the true background scene. In this manner a new estimate of the true background
scene, Z(x), is obtained as shown in module 130 and by equations 10 (e)-(g).
[0044] The processor then determines whether a predetermined number of iterations have been
performed, as shown in block 140. If the predetermined number has not been performed,
the current estimate of the true background is replaced by the new estimate of the
true background (Z(x)) (Block 150) and the processing sequence or modules 110 - 140
are repeated. If, however, the predetermined number has been reached, then the latest
estimate of the true background scene is taken to be the reconstructed background
scene; that is, Ir(x) is taken to equal Z(x) as shown in block 160. Note that for
the optical systems described above the predetermined number of iterations is preferably
between one thousand and two thousand.
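A compact one-dimensional sketch of this iteration (blocks 100 to 160 of Figure 10) is given below. Since equations 10(e)-(g) are not reproduced in the text, combining the two channels by a pixel-wise average in the update is an assumption of this sketch:

    import numpy as np

    def reconstruct_background(D3, D4, h3T, h4T, iterations=1000, eps=1e-12):
        In = D3.copy()  # block 100: noise suppressed data as the first estimate
        for _ in range(iterations):
            I3 = np.convolve(In, h3T, mode="same")   # eq. 10(a)
            I4 = np.convolve(In, h4T, mode="same")   # eq. 10(b)
            T3 = D3 / (I3 + eps)                     # eq. 10(c)
            T4 = D4 / (I4 + eps)                     # eq. 10(d)
            # block 130: correlate with the combined PSFs, then multiply
            # pixel by pixel with the current estimate (channel average assumed)
            In = In * 0.5 * (np.correlate(T3, h3T, mode="same")
                             + np.correlate(T4, h4T, mode="same"))
        return In  # block 160: the reconstructed background Ir(x)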
[0045] In Figure 9B the reconstructed background Ir(x) (equal to Z(x)) is then input to
module 60, where the background subtraction steps are performed to super-resolve the
object within the noisy blurred scene data (D1). The reconstructed object, G(x), which
has been super-resolved, is then output as shown in module 70.
[0046] The background subtraction method for super-resolving the object of interest in D1
is detailed in Figure 11. In Figure 11 the noisy blurred scene data D1 containing
the object to be reconstructed is used as input, with a low-pass filter (block 600)
applied to remove high spatial frequency noise from the D1 data as shown by equation
11(a). In equation 11(a) the transfer function hb(x) represents the Fourier transform
of the binotf array. The binotf array specifies the non-zero spatial frequencies of
the OTF in the Fourier plane. Figure 12 shows the Fourier transform of an image,
illustrating how binotf ensures that no frequencies beyond the cutoff remain in the spectrum.
[0047] As shown in Figure 12, the values of binotf are 1 up to the optical system cutoff
value fc and 0 beyond that cutoff. The low pass filtered data Df(x) in Figure 11 is
then multiplied pixel by pixel with the binmap array (binmap(x)) to separate out
a first estimate of the reconstructed object from the filtered D1 data, as shown in
equation 11 (b) in module 610. The binmap array specifies the region in the scene
containing the object to be super-resolved. The binmap has array elements equal to
1 where the object of interest is located and array elements equal to 0 everywhere
else. Figure 13 shows a pictorial representation of the binmap array, where the binmap
window (W) holds a region containing an object of interest (RI) consisting of pixels
equal to one in the region containing the object and pixels equal to zero everywhere
else (RO).
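In one dimension the two masks, and the low pass filter of equation 11(a), may be sketched as follows; the shapes follow the descriptions of Figures 12 and 13, while the function names are illustrative:

    import numpy as np

    def make_binotf(n, fc):
        f = np.abs(np.fft.fftfreq(n))    # frequency of each FFT bin
        return (f <= fc).astype(float)   # 1 up to the cutoff fc, 0 beyond

    def make_binmap(n, start, stop):
        m = np.zeros(n)
        m[start:stop] = 1.0              # region RI containing the object
        return m

    def lowpass(D1, binotf):
        # eq. 11(a): zero all frequencies beyond the optical cutoff
        return np.real(np.fft.ifft(np.fft.fft(D1) * binotf))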
[0048] The next step in Figure 11 is to replace the reconstructed background scene pixels,
Ir(x), by the estimated reconstructed object pixels, D0(x), at object positions specified
by the binmap (40), as shown in module 620. Equation 11(c) provides the mathematical
formula for this replacement, creating a reconstructed object array S(x). S(x) is
then convolved with the optical system PSF (h0) to blur the combination of the reconstructed
background and the estimated reconstructed object as shown in equation 11 (d) of module
630. A new array, N(x), is then created in block 640 by dividing, on a pixel by pixel
basis, the filtered D1 scene array (Df) by the blurred combination of the reconstructed
background and the estimated reconstructed object (IS) as shown in equation 11(e)
of module 640. The new array, N(x), is then correlated with the optical system PSF
(h0) and multiplied, for each pixel specified by binmap, by the current estimate of
the reconstructed object. Equation 11(f) of module 650 is then used to determine K(x),
the new estimate of the reconstructed object.
[0049] After K(x) has been calculated a check is made, as shown in module 660, to determine
whether the specified number of iterations have been completed. If more iterations
are needed the current estimate of the reconstructed object is replaced by the new
estimate as shown by equation 11(g) of module 670. Steps 620 - 660 are repeated until
the specified number of iterations have been accomplished. When this happens the latest
estimate of the reconstructed object is taken to be the reconstructed object; that
is, G(x) is set equal to K(x), as shown in module 680 (equation 11(h)).
[0050] In Figure 9B the reconstructed object G(x) is the output of module 70. This G(x)
is in fact the desired super-resolved object. Note that the above description of the
super-resolution method as shown in Figure 11 is set up to handle non-thinned apertures.
For thinned aperture systems, step 630 of Figure 11 (in which the new scene Is(x)
is blurred again using the optical system's PSF) may be excluded.
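A one-dimensional sketch of the loop of Figure 11 (blocks 600 to 680, non-thinned aperture) is given below. The pixel replacement of block 620 is written as S = Ir(1 - binmap) + binmap*K, which keeps background pixels where binmap is 0 and substitutes the object estimate where binmap is 1; this form, and the update of block 650, are reconstructions from the description rather than the patent's own equations:

    import numpy as np

    def reconstruct_object(D1, Ir, h0, binotf, binmap, iterations=1000, eps=1e-12):
        Df = np.real(np.fft.ifft(np.fft.fft(D1) * binotf))  # eq. 11(a), block 600
        K = binmap * Df                                     # eq. 11(b), D0(x)
        for _ in range(iterations):
            S = Ir * (1.0 - binmap) + binmap * K            # block 620
            Is = np.convolve(S, h0, mode="same")            # eq. 11(d), block 630
            N = Df / (Is + eps)                             # eq. 11(e), block 640
            # block 650: correlate N with h0, then multiply, for the pixels
            # selected by binmap, by the current object estimate
            K = binmap * K * np.correlate(N, h0, mode="same")
        return binmap * K                                   # block 680: G(x)

For a thinned aperture system, the line corresponding to block 630 would simply be omitted, as noted above.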
[0051] Figures 15A - C represent the application of the non-linear method to a thinned aperture
system. In this case, the thinned aperture configuration is an annulus. It should
be noted, however, that the method may be utilized with any thinned aperture design.
Figure 15A represents a computer generated ground scene (i.e. the truth scene). The
blurred image of that scene is then depicted in Figure 15B, while the final reconstructed,
super-resolved image is shown in Figure 15C.
[0052] Figures 16A - C represent images of figures taken from a CCD camera. Figure 16A represents
the truth scene (a picture of a toy spaceman), while Figure 16B shows the blurred
image of the scene (observed through a small aperture). Figure 16C represents the
reconstructed image, and Figure 16D shows the magnitude of the difference between the
two-dimensional Fourier transforms of the truth scene in Figure 16A and the blurred
image of Figure 16B. Figure 16E shows the difference between the truth scene and the first stage of
reconstruction (i.e. the deconvolved figure), while Figure 16F shows the magnitude
of the difference between the two-dimensional Fourier transforms of the truth scene
and the reconstructed, super-resolved image. Note that black indicates a 0 difference,
which is the desired result, while white indicates a maximum difference. As one can
see from a comparison of Figures 16D, E and F, the radius of Figure 16D corresponds
to the cutoff of the optical system or camera, and the deconvolved image frequencies
in Figure 16E have been enhanced inside the cutoff but remain zero outside the cutoff.
The super-resolved figure in Figure 16F has further improved the image by restoring
frequencies outside the cutoff, as shown by the increased blackness of the figure
with respect to either Figure 16D or 16E. This is a clear demonstration that super
resolution has occurred. Figures 17A-B represent a graphical illustration of the truth,
blurred, and reconstructed super-resolved images for SNR values of 50 and 100 respectively.
Figures 17A and B show that the non-linearly reconstructed images closely parallel
the truth images.
[0053] In an alternative embodiment, the reconstruction approach using a linear transform
method is now described. When reconstructing either the background scene or the localized
object, the imaging system is mathematically characterized by a linear operator represented
by a matrix. To restore either a background scene or the localized object, an inverse
imaging matrix corresponding to the inverse operator must be constructed. However,
due to the existence of system noise, applying the inverse imaging matrix to the image
is intrinsically unstable and results in a poorly reconstructed image. However, by
applying a constrained least squares procedure such as Tikhonov regularization, a
regularized pseudo-inverse (RPI) matrix may be generated. Zero-order Tikhonov regularization
is preferably used, although higher order Tikhonov regularization may sometimes give
better results. The details of this are not described here, as Tikhonov regularization
is well-known in the art.
[0054] A key quantity used to construct the RPI matrix is the regularization parameter which
controls the image restoration. Larger parameter values protect the restored image
from the corrupting effects of the optical system but result in a restored image which
has lower resolution. An optimum or near optimum value for the regularization parameter
may be derived automatically from the image data. Singular value decomposition (SVD)
of the imaging operator matrix may be used to compute the RPI matrix. Given an estimate
of the noise or error level of the degraded image, the singular values of the matrix
determine the extent to which the full information in the original scene may be recovered. Note
that the use of the SVD process is not essential in determining the RPI matrix, and
other methods such as QR decomposition of the imaging matrix may also be used to achieve
essentially the same result.
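A sketch of this construction, using the standard zero-order Tikhonov filter factors, follows; the parameter alpha below stands for the regularization parameter discussed above, and the function name is illustrative:

    import numpy as np

    def regularized_pseudo_inverse(H, alpha):
        # RPI of the imaging matrix H via its singular value decomposition
        U, s, Vt = np.linalg.svd(H, full_matrices=False)
        s_damped = s / (s**2 + alpha**2)   # damped reciprocal singular values
        return Vt.T @ np.diag(s_damped) @ U.T

    # Restoration: f_hat = regularized_pseudo_inverse(H, alpha) @ g, where g
    # is the blurred, noisy image stacked into a vector.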
[0055] The imaging matrix size increases approximately as the square of the image size,
and the computational burden associated with forming the RPI of the imaging matrix
quickly becomes intolerable. However, in the special case that the image and object
fields are the same size and are sampled at the same intervals (as is the case here),
the imaging matrix can be expanded into circulant form by inserting appropriately
positioned additional columns. A fundamental theorem of matrix algebra is that the
Fourier transform diagonalizes a circulant matrix. This allows the reduction of the
image reconstruction algorithm to a pair of one-dimensional fast Fourier transforms,
followed by a vector-vector multiplication, and finally an inverse one-dimensional
fast Fourier transform. This procedure allows the image restoration by this Tikhonov
regularization technique to be done entirely in the Fourier transform domain, dramatically
reducing the time required to compute the reconstructed image. Figure 18 provides
an illustration of the steps taken to obtain the reconstructed image using the linear
transform method.
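In the circulant case the whole restoration reduces to the transform-domain filter below (a one-dimensional sketch with illustrative names):

    import numpy as np

    def tikhonov_restore(g, h, alpha):
        G = np.fft.fft(g)                              # forward FFT of the image
        H = np.fft.fft(h, n=g.size)                    # transfer function of the PSF
        filt = np.conj(H) / (np.abs(H)**2 + alpha**2)  # regularized inverse filter
        return np.real(np.fft.ifft(filt * G))          # multiply, then inverse FFT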
[0056] Flow diagrams of the Linear Algebra Technique according to the present invention
are shown in Figures 18a and 18b. Referring now to Figure 18a, the technique first converts
the imaging data collected by the optical system of the sensor into a matrix of the
form g1(i,j) = Σ_m Σ_n h(i-m, j-n) f(m,n) + n1(i,j), where h is a matrix representation
of the point spread function of the optical system and f is the matrix representation
of the unblurred background with the embedded object, while n1 is the matrix representation
of the additive white noise associated with the imaging system (module 12). Next,
imaging data g2 comprising the image scene data which contains only the background
data is obtained in the form of g2(i,j) = Σ_m Σ_n h(i-m, j-n) b(m,n) + n2(i,j), where
b is a matrix representation of the unblurred background data taken alone and n2 is
a matrix representation of additive system white noise (module 14). Both g1 and g2
are then low-pass filtered to the cut-off frequency of the optical system to reduce
the effects of noise (module 15). Module 16 then shows the subtraction step whereby
the matrix representation (g3) of the difference between blurred scene data containing
the background and object of interest (g1) and the blurred scene containing only the
background data (g2) is formed as (g1-g2). Next, the position and size of the object
of interest are specified by choosing x,y coordinates associated with the image matrix
(g1-g2) (module 18). A segment of sufficient size to contain the blurred object in
its entirety is then extracted from the matrix representation of (g1-g2), as shown
in Module 20. That is, an area equal to the true extent of the local object plus its
diffracted energy is determined. An identically located segment (i.e. segment having
the same x,y coordinates) is extracted from the blurred background scene matrix g2
as shown in module 22. The two image segments output from modules 20 and 22 are then
input to module 24 to restore g2 and (g1-g2) using nth order Tikhonov regularization.
The restored segments are then added together as shown in step 26 and the area containing
the restored object of interest is extracted therefrom, as shown in module 28.
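The overall flow of Figure 18a may be sketched in one dimension as follows; the window coordinates and helper names are illustrative, and the restoration step is written as zero-order Tikhonov:

    import numpy as np

    def linear_method(g1, g2, h, binotf, window, alpha):
        def restore(seg):                       # module 24: Tikhonov restoration
            H = np.fft.fft(h, n=seg.size)
            return np.real(np.fft.ifft(np.conj(H) * np.fft.fft(seg)
                                       / (np.abs(H)**2 + alpha**2)))

        lp = lambda g: np.real(np.fft.ifft(np.fft.fft(g) * binotf))
        g1f, g2f = lp(g1), lp(g2)               # module 15: low pass filter
        g3 = g1f - g2f                          # module 16: blurred object alone
        a, b = window                           # module 18: object position and size
        restored = restore(g3[a:b]) + restore(g2f[a:b])  # modules 20-26
        return restored                         # module 28: restored object area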
[0057] The resulting reconstructed image includes much of the spatial resolution which was
lost due to diffraction blurring effects.
[0058] It should be noted that the present invention is not limited to any one type of optical
sensor device. The principles which have been described herein apply to many types
of applications which include, but are not limited to, Optical Earth Resource Observation
Systems (both Air and Spaceborne), Optical Weather Sensors (both Air and Spaceborne),
Terrain Mapping Sensors (both Air and Spaceborne), Surveillance Sensors (both Air and
Spaceborne), Optical Phenomenology Systems (both Air and Spaceborne), Imaging Systems
that utilize optical fibers such as Medical Probes, Commercial Optical Systems such
as Television Cameras, Telescopes utilized for astronomy and Optical Systems utilized
for Police and Rescue Work. The imaging system/method disclosed herein may be used
on a satellite, and a means for controlling the satellite which is located on the earth
may be provided.
[0059] While the invention has been particularly shown and described with reference to preferred
embodiments thereof, it will be understood by those skilled in the art that changes
in form and details may be made therein without departing from the scope of the present
invention as defined in the claims.
1. In an optical system (12) having a detector means (14) and processor means (16) in
which image data is obtained comprising scene data containing an object to be reconstructed,
and noisy, blurred background data of the same scene (D2) which is filtered to obtain
noise suppressed data (30), a method for increasing the spatial resolution of imaging
data produced by the optical system, comprising:
obtaining a reconstructed background image by:
(a) blurring an estimate of the true background scene data using the complete method
of sieves and the optical system point spread function (110) to obtain two picture
representation functions wherein said noise suppressed data is used as the estimate
of the true background scene data;
(b) dividing on a pixel by pixel basis the noisy scene data (D3, D4) by the blurred
estimate of the true background scene data to create a new array (T3, T4) (120);
(c) correlating said new array with the complete method of sieves and optical system
point spread function and multiplying the result on a pixel by pixel basis with the
current estimate of the true background scene data to provide a new estimate of said
true background scene Z(x) (130);
(d) repeating steps (a) to (c) until a threshold number of iterations has been performed;
and
(e) when said threshold number of iterations has been performed, taking the new estimate
of the true background scene to be the reconstructed background scene Ir(x) (160);
low pass filtering the scene data containing the object to be reconstructed, and using
the reconstructed background image to eliminate the background data from the image
data to obtain a reconstructed image of the object with increased spatial resolution
by
(f) separating out a first estimate, D0(x), of reconstructed object data from the filtered D1 data (610);
(g) replacing reconstructed background scene pixels by estimated reconstructed object
pixels at predetermined object positions to obtain an image indicative of the reconstructed
background and estimated reconstructed object (S(x)) (620);
(h) blurring the combination of the reconstructed background and estimated reconstructed
object to obtain a blurred image IS(x) (630);
(i) dividing on a pixel basis the filtered D1 scene data by the blurred combination
of the reconstructed background and estimated reconstructed object data to obtain
a new array of image data N(x) (640);
(j) correlating said new array N(x) with the optical system point spread function
and multiplying, for each specified pixel, by the current estimate of the reconstructed
object to provide a new estimate of said reconstructed object (650);
(k) repeating steps (g) to (j) until a threshold number of iterations has been performed;
and
(l) when said threshold number of iterations has been performed taking the new estimate
of the reconstructed object to be the reconstructed image of the object having increased
spatial resolution (680).
2. The method of claim 1 wherein the optical system is diffraction limited.
3. The method according to claim 2, wherein the step of filtering to remove noise from
D2 comprises using the modified method of sieves to remove said noise, using the equations:

D3(x) = Σ_y h3(x-y) D2(y)        D4(x) = Σ_y h4(x-y) D2(y)
where h3 and h4 are two and three pixel wide point spread functions for removing noise
by averaging adjacent pixels together, and where h0 represents the optical system point
spread function (30).
4. The method according to at least one of claims 1 to 3, wherein the step of separating
out a first estimate of the reconstructed object from the filtered data D1 further
comprises using binmap values specifying the position in a scene of the object to
be resolved in combination with the filtered data (Df(x)) to obtain D0(x) (610), where

D0(x) = binmap(x) Df(x)

and binmap(x) specifies the region in the scene containing the object to be superresolved.
5. The method according to claim 4, wherein the predetermined object positions for replacing
the reconstructed background scene pixels by the estimated reconstructed object pixels
S(x) are specified by the binmap values (620), where

S(x) = Ir(x) (1 - binmap(x)) + binmap(x) D0(x).
6. The method according to claim 5, wherein the step of blurring the combination of the
reconstructed background and estimated reconstructed object data uses the optical
system point spread function (630), where

IS(x) = Σ_y h0(x-y) S(y)

and IS(x) is the blurred combination of the reconstructed background and the estimated
reconstructed object, h0(x-y) is the optical system point spread function, and S(y) is
the combination of the reconstructed background and the estimated reconstructed object.
7. An optical system (12) having a detector means (14) and processor means (16) in which
image data is obtained comprising scene data containing an object to be reconstructed,
and noisy, blurred background data of the same scene, including means for filtering
the noisy blurred background of the same scene to obtain noise suppressed data (30),
an apparatus for increasing the spatial resolution of the imaging data produced by
the optical system, the apparatus comprising:
means for obtaining a reconstructed background image, comprising:
(a) means for blurring an estimate of the true background scene data using the complete
method of sieves and the optical point spread function to obtain two picture representation
functions (110) wherein said noise suppressed data is used as the estimate of the
true background scene;
(b) means for dividing on a pixel by pixel basis the noisy scene data (D3, D4) by
the blurred estimate of the true background scene data to create a new array (T3,
T4)(120);
(c) means for correlating said new array with the complete method of sieves and optical
system point spread function and multiplying the result on a pixel by pixel basis
with the current estimate of the true background scene data to provide a new estimate
of said scene Z(x) (130);
(d) means for causing (a), (b) and (c) to perform a threshold number of iterations;
(e) means for taking the new estimate of the true background scene to be the reconstructed
background scene Ir(x) (160) when said threshold number of iterations has been performed,
means for low pass filtering the scene data containing the object to be reconstructed
and
means for using the reconstructed background image to eliminate the background data
from the image data to obtain a reconstructed image of the object with increased spatial
resolution (60) comprising
(f) means for separating out a first estimate, D0(x), of reconstructed object data from the filtered D1 data (610);
(g) means for replacing reconstructed background scene pixels by estimated reconstructed
object pixels at predetermined object positions to obtain an image indicative of the
reconstructed background and estimated reconstructed object (S(x)) (620) ;
(h) means for blurring the combination of the reconstructed background and estimated
reconstructed object to obtain a blurred image Is(x) (630);
(i) means for dividing on a pixel basis the filtered D1 scene data by the blurred
combination of the reconstructed background and estimated reconstructed object data
to obtain a new array of image data N(x) (640);
(j) means for correlating said new array N(x) with the optical system point spread
function and multiplying, for each specified pixel, by the current estimate of the
reconstructed object to provide a new estimate of said reconstructed object (650);
(k) means for causing (g), (h), (i) and (j) to perform a threshold number of iterations,
and
(l) means for taking the new estimate of the reconstructed object to be the reconstructed
image of the object having increased spatial resolution (680) when said threshold
number of iterations has been performed.
8. The apparatus of claim 7, wherein the optical system is diffraction limited.
9. The apparatus of claim 8, wherein the means for filtering to remove noise from D2
comprises means for using the modified method of sieves to remove said noise, using
the equations:

D3(x) = Σ_y h3(x-y) D2(y)        D4(x) = Σ_y h4(x-y) D2(y)
where h3 and h4 are two and three pixel wide point spread functions for removing noise
by averaging adjacent pixels together, and where h0 represents the optical system point
spread function (30).
10. The apparatus of at least one of claims 7 to 9, wherein the means for separating out
a first estimate of the reconstructed object from the filtered data D1 further comprises
means for using binmap values specifying the position in a scene of the object to
be resolved in combination with the filtered data (Df(x)) to obtain D0(x) (610), where

D0(x) = binmap(x) Df(x).
11. The apparatus of claim 10, wherein the predetermined object positions for replacing
the reconstructed background scene pixels by the estimated reconstructed object pixels
S(x) are specified by the binmap values (620), where

S(x) = Ir(x) (1 - binmap(x)) + binmap(x) D0(x).
12. The apparatus of claim 11, wherein the means of blurring the combination of the reconstructed
background and estimated reconstructed object data uses the optical system point spread
function (630), where

IS(x) = Σ_y h0(x-y) S(y).
13. The method of claims 1 or 2 wherein the optical system is on a satellite which is
controlled from the earth.
14. The apparatus of claims 7 or 8 wherein the optical system is on a satellite, further
comprising means, located on the earth, for controlling the satellite.
1. In einem optischen System (12) mit einer Detektoreinrichtung (14) und einer Prozessoreinrichtung
(16), worin Bilddaten erhalten werden, die Szenendaten, die ein zu rekonstruierendes
Objekt und verrauschte Daten des verwischten Hintergrundes derselben Szene (D2) enthalten,
umfassen, die gefiltert werden, um rauschunterdrückte Daten (30) zu erhalten, ein
Verfahren zum Erhöhen der räumlichen Auflösung der von dem optischen System erzeugten
bildgebenden Daten, beinhaltend:
Erhalten eines rekonstruierten Hintergrundbildes durch:
(a) Verwischen eines Überschlagswertes der wahren Hintergrundszenendaten unter Anwendung
des vollständigen Siebverfahrens und der Punktspreizfunktion des optischen Systems
(110), um zwei Bilddarstellungsfunktionen zu erhalten, bei denen die rauschunterdrückten
Daten als Überschlagswert der wahren Hintergrundszenendaten dienen;
(b) Dividieren, Pixel für Pixel, der verrauschten Szenendaten (D3, D4) durch den verwischten
Überschlagswert der wahren Hintergrundszenendaten, um eine neue Anordnung (T3, T4)
zu erstellen (120);
(c) Korrelieren der neuen Anordnung mit dem vollständigen Siebverfahren und der Punktspreizfunktion
des optischen Systems und Multiplizieren, Pixel für Pixel, des Ergebnisses mit dem
aktuellen Überschlagswert der wahren Hintergrundszenendaten, um einen neuen Überschlagswert
der wahren Hintergrundszenendaten Z(x) bereitzustellen (130);
(d) Wiederholen der Schritte (a) bis (c), bis eine Schwellenanzahl von Iterationen
durchlaufen worden ist; und
(e) wenn die Schwellenanzahl der Iterationen durchlaufen worden ist, Betrachten des
neuen Überschlagswertes der wahren Hintergrundszene als die rekonstruierte Hintergrundszene
Ir(x) (160);
Filtern der Szenendaten, die das zu rekonstruierende Objekt enthalten, im Tiefpass
und Verwenden des rekonstruierten Hintergrundbildes, um die Hintergrunddaten aus den
Bilddaten zu entfernen mit dem Ziel, ein rekonstruiertes Bild des Objekts mit erhöhter
räumlicher Auflösung zu erhalten durch
(f) Ausscheiden eines ersten Überschlagswertes, D0(x), von Daten des rekonstruierten Objekts aus den gefilterten D1-Daten (610);
(g) Ersetzen von Pixeln der rekonstruierten Hintergrundszene durch Pixel des überschlägig
ermittelten, rekonstruierten Objekts an vorgegebenen Objektpositionen, um ein Bild
zu erhalten, das einen Hinweis auf den rekonstruierten Hintergrund und das überschlägig
ermittelte, rekonstruierte Objekt (S(x)) gibt (620) ;
(h) Verwischen der Kombination aus dem rekonstruierten Hintergrund und dem überschlägig
ermittelten, rekonstruierten Objekt, um ein verwischtes Bild Is(x) zu erhalten (630);
(i) Pixelweises Dividieren der gefilterten D1-Szenendaten durch die verwischte Kombination
aus den Daten des rekonstruierten Hintergrundes und des überschlägig ermittelten,
rekonstruierten Objekts, um eine neue Anordnung von Bilddaten N(x) zu erhalten (640);
(j) Korrelieren der neuen Anordnung N(x) mit der Punktspreizfunktion des optischen
Systems und Multiplizieren, für jedes angegebene Pixel, mit dem aktuellen Überschlagswert
des rekonstruierten Objekts, um einen neuen Überschlagswert des rekonstruierten Objekts
zu erhalten (650);
(k) Wiederholen der Schritte (g) bis (j), bis eine Schwellenanzahl von Iterationen
durchlaufen worden ist; und
(l) wenn die Schwellenanzahl der Iterationen durchlaufen worden ist, Betrachten des
neuen Überschlagswertes des rekonstruierten Objekts als das rekonstruierte Bild des
Objekts mit erhöhter räumlicher Auflösung (680).
2. Verfahren nach Anspruch 1, wobei das optische System beugungsbegrenzt ist.
3. Verfahren nach Anspruch 2, wobei der Schritt des Filterns zum Entfernen von Rauschen
aus D2 die Anwendung des modifizierten Siebverfahrens, um dieses Rauschen zu entfernen,
nach folgenden Gleichungen beinhaltet:

wobei h
3 und h
4 zwei und drei Pixel breite Punktspreizfunktionen zum Entfernen von Rauschen durch
das Mitteln benachbarter Pixel zusammen sind und wobei h
0 die Punktspreizfunktion des optischen Systems repräsentiert (30).
4. Verfahren nach mindestens einem der Ansprüche 1 bis 3, wobei der Schritt des Ausscheidens
eines ersten Überschlagswertes des rekonstruierten Objekts aus den Filterdaten D1
ferner das Verwenden von Binmapwerten, welche die Position des aufzulösenden Objekts
in einer Szene bezeichnen, in Kombination mit den gefilterten Daten (D
f(x)) beinhaltet, um D
0(x) zu erhalten (610), wobei

und binmap(x) den Bereich in der Szene bezeichnet, der das Objekt, das überaufgelöst
werden soll, enthält.
5. Verfahren nach Anspruch 4, wobei die vorgegebenen Objektpositionen zum Ersetzen der
Pixel der rekonstruierten Hintergrundszene durch die überschlägig ermittelten Pixel
S(x) des rekonstruierten Objekts durch die Binmapwerte spezifiziert sind (620), wobei
6. Verfahren nach Anspruch 5, wobei der Schritt des Verwischens der Kombination aus den
Daten des rekonstruierten Hintergrundes und des überschlägig ermittelten, rekonstruierten
Objekts die Punktspreizfunktion des optischen Systems nutzt (630), wobei

und I
s(x) die verwischte Kombination aus dem rekonstruierten Hintergrund und dem überschlägig
ermittelten, rekonstruierten Objekt ist, h
0(x-y) die Punktspreizfunktion des optischen Systems ist und S(y) die Kombination aus
dem rekonstruierten Hintergrund und dem überschlägig ermittelten, rekonstruierten
Objekt ist.
7. In an optical system (12) having detector means (14) and processor means (16), in which image data are obtained comprising scene data containing an object to be reconstructed and noisy data of the blurred background of the same scene, and including means for filtering the noise-blurred background of the same scene to obtain noise-suppressed data (30): an apparatus for increasing the spatial resolution of the imaging data produced by the optical system, comprising:
means for obtaining a reconstructed background image, comprising:
(a) means for blurring an estimate of the true background scene data using the complete sieve method and the optical system point spread function to obtain two image representation functions (110), in which the noise-suppressed data serve as the estimate of the true background scene;
(b) means for dividing, pixel by pixel, the noisy scene data (D3, D4) by the blurred estimate of the true background scene data to create a new array (T3, T4) (120);
(c) means for correlating the new array with the complete sieve method and the point spread function of the optical system and multiplying the result, pixel by pixel, by the current estimate of the true background scene data to provide a new estimate of the scene Z(x) (130);
(d) means for causing (a), (b) and (c) to be executed for a threshold number of iterations;
(e) means for taking the new estimate of the true background scene as the reconstructed background scene Ir(x) (160) once the threshold number of iterations has been completed;
means for low-pass filtering the scene data containing the object to be reconstructed; and
means for using the reconstructed background image to remove the background data from the image data so as to obtain a reconstructed image of the object with increased spatial resolution (60), comprising:
(f) means for separating a first estimate, D0(x), of reconstructed object data from the filtered D1 data (610);
(g) means for replacing pixels of the reconstructed background scene by pixels of the estimated reconstructed object at predetermined object positions to obtain an image indicative of the reconstructed background and the estimated reconstructed object (S(x)) (620);
(h) means for blurring the combination of the reconstructed background and the estimated reconstructed object to obtain a blurred image Is(x) (630);
(i) means for dividing, pixel by pixel, the filtered D1 scene data by the blurred combination of the reconstructed background data and the estimated reconstructed object to obtain a new array of image data N(x) (640);
(j) means for correlating the new array N(x) with the point spread function of the optical system and multiplying, for each specified pixel, by the current estimate of the reconstructed object to obtain a new estimate of the reconstructed object (650);
(k) means for causing (g), (h), (i) and (j) to be executed for a threshold number of iterations; and
(l) means for taking the new estimate of the reconstructed object as the reconstructed image of the object with increased spatial resolution (680) once the threshold number of iterations has been completed.
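Read as signal processing, the two loops of claim 7, steps (a) to (e) for the background and (g) to (j) for the object, are multiplicative ratio-correlation iterations of the Richardson-Lucy type. The sketch below is one plausible Python rendering under stated assumptions: the two sieve representations of step (c) are combined by simple averaging, ratios are guarded against division by zero with a small eps, correlation with a point spread function is implemented as convolution with the kernel flipped in both axes, and h30/h40 stand for the sieve kernels h3, h4 composed with the system PSF h0. None of these details are fixed by the claim itself.

    import numpy as np
    from scipy.signal import fftconvolve

    def _correlate(a, k):
        """Correlate a 2-D array with kernel k (= convolve with k flipped)."""
        return fftconvolve(a, k[::-1, ::-1], mode="same")

    def reconstruct_background(D3, D4, h30, h40, init, n_iters=50, eps=1e-12):
        """Steps (a)-(e): recover the background Ir(x) by repeated
        blur / ratio / correlate-and-multiply updates; init is the
        noise-suppressed data used as the first estimate."""
        Z = init.astype(float)
        for _ in range(n_iters):                      # (d) threshold iterations
            I3 = fftconvolve(Z, h30, mode="same")     # (a) two image representations
            I4 = fftconvolve(Z, h40, mode="same")
            T3 = D3 / np.maximum(I3, eps)             # (b) pixel-by-pixel ratios
            T4 = D4 / np.maximum(I4, eps)
            corr = 0.5 * (_correlate(T3, h30)
                          + _correlate(T4, h40))      # (c) combined (average assumed)
            Z = Z * corr                              # (c) multiplicative update
        return Z                                      # (e) reconstructed background Ir(x)

    def reconstruct_object(D1f, Ir, binmap, h0, n_iters=50, eps=1e-12):
        """Steps (f)-(l): super-resolve the object inside binmap against
        the fixed reconstructed background Ir."""
        D0 = binmap * D1f                             # (f) first object estimate
        for _ in range(n_iters):                      # (k) threshold iterations
            S = Ir * (1 - binmap) + binmap * D0       # (g) splice object into background
            Is = fftconvolve(S, h0, mode="same")      # (h) blur composite with system PSF
            N = D1f / np.maximum(Is, eps)             # (i) ratio image N(x)
            D0 = D0 * _correlate(N, h0)               # (j) correlate with PSF, multiply
        return D0                                     # (l) super-resolved object image

Under these assumptions the loops run in sequence: reconstruct_background is applied to the sieve-filtered background frames (D3, D4), and reconstruct_object then sharpens only the masked region of the low-pass-filtered frame D1f that contains the object, since D0 is zero outside binmap from the start.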
8. Apparatus according to claim 7, wherein the optical system is diffraction-limited.
9. Apparatus according to claim 8, wherein the filtering means for removing noise from D2 comprises means for applying the modified sieve method to remove said noise according to the following equations:

D3(x) = Σy h3(x-y) * D2(y),  D4(x) = Σy h4(x-y) * D2(y),

where h3 and h4 are two- and three-pixel-wide point spread functions for removing noise by averaging adjacent pixels together, and where h0 represents the point spread function of the optical system (30).
10. Apparatus according to at least one of claims 7 to 9, wherein the means for separating a first estimate of the reconstructed object from the filtered data D1 further comprises means for using binmap values, which designate the position in the scene of the object to be resolved, in combination with the filtered data (Df(x)) to obtain D0(x) (610), where

D0(x) = binmap(x) * Df(x).
11. Apparatus according to claim 10, wherein the predetermined object positions for replacing the pixels of the reconstructed background scene by the estimated pixels S(x) of the reconstructed object are specified by the binmap values (620), where

S(x) = Ir(x) * (1 - binmap(x)) + binmap(x) * D0(x).
12. Apparatus according to claim 11, wherein the means for blurring the combination of the reconstructed background data and the estimated reconstructed object uses the point spread function of the optical system (630), where

Is(x) = Σy h0(x-y) * S(y).
13. Method according to claim 1 or 2, wherein the optical system is located on a satellite which is controlled from the earth.
14. Apparatus according to claim 7 or 8, wherein the optical system is located on a satellite, further comprising earth-based means for controlling the satellite.