[0001] The present invention generally relates to the field of the production of magnified
images of small objects using optical microscopes.
[0002] More specifically the invention relates to an imaging method for obtaining a super-resolution
image of an object, based upon an optical microscope adapted for capturing an image
of the object, said optical microscope including:
a support plate for bearing the object,
an illumination source adapted to produce an illumination beam for illuminating the
object,
an optical element for focusing the illumination beam on the support plate, the section
of the object on the support plane currently illuminated by the focused illumination
beam being called hereafter the target region,
a digital camera including a matrix of sensors for capturing an image of the target
region, each sensor providing for a respective pixel value and
a displacement block for displacing the support plate relative to the focused illumination
beam and to the digital camera along at least two displacement axes among three axes
x, y, z perpendicular to each other, wherein the axes x and y define the plane of
the support plate, said at least two displacement axes defining two corresponding
perpendicular image axes of the super-resolution image.
[0003] The resolution of an optical microscope relates to the ability of the microscope
to reveal adjacent details as distinct and separate. Independently of imperfections
of lenses, the optical microscope's resolution is limited by light's diffraction.
[0004] Indeed, because of light's diffraction, the image of a point is not a point, but
appears as a fuzzy disk surrounded by diffraction rings, called the "Airy disk" or "point
spread function". Thus two points of the object, adjacent but distinct, will have
for images two spots whose overlap can prevent one from distinguishing the two
image points: the details are then not resolved.
[0005] According to the Abbe diffraction limit, the smallest separation d of points that
the optical microscope can resolve, known as the diffraction limit, is stated as

d = 0.51 · λ / NA

where λ is the wavelength of the illumination source, and NA is the numerical aperture of
the objective lens of the optical microscope. In general, the lowest value of the
diffraction limit obtainable with a conventional objective lens is 200 nanometers (nm).
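By way of illustration only, the limit can be checked numerically; a minimal sketch in plain Python, using the 0.51·λ/NA form consistent with the 176 nm figure quoted later in this description:

```python
# Abbe diffraction limit, d = 0.51 * lambda / NA (values from the description).
def diffraction_limit(wavelength_nm: float, na: float) -> float:
    """Smallest resolvable separation, in nanometres."""
    return 0.51 * wavelength_nm / na

# Illumination at 500 nm through a high-NA objective (NA = 1.45) gives
# approximately the 176 nm quoted later in the description.
limit = diffraction_limit(500.0, 1.45)
```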
[0006] There are known processes for producing images with higher resolution than that
allowed by simple use of diffraction-limited optics: for example, Stimulated
Emission Depletion microscopy (STED), Spatially Structured Illumination Microscopy
(SSIM), Photoactivated Localization Microscopy (PALM) and Stochastic Optical Reconstruction
Microscopy (STORM).
[0008] A common thread in these techniques is that they are able to resolve details beyond
the diffraction limit; for example, in PALM this is achieved by switching fluorophores
on and off sequentially in time, so that the signals can be recorded consecutively.
[0009] Unfortunately, these super-resolution methods require purchasing expensive optical
platforms (e.g. more than 1 M€ for STED) and/or require further significant post-signal
data treatment, both being beyond the resources of most cellular biology laboratories.
[0010] There is therefore a need for a super-resolution imaging method that is easier to implement.
[0011] According to a first aspect, the invention proposes an imaging method for obtaining
a super-resolution image of an object as mentioned here above, said method being characterized
in iterating the following set of steps :
- capturing, by the digital camera, a first image of the target region currently illuminated
by the focused illumination beam;
- extracting, from the captured first image, a first block of pixel values provided
by a sub-matrix of the matrix of sensors;
- storing said first block of pixel values as a first block of pixel values of the super-resolution
image;
- displacing, by a sub-diffraction limited distance, the support plate by the displacement
block along one of the displacement axes;
- capturing, by the digital camera, a second image of the target region currently illuminated
by the focused illumination beam;
- extracting, from the captured second image, a second block of pixel values provided
by said sub-matrix of the matrix of sensors;
- storing said second block of pixel values as a second block of pixel values of the
super-resolution image, said second block of pixel values being placed right next
to said first block of pixel values in the super-resolution image along the image
axis corresponding to said one displacement axis.
[0012] This method, analogous to the aforementioned techniques, provides a simple way to
obtain a super-resolution image of an object with limited calculations and treatments.
[0013] According to some embodiments of the invention, an imaging method for obtaining a
super-resolution fluorescence image of an object includes further the following features:
- the sub-matrix comprises only one sensor;
- a sensor of the matrix is selected as sensor of the sub-matrix in a preliminary step
only if said sensor captures a part of the image of the focused illumination beam
on the support plate;
- the sensor(s) of the matrix is/are selected as sensor(s) of the sub-matrix in a preliminary
step as the one(s) among the sensors of the matrix capturing a part of the image of
the focused illumination beam on the support plate wherein the image is the most in
focus;
- the method comprises a step of applying a background removal to the pixel values of
the super-resolution image and/or a step of cropping said pixel values that
are outside at least one specified range of upper intensity values, in order to remove,
from the pixel values related to an individual fluorophore, the overlapping intensities
of neighbouring fluorophores;
- the object includes at least one fluorescence source, and the illumination source
is adapted to produce an illumination beam for generating fluorescence from the illuminated
object, the image of the target region captured by the digital camera being an image
of the fluorescence generated by the illuminated object, the obtained super-resolution
image being a super-resolution fluorescence image.
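By way of illustration only, the background-removal and intensity-cropping features listed above can be sketched as simple per-pixel operations; the 30% flat-background fraction is taken from the example used later in this description, not a prescription:

```python
def remove_flat_background(pixels, fraction=0.30):
    """Subtract a flat background equal to `fraction` of the peak intensity,
    clamping at zero, so that the overlapping tails of neighbouring
    fluorophores are suppressed."""
    peak = max(pixels)
    floor = fraction * peak
    return [max(0.0, p - floor) for p in pixels]

def crop_upper_range(pixels, low, high):
    """Keep only pixel values inside a specified range of upper intensity
    values; values outside the range are zeroed."""
    return [p if low <= p <= high else 0.0 for p in pixels]

# A small row of intensities: peak 100, floor 30% of peak = 30.
cleaned = remove_flat_background([10.0, 40.0, 100.0, 35.0, 5.0])
```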
[0014] According to a second aspect, the invention proposes a system for obtaining a super-resolution
image of an object, said imaging system including a controller and an optical microscope,
said optical microscope including :
- a support plate for bearing an object to be imaged,
- an illumination source adapted to produce an illumination beam for illuminating the
object,
- an optical element for focusing the illumination beam on the support plate, the section
of the object on the support plane currently illuminated by the focused illumination
beam being called hereafter the target region,
- a digital camera including a matrix of sensors for capturing an image of the target
region, each sensor providing for a respective pixel value, and
- a displacement block for displacing the support plate relative to the focused illumination
beam and to the digital camera along at least two displacement axes among three axes
x, y, z perpendicular to each other, wherein the axes x and y define the plane of
the support plate, said at least two displacement axes defining two corresponding
perpendicular image axes of the super-resolution image;
said imaging system being characterized in that the controller is adapted for iterating
the following set of operations: commanding the digital camera to capture a first
image of the target region currently illuminated by the focused illumination beam;
extracting, from the captured first image, a first block of pixel values provided
by a sub-matrix of the matrix of sensors; storing said first block of pixel values
as a first block of pixel values of a super-resolution image of the object; then commanding
the displacement block to displace, by a sub-diffraction limited distance, the support
plate along one of the displacement axes; then commanding the digital camera to capture
a second image of the target region currently illuminated by the focused illumination
beam; extracting, from the captured second image, a second block of pixel values provided
by said sub-matrix of the matrix of sensors; and storing said second block of pixel
values as a second block of pixel values of the super-resolution image, said second
block of pixel values being placed right next to said first block of pixel values
in the super-resolution image along the image axis corresponding to said one displacement
axis.
[0015] The present invention is illustrated by way of example, and not by way of limitation,
in the figures of the accompanying drawings and in which like reference numerals refer
to similar elements and in which:
- Figure 1 shows a schematic lateral view of a system for obtaining a super-resolution
fluorescence image of an object in an embodiment of the invention;
- Figure 2 is a representation of a top view of the platform supporting the sample;
- Figure 3 is a representation of a bottom view of the camera sensor matrix;
- Figure 4 is a view of a super-resolution image obtained according to an embodiment
of the invention;
- Figure 5 represents steps of a method for obtaining a super-resolution fluorescence
image of an object in an embodiment of the invention;
- Figure 6 is a schematic view of an ordinary fluorescence scanning microscope;
- Figure 7 is a schematic view of an ordinary confocal fluorescence scanning laser microscope;
- Figure 8 is a schematic view of a fluorescence scanning microscope according to the
invention;
- Figure 9 shows the cross sections of the normalized intensity in the detector plane
for two single point fluorescence sources.
[0016] Same references used in different figures correspond to similar referred elements.
Ordinary fluorescence scanning microscopy
[0017] In Ordinary Fluorescence Scanning Microscopy (OFSM), the laser light is focused into
a diffraction-limited spot exciting the fluorescence of the objects that are inside
that spot. The light emitted as fluorescence is collected by the photo-detector.
Then, the laser beam scans the surface of the sample, usually by moving the sample
with an x-y-(z) translation stage relative to the laser beam. The final fluorescence
image is reconstructed from the step-by-step measured fluorescence intensities. A generalized
scheme of the OFSM technical set-up is shown in Figure 6, showing a fluorescence scanning
microscope including a sample plane 31, a diffraction-limited laser spot 32, an optical
element 33 such as filter cubes and/or mirrors, a detector plane 34, and a diffraction-limited
laser spot's image 35 on the detector plane.
[0018] Assuming that there are two single point (infinitely small) sources of fluorescence
light on the sample plane, that the wavelength of light is 500 nm and that the numerical
aperture (NA) of our system is 1.45, the lateral resolution of such a system
in ideal conditions is approximately equal to 176 nm (0.51 × 500 nm / 1.45), based on the
Abbe diffraction limit. Note that, as a general rule, the light emitted by two independent
fluorescence sources will not be coherent. The image produced by these two objects
on the detector plane will consist of two circular light spots with surrounding concentric
circles of much lower intensity (called Airy patterns). Thus each of these fluorescence
sources will produce a typical diffraction image of a single point source on the
detector plane, comprising a central bright part and surrounding concentric rings.
[0019] The main intensity of the fluorescence is accumulated inside the central bright
part, which in general can be nominally described by a Gaussian distribution. In the
following description, the influence of the concentric rings is neglected and
only the central part is taken into account. Moreover, in OFSM, small-size detectors
are often used, so mainly the central part of the image is collected
for the step-by-step fluorescence image reconstruction.
[0020] The collected signal for a single scanning step in case of OFSM will be the integrated
fluorescence intensity of the total fluorescence signal hitting the detector while
measuring that point. In the ideal case it is only the centre of the diffraction image.
In the considered assumption (wavelength of light of 500 nm, NA = 1.45) and under ideal
experimental conditions, the cross section of the central part of the collected signal
can be represented as a Gaussian with a full width at half maximum (fwhm) of 176
nm. Consequently, the total signal intensity for a single point in the future fluorescence
image is the signal limited by a fwhm = 176 nm Gaussian distribution. In order to determine
the resolution of the system in these ideal conditions, the behaviour of two such Gaussian
distributions can be checked in order to determine when they can no longer be distinguished
as individual objects.
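By way of illustration only, this distinguishability check can be sketched numerically: sum two Gaussians of fwhm 176 nm and test whether a dip remains midway between the peaks. The dip criterion used here is a simplification of the second-derivative test mentioned below, under the running assumptions of the description:

```python
import math

def gaussian(x, mu, fwhm):
    """Unit-peak Gaussian of given full width at half maximum."""
    sigma = fwhm / (2.0 * math.sqrt(2.0 * math.log(2.0)))
    return math.exp(-((x - mu) ** 2) / (2.0 * sigma ** 2))

def has_dip(separation_nm, fwhm=176.0):
    """True if the summed profile of two point sources still shows a local
    minimum midway between them, i.e. the pair is (barely) resolvable."""
    total = lambda x: gaussian(x, 0.0, fwhm) + gaussian(x, separation_nm, fwhm)
    mid = total(separation_nm / 2.0)   # intensity midway between the sources
    peak = total(0.0)                  # intensity at one source position
    return mid < peak

# Two fwhm = 176 nm Gaussians merge into a single hump at small separations
# and only develop a dip at larger ones.
resolved_at_250 = has_dip(250.0)
resolved_at_100 = has_dip(100.0)
```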
[0021] In order to achieve this, one can examine the second derivative of the overlapping
Gaussians (which is the more precise method). In a more simplistic approach, one can
look at how the regions of fluorescence signal registration overlap. It is
intuitive that a normal OFSM microscope will never reach the above theoretical resolution
of 176 nm (unless the detector is small and rightly positioned). In addition, it can
be seen from the cross sections of two single point sources on the detector plane,
for the OFSM registration conditions, that there is no simple way to resolve two single
point sources in the ideal case for OFSM even at a 200 nm distance.
Confocal fluorescence scanning laser microscopy
[0022] The main difference of confocal fluorescence scanning laser microscopy from
the aforementioned OFSM is the insertion of an additional small-size pinhole which
removes/blocks part of the contamination due to the presence of the additional
rings of the Airy disk, so that the detector receives only the central part of the
fluorescence that was excited by the incident laser beam. As a general rule, the best
resolution is obtained when one uses a pinhole equal to one Airy unit.
[0023] A schematic confocal fluorescence scanning laser microscope is shown in figure 7
including an additional pinhole 36 compared to the microscope of figure 6 and thereby
providing a diffraction limited laser spot's image 39 on the detector plane 34. As
shown, the diffraction limited laser spot's image 39 is only a section of the diffraction
limited laser spot's image 35 obtained with the microscope of figure 6.
[0024] The aforementioned example of two single point fluorescence sources is now re-examined
as observed with a one Airy unit pinhole inserted in the optical path (the bigger
the pinhole, the lower the resulting image quality and resolution). The registered
intensity of the signal will be mainly the integral intensity of the area limited
between bars, where the distance between the bars equals the fwhm. Therefore, in
contrast with OFSM, two single point fluorescence sources lying
200 nm apart should be easily resolvable in a confocal regime with a 1 Airy unit pinhole.
Thus, the theoretically predicted resolution can be achieved.
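By way of illustration only, the "integral intensity between bars" argument can be checked numerically: integrating each source's Gaussian only over a window of width fwhm centred on one source strongly favours that source over its 200 nm neighbour. This is a sketch under the running assumption of fwhm = 176 nm:

```python
import math

FWHM = 176.0
SIGMA = FWHM / (2.0 * math.sqrt(2.0 * math.log(2.0)))

def gaussian(x, mu):
    return math.exp(-((x - mu) ** 2) / (2.0 * SIGMA ** 2))

def window_integral(mu_source, centre, half_width=FWHM / 2.0, n=2000):
    """Crude midpoint-rule integral of a source's Gaussian over a detection
    window [centre - half_width, centre + half_width]; accuracy is ample
    for an illustration."""
    a, b = centre - half_width, centre + half_width
    dx = (b - a) / n
    return sum(gaussian(a + (i + 0.5) * dx, mu_source) for i in range(n)) * dx

# Two sources 200 nm apart, each read through its own fwhm-wide window:
own = window_integral(0.0, 0.0)      # signal from the on-axis source
leak = window_integral(200.0, 0.0)   # contamination from the neighbour
contrast = own / leak
```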
Super-resolution microscopy according to the invention
[0025] The present invention goes some steps further:
- i) For the image reconstruction a single pixel (or a single block of pixels) is used
that is located within the central area of the diffraction limited spot on a detector
plane and which is smaller than the diffraction limited spot on the detector plane.
- ii) A (flat) background is removed in an embodiment. Consequently, the bulk of the
fluorescence signal, which originates from the object lying on the geometrical optical
axis of our pixel, is used for image reconstruction.
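By way of illustration only, steps i) and ii) combine into a single per-scan-position reading; the frame, the pixel coordinates and the background level below are illustrative assumptions:

```python
def read_position(frame, n0, m0, background):
    """Step i): keep only the single pixel (n0, m0) lying inside the central
    area of the diffraction-limited spot's image; step ii): remove a flat
    background, clamping at zero."""
    value = frame[n0][m0]
    return max(0.0, value - background)

# Illustrative frame: the chosen pixel sits at (1, 1); a flat background
# of 30 intensity units is subtracted.
frame = [[12.0, 18.0, 11.0],
         [17.0, 95.0, 20.0],
         [10.0, 19.0, 13.0]]
reading = read_position(frame, 1, 1, background=30.0)
```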
[0026] Employing these two steps, a significantly higher resolution is achieved while avoiding
the use of complicated data acquisition methods and subsequent analytical procedures.
Obviously, not only one pixel is available for image reconstruction: a few
pixels can be used successfully, producing shifted images available for averaging.
Moreover, the possibility exists of three-dimensional picture recovery from a single
scan, because every pixel probes a different geometric point within the three-dimensional
object under investigation. The generalized scheme of the technical set-up
is shown in Figure 8, showing a schematic microscope according to the invention with
a single pixel 37, inside the diffraction-limited laser spot's image 35 in the detector
plane 34, used for reconstructing the final image, and also showing a geometrical projection
38 of the detector's pixel 37 on the sample plane 31.
[0027] In addition, scanning with significantly precise and small steps is a requisite.
However, the step size can be relatively large in cases where images obtained from
multiple pixels are used for averaging (with the necessary corrections), therefore
making the overall scanning faster.
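By way of illustration only, averaging images from several pixels, each shifted by its known geometric offset, can be sketched as follows; the profiles and offsets are illustrative, and the "necessary corrections" of the text are reduced here to an integer re-alignment:

```python
def average_shifted(images, offsets):
    """Average several scan images, re-aligning each one by its pixel's known
    integer offset (in scan steps) before averaging.

    images  : list of equally long rows of intensities
    offsets : per-image integer shift along the scan axis
    """
    length = len(images[0])
    out = []
    for i in range(length):
        vals = []
        for img, off in zip(images, offsets):
            j = i + off
            if 0 <= j < length:        # skip samples falling outside the image
                vals.append(img[j])
        out.append(sum(vals) / len(vals))
    return out

# Two copies of the same profile; in the second, each sample point appears
# one step later, and position 0 holds an edge artefact.
a = [0.0, 1.0, 4.0, 1.0, 0.0]
b = [9.0, 0.0, 1.0, 4.0, 1.0]
merged = average_shifted([a, b], offsets=[0, 1])
```

After re-alignment the edge artefact is discarded and the averaged row reproduces the underlying profile.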
[0028] For example, looking into the aforementioned case in an example of fluorescence
microscopy, and referring to Figure 9 showing the cross sections of the normalized
intensity in the detector plane for two single point fluorescence sources, it is
possible to resolve two single point sources of fluorescence lying at a distance of
100 nm by applying a "detector" pixel size of ∼50 nm (here assumed to be circular)
and a 30% flat background removal. Greater background removal is often better, but it
is always a trade-off with the signal-to-noise ratio. Smaller pixel sizes allow lower
background removal together with better resolution and signal-to-noise ratio.
[0029] Thus the present invention is analogous to other super-resolution imaging techniques
but not based on them.
[0030] This present invention is able to obtain super-resolution images under high magnification
that is comparable with other known techniques.
[0031] This present invention is able to increase the resolution of any image, irrespective
of the value of magnification of the sample. Hence, this present invention is able
to improve images at low-magnification hitherto considered to be not suitable for
high-resolution imaging.
[0032] Herebelow a particular embodiment of the invention is described.
[0033] A system 20 for obtaining a super-resolution fluorescence image of an object in an
embodiment of the invention is schematically represented in Figure 1.
[0034] The system 20 includes an optical microscope 21.
[0035] The optical microscope 21 includes an illumination source 1, a spatial filter 2,
a dichroic mirror 3, an objective 4, a platform 6, an emission filter 7, a combination
8 of lenses, for example achromatic, and a digital camera 9.
[0036] Optionally the optical microscope 21 further includes a light filter 1a.
[0037] The optical microscope 21 can be a microscope usually used for capturing fluorescence
images, such as the Nikon Ti Eclipse® for example.
[0038] The system 20 further includes a controller 10 and a motion block 11.
[0039] The sample 5 to be imaged is adhered to a standard coverslip, or other suitable support,
that is fixed rigidly by a sample holder on the platform 6.
[0040] The sample's surface on the platform 6 lies on a plane defined by the perpendicular
axes x, y.
[0041] A top view of the sample on the platform 6 is represented in Figure 2.
[0042] According to classical fluorescence imaging, tiny fluorescence sources, for example
fluorophores based upon the one-photon fluorescence physical phenomenon, have been
attached to the sample 5. These fluorescence sources can be intrinsic to the sample.
[0043] The illumination source wavelength is chosen to induce fluorescence from the attached
fluorescence sources: the fluorescence sources, when lit by the illumination
source, absorb the energy of the illumination source's wavelength(s) and emit fluorescence
light, whose wavelength is longer than the illumination source's wavelength(s).
[0044] The controller 10 is adapted for defining displacement commands and providing the
displacement commands to the motion block 11.
[0045] The displacement commands are defined by the controller 10 as a function of a predetermined
Region of Interest (ROI) of the sample 5. A displacement command commands the motion
block 11 to displace the platform 6 and specifies the commanded displacement along
the axes x, y, z, z being an axis perpendicular to the axes x, y.
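By way of illustration only, the generation of displacement commands covering a rectangular ROI can be sketched as follows; a serpentine raster is assumed here for the sake of the example (the description only specifies stepping along x to the ROI limit and then stepping once along y):

```python
def raster_commands(width_nm, height_nm, step_nm):
    """Generate (axis, distance) displacement commands covering a rectangular
    ROI in a serpentine raster: N steps along x, one step along y, N steps
    back along x, and so on."""
    nx = int(width_nm // step_nm)
    ny = int(height_nm // step_nm)
    commands = []
    direction = 1
    for row in range(ny + 1):
        for _ in range(nx):
            commands.append(('x', direction * step_nm))
        if row < ny:
            commands.append(('y', step_nm))
            direction = -direction     # scan the next row in reverse
    return commands

# A 120 nm x 60 nm ROI scanned with 30 nm steps: 3 rows of 4 x-steps
# separated by 2 y-steps.
cmds = raster_commands(120, 60, 30)
```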
[0046] The controller 10 is further adapted to trigger the capture of an image by the digital
camera 9 by sending a capture command to the digital camera 9, and the controller
10 is adapted to construct the super-resolution image of the sample ROI from successive
fluorescence images captured by the digital camera 9, as detailed hereafter.
[0047] In an embodiment, the controller 10 includes a microprocessor and a memory (not represented).
The memory stores software instructions which, when executed by the microprocessor,
produce the operation of the controller 10.
[0048] The digital camera 9 for example includes a two-dimensional (2D) matrix of sensors
wherein each sensor is adapted to detect a local fluorescence intensity during a determined
time.
[0049] The digital camera 9 is adapted to capture an image upon reception of a capture command
from the controller 10.
[0050] When the digital camera 9 is capturing an image of a sample currently illuminated
by the illumination source 1, each sensor detects a respective local fluorescence
intensity and the camera 9 then determines, for each pixel corresponding to each sensor,
a respective pixel intensity as a function of the local fluorescence intensity detected
by the sensor.
[0051] The digital camera 9 is then adapted for providing an image composed of a matrix
of pixels (the position of a pixel in the matrix of pixels is the same as the position
of the corresponding sensor in the matrix of sensors).
[0052] Figure 3 represents a bottom view of the sensor matrix of the camera 9. Said sensor
matrix has L rows of sensors parallel to the axis x and I columns of sensors parallel
to the axis y. This view of Figure 3 is also considered as representing the matrix
of the L x I pixels of an image captured by the camera 9, wherein I is the size of
the matrix along an image dimension X corresponding to the axis x, and L is the size
of the matrix along an image dimension Y corresponding to the axis y.
[0053] The motion block 11 is adapted for receiving displacement commands from the controller
10 and for, upon reception of such a command defining a commanded displacement, moving
the platform 6 by the commanded displacement.
[0054] The operation of the optical microscope 21 is explained hereafter.
[0055] The illumination source 1 emits monochromatic light.
[0056] If necessary, as a function of the wavelength inducing fluorescence from the sample
(excitation wavelength), the emitted light is cleaned by the optional LASER filter
1a in order to reject, from the emitted light, the wavelengths distinct from the excitation
wavelength.
[0057] Collimation of the light is obtained by use of the spatial filter 2.
[0058] The exciting light beam of thus collimated incident light passes through the dichroic
mirror 3 (as known, the dichroic mirror 3 lets the exciting beam fall on the platform
6 and reflects fluorescence light to the digital camera 9) and through the objective
4 in order to produce a focused diffraction-limited spot 14 on the sample 5 supported
by the platform 6.
[0059] The fluorescence light, emitted by the area of the sample 5 covered by the spot 14,
returns along the incident optical path, via the objective 4, which magnifies the
sample image, to the dichroic mirror 3, and continues onwards to the emission filter
7 that keeps only the fluorescence wavelength(s) of interest.
[0060] The combination 8 of (optional) lenses further magnifies the fluorescence light provided
onto the camera sensor matrix 9.
[0061] The sensor matrix of the camera 9 is positioned relative to the spot 14 so that
the fluorescence light emitted by the sample area in the spot 14 is collected towards
the sensor matrix.
[0062] As known, fluorescence measurement requires that a pulse of incident light arrives
at the sample. This is obtained by placing a shutter (not shown) at the exit of the
source 1.
[0063] The sample surface to be imaged has to be in focus along the axis z.
[0064] Such focus is preliminarily obtained, for example, by a piezo device attached to the
objective 4, or by the controller 10 commanding a displacement of the platform 6 along
the axis z via the motion block 11.
[0065] In an embodiment, a LASER (Light Amplification by Stimulated Emission of Radiation)
is used as the illumination source 1, either of constant intensity (Continuous Wave
- CW) or pulsed.
[0066] In an embodiment, the magnification of the objective 4 for the fluorescence light
is in the range that is defined by the specifications of the digital camera and the
zoom of the combination 8 of lenses as required.
[0067] The digital camera 9 is for example a 2D detector, for example of the (Electron Multiplying)
Charge Coupled Device ((EM)CCD) type, or a CMOS (Complementary Metal Oxide Semiconductor) type.
The 2D detector may be replaced by a 1D detector, or a point detector, if required.
In this latter case, B0 is equal to I x L, in the description described below.
[0068] The numbers L and I are for example in the range [128, 512].
[0069] In an embodiment, each EMCCD pixel is a square of s x s µm², where s is in the range [6 µm, 15 µm].
[0070] For example, when using an objective of NA = 1.45, the diameter of the spot 14 in the
plane (x, y) is in the range [150 nm, 350 nm].
[0071] These hereabove values are of course given only as examples and depend on
the type of camera used.
[0072] In an embodiment, a combination of illumination sources may be used such that the
collimated beams all have the same trajectory when arriving at the dichroic mirror
3. In order to remove any potential artefacts due to different polarisations of the laser
beams, they are depolarised before being incident on the dichroic mirror 3. Firstly, the
polarisation of all the laser beams is orientated in the same direction by the linear polarizer
1b. Secondly, an assembly 1c of a circular polarizer coupled to an achromatic depolarizer
is inserted into the optical path immediately after the spatial filter 2. Alternatively,
if the combination of wavelengths and laser output powers is too diverse, each laser
beam will have its own spatial filter 2 prior to arrival at the depolarizer assembly 1c. The
beams will thus converge after the spatial filters 2. If the use of fully depolarised
beams is not considered to be required, this option may be removed from the optical
path, leaving only the spatial filter(s) 2.
[0073] According to an embodiment of the invention, the following steps are implemented
based on the system 20.
[0074] In the considered embodiment, the spot 14 does not move relative to the camera 9
during the construction of a super-resolution image according to the invention.
[0075] In a first step 101, a block of sensors B0 is chosen among the I x L sensors of the
camera 9. The number of sensors n inside B0 is strictly less than I x L.
[0076] The sensors of the block B0 form for example a square in the sensor matrix.
[0077] More generally, the sensors of the block B0 form a sub-matrix which may be of any
shape (square, rectangle, oval, linear array, single point, etc.). Hereafter, we shall
consider the case of the block B0 being a square of dimension n.
[0078] For example, n is equal to 1, or any other size that is deemed suitable. The value
of n is for example less than 4, 16 or 25.
[0079] The block of sensors B0 is further chosen so that the block of pixels corresponding
to the block B0 of sensors is located inside the image, referenced A in Figure 3, of
the spot 14 in any captured image. It means that the size of the block of pixels
corresponding to the block of sensors B0 is smaller than the size of the image of the
diffraction spot 14 (the block of pixels corresponding to the block of sensors B0 is
indeed inside the image of the diffraction spot 14).
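By way of illustration only, the preliminary choice of B0 can be sketched as finding, in a reference image of the focused spot, the sensor receiving the centre of the spot's image, e.g. the brightest one. This is an illustrative criterion: the description only requires the block to lie inside the spot's image A:

```python
def pick_central_sensor(reference_frame):
    """Return (row, col) of the brightest sensor in a reference image of the
    focused spot: a simple way to choose a 1-sensor block B0 lying inside
    the spot's image A."""
    best = (0, 0)
    for i, row in enumerate(reference_frame):
        for j, value in enumerate(row):
            if value > reference_frame[best[0]][best[1]]:
                best = (i, j)
    return best

# Illustrative reference image of the spot: the centre is the brightest.
spot_image = [[1, 2, 1],
              [2, 9, 3],
              [1, 3, 2]]
n0, m0 = pick_central_sensor(spot_image)
```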
[0080] In the following, n = 1 is considered: B0 comprises only the (n0, m0) sensor of the
matrix of sensors, i.e. the sensor of position (n0, m0) in the matrix of sensors.
[0081] In a step 102, a ROI of the sample 5 is defined, for example the ROI defined by the
rectangle O1, O2, O3, O4 in the plane of the Figure 2's top view, and the definition
of this ROI is provided to the controller 10.
[0082] For example, the ROI bounds an entire prokaryote, or a small subcellular structure,
e.g. actin filaments, in a larger biological sample 5.
[0083] A value of distance step is defined, called d hereafter. d is for example chosen
by an operator.
[0084] The value of d is strictly smaller than the diffraction limit of the optical microscope
21:

d < 0.51 · λ / NA

where λ is the wavelength of the illumination source 1, and NA is the numerical aperture
of the lens in the objective 4.
[0085] For example, in case the diffraction limit is 200 nm, d is chosen in the range [15 nm, 50 nm].
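By way of illustration only, the choice of d fixes the pixel count of the super-resolution image for a given ROI; a small helper using the 200 nm limit quoted above, with an illustrative ROI size:

```python
def super_resolution_size(roi_width_nm, roi_height_nm, d_nm, limit_nm=200.0):
    """Number of super-resolution pixels (columns, rows) for a ROI scanned
    with step d; d must stay strictly below the diffraction limit."""
    if not (0 < d_nm < limit_nm):
        raise ValueError("step d must be strictly below the diffraction limit")
    return (int(roi_width_nm // d_nm) + 1, int(roi_height_nm // d_nm) + 1)

# A 2 um x 1 um ROI scanned with d = 25 nm gives an 81 x 41 pixel image,
# i.e. an 8x oversampling of the 200 nm limit.
cols, rows = super_resolution_size(2000.0, 1000.0, 25.0)
```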
[0086] Then the controller 10 commands the progressive scanning of the ROI by steps
of distance d along the axis x, or along the axis y, from an initial point of the
ROI, in reiterated steps 103 and 104 as detailed hereafter.
[0087] At first, an initial displacement of the platform 6 is determined by the controller
10 in regard to the current position of the platform 6.
[0088] In the considered embodiment, the initial displacement of the platform 6 is determined
by the controller 10 in order to firstly direct the focused diffraction-limited spot
14 on the point O1.
[0089] A displacement command is provided by the controller 10 to the motion block 11, indicating
the determined initial displacement.
[0090] Upon reception of this displacement command, the platform 6 is positioned by the
motion block 11 such that the spot 14 covers the point O1.
[0091] In a first iteration of step 103, the controller 10 triggers the capture of an image
by sending to the camera 9 a capture command while the spot 14 covers the point O1.
[0092] Upon reception of this capture command, the camera 9 captures an image as a function
of the currently fluorescent light collected by the sensor matrix.
[0093] The controller 10 extracts, from the captured image, the block of pixels corresponding
to B0 at this first iteration. Thus, in the considered embodiment, it extracts the intensity
of the pixel P(n0,m0) corresponding to the (n0,m0) sensor (i.e. the pixel of position
(n0,m0) in the matrix of pixels).
[0094] This extracted intensity is called Int(n0,m0)1,1.
[0095] Said intensity Int(n0,m0)1,1 is stored as the intensity of the pixel of position (1,1)
in the matrix of pixels of the super-resolution image 22, referring to Figure 4.
[0096] Then, in a first iteration of step 104, the controller 10 commands to the motion block
11 a displacement of d along the axis x from the current position, so that the
spot 14 is displaced relative to the sample 5 on the platform 6, along x in the direction
of the limit of the ROI defined by [O2, O3].
[0097] And the motion block 11, upon reception of the displacement command, displaces the
platform 6 according to the received displacement command.
[0098] Then, in a second iteration of step 103, the controller 10 triggers the capture of
an image by sending to the camera 9 a capture command.
[0099] Upon reception of this capture command, the camera 9 captures an image as a function
of the fluorescent light currently collected by the sensor matrix (emitted from an
area of the sample shifted by the distance d from O1 along the axis x).
[0100] The controller 10 extracts, from the captured image, the block of pixels corresponding
to B0 at this second iteration of step 103; thus, in the considered embodiment, it extracts
the intensity of the pixel P(n0,m0) corresponding to the (n0,m0) sensor.
[0101] This intensity is called Int(n
0,m
0)
1,2.
[0102] Said intensity Int(n0,m0)1,2 is stored as the intensity of the pixel of position (1,2) in the matrix of pixels of the super-resolution image 22, referring to Figure 4.
[0103] In the super-resolution image 22, this pixel is adjacent, along the axis X, to the pixel of intensity Int(n0,m0)1,1 previously stored.
[0104] Then in a second iteration of step 104, the controller 10 commands the motion block to perform a displacement of d along the axis x from the current position, so that the spot 14 is displaced relative to the sample 5 on platform 6, along x towards the limit of the ROI defined by (O2, O3).
[0105] The motion block 11, upon reception of the displacement command, displaces the platform 6 according to the received displacement command.
[0106] The group of steps 103 and 104 is reiterated along the axis x until the Nth iteration, wherein the position of the platform is such that the spot 14 is directed to the point O2.
[0107] At said Nth iteration of step 103, the intensity Int(n0,m0)1,N of the pixel P(n0,m0) is stored as the intensity of the pixel of position (1,N) in the matrix of pixels of the super-resolution image 22. In the super-resolution image 22, this pixel is adjacent, along the axis X, to the pixel of intensity Int(n0,m0)1,N-1 previously stored.
[0108] The controller 10, detecting that the limit of the ROI along the axis x has been reached, commands to the motion block 11, in the Nth iteration of step 104, a displacement of d, this time along the axis y, from the current position, so that the spot 14 is displaced from O2 relative to the sample 5 on platform 6, along y towards the limit of the ROI defined by (O3, O4).
[0109] The motion block 11, upon reception of the displacement command, displaces the platform 6 according to the received displacement command.
[0110] Then at the N+1th iteration of step 103, the intensity Int(n0,m0)2,N of the pixel P(n0,m0) is stored as the intensity of the pixel of position (2,N) in the matrix of pixels of the super-resolution image 22.
[0111] In the super-resolution image 22, this pixel is adjacent, along the axis Y, to the pixel of intensity Int(n0,m0)1,N previously stored.
[0112] Then at the N+1th iteration of step 104, a displacement of d along the axis x from the current position, so that the spot 14 is displaced relative to the sample 5 on platform 6, along x towards the limit of the ROI defined by (O1, O4), is commanded by the controller 10 to the motion block 11.
[0113] The motion block 11, upon reception of the displacement command, displaces the platform 6 according to the received displacement command.
[0114] Successive iterations of the group of steps 103 and 104 are applied, displacing the spot 14 by steps of d along the axis x towards the side (O1, O4) of the ROI, thus constructing the second rank of pixels of the super-resolution image 22, from Int(n0,m0)2,N to Int(n0,m0)2,1, reached at the 2Nth iteration of step 103.
[0115] Then, similarly to the explanations here above, the controller 10 detects that the limit of the ROI along the axis x has been reached, and consequently, in the 2Nth iteration of step 104, the controller 10 commands to the motion block 11 a displacement of d along the axis y from the current position, so that the spot 14 is displaced from the current position along y towards the limit of the ROI defined by (O3, O4).
[0116] The motion block 11, upon reception of the displacement command, displaces the platform 6 according to the received displacement command.
[0117] Similar iterations are processed until the whole ROI has been scanned by the spot 14, by successive displacements of distance d each.
[0118] In the considered case wherein the ROI is a rectangle, the resulting super-resolution image 22 comprises M ranks and N columns of pixels, wherein N = E(∥O1O2∥/d) + 1 and M = E(∥O1O4∥/d) + 1, wherein E is the "integer part" function and ∥PP'∥ is the function providing the distance between the two points P and P'.
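Assuming the spot visits both end points of each side of the rectangular ROI, the pixel counts of the super-resolution image can be computed as follows (a minimal sketch; the corner coordinates and helper name are illustrative, not part of the disclosure):

```python
import math

def pixel_counts(o1, o2, o4, d):
    """Number of columns N and ranks M of the super-resolution image
    for a rectangular ROI with corners O1, O2, O4 and a sub-diffraction
    step d, using the integer-part function E:
    N = E(|O1O2|/d) + 1 and M = E(|O1O4|/d) + 1."""
    n = math.floor(math.dist(o1, o2) / d) + 1
    m = math.floor(math.dist(o1, o4) / d) + 1
    return n, m
```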
[0119] But in other embodiments, the ROI can have various shapes and the controller 10 is
adapted to scan the whole ROI and to construct the super-resolution image for this
ROI.
[0120] In an embodiment, the scan in a plane (x,y) through sub-diffraction displacements and the construction of the super-resolution image from successively extracted pixel blocks according to the invention is repeated for each z plane collected (i.e. for successive layers through the sample), thus constructing a 3D super-resolution image.
[0121] Here above, the provided super-resolution image corresponds to a scan in the plane defined by the axes x and y. In another embodiment, the scan and construction of the super-resolution image is achieved in a plane defined by the axes x and z, or by the axes y and z. The sub-diffraction displacements are then achieved along the considered axes x and z, or y and z, instead of the axes x and y as described here above.
[0122] In the detailed embodiment disclosed here above, the number n in the block B0 was considered to be equal to 1.
[0123] In the case wherein n is greater than 1, the process is similar.
[0124] Thus when a first image and a second image are captured in successive steps 103, the position of the platform 6 for the first image being separated from the position of the platform 6 for the second image by a displacement d less than the diffraction limit in a direction along the axis x (respectively y), the super-resolution image will include a first block of pixels, said first block of pixels being the block of pixels corresponding, in the first image, to the block B0 of n sensors. And the super-resolution image further includes, adjacent to said first block of pixels in the direction along the axis X (respectively Y), a second block of pixels, said second block of pixels being the block of pixels corresponding, in the second image, to the block B0 of n sensors, etc.: the super-resolution image is constructed from the blocks of pixels successively captured by the block B0 after each sub-diffraction displacement. After the series of images is collected and the fluorescence image is reconstructed as stated (steps 103-104), in some embodiments a further step of removing a (flat) background is implemented if required; the data of the fluorescence image are then, in an embodiment, cropped such that only a specified range of upper fluorescence intensity values is plotted. This allows one to better discriminate between individual fluorophores by removing or reducing the overlap of the fluorescence signal from neighbouring emitters.
[0125] For clarity, we consider that the fluorescence image which has been reconstructed from the acquisition protocol (steps 103-104) has a normalized intensity range from 0 (minimum) to 1 (maximum). In the simplest case, a simple background can be removed: for example, for each 2D sub-image of the super-resolution image, an estimate of the background noise intensity (estimated in an embodiment from an image obtained from a non-fluorescent area) is subtracted from the pixel intensities of the 2D sub-image. Then the resulting fluorescence image can be scaled from 1 to X%, where X represents a user-defined normalized intensity value less than 1: the intensity values below X% are replaced by a value equal to zero, and the intensity values V beyond X% are transformed into (1-X%)V.
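The background removal and top-X cropping described in the paragraph above can be sketched as follows; the function name is a hypothetical helper, and the code applies the transform exactly as the paragraph states it (values at or below X zeroed, values V above X mapped to (1-X)·V):

```python
import numpy as np

def crop_top_fraction(img, background, x):
    """Background removal and top-X cropping of a normalized image.

    img: normalized fluorescence image (values in [0, 1]);
    background: flat background intensity to subtract;
    x: user-defined normalized threshold (0 < x < 1).
    """
    # subtract the flat background estimate, keeping values in [0, 1]
    out = np.clip(img - background, 0.0, 1.0)
    # zero everything at or below x; map remaining values v to (1 - x) * v
    return np.where(out > x, (1.0 - x) * out, 0.0)
```

With X = 0.7 this keeps only the top 30% of the reconstructed signal, as in the example of paragraph [0126].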
[0126] This results in a plot that represents only the top X% (for example 30%) of the reconstructed
signal, where the different fluorophores are deemed not to greatly interfere with
each other, as in the example illustrated in Figure 9. The removal of the background,
in particular the value of X, is specific to each sample of interest. Such a value
is estimated in a preliminary stage.
[0127] Alternatively, a series of normalized intensity slices can be recreated, ranging from 1 to Xa, Xa to Xb, Xb to Xc, etc., where a, b, c, etc. represent user-defined intensity values.
[0128] The sample considered in the embodiment detailed here above is a biological sample, but the sample could be non-biological in another embodiment.
[0129] According to the embodiment described here above, the sample to be imaged is moved relative to a fixed beam of collimated light and to a fixed camera. But in another embodiment, the sample is fixed whereas the motion is applied to the assembly of the collimated beam and the camera.
[0130] The displacements can be applied stepwise or by constant motion.
[0131] In an embodiment, given the size n of the block B0, the sensor(s) of the block B0 in the step 101 are chosen in the following way: the sample 5 is displaced so that the spot 14 is directed at a test area of the sample 5 containing fluorescent beads (or a different test sample containing fluorescent beads is used). An image is captured by the camera 9. The sections of the image provided by all the blocks of n pixels inside the image of the spot 14 are analysed in order to determine the section that is the most in focus, by examining the quality of the signal and, for example, with reference to fluorescence properties of known samples. Often fluorescent microspheres (with defined diameters of between ca. 50 and 2000 nm), which provide a near perfect Gaussian distribution of fluorescence intensity (when not aggregated), are used to determine the section that is the most in focus. The block of sensors corresponding to the block of pixels with the best focus is then chosen as the block B0.
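The selection of the best-focus block can be sketched as below. The focus metric used here (local intensity variance of the candidate block) is an illustrative assumption standing in for the signal-quality and Gaussian-profile checks mentioned in the paragraph above; it is not prescribed by the disclosure.

```python
import numpy as np

def choose_block_b0(spot_image, n):
    """Return the top-left index (i, j) of the n x n block of sensors
    judged most in focus inside the captured image of the spot.

    Focus metric (an illustrative choice): local variance of the block,
    since a sharp, in-focus bead gives the highest local contrast.
    For n == 1 this degenerates to picking the brightest pixel.
    """
    h, w = spot_image.shape
    best_score, best_idx = -1.0, (0, 0)
    for i in range(h - n + 1):
        for j in range(w - n + 1):
            block = spot_image[i:i + n, j:j + n]
            score = block.var() if n > 1 else float(block[0, 0])
            if score > best_score:
                best_score, best_idx = score, (i, j)
    return best_idx
```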
[0132] This disposition prevents any additional scattered light from blemishing the super-resolution image.
[0133] The quality of the super-resolution image will be improved if the fluorescence sources of the sample 5 result in discrete sources of emission, and are not so concentrated that they result in a carpet of fluorescence intensity.
[0134] The method according to the invention was applied to the H-NS protein, a DNA-binding protein, tagged with the fluorophore GFP (Green Fluorescent Protein), in Escherichia (E.) coli cells, as sample. The images obtained according to the invention present a superior resolution compared to images obtained using a super-resolution technique called STORM, according to the results published in Wang et al., Science (2011) 333, 1445-1449.
[0135] The resolution of the resulting XY(Z) image is dependent on the increment d of the sub-diffraction-limited mechanical displacement, as well as on the pixel size of the 2D detector. For example, a resolution better than 40 nm, as defined by Donnert et al., Proc Natl Acad Sci USA. 2006 Aug 1;103(31):11440-5, is obtainable with an excitation wavelength of 488 nm with the following hardware: optical microscope (21) - Nikon Ti; objective lens (4) - Nikon 100x CFI Plan Apochromat; displacement block (11) - Physik Instrumente nanopositioning stage P-733; and digital camera (9) - Andor iXon Ultra 897.
[0136] Super-resolution according to this embodiment of the invention is basically achieved
by conjugating a normal epi-fluorescence microscope (installed in a vibration-free
environment) with a high-precision moving platform under the same experimental conditions
as classical fluorescence microscopy, and with the proper software to reconstruct
the images: it thus makes super-resolution accessible to a very large number of laboratories.
[0137] A particular embodiment has been described here above regarding fluorescence microscopy.
[0138] Of course, the invention is implementable for producing super-resolution images based
on different types of microscopy: for example, transmission, scattering, or absorption
microscopy.
[0139] In fluorescence microscopy the illumination light (excitation wavelength(s)) and the captured light (emission wavelength(s)) are not the same, whereas in transmission, T, microscopy and absorption, A, microscopy they are identical. The ability to transmit the light is expressed in terms of an absorbance, A, which is normally written as A = log10(I0/I), where I0 and I are defined as the intensity (power per unit area) of the illumination light (termed incident radiation) and the captured light (termed transmitted radiation), respectively. In scattering microscopy, the illumination light is reflected (elastically or inelastically) by the object and registered as captured light by the 2D detector.
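The absorbance relation A = log10(I0/I) amounts to a one-line computation (a minimal sketch; the function name is illustrative):

```python
import math

def absorbance(i_incident, i_transmitted):
    """Absorbance A = log10(I0 / I), with I0 the incident (illumination)
    intensity and I the transmitted (captured) intensity."""
    return math.log10(i_incident / i_transmitted)
```

For example, a sample transmitting 1% of the incident intensity has an absorbance of 2.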
[0140] Thus the invention enables an image of a sample to be obtained with an effective resolution better than the Abbe diffraction limit, by means of sub-diffraction length displacements in two (x, y) or three (x, y, z) dimensions while the object of study is illuminated with a focused light source. During data acquisition, the sample of study is housed on an xy(z)-stage which moves relative to a fixed beam of collimated light. Alternatively, the xy(z)-stage may be fixed in the x and y axes while the laser beam scans the object of study, by means of adaptive optics such as resonant scanning mirrors.
[0141] The invention exploits the inhomogeneous distribution of light that is emitted from
within a focused, diffraction-limited, spot of monochromatic irradiation.
1. An imaging method for obtaining a super-resolution image (22) of an object (5), based upon an optical microscope (21) adapted for capturing an image of the object, said optical microscope including :
a support plate (6) for bearing the object,
an illumination source (1) adapted to produce an illumination beam for illuminating
the object,
an optical element (4) for focusing the illumination beam on the support plate, the
section of the object on the support plane currently illuminated by the focused illumination
beam (14) being called hereafter the target region,
a digital camera (9) including a matrix of sensors for capturing an image of the target
region, each sensor providing for a respective pixel value and
a displacement block (11) for displacing the support plate relative to the focused
illumination beam and to the digital camera along at least two displacement axes among
three axes x, y, z perpendicular to each other, wherein the axes x and y define the
plane of the support plate, said at least two displacement axes (x, y) defining two
corresponding perpendicular image axes (X, Y) of the super-resolution image (22);
said method being characterized in iterating the following set of steps :
- capturing, by the digital camera, a first image of the target region currently illuminated
by the focused illumination beam (14);
- extracting, from the captured first image, a first block of pixel values provided
by a sub-matrix (B0) of the matrix of sensors;
- storing said first block of pixel values as a first block of pixel values of the super-resolution image;
- displacing, by a sub-diffraction limited distance, the support plate by the displacement block along one of the displacement axes;
- capturing, by the digital camera, a second image of the target region currently
illuminated by the focused illumination beam (14);
- extracting, from the captured second image, a second block of pixel values provided
by said sub-matrix (B0) of the matrix of sensors;
- storing said second block of pixel values as a second block of pixel values of the
super-resolution image, said second block of pixel values being placed right next
to said first block of pixel values in the super-resolution image along the image
axis (X, Y) corresponding to said one displacement axis (x, y).
2. An imaging method for obtaining a super-resolution image (22) of an object (5) according
to claim 1, wherein the sub-matrix (B0) comprises only one sensor.
3. An imaging method for obtaining a super-resolution image (22) of an object (5) according
to claim 1 or 2, wherein a sensor of the matrix is selected as sensor of the sub-matrix
(B0) in a preliminary step only if said sensor captures a part of the image of the focused
illumination beam (14) on the support plate (6).
4. An imaging method for obtaining a super-resolution image (22) of an object (5) according
to claim 3, wherein the sensor(s) of the matrix is/are selected as sensor(s) of the
sub-matrix (B0) in a preliminary step as the one(s) among the sensors of the matrix capturing a
part of the image of the focused illumination beam (14) on the support plate wherein
the image is the most in focus.
5. An imaging method for obtaining a super-resolution image (22) of an object (5) according
to any one of claims 1 to 4, comprising a step of:
- applying a background removal to the pixel values of the super-resolution image
and/or including a step of cropping said pixel values that are outside at least one
specified range of upper intensity values, in order to remove from the pixel values
related to an individual fluorophore the overlapping intensities of neighbouring fluorophores.
6. An imaging method for obtaining a super-resolution image (22) of an object (5) according
to any one of claims 1 to 5, wherein the object includes at least one fluorescence
source, and the illumination source (1) is adapted to produce an illumination beam
for generating fluorescence from the illuminated object, the image of the target region
captured by the digital camera (9) being an image of the fluorescence generated by
the illuminated object, the obtained super-resolution image being a super-resolution
fluorescence image.
7. An imaging system for obtaining a super-resolution image (22) of an object (5), said
imaging system including a controller (10) and an optical microscope (21), said optical
microscope (21) including :
- a support plate (6) for bearing an object to be imaged,
- an illumination source (1) adapted to produce an illumination beam for illuminating
the object,
- an optical element (4) for focusing the illumination beam on the support plate,
the section of the object on the support plane currently illuminated by the focused
illumination beam (14) being called hereafter the target region,
- a digital camera (9) including a matrix of sensors for capturing an image of the
target region, each sensor providing for a respective pixel value, and
- a displacement block (11) for displacing the support plate relative to the focused
illumination beam and to the digital camera along at least two displacement axes among
three axes x, y, z perpendicular to each other, wherein the axes x and y define the
plane of the support plate, said at least two displacement axes (x, y) defining two
corresponding perpendicular image axes of the super-resolution image (22);
said imaging system being characterized in that the controller (10) is adapted for iterating the following set of operations: commanding
the digital camera to capture a first image of the target region currently illuminated
by the focused illumination beam (14), extracting, from the captured first image,
a first block of pixel values provided by a sub-matrix (B0) of the matrix of sensors, storing said first block of pixel values as a first block
of pixel values of a super-resolution image of the object, then commanding the displacement
block to displace, by a sub-diffraction limited distance, the support plate along one of the displacement axes, then commanding the digital camera to capture a second
image of the target region currently illuminated by the focused illumination beam
(14), extracting, from the captured second image, a second block of pixel values provided
by said sub-matrix (B0) of the matrix of sensors, and storing said second block of pixel values as a second
block of pixel values of the super-resolution image, said second block of pixel values
being placed right next to said first block of pixel values in the super-resolution
image along the image axis (X, Y) corresponding to said one displacement axis (x,
y).
8. An imaging system according to claim 7, wherein the sub-matrix (B0) comprises only one sensor.
9. An imaging system according to claim 7 or 8, wherein the controller is adapted for
selecting a sensor of the matrix as sensor of the sub-matrix (B0) only if said sensor captures a part of the image of the focused illumination beam
(14) on the support plate (6).
10. An imaging system according to claim 9, wherein the controller is adapted for selecting
sensor(s) of the matrix as sensor(s) of the sub-matrix (B0) in a preliminary step as the one(s) among the sensors of the matrix capturing a
part of the image of the focused illumination beam (14) on the support plate wherein
the image is the most in focus.
11. An imaging system according to any one of claims 7 to 10, wherein the controller is
adapted for applying a background removal to the pixel values of the super-resolution
image and/or for cropping said pixel values that are outside at least one specified
range of upper intensity values, in order to remove from the pixel values related
to an individual fluorophore the overlapping intensities of neighbouring fluorophores.
12. An imaging system according to any one of claims 7 to 11, adapted for imaging an object
including at least one fluorescence source and for obtaining a super-resolution fluorescence
image, wherein the illumination source (1) is adapted to produce an illumination beam
for generating fluorescence from the illuminated object, the image of the target region
captured by the digital camera (9) being an image of the fluorescence generated by
the illuminated object.