(19)
(11)EP 3 602 110 B1

(12)EUROPEAN PATENT SPECIFICATION

(45)Mention of the grant of the patent:
03.08.2022 Bulletin 2022/31

(21)Application number: 18719236.4

(22)Date of filing:  21.03.2018
(51)International Patent Classification (IPC): 
G01S 7/4865(2020.01)
G01S 7/497(2006.01)
G01S 7/4863(2020.01)
(52)Cooperative Patent Classification (CPC):
G01S 7/4863; G01S 7/4865; G01S 7/497; G01S 17/894
(86)International application number:
PCT/GB2018/050728
(87)International publication number:
WO 2018/172767 (27.09.2018 Gazette  2018/39)

(54)

TIME OF FLIGHT DISTANCE MEASUREMENT SYSTEM AND METHOD

SYSTEM UND VERFAHREN ZUR ENTFERNUNGSMESSUNG NACH FLUGZEIT

SYSTÈME ET MÉTHODE DE MESURE DE LA DISTANCE PAR TEMPS DE VOL


(84)Designated Contracting States:
AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

(30)Priority: 21.03.2017 GB 201704452

(43)Date of publication of application:
05.02.2020 Bulletin 2020/06

(73)Proprietor: Photonic Vision Limited
Sevenoaks, Kent TN13 1XR (GB)

(72)Inventor:
  • MORCOM, Christopher John
    Sevenoaks Kent TN13 1XR (GB)

(74)Representative: Elkington and Fife LLP 
Prospect House 8 Pembroke Road
Sevenoaks, Kent TN13 1XR (GB)


(56)References cited:
WO-A2-2013/040121
US-A1- 2015 285 625
US-A1- 2006 132 635
  
      
    Note: Within nine months from the publication of the mention of the grant of the European patent, any person may give notice to the European Patent Office of opposition to the European patent granted. Notice of opposition shall be filed in a written reasoned statement. It shall not be deemed to have been filed until the opposition fee has been paid. (Art. 99(1) European Patent Convention).


    Description

    Field of Invention



    [0001] The invention relates to a time of flight distance sensor and method of use.

    Background to the Invention



    [0002] Accurate and fast surface profile measurement is a fundamental requirement for many applications including industrial metrology, machine guarding and safety systems as well as automotive driver assistance and collision warning systems.

    [0003] Many different approaches have been developed over the years with the objective of providing a low cost yet high performance surface profile solution.

    [0004] Sensors based on triangulation measurement techniques using structured illumination patterns have proven to be very successful in short range applications such as gesture control, gaming and industrial metrology, but their inability to cope with high ambient light levels has tended to restrict their use to indoor applications or applications where the overall illumination level can be controlled.

    [0005] To overcome these constraints, much effort has been expended on developing pixelated focal plane arrays able to measure the time of flight of modulated or pulsed infra-red (IR) light signals and hence measure 2D or 3D surface profiles of remote objects. A common approach is to use synchronous or "lock-in" detection of the phase shift of a modulated illumination signal. In the simplest form of such devices, electrode structures within each pixel create a potential well that is shuffled back and forth between a photosensitive region and a covered region. By illuminating the scene with a modulated light source (either sine wave or square wave modulation has been used) and synchronising the shuffling process with the modulation, the amount of charge captured in each pixel's potential well is related to the phase shift and hence distance to the nearest surface in each pixel's field of view. By using charge coupled device technology, the shuffling process is made essentially noiseless and so many cycles of modulation can be employed to integrate the signal and increase the signal to noise ratio. This approach with many refinements is the basis of the time of flight focal plane arrays manufactured by companies such as PMD, Canesta (Microsoft) and Mesa Imaging.

    [0006] Whilst such sensors can provide high resolution, their performance is limited by random noise sources including intrinsic circuit noise and particularly the shot noise generated by ambient light. Furthermore, the covered part of each pixel reduces the proportion of the area of each pixel able to receive light (the "fill factor"). This fill factor limitation reduces the sensitivity of the sensor to light and requires higher emission power to overcome. In addition, this technique can only make one distance measurement per pixel and so is unable to discriminate between reflections from a far object and those from atmospheric effects such as fog, dust, rain and snow, restricting the use of such sensor technologies to indoor, covered environments.

    [0007] To overcome these problems companies such as Advanced Scientific Concepts Inc. have developed solutions whereby arrays of avalanche photodiodes (APD) are bump bonded to silicon readout integrated circuits (ROIC) to create a hybrid APD array/ROIC time of flight sensor. The APDs provide gain prior to the readout circuitry thus helping to reduce the noise contribution from the readout circuitry whilst the ROIC captures the full time of flight signal for each pixel allowing discrimination of atmospheric obscurants by range. However, the difficulties and costs associated with manufacturing dense arrays of APDs and the yield losses incurred when hybridising them with ROIC has meant that the resolution of such sensors is limited (e.g. 256 x 32 pixels) and their prices are very high.

    [0008] Some companies have developed systems using arrays of single photon avalanche detectors (SPAD) operated to detect the time of flight of individual photons. In principle such sensors can be manufactured at low cost using complementary metal-oxide semiconductor (CMOS) processes. However, the quantum efficiency of such sensors is poor due to constraints of the CMOS process and their fill factor is poor due to the need for circuitry at each pixel to ensure correct operation. This combination of poor quantum efficiency and low fill factor leads to very poor overall photon detection efficiency despite the very high gain of such devices.

    [0009] In addition, without care, avalanche multiplication based sensors can be easily damaged by optical overloads (such as from the sun or close specular reflectors in the scene) as the avalanche multiplication effect when applied to the overload will lead to extremely high current densities in the region of the overload, risking permanent damage to the device structure.

    [0010] US 2015/0285625 discloses an image sensor comprising a SPAD/Avalanche photodiode pixel array and charge storage, which can determine distance to an object by triangulation or by a time of flight measurement. That document proposes a calibration mode comprising a coarse scan, for reducing background noise.

    [0011] Triangulation measurements use a known distance between the light emitter and image sensor; and the angle of light incident on the image sensor, calculated from the region of the image sensor on which the light falls, to calculate the distance to an object.

    [0012] The time of flight measurement uses capacitor charge storage on the pixels to record the charging or discharging of the capacitor, initiated by the emission of a light pulse and terminated by its detection on the image sensor. The charge level of the capacitor carries information on the time of flight of the light pulse.
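As an illustration of this charge-based timing principle, the conversion from a capacitor voltage to a distance might be sketched as below. The linear discharge ramp and its rate are assumptions made for the sketch, not details taken from the cited document:

```python
C_LIGHT = 3.0e8  # speed of light, m/s

def tof_distance_from_capacitor(v_measured, v_start, ramp_rate):
    """Illustrative only: recover a time of flight from a linearly
    discharging capacitor, then convert it to a one-way distance.

    v_measured : voltage at the moment the reflected pulse is detected
    v_start    : voltage when the light pulse was emitted
    ramp_rate  : discharge rate in volts per second (assumed constant)
    """
    time_of_flight = (v_start - v_measured) / ramp_rate  # elapsed time, s
    return C_LIGHT * time_of_flight / 2.0                # halve for round trip

# A 1 V drop at 1 V/us corresponds to 1 us of flight, i.e. 150 m
print(tof_distance_from_capacitor(0.0, 1.0, 1e6))  # -> 150.0
```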

    [0013] An alternative approach that has been attempted is to provide each pixel with its own charge coupled or CMOS switched capacitor delay line, integrated within the pixel, to capture the time of flight signal. An advantage of this approach is that the time of flight can be captured at high frequency to provide good temporal and hence range resolution, but the signal read-out process can be made at a lower frequency, allowing a reduction in electrical circuit bandwidth and hence noise. However, if the delay lines have enough elements to capture the reflected laser pulse from long range objects with good distance resolution, then they occupy most of the pixel area leaving little space for a photosensitive area. Typically, this poor fill factor more than offsets the benefits of the slower speed readout and so high laser pulse power is still required, significantly increasing the total lidar sensor cost. To try to overcome this problem some workers have integrated an additional amplification stage between the photosensitive region and the delay line but this introduces noise itself, thus limiting performance.

    [0014] Thus, there is a need for a solution able to offer the low noise and extended operating range performance of hybrid APD array/ROIC based lidars but with the lower costs and more robust behaviour of monolithic focal plane time of flight arrays.

    Summary of the invention



    [0015] According to a first aspect of the invention, there is provided a time of flight distance measurement method according to claim 1. By providing a self-calibration step, any inaccuracies in the exact location of the image illumination stripe can be corrected for and reliable distance measurement achieved.

    [0016] In one approach, the self-calibration step comprises:

    for a capturing period, leaving the photosensitive image region unclocked to capture an image of the image illumination stripe on the photosensitive image region, then

    clocking the image region (8), the storage region and the readout region to read out the captured image to identify the location of the image illumination stripe on the photosensitive image region (8); and

    calibrating the location of the image illumination stripe.



    [0017] By leaving the image region without clock signals during the capturing period the location of the image illumination stripe can be reliably determined.

    [0018] In a particular embodiment, the method may include determining that the image illumination stripe will be approximately in row Y;

    applying (N+Y-K) clock cycles to the image region and storage region, where K is a positive integer of at least 2; and

    reading out 2K rows of the store section to determine the actual row Ya of the image illumination stripe.



    [0019] In this way, prior knowledge of the approximate position of the stripe can be used to speed up the calibration process.

    [0020] In embodiments, the method may include controlling independently the clocking of the photosensitive image region, the storage region, and the readout section; the method comprising:

    clearing the image region, storage region and readout section;

    emitting a first laser pulse;

    capturing a first illumination image on the image region;

    clocking the image region and storage region sufficiently to transfer the first illumination image to the storage region;

    emitting a second laser pulse;

    clocking the image region and the storage region at the predetermined clock frequency to capture a second illumination image and transfer it to the storage region;

    clocking the readout section to read out the storage section and identify the number of rows between the first and second illumination images; and

    determining the distance to the object from the number of rows between the first and second illumination images.
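Assuming that the row separation between the two stored illumination images directly encodes the flight time of the second pulse at the predetermined clock frequency (an illustrative reading of the steps above, not a formula stated in the text), the final determining step can be sketched as:

```python
C_LIGHT = 3.0e8  # speed of light, m/s

def distance_from_row_separation(delta_rows, clock_hz):
    """Convert the number of rows between the first and second
    illumination images into a distance: each row shift corresponds to
    one clock period, and the light travels there and back."""
    time_of_flight = delta_rows / clock_hz
    return C_LIGHT * time_of_flight / 2.0

# e.g. 100 rows of separation at a 100 MHz clock -> 1 us -> 150 m
print(distance_from_row_separation(100, 100e6))  # -> 150.0
```

The clock frequency value is illustrative; the document leaves the predetermined clock frequency unspecified at this point.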



    [0021] The image region may be driven with static potentials during the self-calibration step.

    [0022] The self-calibration step and the distance determining step may be alternated for optimal calibration. Alternatively, calibration may be required less often, and the method may include carrying out a self-calibration step at less than 20% of the frequency of carrying out the distance determining step.

    [0023] In another aspect, the invention relates to a time of flight distance measurement system, according to claim 7.

    [0024] The time of flight sensor may be a charge coupled device. The use of a charge coupled device allows for a very high fill factor, i.e. a very large percentage of the area of the image section of the time of flight sensor may be sensitive to light. This increases efficiency, and allows for the use of lower power lasers.

    [0025] In particular embodiments, photons incident over at least 90% of the area of the photosensitive image region are captured by the photosensitive image region.

    Brief Description of the Drawings



    [0026] For a better understanding, embodiments of the invention will now be described with reference to the accompanying drawings, in which:

    Figure 1 shows a time of flight distance measuring system according to a first embodiment of the invention;

    Figure 2 illustrates a captured charge pattern captured by a time of flight distance measuring system according to embodiments of the invention;

    Figure 3 illustrates an embodiment of a focal plane array device used in the time of flight distance measuring system;

    Figure 4 illustrates clock signals used in a preferred embodiment of the invention;
    and

    Figure 5 illustrates a captured charge pattern according to alternative embodiments.


    Detailed Description



    [0027] The inventor has realised that by combining an optimised sensor architecture with a novel operating method the poor fill factor and high readout noise problems of the existing sensors can be overcome in a very low cost and commercially advantageous manner.

    [0028] One embodiment is shown in figure 1.

    [0029] Control electronics (1) are configured to control light source (2) and associated optical system (3) to emit a pattern of light with a pre-defined combination of spatial and temporal characteristics into the far field.

    [0030] In the simplest embodiment shown in figure 1, the spatial distribution of the emitted light is a fan beam (4) whose location in a direction orthogonal to the long axis of the beam is adjustable under control of the control electronics (1) and the temporal characteristics of the light are a short pulse, emitted at a time T0, also under control of the control electronics (1).

    [0031] This combination of spatial and temporal characteristics will create a pulsed stripe of illumination (5) across the surface of any remote object (6). This stripe on the object may be referred to as an object illumination stripe.

    [0032] Receive lens (7) is configured to collect and focus the reflected pulse of light from this stripe of illumination (5) onto the photosensitive image section (8) of a focal plane array (FPA) device (9) yielding a stripe of illumination (15) on the surface of the image area as illustrated schematically in figure 2. This stripe on the sensor will be referred to as an image illumination stripe.

    [0033] It will be appreciated by those skilled in the art that the optical arrangement may be more complex than a single receive lens (7) and any optical system capable of focussing the object illumination stripe onto the image section (8) to achieve the image illumination stripe may be used.

    [0034] By shifting the position of the fan beam, the vertical position of the intensity distribution at the image plane is also controllable.

    [0035] As illustrated in Figure 2, the image section (8) of the focal plane array (9) comprises an array of M columns and J rows of photosensitive pixels. The focal plane array device (9) also contains a store section (10) and readout section (11).

    [0036] The store section (10) comprises M columns by N rows of elements and is arranged to be insensitive to light. The image and store sections are configured so that charge packets generated by light incident upon the pixels in the image section can be transferred along each of the M columns from the image section (8) into the corresponding column of the store section (10) at a transfer frequency FT by the application of appropriate clock signals from the control electronics (1).

    [0037] The image section (8) and store (10) section may be implemented as a charge coupled device (CCD) on a single die. The store section and readout section may be covered by a light shield. The use of a CCD allows for excellent fill factors, i.e. the percentage of area of the image sensor responsive to incoming photons. Efficient light collection is important and allows for the use of lower power lasers which may reduce the cost of the device. Efficient light collection also, in examples, means that there is no need for amplification and in particular no need for an avalanche photodiode. This can improve reliability and reduce noise and cost.

    [0038] In order to maximise the fill factor, the image section interconnects may be made of transparent conductor, for example indium tin oxide, or alternatively the die may be thinned and used from the back.

    [0039] In order to maximise the transfer frequency FT (described below), low impedance aluminium or other metallic connections may be used to transfer the clock signals from the outer edges to the centre of the image and store section regions.

    [0040] The readout section (11) is arranged to read out data from the M columns of the storage region at a readout frequency FR (as described below) and is also configured to be insensitive to light. The readout section (11) may be implemented on the same die as the image and store sections.

    [0041] Any one or a combination of the control electronics (1), processing electronics (12) and output circuitry (13) may also be fabricated on the same die as the image, store or readout sections.

    [0042] The sequence of operation is as follows:
    a) control electronics (1) commands the light source (2) and optical system (3) to set the location of the horizontal fan beam so that any light from the pulsed illumination stripe (5) that is reflected from a remote object (6) will be focussed by lens (7) upon the image section (8) as a corresponding stripe (15) to yield an intensity distribution (17) that is ideally a delta function with a peak centred upon row Y as illustrated schematically in figure 2. This means that each column X of the sensor will see an intensity distribution with a peak centred at row Y.
    b) The control electronics then operates the image and store sections (8) and (10) to clear all charge from within them.
    c) The control electronics causes light source (2) to emit a light pulse at time T0 and commences clocking the image (8) and store (10) sections at high frequency FT, i.e. the transfer frequency, to transfer charge captured in the image section (8) along each column and into the store section (10). Using its a priori knowledge of Y, the control electronics applies a total of N+Y clock cycles to the image and store sections.
    d) Whilst the image and store sections are being clocked, the pulsed fan beam (5) propagates out, reflects off remote objects (6) and is focussed onto the image area (8) to generate a set of charge packages in each column X along row Y at time T1(X), where T1(X) - T0 is the time of flight of the part of the fan beam that is incident upon an individual column X. The high-speed clocking of the image (8) and store (10) sections causes each charge package to be captured at R(X) clock cycles after the start of clocking (T0), where:

      R(X) = (T1(X) - T0) × FT    (1)

    e) The clocking of the image and store sections causes the charge packages to be moved N+Y rows down each column in a direction towards the store section, creating a spatially distributed set of charge packages within the store section, where the location of the centre of each charge package R(X) corresponds to the time of flight of the reflected light from a remote object (6) at the physical location in the far field corresponding to the intersection of column X and row Y.
    f) The control electronics then applies clock pulses to the store (10) and readout (11) sections at the readout frequency FR to read out the captured packages of charge, passing them to processing electronics (12).
    g) Processing electronics (12) then employ standard techniques to determine the location R(X) of the centre of any charge packages found within each column X of the array.
    h) Hence, it can be seen that this sensor architecture and operating method has enabled the time of flight information (temporal domain) from a reflection in the far field to be converted into the spatial domain within the store section (10), so that the distance to a remote object can be determined from the position R(X) of a charge packet within the store section, and the location of the remote object is defined in the X axis by the number X of the column that the charge packet was found in and in the Y axis by the location of the illumination stripe (Y).

      For each column X, the processing electronics uses the speed of light (c) to calculate the distance D(X,Y) to each remote object (6) illuminated by the fan beam (4) from the following equation:

      D(X,Y) = c × R(X) / (2 × FT)    (2)

      It should be noted that this process allows the distance to multiple objects at the same point X along the stripe of illumination to be determined, allowing reflections from snow, rain and fog to be captured and discriminated from reflections from true far objects (6).

    i) The control electronics then repeats steps a) to h) sequentially, moving the position of the illumination stripe to gather additional sets of D(X,Y) distance data. In this way, the sensor builds up a complete three dimensional point cloud comprising a set of distance data points D(X,Y) that is transmitted via output circuitry (13).
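The conversion in steps g) and h) from a charge packet position R(X) to a distance can be sketched as follows. The intensity-weighted centroid stands in for the "standard techniques" of step g), and the 100 MHz transfer frequency is an illustrative value, not one given in the text:

```python
import numpy as np

C_LIGHT = 3.0e8  # speed of light, m/s

def column_distance(column_signal, f_t):
    """Locate the centre R(X) of the charge package in one column of
    the store section (intensity-weighted centroid) and convert it to
    a distance using D = c * R / (2 * F_T)."""
    rows = np.arange(len(column_signal))
    weights = np.asarray(column_signal, dtype=float)
    r_x = np.sum(rows * weights) / np.sum(weights)  # centroid row R(X)
    time_of_flight = r_x / f_t                      # clock cycles -> seconds
    return C_LIGHT * time_of_flight / 2.0           # halve for round trip

# A charge package centred on row 100 at F_T = 100 MHz is 1 us of
# flight, i.e. an object 150 m away
signal = np.zeros(200)
signal[99:102] = [1.0, 2.0, 1.0]
print(column_distance(signal, 100e6))  # -> 150.0
```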


    [0043] It can be seen that this method of operation and the separation of the detector architecture into image, store and readout sections enables the whole of each image pixel to be photosensitive (i.e. 100% fill factor) because the charge to voltage conversion/readout process is physically remote on the detector substrate. In addition, the use of a store section enables the charge to voltage conversion/readout process to be carried out at a different time to the photon capture process.

    [0044] These two factors deliver very significant benefits over all other time of flight sensors that are constrained by the necessity for photon capture, charge to voltage conversion and, in some cases, time discrimination to occur within each pixel.
    i. The physical separation of the image section enables it to be implemented using well-known, low cost and highly optimised monolithic image sensor technologies such as charge coupled device (CCD) technology. This allows noiseless photon capture and transfer and, in addition to the 100% fill factor, very high quantum efficiency through the use of techniques such as back-thinning, back surface treatment and anti-reflection coating.
    ii. The temporal separation of the high-speed photon capture and charge to voltage/readout process and the physical separation of the readout circuitry allows the readout circuitry and readout process to be fully optimised independent of the high-speed time of flight photon capture process. For example the readout of the time of flight signal can be carried out at a significantly lower frequency (FR) than its original high speed capture (FT). This allows the noise bandwidth and hence the readout noise to be significantly reduced, but without the very poor fill factor and hence sensitivity losses encountered by other approaches that also seek to benefit from this option.
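The benefit described in point ii can be quantified with a common first-order approximation (an assumption for this sketch, not a relation stated in the text): if readout noise scales with the square root of the circuit bandwidth, then reading out at FR instead of FT reduces the noise by the square root of their ratio:

```python
import math

def readout_noise_ratio(f_t, f_r):
    """Illustrative first-order estimate: noise improvement factor
    when the time of flight record captured at F_T is read out at the
    lower frequency F_R, assuming noise ~ sqrt(bandwidth)."""
    return math.sqrt(f_t / f_r)

# Capturing at 100 MHz but reading out at 1 MHz: ~10x less readout noise
print(readout_noise_ratio(100e6, 1e6))  # -> 10.0
```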


    [0045] The significance of these benefits is such that an optimised light radar sensor can provide long range, high resolution performance without needing costly and complicated avalanche multiplication readout techniques.

    [0046] In a preferred embodiment shown in figure 3, the readout electronics (11) are configured to allow readout from all columns to be carried out in parallel. Each column is provided with a separate charge detection circuit (18) and analogue to digital converter (19). The digital outputs (20) of each analogue to digital converter are connected to a multiplexer (21) that is controlled by an input (22) from the control electronics.

    [0047] The store (10) and readout (11) sections are covered by an opaque shield (23).

    [0048] During the readout operation, the control electronics applies control pulses to the store section (10) to sequentially transfer each row of photo-generated charge to the charge detectors (18). These convert the photo-generated charge to a voltage using standard CCD output circuit techniques such as a floating diffusion and reset transistor. The signal voltage from each column is then digitised by the analogue to digital converters (19) and the resultant digital signals (20) are sequentially multiplexed to an output port (24) by the multiplexer (21) under control of electrical interface (22).

    [0049] By carrying out the sensor readout for all columns in parallel, this architecture minimises the operating readout frequency (FR) and hence readout noise.

    [0050] In other embodiments, it can be advantageous for the temporal distribution of the light source to be a maximal length sequence or other form of signal whose autocorrelation function equates to a delta function. In this case, the processing electronics will carry out a cross-correlation operation on the data set from each column and the location R(X) that equates to the time of flight will be determined by the location of the cross-correlation peak.

    [0051] This process may be extended further so that the illumination comprises two (or more) stripes of illumination, with each stripe modulated in time by a different and orthogonal maximal length sequence. In this case, the processing electronics applies two (or more) cross correlation operations to the data from each column; each operation using the appropriate maximal length sequence as the correlation key to allow the time of flight data for each stripe of illumination to be decoded separately. The benefit of this technique is that it enables range data from multiple stripes to be captured simultaneously, speeding up the overall measurement process, albeit at the expense of additional computation.
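The cross-correlation decoding of the two preceding paragraphs can be sketched as follows. Random ±1 codes stand in for true maximal length sequences (whose autocorrelation more closely approximates a delta function), and the lag values are synthetic:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for two different, mutually orthogonal modulation keys;
# a real system would use proper maximal length sequences.
key_a = rng.choice([-1.0, 1.0], size=63)
key_b = rng.choice([-1.0, 1.0], size=63)

def decode_delays(column, keys):
    """Cross-correlate one column's time of flight record with each
    modulation key; the lag of each correlation peak gives R(X) for
    the corresponding illumination stripe."""
    delays = []
    for key in keys:
        corr = np.correlate(column, key, mode="valid")
        delays.append(int(np.argmax(corr)))
    return delays

# Synthetic column: stripe A reflects at lag 40, stripe B at lag 120,
# captured simultaneously in the same record
column = np.zeros(300)
column[40:40 + 63] += key_a
column[120:120 + 63] += key_b
print(decode_delays(column, [key_a, key_b]))  # -> [40, 120]
```

Because the two keys are nearly uncorrelated, each correlation picks out only its own stripe, which is what allows the range data from multiple stripes to be captured simultaneously.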

    [0052] The approach described above works very well, particularly where the position of the laser stripe of illumination is generated using fully solid state approaches such as a laser stripe array and associated optical system. In this case, the position Y of the laser stripe is varied by the control electronics (1) pulsing one of the laser stripes in the array. Other approaches may use a micro-electro mechanical system (MEMS) mirror or optical phased array (OPA) to scan the beam.

    [0053] However, it does require very precise and repeatable control of, and knowledge of, the position of the laser fan beam (Y). This is because any deviation dY in the position of the image of the fan beam from the expected position (Y) will result in an error dR in the distance measurement, where:

      dR = c × dY / (2 × FT)    (3)

    [0054] In practice, there are several factors that may cause a deviation from the expected position Y, including:
    a) temperature shifts or vibration may cause small deviations in the position of the beam;
    b) aberrations in the optics of the system may cause the illumination stripe to deviate from a straight line, which will cause a deviation that varies for each column X;
    c) if the beam steering mechanism uses a micro-electro mechanical system (MEMS) device or an optical phased array (OPA), the position of the beam may deviate because the scanning drifts or is not precisely repeatable due to imperfections in these components.


    [0055] It can be seen from equation 3 that the magnitude of this error may be reduced by increasing the fast capture frequency FT, but it cannot be eliminated.

    [0056] However, the inventor has realised that this problem may be overcome by taking advantage of the sensor architecture and adjusting the timing of the sensor clocking with respect to the laser pulsing to enable the sensor to capture the actual position Ya of the illuminated stripe. From this the deviation dY in the stripe position from its expected position can be determined and used to calculate the error distance dR which may be subtracted from R(X,Y) to remove the error.

    [0057] In an embodiment this may be achieved as follows:
    a) Control electronics (1) applies static electrode potentials to the image section (8) of the time of flight focal plane array (9). This puts the image section into "integration mode": that is, the image section behaves as a series of image picture elements (pixels) where each element is sensitive to light and can capture electrons generated by photons incident upon the pixel.
    b) The control electronics then causes the light source (2) and optical system (3) to emit a pulse of light in the direction necessary to locate the image of the illumination stripe (15) on the focal plane at position Y.
    c) The control electronics maintains the image section (8) in the integration mode for a period of time greater than the round-trip time of flight corresponding to the maximum operating distance of the sensor. For example, if the maximum operating distance of the sensor is 300 m, then the control electronics will maintain the integration mode for 2 microseconds. This allows reflections from the regions of the remote object (6) illuminated by stripe (5) to be collected as a pattern of photo-generated charge within the image section of the time of flight focal plane array.
    d) Once the integration period has completed, the control electronics applies N clock cycles to both the image and store sections to transfer the captured pattern of photo-generated charge into the store section.
    e) After this operation the store section will contain, in each column, charge patterns generated by reflected light from far objects (6) whose centres indicate the actual location (Ya) of the illumination stripe within each column (X).
    f) The control electronics (1) then applies the appropriate clock pulses to the store (10) and readout (11) sections to read out these charge patterns into the processing electronics (12), where standard processing techniques are used to determine the centre of the actual location (Ya) of the illumination stripe for each column X and hence the deviation dY(X) between the actual (Ya) and expected (Y) locations. Using these dY(X) values, the error dR(X) is calculated for each column.
    g) After gathering this set of range error data, the control electronics operates the focal plane array in a "time of flight mode" as explained above to measure the distances to the remote objects (6) and then uses the dR(X) values to correct the measurement for each column and yield an error free result for each range measurement R(X,Y).
    h) This process is repeated for each different Y location required to gather a sufficiently dense point cloud for the application.
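Steps f) and g) of this self-calibration sequence can be sketched as follows. The intensity-weighted centroid stands in for the "standard processing techniques" mentioned in step f), and the transfer frequency is an illustrative value:

```python
import numpy as np

C_LIGHT = 3.0e8  # speed of light, m/s
F_T = 100e6      # fast transfer frequency, Hz (illustrative value)

def centroid(column_signal):
    """Intensity-weighted centroid of one column; the same technique
    is used for both the calibration stripe position Ya(X) and the
    time of flight position R(X)."""
    rows = np.arange(len(column_signal), dtype=float)
    weights = np.asarray(column_signal, dtype=float)
    return np.sum(rows * weights) / np.sum(weights)

def corrected_range(calib_column, tof_column, y_expected):
    """Measure the actual stripe row Ya, derive the row deviation
    dY = Ya - Y, convert it to a range error dR = c * dY / (2 * F_T)
    per equation 3, and subtract it from the raw range measurement."""
    d_y = centroid(calib_column) - y_expected
    d_r = C_LIGHT * d_y / (2.0 * F_T)
    r_raw = C_LIGHT * centroid(tof_column) / (2.0 * F_T)
    return r_raw - d_r

# Example: stripe actually found at row 102 when row 100 was expected
# (dY = 2 rows, dR = 3 m), so 3 m is subtracted from the raw range
calib = np.zeros(200)
calib[102] = 1.0
tof_rec = np.zeros(200)
tof_rec[102] = 1.0
print(corrected_range(calib, tof_rec, 100))  # -> 150.0
```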


    [0058] In this way, the time of flight sensor can be made completely self-calibrating for any errors in the laser stripe position or variation in laser stripe straightness or thickness, provided the same processing technique is used both for the determination of the actual position of the stripe (Ya(X)) in integration mode and for the position of the reflection R(X) in time of flight mode.

    [0059] The frequency with which the error determining step is carried out will depend upon the stability of the laser position with time. For example, if the system shows only a slow drift in the error in position of the illumination stripe with time, then the error measurement process need only be carried out infrequently. Alternatively the error measurement process may be carried out repeatedly and the results averaged to improve the error measurement accuracy.

    [0060] If the time of flight system is operated in a dynamic environment where remote objects may be moving quickly, or the laser is being scanned quickly, then it is preferable to minimise the time difference between the measurement of the error dR(X) and the range R(X) to minimise the effect of any such movement.

    [0061] In the simple method described above, the minimum time interval between the range error and range measurements is determined by the time taken to read out the whole of the focal plane array.

    [0062] To reduce this time interval, it is preferable to use a priori knowledge of the approximate position Y of the image of the illumination stripe and, rather than applying N clock cycles to the image and store sections to transfer the integration mode pattern of charge from the image into the store, to apply N+Y-K clock cycles. If there were no error in the position of the image of the illumination stripe, this would place the image of the stripe K rows above the interface between the store and readout sections. However, by choosing K to be larger than the maximum difference between the actual position Ya and the expected position Y of the line, it is only necessary to read out 2K rows of the store section to capture the whole of the actual position of the line and to calculate the error dR(X).

    [0063] In this way, the time interval between the measurement of range error and range is reduced to the time taken to read out 2K rows, which will be significantly less than the time taken to read out all N rows of the store section and so offers a useful improvement.
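The reduced 2K-row readout can be sketched as below: assuming the stripe signal is expected at index K within the 2K-row window read from the store section, its measured position gives the deviation directly. The function name and the centroid estimator are illustrative assumptions, standing in for the patent's unspecified "standard processing techniques".

```python
def stripe_error_from_window(window, K):
    """Deviation of the stripe from its expected index K within a
    2K-row readout window (one column), via an intensity centroid."""
    total = sum(window)
    if total == 0:
        return 0.0                      # no signal: report no deviation
    centroid = sum(i * s for i, s in enumerate(window)) / total
    return centroid - K                 # deviation dY for this column

# Example: 2K = 8 rows read out; the stripe lands one row low (index 5).
dY = stripe_error_from_window([0, 0, 0, 0, 0, 10, 0, 0], K=4)
```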

    [0064] However, the time interval may be reduced to negligible levels using a further refinement to the approach, which will now be described with reference to Figure 4, an outline timing diagram.
    1. a) The control electronics initially applies the appropriate clock pulse sequences to the image, store and readout section clock electrodes (IØ, SØ, RØ respectively) to clear all these regions of charge.
    2. b) This process is stopped for the image section (25) and the static potentials necessary to switch it into integration mode are applied. At the same time or shortly afterwards, the light source, beam steering and optical systems are commanded to emit a light pulse (26). The control electronics maintains the integration mode long enough to capture the reflected signal from the far object as explained previously.
    3. c) The control electronics then stops the charge clearing process in the store section and applies Y clock cycles to the image and store sections (27) to transfer the captured charge pattern, corresponding to the image of the reflected pulse of illumination, into a region at the top of the store section.
    4. d) The control electronics then causes the light source to emit a second light pulse (28) and starts to apply high frequency (TF) clock cycles to the image section (8) to initiate the time of flight capture mode. After Y clock cycles have been applied to the image section (29) the control electronics applies a further N-K high frequency clock cycles to both the store and image section together (30) to complete the time of flight signal capture.
    5. e) The effect of the sequence of operations described above is that, within each column (X), the store section now contains one charge pattern (32), generated by the light reflected from the far object and captured during the integration mode, which will be located K rows above the bottom of the store section if there has been no error in the illumination position; and a second charge pattern (33), generated by the light reflected from the same point on the far object but captured during the high-speed-clocking time of flight mode, and therefore shifted in spatial position by R(X) rows along column X. It will be understood that the charge patterns captured during the integration and time of flight modes are both subject to any error in the Y position of the illumination source by the same amount, so that the separation of the two charge patterns (R) is directly proportional to the time of flight and hence, by equation 2, to the distance to the far object, independent of any error in the illumination position.
    6. f) The control electronics then applies clock cycles to the store and readout sections of the device to convert and read out the charge patterns (32) and (33) as electrical signals.
    7. g) The processing electronics (12) then processes each column X of signal data to determine the separation R(X) between the signals captured in integration and time of flight modes and hence the distance D(X,Y) to the far object using equation 2.
      The processing electronics may also calculate the position J(X) of the signal related to the image of the reflection captured during the integration mode and use this to calculate the error dY(X) in the position of the fan beam in column X, where:

      dY(X) = J(X) - K
      This information may be used to track any changes in the error dY(X) with Y position, time or environmental factors.
    8. h) The control electronics then repeats steps a) to g) shifting the position of the illumination fan beam in the Y axis each time as necessary to capture a full point cloud.
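The separation-to-distance conversion in step g) can be sketched as follows. Equation 2 itself is not reproduced in this excerpt, so the form D = c·R·t_clk/2 (row separation R times the per-row time step, halved for the round trip) is an assumption consistent with a time-of-flight measurement; the naive peak-picking is likewise only illustrative.

```python
C = 299_792_458.0  # speed of light in m/s

def peak_row(column):
    """Row index of the strongest signal in a column (naive peak pick)."""
    return max(range(len(column)), key=lambda i: column[i])

def distance_from_patterns(integration_rows, tof_rows, t_clk):
    """Distance from the row separation R(X) of the two charge patterns.

    integration_rows : per-row signal of the integration-mode pattern
    tof_rows         : per-row signal of the time-of-flight pattern
    t_clk            : assumed high-frequency transfer clock period (s)
    """
    R = peak_row(tof_rows) - peak_row(integration_rows)  # separation in rows
    return C * R * t_clk / 2.0  # assumed form of equation 2

# Example: patterns 20 rows apart with a 10 ns clock, i.e. about 30 m.
d = distance_from_patterns([0, 9, 0], [0] * 21 + [9], t_clk=10e-9)
```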



    Claims

    1. A time of flight distance measurement method comprising:

    emitting a pulsed fan beam from a light emitter to illuminate a remote object with an object illumination stripe (5);

    capturing an image of the object illumination stripe as an image illumination stripe (15) on a photosensitive image region (8) of a time of flight sensor (9) comprising an array of M columns of J rows of pixels, where both M and J are positive integers greater than 2,

    transferring data from the photosensitive image region (8) to a storage region (9) arranged not to respond to incident light, the storage region comprising M columns of N storage elements, along the M columns of the storage region from respective columns of the photosensitive image region; and

    reading out data in a readout section (11) from the M columns of the storage region (2); the method comprising

    operating a self-calibration step including capturing the image illumination stripe (15) on the photosensitive image region (8) during an integration period, followed by reading out the data from the image region (8) via the storage region (10) and the readout section (11) and identifying which row of the photosensitive image region has the image illumination stripe; and

    operating a distance determining step including determining the distance to the remote object by clocking the image region (8) while capturing the image of the object illumination stripe (15), reading out the data from the image region (8) via the storage region (10) and the readout section (11) and determining the distance to the object.


     
    2. A time of flight distance measurement method according to claim 1, comprising driving the image region with static potentials during the self-calibration step.
     
    3. A time of flight distance measurement method according to claim 1 or 2, comprising:

    determining that the image illumination stripe will be approximately in row Y;

    applying (N+Y-K) clock cycles to the image region and storage region, where K is a positive integer of at least 2; and

    reading out 2K rows of the store section to determine the actual row Ya of the image illumination stripe.


     
    4. A time of flight distance measurement method according to claim 3, wherein the method comprises controlling independently the clocking of the photosensitive image region, the storage region, and the readout section; the method comprising:

    clearing the image region, storage region and readout section;

    emitting a first laser pulse;

    capturing a first illumination image on the image region;

    clocking the image region and storage region sufficiently to transfer the first illumination image to the storage region;

    emitting a second laser pulse;

    clocking the image region and the storage region at the predetermined clock frequency to capture a second illumination image and transfer it to the storage region;

    clocking the readout section to read out the storage section and identify the number of rows between the first and second illumination image; and

    determining the distance to the object from the number of rows between the first and second illumination image.


     
    5. A time of flight distance measurement method according to any preceding claim further comprising alternating the self-calibration step and the distance determining step.
     
    6. A time of flight distance measurement method according to any of claims 1 to 4 further comprising carrying out a self-calibration step at less than 20% of the frequency of carrying out the distance determining step.
     
    7. A time of flight distance measurement system, comprising:

    a light emitter (2,3) arranged to emit a pulsed fan beam for illuminating a remote object with a pulsed illumination stripe;

    a time of flight sensor (9) comprising:

    a photosensitive image region (8) comprising an array of M columns of P rows of pixels, where both M and P are positive integers greater than 2, arranged to respond to light incident on the photosensitive image region (8);

    a storage region (10) arranged not to respond to incident light, the storage region comprising M columns of N storage elements, arranged to transfer data along the M columns of storage from a respective one of the M pixels along column of N storage elements; and

    a readout section (11) arranged to read out data from the M columns of the storage region; and

    circuitry (1) for controlling the time of flight sensor (6) to capture image data of the pulsed illumination stripe along a row of pixels and to transfer the captured image data to the storage section;

    wherein the circuitry is arranged to operate the photosensitive image detector

    in a self-calibration step including capturing the image illumination stripe on the photosensitive image region (8) during an integration period, followed by reading out the data from the image region (8) via the storage region (10) and the readout section (11) and identifying which row of the photosensitive image region has the image illumination stripe; and

    in a distance determining step including determining the distance to the remote object by clocking the image region while capturing the image of the object illumination stripe, reading out the data from the image region (8) via the storage region (10) and the readout section (11) and determining the distance to the object.


     
    8. A time of flight distance measurement system according to claim 7, wherein the time of flight sensor is a charge coupled device.
     
    9. A time of flight distance measurement system according to claim 7 or 8, wherein photons incident over at least 90% of the area of the photosensitive image region are captured by the photosensitive image region.
     








    Drawing

    REFERENCES CITED IN THE DESCRIPTION



    This list of references cited by the applicant is for the reader's convenience only. It does not form part of the European patent document. Even though great care has been taken in compiling the references, errors or omissions cannot be excluded and the EPO disclaims all liability in this regard.

    Patent documents cited in the description