BACKGROUND
1. Technical Field
[0001] The present invention relates to a driver assistance system and method for providing
forward collision warning.
2. Description of Related Art
[0002] During the last few years, camera-based driver assistance systems (DAS) have been
entering the market, including lane departure warning (LDW), automatic high-beam control
(AHC), pedestrian recognition, and forward collision warning (FCW).
[0003] Lane departure warning (LDW) systems are designed to give a warning in the case of
unintentional lane departure. The warning is given when the vehicle crosses or is
about to cross the lane marker. Driver intention is determined based on use of turn
signals, change in steering wheel angle, vehicle speed and brake activation.
[0004] In image processing, the Moravec corner detection algorithm is probably one of the
earliest corner detection algorithms and defines a corner to be a point with low self-similarity.
The Moravec algorithm tests each pixel in the image to see if a corner is present,
by considering how similar a patch centered on the pixel is to nearby, largely overlapping
patches. The similarity is measured by taking the sum of squared differences (SSD)
between the two patches. A lower number indicates more similarity.
An alternative approach to corner detection in images is based on a method proposed
by Harris and Stephens, which is an improvement of the method by Moravec. Harris and
Stephens improved upon Moravec's corner detector by considering the differential of
the corner score with respect to direction directly, instead of using nearby patches
as Moravec did.
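By way of illustration only, the Harris and Stephens corner measure may be sketched as follows in Python; the 5x5 averaging window and the constant k = 0.04 are conventional choices assumed for the sketch, not values taken from this disclosure:

    import numpy as np
    from scipy.ndimage import uniform_filter

    def harris_response(img, k=0.04, win=5):
        # Spatial image gradients (np.gradient differentiates rows first).
        Iy, Ix = np.gradient(img.astype(float))
        # Entries of the 2x2 structure tensor, averaged over a win x win
        # window; a corner is where both eigenvalues are large.
        Sxx = uniform_filter(Ix * Ix, win)
        Sxy = uniform_filter(Ix * Iy, win)
        Syy = uniform_filter(Iy * Iy, win)
        # Harris-Stephens corner score det(M) - k * trace(M)^2 avoids an
        # explicit eigenvalue decomposition at every pixel.
        return (Sxx * Syy - Sxy * Sxy) - k * (Sxx + Syy) ** 2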
[0005] In computer vision, a widely used differential method for optical flow estimation
was developed by Bruce D. Lucas and Takeo Kanade. The Lucas-Kanade method assumes
that the flow is essentially constant in a local neighborhood of the pixel under consideration,
and solves the basic optical flow equations for all the pixels in that neighborhood,
by the least squares criterion. By combining information from several nearby pixels,
the Lucas-Kanade method can often resolve the inherent ambiguity of the optical flow
equation. It is also less sensitive to image noise than point-wise methods. On the
other hand, since it is a purely local method, it cannot provide flow information
in the interior of uniform regions of the image.
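Purely as an illustration of the least-squares step described above (a minimal sketch; the variable names are assumptions), the Lucas-Kanade flow for a single pixel neighborhood can be written as:

    import numpy as np

    def lucas_kanade_flow(Ix, Iy, It):
        # Each pixel in the neighborhood contributes one equation
        # Ix*u + Iy*v = -It; stack them and solve by least squares.
        A = np.stack([Ix.ravel(), Iy.ravel()], axis=1)  # N x 2
        b = -It.ravel()
        # In a uniform region A^T A is (near) singular, which is why a
        # purely local method yields no flow there, as noted above.
        (u, v), *_ = np.linalg.lstsq(A, b, rcond=None)
        return u, v  # flow in pixels per frame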
BRIEF SUMMARY
[0006] The present invention seeks to provide an improved method and system for providing
forward collision warning.
[0007] According to an aspect of the present invention, there is provided a method of providing
a forward collision warning as specified in claim 1 or 9.
[0008] According to another aspect of the present invention, there is provided a system
for providing a forward collision warning as specified in claim 6 or 14.
[0009] According to features of the present invention various methods are provided for signaling
a forward collision warning using a camera mountable in a motor vehicle. Multiple
image frames are acquired at known time intervals. An image patch may be selected
in at least one of the image frames. Optical flow of multiple image points of the
patch may be tracked between the image frames. The image points may be fit to at least
one model. Based on the fit of the image points, it may be determined if a collision
is expected and if so a time-to-collision (TTC) may be determined. The image points
may be fit to a road surface model and a portion of the image points may be modeled
as imaged from a road surface. It may be determined that a collision is not expected
based on the fit of the image points to the road surface model. The image points may
be fit to a vertical surface model, in which a portion of the image points may be
modeled to be imaged from a vertical object. A time-to-collision (TTC) may be determined
based on the fit of the image points to the vertical surface model. Image points may
be fit to a mixed model where a first portion of the image points may be modeled to
be imaged from a road surface and a second portion of the image points may be modeled
to be imaged from a substantially vertical or upright object not lying in the road
surface.
[0010] In the image frames, a candidate image of a pedestrian may be detected where the
patch is selected to include the candidate image of the pedestrian. The candidate
image may be verified as an image of an upright pedestrian and not an object in the
road surface when the best fit model is the vertical surface model. In the image frames
a vertical line may be detected, where the patch is selected to include the vertical
line. The vertical line may be verified as an image of a vertical object and not of
an object in the road surface when the best fit model is the vertical surface model.
[0011] In the various methods, a warning may be issued based on the time-to-collision being
less than a threshold. In the various methods, a relative scale of the patch may be
determined based on the optical flow between the image frames and the time-to-collision
(TTC) may be determined which is responsive to the relative scale and the time intervals.
The method may avoid object recognition in the patch prior to determining the relative
scale.
[0012] According to features of the present invention a system is provided including a camera
and a processor. The system may be operable to provide a forward collision warning
using a camera mountable in a motor vehicle. The system may also be operable to acquire
multiple image frames at known time intervals, to select a patch in at least one of
the image frames; to track optical flow between the image frames of multiple image
points of the patch; to fit the image points to at least one model and based on the
fit of the image points to the at least one model, to determine if a collision is
expected and if so to determine the time-to-collision (TTC). The system may be further
operable to fit the image points to a road surface model. It may be determined that
a collision is not expected based on the fit of the image points to the road surface
model.
[0013] According to other embodiments of the present invention, a patch in an image frame
may be selected which may correspond to where the motor vehicle will be in a predetermined
time interval. The patch may be monitored; if an object is imaged in the patch then
a forward collision warning may be issued. It may be determined if the object is substantially
vertical, upright or not in the road surface by tracking optical flow between the
image frames of multiple image points of the object in the patch. The image points
may be fit to at least one model. A portion of the image points may be modeled to
be imaged from the object. Based on the fit of the image points to the at least one
model, it may be determined if a collision is expected and if so a time-to-collision
(TTC) may be determined. A forward collision warning may be issued when the best fit
model includes a vertical surface model. The image points may be fit to a road surface
model. It may be determined that a collision is not expected based on the fit of the
image points to the road surface model.
[0014] According to features of the present invention a system is provided for providing
a forward collision warning in a motor vehicle. The system includes a camera and a
processor mountable in the motor vehicle. The camera may be operable to acquire multiple
image frames at known time intervals. The processor may be operable to select a patch
in an image frame corresponding to where the motor vehicle will be in a predetermined
time interval and to monitor the patch. If an object is imaged in the patch, a forward
collision warning may be issued if the object is found to be upright and/or not in
the road surface. The processor may further be operable to track optical flow between
the image frames of multiple image points of the object in the patch and fit the image
points to one or more models. The models may include a vertical object model, a road
surface model and/or a mixed model including one or more image points assumed to be from
the road surface and one or more image points from an upright object not in the road
surface. Based on the fit of the image points to the models, it is determined if a
collision is expected and, if a collision is expected, a time-to-collision (TTC) is
determined. The processor may be operable to issue a forward collision warning based
on TTC being less than a threshold.
BRIEF DESCRIPTION OF THE DRAWINGS
[0015] Embodiments of the present invention are described below, by way of example only,
with reference to the accompanying drawings, wherein:
Figures 1a and 1b show schematically two images captured from a forward looking camera
mounted inside a vehicle as the vehicle approaches a metal fence, according to an
embodiment of the present invention.
Figure 2a shows a method for providing a forward collision warning using a camera
mounted in a host vehicle, according to an embodiment of the present invention.
Figure 2b shows further details of the determining time-to-collision step shown in Figure
2a, according to an embodiment of the present invention.
Figure 3a shows an image frame of an upright surface (the back of a van), according
to an embodiment of the present invention.
Figure 3b shows the vertical motion of points δy as a function of vertical image position
(y) for Figure 3a, according to an embodiment of the present invention.
Figure 3c shows a rectangular region which is primarily of a road surface, according
to an embodiment of the present invention.
Figure 3d shows the vertical motion of points δy as a function of vertical image position
(y) for Figure 3c, according to an embodiment of the present invention.
Figure 4a shows an image frame which includes an image of a metal fence with horizontal
lines and a rectangular patch, according to an embodiment of the present invention.
Figures 4b and 4c show more details of the rectangular patch shown in Figure 4a, according
to an embodiment of the present invention.
Figure 4d illustrates a graph of vertical motion of points (δy) versus vertical point position (y), according to an embodiment of the present invention.
Figure 5 illustrates another example of looming in an image frame, according to an
embodiment of the present invention.
Figure 6 shows a method for providing a forward collision warning trap, according
to an embodiment of the present invention.
Figures 7a and 7b show examples of a forward collision trap warning being triggered
on walls, according to an exemplary embodiment of the present invention.
Figure 7c shows an example of a forward collision trap warning being triggered on
boxes, according to an exemplary embodiment of the present invention.
Figure 7d shows an example of a forward collision trap warning being triggered on
sides of a car, according to an exemplary embodiment of the present invention.
Figure 8a shows an example of objects with strong vertical lines on a box, according
to an embodiment of the present invention.
Figure 8b shows an example of objects with strong vertical lines on a lamp post, according
to an embodiment of the present invention.
Figures 9 and 10 illustrate a system which includes a camera or image sensor mounted
in a vehicle, according to an embodiment of the present invention.
DETAILED DESCRIPTION
Reference will now be made in detail to embodiments of the present invention, examples
of which are illustrated in the accompanying drawings, wherein like reference numerals
refer to like elements throughout. The features are described below to explain
the present invention by referring to the figures.
[0017] Before explaining features of the invention in detail, it is to be understood that
the invention is not limited in its application to the details of design and the arrangement
of the components set forth in the following description or illustrated in the drawings.
The invention is capable of other features or of being practiced or carried out in
various ways. Also, it is to be understood that the phraseology and terminology employed
herein is for the purpose of description and should not be regarded as limiting.
[0018] By way of introduction, embodiments of the present invention are directed to a forward
collision warning (FCW) system. According to US patent 7113867, an image of a lead
vehicle is recognized. The width of the vehicle may be used to detect a change in scale,
or relative scale S, between image frames, and the relative scale is used for determining
time to contact. Specifically, for example, the width of the lead vehicle has a length
(as measured for example in pixels or millimeters) in the first and second images
represented by w(t1) and w(t2) respectively. Then, optionally, the relative scale is
S(t) = w(t2)/w(t1).
[0019] According to the teachings of US patent 7113867, the forward collision warning (FCW)
system depends upon recognition of an image of an obstruction or object, e.g. a lead
vehicle, as recognized in the image frames. In the forward collision warning system
as disclosed in US patent 7113867, the scale change of a dimension, e.g. width, of the
detected object, e.g. vehicle, is used to compute time-to-contact (TTC). However, the
object is first detected and segmented from the surrounding scene. This disclosure
describes a system in which the relative scale change is determined based on optical
flow, the time-to-collision (TTC) and the likelihood of collision are determined from
the relative scale change, and an FCW warning is issued if required. Optical flow
causes the looming phenomenon in the perception of images, which appear larger as the
objects being imaged get closer. Object detection and/or recognition may be performed,
or object detection and/or recognition may be avoided, according to different embodiments
of the present invention.
[0020] The looming phenomenon has been widely studied in biological systems. Looming appears
to be a very low level visual attention mechanism in humans and can trigger instinctive
reactions. There have been various attempts in computer vision to detect looming and
there was even a silicon sensor design for detection of looming in the pure translation
case.
[0021] Looming detection may be performed in real world environments with changing lighting
conditions, complex scenes including multiple objects, and host vehicle motion which
includes both translation and rotation.
[0022] The term "relative scale" as used herein refers to the relative size increase (or
decrease) between an image patch in an image frame and a corresponding image patch
in a subsequent image frame.
[0023] Reference is now made to Figures 9 and 10 which illustrate a system 16 including a
camera or image sensor 12 mounted in a vehicle 18, according to an embodiment of the
present invention. Image sensor 12, imaging a field of view in the forward direction,
delivers images in real time and the images are captured in a time series of image
frames 15. An image processor 14 may be used to process image frames 15 simultaneously
and/or in parallel to serve a number of driver assistance systems. The driver assistance
systems may be implemented using specific hardware circuitry with on-board software
and/or software control algorithms in storage 13. Image sensor 12 may be monochrome
or black-white, i.e. without color separation, or image sensor 12 may be color sensitive.
By way of example in Figure 10, image frames 15 are used to serve pedestrian warning
(PW) 20, lane departure warning (LDW) 21, forward collision warning (FCW) 22 based
on target detection and tracking according to the teachings of US patent 7113867,
forward collision warning based on image looming (FCWL) 209 and/or forward collision
warning based on an FCW trap (FCWT) 601. Image processor 14 is used to process image
frames 15 to detect looming in an image in the forward field of view of camera 12
for forward collision warning 209 based on image looming and FCWT 601. Forward collision
warning 209 based on image looming and forward collision warning based on traps (FCWT)
601 may be performed in parallel with conventional FCW 22 and with the other driver
assistance functions, pedestrian detection (PW) 20, lane departure warning (LDW) 21,
traffic sign detection, and ego motion detection. FCWT 601 may be used to validate
the conventional signal from FCW 22. The term "FCW signal" as used herein refers to
a forward collision warning signal. The terms "FCW signal", "forward collision warning",
and "warning" are used herein interchangeably.
[0024] An embodiment of the present invention is illustrated in Figures 1a and 1b which show
an example of optical flow or looming. Two captured images are shown from a forward
looking camera 12 mounted inside a vehicle 18 as vehicle 18 approaches a metal fence
30. The image in Figure 1a shows a field and a fence 30. The image in Figure 1b shows
the same features with vehicle 18 closer. If a small rectangular patch 32 of the fence
(marked with a dotted line) is viewed, it may be possible to see that the horizontal
lines 34 appear to spread out as vehicle 18 approaches fence 30 in Figure 1b.
[0025] Reference is now made to Figure 2a which shows a method 201 for providing a forward
collision warning 209 (FCWL 209) using camera 12 mounted in host vehicle 18, according
to an embodiment of the present invention. Method 201 does not depend on object
recognition of an object in the forward view of vehicle 18. In step 203 multiple image
frames 15 are acquired by camera 12. The time interval between capture of image frames
is Δt. A patch 32 in image frame 15 is selected in step 205 and a relative scale (S)
of patch 32 is determined in step 207. In step 209, the time-to-collision (TTC) is
determined based on the relative scale (S) and the time interval (Δt) between frames 15.
[0026] Reference is now made to Figure 2b which shows further details of the determining
time-to-collision step 209 shown in Figure 2a, according to an embodiment of the present
invention. In step 211 multiple image points in a patch 32 may be tracked between
image frames 15. In step 213 the image points may be fit to one or more models. A first
model may be a vertical surface model which may include objects such as a pedestrian,
a vehicle, a wall, bushes, trees or a lamp post. A second model may be a road surface
model which considers features of image points on the road surface. A mixed model may
include one or more image points from the road and one or more image points from an
upright object. For models which assume that at least a portion of the image points
are imaged from an upright object, multiple time-to-collision (TTC) values may be
computed. In step 215, the best fit of the image points to a road surface model, a
vertical surface model, or a mixed model enables selection of the time-to-collision
(TTC) value. A warning may be issued based on the time-to-collision (TTC) being less
than a threshold and when the best fit model is the vertical surface model or a mixed
model.
[0027] Alternatively, step 213 may also include, in the image frames 15, the detection of
a candidate image. The candidate image may be a pedestrian or a vertical line of a
vertical object such as a lamppost, for example. In either case of a pedestrian or
a vertical line, patch 32 may be selected so as to include the candidate image. Once
patch 32 has been selected it may then be possible to perform a verification that the
candidate image is an image of an upright pedestrian and/or a vertical line. The
verification may confirm that the candidate image is not an object in the road surface
when the best fit model is the vertical surface model.
[0028] Referring back to Figures 1a and 1b, sub-pixel alignment of patch 32 from the first
image shown in Figure 1a to the second image shown in Figure 1b may produce a size
increase, or increase in relative scale S, of 8% (S = 1.08) (step 207). Given the time
difference between the images of Δt = 0.5 sec, the time to contact (TTC) can be computed
(step 209) using equation 1 below:

    TTC = Δt / (S − 1) = 0.5 / 0.08 = 6.25 sec     (1)
[0029] If vehicle 18 speed ν is known (ν = 4.8 m/s), the distance Z to the target can also
be computed using equation 2 below:

    Z = ν × TTC = 4.8 × 6.25 = 30 m     (2)
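By way of illustration only, the arithmetic of equations 1 and 2 may be restated as a minimal script; the values are those of the example above:

    S = 1.08    # relative scale from sub-pixel alignment (step 207)
    dt = 0.5    # seconds between the two image frames
    v = 4.8     # vehicle speed in m/s

    TTC = dt / (S - 1.0)   # equation 1: 0.5 / 0.08 = 6.25 s
    Z = v * TTC            # equation 2: 4.8 * 6.25 = 30 m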
[0030] Figures 3b and 3d show the vertical motion of points δy as a function of vertical
image position (y), according to a feature of the present invention. Vertical motion
δy is zero at the horizon and negative values are below the horizon. Vertical motion
of points δy is shown in equation 3 below:

    δy = (y − y0) × Δt / TTC     (3)

where y0 is the vertical image position of the horizon.
[0031] Equation (3) is a linear model relating y and δy and has effectively two variables.
Two points may be used to solve for the two variables.
[0032] For vertical surfaces the motion is zero at the horizon (y0) and changes linearly
with image position, since all the points are at equal distance, as in the graph shown
in Figure 3b. For road surfaces, points lower in the image are closer (Z is smaller)
as shown in equation 4 below:

    Z = f × H / (y0 − y)     (4)

where f is the focal length of camera 12 and H is the height of camera 12 above the
road, and so the image motion (δy) increases at more than a linear rate, as shown in
equation 5 below and in the graph of Figure 3d:

    δy = −(y0 − y)² × ν × Δt / (f × H)     (5)
[0033] Equation (5) is a restricted second order equation with effectively two variables.
[0034] Again, two points may be used to solve for the two variables.
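For illustration only (a minimal sketch; the function names and the residual comparison are assumptions, not taken from the disclosure), the two motion models of equations 3 and 5 can be written side by side; with more than two tracked points, the model with the smaller fitting residual indicates whether the patch images an upright surface or the road:

    import numpy as np

    def dy_vertical(y, y0, dt, ttc):
        # Equation 3: points on an upright surface share one distance Z,
        # so vertical motion is linear in the distance from the horizon.
        return (y - y0) * dt / ttc

    def dy_road(y, y0, dt, v, f, H):
        # Equations 4 and 5: on the road Z = f*H/(y0 - y), so the motion
        # grows quadratically toward the bottom of the image.
        return -((y0 - y) ** 2) * v * dt / (f * H)

    def residual(y, dy, model, *args):
        # Smaller residual = better fit; used to choose between models.
        return np.sum((dy - model(y, *args)) ** 2)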
[0035] Reference is now made to Figures 3a and 3c which represent different image frames 15.
In Figures 3a and 3c two rectangular regions are shown by dotted lines. Figure 3a
shows an upright surface (the back of a van). The square points are points that were
tracked (step 211) and whose motion matches (step 213) the motion model for an upright
surface shown in the graph of image motion (δy) versus point height (y) in Figure 3b.
The motion of the triangular points in Figure 3a does not match the motion model for
an upright surface. Reference is now made to Figure 3c which shows a rectangular region
primarily of a road surface. The square points are points that match a road surface
model shown in the graph of image motion (δy) versus point height (y) in Figure 3d.
The motion of the triangular points does not match the motion model for the road
surface; they are outliers. The task in general therefore is to determine which points
belong to the model (and to which model) and which points are outliers, which may be
performed by a robust fit approach as explained below.
[0036] Reference is now made to Figures 4a, 4b, 4c and 4d which show a typical scenario of
a mixture of two motion models found in an image, according to an embodiment of the
present invention. Figure 4a shows an image frame 15 including an image of a metal
fence 30 with horizontal lines 34 and rectangular patch 32a. Further details of patch
32a are shown in Figures 4b and 4c. Figure 4b shows detail of patch 32a in a previous
image frame 15 and Figure 4c shows detail of patch 32a in a subsequent image frame 15
when vehicle 18 is closer to fence 30. Image points are shown as squares, triangles
and circles in Figures 4b and 4c on the upright obstacle 30, and image points are
shown on the road surface leading up to the obstacle 30. Tracking points inside the
rectangular region 32a shows that some points in the lower part of region 32a correspond
to a road model and some points in the upper part of region 32a correspond to an
upright surface model. Figure 4d illustrates a graph of vertical motion of points (δy)
versus vertical point position (y). The recovered model shown graphically in Figure 4d
has two parts: a curved (parabolic) section 38a and a linear section 38b. The transition
point between sections 38a and 38b corresponds to the bottom of upright surface 30.
The transition point is also marked by a horizontal dotted line 36 shown in Figure 4c.
There are some points, shown by triangles in Figures 4b and 4c, that were tracked but
did not match the model; some tracked points which did match the model are shown by
squares; and some points that did not track well are shown as circles.
[0037] Reference is now made to Figure 5 which illustrates another example of looming in an
image frame 15. In image frame 15 of Figure 5 there is no upright surface in patch 32b,
only clear road ahead, and the transition point between the two models is at the
horizon, marked by dotted line 50.
ESTIMATION OF THE MOTION MODEL AND TIME TO COLLISION (TTC)
[0038] The estimation of the motion model and time to contact (TTC) (step 215) assumes that
a region 32 is provided, e.g. a rectangular region in image frame 15. Examples of
rectangular regions are rectangles 32a and 32b shown in Figures 3 and 5 for example.
These rectangles may be selected based on detected objects such as pedestrians or
based on the host vehicle 18 motion.
- 1. Tracking Points (step 211):
- (a) A rectangular region 32 may be tessellated into a 5x20 grid of sub-rectangles.
- (b) For each sub-rectangle, a corner detection algorithm may be performed, for instance
using the method of Harris and Stephens, and the best 5x5 Harris point may be tracked.
The eigenvalues of the structure tensor below may be considered,

    [ ΣIx²    ΣIxIy ]
    [ ΣIxIy   ΣIy²  ]

where Ix and Iy are the horizontal and vertical image gradients and the sums are taken
over the window, and two strong eigenvalues are looked for.
- (c) Tracking may be performed by exhaustive search for the best sum of squared
differences (SSD) match in a rectangular search region of width W and height H. The
exhaustive search at the start is important since it means that a prior motion is not
introduced and the measurements from all the sub-rectangles are more statistically
independent. The search is followed by fine tuning using optical flow estimation, for
instance by the Lucas-Kanade method, which allows for sub-pixel motion.
- 2. Robust Model Fitting (step 213):
- (a) Two or three points may be picked randomly from the 100 tracked points.
- (b) The number of pairs (Npairs) picked depends on the vehicle speed (ν) and is given
for instance by:

    

where ν is in meters/second. The number of triplets (Ntriplets) is given by:

    
- (c) For two points, two models may be fit (step 213). One model assumes the points
are on an upright object. The second model assumes they are both on the road.
- (d) For three points, two models may also be fit. One model assumes the top two points
are on an upright object and the third (lowest) point is on the road. The second model
assumes the upper point is on an upright object and the lower two are on the road.
Two models may be solved for three points by using two points to solve for the first
model (equation 3) and then using the resulting y0 and the third point to solve for
the second model (equation 5).
- (e) Each model in (c) and (d) gives a time-to-collision (TTC) value (step 215). Each
model also gets a score based on how well the 98 other points fit the model. The score
is given by the Sum of the Clipped Square of the Distance (SCSD) between the y motion
of the point and the predicted model motion. The SCSD value is converted into a
probability-like function:

    

where N is the number of points (N = 98). A simplified sketch of this sampling and
scoring is given following this list.
- (f) Based on the TTC value, the vehicle 18 speed and assuming the points are on
stationary objects, the distance to the points Z = ν × TTC may be computed. From the
x image coordinate of each image point, the lateral position in world coordinates may
be computed:

    X = x × Z / f,    X(TTC) = X + (δx × Z / f) × (TTC / Δt)

where δx is the lateral image motion between frames.
- (g) The lateral position at time TTC is computed thus. A binary Lateral Score requires
that at least one of the points from the pair or triplet must be in the vehicle 18 path.
- 3. Multiframe Scores: At each frame 15 new models may be generated, each with its
associated TTC and score. The 200 best (highest scoring) models may be kept from the
past 4 frames 15, where the scores are weighted:

    Score_weighted = α^n × Score

where n = 0, ..., 3 is the age of the score in frames and α = 0.95.
- 4. FCW Decision: the actual FCW warning is given if any of the following three conditions
occurs:
- (a) The TTC for the model with the highest score is below the TTC threshold, the
score is greater than 0.75 and

    

- (b) The TTC for the model with the highest score is below the TTC threshold and

    

- (c)

    
Figures 3 and 4 have shown how to robustly provide an FCW warning for points inside
a given rectangle 32. How the rectangle is defined depends on the application, as shown
by other exemplary features of Figures 7a-7d, 8a and 8b.
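The sampling-and-scoring procedure of steps 2 and 3 above may be illustrated by the following simplified sketch. The clipping value, the exponential score mapping and the helper routine names are assumptions made for the sketch, since the corresponding expressions are not reproduced above; only the two-point vertical-surface model of equation 3 is shown.

    import numpy as np

    rng = np.random.default_rng(0)

    def fit_vertical_two_points(y, dy):
        # Equation 3: dy = (y - y0) * a with a = dt / TTC.
        a = (dy[1] - dy[0]) / (y[1] - y[0])
        y0 = y[0] - dy[0] / a
        return y0, a

    def score_model(y, dy, y0, a, clip=1.0):
        # SCSD: Sum of the Clipped Square of the Distance between the
        # measured and the predicted vertical motion of all the points.
        resid = dy - (y - y0) * a
        scsd = np.sum(np.minimum(resid ** 2, clip))
        return np.exp(-scsd / len(y))   # probability-like score (assumed form)

    def best_vertical_model(y, dy, dt, n_pairs=40):
        best_ttc, best_score = None, -1.0
        for _ in range(n_pairs):
            i, j = rng.choice(len(y), size=2, replace=False)
            if y[i] == y[j] or dy[i] == dy[j]:
                continue                 # degenerate pair, cannot fit
            y0, a = fit_vertical_two_points(y[[i, j]], dy[[i, j]])
            if a <= 0:
                continue                 # no looming, no TTC
            s = score_model(y, dy, y0, a)
            if s > best_score:
                best_ttc, best_score = dt / a, s
        return best_ttc, best_score

    # Multiframe weighting (step 3): a score of age n frames may be
    # down-weighted by alpha**n with alpha = 0.95 before ranking models.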
FCW TRAP FOR GENERAL STATIONARY OBJECTS
[0039] Reference is now made to Figure 6 which shows a method 601 for providing a forward
collision warning trap (FCWT 601), according to an embodiment of the present invention.
In step 203 multiple image frames 15 are acquired by camera 12. In step 605, patch 32
is selected in an image frame 15 which corresponds to where motor vehicle 18 will be
in a predetermined time interval. Patch 32 is then monitored in step 607. In decision
step 609, if a general object is imaged and detected in patch 32, a forward collision
warning is issued in step 611. Otherwise, capturing of image frames continues with
step 203.
[0040] Figures 7a and 7b show examples of the FCWT 601 warning being triggered on walls 70,
in Figure 7d on the sides of a car 72, and in Figure 7c on boxes 74a and 74b, according
to an exemplary embodiment of the present invention. Figures 7a-7d are examples of
general stationary objects which require no prior class-based detection. The dotted
rectangular region is defined as a target W = 1 m wide at the distance where the host
vehicle will be in t = 4 seconds:

    w = f × W / (ν × t),    y = f × H / (ν × t)

[0041] where ν is the vehicle 18 speed, H is the height of camera 12, f is the focal length,
and w and y are the rectangle width and vertical position in the image respectively.
The rectangular region is an example of an FCW trap. If an object "falls" into this
rectangular region, the FCW trap may generate a warning if the TTC is less than a
threshold.
IMPROVING PERFORMANCE USING MULTIPLE TRAPS:
[0042] In order to increase the detection rate, the FCW trap may be replicated into 5 regions
with 50% overlap, creating a total trap zone of 3 m width, as in the sketch below.
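The trap geometry may be illustrated as follows; the focal length f = 950 pixels and camera height H = 1.2 m are values assumed only for the sketch, and the replication arithmetic simply realizes the 50% overlap described above:

    def trap_rect(v, f=950.0, H=1.2, W=1.0, t=4.0):
        # Distance where the host vehicle will be in t seconds.
        Z = v * t
        # Image width of a W-meter target and its vertical position
        # below the horizon at that distance (pinhole projection).
        w = f * W / Z
        y = f * H / Z
        return w, y

    def replicated_traps(v, n=5, overlap=0.5):
        # n traps of width W with 50% overlap: centers step by half a
        # trap, so 5 one-meter traps cover a 3 m total trap zone.
        w, y = trap_rect(v)
        step = w * (1.0 - overlap)
        return [((i - (n - 1) / 2) * step, y, w) for i in range(n)]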
[0043] The position of the FCW trap may be selected dynamically (step 605) based on yaw
rate: the trap region 32 may be shifted laterally based on the vehicle 18 path determined
from a yaw rate sensor, the vehicle 18 speed and a dynamical model of the host vehicle 18.
FCW TRAP FOR VALIDATING FORWARD COLLISION WARNING SIGNALS
[0044] Special classes of objects such as vehicles and pedestrians can be detected in image
frames 15 using pattern recognition techniques. According to the teachings of US patent
7113867, these objects are then tracked over time and an FCW 22 signal can be generated
using the change in scale. However, before giving a warning it is important to validate
the FCW 22 signal using an independent technique. Validating the FCW 22 signal using
an independent technique, for instance using method 209 (Figure 2b), may be particularly
important if system 16 will activate the brakes. In radar/vision fusion systems the
independent validation can come from the radar. In a vision-only based system 16, the
independent validation comes from an independent vision algorithm.
[0045] Object (e.g. pedestrian, lead vehicle) detection is not the issue: a very high detection
rate can be achieved with a very low false rate. A feature of the present invention
is to generate a reliable FCW signal without too many false alarms that will irritate
the driver, or worse, cause the driver to brake unnecessarily. A possible problem with
conventional pedestrian FCW systems is avoiding false forward collision warnings, as
the number of pedestrians in the scene is large but the number of true forward collision
situations is very small. Even a 5% false rate would mean the driver would get frequent
false alarms and probably never experience a true warning.
[0046] Pedestrian targets are particularly challenging for FCW systems because the targets
are non-rigid, making tracking (according to the teachings of US patent 7113867)
difficult, and scale change in particular is very noisy. Thus the robust model (method
209) may be used to validate the forward collision warning on pedestrians. The
rectangular zone 32 may be determined by a pedestrian detection system 20. An FCW
signal may be generated only if target tracking performed by FCW 22, according to US
patent 7113867, and the robust FCW (method 209) give a TTC smaller than one or more
threshold values, which may or may not be previously determined. Forward collision
warning FCW 22 may have a different threshold value from the threshold used in the
robust model (method 209).
[0047] One of the factors that can add to the number of false warnings is that pedestrians
typically appear on less structured roads where the driver's driving pattern can be
quite erratic, including sharp turns and lane changes. Thus some further constraints
may need to be included on issuing a warning:
When a curb or lane mark is detected, the FCW signal is inhibited if the pedestrian
is on the far side of the curb or lane mark and neither of the following conditions
occurs:
- 1. The pedestrian is crossing the lane mark or curb (or approaching very fast). For
this it may be important to detect the pedestrian's feet.
- 2. The host vehicle 18 is crossing the lane mark or curb (as detected by an LDW 21
system for example).
[0048] The driver's intentions are difficult to predict. If the driver is driving straight,
has not activated the turn signals and there are no lane markings predicting otherwise,
it is reasonable to assume that the driver will continue straight ahead. Thus, if there
is a pedestrian in path and the TTC is below threshold, an FCW signal can be given.
However, if the driver is in a turn it is equally likely that he/she will continue in
the turn or break out of the turn and straighten out. Thus, when yaw rate is detected,
an FCW signal may only be given if the pedestrian is in path assuming the vehicle 18
continues at the same yaw and the pedestrian is also in path if the vehicle straightens
out.
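As a sketch of the in-path test just described (illustrative only: the circular-arc path model, the 1 m half-width and the TTC threshold are assumptions, not values from the disclosure):

    import math

    def in_straight_path(x, half_width=1.0):
        # Pedestrian at lateral offset x (meters) relative to the vehicle.
        return abs(x) <= half_width

    def in_yaw_path(x, z, v, yaw_rate, half_width=1.0):
        # Constant speed v and yaw rate omega trace a circle of radius
        # R = v / omega centered at (R, 0) in vehicle coordinates.
        if abs(yaw_rate) < 1e-6:
            return in_straight_path(x, half_width)
        R = v / yaw_rate
        d = math.hypot(x - R, z)       # pedestrian distance from center
        return abs(d - abs(R)) <= half_width

    def pedestrian_fcw(x, z, v, yaw_rate, ttc, ttc_threshold=2.5):
        # When yaw is detected, the pedestrian must be in path both at
        # the current yaw AND if the vehicle straightens out.
        return (in_yaw_path(x, z, v, yaw_rate)
                and in_straight_path(x)
                and ttc < ttc_threshold)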
[0049] The FCW trap 601 concept can be extended to objects consisting mainly of vertical
(or horizontal) lines. A possible problem with using the point-based techniques on
such objects is that the good Harris (corner) points are most often created by the
vertical lines on the edge of the object intersecting horizontal lines on the distant
background. The vertical motion of these points will be like the road surface in the
distance.
[0050] Figures 8a and 8b show examples of objects with strong vertical lines 82, on a box
84 in Figure 8a and on a lamp post 80 in Figure 8b. Vertical lines 82 are detected in
the trap zone 32. The detected lines 82 may be tracked between images. Robust estimation
may be performed by pairing up lines 82 from frame to frame and computing a TTC model
for each line pair, assuming a vertical object, and then giving a score based on the
SCSD of the other lines 82. Since the number of lines may be small, often all
combinatorially possible line pairs are tested. Only line pairs where there is
significant overlap are used. In the case of horizontal lines, triplets of lines also
give two models, as with points.
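Illustratively, the TTC model for one pair of vertical lines reduces to the relative-scale computation of equation 1, with the horizontal separation of the lines playing the role of the tracked width (a minimal sketch; the function name and the rejection of non-looming pairs are assumptions):

    def ttc_from_line_pair(x1_prev, x2_prev, x1_cur, x2_cur, dt):
        # Horizontal separation of the two vertical lines in each frame
        # acts as the width w(t) in S = w(t2) / w(t1).
        sep_prev = abs(x2_prev - x1_prev)
        sep_cur = abs(x2_cur - x1_cur)
        if sep_cur <= sep_prev:
            return None                 # pair is not looming
        S = sep_cur / sep_prev          # relative scale of the line pair
        return dt / (S - 1.0)           # equation 1

    # E.g. lines 100 px apart and then 108 px apart 0.5 s later give
    # S = 1.08 and TTC = 6.25 s, as in the worked example above.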
[0051] The indefinite articles "a" and "an", as used herein, such as in "an image" or "a
rectangular region", have the meaning of "one or more", that is, "one or more images"
or "one or more rectangular regions".
[0052] Although selected embodiments of the present invention have been shown and described,
it is to be understood the present invention is not limited to the described features.
Instead, it is to be appreciated that changes may be made to these features without
departing from the principles and spirit of the invention, the scope of which is defined
by the claims and the equivalents thereof.
1. A method of providing a forward collision warning using a camera mountable in a motor
vehicle, the method including the steps of:
acquiring a plurality of image frames at known time intervals;
selecting a patch in at least one of the image frames;
tracking optical flow between the image frames of a plurality of image points of said
patch;
fitting the image points to at least one model; and
based on the fit of the image points to said at least one model, determining a time-to-collision
(TTC) if a collision is expected.
2. A method according to claim 1, including the steps of:
fitting the image points to a road surface model, wherein at least a portion of the
image points is modeled to be imaged from a road surface;
determining a collision is not expected based on the fit of the image points to the
model.
3. A method according to claim 1 or 2, including the steps of:
fitting the image points to a vertical surface model, wherein at least a portion of
the image points is modeled to be imaged from a vertical object; and
determining the TTC based on the fit of the image points to the vertical surface model.
4. A method according to claim 3, including the steps of:
detecting in the image frames a candidate image of a pedestrian wherein said patch
is selected to include the candidate image of the pedestrian; and
verifying the candidate image as an image of an upright pedestrian and not an object
in the road surface when the best fit model is the vertical surface model.
5. A method according to any preceding claim, wherein said at least one model includes
a mixed model wherein a first portion of the image points is modeled to be imaged
from a road surface and a second portion of the image points is modeled to be imaged
from a substantially vertical object.
6. A system including a camera and a processor mountable in a motor vehicle, the system
operable to provide a forward collision warning, the system being operable to:
acquire a plurality of image frames at known time intervals;
select a patch in at least one of the image frames;
track optical flow between the image frames of a plurality of image points of said
patch;
fit the image points to at least one model; and
based on the fit of the image points to the at least one model, determine a time-to-collision
(TTC) if a collision is expected.
7. A system according to claim 6, further operable to:
fit the image points to a road surface model;
determine a collision is not expected based on the fit of the image points to the
road surface model.
8. A method of providing a forward collision warning using a camera and a processor mountable
in a motor vehicle, the method including the steps of:
acquiring a plurality of image frames at known time intervals;
selecting a patch in an image frame corresponding to where the motor vehicle will
be in a predetermined time interval; and
monitoring said patch, if an object is imaged in said patch then issuing a forward
collision warning.
9. A method according to claim 8, including the steps of:
determining if said object includes a substantially vertical portion.
10. A method according to claim 9, wherein said determining is performed by:
tracking optical flow between the image frames of a plurality of image points of said
patch; and
fitting the image points to at least one model.
11. A method according to claim 9 or 10, wherein at least a portion of the image points
is modeled to be imaged from a vertical object; and
based on the fit of the image points to said at least one model, determining a time-to-collision
(TTC), if a collision is expected.
12. A method according to any one of claims 9 to 11, wherein said at least one model includes
a road surface model, the method including the steps of:
fitting the image points to a road surface model;
determining a collision is not expected based on the fit of the image points to the
road surface model.
13. A method according to any one of claims 9 to 12, including the step of:
issuing said warning when the best fit model includes a vertical surface model.
14. A system for providing a forward collision warning in a motor vehicle, the system
including:
a camera mountable in the motor vehicle, said camera operable to acquire a plurality
of image frames at known time intervals;
a processor operable to:
select a patch in an image frame corresponding to where the motor vehicle will be
in a predetermined time interval;
to monitor said patch; and
if an object is imaged in said patch, then issue a forward collision warning.
15. A system according to claim 14, wherein the processor is operable to determine if
said object includes a substantially vertical portion by:
tracking optical flow between the image frames of a plurality of image points of the
object in the patch;
fitting the image points to at least one model; and
based on the fit of the image points to said at least one model, determining a time-to-collision
(TTC) if a collision is expected.