[0001] The present invention relates to an apparatus and a method for detecting an object
in a surveillance area of a vehicle and, in particular, to an advanced turning-assistant
system for commercial vehicles.
Background
[0002] Lane changes and turning maneuvers are potentially critical traffic situations for commercial
vehicles, where other traffic participants are easily overlooked. In passenger cars, blind
spot assist systems are used to prevent accidents in such situations by supporting
the driver, for example, during lane changes or turns. A driver of a passenger car
who is warned by such a turn assist system is in most cases able to confirm the warning
by a visual inspection of the respective area, for example by moving or turning her/his
head. For commercial vehicles, on the other hand, this is in most cases not possible,
because the driver has no view of, for example, the passenger side of the vehicle
and thus cannot confirm a warning given by a turning assist system by visual inspection.
[0003] Therefore, if conventional systems known from passenger cars are implemented in commercial
vehicles, the level of awareness of the driver will be significantly lower, in particular
for turns to the side opposite the driver. Thus, the known systems should be extended
to give the driver of a commercial vehicle the same level of confidence in traffic
situations where no visual confirmation of a possibly dangerous situation can be obtained.
[0004] Conventional systems available for commercial vehicles are based on various sensors
(e.g. a radar sensor or a laser scanner or a camera) and provide, for example, a blinking
light (e.g. in a rearview mirror) or an audible warning for the driver. These systems
do not provide the same level of confidence as is known from passenger cars.
[0005] For example, DE 10 2010 048 144 A1 discloses a vehicle which has a sensor to detect
objects in a surrounding area of the vehicle, wherein the surrounding area extends along
one long side of the vehicle. Furthermore, DE 10 2009 041 556 A1 discloses a blind spot
assist system based on ultrasonic sensors which are arranged along a side and a corner
region of the vehicle. DE 10 2012 010 876 A1 discloses another drive assist system based
on image sensors which are arranged between the front axle and the rear axle in a lateral
region of the vehicle main portion. Further driver assist systems are disclosed in
WO 2014/037064 A1, with sensors arranged at a vehicle trailer, and in WO 2014/041023,
which describes a collision warning system for lane changes.
[0006] All these conventional systems provide some support for the driver in making sure that
no object is present in a blind spot when turning with a commercial vehicle.
However, these conventional systems are disadvantageous in that they either rely on
multiple sensors installed on the vehicle or are subject to the same limitations as
the systems known from passenger cars, and thus cannot ensure the desired level of safety.
[0007] Therefore, there is a need for an apparatus for detecting an object in a
surveillance area of a vehicle which overcomes the disadvantages of the conventional
systems described above and provides, in particular, an increased level of safety
and robustness for commercial vehicles.
Summary of the Invention
[0008] The present invention solves the afore-mentioned problem by providing an apparatus
according to claim 1 and a method according to claim 15. The dependent claims refer
to specifically advantageous realizations of the subject matter of claim 1.
[0009] The present invention relates to an apparatus for detecting an object in a surveillance
area of a vehicle. The apparatus comprises at least one camera for capturing image
data of the surveillance area, at least one radar sensor (or radar unit) for providing
information indicative of a distance to the object, and a control unit configured
to receive the image data and the information from the radar sensor(s) to generate
a visual representation of the object based on the received image data and/or on the
information indicative of the distance.
[0010] The surveillance area can be any area surrounding the vehicle and may cover, in particular,
an adjacent driving lane or a sidewalk or a bicycle lane. Each sensor has its own coverage
area from which it can acquire sensor data, and this coverage area is possibly larger than
the surveillance area. Both sensors thus cover a common area, which can be identified
as the surveillance area or of which the surveillance area can be a part. In any case,
the surveillance area is part of both coverage areas.
[0011] The distance may be defined as the distance to the object measured from the respective
sensor or from the vehicle (or any particular part of the vehicle). The present invention
does not rely on any particular coordinate system. Rather, the position, distance
or angle may be defined with respect to any desired coordinate system (e.g. the distance(s)
or angle(s) can be measured with respect to the sensor units).
[0012] Information indicative of a distance shall be interpreted very broadly and shall
contain any information which can be used to derive therefrom a distance between the
vehicle (or the radar sensor) and the object. This information may include in particular
time delay information between a transmitted and a reflected radar signal measured
by the radar sensor or the corresponding moments in time of the emission and reception
of the (RF) signals used. Based on the time delay, on the two different measured moments
in time, or on any other such information, the control unit can then determine or calculate
the distance to and/or the position of the object.
[0013] The above-mentioned problem is thus solved by the present invention by providing
a redundancy of two independent sensor units of different types that enable a visual
feedback to the driver while avoiding false detections (e.g. due to abnormal weather
conditions affecting one sensor type).
[0014] For this feedback, the apparatus may further comprise a display unit that is configured
to receive the visual representation of the object from the control unit and to show
at least part of the surveillance area together with the object to the driver of the
vehicle. The display may be configured to show the driver the image captured by
the at least one camera as one image and, as a second image, a radar picture
obtained from the information provided by the at least one radar sensor.
[0015] To obtain a radar picture, in yet another embodiment, the at least one radar sensor
may be configured to transmit a radio frequency (RF) signal and to receive a reflected
RF signal from the object. The at least one radar sensor may further be configured
to determine an angle or an angular range from which the reflected RF signal was received. The
control unit may be configured to determine a position of the object based on the
determined angle or angular range and the information indicative of the distance.
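By way of illustration only, and not forming part of the claimed subject-matter, this position determination may be sketched in Python as follows; the ground-plane coordinate frame and the function name are assumptions of this sketch:

```python
import math

def object_position(distance_m: float, angle_rad: float):
    """Convert a radar measurement (distance d, angle alpha) into a
    Cartesian position in a sensor-centred ground-plane frame.
    Here x points along the sensor boresight and y to its left;
    both conventions are illustrative assumptions."""
    x = distance_m * math.cos(angle_rad)
    y = distance_m * math.sin(angle_rad)
    return (x, y)

# Example: an object at d = 12 m under alpha = 30 degrees
print(object_position(12.0, math.radians(30.0)))  # -> (approx. 10.39, 6.0)
```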
[0016] Hence, according to embodiments of the present invention, two visual representations
are available, one referring to the image captured by the camera(s) and the second
derived from the information received from the radar sensor(s). Both images may either
be overlaid on the display or can be displayed as separate images, e.g. adjacent to
each other.
[0017] However, the radar sensor may not distinguish between the object of interest and
background objects (e.g. trees or buildings of which the driver is aware). Therefore,
in order to detect an object of interest (i.e. to distinguish it from the background),
successive position measurements may be carried out. Similarly, multiple objects of interest
may be present in the surveillance area. To correctly identify these objects, the
at least one radar sensor may be configured to repeatedly measure a distance between
one or more candidate objects and the vehicle. The control unit may receive the repeatedly
measured distances and, based thereon, can determine a relative motion of the detected
objects with respect to the vehicle. If the objects move relative to each other, the reflected
radar signals can be identified as signals originating from different objects. Similarly,
background objects can be detected as static objects at a larger distance (e.g. beyond
a certain threshold). As a result, the apparatus is able to eliminate background objects
and/or to detect one or more further objects of interest.
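A minimal sketch of such a classification, assuming the control unit keeps a short distance history per candidate track; the track format and both thresholds are illustrative assumptions, not prescribed by the embodiment:

```python
def classify_detections(scan_history, motion_eps=0.2, background_range=25.0):
    """Label candidate radar tracks from repeatedly measured distances.
    scan_history maps a track id to its distance history in metres,
    e.g. {"obj1": [12.1, 11.8, ...]}. A track whose distance barely
    changes and which stays beyond background_range is treated as
    background; both thresholds are illustrative assumptions."""
    labels = {}
    for track_id, distances in scan_history.items():
        spread = max(distances) - min(distances)
        is_static = spread < motion_eps              # no relative motion
        is_far = min(distances) > background_range   # beyond the threshold
        labels[track_id] = "background" if (is_static and is_far) else "of interest"
    return labels

# A static, distant reflector is eliminated; an approaching object is kept.
print(classify_detections({"tree": [30.0, 30.1, 30.0],
                           "cyclist": [12.0, 11.2, 10.5]}))
```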
[0018] The same applies to the at least one camera. When only one camera is available, there
may be a problem in identifying an object as being distinct from the background. For
example, the ground or the driving lane may have certain patterns, which are typically
not objects of any interest to the driver and thus should be eliminated from the captured
images. As with the radar sensor, this can be achieved by taking subsequent images
so that any relative motion of the vehicle with respect to the object and with respect
to the background can be identified. Hence, any background object can be identified
and thus eliminated from the captured images.
[0019] Therefore, in yet another embodiment, the at least one camera may be configured to
capture subsequent images. The control unit may further be configured to process the
subsequent images received from the at least one camera using a processing algorithm
to detect the object in the surveillance area and/or to eliminate background objects
and/or to detect one or more further objects.
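One possible processing algorithm in the sense of this embodiment is simple frame differencing, sketched below with OpenCV. This is a deliberately simplified stand-in (it assumes a near-stationary vehicle or prior ego-motion compensation), not the specific algorithm of the invention:

```python
import cv2

def detect_moving_objects(prev_frame, frame, thresh=25, min_area=500):
    """Return bounding boxes of image regions that changed between two
    subsequent camera frames; unchanged background is suppressed.
    The threshold and minimum area are illustrative tuning values."""
    g0 = cv2.cvtColor(prev_frame, cv2.COLOR_BGR2GRAY)
    g1 = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(g0, g1)                       # pixel-wise change
    _, mask = cv2.threshold(diff, thresh, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours
            if cv2.contourArea(c) >= min_area]
```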
[0020] The same aim can be achieved if multiple cameras are available, so that at least
two pictures can be taken from different angles, which makes it possible to identify
objects belonging to the background or objects that are further away.
[0021] As already mentioned above, the control unit may be configured to fuse (combine)
both sensor signals (or sensor images) into one single sensor image. This can, for
example, be done by highlighting the object in the visual representation of the image
captured by the camera, wherein the highlighted object corresponds to the object detected
by the at least one radar sensor. Hence, it is not necessary to detect the object
by the camera(s). Instead, the position of an object detected by the radar sensor
or the control unit can simply be highlighted in the image taken by the camera(s)
so that the driver can decide about the relevance of this object. Therefore, in yet
another embodiment the control unit is configured to combine information received
from the at least one radar sensor and the at least one camera and to highlight the
object or multiple objects in the image shown to the driver of the vehicle.
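As a sketch of this highlighting step, the radar-derived position can be projected into the camera image and marked there; the homography and the fixed box size are assumptions of this illustration (a real system would use a calibrated projection and an object-size estimate):

```python
import numpy as np
import cv2

def highlight_radar_object(image, ground_to_image_h, position_xy):
    """Mark a radar-detected object in the camera image.
    ground_to_image_h is a 3x3 homography mapping ground-plane
    coordinates (metres) to image pixels; its availability from an
    offline calibration is an assumption of this sketch."""
    x, y = position_xy
    p = ground_to_image_h @ np.array([x, y, 1.0])
    u, v = int(p[0] / p[2]), int(p[1] / p[2])        # to pixel coordinates
    cv2.rectangle(image, (u - 40, v - 80), (u + 40, v + 10),
                  color=(0, 0, 255), thickness=3)    # fixed-size highlight box
    return image
```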
[0022] In yet another embodiment the control unit may be configured to predict a path of
the vehicle and/or a path of the object and is further configured to issue
a warning if the path of the vehicle intersects the path of the object. Also, the
relative path between the object and the vehicle can be determined. The warning can
be issued by a separate warning module (for example a loudspeaker or a warning light),
which may or may not be part of the control unit and may or may not already be included
in the vehicle, but may be controlled by the control unit.
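A minimal sketch of such a path-intersection check under a constant-velocity assumption; the closest-point-of-approach formulation, the prediction horizon and the safety radius are illustrative choices, not prescribed by the embodiment:

```python
import numpy as np

def collision_warning(p_vehicle, v_vehicle, p_object, v_object,
                      horizon_s=4.0, safety_radius_m=2.0):
    """Extrapolate both paths under a constant-velocity assumption and
    warn if they come closer than safety_radius_m within the prediction
    horizon. Horizon and radius are illustrative parameters."""
    rel_p = np.asarray(p_object, float) - np.asarray(p_vehicle, float)
    rel_v = np.asarray(v_object, float) - np.asarray(v_vehicle, float)
    speed2 = float(rel_v @ rel_v)
    # time of closest approach, clamped to [0, horizon]
    t = 0.0 if speed2 == 0.0 else max(0.0, min(horizon_s, -(rel_p @ rel_v) / speed2))
    min_dist = float(np.linalg.norm(rel_p + t * rel_v))
    return min_dist < safety_radius_m, t

# Made-up scene: vehicle drives straight while a cyclist crosses its path.
print(collision_warning((0, 0), (5, 0), (20, -8), (0, 2)))  # (True, 4.0)
```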
[0023] The warning may comprise an acoustic signal and/or a haptic signal and/or an optical
signal. The haptic signal may be related to any kind of vibration, which the driver
is able to recognize.
[0024] In yet another embodiment the control unit may further be configured to issue a brake
signal to enforce a braking and/or a steer signal to enforce a steering of the vehicle
in case the path of the vehicle and the path of the object indicate a collision for a
predetermined period of time (e.g. if the driver ignores the warning and a collision
is going to happen if the vehicle stays on its path). The predetermined period
can be selected such that the remaining time to the collision is as long as the vehicle
needs to prevent the collision (i.e. it may define the latest moment for an intervention).
During the enforced braking or steering, the optional warning module can issue a constant
warning signal, upon which the driver can override the enforced braking and/or
steering of the vehicle by taking appropriate actions to avoid the collision with the
object.
[0025] In yet another embodiment the at least one camera may comprise a first and a second
camera, and the at least one radar sensor may comprise a first and a second radar sensor,
wherein the first camera and the first radar sensor are installed on one side of the
vehicle and the second camera and the second radar sensor are installed on another
side of the vehicle to detect objects on different sides of the vehicle.
[0026] In yet another embodiment the at least one camera comprises a fish-eye lens camera
to capture image data of a large area (e.g. with a viewing angle of more than
90° or up to 180°).
[0027] In yet another embodiment the control unit may further be configured to transform
the image data received from the at least one camera into a bird's eye view (e.g. by
using a conversion method).
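One common conversion method is a planar perspective (homography) warp, sketched below with OpenCV; the four point correspondences are placeholder values that would in practice come from a one-off calibration:

```python
import numpy as np
import cv2

# Four points on the ground plane as seen in the camera image (pixels)
# and their target locations in the top-down output image; all eight
# coordinates are placeholders that a one-off calibration would provide.
src = np.float32([[420, 720], [860, 720], [760, 420], [520, 420]])
dst = np.float32([[300, 600], [500, 600], [500, 100], [300, 100]])
H = cv2.getPerspectiveTransform(src, dst)

def to_birds_eye(frame):
    """Warp the camera view into a top-down (bird's eye) view."""
    return cv2.warpPerspective(frame, H, (800, 600))
```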
[0028] The present invention also relates to a vehicle with one of the described apparatuses,
wherein the surveillance area may include an area at a side of the vehicle that extends
over an angular range of more than 90° or up to 170° measured about the side of the
vehicle.
[0029] The present invention relates also to a method for detecting an object in a surveillance
area of a vehicle. The method comprises: capturing image data of the surveillance
area by at least one camera; providing information indicative of a distance to the
object by at least one radar sensor; receiving the image data and the information
by a control unit; and generating, by the control unit, a visual representation of
the object based on the received image data and/or the information indicative of the
distance.
[0030] This method may also be implemented in software or as a computer program product,
and the order of the steps may not be important for achieving the desired effect. In addition,
all functions described in conjunction with the apparatus may be implemented in further
embodiments of the method.
Brief Description of the Drawings
[0031] Various embodiments of the present invention will be described in the following by
way of examples only, and with respect to the accompanying drawings, in which:
- Fig. 1 depicts an apparatus for detecting an object in a surveillance area according to an embodiment of the present invention, when installed on a commercial vehicle;
- Fig. 2 depicts further optional components of the embodiment depicted in Fig. 1;
- Fig. 3 depicts another embodiment with multiple radar sensors and multiple cameras;
- Fig. 4 shows a visual representation of the surveillance area as seen by the driver of the commercial vehicle;
- Fig. 5 shows a bird's eye view of the visual representation of the surveillance area as seen by the driver of the commercial vehicle; and
- Fig. 6 illustrates a flow chart of a method for detecting an object in a surveillance area according to an embodiment of the present invention.
Detailed Description
[0032] Fig. 1 depicts an apparatus for detecting an object 10 in a surveillance area 30,
50 of a vehicle 70. The apparatus comprises one camera 110 for capturing image data
of the surveillance area 30, one radar sensor 120 for providing a distance d to the
object 10, and a control unit 130. The control unit 130 is configured to receive the
image data and the information from the radar sensor 120 to generate a visual representation
of the object 10 based on the received image data and/or the distance d.
[0033] In the embodiment of Fig. 1, the apparatus is installed on an exemplary commercial
vehicle 70 with a towing vehicle 71 (e.g. a tractor) and a trailer 72, wherein only
one camera 110 and only one radar sensor 120 are installed at a right-hand side of
the towing vehicle 71 in the driving direction (to the left in Fig. 1). The control
unit 130 and an optional display 140 may further be installed on the towing vehicle
71. The control unit 130 is connected to the camera 110, to the radar sensor 120 and
to the display 140 to receive the signals from the camera 110 and the radar sensor
120 and to supply corresponding visual data to the display 140. The radar sensor 120
may provide an angular resolution of the detected object 10 so that the control unit
130 is able to determine a position for the object 10 (e.g. given by the distance
d and the angle α).
[0034] The camera 110 may capture images in the coverage area 30, and the
radar sensor 120 has a range indicated by the dotted line 50. In particular, both
coverage areas 30, 50 overlap significantly, and the overlap may define
the surveillance area. The range 50 of the radar sensor 120 is limited by the radar
signal used. For example, the one or more radar sensors 120 may operate with radio frequency
signals at exemplary frequencies between 10 and 100 GHz, e.g. at about 24 GHz and/or at
about 77/79 GHz. For example, one radar sensor may operate at 24 GHz whereas another
may operate at 77/79 GHz. The detection range may go up to 100 meters, although in
the target operating mode the range may be around 25 meters. Such a range is advantageous,
because it allows a whole side of the vehicle 70 to be covered with one radar sensor 120,
which can thus be installed on the towing vehicle 71; there is no need to install
any sensor on the trailer 72 (hence the system operates independently of the trailer). In
addition, in this case the radar sensor provides more accurate information about the detected objects.
[0035] In further embodiments, other or additional distance-measuring units may be used, for
example ultrasonic devices, or any other electromagnetic radiation suitable
for measuring a distance between the sensor 120 and the object 10 may be used (e.g. laser
light). Therefore, in further embodiments, the radar coverage area 50 can also be
smaller than the camera coverage area 30. However, it is advantageous to use electromagnetic
wave signals which can travel a sufficiently long distance to extend the
coverage area 50, so that a single radar sensor 120 suffices to cover the whole
side of the vehicle 70 and there is no need to distribute many radar sensors over
the whole long side of the vehicle 70 (as in conventional systems).
[0036] The camera 110 and the radar sensor 120 may be installed at different locations.
For example, the camera 110 may be installed at an upper portion of the driver cabin
and the radar unit 120 may be installed at a lower portion (e.g. between two axles).
Consequently, the viewing direction of the camera 110 is downward, which limits the
coverage area 30 of the camera 110. On the other hand, the radar unit 120 may emit
signals parallel to the ground so that the coverage area 50 of the radar unit 120
is merely limited by the range of the used signals. Hence, as shown in Fig. 1, the
range 50 of the radar sensor 120 may be larger than the coverage area 30 covered by
the camera 110 (although this area can be changed by changing the orientation of the
camera 110).
[0037] Fig. 2 depicts a further embodiment of the apparatus according to the present invention,
which differs from the embodiment as shown in Fig. 1 in that the camera 110 is attached
near or at a corner of the towing vehicle 71. With the camera 110 installed in the
corner region of the towing vehicle 71 it becomes possible to extend the coverage
area 30 of the camera 110 to cover also the front side of the vehicle 70. All other
features as depicted in Fig. 2 are the same as in Fig. 1 so that a repetition of the
description is not needed here.
[0038] As will be described in the following, a single radar sensor 120 can already be used
to obtain a radar image, which is suitable to be shown to the driver as an additional
visual representation of the surveillance area.
[0039] As said before, the radar sensor 120 is configured to measure the distance d between
the radar sensor 120 and the object 10. In order to measure this distance d the radar
sensor may emit an RF-signal, which is reflected by the object 10 and returns to the
radar sensor 120. The radar sensor 120 measures the time difference between the emission
of the RF-signal to the object 10 and the reception of the return signal. From this
time difference, while taking into account the propagation speed of the wave signal,
the control unit 130 (or the radar sensor 120 itself) can determine the distance d
from the radar sensor to the object 10.
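Numerically, this time-of-flight calculation reduces to a one-liner; the function name below is a hypothetical stand-in for the corresponding step in the control unit 130 (or the radar sensor 120 itself):

```python
SPEED_OF_LIGHT_M_S = 299_792_458.0

def distance_from_round_trip(t_emit_s, t_receive_s):
    """Distance d derived from the round-trip time of the RF signal;
    the factor 1/2 accounts for the signal travelling to the object
    and back to the radar sensor 120."""
    return 0.5 * SPEED_OF_LIGHT_M_S * (t_receive_s - t_emit_s)

# Example: a round trip of about 167 ns corresponds to roughly 25 m.
print(distance_from_round_trip(0.0, 167e-9))  # ~25.03
```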
[0040] The radar sensor 120 may further be configured to provide an angular resolution of
the detected return signals. For example, the radar sensor 120 can scan the coverage
area 50, starting on the left-hand side in Fig. 2 and subsequently scanning
the area up to the right-hand side of Fig. 2 (or vice versa). When the radar sensor
120 receives a return signal, the radar sensor 120 can determine the corresponding
angle α associated with a reflecting object. For example, the radar sensor 120 may
be configured to emit pulses of RF signals, for example one pulse for each angular value (or
angular range), and if a reflected return signal is received, the radar sensor
can assign the corresponding distance d and angle α to the detected object 10.
[0041] If multiple objects are present in the coverage area 50 (e.g. a first object 10a
and a second object 10b), the radar sensor may receive multiple reflected signals
from the multiple objects 10a, 10b. However, if the multiple objects 10 do not move
relative to each other, the radar sensor 120 (or the control unit 130) cannot distinguish
whether the multiple reflected signals originate from different parts of only one
object or whether two objects are present in the coverage area 50. However, the speed
of each object can be determined by performing subsequent scans over the coverage
area 50. If the independent objects move relative to each other, they can be identified
as different objects, and the radar sensor 120 can assign the corresponding distances
d1, d2 and angles α1, α2 to the detected first and second objects 10a, 10b. For example,
multiple objects can be detected or identified (by scanning the angular range) if
the relative velocity and/or relative position between the multiple objects exceeds
a certain threshold. These objects may then be (visually) separated and identified as
separate objects. The field of view of these sensors may go up to 170 degrees.
[0042] As a result, the at least one radar sensor 120 can provide sufficient information
to detect the object(s) in the coverage area 50 and to determine the position of the
object(s) 10. The determined position may then be used to provide a further visual
feedback to the driver. Optionally, this representation may be combined with the picture
captured with the camera 110. For example, the control unit 130 can overlay both pictures
and highlight a depicted object 10. In addition, since the distance to the object
10 is known, any background object can be eliminated (e.g. simply by ignoring objects
beyond a particular threshold).
[0043] Fig. 3 depicts a further embodiment of the apparatus installed on a vehicle. This
embodiment comprises a first radar sensor 121, a second radar sensor 122 and a third
radar sensor 123. The first radar sensor 121 is installed on the right-hand side of
the towing vehicle 71. The second radar sensor 122 is installed on the left-hand side
of the towing vehicle 71. The third radar sensor 123 is installed at a front side
of the towing vehicle 71. In addition, the embodiment comprises a first camera 111
and a second camera 112. The first camera 111 is installed at the front right corner
of the towing vehicle. The second camera 112 is attached at the front left corner
of the towing vehicle 71.
[0044] With such an installed apparatus it is possible to cover not only the front side
of the vehicle 70 but also both sides, the right-hand side as well as the left-hand
side of the vehicle. For example, the first camera 111 and the first radar sensor
121 may cover the right-hand side of the vehicle 70 (as it was described in conjunction
with Fig. 1). The second camera 112 and the second radar sensor 122 cover the left-hand
side of the vehicle. This coverage is analogous to the embodiment described in conjunction
with Fig. 1. The first camera 111 has a first coverage area 31 (around the right corner)
and the second camera 112 has a second coverage area 32 (around the left corner).
In addition, the embodiment of Fig. 3 also covers the front side of the vehicle by
the first camera 111 and/or the second camera 112 in combination with the third radar
sensor 123. The third radar sensor 123 covers the range 53 in front of the vehicle
70, which overlaps with the radar coverage area on the right-hand side 51 and the
radar coverage area on the left-hand side 52.
[0045] Hence, Fig. 3 shows an embodiment which provides maximum coverage extending over all
sides of the moving vehicle. The display unit 140 can thus depict three pictures (or visual
representations) of the three sides of the vehicle 70, wherein each visual representation
is obtained in the same way as described in conjunction with Figs. 1 and 2.
[0046] Optionally, further cameras may also be installed at the same side of the vehicle
and/or further radar sensors can also be installed at the same side of the vehicle,
in which case the resolution can be further increased. In addition, any further (hidden)
object (e.g. behind the one object as shown in Fig. 1 or Fig. 2) can be detected if
multiple radar sensors are present. For example, with further radar sensors installed
at the same side of the vehicle, it is possible to make several distance measurements
between the radar sensor and the object. As a result, the position of the object 10
can be determined with better accuracy.
[0047] In further embodiments, not only the towing vehicle 71 is used for attaching the
at least one camera 110 and the at least one radar sensor 120. It is also possible
to attach further, optional, cameras and/or radar sensors to the trailer 72.
However, installing the camera(s) and the radar sensor(s) merely on the towing vehicle 71
provides the advantage that the trailer can be changed freely while ensuring the correct
operation of the turning-assistant system.
[0048] Fig. 4 shows an example for the visual feedback to the driver, for example shown
on the display 140 of the system architecture as shown in Figs. 1 and 2. As it is
shown in Fig. 4, the driver can see the object 10 (for example a bicycle rider) as
a marked object beside the vehicle. The object marking can either be generated based
on the captured image data of the at least one camera 110 or in combination with the
at least one radar sensor 120 which also detects an object 10 travelling at the side
of the vehicle 70. Thus, both sensor units 110, 120 may identify the same object 10,
which is marked in the visual feedback to the driver. This marking may be interpreted
by the driver as a confirmation that both systems 110, 120 have detected the same
object at the same position. Hence, the driver obtains a high level of confidence
in the situation depicted on the display 140.
[0049] Fig. 4 shows a particular raw camera view, i.e. a view as seen from the position
of the camera 110. However, it may be of advantage to manipulate the camera view such
that it represents the bird's eye view.
[0050] Such a converted picture is shown in Fig. 5, which again shows the exemplary bicycle
rider as object 10, who uses a bike lane parallel to the traffic direction of the
vehicle 70. This bird's eye view can be shown to the driver in combination with the
raw camera view, as an alternative to it, or as a selectable option, so that the driver
can select the way of viewing which s/he may prefer.
[0051] Fig. 5 further shows the area 50 covered by the radar 120 as a dotted line so that
the driver can see, whether the detected object is reliably detected by both sensors
or whether the object is outside the area of coverage 50 of the radar sensor 120.
[0052] Fig. 6 depicts a method for detecting an object 10 in a surveillance area 30, 50
of a vehicle 70. The method comprises the steps of: capturing S110 image data of said
surveillance area 30 by at least one camera 110; providing S120 information indicative
of a distance to said object 10 by at least one radar sensor 120; receiving S130 said
image data and said information by a control unit 130; and generating S140, by said
control unit 130, a visual representation of said object 10 based on said received
image data and/or said information indicative of said distance.
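Expressed as a sketch in Python, one iteration of this method may look as follows; the four component interfaces are hypothetical stand-ins for the camera 110, the radar sensor 120, the control unit 130 and the display 140:

```python
def detect_object_in_surveillance_area(camera, radar, control_unit, display):
    """One iteration of the method of Fig. 6. The four component
    interfaces are hypothetical stand-ins for the camera (110), the
    radar sensor (120), the control unit (130) and the display (140)."""
    image = camera.capture()                     # S110: capture image data
    echo = radar.measure()                       # S120: distance information
    control_unit.receive(image, echo)            # S130: receive both inputs
    view = control_unit.render_representation()  # S140: visual representation
    display.show(view)
```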
[0053] This method may also be a computer-implemented method. A person of skill in the art
would readily recognize that steps of various above-described methods might be performed
by programmed computers. Embodiments are also intended to cover program storage devices,
e.g., digital data storage media, which are machine or computer readable and encode
machine-executable or computer-executable programs of instructions, wherein the instructions
perform some or all of the acts of the above-described methods when executed on a
computer or processor.
[0054] Advantageous aspects of the various embodiments can be summarized as follows:
Embodiments of the present invention relate to a turning assist system that gives
visual information of a relevant area and marks objects on a camera view, for example,
displayed to the driver. The indication of objects in the camera view can, for example,
be based on a processing algorithm for the camera image and the radar sensor signals.
The system may further warn the driver and react if necessary. The warning may be
visual, audible or haptic. If the driver does not react to the warning, an intervention
can be initiated to avoid a predicted collision, which may include, for example, a braking
and/or steering of the vehicle.
[0055] The visual feedback (for example the displayed camera view) may allow
the driver to gain a better understanding of the situation and assist the driver in
finding the correct action to avoid an accident. With a single camera view the driver
is already able to observe the critical area around the vehicle, even without the
object detection feature optionally implemented by the disclosed system. When using
the system only in a visual mode, the number of accidents caused by collisions with
objects in blind spots around commercial vehicles can already be reduced significantly.
[0056] The disclosed system comprises, for example, at least one camera 110, at least one
radar sensor 120 and a visual feedback unit 140 for the driver. Optionally, the
commercial-vehicle-specific turning assistant provides the driver with the possibility of
visually observing critical areas, to help the driver recognize relevant objects. This
recognition can, for example, be supported by marking the objects on the camera view and/or
by warning of the occurrence of such objects. Finally, the system may also initiate
a braking and/or steering intervention if the driver does not act accordingly.
[0057] The robustness of the system is increased by the combination or fusion of two different
sensor signals (the radar signal and the camera images), which cover the same surveillance
area. This may also extend the operational range of the system, because it can be
applied under various environmental circumstances, for example at night, during the day, or
under severe weather conditions.
[0058] The redundancy provided by two different sensor units is a particular advantage of
embodiments of the present invention. The second, parallel sensor can compensate for
insufficiencies of one sensor type. For example, the camera may not capture adequate
pictures in bad weather conditions or at night. Similarly, radar sensors
might be affected by strong rain or snowfall, and falling leaves may generate false
signals depicting phantom objects. However, by combining an image-capturing sensor
such as the camera 110 with the radar sensor 120, the resulting apparatus will operate
reliably in all weather conditions. This enables the sensors 110, 120 to operate under all
circumstances, during the daytime and at night, and in all weather conditions, such
as, for example, clear weather, fog, rain or snow.
[0059] In particular, by being provided with a clear visual feedback, for example
a camera view, the driver gains confidence about the actual traffic
situation even at the side opposite to the driver's side.
[0060] As shown in Figs. 1, 2 and 3, the camera as well as the radar sensor may be mounted
on the towing vehicle (tractor), whereas the towed vehicle (for example a trailer)
is not involved in the system installation. This provides the advantage that the trailer
can be freely exchanged and that the system can be used with any type of towing vehicle
without compromising safety.
[0061] For example, the at least one camera 110 can be mounted high enough, for example
around the top of the cabin, to provide a good view of the relevant area.
The radar sensor(s) 120 may, for example, be installed between the front axle and
the rear axle of the towing vehicle 71. The radar signal may be transmitted into the
surveillance area 50 parallel to the ground, thereby avoiding any false detection
originating from reflections off the ground. Any type of wave signal can be used for
the distance sensor, for example electromagnetic waves such as RF waves, or ultrasonic
waves.
[0062] Further advantageous realizations of the present invention relate to a turning assist
apparatus comprising: at least one camera 110; at least one radar sensor 120 (both
installed on the relevant side of the vehicle, the rider side, opposite to the driver
side); and a display 140 showing the view of the camera to the driver.
[0063] In yet another realization, the turning-assistant apparatus is characterized in that
the relevant objects 10 are indicated to the driver.
[0064] In yet another realization, the turning-assistant apparatus is characterized in that
the camera view is processed by image processing algorithms to detect objects 10.
[0065] In yet another realization, the turning-assistant apparatus is characterized in that
the objects 10 detected by the radar sensor 120 are indicated to the driver.
[0066] In yet another realization, the turning-assistant apparatus is characterized in that
the sensor fusion is realized between the two sensor types (camera 110 & radar sensor
120) realizing the object detection.
[0067] In yet another realization, the turning-assistant apparatus is characterized in that
the object indication to the driver is realized by marking the objects 10 on the camera
view displayed to the driver.
[0068] In yet another realization, the turning-assistant apparatus is characterized in that
the system can enable further warning methods (e.g. audible, haptic) to warn the driver
of critical situations.
[0069] In yet another realization, the turning-assistant apparatus is characterized in that
the camera is a fish-eye lens camera.
[0070] In yet another realization, the turning-assistant apparatus is characterized in that
the system is capable of converting the camera view into a bird's eye view using a point-of-view
conversion method.
[0071] In yet another realization, the turning-assistant apparatus is characterized in that
the system can predict the path of the vehicle.
[0072] In yet another realization, the turning-assistant apparatus is characterized in that
the system can distinguish between stationary and moving objects and predict the path
of the object.
[0073] In yet another realization, the turning-assistant apparatus is characterized in that
the system can predict intersection of vehicle and object path.
[0074] In yet another realization, the turning-assistant apparatus is characterized in that
the system can brake and/or steer to avoid the collision.
[0075] In yet another realization, the turning-assistant apparatus is characterized in that
the system can be optionally extended to other sides of the vehicle by adding further
cameras and radar sensors.
[0076] The description and drawings merely illustrate the principles of the disclosure.
It will thus be appreciated that those skilled in the art will be able to devise various
arrangements that, although not explicitly described or shown herein, embody the principles
of the disclosure and are included within its scope.
[0077] Furthermore, while each embodiment may stand on its own as a separate example, it
is to be noted that in other embodiments the defined features can be combined differently,
i.e. a particular feature described in one embodiment may also be realized in other
embodiments. Such combinations are covered by the disclosure herein unless it is stated
that a specific combination is not intended.
List of reference signs
[0078]
10, 10a, 10b: object(s)
30, 50: surveillance area
70: vehicle
110, 111, 112: cameras
120, 121, 122, 123: radar sensors
130: control unit
140: display unit
d, d1, d2: distance(s) to the object(s)
α, α1, α2: angle(s) to the object(s)
1. An apparatus for detecting an object (10) in a surveillance area (30, 50) of a vehicle
(70), the apparatus comprising:
at least one camera (110) for capturing image data of said surveillance area (30);
at least one radar sensor (120) for providing information indicative of a distance
to said object (10); and
a control unit (130) configured to receive said image data and said information and
to generate a visual representation of said object (10) based on said received image
data and/or said information indicative of said distance (d).
2. The apparatus according to claim 1, further comprising a display unit (140) configured
to receive said visual representation of said object (10) from said control unit (130)
and to show at least part of said surveillance area (30, 50) together with said object
(10).
3. The apparatus according to claim 1 or claim 2, wherein
said at least one radar sensor (120) is configured to transmit a radio frequency (RF)
signal and to receive a reflected RF signal from said object (10) and is further configured
to determine an angle (α) or an angular range from which said reflected RF signal was received,
and wherein said control unit (130) is configured to determine a position of said
object (10) based on said determined angle (α) or angular range and said information
indicative of said distance (d).
4. The apparatus according to claim 2 or claim 3, wherein
said at least one radar sensor (120) is configured to repeatedly measure a distance
between one or more candidate objects and said vehicle (70), and
wherein said control unit (130) is further configured to receive said repeatedly measured
distances and, based thereon, to detect a motion of said detected object
(10) relative to said vehicle (70) to enable an elimination of background objects and/or a
detection of one or more further objects of interest (10a, 10b).
5. The apparatus according to one of the preceding claims,
wherein said at least one camera (110) is configured to capture subsequent images,
and
said control unit (130) is further configured to process said subsequent images received
from said at least one camera (110) using a processing algorithm to detect said object
(10) in said surveillance area (30) to enable an elimination of background objects
and/or a detection of one or more further objects of interest (10a, 10b).
6. The apparatus according to one of claims 2 to 5, wherein said control unit (130) is
configured to combine information received from said at least one radar sensor (120)
and said at least one camera (110) and to highlight said object (10) or multiple objects
(10a, 10b) in the shown part of the surveillance area (30, 50).
7. The apparatus according to one of the preceding claims, wherein said control unit
(130) is further configured to predict a collision of said vehicle (70) and said object
(10) and to issue a collision warning, wherein said collision prediction is performed
based on a path of said object (10) and/or on a path of said vehicle (70) and/or on a relative
path between said vehicle (70) and said object (10).
8. The apparatus according to claim 7, wherein said warning comprises an acoustic signal
and/or a haptic signal and/or an optical signal.
9. The apparatus according to claim 7 or claim 8, wherein said control unit (130) is
further configured to issue a brake signal to enforce a braking and/or a steer signal
to enforce a steering of said vehicle (70) in case said collision is unavoidable if
no braking and/or steering is performed.
10. The apparatus according to one of the preceding claims, wherein said at least one
camera (110) comprises a first and a second camera (111, 112), and wherein said at
least one radar sensor (120) comprises a first and a second radar sensor (121, 122),
wherein said first camera (111) and said first radar sensor (121) are installed on
one side of said vehicle (70), and said second camera (112) and said second radar
sensor (122) are installed on another side of said vehicle (70) to detect objects
on different sides of said vehicle (70).
11. The apparatus according to one of the preceding claims, wherein said at least one
camera (110) comprises a fish-eye lens.
12. The apparatus according to one of the preceding claims, wherein said control unit
(130) is further configured to transform said image data received from said at least
one camera (110) into a bird's eye view.
13. A vehicle with an apparatus according to one of the preceding claims.
14. The vehicle according to claim 13, wherein said surveillance area (30, 50) includes
an area of a side of said vehicle (70) that extends over an angular range of more
than 90° measured about said side of said vehicle (70).
15. Method for detecting an object (10) in a surveillance area (30, 50) of a vehicle (70),
comprising:
capturing (S110) image data of said surveillance area (30) by at least one camera
(110);
providing (S120) information indicative of a distance to said object (10) by at least
one radar sensor (120);
receiving (S130) said image data and said information by a control unit (130); and
generating (S140), by said control unit (130), a visual representation of said object
(10) based on said received image data and/or said information indicative of said
distance.