(19) European Patent Office
(11) EP 3 306 590 A1

(12) EUROPEAN PATENT APPLICATION

(43) Date of publication:
11.04.2018 Bulletin 2018/15

(21) Application number: 16192921.1

(22) Date of filing: 07.10.2016
(51) International Patent Classification (IPC): 
G08G 1/16(2006.01)
(84) Designated Contracting States:
AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR
Designated Extension States:
BA ME
Designated Validation States:
MA MD

(71) Applicant: Autoliv Development AB
447 83 Vårgårda (SE)

(72) Inventors:
  • Bjärkefur, Jon
    58249 Linköping (SE)
  • Forslund, David
    58432 Linköping (SE)

(74) Representative: Müller Verweyen
Patentanwälte
Friedensallee 290
22763 Hamburg (DE)



(54) DRIVER ASSISTANCE SYSTEM AND METHOD FOR A MOTOR VEHICLE


(57) A driver assistance system (10) for a motor vehicle comprises an imaging apparatus (12) adapted to capture images (13) of the surrounding of a motor vehicle, and a processing device (14) adapted to perform image processing on the captured images and to control a driver assistance device (18, 19) based on the result of said image processing. The processing device (14) comprises a risk assessment estimator (30) adapted to estimate a traffic risk level (40). The traffic risk level (40) is estimated by said risk assessment estimator (30) on the basis of input source data (13, 36) from at least one input source (12, 20), wherein said input source data comprises image data (13) captured by said imaging apparatus (12).




Description


[0001] The invention relates to a driver assistance system for a motor vehicle, comprising an imaging apparatus adapted to capture images of the surrounding of a motor vehicle, and a processing device adapted to perform image processing on the captured images and to control a driver assistance device based on the result of said image processing, wherein said processing device comprises a risk assessment estimator adapted to estimate a traffic risk level.

[0002] In driving situations the risk level varies substantially. For example, an empty highway has much lower risk than a highly trafficked city.

[0003] Assessing the risk level can be beneficial for informing the driver so that the driver can adopt an adequate alertness level. This functionality is beneficial, in different ways, across the whole range from manually controlled to semi-autonomous to fully autonomous vehicles.

[0004] US 7,672,764 B2 discloses a driver assistance system including various sensors for recording the driver's condition and the vehicle operations performed by the driver. The driver's condition in combination with the vehicle operations is used to determine a risky situation of the vehicle based, inter alia, on an averaged driver's condition.

[0005] The problem underlying the present invention is to provide a driver assistance system with improved traffic risk level estimation.

[0006] The invention solves this problem with the features of the independent claims.

[0007] The invention describes a driver assistance system where the risk assessment estimator estimates the traffic risk level on the basis of input source data from at least one input source including data captured by the imaging apparatus. The image data captured by the imaging apparatus contains valuable information in addition to sensor data from other vehicle sensors used in the prior art. By using this image data in the risk assessment estimator for the estimation of the traffic risk level, the accuracy and reliability of the traffic risk level estimation can be enhanced, contributing to a better performance of the driver assistance system.

[0008] In the present application, traffic risk level refers to the amount of alertness required of the driver. For example, the traffic risk level would be low when driving on an empty highway where no threats are near. On the other hand, a risky situation (high risk level) can for example be a situation where it is common for pedestrians to be near the road and/or where the speed of the ego vehicle is relatively high given the regulations of the current road.

[0009] Preferably the appearance of the scene surrounding the motor vehicle, extracted from the captured images, is used by said risk assessment estimator. Herein, scene refers to the landscape or background scenery, as opposed to detected objects like other vehicles, pedestrians, cyclists, animals etc. By analysing the appearance of the scene surrounding the motor vehicle, it can be determined what type of environment the vehicle is located in, which closely correlates with the traffic risk level. For example, the environment of the vehicle may preferably be classifiable by the risk assessment estimator into one of a city environment, a highway environment, or a rural road environment.

[0010] Preferably the risk assessment estimator considers the whole image in the determination of said traffic risk level. This holistic approach is advantageous over prior-art systems that detect and analyze only objects in the images, which form only small parts of the whole image. Considering the whole image makes use of the complete image information, since valuable information can also be contained in image parts that do not form discrete objects such as pedestrians, other vehicles, pavement, crosswalk signs, bus stops etc.

[0011] In a preferred embodiment of the invention, the risk assessment estimator considers each pixel of a captured image as a separate input source in the determination of said traffic risk level. Through the analysis of the captured images on an individual pixel level, maximum information can be used in the determination of the traffic risk level, avoiding any inaccuracies originating from averaging the image over multi-pixel areas.

[0012] Preferably the risk assessment estimator is trained by using a learning system, in particular a deep learning system, like a deep neural network. The network is preferably directly connected to each pixel of the captured images, and to other sensors used as input sources for said traffic risk level estimation. The learning system is preferably trained using training data rated by a plurality of persons with respect to the estimated traffic risk level. That is, each person rates the estimated traffic risk level of each training image according to his or her best evaluation, and for example the average of all ratings is taken as the true traffic risk level. The learning system could automatically learn to recognize aggressive driving, pedestrian (zebra) crossings, school areas, etc. Alternatively or in addition, the classifier could be trained by using publicly available databases of accidents containing corresponding GPS positions, and recording data at those locations. The classifier can then learn to recognize environments that look similar to actual environments where real accidents happen.
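By way of a purely illustrative, non-binding sketch, such a network could be wired as follows in Python using PyTorch; the architecture, layer sizes and the number of sensor channels are assumptions chosen for readability and are not specified by the present application:

    import torch
    import torch.nn as nn

    class RiskNet(nn.Module):
        """Illustrative deep network: the full image (every pixel) plus
        additional sensor channels in, one continuous risk level out."""
        def __init__(self, n_sensors: int = 8):
            super().__init__()
            # Convolutional trunk connected to every pixel of the image.
            self.image_branch = nn.Sequential(
                nn.Conv2d(3, 16, kernel_size=5, stride=2), nn.ReLU(),
                nn.Conv2d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
                nn.AdaptiveAvgPool2d((4, 4)),
                nn.Flatten(),                    # -> 32 * 4 * 4 = 512 features
            )
            # Small branch for the other sensor inputs (speed, yaw rate, ...).
            self.sensor_branch = nn.Sequential(nn.Linear(n_sensors, 32), nn.ReLU())
            # Fusion head regressing a single traffic risk level.
            self.head = nn.Sequential(nn.Linear(512 + 32, 64), nn.ReLU(),
                                      nn.Linear(64, 1))

        def forward(self, image: torch.Tensor, sensors: torch.Tensor) -> torch.Tensor:
            fused = torch.cat([self.image_branch(image),
                               self.sensor_branch(sensors)], dim=1)
            return self.head(fused).squeeze(1)   # one risk value per sample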

[0013] In order to improve the accuracy of the traffic risk level estimation, the use of a plurality of input sources for the risk assessment estimator is desirable, ideally as many input sources correlating with the traffic risk level as possible.

[0014] A first category of preferred input sources provides ego vehicle data, in particular ego vehicle dynamics data, as input source data. Possible examples of input sources falling under this category are acceleration sensors, a yaw sensor, a roll sensor, a pitch sensor, a speed sensor, a braking sensor and a steering wheel angle sensor.

[0015] A second category of preferred input sources provides data of other objects in the environment of the motor vehicle, in particular other objects' dynamics data. Other (discrete) objects may for example be moving objects like other vehicles, pedestrians, bicyclists or large animals. Immovable objects like traffic signs, poles, buildings, trees etc. are also possible. Possible examples of input sources falling under this category are the imaging apparatus, a radar apparatus, a LIDAR apparatus and a backward looking camera. Also, it can be preferable to use a data memory for storing information about detected other objects as an input source.

[0016] Other possible input sources provide input source data comprising: speed limits obtained by sign recognition or a satellite navigation receiver; biometric information; road conditions; ambient conditions, like ambient temperature or ambient humidity; stored data of previous hazard areas or conditions; location of crosswalks obtained by image recognition or a satellite navigation receiver.
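Purely for illustration, the non-image input sources listed above could be bundled into a single feature vector for the estimator as sketched below in Python; all field names are hypothetical and not taken from the application:

    from dataclasses import dataclass, astuple

    @dataclass
    class InputSample:
        """Illustrative bundle of non-image input sources; the field
        names are assumptions, not terms used in the application."""
        speed_mps: float
        yaw_rate: float
        speed_limit_mps: float    # from sign recognition or sat-nav
        ambient_temp_c: float
        near_crosswalk: float     # 1.0 if a crosswalk is nearby, else 0.0

    def to_sensor_vector(sample: InputSample) -> list[float]:
        # Flatten the bundle into the sensor vector fed to the estimator.
        return [float(v) for v in astuple(sample)]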

[0017] The assessed or estimated traffic risk level can be used in different manners. In one embodiment, the driver assistance system comprises a risk level indicator adapted to indicate an estimated risk level to the driver. For example, one or more diodes could be used to indicate the risk level to the driver via color coding, e.g. covering the color spectrum from green (low risk level) through yellow (medium risk level) to red (high risk level). Alternatively or in addition, the estimated risk level may preferably be used for requesting safety-relevant actions by the driver depending on said estimated risk level. For example, at a first (lower) risk level, the driver could be requested to have his hands on the steering wheel, while at a second (higher) risk level, the driver could be requested to have his eyes on the road. Also in autonomous driving applications, the estimated risk levels may be used by a corresponding driver assistance device.
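A minimal sketch of such a threshold-based use of the estimated risk level follows; the risk range [0, 1] and the thresholds 0.4 and 0.7 are illustrative assumptions only:

    def requested_actions(risk: float) -> list[str]:
        """Map an estimated risk level in [0, 1] to requested driver
        actions; the thresholds are illustrative, not prescribed."""
        actions = []
        if risk >= 0.4:               # first (lower) risk level
            actions.append("hands on steering wheel")
        if risk >= 0.7:               # second (higher) risk level
            actions.append("eyes on road")
        return actions

    def indicator_color(risk: float) -> str:
        # Simple green/yellow/red coding of the same risk value.
        if risk < 0.4:
            return "green"
        return "yellow" if risk < 0.7 else "red"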

[0018] Also, information could be downloaded from the cloud during driving, e.g. traffic jam data using the driving dynamics of other vehicles (similar to electronic maps available on the internet), or average driver aggressiveness, which may vary from region to region and with the time of day.

[0019] In the following the invention shall be illustrated on the basis of preferred embodiments with reference to the accompanying drawings, wherein:

Fig. 1 shows a schematic drawing of a driver assistance system in a motor vehicle; and

Fig. 2 shows a schematic drawing of a risk assessment classifier in such a driver assistance system.


[0020] The driver assistance system 10 is mounted in a motor vehicle and comprises an imaging apparatus 11 for capturing images of a region surrounding the motor vehicle, for example a region in front of the motor vehicle. Preferably the imaging apparatus 11 comprises one or more optical imaging devices 12, in particular cameras, preferably operating in the visible and/or infrared wavelength range, where infrared covers near IR with wavelengths below 5 microns and/or far IR with wavelengths beyond 5 microns. In some embodiments the imaging apparatus 11 comprises a plurality of imaging devices 12, in particular forming a stereo imaging apparatus 11. In other embodiments only one imaging device 12 forming a mono imaging apparatus 11 can be used.

[0021] The imaging apparatus 11 is coupled to a data processing device 14 adapted to process the image data received from the imaging apparatus 11. The data processing device 14 is preferably a digital device which is programmed or programmable and preferably comprises a microprocessor, a microcontroller, a digital signal processor (DSP) and/or a microprocessor part in a System-On-Chip (SoC) device, and preferably has access to, or comprises, a data memory 25. The data processing device 14 may comprise a dedicated hardware device, like a Field Programmable Gate Array (FPGA) or an Application Specific Integrated Circuit (ASIC), or an FPGA and/or ASIC part in a System-On-Chip (SoC) device, for performing certain functions, for example controlling the capture of images by the imaging apparatus 11, receiving the electrical signal containing the image information from the imaging apparatus 11, rectifying or warping pairs of left/right images into alignment and/or creating disparity or depth images. The data processing device 14, or part of its functions, can be realized by a System-On-Chip (SoC) device comprising, for example, FPGA, DSP, ARM and/or microprocessor functionality. The data processing device 14 and the memory device 25 are preferably realised in an on-board electronic control unit (ECU) and may be connected to the imaging apparatus 11 via a separate cable or a vehicle data bus. In another embodiment the ECU and one or more of the imaging devices 12 can be integrated into a single unit, where a one-box solution including the ECU and all imaging devices 12 can be preferred. All steps, from imaging and image processing to the possible activation or control of the driver assistance device 18, are performed automatically and continuously during driving in real time.

[0022] Image and data processing carried out in the processing device 14 advantageously comprises identifying and preferably also classifying possible objects (object candidates) in front of the motor vehicle, such as pedestrians, other vehicles, bicyclists and/or large animals, tracking over time the position of objects or object candidates identified in the captured images, and activating or controlling at least one driver assistance device 18 depending on an estimation performed with respect to a tracked object, for example on an estimated collision probability. The driver assistance device 18 may in particular comprise a display device to display information relating to a detected object. However, the invention is not limited to a display device. The driver assistance device 18 may in addition or alternatively comprise a warning device adapted to provide a collision warning to the driver by suitable optical, acoustical and/or haptic warning signals; one or more restraint systems such as occupant airbags or safety belt tensioners, pedestrian airbags, hood lifters and the like; and/or dynamic vehicle control systems such as braking or steering control devices. Information about detected objects may be stored in said data memory 25.

[0023] The processing device 14 has access, for example via a vehicle data bus, to data from vehicle sensors 20 (21, 22, 23, ...) other than the imaging apparatus 11, which are called other sensors 20 in the following for simplicity. The other sensors 20 comprise for example one or more acceleration sensors, a yaw sensor, a roll sensor, a pitch sensor, a speed sensor, a braking sensor, a steering wheel angle sensor, a radar apparatus, a LIDAR apparatus, a backward looking camera, a satellite navigation receiver, a biometric sensor adapted to obtain biometric information of the driver, an ambient temperature sensor and an ambient humidity sensor.

[0024] In the processing device 14 a risk assessment classifier 30 is realized, for example by software, which is adapted to classify the traffic risk level, as defined above, into one of a plurality of at least three, preferably at least five, more preferably at least ten, for example 256 possible traffic risk level values ranging from (very) low risk to (very) high risk. The risk assessment classifier 30 is shown in more detail in Figure 2. The risk assessment classifier 30 comprises an artificial neural network 31 (deep neural network), depicted only schematically in Figure 2. The network 31 uses as separate input sources every pixel of the images 13 captured by the imaging apparatus 11, as well as other sensor data 36, i.e. sensor data 32, 33, 34, 35 originating from the other sensors 20 in Figure 1 (vehicle sensors 21, 22, 23, ...).
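As a minimal illustration of mapping a continuous network output onto such a discrete scale, assuming the output has been normalized to [0.0, 1.0]:

    def quantize_risk(risk: float, n_levels: int = 256) -> int:
        """Quantize a continuous risk estimate in [0.0, 1.0] into one of
        n_levels discrete values (0 = very low, n_levels - 1 = very high)."""
        risk = min(max(risk, 0.0), 1.0)            # clamp to the valid range
        return min(int(risk * n_levels), n_levels - 1)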

[0025] In an initial procedure, which may be executed during the development of the driver assistance system 10, the network 31 learns how to assess the traffic risk level of a traffic situation. This is done by feeding reference data into the network 31, composed of reference images 13, taken for example by driving a test car, and reference sensor data 36 measured at the time the corresponding reference images 13 were taken. All reference images 13 have been assessed by a group of test persons in advance, who rate the traffic risk level of the traffic situation shown in each reference image 13. The true traffic risk level may be calculated as the average of all traffic risk levels estimated by the test persons, and is fed to the network 31 as target data together with the corresponding reference data.
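An illustrative supervised training step under these assumptions (mean of the human ratings as the regression target, mean-squared-error loss) could look like this in Python/PyTorch; the function and parameter names are hypothetical:

    import torch
    import torch.nn.functional as F
    from statistics import mean

    def true_risk_label(ratings: list[float]) -> float:
        # Ground truth = average of the test persons' risk ratings.
        return mean(ratings)

    def training_step(model, optimizer, images, sensors, ratings_per_image):
        """One illustrative step: regress the network output against the
        averaged human risk ratings. 'model' could be an instance of the
        RiskNet sketched above; 'images' and 'sensors' are batched tensors."""
        targets = torch.tensor([true_risk_label(r) for r in ratings_per_image])
        optimizer.zero_grad()
        predictions = model(images, sensors)
        loss = F.mse_loss(predictions, targets)
        loss.backward()
        optimizer.step()
        return loss.item()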

[0026] Similarly to the reference input fed to the network 31 during training, the reference output could be obtained from risk level annotations (fed into the network 31 as signal 40) performed by a marking team at the developer of the vision system 10.

[0027] After the network 31 has been trained how to assess the traffic risk level of numerous traffic situations, representative of essentially all traffic situations occurring in practice, the trained network 31 is implemented in cars for everyday usage. In the car, the images 13 captured by the imaging apparatus 11, together with the corresponding sensor data 36 measured at the time of capturing the images 13, are fed into the network 31. Based on all input sources, the network calculates online and outputs the traffic risk level 40 belonging to the specific traffic situation shown in the image 13 under consideration and the corresponding sensor data 36.
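A sketch of this online use, assuming hypothetical camera, sensor and indicator interfaces (none of which are specified by the application), and reusing the indicator_color helper sketched earlier:

    import torch

    def online_risk_loop(model, camera, sensors, indicator):
        """Illustrative real-time loop: capture an image and sensor data,
        estimate the risk level, drive the indicator. 'camera.capture()'
        is assumed to return a CHW image tensor and 'sensors.read()' a
        1-D sensor tensor; both interfaces are hypothetical."""
        model.eval()
        with torch.no_grad():
            while True:
                image = camera.capture()        # current image 13
                readings = sensors.read()       # current sensor data 36
                risk = model(image.unsqueeze(0),
                             readings.unsqueeze(0)).item()
                indicator.show(indicator_color(risk))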

[0028] The calculated traffic risk level can be used for different applications. In Figure 1, for example, a schematic traffic risk indicator 19 is shown with three indicator LEDs, where a green LED indicates low risk, a yellow LED indicates medium risk and a red LED indicates high risk. Of course, one multicolor LED can be used instead of multiple single-color LEDs. Also, more than three colors, for example up to 256 colors, can be used for indicating the traffic risk level in a more differentiated manner.
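For such a multicolor LED, one possible illustrative mapping from a discrete risk level to an RGB color sweeping from green through yellow to red is sketched below; the linear ramp is an assumption, not part of the application:

    def risk_to_rgb(level: int, n_levels: int = 256) -> tuple[int, int, int]:
        """Map a discrete risk level to an RGB color: green (low) through
        yellow (medium) to red (high); purely illustrative."""
        t = level / (n_levels - 1)              # normalize to [0, 1]
        if t < 0.5:
            # green -> yellow: ramp red up while green stays full
            return (int(510 * t), 255, 0)
        # yellow -> red: ramp green down while red stays full
        return (255, int(510 * (1.0 - t)), 0)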


Claims

1. A driver assistance system (10) for a motor vehicle, comprising an imaging apparatus (12) adapted to capture images (13) of the surrounding of a motor vehicle, and a processing device (14) adapted to perform image processing on the captured images and to control a driver assistance device (18, 19) based on the result of said image processing, wherein said processing device (14) comprises a risk assessment estimator (30) adapted to estimate a traffic risk level (40), characterized in that said traffic risk level (40) is estimated by said risk assessment estimator (30) on the basis of input source data (13, 36) from at least one input source (12, 20), wherein said input source data comprises image data (13) captured by said imaging apparatus (12).
 
2. The driver assistance system as claimed in claim 1, characterized in that the appearance of the scene surrounding the motor vehicle extracted from captured images (13) is used by said risk assessment estimator (30).
 
3. The driver assistance system as claimed in any one of the preceding claims, characterized in that said risk assessment estimator (30) considers the whole image (13) in the determination of said traffic risk level (40).
 
4. The driver assistance system as claimed in any one of the preceding claims, characterized in that said risk assessment estimator (30) considers each pixel of a captured image (13) as a separate input source in the determination of said traffic risk level (40).
 
5. The driver assistance system as claimed in any one of the preceding claims, characterized in that said risk assessment estimator (30) comprises a learning network (31), in particular an artificial neural network.
 
6. The driver assistance system as claimed in any one of the preceding claims, characterized in that said learning network (31) is trained using training data rated by a plurality of persons with respect to the estimated traffic risk level.
 
7. The driver assistance system as claimed in any one of the preceding claims, characterized in that input source data (36) comprises ego vehicle data, in particular ego vehicle dynamics data.
 
8. The driver assistance system as claimed in any one of the preceding claims, characterized in that said at least one input source (20) comprises one or more of an acceleration sensor, yaw sensor, roll sensor, pitch sensor, speed sensor, braking sensor, steering wheel angle sensor.
 
9. The driver assistance system as claimed in any one of the preceding claims, characterized in that said input source data (36) comprises data of other objects in the environment of the motor vehicle, in particular other objects' dynamics data.
 
10. The driver assistance system as claimed in any one of the preceding claims, characterized in that said driver assistance system (10) comprises a data memory (25) for storing information about detected other objects.
 
11. The driver assistance system as claimed in any one of the preceding claims, characterized in that said at least one input source (20) comprises one or more of said imaging apparatus, a radar apparatus, a LIDAR apparatus, a backward looking camera.
 
12. The driver assistance system as claimed in any one of the preceding claims, characterized in that input source data (13, 36) comprises one or more of:

- speed limits obtained by sign recognition or a satellite navigation receiver;

- biometric information;

- road conditions;

- ambient conditions, like ambient temperature or ambient humidity;

- stored data of previous hazard areas or conditions;

- location of crosswalks obtained by image recognition or a satellite navigation receiver.


 
13. The driver assistance system as claimed in any one of the preceding claims, characterized in that the driver assistance system (10) comprises a risk level indicator (19) adapted to indicate an estimated risk level (40) to the driver.
 
14. The driver assistance system as claimed in any one of the preceding claims, characterized in that the estimated risk level (40) is used for requesting safety-relevant actions by the driver depending on said estimated risk level.
 
15. A driver assistance method for a motor vehicle, comprising an imaging apparatus (12) adapted to capture images (13) of the surrounding of a motor vehicle, and a processing device (14) adapted to perform image processing on the captured images (13) and to control a driver assistance device (18, 19) based on the result of said image processing, wherein said processing device (14) comprises a risk assessment estimator (30) adapted to estimate a traffic risk level (40), characterized in that said traffic risk level (40) is estimated by said risk assessment estimator (30) on the basis of input source data (13, 36) from at least one input source (12, 20), wherein said input source data comprises image data (13) captured by said imaging apparatus (12).
 




Drawing


Search report
REFERENCES CITED IN THE DESCRIPTION



This list of references cited by the applicant is for the reader's convenience only. It does not form part of the European patent document. Even though great care has been taken in compiling the references, errors or omissions cannot be excluded and the EPO disclaims all liability in this regard.

Patent documents cited in the description

• US 7672764 B2 [0004]