(19)
(11) EP 3 799 752 B1

(12) EUROPEAN PATENT SPECIFICATION

(45) Mention of the grant of the patent:
06.07.2022 Bulletin 2022/27

(21) Application number: 19465569.2

(22) Date of filing: 02.10.2019
(51) International Patent Classification (IPC): 
A42B 3/04(2006.01)
G08G 1/16(2006.01)
(52) Cooperative Patent Classification (CPC):
A42B 3/046; A42B 3/042; G08G 1/166; G08G 1/161; G08G 1/163

(54)

EGO MOTORCYCLE ON-BOARD AWARENESS RAISING SYSTEM, METHOD FOR DETECTING AND DISPLAYING PRESENCE OF AUTONOMOUS VEHICLES

EGO-MOTORRAD-ON-BOARD-SENSIBILISIERUNGSSYSTEM, VERFAHREN ZUM ERFASSEN UND ANZEIGEN DES VORHANDENSEINS VON AUTONOMEN FAHRZEUGEN

SYSTÈME DE SENSIBILISATION À BORD D'UNE MOTOCYCLETTE EGO, PROCÉDÉ DE DÉTECTION ET D'AFFICHAGE DE LA PRÉSENCE DE VÉHICULES AUTONOMES


(84) Designated Contracting States:
AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

(43) Date of publication of application:
07.04.2021 Bulletin 2021/14

(73) Proprietor: Continental Automotive GmbH
30165 Hannover (DE)

(72) Inventors:
  • Caruntu, Constantin-Florin
    300704 Timisoara (RO)
  • Puscasu, Alexandru-Daniel
    300704 Timisoara (RO)

(74) Representative: Continental Corporation
c/o Continental Automotive GmbH
Intellectual Property
Postfach 83 01 16
81701 München (DE)


(56) References cited:
US-A1- 2013 311 075
US-B1- 10 219 571
US-A1- 2016 232 790
   
       
    Note: Within nine months from the publication of the mention of the grant of the European patent, any person may give notice to the European Patent Office of opposition to the European patent granted. Notice of opposition shall be filed in a written reasoned statement. It shall not be deemed to have been filed until the opposition fee has been paid. (Art. 99(1) European Patent Convention).


    Description

    Field of the invention



    [0001] The invention relates to increasing road safety. In particular, the invention relates to a system and a method for increasing the awareness of motorcycle riders with respect to the presence of autonomous vehicles driving in their field of view, as well as to a computer program for carrying out steps of the method. By increasing the awareness of motorcycle riders, the safety of the road participants is increased.

    Background of the invention



    [0002] It is known that motorcycle riders are more vulnerable to the consequences of road accidents than persons travelling in vehicles, as a motorcycle, by its very construction, is less stable and offers less protection to the rider in case of an accident than a vehicle with at least four wheels.

    [0003] One way to reduce the vulnerability of motorcycle riders is to increase their awareness with respect to other traffic participants.

    [0004] For example, US 5251333 A presents a simplified display system that is mounted on the helmet of the motorcycle rider.

    [0005] US 20130305437 A1 proposes a helmet with a look-down micro-display that projects a virtual image in line with the helmet's chin bar.

    [0006] US 8638237 B2 discloses a system that alerts a vehicle driver about a motorcycle approaching from the rear. The system consists of a unit located in the car and a unit located on the motorcycle. The motorcycle unit transmits signals toward the traveling lane of the motorcycle, and the car unit is responsible for receiving the transmitted signals and for alerting the driver to the motorcycle approaching from the rear.

    [0007] The invention disclosed in US 20140273863 A1 provides a system which establishes communication between a smart helmet and a mobile phone/communicator. It consists of a computer processor, a microphone and a speaker, all connected and integrated into the helmet to be used by the motorcycle rider for mobile calls.

    [0008] US 20160075338 A1 presents a safety device for motorcycle riders that includes a safety helmet and a camera device situated on the motorcycle rider's helmet; the camera device is connected to a warning device that outputs a warning as a function of data collected by the camera device related to the state of the motorcycle rider, e.g., fatigue monitoring.

    [0009] US 20170176746 A1 presents a system with one or more cameras physically coupled to a helmet, where each camera is configured to generate a video feed that is presented to a user by projecting it onto a surface, such as the visor of the helmet, thereby enhancing the user's situational awareness of the surroundings; however, no processing is performed on the received images, which are presented directly as captured by the cameras.

    [0010] US 10219571 B1 discloses a motorcycle helmet comprising a plurality of electronic components, including internally mounted sensors for detecting objects present in a blind spot of a wearer.

    [0011] JP 2017004426 A describes a traffic safety system in which a traffic light system equipped with a plurality of wireless tags measures the distance and position of communication devices carried by pedestrians, bicycles, motorcycles, wheelchairs and automobiles in its proximity, and then sends these positions to the other participants equipped with radio tags. The system operates on a combination of sensors using radio signals and laser/LED light emission to detect the positions of the surrounding vehicles and to communicate them via V2X (Vehicle-to-X). The traffic information is displayed by means of a head-up display HUD that associates the received position information with the actual position on the road surface.

    [0012] One special category of traffic participants refers to the autonomous vehicles.

    [0013] There is an increasing interest in letting autonomous vehicles drive in many jurisdictions around the world. However, the advanced driver assistance systems used by autonomous vehicles are still far from providing complete and accurate information that would enable autonomous vehicles to be at least as safe as human-driven vehicles.

    Disadvantages of prior art



    [0014] The prior art does not disclose a solution capable of distinguishing, among the vehicles driving in the ego motorcycle rider's field of view, which vehicles are autonomous and which are not. Fig. 1 depicts a plurality of vehicles driving in the ego motorcycle rider's field of view without any hint of which vehicle among them is autonomous.

    [0015] It is a disadvantage that the prior art does not disclose any solution for detecting the autonomous vehicles driving in the motorcycle rider's field of view, because said motorcycle rider cannot take precautions with respect to said vehicles if he so desires.

    Problem solved by the invention



    [0016] The problem solved by the invention is to provide a system and a method for detecting and displaying the presence of autonomous vehicles driving in the field of view of the ego motorcycle rider, for the purpose of alerting said ego motorcycle rider about such presence and enabling him to decide on the precautions to take with respect to the detected autonomous vehicles.

    Summary of the invention



    [0017] In order to solve the problem, the inventors conceived, in a first aspect of the invention, an ego motorcycle on-board awareness raising system placed in an ensemble consisting of an ego motorcycle and an ego motorcycle rider smart helmet, said smart helmet comprising an advanced driver assistance systems camera acquiring video images having the field of view facing forward and said smart helmet comprising a smart helmet visor for displaying said video images to the motorcycle rider as well as other awareness information, said ego motorcycle on-board awareness raising system further comprising:
    • a radar module placed in front of the ego motorcycle facing forward, configured to measure the distance between the ego motorcycle and the vehicles driving in the ego motorcycle's field of view and configured to send the measurements to an ego motorcycle on-board detection processing unit;
    • a gyroscope placed within the smart helmet, configured to determine the direction and inclination of the field of view of the smart helmet depending on the direction in which the rider looks and configured to send the measurements to the ego motorcycle on-board detection processing unit;
    • a GPS sensor placed in the ego motorcycle, configured to determine the geographical position of the ego motorcycle and configured to send the measurements to the ego motorcycle on-board detection processing unit;
    • an acceleration sensor placed in the ego motorcycle, configured to measure the acceleration signal of the ego motorcycle on three axes X, Y, Z and configured to send the measurements to the ego motorcycle on-board detection processing unit;
    • a speedometer placed in the ego motorcycle, configured to measure the speed of the ego motorcycle and configured to send the measurements to the ego motorcycle on-board detection processing unit;
    • an ego motorcycle on-board unit configured:
      • to communicate via the Vehicle-to-Everything V2X communication channel with other vehicles provided with a corresponding vehicle on-board unit for the purpose of receiving the geographical position, speed and acceleration signal on three axes X, Y, Z of each said other vehicle provided with a corresponding vehicle on-board unit VOBU;
      • to receive and read an autonomous vehicle identifier sent via the Vehicle-to-Everything V2X communication channel by the corresponding vehicle on-board unit of each respective autonomous vehicle, together with the targeted path and the estimation of position, speed and acceleration on three axes X, Y, Z in a subsequent pre-determined prediction interval, for each autonomous vehicle for which the autonomous vehicle identifier was received and read;
      • to send the measurements to the ego motorcycle on-board detection processing unit;
    • the ego motorcycle on-board detection processing unit configured to carry out detection of the autonomous vehicles driving in the ego motorcycle's field of view and further configured to send said result of detection to the smart helmet;
    • at least one ego motorcycle bus system, configured to interconnect the advanced driver assistance camera, the radar module, the gyroscope, the GPS sensor, the acceleration sensor, the speedometer, the ego motorcycle on-board unit, the ego motorcycle on-board detection processing unit and the smart helmet visor.


    [0018] In a second aspect of the invention there is provided a method for detecting and displaying the presence of autonomous vehicles driving in the field of view of an ego motorcycle rider wearing an ego motorcycle rider smart helmet provided with a smart helmet visor, using the ego motorcycle on-board awareness raising system, wherein the following repetitive sequence of eight steps is carried out at regular intervals of time tn:

    Step 1 Broadcasting by an ego motorcycle on-board unit of messages through the Vehicle-to-Everything V2X communication channel, said messages having the purpose to send to other vehicles provided with corresponding vehicle on-board units data about the position of the ego motorcycle and having the purpose to gather data about the presence of said other vehicles provided with corresponding vehicle on-board units and to gather data about which ones of said other vehicles are provided with a corresponding autonomous vehicle identifier.

    Step 2 Receiving by an ego motorcycle on-board detection processing unit of input data from the sensors via the at least one ego motorcycle bus system:

    • video streams acquired from an advanced driver assistance systems camera;
    • distances to all vehicles situated in front of the ego motorcycle from a radar module, provided that said vehicles are driving within the range of the radar module;
    • speed of the ego motorcycle from a speedometer;
    • acceleration signals from X, Y and Z axes of the ego motorcycle from an accelerometer;
    • orientation of the smart helmet of the ego motorcycle from a gyroscope;
    • ego motorcycle geographical position from a GPS sensor;
    • data from the ego motorcycle on-board unit regarding the geographical position, speed and acceleration signals from X, Y and Z axes of other vehicles provided with corresponding vehicle on-board units including autonomous vehicles;
    • data from the ego motorcycle on-board unit for each autonomous vehicle: the autonomous vehicle identifier, the targeted path and the estimation of position, speed and acceleration signals from X, Y and Z axes in a subsequent pre-determined prediction interval.

    Step 3 Performing by the ego motorcycle on-board detection processing unit (DPU) the processing of the video stream acquired in step 2 from the advanced driver assistance systems camera:

    • Applying image segmentation to the video stream for the purpose of identifying in it all the relevant objects;
    • Labelling the relevant objects in the segmented image, resulting in a processed video stream with the relevant objects labelled.

    Step 4 Creating by the ego motorcycle on-board detection processing unit of a fused environment road model based on the previous processed video stream and on the data received from the radar module.

    Step 5 Based on the fused environment road model of step 4 and based on part of the input data received in step 2:

    • inclination of the helmet received from the gyroscope;
    • geographical position from the GPS sensor;
    • the lateral, longitudinal acceleration signal and the yaw rate from the accelerometer and
    • speed from the speedometer,
    applying, by the ego motorcycle on-board detection processing unit, a simultaneous localization and mapping algorithm with the purpose of localizing the ego motorcycle in the fused environment road model and localizing the corresponding orientation of the smart helmet,
    resulting in the simultaneous localization and mapping of the motorcycle rider and its smart helmet (SH) in the fused environment road model.

    Step 6 Based on:

    • the processed video stream with the relevant objects labelled of step 3;
    • the simultaneous localization and mapping of the motorcycle rider and its smart helmet SH in the fused environment road model of step 5 and
    • the data from the ego motorcycle on-board unit received in step 2 regarding the other vehicles provided with corresponding vehicle on-board units including autonomous vehicles;
    • comparing and correlating the data regarding the simultaneous localization and mapping of the motorcycle rider and its smart helmet in the fused environment road model as resulted from step 5 with the data received in step 2 from the ego motorcycle on-board unit regarding the other vehicles provided with corresponding vehicle on-board units including autonomous vehicles,
    • detecting in said processed video stream each autonomous vehicle provided with the corresponding autonomous vehicle identifier, and
    • marking each detected autonomous vehicle in the processed video stream,
    resulting in a processed video stream with detected marked autonomous vehicles driving in the field of view of the motorcycle rider.

    Step 7 Applying correction to the marking of each detected autonomous vehicle in the processed video stream for ensuring a greater accuracy of marking of the detected autonomous vehicles on the processed video stream, and sending via the ego motorcycle bus system data regarding said marked autonomous vehicles to the smart helmet visor,
    resulting in a corrected video stream with detected marked autonomous vehicles driving in the field of view of the motorcycle rider.

    Step 8 Displaying on the smart helmet visor the marked autonomous vehicles in a manner understandable by the motorcycle rider.



    [0019] In a third aspect of the invention there is provided a computer program comprising instructions which, when the program is executed on an ego motorcycle on-board detection processing unit, cause said ego motorcycle on-board detection processing unit to execute steps 2 to 7 of the method.

    Advantages of the invention



    [0020] The main advantages of this invention are the following:
    • Providing the motorcycle rider with the possibility to become aware of the autonomous vehicles that drive in his field of view, so that he can take special precautions.
    • Carrying out all computations on board, in the ego motorcycle detection processing unit DPU, without the need for any infrastructure components such as radio-frequency tags, smart traffic signs or light-emitting instruments, which makes the system and the method a self-contained solution.

    Brief description of the drawings



    [0021] 

    Fig. 1 refers to the representation on the smart helmet visor of the vehicles driving in the motorcycle rider's field of view in the prior art;

    Fig. 2 refers to a diagram of the system and method according to the invention;

    Fig. 3 refers to the representation on the smart helmet visor of the vehicles driving in the motorcycle rider's field of view, with the display of the autonomous vehicles as a consequence of applying the method of the invention in the system according to the invention.



    [0022] List of references in the drawings:
    SH: Smart helmet of the motorcycle rider
    ADASC: Advanced driver assistance systems camera
    RM: Radar module
    GYRO: Gyroscope
    GPSS: Global Positioning System (GPS) sensor
    ACC: Acceleration sensor
    SPEEDO: Speedometer
    MDU: Motorcycle dynamics unit (acceleration sensor + speedometer)
    DPU: On-board detection processing unit of the ego motorcycle
    V2X: Vehicle-to-Everything communication channel
    MOBU: Ego motorcycle on-board unit for communicating through the Vehicle-to-Everything communication channel
    VOBU: Vehicle on-board unit for communicating through the Vehicle-to-Everything communication channel
    AVI: Autonomous vehicle identifier
    BS: Ego motorcycle bus system
    tn: regular interval of time for the sequences of the method

    Detailed description and example of realization



    [0023] With reference to Fig. 2, the ego motorcycle on-board awareness raising system is placed in an ensemble consisting of an ego motorcycle and an ego motorcycle rider smart helmet SH. Some components of the system are placed in the ego motorcycle, whereas other components are placed in the smart helmet SH of the motorcycle rider, as will be detailed hereafter.

    [0024] Said ego motorcycle rider's smart helmet SH comprises an advanced driver assistance systems camera ADASC, alternatively called camera, acquiring video images and a smart helmet SH visor for displaying said video images as well as other awareness information such as images and/or warning messages addressed to the motorcycle rider.

    [0025] The advanced driver assistance systems camera ADASC used in this invention has at least the following characteristics: a resolution of 2 MP (megapixels) and a frame rate of 30 fps (frames per second). Said camera is mounted in/on the motorcycle rider's smart helmet SH in a known way.

    [0026] The advanced driver assistance systems camera ADASC used in this invention has the field of view facing forward, that is, in the direction of movement of the motorcycle rider. Inclination and/or rotation of the motorcycle rider's head has the effect of changing the field of view.

    [0027] Said ego motorcycle on-board awareness raising system according to the invention further comprises the following components:
    • a radar/radar module RM;
    • a gyroscope GYRO;
    • a GPS sensor GPSS;
    • an acceleration sensor ACC;
    • a speedometer SPEEDO;
    • an ego motorcycle on-board unit MOBU;
    • an ego motorcycle on-board detection processing unit DPU;
    • at least one ego motorcycle bus system BS.


    [0028] The definition of the field of view of the motorcycle rider facing forward includes the angle of the field of view and the distances with respect to the vehicles driving ahead of the motorcycle rider. Both the angle and the distances are pre-determined depending on the characteristics of the camera and of the radar module RM, respectively. The angle of the field of view may be up to 150° inclusive, measured in the horizontal plane.

    [0029] The radar module RM is placed in front of the ego motorcycle facing forward. It is configured to measure the distance between the ego motorcycle and the vehicles driving in the ego motorcycle's field of view, said distance typically ranging between 5 m and 200 m inclusive. Any radar used in the automotive industry may be used in the invention provided that it is configured to send the measurements to the ego motorcycle on-board detection processing unit DPU.

    [0030] The gyroscope GYRO is placed within the smart helmet SH. It is configured to determine the direction and inclination of the field of view of the smart helmet depending on the direction in which the rider looks. Any gyroscope used in the automotive industry may be used in the invention, provided that it fits within the smart helmet SH and provided that it is configured to send the result of determinations to the ego motorcycle on-board detection processing unit.

    [0031] The GPS sensor GPSS is configured to determine the geographical position of the ego motorcycle. Any GPS sensor used in the automotive industry may be used in the invention, provided that it is configured to send the result of determinations to the ego motorcycle on-board detection processing unit DPU.

    [0032] In a preferred embodiment, the GPS sensor GPSS may be placed in the ego motorcycle as a component of said motorcycle, as provided by the motorcycle manufacturer.

    [0033] In an alternative preferred embodiment, the ego motorcycle is not provided with a GPS sensor GPSS, but the ego motorcycle on-board unit MOBU is provided by its manufacturer with a GPS sensor GPSS. In this case, said GPS sensor GPSS of the ego motorcycle on-board unit MOBU is the one configured to determine the geographical position of the ego motorcycle and to send the result of determinations to the ego motorcycle on-board detection processing unit DPU.

    [0034] In an alternative preferred embodiment, neither the ego motorcycle nor the ego motorcycle on-board unit MOBU is provided with a GPS sensor GPSS. In this case, the GPS sensor GPSS may be provided by a rider's smartphone configured to determine the geographical position of the ego motorcycle and to send the result of determinations to the ego motorcycle on-board detection processing unit DPU.

    [0035] The multiple examples of GPS sensor GPSS as shown in the alternative embodiments above have the advantage of providing flexibility to the system and allowing it to be used in a wider range of situations, irrespective of whether the ego motorcycle is provided with its built-in GPS sensor GPSS.

    [0036] The acceleration sensor ACC is placed in the ego motorcycle in the usual place(s). It is configured to measure the acceleration signal of the ego motorcycle on all three axes X, Y, Z. Any acceleration sensor used in the automotive industry measuring the acceleration signal of the ego motorcycle on all three axes X, Y, Z may be used in the invention provided that it is configured to send the measurements to the ego motorcycle on-board detection processing unit DPU.

    [0037] The speedometer SPEEDO is placed in the ego motorcycle in the usual place(s). It is configured to measure the speed of the ego motorcycle. Any speedometer used in the automotive industry may be used in the invention provided that it is configured to send the measurements to the ego motorcycle on-board detection processing unit DPU.

    [0038] It is possible to combine the acceleration sensor ACC and the speedometer SPEEDO into a single sensor, shown in Fig. 2 as the motorcycle dynamics unit MDU. This has the advantage of using less space in the motorcycle, which matters given the general shortage of space on a motorcycle, and the advantage of synchronizing the rate at which the acceleration sensor ACC and the speedometer SPEEDO provide data.

    [0039] The use of the motorcycle dynamics unit MDU can be combined with any of the possibilities of use of the GPS sensor GPSS within the system, having the advantage of flexibility.

    [0040] The ego motorcycle on-board unit MOBU is configured to communicate via the Vehicle-to-Everything V2X communication channel with other vehicles, including other motorcycle riders.

    [0041] In general, communication via the Vehicle-to-Everything V2X communication channel requires that all communicating vehicles be provided with corresponding vehicle on-board units VOBU, said vehicle on-board units VOBU allowing sending and receiving messages through said Vehicle-to-Everything V2X communication channel.

    [0042] The initial configuration of the ego motorcycle on-board unit MOBU is the one commonly known in the state of the art for the vehicle on-board unit VOBU. The ego motorcycle on-board unit MOBU used in the invention is further configured to send, via the at least one ego motorcycle bus system BS, the data received from other vehicles' vehicle on-board units VOBU to the ego motorcycle on-board detection processing unit DPU.

    [0043] It is known that autonomous vehicles are provided with vehicle on-board units VOBU. Additionally, it is known that each autonomous vehicle must be provided with an autonomous vehicle identifier AVI.

    [0044] In the invention, the ego motorcycle on-board unit MOBU is configured to read the autonomous vehicle identifier AVI of each autonomous vehicle.

    [0045] The ego motorcycle on-board unit MOBU is configured to receive periodically from other vehicles' vehicle on-board units VOBU, including those of autonomous vehicles, the following information for each vehicle: geographical position; speed; acceleration signal on the three axes.

    [0046] In addition, the ego motorcycle on-board unit MOBU is configured to receive periodically, apart from the above-mentioned information, the following information for each autonomous vehicle for which the autonomous vehicle identifier was received and read: targeted path; estimation of position, speed and acceleration in a subsequent pre-determined prediction interval.
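
    By way of a non-limiting illustration only, the information exchanged over the V2X channel can be modelled as below; the patent does not prescribe a concrete message format, so all class and field names in this Python sketch are hypothetical:

```python
# Hypothetical data model for the V2X payloads described above; the patent
# does not define a concrete message format.
from dataclasses import dataclass
from typing import List, Tuple

Vec3 = Tuple[float, float, float]   # signal on the X, Y, Z axes

@dataclass
class VobuStatus:
    """Periodic status received from any vehicle on-board unit VOBU."""
    vehicle_id: str
    position: Tuple[float, float]   # geographical position (lat, lon)
    speed: float                    # m/s
    acceleration: Vec3              # m/s^2 on the X, Y, Z axes

@dataclass
class AutonomousVehicleStatus(VobuStatus):
    """Extended status received only from autonomous vehicles."""
    avi: str                                   # autonomous vehicle identifier AVI
    targeted_path: List[Tuple[float, float]]   # planned waypoints
    predicted_position: Tuple[float, float]    # estimates at the end of the
    predicted_speed: float                     # subsequent pre-determined
    predicted_acceleration: Vec3               # prediction interval
```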

    [0047] Fig. 2 places schematically the ego motorcycle on-board unit MOBU in the same category as the sensors, as the data received by the ego motorcycle on-board unit MOBU about other vehicles provided with vehicle on-board unit VOBU, including autonomous vehicles, is used by the method according to the invention in the same way as the information from the other sensors.

    [0048] The ego motorcycle on-board detection processing unit DPU is specially configured to carry out the detection of the autonomous vehicles. Whilst all the other components of the system according to the invention already exist either on the ego motorcycle or on the ego motorcycle rider's smart helmet SH, being merely specially adapted for the invention, the ego motorcycle on-board detection processing unit DPU does not exist in the absence of the invention.

    [0049] The ego motorcycle on-board detection processing unit DPU is configured to receive input from all the categories of sensors, configured to detect autonomous vehicles driving in the ego motorcycle's field of view and further configured to send said result of detection of the autonomous vehicles to the smart helmet SH visor in order to be displayed, as it will be further detailed in the description of the steps of the method.

    [0050] In order to carry out the majority of the steps of the method, the ego motorcycle on-board detection processing unit DPU comprises a dedicated processor having a clock frequency above 1000 MHz and a capacity to store information of at least 1-2 GB, and also comprises at least one non-volatile memory.

    [0051] The ego motorcycle on-board detection processing unit DPU is placed in the motorcycle, for example, in a location suitable for electronic control units.

    [0052] The at least one ego motorcycle bus system BS is configured to interconnect all the components of the system: the advanced driver assistance camera ADASC, the radar module RM, the gyroscope GYRO, the GPS sensor GPSS, the acceleration sensor ACC, the speedometer SPEEDO, the ego motorcycle on-board unit MOBU, the ego motorcycle on-board detection processing unit DPU and the smart helmet SH visor.

    [0053] The at least one ego motorcycle bus system BS may be any kind of vehicle bus used in the automotive industry, using communication protocols such as, but not limited to, CAN bus, FlexRay, Ethernet or Bluetooth. Depending on the particular configuration of said components of the system, more than one motorcycle bus system BS may be used to interconnect various components of the system, including the case when a different communication protocol is used for each bus system BS.
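
    As a non-limiting sketch of how the DPU might attach to one such bus, the following Python fragment assumes a Linux SocketCAN interface named can0 and the third-party python-can package; the arbitration IDs and frame assignments are hypothetical:

```python
# Minimal, assumption-laden sketch: SocketCAN via the python-can package.
import can

def read_sensor_frames(channel: str = "can0"):
    """Yield (name, raw payload) pairs from one ego motorcycle bus system BS."""
    with can.Bus(channel=channel, interface="socketcan") as bus:
        while True:
            msg = bus.recv(timeout=1.0)        # wait up to 1 s for a frame
            if msg is None:
                continue                       # no traffic yet; keep listening
            if msg.arbitration_id == 0x120:    # hypothetical SPEEDO frame ID
                yield ("speed", bytes(msg.data))
            elif msg.arbitration_id == 0x121:  # hypothetical ACC frame ID
                yield ("acceleration", bytes(msg.data))
```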

    [0054] In a second aspect of the invention there is provided a method for detecting and displaying the presence of autonomous vehicles driving in the field of view of an ego motorcycle rider using the ego motorcycle on-board awareness raising system. The method consists of a sequence of eight steps carried out at regular intervals of time tn while said motorcycle rider is in traffic.

    [0055] The method is described below in connection with examples of values of the parameters. Said values are given for exemplification only and shall not be considered as limiting the invention, because the range of the parameters' values depends on the configuration of each of the components of the system.

    [0056] A non-limiting example of the regular interval of time tn is between 20 ms and 100 ms.

    [0057] In the first step of the method, the ego motorcycle on-board unit MOBU broadcasts messages through the Vehicle-to-Everything V2X communication channel, said messages having the purpose of sending to other vehicles provided with corresponding vehicle on-board units VOBU data about the position of the ego motorcycle, of gathering data about the presence of said other vehicles provided with corresponding vehicle on-board units VOBU, and of gathering data about which ones of said other vehicles are provided with a corresponding autonomous vehicle identifier AVI.

    [0058] The broadcasting is carried out regularly, covering a pre-determined broadcasting range. In a non-limiting example, the broadcast is carried out every 100 ms and the broadcasting range is around 200 m around the ego motorcycle in all directions, thus including the field of view.
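
    A minimal Python sketch of this first step is given below; the radio layer is abstracted behind a hypothetical v2x handle and the helper functions are illustrative, with only the cadence (every 100 ms) and the payload following the description:

```python
# Sketch of step 1: periodic V2X broadcasting by the MOBU. `v2x`, `gps` and
# the two handle_* helpers are hypothetical abstractions, not from the patent.
import time

BROADCAST_PERIOD_S = 0.100          # non-limiting example: every 100 ms

def broadcast_loop(v2x, gps, handle_autonomous_vehicle, handle_ordinary_vobu):
    while True:
        v2x.broadcast({"type": "ego_position",
                       "position": gps.position()})   # announce the motorcycle
        for msg in v2x.poll():                        # replies from other VOBUs
            if msg.get("avi") is not None:            # AVI present: autonomous
                handle_autonomous_vehicle(msg)
            else:
                handle_ordinary_vobu(msg)
        time.sleep(BROADCAST_PERIOD_S)
```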

    [0059] In the second step of the method, the ego motorcycle on-board detection processing unit DPU receives via the at least one ego motorcycle bus system BS signals as input data from the sensors:
    • video streams acquired from an advanced driver assistance systems camera ADASC. In a non-limiting example, said video streams are received at an acquisition interval of up to 33 ms inclusive;
    • the distances to all vehicles situated in the range of a radar module RM from said radar module. In a non-limiting example, data from the radar module RM is received at equally-spaced time intervals of 50-60 ms inclusive and refers to the distances to all the vehicles, irrespective of whether or not they are provided with corresponding vehicle on-board units VOBU, as the radar module RM is not configured to distinguish which vehicle is provided with a vehicle on-board unit VOBU;
    • speed of the ego motorcycle from a speedometer SPEEDO. In a non-limiting example, data about the speed is received at equally-spaced time intervals of 10 ms;
    • acceleration signals from the X, Y and Z axes of the ego motorcycle from an accelerometer ACC. In a non-limiting example, data about acceleration signals is received at equally-spaced time intervals of 10 ms. It is customary in the automotive industry to synchronize the receiving of data from the speedometer SPEEDO and the accelerometer ACC, irrespective of whether the speedometer SPEEDO and the accelerometer ACC are combined in one sensor;
    • orientation of the smart helmet of the ego motorcycle from a gyroscope GYRO. In a non-limiting example, data about the orientation is received at equally-spaced time intervals of 10-15 ms;
    • ego motorcycle geographical position from a GPS sensor GPSS. In a non-limiting example, data about the geographical position is received at equally-spaced time intervals of 50-60 ms inclusive;
    • data from the ego motorcycle on-board unit MOBU regarding the geographical position, speed and acceleration signals from the X, Y and Z axes of other vehicles provided with corresponding vehicle on-board units VOBU, including autonomous vehicles;
    • data from the ego motorcycle on-board unit MOBU for each autonomous vehicle: the autonomous vehicle identifier AVI, the targeted path and the estimation of position, speed and acceleration signals from the X, Y and Z axes in the subsequent pre-determined prediction interval. Typically, the pre-determined prediction interval ranges between 10 s and 20 s from the moment data is sent by the ego motorcycle on-board unit MOBU and refers to a best prediction of the position, speed and acceleration signals from the X, Y and Z axes of said autonomous vehicle at the end of said subsequent pre-determined prediction interval.


    [0060] Sensors send data to the ego motorcycle on-board detection processing unit DPU at different rates, depending on the specific configurations of the components. It is not mandatory for the invention that the rates be synchronized.
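
    One simple way for the DPU to cope with these unsynchronized rates is to cache the most recent sample of each input together with its arrival time, and to read a consistent snapshot once per interval tn; a minimal sketch, with illustrative names:

```python
# Latest-value cache for asynchronous sensor inputs; names are illustrative.
import time
import threading

class SensorCache:
    def __init__(self):
        self._lock = threading.Lock()
        self._latest = {}                  # input name -> (timestamp, value)

    def update(self, name, value):
        """Called whenever any sensor delivers a new sample."""
        with self._lock:
            self._latest[name] = (time.monotonic(), value)

    def snapshot(self):
        """Consistent copy of all latest samples for one processing cycle tn."""
        with self._lock:
            return dict(self._latest)
```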

    [0061] In the third step of the method, the ego motorcycle on-board detection processing unit DPU performs the processing of the video stream acquired in step 2 from the advanced driver assistance systems camera ADASC, including steps such as:
    • Applying image segmentation to the video stream for the purpose of identifying in it all the relevant objects;
    • Labelling of the relevant objects in the segmented image.


    [0062] Image segmentation and labelling, in general, aim to reveal the relevant objects in the image corresponding to each video stream. In the invention, the relevant objects are all vehicles in the field of view, irrespective of whether or not they are provided with corresponding vehicle on-board units VOBU.

    [0063] The video stream is processed at the same rate as it is acquired. The resulting processed video stream with the relevant objects labelled does not yet contain the detection of the autonomous vehicles.
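
    The patent does not name a particular segmentation technique; purely as a sketch, assuming a semantic-segmentation model segment(frame) that returns a per-pixel class mask, the labelling of the vehicles could look as follows:

```python
# Sketch of step 3; `segment` and VEHICLE_CLASS are assumptions, not from
# the patent. Requires numpy and scipy.
import numpy as np
from scipy import ndimage

VEHICLE_CLASS = 1                      # hypothetical class index for "vehicle"

def label_relevant_objects(frame: np.ndarray, segment) -> list:
    """Return one bounding box (x0, y0, x1, y1) per vehicle in a video frame."""
    mask = segment(frame)                             # H x W class indices
    labels, _ = ndimage.label(mask == VEHICLE_CLASS)  # connected components
    boxes = []
    for rows, cols in ndimage.find_objects(labels):   # one slice pair per object
        boxes.append((cols.start, rows.start, cols.stop, rows.stop))
    return boxes
```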

    [0064] In the fourth step of the method, the ego motorcycle on-board detection processing unit DPU creates a fused environment road model based on the processed video stream carried out in the third step and on the data received from the radar module RM in the second step.

    [0065] The fused environment road model thus contains more information than the processed video stream, as the distances to the relevant objects revealed in the processed video stream are now added; the detection of the autonomous vehicles, however, is not yet carried out.
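
    The patent leaves the fusion technique open; one common, non-limiting choice, assuming the radar module RM also reports an azimuth per target (as automotive radars typically do), is to associate each labelled object with the radar target of nearest bearing:

```python
# Sketch of step 4: attaching radar distances to the labelled camera objects.
# Association by azimuth is an assumption; the patent does not fix a method.
import math

def fuse(boxes, radar_targets, image_width, hfov_deg=150.0):
    """Build fused objects {box, distance, azimuth} from boxes and radar data."""
    fused = []
    for (x0, y0, x1, y1) in boxes:
        cx = 0.5 * (x0 + x1)                 # horizontal centre of the object
        bearing = (cx / image_width - 0.5) * math.radians(hfov_deg)
        best = min(radar_targets,            # radar target of nearest bearing
                   key=lambda t: abs(t["azimuth"] - bearing))
        fused.append({"box": (x0, y0, x1, y1),
                      "distance": best["range"],     # metres, from the RM
                      "azimuth": best["azimuth"]})
    return fused
```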

    [0066] In the fifth step of the method, based on the fused environment road model created in step 4 and based on the following input data received in step 2:
    • inclination of the smart helmet SH received from the gyroscope GYRO,
    • geographical position from the GPS sensor GPSS,
    • the lateral, longitudinal acceleration signal and the yaw rate from the accelerometer ACC and
    • speed from the speedometer SPEEDO,
    the ego motorcycle on-board detection processing unit DPU applies a simultaneous localization and mapping SLAM algorithm with the purpose of localizing the ego motorcycle in the fused environment road model and localizing the corresponding orientation of the smart helmet SH.

    [0067] The simultaneous localization and mapping SLAM algorithm includes correlation, by the ego motorcycle on-board detection processing unit DPU, of the data received from the sensors: the inclination of the helmet received from the gyroscope GYRO, the geographical position from the GPS sensor GPSS, the lateral and longitudinal acceleration signals and the yaw rate from the accelerometer ACC, and the speed from the speedometer SPEEDO. One non-limiting example is the correlation of the inclination of the helmet received from the gyroscope GYRO with the results provided by the radar module RM that are already processed in the fused environment road model: when the motorcycle rider moves his head, the inclination of the smart helmet SH changes and the video acquired by the camera changes accordingly.

    [0068] The result of this step is the simultaneous localization and mapping of the motorcycle rider and its smart helmet SH in the fused environment road model, which includes information about the distances to all vehicles driving in the field of view of the motorcycle rider as measured by the radar module RM; the detection of the autonomous vehicles is still not yet carried out.
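
    A complete SLAM implementation is outside the scope of a short example; purely as a sketch of the predict/correct structure such an algorithm shares, the ego pose can be dead-reckoned from speed and yaw rate and then pulled toward the GPS fix:

```python
# Skeleton of the step-5 localization only; a real SLAM algorithm also builds
# and matches the map. All parameter names are illustrative.
import math

def predict_pose(x, y, heading, speed, yaw_rate, dt):
    """Advance the ego pose over one interval tn with a unicycle model."""
    heading += yaw_rate * dt
    x += speed * math.cos(heading) * dt
    y += speed * math.sin(heading) * dt
    return x, y, heading

def correct_with_gps(x, y, gps_x, gps_y, gain=0.2):
    """Blend the dead-reckoned position toward the GPS fix."""
    return x + gain * (gps_x - x), y + gain * (gps_y - y)
```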

    [0069] In the sixth step of the method, the ego motorcycle on-board detection processing unit DPU detects in the processed video stream of step 3 the autonomous vehicles provided with the corresponding autonomous vehicle identifier AVI.

    [0070] In order to do so, the ego motorcycle on-board detection processing unit DPU compares and correlates the data regarding the simultaneous localization and mapping of the motorcycle rider and its smart helmet SH in the fused environment road model as resulted from step 5 with the data received in step 2 from the ego motorcycle on-board unit MOBU regarding the other vehicles provided with corresponding vehicle on-board units VOBU, including autonomous vehicles. Specifically,
    • the simultaneous localization and mapping of the motorcycle rider and its smart helmet SH refers to the localization of the motorcycle rider and its smart helmet SH with respect to all vehicles driving in the field of view of the motorcycle rider within the range of the radar module RM, whether autonomous or not and whether or not provided with a vehicle on-board unit VOBU; in this case the distances from the motorcycle rider are measured by the radar module RM,
    • the data provided by the ego motorcycle on-board unit MOBU refer to the presence of only the vehicles provided with corresponding vehicle on-board units VOBU, including autonomous vehicles, running within said pre-determined broadcasting range; in this case the distances from the motorcycle rider are determined as a difference between the GPS location of each vehicle provided with a corresponding vehicle on-board unit VOBU and the GPS location of the ego motorcycle.


    [0071] The comparison and the correlation have as result the detection in said processed video stream of the autonomous vehicles driving in the field of view of the motorcycle rider that satisfy both conditions:
    • the autonomous vehicles are placed within said pre-determined broadcasting range, because otherwise their autonomous vehicle identifier AVI cannot be read by the ego motorcycle on-board unit MOBU, and
    • the autonomous vehicles are placed within the range of the radar module RM, which is why said autonomous vehicles are found among the relevant objects labelled in step 3, based on which the fused environment road model was created in step 4, serving for the application of the simultaneous localization and mapping SLAM algorithm in step 5.


    [0072] The autonomous vehicles detected are then marked by known marking techniques on the processed video stream. The result of this step is a processed video stream with detected marked autonomous vehicles driving in the field of view of the motorcycle rider.
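
    As a non-limiting sketch of this comparison and correlation, each autonomous vehicle announced over V2X can be matched to the fused-model object whose radar-measured distance best agrees with the GPS-derived distance; the 5 m gate is an illustrative value:

```python
# Sketch of step 6: correlating V2X reports with fused radar/camera objects.
import math

def haversine_m(p, q):
    """Great-circle distance in metres between two (lat, lon) points (degrees)."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*p, *q))
    a = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2.0 * 6371000.0 * math.asin(math.sqrt(a))

def mark_autonomous_vehicles(fused_objects, av_reports, ego_position,
                             max_error_m=5.0):
    """Return {avi, box, distance} for every AV matched to a fused object."""
    marked = []
    for av in av_reports:                     # one report per AVI received
        d_gps = haversine_m(ego_position, av["position"])
        best = min(fused_objects, key=lambda o: abs(o["distance"] - d_gps))
        if abs(best["distance"] - d_gps) <= max_error_m:
            marked.append({"avi": av["avi"], "box": best["box"],
                           "distance": best["distance"]})
    return marked
```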

    [0073] In the seventh step of the method, corrections are applied to the detection carried out in the previous step.

    [0074] Such corrections are needed for various reasons, such as but not limited to:
    • the sensors send information at different rates to the ego motorcycle on-board detection processing unit DPU;
    • during the time elapsed between carrying out step 1 of the method and step 6 of the method, all vehicles and the ego motorcycle rider have moved from their positions;
    • the distance is, on the one hand, measured by the radar module RM and, on the other hand, determined as a difference of GPS locations as measured by the GPS sensor GPSS.


    [0075] The correction of the position marked for each autonomous vehicle detected is carried out by using a general kinematic estimation algorithm taking into account the values of the subsequent pre-determined prediction interval, and has the purpose of ensuring a greater accuracy of marking of the detected autonomous vehicles on the processed video stream. In case the discrepancy between the distance measured by the radar module RM and the distance determined as a difference of GPS locations measured by the GPS sensor GPSS exceeds a pre-determined ratio, the measurement of the radar module RM is considered the more accurate and the determinations of the GPS sensor GPSS must be adjusted to match the measurements of the radar module RM.
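
    As an illustration of both ideas, assuming relative speed and acceleration are available from the V2X data, a constant-acceleration estimate moves each mark forward in time, and a simple ratio test arbitrates between the radar and GPS distances; the 1.1 ratio is an assumed value, the patent only requiring a pre-determined one:

```python
# Sketch of the step-7 correction; thresholds and names are illustrative.
def predict_distance(d0, v_rel, a_rel, dt):
    """Constant-acceleration estimate of the relative distance after dt seconds."""
    return d0 + v_rel * dt + 0.5 * a_rel * dt * dt

def reconcile(d_radar, d_gps, max_ratio=1.1):
    """Prefer the radar when the two distance estimates diverge too much."""
    if min(d_radar, d_gps) <= 0.0:
        return d_radar                       # degenerate case: trust the radar
    if max(d_radar, d_gps) / min(d_radar, d_gps) > max_ratio:
        return d_radar                       # radar deemed the more accurate
    return 0.5 * (d_radar + d_gps)           # otherwise blend the two estimates
```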

    [0076] The result of this step is a corrected video stream with detected and marked autonomous vehicles driving in the field of view of the motorcycle rider.

    [0077] Once the correction is carried out, the ego motorcycle on-board detection processing unit DPU sends via the at least one ego motorcycle bus system BS to the smart helmet SH visor the corrected video stream with detected marked autonomous vehicles driving in the field of view of the motorcycle rider.

    [0078] In the eighth step of the method, the smart helmet SH visor displays, in an understandable manner by the motorcycle rider, the autonomous vehicles provided with corresponding autonomous vehicle identifier AVI.

    [0079] Comparing Fig. 1, illustrating the image as displayed on the smart helmet SH visor in the absence of the invention, with Fig. 3, illustrating the image as displayed on the smart helmet SH visor of the motorcycle rider by applying the invention, one can see that in Fig. 3 the autonomous vehicles appear represented with a dotted line, said dotted line being one non-limiting example of displaying the detected and marked autonomous vehicles in a manner understandable by the motorcycle rider.
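
    One possible way to render such a dotted-line marking on each video frame before it reaches the visor is sketched here with OpenCV (cv2); the dash geometry and colour are illustrative choices, not taken from the patent:

```python
# Sketch of the step-8 rendering: a dotted rectangle around each detected
# autonomous vehicle, as suggested by Fig. 3. Requires opencv-python.
import cv2

def draw_dotted_box(frame, box, color=(0, 255, 0), step=12, dot_radius=2):
    """Draw a dotted bounding box (x0, y0, x1, y1) on a BGR frame in place."""
    x0, y0, x1, y1 = box
    for x in range(x0, x1 + 1, step):          # top and bottom edges
        cv2.circle(frame, (x, y0), dot_radius, color, -1)
        cv2.circle(frame, (x, y1), dot_radius, color, -1)
    for y in range(y0, y1 + 1, step):          # left and right edges
        cv2.circle(frame, (x0, y), dot_radius, color, -1)
        cv2.circle(frame, (x1, y), dot_radius, color, -1)
    return frame
```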

    [0080] In a third aspect of the invention there is provided a computer program which, when the program is executed on an ego motorcycle on-board detection processing unit DPU of any of the preferred embodiments, causes said ego motorcycle on-board detection processing unit DPU to execute steps 2 to 7 of the method.

    [0081] While the invention was described in detail in connection with preferred embodiments, those skilled in the art will appreciate that changes may be made to adapt it to a particular situation without departing from the scope of the invention as defined by the claims.


    Claims

    1. Ego motorcycle on-board awareness raising system placed in an ensemble consisting of an ego motorcycle and an ego motorcycle rider smart helmet (SH), said smart helmet (SH) comprising an advanced driver assistance systems camera (ADASC) acquiring video images having the field of view facing forward and said smart helmet (SH) comprising a smart helmet (SH) visor for displaying said video images to the motorcycle rider as well as other awareness information, characterized in that said ego motorcycle on-board awareness raising system further comprises:

    - a radar module (RM) placed in front of the ego motorcycle facing forward, configured to measure the distance between the ego motorcycle and the vehicles driving in the ego motorcycle's field of view and configured to send the measurements to an ego motorcycle on-board detection processing unit (DPU);

    - a gyroscope (GYRO) placed within the smart helmet (SH) configured to determine the direction and inclination of the field of view of the smart helmet (SH) depending on the direction in which the rider looks and configured to send the measurements to the ego motorcycle on-board detection processing unit (DPU);

    - a GPS sensor (GPSS) placed in the ego motorcycle configured to determine the geographical position of the ego motorcycle and configured to send the measurements to the ego motorcycle on-board detection processing unit (DPU);

    - an acceleration sensor (ACC) placed in the ego motorcycle, configured to measure the acceleration signal of the ego motorcycle on three axes (X, Y, Z) and configured to send the measurements to the ego motorcycle on-board detection processing unit (DPU);

    - a speedometer (SPEEDO) placed in the ego motorcycle, configured to measure the speed of the ego motorcycle and configured to send the measurements to the ego motorcycle on-board detection processing unit (DPU);

    - an ego motorcycle on-board unit (MOBU) configured:

    - to communicate via the Vehicle-to-Everything V2X communication channel with other vehicles provided with a corresponding vehicle on-board unit (VOBU) for the purpose of receiving the geographical position, speed and acceleration signal on three axes (X, Y, Z) of each said other vehicle provided with a corresponding vehicle on-board unit (VOBU);

    - to receive and read an autonomous vehicle identifier (AVI) sent via the Vehicle-to-Everything V2X communication channel by the corresponding vehicle on-board unit (VOBU) of each respective autonomous vehicle, together with the targeted path and estimation of position, speed and acceleration on three axes (X, Y, Z) in a subsequent pre-determined prediction interval, for each autonomous vehicle for which the autonomous vehicle identifier was received and read;

    - to send the measurements to the ego motorcycle on-board detection processing unit (DPU);

    - the ego motorcycle on-board detection processing unit (DPU) configured to carry out detection of the autonomous vehicles driving in the ego motorcycle's field of view and further configured to send said result of detection to the smart helmet (SH);

    - at least one ego motorcycle bus system (BS) configured to interconnect the advanced driver assistance camera (ADASC), the radar module (RM), the gyroscope (GYRO), the GPS sensor (GPSS), the acceleration sensor (ACC), the speedometer (SPEEDO), the ego motorcycle on-board unit (MOBU), the ego motorcycle on-board detection processing unit (DPU) and the smart helmet (SH) visor.


     
    2. Ego motorcycle on-board awareness raising system of claim 1, wherein the GPS sensor (GPSS) is placed in the ego motorcycle as a component of said motorcycle.
     
    3. Ego motorcycle on-board awareness raising system of claim 1, wherein the GPS sensor (GPSS) is a component of the ego motorcycle on-board unit (MOBU).
     
    4. Ego motorcycle on-board awareness raising system of claim 1, wherein the GPS sensor (GPSS) is a component of a motorcycle rider's smartphone.
     
    5. The ego motorcycle on-board awareness raising system of any of the claims 1 to 4, wherein the acceleration sensor (ACC) and the speedometer (SPEEDO) are combined in a single sensor forming a motorcycle dynamics unit (MDU).
     
    6. Method for detecting and displaying presence of autonomous vehicles driving in the field of view of an ego motorcycle rider wearing an ego motorcycle rider smart helmet (SH) provided with a smart helmet (SH) visor, using the ego motorcycle on-board awareness raising system, characterized in that the following repetitive sequence of steps of the method is carried out at regular intervals of time tn:

    Step 1 Broadcasting by an ego motorcycle on-board unit (MOBU) of messages through the Vehicle-to-Everything V2X communication channel, said messages having the purpose to send to other vehicles provided with corresponding vehicle on-board units (VOBU) data about the position of the ego motorcycle and having the purpose to gather data about the presence of said other vehicles provided with corresponding vehicle on-board units (VOBU) and to gather data about which ones of said other vehicles are provided with a corresponding autonomous vehicle identifier (AVI).

    Step 2 Receiving by an ego motorcycle on-board detection processing unit (DPU) of input data from the sensors via the at least one ego motorcycle bus system (BS):

    - video streams acquired from an advanced driver assistance systems camera (ADASC);

    - distances to all vehicles situated in front of the ego motorcycle from a radar module (RM), provided that said vehicles are driving within the range of the radar module (RM);

    - speed of the ego motorcycle from a speedometer (SPEEDO);

    - acceleration signals from X, Y and Z axes of the ego motorcycle from an accelerometer (ACC);

    - orientation of the smart helmet (SH) of the ego motorcycle from a gyroscope (GYRO);

    - ego motorcycle geographical position from a GPS sensor (GPSS);

    - data from the ego motorcycle on-board unit (MOBU) regarding the geographical position, speed and acceleration signals from the X, Y and Z axes of other vehicles provided with corresponding vehicle on-board units (VOBU) including autonomous vehicles;

    - data from the ego motorcycle on-board unit (MOBU) for each autonomous vehicle: the autonomous vehicle identifier (AVI), the targeted path and the estimation of position, speed and acceleration signals from the X, Y and Z axes in a subsequent pre-determined prediction interval.

    Step 3 Performing by the ego motorcycle on-board detection processing unit (DPU) the processing of the video stream acquired in step 2 from the advanced driver assistance systems camera (ADASC):

    - Applying image segmentation to the video stream for the purpose of identifying in it all the relevant objects;

    - Labelling of the relevant objects in the segmented image, resulting in a processed video stream with the relevant objects labelled.

    Step 4 Creating by the ego motorcycle on-board detection processing unit (DPU) of a fused environment road model based on the previous processed video stream and on the data received from the radar module (RM).

    Step 5 Based on the fused environment road model of step 4 and based on part of the input data received in step 2:

    - inclination of the helmet received from the gyroscope (GYRO);

    - geographical position from the GPS sensor (GPSS);

    - the lateral, longitudinal acceleration signal and the yaw rate from the accelerometer (ACC) and

    - speed from the speedometer (SPEEDO),

    applying, by the ego motorcycle on-board detection processing unit (DPU), a simultaneous localization and mapping (SLAM) algorithm with the purpose of localizing the ego motorcycle in the fused environment road model and localizing the corresponding orientation of the smart helmet (SH), resulting in the simultaneous localization and mapping of the motorcycle rider and its smart helmet (SH) in the fused environment road model.

    Step 6 Based on:

    - the processed video stream with the relevant objects labelled of step 3;

    - the simultaneous localization and mapping of the motorcycle rider and its smart helmet (SH) in the fused environment road model of step 5 and

    - the data from the ego motorcycle on-board unit (MOBU) received in step 2 regarding the other vehicles provided with corresponding vehicle on-board units (VOBU) including autonomous vehicles;

    - comparing and correlating the data regarding simultaneous localization and mapping of the motorcycle rider and its smart helmet (SH) in the fused environment road model as resulted from step 5 with the data received in step 2 from the ego motorcycle on-board unit (MOBU) regarding the other vehicles provided with corresponding vehicle on-board unit (VOBU) including autonomous vehicles,

    - detecting in said processed video stream each autonomous vehicle provided with the corresponding autonomous vehicle identifier (AVI), and

    - marking each detected autonomous vehicle in the processed video stream,

    resulting in a processed video stream with detected marked autonomous vehicles driving in the field of view of the motorcycle rider.

    Step 7 Applying correction to the marking of each detected autonomous vehicle in the processed video stream for ensuring a greater accuracy of marking of the detected autonomous vehicles on the processed video stream, and sending via the ego motorcycle bus system (BS) data regarding said marked autonomous vehicles to the smart helmet (SH) visor,

    resulting in a corrected video stream with detected marked autonomous vehicles driving in the field of view of the motorcycle rider.

    Step 8 Displaying on the smart helmet (SH) visor the marked autonomous vehicles in a manner understandable by the motorcycle rider.


     
    7. A computer program comprising instructions which, when the program is executed on an ego motorcycle on-board detection processing unit (DPU) of the system of any of claims 1 to 5, cause said ego motorcycle on-board detection processing unit (DPU) to execute steps 2 to 7 of the method according to claim 6.
     


Claims

1. Ego motorcycle on-board awareness raising system, placed in an ensemble comprising an ego motorcycle and an ego motorcycle rider's smart helmet (SH), the smart helmet (SH) comprising an advanced driver assistance systems camera (ADASC) acquiring video images with a forward-facing field of view, and the smart helmet (SH) comprising a smart helmet (SH) visor for displaying the video images to the motorcycle rider, as well as other awareness information, characterized in that the ego motorcycle on-board awareness raising system further comprises:

- a radar module (RM), placed at the front of the ego motorcycle and facing forward, configured to measure the distance between the ego motorcycle and the vehicles driving in the field of view of the ego motorcycle, and configured to send the measurements to an ego motorcycle on-board detection processing unit (DPU);

- a gyroscope (GYRO), placed inside the smart helmet (SH), configured to determine the direction and inclination of the field of view of the smart helmet (SH) depending on the direction in which the rider is looking, and configured to send the measurements to the ego motorcycle on-board detection processing unit (DPU);

- a GPS sensor (GPSS), placed in the ego motorcycle, configured to determine the geographical position of the ego motorcycle, and configured to send the measurements to the ego motorcycle on-board detection processing unit (DPU);

- an acceleration sensor (ACC), placed in the ego motorcycle, configured to measure the acceleration signal of the ego motorcycle on three axes (X, Y, Z), and configured to send the measurements to the ego motorcycle on-board detection processing unit (DPU);

- a speedometer (SPEEDO), placed in the ego motorcycle, configured to measure the speed of the ego motorcycle, and configured to send the measurements to the ego motorcycle on-board detection processing unit (DPU);

- an ego motorcycle on-board unit (MOBU), configured:

- to communicate via the vehicle-to-everything (V2X) communication channel with other vehicles provided with a respective vehicle on-board unit (VOBU), for the purpose of receiving the geographical position, speed and acceleration signal on three axes (X, Y, Z) of each of the other vehicles provided with a respective vehicle on-board unit (VOBU);

- to receive and read an autonomous vehicle identifier (AVI) sent via the vehicle-to-everything (V2X) communication channel by the respective vehicle on-board unit (VOBU) of each respective autonomous vehicle, together with the intended path and an estimation of the position, speed and acceleration on three axes (X, Y, Z) in a subsequent predetermined prediction interval, for each autonomous vehicle for which the autonomous vehicle identifier was received and read;

- to send the measurements to the ego motorcycle on-board detection processing unit (DPU);

- the ego motorcycle on-board detection processing unit (DPU) being configured to carry out the detection of the autonomous vehicles driving in the field of view of the ego motorcycle, and further configured to send the result of the detection to the smart helmet (SH);

- at least one ego motorcycle bus system (BS), configured to interconnect the advanced driver assistance systems camera (ADASC), the radar module (RM), the gyroscope (GYRO), the GPS sensor (GPSS), the acceleration sensor (ACC), the speedometer (SPEEDO), the ego motorcycle on-board unit (MOBU), the ego motorcycle on-board detection processing unit (DPU) and the smart helmet (SH) visor.
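For illustration only, the following minimal Python sketch models the topology named in claim 1, in which the sensors, the MOBU, the DPU and the smart helmet (SH) visor exchange measurements over the ego motorcycle bus system (BS). The class names and the publish/subscribe mechanism are assumptions made for readability, not part of the claim.

```python
# Minimal sketch (assumptions throughout): a toy publish/subscribe stand-in
# for the ego motorcycle bus system (BS) interconnecting the claimed components.
from collections import defaultdict
from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class BusMessage:
    source: str   # e.g. "RM", "GYRO", "GPSS", "ACC", "SPEEDO", "MOBU"
    topic: str    # e.g. "distance", "helmet_orientation", "ego_position"
    payload: Any  # the raw measurement produced by the component

class EgoMotorcycleBus:
    """Delivers each published measurement to every subscribed component."""

    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable[[BusMessage], None]):
        self._subscribers[topic].append(handler)

    def publish(self, msg: BusMessage):
        for handler in self._subscribers[msg.topic]:
            handler(msg)

# The DPU would subscribe to every topic it fuses; the visor only to results.
bus = EgoMotorcycleBus()
bus.subscribe("distance", lambda m: print(f"DPU received {m.source}: {m.payload} m"))
bus.publish(BusMessage(source="RM", topic="distance", payload=17.4))
```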


     
2. Ego motorcycle on-board awareness raising system according to claim 1, wherein the GPS sensor (GPSS) can be placed in the ego motorcycle as a component of the motorcycle.

3. Ego motorcycle on-board awareness raising system according to claim 1, wherein the GPS sensor (GPSS) is a component of the ego motorcycle on-board unit (MOBU).

4. Ego motorcycle on-board awareness raising system according to claim 1, wherein the GPS sensor (GPSS) is a component of a smartphone of the motorcycle rider.

5. Ego motorcycle on-board awareness raising system according to any of claims 1 to 4, wherein the acceleration sensor (ACC) and the speedometer (SPEEDO) are combined in a single sensor forming a motorcycle dynamics unit (MDU).

6. Method for detecting and displaying the presence of autonomous vehicles driving in the field of view of an ego motorcycle rider wearing an ego motorcycle rider's smart helmet (SH) provided with a smart helmet (SH) visor, using the ego motorcycle on-board awareness raising system, characterized in that the following repetitive sequence of steps of the method is carried out at regular time intervals tn:

Step 1 Broadcasting, by an ego motorcycle on-board unit (MOBU), of messages through a vehicle-to-everything (V2X) communication channel, the messages having the purpose of sending, to other vehicles provided with respective vehicle on-board units (VOBU), data about the position of the ego motorcycle, and having the purpose of collecting data about the presence of the other vehicles provided with respective vehicle on-board units (VOBU) and of collecting data about which of the other vehicles are provided with a respective autonomous vehicle identifier (AVI).
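For illustration, the sketch below shows the kind of broadcast performed in step 1. JSON over UDP and the port number are assumptions; the claim prescribes a V2X channel, not a wire format, and a real deployment would use a standardized V2X message set instead.

```python
# Minimal sketch (wire format and port are assumptions): the MOBU broadcasts
# the ego position and requests presence and AVI data back from nearby VOBUs.
import json
import socket
import time

BROADCAST_ADDR = ("255.255.255.255", 37020)  # hypothetical port

def broadcast_presence(sock, lat, lon):
    message = {
        "type": "EGO_PRESENCE",
        "timestamp": time.time(),
        "position": {"lat": lat, "lon": lon},
        "collect": ["presence", "AVI"],  # data the MOBU wants reported back
    }
    sock.sendto(json.dumps(message).encode("utf-8"), BROADCAST_ADDR)

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
broadcast_presence(sock, 45.7489, 21.2087)
```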

Step 2 Receiving, by an ego motorcycle on-board detection processing unit (DPU), of input data from the sensors via the at least one ego motorcycle bus system (BS):

- video streams acquired from an advanced driver assistance systems camera (ADASC);

- distances to all the vehicles located in front of the ego vehicle, from a radar module (RM), provided that all these vehicles drive within the range of the radar module (RM);

- speed of the ego motorcycle from a speedometer (SPEEDO);

- acceleration signals from the X, Y and Z axes of the ego motorcycle from an accelerometer (ACC);

- orientation of the smart helmet (SH) of the ego motorcycle from a gyroscope (GYRO);

- geographical position of the ego motorcycle from a GPS sensor (GPSS);

- data from an ego motorcycle on-board unit (MOBU) regarding the geographical position, speed and acceleration signals from the X, Y and Z axes of other vehicles provided with respective vehicle on-board units (VOBU), including autonomous vehicles;

- data from the ego motorcycle on-board unit (MOBU) for each autonomous vehicle: the autonomous vehicle identifier (AVI), the intended path and the estimation of position, speed and acceleration signals from the X, Y and Z axes in a subsequent predetermined prediction interval.
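For illustration, one way the per-interval input data listed in step 2 could be bundled for the DPU; every field name below is an assumption chosen to mirror the list above.

```python
# Minimal sketch (field names are assumptions): one snapshot of the DPU
# inputs for a single time interval tn, mirroring the list in step 2.
from dataclasses import dataclass, field
from typing import List, Optional, Tuple

@dataclass
class RemoteVehicle:
    position: Tuple[float, float]                   # via MOBU over V2X
    speed: float                                    # m/s
    acceleration: Tuple[float, float, float]        # X, Y, Z axes
    avi: Optional[str] = None                       # autonomous vehicle identifier
    intended_path: List[Tuple[float, float]] = field(default_factory=list)

@dataclass
class DpuInputs:
    video_frame: bytes                              # from the ADASC
    radar_distances: List[float]                    # from the RM
    speed: float                                    # from the SPEEDO
    acceleration: Tuple[float, float, float]        # from the ACC, X/Y/Z
    helmet_orientation: Tuple[float, float, float]  # from the GYRO
    ego_position: Tuple[float, float]               # from the GPSS
    remote_vehicles: List[RemoteVehicle]            # from the MOBU
```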

Step 3 Carrying out, by the ego motorcycle on-board detection processing unit (DPU), the processing of a video stream acquired in step 2 from the advanced driver assistance systems camera (ADASC):

- applying image segmentation to the video stream for the purpose of identifying all the relevant objects therein;

- labelling the relevant objects in the segmented image,

resulting in a processed video stream in which the relevant objects are labelled.
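For illustration, a deliberately simplified stand-in for the segmentation and labelling of step 3: a production system would run a trained segmentation network, whereas connected-component labelling on a brightness mask keeps the data flow visible in a few lines.

```python
# Minimal sketch (threshold segmentation is a stand-in for a real model):
# segment one grayscale frame and give each region an integer label.
import numpy as np
from scipy import ndimage

def segment_and_label(frame, threshold=128):
    mask = frame > threshold                      # crude "relevant object" mask
    label_map, num_objects = ndimage.label(mask)  # one integer id per region
    return label_map, num_objects

frame = (np.random.rand(120, 160) * 255).astype(np.uint8)
label_map, n = segment_and_label(frame)
print(f"{n} candidate objects labelled in the frame")
```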

Step 4 Creating, by the ego motorcycle on-board detection processing unit (DPU), a fused road environment model based on the previously processed video stream and on the data received from the radar module (RM).
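For illustration, a toy association between labelled camera objects and radar distances, standing in for the fused road environment model of step 4; the left-to-right pairing rule is an assumption, and real fusion would associate detections by angle and track them over time.

```python
# Minimal sketch (pairing rule is an assumption): combine labelled objects
# from the processed video stream with distances reported by the RM.
from dataclasses import dataclass

@dataclass
class FusedObject:
    object_id: int      # label from the processed video stream (step 3)
    bearing_deg: float  # estimated from the object's image column
    distance_m: float   # matched radar return from the RM

def fuse(bearings_by_object, radar_distances):
    ordered = sorted(bearings_by_object.items(), key=lambda kv: kv[1])
    return [FusedObject(oid, bearing, dist)
            for (oid, bearing), dist in zip(ordered, sorted(radar_distances))]

print(fuse({1: -12.0, 2: 4.5}, [23.1, 41.8]))
```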

Step 5 Based on the fused road environment model of step 4 and based on part of the input data received in step 2:

- inclination of the helmet, received from the gyroscope (GYRO);

- geographical position from the GPS sensor (GPSS);

- the lateral and longitudinal acceleration signal and the yaw rate from the accelerometer (ACC), and

- speed from the speedometer (SPEEDO),

applying, by the ego motorcycle on-board detection processing unit (DPU), a simultaneous localization and mapping (SLAM) algorithm for the purpose of localizing the ego motorcycle in the fused road environment model and of localizing the corresponding orientation of the smart helmet (SH),
resulting in the simultaneous localization and mapping of the motorcycle rider and of his smart helmet (SH) in the fused road environment model.
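For illustration, a single localization update of the flavour a SLAM algorithm performs in step 5, blending dead reckoning from the SPEEDO and ACC with the GPSS fix; map building and the helmet-orientation estimate are omitted, and the blend weight is an assumption.

```python
# Minimal sketch (complementary filter, not full SLAM): predict the pose from
# motion, then correct it toward the GPS fix received in step 2.
import math

def predict_and_correct(x, y, heading, speed, yaw_rate, dt, gps_xy, gps_weight=0.2):
    heading += yaw_rate * dt             # yaw rate derived from the ACC
    x += speed * dt * math.cos(heading)  # speed from the SPEEDO
    y += speed * dt * math.sin(heading)
    x = (1 - gps_weight) * x + gps_weight * gps_xy[0]  # pull toward GPSS fix
    y = (1 - gps_weight) * y + gps_weight * gps_xy[1]
    return x, y, heading

print(predict_and_correct(0.0, 0.0, 0.0, speed=15.0, yaw_rate=0.05,
                          dt=0.1, gps_xy=(1.6, 0.1)))
```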

Step 6 Based on:

- the processed video stream with the relevant objects labelled in step 3;

- the simultaneous localization and mapping of the motorcycle rider and of his smart helmet (SH) in the fused road environment model of step 5, and

- the data received in step 2 from the ego motorcycle on-board unit (MOBU) regarding the other vehicles provided with a respective vehicle on-board unit (VOBU), including autonomous vehicles;

- comparing and correlating the data regarding the simultaneous localization and mapping of the motorcycle rider and of his smart helmet (SH) in the fused road environment model, as resulting from step 5, with the data received in step 2 from the ego motorcycle on-board unit (MOBU) regarding the other vehicles provided with a respective vehicle on-board unit (VOBU), including autonomous vehicles,

- detecting, in the processed video stream, each autonomous vehicle provided with the respective autonomous vehicle identifier (AVI), and

- marking each detected autonomous vehicle in the processed video stream,

resulting in a processed video stream with detected, marked autonomous vehicles driving in the field of view of the motorcycle rider.
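For illustration, a minimal sketch of the comparison and correlation of step 6: a V2X-reported vehicle carrying an AVI is matched to the nearest object in the fused model, and the match is marked. The 5 m gate and the flat x/y coordinates are assumptions.

```python
# Minimal sketch (gate and coordinate frame are assumptions): mark the video
# object closest to each V2X-reported position that carries an AVI.
import math

def mark_autonomous(fused_objects, v2x_vehicles, gate_m=5.0):
    """fused_objects: [(object_id, x, y)]; v2x_vehicles: [(avi, x, y)]."""
    marked = []
    for avi, vx, vy in v2x_vehicles:
        if avi is None:
            continue  # only vehicles broadcasting an AVI are marked
        best = min(fused_objects,
                   key=lambda o: math.hypot(o[1] - vx, o[2] - vy),
                   default=None)
        if best and math.hypot(best[1] - vx, best[2] - vy) < gate_m:
            marked.append((best[0], avi))  # (video object id, AVI)
    return marked

print(mark_autonomous([(1, 20.0, -3.0), (2, 42.0, 1.0)],
                      [("AV-042", 21.2, -2.4), (None, 40.0, 0.0)]))
```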

Step 7 Applying a correction to the marking of each detected autonomous vehicle in the processed video stream, in order to ensure a greater accuracy of the marking of the detected autonomous vehicles on the processed video stream, and sending, via the ego motorcycle bus system (BS), data regarding the marked autonomous vehicles to the smart helmet (SH) visor,
resulting in a corrected video stream with detected, marked autonomous vehicles driving in the field of view of the motorcycle rider.
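For illustration, exponential smoothing as one plausible correction of the marking in step 7; the claim does not prescribe a specific correction, so the smoothing factor is an assumption.

```python
# Minimal sketch (smoothing factor is an assumption): blend the previous
# marker position with the newly detected one to reduce frame-to-frame jitter.
def correct_marker(previous_xy, measured_xy, alpha=0.6):
    px, py = previous_xy
    mx, my = measured_xy
    return (alpha * mx + (1 - alpha) * px,
            alpha * my + (1 - alpha) * py)

marker = (100.0, 80.0)
for measurement in [(104.0, 79.0), (109.0, 81.0), (113.0, 80.0)]:
    marker = correct_marker(marker, measurement)
print(marker)  # tracks the detections with reduced jitter
```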

Step 8 Displaying, on the smart helmet (SH) visor, the marked autonomous vehicles in a manner understandable to the motorcycle rider.
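For illustration, a pixel-level sketch of the overlay of step 8; a production visor would render vector graphics on a transparent display rather than writing pixels into a frame buffer.

```python
# Minimal sketch: burn a rectangular marker into the frame that the visor
# displays, outlining one detected autonomous vehicle.
import numpy as np

def draw_marker(frame, x, y, w, h):
    out = frame.copy()
    out[y, x:x + w] = 255          # top edge
    out[y + h - 1, x:x + w] = 255  # bottom edge
    out[y:y + h, x] = 255          # left edge
    out[y:y + h, x + w - 1] = 255  # right edge
    return out

visor_frame = draw_marker(np.zeros((120, 160), dtype=np.uint8), 40, 30, 32, 20)
print(visor_frame.sum() > 0)  # marker pixels were written
```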


     
7. Computer program comprising instructions which, when the program is executed on an ego motorcycle on-board detection processing unit (DPU) of any of claims 1 to 5, cause the ego motorcycle on-board detection processing unit (DPU) to execute steps 2 to 7 of the method according to claim 6.
     


Claims

1. Ego motorcycle on-board awareness raising system, placed in an ensemble comprising an ego motorcycle and an ego motorcycle rider's smart helmet (SH), the smart helmet (SH) comprising an advanced driver assistance systems camera (ADASC) acquiring video images with a forward-facing field of view, and the smart helmet (SH) comprising a smart helmet (SH) visor for displaying the video images to the motorcycle rider, as well as other awareness information, characterized in that the ego motorcycle on-board awareness raising system further comprises:

- a radar module (RM), placed at the front of the ego motorcycle and facing forward, configured to measure the distance between the ego motorcycle and the vehicles driving in the field of view of the ego motorcycle, and configured to send the measurements to an ego motorcycle on-board detection processing unit (DPU);

- a gyroscope (GYRO), placed inside the smart helmet (SH), configured to determine the direction and inclination of the field of view of the smart helmet (SH) depending on the direction in which the rider is looking, and configured to send the measurements to the ego motorcycle on-board detection processing unit (DPU);

- a GPS sensor (GPSS), placed in the ego motorcycle, configured to determine the geographical position of the ego motorcycle, and configured to send the measurements to the ego motorcycle on-board detection processing unit (DPU);

- an acceleration sensor (ACC), placed in the ego motorcycle, configured to measure the acceleration signal of the ego motorcycle on three axes (X, Y, Z), and configured to send the measurements to the ego motorcycle on-board detection processing unit (DPU);

- a speedometer (SPEEDO), placed in the ego motorcycle, configured to measure the speed of the ego motorcycle, and configured to send the measurements to the ego motorcycle on-board detection processing unit (DPU);

- an ego motorcycle on-board unit (MOBU), configured:

- to communicate via the vehicle-to-everything (V2X) communication channel with other vehicles provided with a respective vehicle on-board unit (VOBU), for the purpose of receiving the geographical position, speed and acceleration signal on three axes (X, Y, Z) of each of the other vehicles provided with a respective vehicle on-board unit (VOBU);

- to receive and read an autonomous vehicle identifier (AVI) sent via the vehicle-to-everything (V2X) communication channel by the respective vehicle on-board unit (VOBU) of each respective autonomous vehicle, together with the intended path and an estimation of the position, speed and acceleration on three axes (X, Y, Z) in a subsequent predetermined prediction interval, for each autonomous vehicle for which the autonomous vehicle identifier was received and read;

- to send the measurements to the ego motorcycle on-board detection processing unit (DPU);

- the ego motorcycle on-board detection processing unit (DPU) being configured to carry out the detection of the autonomous vehicles driving in the field of view of the ego motorcycle, and further configured to send the result of the detection to the smart helmet (SH);

- at least one ego motorcycle bus system (BS), configured to interconnect the advanced driver assistance systems camera (ADASC), the radar module (RM), the gyroscope (GYRO), the GPS sensor (GPSS), the acceleration sensor (ACC), the speedometer (SPEEDO), the ego motorcycle on-board unit (MOBU), the ego motorcycle on-board detection processing unit (DPU) and the smart helmet (SH) visor.

2. Ego motorcycle on-board awareness raising system according to claim 1, wherein the GPS sensor (GPSS) can be placed in the ego motorcycle as a component of the motorcycle.

3. Ego motorcycle on-board awareness raising system according to claim 1, wherein the GPS sensor (GPSS) is a component of the ego motorcycle on-board unit (MOBU).

4. Ego motorcycle on-board awareness raising system according to claim 1, wherein the GPS sensor (GPSS) is a component of a smartphone of the motorcycle rider.

5. Ego motorcycle on-board awareness raising system according to any of claims 1 to 4, wherein the acceleration sensor (ACC) and the speedometer (SPEEDO) are combined in a single sensor forming a motorcycle dynamics unit (MDU).

6. Method for detecting and displaying the presence of autonomous vehicles driving in the field of view of an ego motorcycle rider wearing an ego motorcycle rider's smart helmet (SH) provided with a smart helmet (SH) visor, using the ego motorcycle on-board awareness raising system, characterized in that the following repetitive sequence of steps of the method is carried out at regular time intervals tn:

Step 1 Broadcasting, by an ego motorcycle on-board unit (MOBU), of messages through a vehicle-to-everything (V2X) communication channel, the messages having the purpose of sending, to other vehicles provided with respective vehicle on-board units (VOBU), data about the position of the ego motorcycle, and having the purpose of collecting data about the presence of the other vehicles provided with respective vehicle on-board units (VOBU) and of collecting data about which of the other vehicles are provided with a respective autonomous vehicle identifier (AVI).

Step 2 Receiving, by an ego motorcycle on-board detection processing unit (DPU), of input data from the sensors via the at least one ego motorcycle bus system (BS), namely:

- video streams acquired from an advanced driver assistance systems camera (ADASC);

- distances to all the vehicles located in front of the ego vehicle, from a radar module (RM), provided that all these vehicles drive within the range of the radar module (RM);

- speed of the ego motorcycle from a speedometer (SPEEDO);

- acceleration signals from the X, Y and Z axes of the ego motorcycle from an accelerometer (ACC);

- orientation of the smart helmet (SH) of the ego motorcycle from a gyroscope (GYRO);

- geographical position of the ego motorcycle from a GPS sensor (GPSS);

- data from the ego motorcycle on-board unit (MOBU) regarding the geographical position, speed and acceleration signals from the X, Y and Z axes of other vehicles provided with respective vehicle on-board units (VOBU), including autonomous vehicles;

- data from the ego motorcycle on-board unit (MOBU) for each autonomous vehicle: the autonomous vehicle identifier (AVI), the intended path and the estimation of position, speed and acceleration signals from the X, Y and Z axes in a subsequent predetermined prediction interval.

Step 3 Carrying out, by the ego motorcycle on-board detection processing unit (DPU), the processing of a video stream acquired in step 2 from the advanced driver assistance systems camera (ADASC):

- applying image segmentation to the video stream for the purpose of identifying all the relevant objects therein;

- labelling the relevant objects in the segmented image,

resulting in a processed video stream in which the relevant objects are labelled.

Step 4 Creating, by the ego motorcycle on-board detection processing unit (DPU), a fused road environment model based on the previously processed video stream and on the data received from the radar module (RM).

Step 5 Based on the fused road environment model of step 4 and based on part of the input data received in step 2, namely:

- inclination of the helmet, received from the gyroscope (GYRO);

- geographical position from the GPS sensor (GPSS);

- the lateral and longitudinal acceleration signal and the yaw rate from the accelerometer (ACC), and

- speed from the speedometer (SPEEDO),

applying, by the ego motorcycle on-board detection processing unit (DPU), a simultaneous localization and mapping (SLAM) algorithm for the purpose of localizing the ego motorcycle in the fused road environment model and of localizing the corresponding orientation of the smart helmet (SH),

resulting in the simultaneous localization and mapping of the motorcycle rider and of his smart helmet (SH) in the fused road environment model.

Step 6 Based on:

- the processed video stream with the relevant objects labelled in step 3;

- the simultaneous localization and mapping of the motorcycle rider and of his smart helmet (SH) in the fused road environment model of step 5, and

- the data received in step 2 from the ego motorcycle on-board unit (MOBU) regarding the other vehicles provided with a respective vehicle on-board unit (VOBU), including autonomous vehicles;

- comparing and correlating the data regarding the simultaneous localization and mapping of the motorcycle rider and of his smart helmet (SH) in the fused road environment model, as resulting from step 5, with the data received in step 2 from the ego motorcycle on-board unit (MOBU) regarding the other vehicles provided with a respective vehicle on-board unit (VOBU), including autonomous vehicles,

- detecting, in the processed video stream, each autonomous vehicle provided with the respective autonomous vehicle identifier (AVI), and

- marking each detected autonomous vehicle in the processed video stream,

resulting in a processed video stream with detected, marked autonomous vehicles driving in the field of view of the motorcycle rider.

Step 7 Applying a correction to the marking of each detected autonomous vehicle in the processed video stream, in order to ensure a greater accuracy of the marking of the detected autonomous vehicles on the processed video stream, and sending, via the ego motorcycle bus system (BS), data regarding the marked autonomous vehicles to the smart helmet (SH) visor,

resulting in a corrected video stream with detected, marked autonomous vehicles driving in the field of view of the motorcycle rider.

Step 8 Displaying, on the smart helmet (SH) visor, the marked autonomous vehicles in a manner understandable to the motorcycle rider.

7. Computer program comprising instructions which, when the program is executed on an ego motorcycle on-board detection processing unit (DPU) of any of claims 1 to 5, cause the ego motorcycle on-board detection processing unit (DPU) to execute steps 2 to 7 of the method according to claim 6.
     




Drawing