(19)
(11)EP 3 620 963 A1

(12)EUROPEAN PATENT APPLICATION

(43)Date of publication:
11.03.2020 Bulletin 2020/11

(21)Application number: 19187077.3

(22)Date of filing:  18.07.2019
(51)International Patent Classification (IPC): 
G06K 9/00(2006.01)
G06K 9/62(2006.01)
(84)Designated Contracting States:
AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR
Designated Extension States:
BA ME
Designated Validation States:
KH MA MD TN

(30)Priority: 10.09.2018 CN 201811050981

(71)Applicant: Baidu Online Network Technology (Beijing) Co., Ltd.
Beijing 100085 (CN)

(72)Inventor:
  • CHEN, Youhan
    Haidian District, Beijing 100085 (CN)

(74)Representative: J A Kemp LLP 
14 South Square Gray's Inn
London WC1R 5JJ (GB)

  


(54)METHOD, APPARATUS AND DEVICE FOR IDENTIFYING PASSENGER STATE IN UNMANNED VEHICLE


(57) Embodiments of the present application provide a method, an apparatus, and a device for identifying a passenger state in an unmanned vehicle, and a storage medium. The method comprises: obtaining monitoring data of different dimensions in a process where the passenger takes the unmanned vehicle; performing feature extraction on the monitoring data of the different dimensions to form feature data of different dimensions; and identifying the passenger state according to the feature data of the different dimensions. By obtaining monitoring data of various dimensions in the process where the passenger takes the unmanned vehicle and using it to identify the passenger state, the personal safety and property safety of the passenger can be monitored omnidirectionally, and the passenger taking the unmanned vehicle can be effectively protected.




Description

TECHNICAL FIELD



[0001] Embodiments of the present application relate to the field of unmanned driving technology, and in particular, to a method, an apparatus, and a device for identifying a passenger state in an unmanned vehicle and a storage medium.

BACKGROUND



[0002] With the development of the Internet and the economy, and in order to meet people's travel demands, unmanned driving technology has developed rapidly. An unmanned driving car is a type of smart car, also known as a wheeled mobile robot, which relies mainly on a computer-based intelligent pilot inside the car to achieve unmanned driving.

[0003] There is no driver in the unmanned vehicle. When the passenger is traveling in an unmanned vehicle, it is essential to monitor the passenger state omnidirectionally to ensure the personal safety and property safety of the passenger.

[0004] In the prior art, the passenger's physical condition is monitored mainly by means of a device carried by the passenger, such as a smart wristband, and when the passenger's personal safety is threatened, the situation is reported via the smart wristband or a mobile phone. However, this method cannot omnidirectionally monitor the personal safety and property safety of passengers, and cannot effectively protect passengers taking unmanned vehicles.

SUMMARY



[0005] Embodiments of the present application provide a method, an apparatus, and a device for identifying a passenger state in an unmanned vehicle, and a storage medium, which solve the problem that the passenger's personal safety and property safety cannot be omnidirectionally monitored and the passenger taking the unmanned vehicle cannot be effectively protected in the method for identifying the passenger state in the prior art.

[0006] A first aspect of the embodiments of the present application provides a method for identifying a passenger state in an unmanned vehicle, including: obtaining monitoring data of different dimensions in a process where a passenger takes the unmanned vehicle; performing feature extraction on the monitoring data of different dimensions and forming feature data of different dimensions; identifying the passenger state according to the feature data of different dimensions.

[0007] A second aspect of the embodiments of the present application provides an apparatus for identifying a passenger state in an unmanned vehicle, including: a data obtaining module, configured to obtain monitoring data of different dimensions in a process where a passenger takes the unmanned vehicle; and a feature extraction module, configured to perform feature extraction on the monitoring data of different dimensions, and form feature data of different dimensions; and a state identification module, configured to identify the passenger state according to the feature data of different dimensions.

[0008] A third aspect of the embodiments of the present application provides a terminal device, including: one or more processors; a storage apparatus, configured to store one or more programs; and a data collecting apparatus, configured to collect monitoring data of different dimensions; the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method of the first aspect described above.

[0009] A fourth aspect of the embodiments of the present application provides a computer readable storage medium having stored thereon a computer program; the program is executed by the processor to implement the method of the first aspect described above.

[0010] Based on the above aspects, the embodiments of the present application obtain monitoring data of different dimensions in a process where a passenger takes an unmanned vehicle, performs feature extraction on the monitoring data of different dimensions and forms feature data of different dimensions, identifies the passenger state according to the feature data of different dimensions. By obtaining the monitoring data of various dimensions in the process where the passenger takes the unmanned vehicle to identify the passenger state, it is possible to omnidirectionally monitor the personal safety and property safety of the passengers, and effectively protect the passenger taking the unmanned vehicle.

[0011] It should be understood that the content described in the above summary section of the application is not intended to limit the key or important features of the embodiment of the present application, and is not intended to limit the scope of the present application. Other features of the present application will be readily understood with reference to the following description.

Brief description of the drawings



[0012] 

FIG. 1 is a flowchart of a method for identifying a passenger state in an unmanned vehicle according to Embodiment 1 of the present application;

FIG. 2 is a flowchart of a method for identifying a passenger state in an unmanned vehicle according to Embodiment 2 of the present application;

FIG. 3 is a flowchart of a method for identifying a passenger state in an unmanned vehicle according to Embodiment 3 of the present application;

FIG. 4 is a schematic structural diagram of an apparatus for identifying a passenger state in an unmanned vehicle according to Embodiment 4 of the present application;

FIG. 5 is a schematic structural diagram of an apparatus for identifying a passenger state in an unmanned vehicle according to Embodiment 5 of the present application;

FIG. 6 is a schematic structural diagram of a terminal device according to Embodiment 6 of the present application.


DETAILED DESCRIPTION OF THE EMBODIMENTS



[0013] Embodiments of the present application will be described in more detail below with reference to the accompanying drawings. Although some embodiments of the present application are shown in the drawings, it should be understood that the present application may be embodied in a variety of forms and should not be construed as limited to the embodiments set forth herein; on the contrary, these embodiments are provided so that the present application will be understood more thoroughly and completely. It should be understood that the drawings and the embodiments of the present application are for the purpose of illustration only and are not intended to limit the protection scope of the present application.

[0014] The terms "first", "second", "third", "fourth", etc. (if present) in the description and claims of the embodiments of the present application and in the above figures are used to distinguish similar objects, and not necessarily to describe a particular sequence or order. It should be understood that the data used in this way may be interchanged where appropriate, such that the embodiments of the present application described herein can be implemented, for example, in a sequence other than those illustrated or described herein. In addition, the terms "comprise" and "have" and any variations of them are intended to cover a non-exclusive inclusion; for example, processes, methods, systems, products, or devices that include a series of steps or units are not necessarily limited to those explicitly listed steps or units, but may include other steps or units that are not explicitly listed or that are inherent to such processes, methods, products, or devices.

[0015] In order to clearly understand the technical solution of the present application, the terms involved in the present application are explained below:
Unmanned driving car: an unmanned driving car is a smart car that senses the road environment through an in-vehicle sensing system, automatically plans the driving route, and controls the vehicle to reach a predetermined target. It uses on-board sensors to sense the surrounding environment of the vehicle, and controls the steering and speed of the vehicle according to the road, vehicle position, and obstacle information obtained by sensing, so that the vehicle can travel safely and reliably on the road. It integrates many technologies such as automatic control, architecture, artificial intelligence, and visual computing. It is a product of the advanced development of computer science, pattern recognition, and intelligent control technology, and is also an important symbol of the scientific research strength and industrial level of a country, with broad application prospects in the fields of national defense and the national economy. In embodiments of the present application, the unmanned driving car is referred to as an unmanned vehicle.

[0016] Embodiments of the present application will be specifically described below with reference to the drawings.

Embodiment 1



[0017] FIG. 1 is a flowchart of a method for identifying a passenger state in an unmanned vehicle according to Embodiment 1 of the present application. As shown in FIG. 1, an executive body of the embodiment of the present application is an identification apparatus for identifying a passenger state in an unmanned vehicle. The identification apparatus for identifying the passenger state in the unmanned vehicle can be integrated in a terminal device. The terminal device is an in-vehicle terminal device in the unmanned vehicle. The method for identifying the passenger state in the unmanned vehicle provided in this embodiment includes the following steps.

[0018] Step 101: obtaining monitoring data of different dimensions in a process where a passenger takes an unmanned vehicle.

[0019] Specifically, in this embodiment, the identification apparatus for identifying the passenger state in the unmanned vehicle may be enabled to communicate with different types of sensors in the unmanned vehicle, and obtain monitoring data of different dimensions in the process where the passenger takes the unmanned vehicle; alternatively, different sensors in the unmanned vehicle may directly collect monitoring data of corresponding dimensions in the process where the passenger takes the unmanned vehicle.

[0020] In this case, different types of sensors in the unmanned vehicle may include: an internal camera, a microphone, a vital sign sensor, a collision sensor, and the like. Correspondingly, the monitoring data of different dimensions may include: expression monitoring data, limb movement monitoring data, sound monitoring data, vital sign monitoring data, collision data of colliding a vehicle body or a seat, and the like.

[0021] Step 102: performing feature extraction on the monitoring data of different dimensions and forming feature data of different dimensions.

[0022] Further, in this embodiment, the monitoring data of each dimension may be extracted by using a corresponding feature extraction algorithm to form feature data of each dimension.

[0023] Performing feature extraction on the monitoring data of different dimensions and forming feature data of different dimensions are illustrated as follows: for the expression monitoring data, an expression feature extraction algorithm is used for the feature extraction, where the expression feature extraction algorithm may be a PCA feature extraction algorithm, an ICA feature extraction algorithm, etc. For the sound monitoring data, a sound feature extraction algorithm is used for the feature extraction, where the sound feature extraction algorithm may be a mel filter bank feature extraction algorithm, an MFCC feature extraction algorithm, and so on.
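As a minimal, non-limiting sketch of one such extraction, a PCA projection of flattened expression images may look as follows (the function name, array shapes, and number of components are illustrative assumptions, not part of the application):

```python
import numpy as np

def pca_features(images, n_components=4):
    """Project flattened face images onto their top principal
    components to obtain compact expression feature vectors.
    `images` is an (n_samples, n_pixels) float array."""
    centered = images - images.mean(axis=0)
    # SVD of the centered data yields the principal axes directly.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[:n_components].T

# Toy example: 8 "images" of 16 pixels each.
rng = np.random.default_rng(0)
feats = pca_features(rng.normal(size=(8, 16)))
print(feats.shape)  # (8, 4)
```

Each row of the result is the feature data of the expression dimension for one monitoring sample.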

[0024] Step 103: identifying the passenger state according to the feature data of different dimensions.

[0025] Specifically, in this embodiment, all the feature data of different dimensions can be input into a total identification algorithm to identify the passenger state. Alternatively, the feature data of each dimension may be input into a corresponding identification algorithm, and passenger state probability data corresponding to the feature data of each dimension is output; a weighted summation calculation is then performed on the passenger state probability data of each dimension according to a preset weight for the passenger state probability data of each dimension, thus obtaining general passenger state probability data, and the final state of the passenger is determined according to the general passenger state probability data and a preset threshold.

[0026] It can be understood that, in this embodiment, the manner of identifying the passenger state according to the feature data of different dimensions may be another manner, which is not limited in this embodiment.

[0027] In this case, the identified passenger state may be a danger state or a safety state.

[0028] In the method for identifying the passenger state in the unmanned vehicle provided by this embodiment, monitoring data of different dimensions in a process where a passenger takes an unmanned vehicle is obtained; feature extraction is performed on the monitoring data of different dimensions and feature data of different dimensions is formed; and the passenger state is identified according to the feature data of different dimensions. By obtaining the monitoring data of various dimensions in the process where a passenger takes the unmanned vehicle to identify the passenger state, it is possible to omnidirectionally monitor the personal safety and property safety of the passengers, and effectively protect the passenger taking the unmanned vehicle.

Embodiment 2



[0029] FIG. 2 is a flowchart of a method for identifying a passenger state in an unmanned vehicle according to Embodiment 2 of the present application. As shown in FIG. 2, the method for identifying a passenger state in an unmanned vehicle according to this embodiment is based on the method for identifying the passenger state in the unmanned vehicle provided in Embodiment 1 of the present application, and further refines the steps 101 to 103. The method for identifying the passenger state in an unmanned vehicle provided by this embodiment includes the following steps.

[0030] Step 201, collecting in real time, by different sensors provided in the unmanned vehicle, the monitoring data of corresponding dimensions in the process where a passenger takes the unmanned vehicle.

[0031] Further, in this embodiment, the sensors include at least: an internal camera, a microphone, a vital sign sensor, and a collision sensor.

[0032] In this case, the internal camera is provided in the passenger seating area to capture a video or image of the passenger. The microphone can also be provided in the passenger seating area to collect the passenger's voice or other sounds. The vital sign sensor can be provided on the seat belt, and when taking the unmanned vehicle, the passenger wears the seat belt, or the passenger wears a device with a vital sign sensor to monitor the vital signs of the passenger. The collision sensor is provided on the inner side of the vehicle body or on the seat back of the passenger seat, and collects the collision data when the passenger collides with the inside of the vehicle body or the seat back.

[0033] Preferably, in this embodiment, collecting in real time, by different sensors provided in the unmanned vehicle, the monitoring data of the corresponding dimensions in the process where a passenger takes the unmanned vehicle specifically includes:
Firstly, collecting, by the internal camera, the expression monitoring data and the limb movement monitoring data in the process where the passenger takes the unmanned vehicle.

[0034] Further, in this embodiment, the number of internal cameras may be one or more. If there are multiple internal cameras, the internal cameras are provided in different directions of the passenger seating area. A video and an image of the passenger's face are captured by one or more of the internal cameras, and the passenger's expression monitoring data is obtained from the video and the image of the face. A video and an image of the passenger's whole body are captured by the remaining internal camera(s), and the limb movement monitoring data is obtained from the video and the image of the whole body.

[0035] Secondly, collecting, by the microphone, the sound monitoring data in the process where the passenger takes the unmanned vehicle.

[0036] Further, in this embodiment, the number of the microphones may be one or more. If there are multiple microphones, the microphones are provided in different directions of the passenger seating area, and the voices of the passengers and other sounds in the unmanned vehicle are collected by the microphones to form the sound monitoring data.

[0037] Thirdly, collecting, by the vital sign sensor, vital sign monitoring data in the process where the passenger takes the unmanned vehicle.

[0038] Further, in this embodiment, the vital sign monitoring data collected by the vital sign sensor in the process where the passenger takes the unmanned vehicle may include: body temperature data, heartbeat data, blood pressure data, and the like.

[0039] Finally, collecting, by the collision sensor, the collision data of colliding the vehicle body or the seat in the process where a passenger takes the unmanned vehicle.

[0040] Further, in this embodiment, the collision sensors are installed on both the inner side of the vehicle body and the seat back, and the collision sensors collect the collision data when the passenger collides with the inner side of the vehicle body or the seat back.

[0041] It can be understood that each sensor can collect the monitoring data of the corresponding dimension in real time after the passenger gets on the vehicle and before getting off the vehicle.

[0042] It should be noted that, in this embodiment, step 201 is a further refinement of the step 101 of the method for identifying the passenger state in the unmanned vehicle provided in Embodiment 1 of the present application.

[0043] Step 202: performing feature extraction on the monitoring data of each dimension by means of a corresponding feature extraction algorithm, and forming feature data of a corresponding dimension.

[0044] In this embodiment, there is a corresponding feature extraction algorithm for the monitoring data of each dimension. For example, for the expression monitoring data, the corresponding feature extraction algorithm may be a PCA feature extraction algorithm, an ICA feature extraction algorithm, and the like. For the sound monitoring data, the corresponding feature extraction algorithm may be a mel filter bank feature extraction algorithm, an MFCC feature extraction algorithm, and the like.

[0045] Further, in this embodiment, the expression feature extraction algorithm is used to perform feature extraction on the expression monitoring data to form expression feature data. The limb movement feature extraction algorithm is used to perform the feature extraction on the limb movement monitoring data to form the limb movement feature data. The sound feature extraction algorithm is used to perform the feature extraction on the sound monitoring data to form sound feature data. The vital sign feature extraction algorithm is used to perform the feature extraction on the vital sign monitoring data to form vital sign feature data. The collision feature extraction algorithm is used to perform the feature extraction on the collision data to form collision feature data.
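The per-dimension routing described above can be sketched as a simple dispatch table; the extractors below are trivial stand-ins for the PCA/ICA and mel filter bank/MFCC algorithms named in the text, and all names and shapes are illustrative assumptions:

```python
import numpy as np

# One stand-in extractor per monitoring dimension (step 202).
EXTRACTORS = {
    "expression": lambda d: d.mean(axis=0),
    "limb_movement": lambda d: d.std(axis=0),
    "sound": lambda d: np.abs(np.fft.rfft(d))[:4],
    "vital_sign": lambda d: np.array([d.min(), d.max()]),
    "collision": lambda d: np.array([d.sum()]),
}

def extract_all(monitoring):
    """Run the monitoring data of each dimension through its own
    feature-extraction algorithm, forming feature data per dimension."""
    return {dim: EXTRACTORS[dim](data) for dim, data in monitoring.items()}

rng = np.random.default_rng(0)
features = extract_all({
    "expression": rng.normal(size=(5, 3)),   # 5 frames, 3 pixels
    "sound": rng.normal(size=16),            # 16 audio samples
})
print(sorted(features))  # ['expression', 'sound']
```

The design point is merely that each dimension keeps its own algorithm, so extractors can be swapped independently.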

[0046] Step 203: obtaining a first training sample and a first test sample.

[0047] In this case, the first training sample and the first test sample are feature data of the same dimension of each passenger. There are multiple first training samples and test samples.

[0048] Further, in this embodiment, before the feature data of each dimension is input into the corresponding first identification algorithm and the passenger state probability data corresponding to the feature data of each dimension is output, the first identification algorithm needs to be optimized to form an optimized first identification algorithm.

[0049] In this case, the first identification algorithm is an algorithm for determining passenger state probability data corresponding to the feature data of each dimension. The first identification algorithm is a deep learning algorithm, such as a convolutional neural network model algorithm, a deep neural network model algorithm, or the like.

[0050] In this embodiment, the first training sample and the first test sample are the training sample and the test sample corresponding to the first identification algorithm, where the first training sample and the first test sample may be feature data of the same dimension of each passenger that has occurred. As for the identification algorithm for identifying the passenger state probability based on the expression feature data, the training sample and the test sample are expression feature data of each passenger that has occurred. As another example, for an identification algorithm for identifying passenger state probability based on sound feature data, the training sample and test sample are sound feature data of each passenger that has occurred.

[0051] Step 204: training the first identification algorithm by means of the first training sample, and testing the first identification algorithm by means of the first test sample until the first identification algorithm converges.

[0052] Further, in this embodiment, each first training sample is input into the first identification algorithm, the model of the first identification algorithm is trained, and the parameters are optimized. Each first test sample is then input into the optimized first identification algorithm, and it is determined whether the model of the first identification algorithm is optimal; if not, the training of the model of the first identification algorithm is continued until the model of the first identification algorithm converges to the optimum.
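The train-then-test loop of steps 203-204 can be sketched as follows; the logistic-regression model, learning rate, tolerance, and data are hypothetical stand-ins for the deep learning models named in the text:

```python
import numpy as np

def train_until_converged(step, evaluate, max_rounds=200, tol=1e-4):
    """Alternate a training pass and a held-out evaluation until the
    test score stops improving (the convergence check of step 204)."""
    best = -np.inf
    for _ in range(max_rounds):
        step()                    # one optimisation pass on the training sample
        score = evaluate()        # score on the first test sample
        if score - best < tol:    # no meaningful improvement: converged
            break
        best = score
    return best

# Toy per-dimension model: logistic regression on 1-D feature data.
rng = np.random.default_rng(1)
x_train = np.concatenate([rng.normal(-2, 1, 50), rng.normal(2, 1, 50)])
y_train = np.concatenate([np.zeros(50), np.ones(50)])
x_test = np.concatenate([rng.normal(-2, 1, 20), rng.normal(2, 1, 20)])
y_test = np.concatenate([np.zeros(20), np.ones(20)])
w = np.zeros(2)  # weight, bias

def step():
    global w
    p = 1 / (1 + np.exp(-(w[0] * x_train + w[1])))
    grad = np.array([((p - y_train) * x_train).mean(), (p - y_train).mean()])
    w -= 0.5 * grad

def accuracy():
    p = 1 / (1 + np.exp(-(w[0] * x_test + w[1])))
    return ((p > 0.5) == y_test).mean()

best = train_until_converged(step, accuracy)
```

The same loop structure applies unchanged to the second identification algorithm of Embodiment 3.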

[0053] It can be understood that, once the first identification algorithm has been optimized, it is not necessary to perform steps 203-204 every time the passenger state is identified.

[0054] Step 205: inputting the feature data of each dimension into a corresponding first identification algorithm, and outputting the passenger state probability data corresponding to the feature data of each dimension.

[0055] In this embodiment, the feature data of each dimension is input into the corresponding first identification algorithm, where the first identification algorithm is a first identification algorithm optimized after a training and test process, and the first identification algorithm identifies the passenger state according to the feature data of the corresponding dimension, and the corresponding passenger state probability data is output.

[0056] In this case, the passenger state probability data is the probability data of the passenger danger state or the probability data of the passenger safety state, which is not limited in this embodiment.

[0057] Step 206: obtaining a weight value corresponding to the passenger state probability data of each dimension.

[0058] Further, in this embodiment, the weight value corresponding to the passenger state probability data of each dimension is predefined and stored, and the weight value corresponding to the passenger state probability data of each dimension is obtained from a storage area.

[0059] Step 207: performing a weighted summation calculation on the passenger state probability data of respective dimensions to obtain general passenger state probability data.

[0060] Further, in this embodiment, the passenger state probability data of each dimension is multiplied by the corresponding weight value, and the multiplied results are summed, then the obtained result is the general passenger state probability data. If the passenger state probability data of each dimension is the probability data of the passenger danger state, the general passenger state probability data is the general probability data of the passenger danger state. If the passenger state probability data of each dimension is probability data of the passenger safety state, the general passenger state probability data is the general probability data of the passenger safety state.

[0061] Step 208: determining the passenger state according to the general passenger state probability data and a preset threshold.

[0062] Further, in this embodiment, a danger probability threshold corresponding to the general probability of the passenger danger state and a safety probability threshold corresponding to the general probability of the passenger safety state are defined in advance.

[0063] In this case, the specific values of the danger probability threshold and the safety probability threshold are not limited in this embodiment.

[0064] If the value corresponding to the general probability data of the passenger danger state is greater than the preset danger probability threshold, it is determined that the passenger state is a danger state; if the value corresponding to the general probability data of the passenger danger state is less than or equal to the preset danger probability threshold, it is determined that the passenger state is a safety state. Alternatively, if the value corresponding to the general probability data of the passenger safety state is greater than the preset safety probability threshold, it is determined that the passenger state is the safety state; if the value corresponding to the general probability data of the passenger safety state is less than or equal to the preset safety probability threshold, it is determined that the passenger state is the danger state.
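Steps 206-208 together amount to a weighted sum followed by a threshold test, which can be sketched as below; the weight values, threshold, and probabilities are illustrative assumptions, not values from the application:

```python
def fuse_danger_probability(probs, weights, danger_threshold=0.5):
    """Weighted summation of per-dimension danger probabilities
    (step 207) followed by the threshold decision of step 208."""
    total = sum(p * w for p, w in zip(probs, weights))
    return "danger" if total > danger_threshold else "safety"

# Hypothetical weights for the expression, limb movement, sound,
# vital sign, and collision dimensions.
weights = [0.25, 0.20, 0.20, 0.20, 0.15]
print(fuse_danger_probability([0.9, 0.8, 0.7, 0.6, 0.9], weights))  # danger
print(fuse_danger_probability([0.1, 0.2, 0.1, 0.1, 0.0], weights))  # safety
```

With weights summing to 1, the fused value remains a probability, so a single threshold suffices for the decision.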

[0065] In the method for identifying the passenger state in the unmanned vehicle provided by this embodiment, the monitoring data of corresponding dimensions in the process where the passenger takes the unmanned vehicle is collected in real time by different sensors provided in the unmanned vehicle. Feature extraction is performed on the monitoring data of each dimension by means of a corresponding feature extraction algorithm, thus forming feature data of the corresponding dimension. The feature data of each dimension is input into the corresponding first identification algorithm, and the passenger state probability data corresponding to the feature data of each dimension is output. A weight value corresponding to the passenger state probability data of each dimension is obtained, and a weighted summation calculation is performed on the passenger state probability data of the respective dimensions to obtain general passenger state probability data. Determining the passenger state according to the general passenger state probability data and the preset threshold enables omnidirectional monitoring of the personal safety and property safety of passengers. Further, the accuracy of identifying the passenger state is effectively improved by using the optimized deep learning algorithm to identify the passenger state.

Embodiment 3



[0066] FIG. 3 is a flowchart of a method for identifying a passenger state in an unmanned vehicle according to Embodiment 3 of the present application. As shown in FIG. 3, the method for identifying a passenger state in an unmanned vehicle according to this embodiment is based on the method for identifying the passenger state in the unmanned vehicle provided in Embodiment 1 of the present application, and further refines the steps 101 to 103. The method for identifying the passenger state in an unmanned vehicle provided by this embodiment includes the following steps.

[0067] Step 301: collecting in real time, by different sensors provided in the unmanned vehicle, monitoring data of corresponding dimensions in the process where a passenger takes the unmanned vehicle.

[0068] Step 302: performing feature extraction on the monitoring data of each dimension by means of a corresponding feature extraction algorithm and forming feature data of a corresponding dimension.

[0069] In this embodiment, the implementation of the steps 301-302 is the same as the implementation of the steps 201-202 in the method for identifying the passenger state in the unmanned vehicle provided in Embodiment 2 of the present application, and details are not described herein again.

[0070] Step 303: obtaining a second training sample and a second test sample.

[0071] In this case, the second training sample and the second test sample are feature data of different dimensions of each passenger.

[0072] Further, in this embodiment, before the feature data of different dimensions is input into a second identification algorithm and the passenger state is identified by the second identification algorithm, the second identification algorithm needs to be optimized to form an optimized second identification algorithm.

[0073] In this case, the second identification algorithm is an algorithm for identifying the passenger state through feature data of all dimensions. The second identification algorithm is a deep learning algorithm, such as a convolutional neural network model algorithm, a deep neural network model algorithm, or the like.

[0074] In this embodiment, the second training sample and the second test sample are the training sample and the test sample corresponding to the second identification algorithm, where the second training sample and the second test sample are feature data of all dimensions of each passenger that has occurred.

[0075] Step 304: training the second identification algorithm by means of the second training sample, and testing the second identification algorithm by means of the second test sample until the second identification algorithm converges.

[0076] Further, in this embodiment, each second training sample is input into the second identification algorithm, the model of the second identification algorithm is trained, and the parameters are optimized. Each second test sample is then input into the optimized second identification algorithm, and it is determined whether the model of the second identification algorithm is optimal; if not, the training of the model of the second identification algorithm is continued until the model of the second identification algorithm converges to the optimum.

[0077] It can be understood that, once the second identification algorithm has been optimized, it is not necessary to perform steps 303-304 every time the passenger state is identified.

[0078] Step 305: inputting the feature data of different dimensions into a second identification algorithm, and identifying the passenger state by means of the second identification algorithm.

[0079] Step 306: outputting the passenger state.

[0080] Further, in this embodiment, the feature data of all dimensions of the monitored passenger is input into the second identification algorithm, which identifies the passenger state according to the feature data of all dimensions, and the passenger state is output. If the passenger state is identified as a danger state according to the feature data of all dimensions, the passenger danger state is output; if the passenger state is identified as a safety state, the passenger safety state is output.

[0081] It can be understood that the identity information of the passenger and the contact information of the passenger's family are pre-stored in the unmanned vehicle. If the passenger state is the danger state, the passenger danger state, the identity information of the passenger and the contact information of the family are reported to the server through the communication module, such that the server notifies the passenger's family according to this information. A GPS module may also be provided on the unmanned vehicle, through which the unmanned vehicle can be located; the position of the unmanned vehicle may be sent to the server, so that the server obtains the position of the unmanned vehicle that the passenger takes and the passenger can be rescued in time.
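As a purely illustrative sketch of the reporting step described above, the payload sent through the communication module might be assembled as follows; the function name, field names, and example values are hypothetical and not defined by the application.

```python
def build_danger_report(passenger_identity, family_contact, gps_position):
    """Assemble the report sent to the server when a danger state is
    identified, so the server can notify the passenger's family and
    locate the vehicle for a timely rescue."""
    return {
        "state": "danger",
        "passenger_identity": passenger_identity,  # pre-stored in the vehicle
        "family_contact": family_contact,          # pre-stored in the vehicle
        "vehicle_position": gps_position,          # from the on-board GPS module
    }
```

The actual transport (and any authentication between vehicle and server) is outside the scope of the application and is omitted here.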

[0082] In the method for identifying the passenger state in the unmanned vehicle provided by this embodiment, the monitoring data of the corresponding dimensions is collected in real time, by means of different sensors provided in the unmanned vehicle, in the process where a passenger takes the unmanned vehicle; feature extraction is performed on the monitoring data of each dimension by means of a corresponding feature extraction algorithm and feature data of the corresponding dimension is formed; a second training sample and a second test sample are obtained; the second identification algorithm is trained by means of the second training sample, and the second identification algorithm is tested by means of the second test sample until the second identification algorithm converges; the feature data of different dimensions is input into the second identification algorithm; the passenger state is identified by the second identification algorithm, and the passenger state is then output. The method enables omnidirectional monitoring of the personal safety and property safety of passengers; furthermore, the accuracy of identifying the passenger state is effectively improved by using the optimized deep learning algorithm to identify the passenger state.

Embodiment 4



[0083] FIG. 4 is a schematic structural diagram of an apparatus for identifying a passenger state in an unmanned vehicle according to Embodiment 4 of the present application. As shown in FIG. 4, the apparatus 40 for identifying a passenger state in an unmanned vehicle according to this embodiment includes: a data obtaining module 41, a feature extraction module 42 and a state identification module 43.

[0084] In this case, the data obtaining module 41 is configured to obtain monitoring data of different dimensions in the process where the passenger takes the unmanned vehicle. The feature extraction module 42 is configured to perform feature extraction on the monitoring data of different dimensions and form feature data of different dimensions. The state identification module 43 is configured to identify the passenger state according to the feature data of different dimensions.

[0085] The apparatus for identifying the state of the passenger in the unmanned vehicle provided in this embodiment can perform the technical solution of the method embodiment shown in FIG. 1, and the implementation principle and technical effects thereof are similar, thus details are not described herein again.

Embodiment 5



[0086] FIG. 5 is a schematic structural diagram of an apparatus for identifying a passenger state in an unmanned vehicle according to Embodiment 5 of the present application. As shown in FIG. 5, based on the apparatus for identifying a passenger state in an unmanned vehicle provided in Embodiment 4 of the present application, the apparatus 50 for identifying a passenger state in an unmanned vehicle provided in this embodiment further includes: a first sample obtaining module 51, a first optimization module 52, a second sample obtaining module 53 and a second optimization module 54.

[0087] Further, the data obtaining module 41 is specifically configured to: collect in real time, by means of different sensors provided in the unmanned vehicle, the monitoring data of the corresponding dimensions in the process where a passenger takes the unmanned vehicle.

[0088] Further, the sensors include at least: an internal camera, a microphone, a vital sign sensor, and a collision sensor.

[0089] Further, the data obtaining module 41 is specifically configured to: collect, by means of the internal camera, expression monitoring data and limb movement monitoring data in the process where the passenger takes the unmanned vehicle; collect, by means of the microphone, sound monitoring data in the process where the passenger takes the unmanned vehicle; collect, by means of the vital sign sensor, vital sign monitoring data in the process where the passenger takes the unmanned vehicle; and collect, by means of the collision sensor, collision data of colliding the vehicle body or the seat in the process where the passenger takes the unmanned vehicle.
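For illustration only, the per-dimension collection performed by the data obtaining module 41 might be sketched as below; the sensor objects, their `read()` method, and the `StubSensor` class are assumed names introduced for this sketch, not part of the application.

```python
def collect_monitoring_data(sensors):
    """Poll each sensor once and return a mapping from dimension
    name (expression, sound, vital_sign, collision, ...) to the
    latest monitoring reading."""
    return {dimension: sensor.read() for dimension, sensor in sensors.items()}


class StubSensor:
    """Stand-in for a real sensor driver; returns a fixed reading."""

    def __init__(self, reading):
        self._reading = reading

    def read(self):
        return self._reading
```

In a real vehicle, each `read()` would wrap the internal camera, microphone, vital sign sensor, or collision sensor driver, and collection would run continuously rather than once.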

[0090] Further, the feature extraction module 42 is specifically configured to: perform feature extraction on the monitoring data of each dimension by means of a corresponding feature extraction algorithm, and form feature data of the corresponding dimension.

[0091] Optionally, the state identification module 43 is specifically configured to: input the feature data of each dimension into the corresponding first identification algorithm, and output the passenger state probability data corresponding to the feature data of each dimension; obtain a weight value corresponding to the passenger state probability data of each dimension; perform a weighted summation calculation on passenger state probability data of respective dimensions to obtain general passenger state probability data; determine the passenger state according to the general passenger state probability data and a preset threshold.
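As a minimal illustrative sketch of the fusion just described (per-dimension probabilities, weights, weighted summation, threshold comparison), the state identification module 43 might implement something like the following; the weight values, the threshold of 0.5, and the use of "greater than or equal" are hypothetical choices, since the application only specifies a preset threshold.

```python
def identify_passenger_state(probabilities, weights, threshold=0.5):
    """probabilities and weights both map a dimension name to a value;
    the weights are assumed to sum to 1. Returns 'danger' or 'safety'
    by comparing the general probability against the preset threshold."""
    general = sum(probabilities[d] * weights[d] for d in probabilities)
    return "danger" if general >= threshold else "safety"
```

For example, with per-dimension danger probabilities `{"expression": 0.9, "sound": 0.7, "vital_sign": 0.2, "collision": 0.9}` and weights `{"expression": 0.3, "sound": 0.2, "vital_sign": 0.3, "collision": 0.2}`, the general probability is 0.27 + 0.14 + 0.06 + 0.18 = 0.65, which exceeds the threshold, so the danger state is determined.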

[0092] Optionally, the state identification module 43 is specifically configured to: input the feature data of different dimensions into a second identification algorithm, and identify the passenger state by means of the second identification algorithm; output the passenger state.

[0093] Optionally, the first identification algorithm is a deep learning algorithm. The first sample obtaining module 51 is configured to obtain the first training sample and the first test sample, where the first training sample and the first test sample are feature data of the same dimension of each passenger. The first optimization module 52 is configured to train the first identification algorithm by means of the first training sample, and test the first identification algorithm by means of the first test sample until the first identification algorithm converges.

[0094] Optionally, the second identification algorithm is a deep learning algorithm. The second sample obtaining module 53 is configured to obtain the second training sample and the second test sample, where the second training sample and the second test sample are feature data of different dimensions of each passenger. The second optimization module 54 is configured to train the second identification algorithm by means of the second training sample, and test the second identification algorithm by means of the second test sample until the second identification algorithm converges.

[0095] The apparatus for identifying the passenger state in the unmanned vehicle provided in this embodiment can perform the technical solution of the method embodiment shown in FIG. 2 or FIG. 3. As the implementation principle and technical effects are similar, details are not described herein again.

Embodiment 6



[0096] FIG. 6 is a schematic structural diagram of a terminal device according to Embodiment 6 of the present application. As shown in FIG. 6, the terminal device 60 provided in this embodiment includes: one or more processors 61, a storage apparatus 62, and a data collecting apparatus 63.

[0097] The storage apparatus 62 is configured to store one or more programs. The data collecting apparatus 63 is configured to collect monitoring data of different dimensions. The one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method for identifying the passenger state in the unmanned vehicle provided in Embodiment 1 of the present application, or the method for identifying the passenger state in the unmanned vehicle provided in Embodiment 2 of the present application, or the method for identifying the passenger state in the unmanned vehicle provided in Embodiment 3 of the present application.

[0098] The related description can be understood by referring to the related descriptions and effects corresponding to the steps in FIG. 1 to FIG. 3, and no further description is made here.

Embodiment 7



[0099] Embodiment 7 of the present application provides a computer readable storage medium, on which a computer program is stored, and the computer program is executed by the processor to implement the method for identifying the passenger state in the unmanned vehicle provided in Embodiment 1 of the present application, or the method for identifying the passenger state in the unmanned vehicle provided in Embodiment 2 of the present application, or the method for identifying the passenger state in the unmanned vehicle provided in Embodiment 3 of the present application.

[0100] In the several embodiments provided by the present application, it should be understood that the disclosed apparatus and method may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative. For example, the division of the modules is only a logical function division. In actual implementation, there may be another division manner, for example, multiple modules or components may be combined or integrated into another system, or some features may be ignored or not implemented. Alternatively, the coupling, direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection through some interfaces, apparatuses or modules, and may be in electrical, mechanical or other form.

[0101] The modules described as separate components may or may not be physically separated, and the components illustrated as modules may or may not be physical modules, that is, the components may be located in one place, or may be distributed to multiple network modules. Some or all of the modules may be selected according to actual needs to achieve the objectives of the solution of the embodiment.

[0102] In addition, each functional module in each embodiment of the present application may be integrated into one processing module, or each module may exist physically separately, or two or more modules may be integrated into one module. The above integrated module may be implemented in the form of hardware or in the form of a combination of hardware and software functional modules.

[0103] Program codes for implementing the method of the present application can be written in any combination of one or more programming languages. The program codes may be provided to a general purpose computer, a special purpose computer or a processor or controller of other programmable data processing apparatus such that the program codes, when executed by the processor or controller, cause the functions/operations specified in the flowcharts and/or block diagrams to be implemented. The program codes may be executed entirely or partly on the machine, or be executed, as a stand-alone software package, partly on the machine and partly on the remote machine, or be executed entirely on a remote machine or a server.

[0104] In the context of the present application, a machine-readable medium may be a tangible medium that may include or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium can be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium can include, but is not limited to, electronic, magnetic, optical, electromagnetic, infrared, or semiconductor systems, apparatuses, or devices, or any suitable combination of the above. More specific examples of the machine-readable storage medium may include electrical connections based on one or more wires, a portable computer disk, a hard disk, a random access memory (RAM), a read only memory (ROM), an erasable programmable read only memory (EPROM or flash memory), an optical fiber, a compact disk read only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.

[0105] In addition, although the operations are depicted in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order, or that all illustrated operations be performed, to achieve the desired results. Multitasking and parallel processing may be advantageous in certain circumstances. Likewise, several specific implementation details are included in the above discussion, which however should not be construed as limiting the scope of the disclosure. Certain features that are described in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can be implemented in a plurality of embodiments, either individually or in any suitable sub-combination.

[0106] Although the subject matter has been described in language specific to structural features and/or methodological acts, it should be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Instead, the specific features and acts described above are merely exemplary forms of implementing the claims.


Claims

1. A method for identifying a passenger state in an unmanned vehicle, comprising:

obtaining (101) monitoring data of different dimensions in a process where a passenger takes the unmanned vehicle;

performing (102) feature extraction on the monitoring data of different dimensions and forming feature data of different dimensions; and

identifying (103) the passenger state according to the feature data of different dimensions.


 
2. The method according to claim 1, wherein the obtaining (101) monitoring data of different dimensions in a process where the passenger takes the unmanned vehicle comprises:
collecting (201) in real time, by different sensors provided in the unmanned vehicle, monitoring data of corresponding dimensions in the process where the passenger takes the unmanned vehicle.
 
3. The method of claim 2, wherein the sensors comprise at least: an internal camera, a microphone, a vital sign sensor, and a collision sensor.
 
4. The method according to claim 3, wherein the collecting (201) in real time, by different sensors provided in the unmanned vehicle, monitoring data of corresponding dimensions in the process where the passenger takes the unmanned vehicle, comprises:

collecting, by the internal camera, expression monitoring data and limb movement monitoring data in the process where the passenger takes the unmanned vehicle;

collecting, by the microphone, sound monitoring data in the process where the passenger takes the unmanned vehicle;

collecting, by the vital sign sensor, vital sign monitoring data in the process where the passenger takes the unmanned vehicle; and

collecting, by the collision sensor, collision data of colliding a vehicle body or a seat in the process where the passenger takes the unmanned vehicle.


 
5. The method according to claim 1, wherein the performing (102) feature extraction on the monitoring data of different dimensions and forming feature data of different dimensions comprises:
performing (202) the feature extraction on the monitoring data of each dimension by means of a corresponding feature extraction algorithm and forming feature data of a corresponding dimension.
 
6. The method according to claim 1, wherein the identifying (103) the passenger state according to the feature data of different dimensions comprises:

inputting (205) feature data of each dimension into a corresponding first identification algorithm, and outputting passenger state probability data corresponding to the feature data of each dimension;

obtaining (206) a weight value corresponding to the passenger state probability data of each dimension;

performing (207) a weighted summation calculation on passenger state probability data of respective dimensions to obtain general passenger state probability data; and

determining (208) the passenger state according to the general passenger state probability data and a preset threshold.


 
7. The method according to claim 1, wherein the identifying (103) the passenger state according to the feature data of different dimensions comprises:

inputting (305) the feature data of different dimensions into a second identification algorithm, and identifying the passenger state by means of the second identification algorithm; and

outputting (306) the passenger state.


 
8. An apparatus (40) for identifying a passenger state in an unmanned vehicle, comprising:

a data obtaining module (41), configured to obtain monitoring data of different dimensions in a process where a passenger takes the unmanned vehicle;

a feature extraction module (42), configured to perform feature extraction on the monitoring data of different dimensions and form feature data of different dimensions;

a state identification module (43), configured to identify the passenger state according to the feature data of different dimensions.


 
9. The apparatus (40) according to claim 8, wherein the data obtaining module (41) is specifically configured to:
collect in real time, by different sensors provided in the unmanned vehicle, monitoring data of corresponding dimensions in the process where the passenger takes the unmanned vehicle.
 
10. The apparatus (40) according to claim 9, wherein the sensors comprise at least: an internal camera, a microphone, a vital sign sensor, and a collision sensor.
 
11. The apparatus (40) according to claim 10, wherein the data obtaining module (41) is specifically configured to:

collect, by the internal camera, expression monitoring data and limb movement monitoring data in the process where the passenger takes the unmanned vehicle;

collect, by the microphone, sound monitoring data in the process where the passenger takes the unmanned vehicle;

collect, by the vital sign sensor, vital sign monitoring data in the process where the passenger takes the unmanned vehicle; and

collect, by the collision sensor, collision data of colliding a vehicle body or a seat in the process where the passenger takes the unmanned vehicle.


 
12. The apparatus (40) according to claim 8, wherein the feature extraction module (42) is specifically configured to:
perform the feature extraction on the monitoring data of each dimension by means of a corresponding feature extraction algorithm and form feature data of a corresponding dimension.
 
13. The apparatus (40) according to claim 8, wherein the state identification module (43) is specifically configured to:

input feature data of each dimension into a corresponding first identification algorithm, and output passenger state probability data corresponding to the feature data of each dimension;

obtain a weight value corresponding to the passenger state probability data of each dimension;

perform a weighted summation calculation on passenger state probability data of respective dimensions to obtain general passenger state probability data; and

determine the passenger state according to the general passenger state probability data and a preset threshold.


 
14. The apparatus (40) according to claim 8, wherein the state identification module (43) is specifically configured to:

input the feature data of different dimensions into a second identification algorithm, and identify the passenger state by means of the second identification algorithm; and

output the passenger state.


 
15. A terminal device (60), comprising:

one or more processors (61);

a storage apparatus (62), configured to store one or more programs;

a data collecting apparatus (63), configured to collect monitoring data of different dimensions;

wherein the one or more programs, when executed by the one or more processors (61), cause the one or more processors to implement the method according to any one of claims 1-7.


 




Drawing
Search report