[0001] The present disclosure relates to the field of digital vehicle access. Embodiments
relate to a method, an apparatus, a vehicle and a computer program.
[0002] Existing systems for unlocking/opening a flap of a vehicle, for example a tailgate
of a trunk or a front edge of a frunk, use so-called smart openers which, for example,
enable opening by means of a gesture executed with a foot. In particular, this allows
the flap to be opened without the use of hands. Other methods detect how long user
equipment remains in the immediate vicinity of a door that is to be opened. In
this case, the door can be opened after a predefined period of time has elapsed. Both
the gesture to be performed with a foot and waiting for a door to open can be perceived
as unpleasant by a user. Thus, there may be a need to improve a control of a flap
of a vehicle.
[0003] It is therefore a finding that a control of a flap of a vehicle can be improved by
obtaining position data and obtaining orientation data. By comparing the position
of the user and the orientation of the user with reference values, an intent of the
user to use the flap can be determined. Thus, the movement of the flap can be controlled
in an improved way.
[0004] Examples provide a method for executing by a processing circuitry of a vehicle for
improving a control of a flap of the vehicle. The method comprises obtaining position
data indicative of a position of a user and obtaining orientation data indicative
of an orientation of the user. Further, the method comprises comparing the position
of the user with a reference area and comparing the orientation of the user with a
reference orientation. Further, when the position of the user matches the reference
area and the orientation of the user matches the reference orientation, the method
further comprises generating control data for controlling a movement of the flap of
the vehicle. Using the position of the user and the orientation of the user may allow
increasing a reliability of a determination of the user intent. For example, a user
may only intend to use a trunk of the vehicle if the user faces the trunk. Thus,
a false-positive event caused by the correct position of the user in the reference
area can be avoided. Combining the position of the user and the orientation of the
user may allow controlling the movement of the flap in an improved way.
[0005] In an example, the method may further comprise receiving measurement data from an
antenna device of the vehicle and determining the position of the user by determining
a position of user equipment configured as a digital key. The position of the user is
determined based on the measurement data. For example, ultra-wideband communication
and/or a Bluetooth low energy communication may be used to determine the position
of the user equipment. Determining the position of the user based on the position
of the user equipment may allow determining the position in a facilitated way. Further,
the user equipment may be configured to allow the user access to the vehicle. Thus,
a control of the movement of the flap may only be triggered for user equipment authorized
to unlock the flap. In this way, it can be ensured that only an authorized user can
trigger a generation of control data.
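For illustration only, the localization of the user equipment from antenna range measurements could look like the following minimal Python sketch. The two-dimensional vehicle frame, the antenna coordinates and the name localize_key are assumptions of this sketch, not part of a standard or vehicle API; it merely shows a least-squares lateration over such measurement data.

```python
# Hypothetical sketch: least-squares lateration of the digital key from
# range measurements of several vehicle antennas (e.g., UWB time-of-flight).
import numpy as np

def localize_key(antennas: np.ndarray, ranges: np.ndarray) -> np.ndarray:
    """Estimate the 2D key position from at least three range measurements.

    Subtracting the circle equation of the first antenna from the others
    yields a linear system, solved here in the least-squares sense.
    """
    p0, r0 = antennas[0], ranges[0]
    a = 2.0 * (antennas[1:] - p0)                          # (n-1, 2)
    b = (np.sum(antennas[1:] ** 2, axis=1) - np.sum(p0 ** 2)
         + r0 ** 2 - ranges[1:] ** 2)                      # (n-1,)
    position, *_ = np.linalg.lstsq(a, b, rcond=None)
    return position

# Illustrative example: three antennas in a vehicle frame (meters).
antennas = np.array([[0.0, 0.0], [1.8, 0.0], [0.9, 4.5]])
true_position = np.array([1.0, 5.5])                       # user behind the trunk
ranges = np.linalg.norm(antennas - true_position, axis=1)  # ideal, noise-free
print(localize_key(antennas, ranges))                      # ~ [1.0, 5.5]
```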
[0006] In an example, the method may further comprise receiving sensor data from an orientation
sensor and determining the orientation of the user based on the sensor data. Receiving
the sensor data may allow determining the orientation of the user in a facilitated
way.
[0007] In an example, the orientation sensor may be a light detection and ranging sensor
and/or a camera. Using the orientation sensor in conjunction with an antenna device
may allow to use two different sensors for obtaining information to generate the control
data. In this way, a reliability of the control data can be increased.
[0008] In an example, the method may further comprise obtaining time data indicative of
a time threshold and generating the control data based on the time data. Using the
time data may allow generating the control data after a predefined time has
elapsed. For example, the control data may only be generated if the time threshold
is exceeded. In this way, false-positive events can be reduced.
[0009] In an example, the method may further comprise obtaining a trajectory of the user
and comparing the trajectory of the user with a reference trajectory. Further, the
control data may only be generated when the trajectory of the user matches the reference
trajectory. Thus, the user trajectory can be used to improve the determination of
the user intent. In this way, false-positive events can be reduced.
[0010] In an example, the control data is for closing and/or opening the flap of the vehicle,
e.g., a tailgate of a trunk, a front edge of a frunk and/or a door of the vehicle.
[0011] Examples relate to an apparatus, comprising interface circuitry configured to communicate
with user equipment, an antenna device and/or an orientation sensor and processing
circuitry configured to perform a method as described above. Examples relate to a
vehicle, comprising an apparatus as described above.
[0012] Examples further relate to a computer program having a program code for performing
the method described above, when the computer program is executed on a computer, a
processor, or a programmable hardware component.
[0013] Some examples of apparatuses, methods and/or computer programs will be described
in the following by way of example only, and with reference to the accompanying figures,
in which
Fig. 1 shows an example of a method for executing by a processing circuitry of a vehicle
for improving a control of a flap of the vehicle; and
Fig. 2 shows a block diagram of an example of an apparatus, e.g., part of a vehicle.
[0014] As used herein, the term "or" refers to a non-exclusive or, unless otherwise indicated
(e.g., "or else" or "or in the alternative"). Furthermore, as used herein, words used
to describe a relationship between elements should be broadly construed to include
a direct relationship or the presence of intervening elements unless otherwise indicated.
For example, when an element is referred to as being "connected" or "coupled" to another
element, the element may be directly connected or coupled to the other element or
intervening elements may be present. In contrast, when an element is referred to as
being "directly connected" or "directly coupled" to another element, there are no
intervening elements present. Similarly, words such as "between", "adjacent", and
the like should be interpreted in a like fashion.
[0015] The terminology used herein is for the purpose of describing particular embodiments
only and is not intended to be limiting of example embodiments. As used herein, the
singular forms "a", "an" and "the" are intended to include the plural forms as well,
unless the context clearly indicates otherwise. It will be further understood that
the terms "comprises", "comprising", "includes", or "including", when used herein,
specify the presence of stated features, integers, steps, operations, elements or
components, but do not preclude the presence or addition of one or more other features,
integers, steps, operations, elements, components or groups thereof.
[0016] Unless otherwise defined, all terms (including technical and scientific terms) used
herein have the same meaning as commonly understood by one of ordinary skill in the
art to which example embodiments belong. It will be further understood that terms,
e.g., those defined in commonly used dictionaries, should be interpreted as having
a meaning that is consistent with their meaning in the context of the relevant art
and will not be interpreted in an idealized or overly formal sense unless expressly
so defined herein.
[0017] Fig. 1 shows an example of a method 100 for executing by a processing circuitry of
a vehicle for improving a control of a flap of the vehicle. The flap may be a cover
of a storage compartment such as a trunk or frunk and/or a door of the vehicle. The
method 100 comprises obtaining 110 position data indicative of a position of a user
(relative to the flap of the vehicle). The position data may be determined by the
processing circuitry based on sensor data and/or measurement data. For example, the
processing circuitry may receive, from an orientation sensor, sensor data indicative
of a surrounding of the vehicle. Thus, the processing circuitry may determine a position
of the user relative to the vehicle based on the orientation sensor data. Alternatively,
the processing circuitry may receive information about the position of the user relative
to the vehicle directly from the orientation sensor. In this case, the processing
circuitry does not need to determine the position of the user.
[0018] Further, the method 100 comprises obtaining 120 orientation data indicative of an
orientation of the user. The orientation data can be determined by the processing
circuitry based on orientation sensor data. Alternatively, the processing circuitry
may receive the orientation of the user directly, e.g., from the orientation sensor.
In this case, post processing of the orientation sensor data by the processing circuitry
can be omitted.
[0019] Further, the method 100 comprises comparing 130 the position of the user with a reference
area. Comparing 130 may be performed by the processing circuitry. The reference area
may be a predefined area in which a movement of the flap may be intended by the user.
For example, the reference area may be an area adjacent to a trunk of the vehicle. When
the user stands in front of the trunk inside the reference area, the user may intend
to use the trunk. Thus, the reference area may define an area in the surrounding of
the vehicle, for which a usage of the flap by the user is likely. The reference area
may be different for each flap of the vehicle. Alternatively, a reference area may
be assigned to multiple flaps of the vehicle. In this case, a determination which
flap should be opened can be based optionally on the orientation of the user.
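A minimal sketch of the comparison 130, assuming a rectangular reference area in a two-dimensional vehicle frame, could look as follows. The coordinates and dimensions are illustrative only; an elliptic area could be checked analogously.

```python
# Hypothetical sketch of step 130: is the user position inside the reference area?
from dataclasses import dataclass

@dataclass
class ReferenceArea:
    x_min: float  # vehicle-frame coordinates in meters (assumed convention)
    x_max: float
    y_min: float
    y_max: float

    def contains(self, x: float, y: float) -> bool:
        return self.x_min <= x <= self.x_max and self.y_min <= y <= self.y_max

# Example: a 2 square meter rectangular zone behind the trunk (2 m wide, 1 m deep).
trunk_area = ReferenceArea(x_min=-1.0, x_max=1.0, y_min=4.8, y_max=5.8)
print(trunk_area.contains(0.2, 5.3))   # True: user stands inside the zone
```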
[0020] Further, the method 100 comprises comparing 140 the orientation of the user with
a reference orientation. The reference orientation may be an orientation of the user
for which it is likely that the user intends to open/close the flap of the vehicle.
For example, if the user faces the flap of the vehicle and/or a gaze direction of
the user is towards the flap, the orientation of the user may indicate an intent to
open/close the flap of the vehicle. Thus, the reference orientation may be a user
facing the flap and/or looking towards the flap.
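A minimal sketch of the comparison 140, assuming the orientation is available as a heading angle in the vehicle frame, could look as follows; the 30 degree tolerance is an arbitrary assumption of this sketch.

```python
# Hypothetical sketch of step 140: does the user heading match the reference
# orientation within a tolerance? Angles are in radians in the vehicle frame.
import math

def orientation_matches(user_heading: float,
                        reference_heading: float,
                        tolerance: float = math.radians(30)) -> bool:
    """True if the user faces within `tolerance` of the reference heading."""
    # Wrap the difference to [-pi, pi) so that e.g. 350 deg vs. 10 deg matches.
    diff = (user_heading - reference_heading + math.pi) % (2 * math.pi) - math.pi
    return abs(diff) <= tolerance

# Example: the flap lies at heading pi/2; the user faces roughly towards it.
print(orientation_matches(math.radians(100), math.pi / 2))  # True
```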
[0021] Further, when the position of the user matches the reference area and the orientation
of the user matches the reference orientation, the method 100 further comprises generating
150 control data for controlling a movement of the flap of the vehicle. The control
data may be determined by the processing circuitry of an apparatus (see Fig. 2) configured
to perform the method 100. Using the position of the user and the orientation of the
user, a determination of an intent of the user to open/close the flap of the vehicle
can be improved. Thus, a control of the movement of the flap can be improved. For
example, if the user stands in the reference area but is turned away from the vehicle,
no control data may be generated 150. In this way, control data may only be generated
150 if the determined user intent based on the orientation of the user matches the
determined user intent based on the position of the user. Thus, a false-positive event
caused by a user standing in the reference area can be avoided.
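Reusing the two sketches above, the gating of the generation 150 could be expressed as follows; the payload of the control data is purely illustrative.

```python
# Hypothetical sketch of step 150: control data is only generated when both
# the position check (130) and the orientation check (140) agree.
def maybe_generate_control_data(x, y, heading, area, reference_heading):
    """Return illustrative control data if both checks agree, else None."""
    if area.contains(x, y) and orientation_matches(heading, reference_heading):
        return {"flap": "trunk", "command": "open"}   # illustrative payload
    # E.g., user inside the area but turned away: no false-positive event.
    return None
```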
[0022] For example, comparing 130 the user position and comparing 140 the orientation of
the user may indicate a user intention to access a storage compartment of a vehicle,
e.g., to store an object in the storage compartment and/or to take an object out of
the storage compartment covered by the flap.
[0023] For example, the flap may be a flap of a trunk. The control data may be indicative
of an opening and/or closing of the trunk. The reference orientation and/or the reference
area may be different for opening and closing of the trunk. For example, the reference
orientation for opening the trunk may be a user facing the trunk and/or looking at
the trunk. The reference orientation for closing the trunk may be a user turned away
from the trunk and/or not looking at the trunk, for example.
[0024] For example, the trunk shall be opened when the user is localized (e.g., tracked
by his user equipment) for a certain amount of time (e.g., at least 1 s, or at least
2 s, or at least 3 s) in front of the trunk in the reference area. Further, the trunk
shall be opened when live video data from the surrounding of the vehicle (e.g., received
from the orientation sensor) until that time shows that the user approaches the trunk
and turns his or her front body towards the trunk, for example. The reference area may be
a rectangular or elliptic area of, for example, two square meters behind the trunk.
The reference area can have any desired shape and/or dimension. Thus, a user intent
to use the trunk can be determined based on the position of the user and the orientation
of the user.
[0025] For example, the trunk shall be closed when the user exits the reference area and
stays away for a certain amount of time (e.g., at least 1 s, or at least 2 s, or at
least 3 s) and when the live video data from the surrounding of the vehicle shows
that the user is not in the reference area and/or is not facing and/or looking at
the trunk. The live video data may be received from an orientation sensor. Alternatively,
if no live video data is available, because the orientation sensor may be obstructed
by an opened flap of the trunk, generating 150 the control data may be only based
on the position of the user. Generating 150 the control data for opening the flap
may be independent of a position of the flap. Thus, generating 150 the control data
for opening the flap is based on the position of the user and the orientation of the
user. However, an opened flap may obstruct an orientation sensor. Thus, generating
150 control data for closing the flap may be based on only the position of the user.
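The asymmetry between opening and closing described above could be sketched as follows. The flag video_available and the command strings are assumptions of this sketch, and the dwell times of the previous paragraphs are omitted for brevity.

```python
# Hypothetical sketch: opening requires position and orientation, while
# closing falls back to the position only when the opened flap obstructs
# the orientation sensor (no live video data available).
def decide_flap_command(in_area: bool, facing_flap: bool,
                        flap_open: bool, video_available: bool):
    if not flap_open:
        # Opening: independent of the flap position, both criteria required.
        if in_area and facing_flap:
            return "open"
    else:
        if video_available:
            if not in_area and not facing_flap:
                return "close"
        elif not in_area:
            return "close"   # sensor obstructed: position-only fallback
    return None              # no intent determined: no control data
```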
[0026] For example, method 100 can be performed by combining live camera data, e.g., received
from a camera of the vehicle, and live digital key localization. Both aspects are
described in more detail below.
[0027] In an example, the method 100 may further comprise receiving measurement data from
an antenna device of the vehicle and determining the position of the user by determining
a position of user equipment configured as a digital key. The position of the user is
determined based on the measurement data. The antenna device may be an ultra-wideband
receiver and/or a Bluetooth receiver, for example. The antenna device may be
part of the vehicle. For example, the antenna device may be used to communicate with
other communication devices, such as other vehicles, infrastructures and/or base
stations. The antenna device can be used to communicate with user equipment of the
user. The user equipment of the user can be configured as a digital key, as defined
in the Car Connectivity Consortium (CCC) standard, e.g., the Digital Key Release 3
Version 12.0-r14. Thus, a position of the user can be determined based on the position
of the user equipment.
[0028] Determining a position of the user equipment may allow determining an authorization
of the user to access the vehicle, e.g., a storage compartment of the vehicle. Thus,
it can be ensured that the control data is only generated 150 by the apparatus if
the user approaching the vehicle is authorized to access the vehicle. In this way,
generation 150 of control data triggered by an unauthorized user can be avoided. For
example, the control data may be transmitted to an actuator configured to move the
flap. Alternatively, the apparatus may trigger the movement of the flap, e.g., an
actuator configured to move the flap may be part of the apparatus.
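A minimal sketch of such an authorization gate could look as follows; the set of authorized key identifiers and the callback send_to_actuator are hypothetical names, not a defined vehicle API.

```python
# Hypothetical sketch: only forward control data for user equipment that is
# authorized as a digital key; otherwise no flap movement is triggered.
def generate_if_authorized(key_id: str, authorized_keys: set,
                           control_data: dict, send_to_actuator) -> bool:
    if key_id not in authorized_keys:
        return False            # unauthorized user: no movement triggered
    send_to_actuator(control_data)   # e.g., transmit to the flap actuator
    return True
```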
[0029] Additionally or alternatively, the position of the user can be determined based on
sensor data indicative of an orientation of the user. The sensor data can also be
used to determine a position of the user relative to the vehicle. In an example, the
method 100 may further comprise receiving sensor data from an orientation sensor and
determining the orientation of the user based on the sensor data. In an example, the
orientation sensor may be a light detection and ranging sensor and/or a camera.
[0030] For example, the live image data may be acquired by the orientation sensor. The live
image data can be transmitted from the orientation sensor to the apparatus performing
the method 100. Based on the live image data, the apparatus can determine the orientation
of the user and optionally the position of the user relative to the flap of the vehicle.
In principle, the position of the user and the orientation of the user can be determined
based on only the live image data, e.g., received from the orientation sensor.
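For illustration, the orientation of the user could be derived from pose keypoints detected in the live image data, e.g., the shoulder line. The sketch below assumes that a pose estimator (not shown) already provides two-dimensional shoulder coordinates in a ground-plane frame; the sign convention is an assumption of this sketch.

```python
# Hypothetical sketch: derive the user's front-body heading from two
# shoulder keypoints provided by an assumed pose estimator.
import math

def body_heading_from_shoulders(left_shoulder, right_shoulder) -> float:
    """Heading of the user's front body, perpendicular to the shoulder line.

    Returns an angle in radians in the ground plane.
    """
    dx = right_shoulder[0] - left_shoulder[0]
    dy = right_shoulder[1] - left_shoulder[1]
    # Rotate the left-to-right shoulder vector by -90 degrees to obtain the
    # facing direction; this convention is an assumption of the sketch.
    return math.atan2(-dx, dy)
```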
[0031] Additionally or alternatively, the live image data can be combined with the live
digital key localization, as described above. In this way, an authorization of the
user can be verified. Further, using data from different sensors, namely the antenna
device and the orientation sensor, may increase a reliability of the method 100.
[0032] Optionally, false-positive events can be reduced by indicating a movement of the
flap to the user in advance. For example, the apparatus may generate 150 the control
data indicative of an opening of the flap. Before the apparatus triggers the opening
of the flap, the apparatus may trigger an information output to the user for informing
the user about the opening of the flap in advance. For example, the apparatus may
trigger an illumination of lights of the vehicle. Thus, the user can be informed
about the future opening of the flap by the vehicle. Therefore, if the user did not
intend to open the flap, the user can turn away from the flap and/or leave the reference
area to avoid an opening of the flap. For example, if the user did not intend to close
a flap, the user can reappear in front of the orientation sensor and/or reenter the
reference area to avoid a closing of the flap.
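The advance indication with a cancel window could be sketched as follows; flash_lights, user_in_area and move_flap are assumed callbacks, and the 2 s notice period is arbitrary.

```python
# Hypothetical sketch: announce the opening first, then give the user a short
# window to cancel by leaving the reference area before the flap moves.
import time

def open_with_advance_notice(flash_lights, user_in_area, move_flap,
                             notice_s: float = 2.0) -> bool:
    flash_lights()                      # announce the upcoming opening
    deadline = time.monotonic() + notice_s
    while time.monotonic() < deadline:
        if not user_in_area():          # user walked away: abort opening
            return False
        time.sleep(0.1)
    move_flap("open")
    return True
```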
[0033] In an example, the method may further comprise obtaining time data indicative of
a time threshold and generating the control data based on the time data. As described
above, the apparatus may generate 150 the control data only if the user is within the
reference area for a certain time and/or has a certain orientation for a certain time.
For example, the control data may only be generated 150 if the user is in the reference
area for at least 1 s or at least 2 s or at least 3 s and/or the user orientation matches
the reference orientation for at least 1 s or at least 2 s or at least 3 s. Using the time
data may increase a reliability of the method 100.
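A minimal sketch of such a dwell-time check could look as follows; the threshold value and the monotonic timestamps fed into update are assumptions of this sketch.

```python
# Hypothetical sketch: control data is only released once position and
# orientation have matched continuously for at least `threshold_s` seconds.
class DwellTimer:
    def __init__(self, threshold_s: float = 2.0):
        self.threshold_s = threshold_s
        self.match_since = None          # start of the current matching streak

    def update(self, matches: bool, now_s: float) -> bool:
        """Feed one observation; True once the time threshold is exceeded."""
        if not matches:
            self.match_since = None      # streak broken: reset the timer
            return False
        if self.match_since is None:
            self.match_since = now_s
        return now_s - self.match_since >= self.threshold_s
```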
[0034] In an example, the method may further comprise obtaining a trajectory of the user
and comparing the trajectory of the user with a reference trajectory. Further, the
control data may only be generated when the trajectory of the user matches the reference
trajectory. Thus, the user trajectory can be used to improve the determination of
the user intent. In this way, false-positive events can be reduced.
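One simple way to compare trajectories, shown below for illustration only, is a mean pointwise distance after resampling both trajectories to a common length; the 0.5 m tolerance is an arbitrary assumption, and more robust measures (e.g., a Fréchet distance) could be used instead.

```python
# Hypothetical sketch: compare the user trajectory with a reference
# trajectory; both are (n, 2) arrays of vehicle-frame positions in meters.
import numpy as np

def trajectory_matches(user_traj: np.ndarray, ref_traj: np.ndarray,
                       tolerance_m: float = 0.5) -> bool:
    n = min(len(user_traj), len(ref_traj))
    # Resample both trajectories to n points by index interpolation.
    idx_u = np.linspace(0, len(user_traj) - 1, n).round().astype(int)
    idx_r = np.linspace(0, len(ref_traj) - 1, n).round().astype(int)
    dists = np.linalg.norm(user_traj[idx_u] - ref_traj[idx_r], axis=1)
    return float(dists.mean()) <= tolerance_m
```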
[0035] For example, a computer vision model (i.e., a special branch of artificial intelligence)
can be trained to classify videos of the user approaching the flap and/or leaving
the flap. The training data may be user patterns of user behavior from a plurality
of users without any restriction on the users. Additionally or alternatively, a user
pattern may be from a specified subcategory of users, such as employees of the same
company, women or men (which might have different usage of the trunk because of different
bag types). Additionally or alternatively, a user pattern may correspond to a specific
vehicle, e.g., the pattern may be from the plurality of users of a vehicle. Additionally
or alternatively, a user pattern may correspond to a user of the vehicle, e.g., a
single user identified by his digital key.
[0036] The computer vision model is a machine-learning model that is a data structure and/or
set of rules representing a statistical model that the processing circuitry uses to
perform the above tasks without using explicit instructions, instead relying on models
and inference. The data structure and/or set of rules represents learned knowledge
(e.g., based on training performed by a machine-learning algorithm). For example,
in machine-learning, instead of a rule-based transformation of data, a transformation
of data may be used, that is inferred from an analysis of historical and/or training
data. In the proposed technique, the content of the user pattern is analyzed using the
machine-learning model (i.e., a data structure and/or set of rules representing the
model).
[0037] The machine-learning model is trained by a machine-learning algorithm. The term "machine-learning
algorithm" denotes a set of instructions that are used to create, train or use a machine-learning
model. For the machine-learning model to analyze the content of user patterns, the
machine-learning model may be trained using training and/or historical user patterns
as input and training content information (e.g., open or close the flap) as output.
By training the machine-learning model with a large set of training user patterns and
associated training content information (e.g., labels or annotations), the machine-learning
model "learns" to recognize the content of the user patterns, so that the content of user
patterns that are not included in the training data can be recognized using the machine-learning
model. By training the machine-learning model using training user patterns and a desired
output, the machine-learning model "learns" a transformation between the user patterns
and the output, which can be used to provide an output based on non-training user
patterns provided to the machine-learning model.
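For illustration only, the supervised training described here could be sketched as follows, with a logistic regression standing in for the computer vision model and synthetic feature vectors standing in for features extracted from training user patterns; the feature extraction itself is not part of this sketch.

```python
# Hypothetical sketch: train a stand-in classifier that maps features of a
# user pattern to a label (1 = opening intent, 0 = no opening intent).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
features = rng.normal(size=(200, 16))   # synthetic stand-in for video features
labels = rng.integers(0, 2, size=200)   # synthetic stand-in for annotations

model = LogisticRegression(max_iter=1000).fit(features, labels)
# After training, the model maps unseen user patterns to an intent label.
print(model.predict(features[:3]))
```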
[0038] A certain classification trigger may be chosen (e.g., after the user was localized
for at least 3 seconds in the reference area, such as a rectangular zone of 2 square
meters behind the trunk, for opening; or after the user leaves a rectangular zone of
2 square meters behind the trunk, for closing). The last 10 seconds of live video data
can be considered and live classified once the classification trigger is reached.
Then, the machine-learning model shall output a first result: either "trunk opening
intent detected" or "no trunk opening intent detected". While no opening intent is
detected and until the user exits the zone behind the trunk, the machine-learning
model can continuously reassess its prediction on newest data frames of the live video
data.
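This live reassessment loop could be sketched as follows; the frame rate, the callbacks trigger_reached, user_in_zone and classify_window, and the label strings are assumptions of this sketch.

```python
# Hypothetical sketch: once the classification trigger fires, classify the
# last 10 s of frames and reassess on every new frame until an intent is
# detected or the user exits the zone behind the trunk.
from collections import deque

def live_intent_loop(frames, trigger_reached, user_in_zone, classify_window,
                     fps: int = 10):
    window = deque(maxlen=10 * fps)      # sliding window: last 10 s of frames
    for frame in frames:                 # `frames` yields live video frames
        window.append(frame)
        if not trigger_reached():
            continue                     # trigger not reached yet: keep buffering
        if classify_window(list(window)) == "trunk opening intent detected":
            return True                  # intent detected: generate control data
        if not user_in_zone():
            return False                 # user left the zone: stop reassessing
    return False
```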
[0039] This machine-learning model can be trained on a set of live video data, labeled according
to two use cases. The first use case can be an opening intent. That is, the user may
approach the trunk and may wait in front of the flap, pretending to wait for a hands-free
opening of the trunk. The second use case can be a non-opening intent. That is,
the user walks by the trunk and leaves, or the user approaches the trunk but does
not face the trunk (e.g., the user can do something else, like texting or talking
to someone).
[0040] The data collection, e.g., the user pattern collection, can be performed by testers
holding digital keys, such that these training videos can be properly cut off at
the classification trigger.
[0041] Further, as a robustness measure, a computer vision model can confirm that no user
is localized in a trunk area, e.g., the reference area. Thus, the user intention to
close the trunk can be verified and, optionally, an accident can be prevented.
[0042] For example, the computer vision model can be trained using a set of live video data,
labeled according to two use cases. The first use case may be a person localized in
the trunk area. The second use case may be that neither a person nor an animal is
localized in the trunk area.
[0043] In an example, the control data is for closing and/or opening the flap of the vehicle,
e.g., a trunk and/or a door of the vehicle. For example, the control data may be output
data of the computer vision model.
[0044] More details and aspects are mentioned in connection with the embodiments described
below. The example shown in Fig. 1 may comprise one or more optional additional features
corresponding to one or more aspects mentioned in connection with the proposed concept
or one or more examples described below (e.g., Fig. 2).
[0045] Fig. 2 shows a block diagram of an example of an apparatus 30, e.g., for a vehicle
40. The apparatus 30 comprises interface circuitry 32 configured to communicate with
user equipment, an antenna device and/or an orientation sensor and processing circuitry
34 configured to perform a method as described above, e.g., the method for processing
circuitry as described with reference to Fig. 1. For example, the apparatus 30 may
be part of the vehicle 40, e.g., part of a control unit of the vehicle 40.
[0046] For example, the vehicle 40 may be a land vehicle, such as a road vehicle, a car,
an automobile, an off-road vehicle, a motor vehicle, a bus, a robo-taxi, a van, a
truck or a lorry. Alternatively, the vehicle 40 may be any other type of vehicle,
such as a train, a subway train, a boat or a ship. For example, the proposed concept
may be applied to public transportation (trains, bus) and future means of mobility
(e.g., robo-taxis).
As shown in Fig. 2, the respective interface circuitry 32 is coupled to the respective
processing circuitry 34 at the apparatus 30. In examples the processing circuitry
34 may be implemented using one or more processing units, one or more processing devices,
any means for processing, such as a processor, a computer or a programmable hardware
component being operable with accordingly adapted software. Similarly, the described
functions of the processing circuitry 34 may as well be implemented in software, which
is then executed on one or more programmable hardware components. Such hardware components
may comprise a general-purpose processor, a Digital Signal Processor (DSP), a microcontroller,
etc. The processing circuitry 34 is capable of controlling the interface circuitry
32, so that any data transfer that occurs over the interface circuitry 32 and/or any
interaction in which the interface circuitry 32 may be involved may be controlled
by the processing circuitry 34.
[0048] In an embodiment the apparatus 30 may comprise a memory and at least one processing
circuitry 34 operably coupled to the memory and configured to perform the method described
above.
[0049] In examples the interface circuitry 32 may correspond to any means for obtaining,
receiving, transmitting or providing analog or digital signals or information, e.g.,
any connector, contact, pin, register, input port, output port, conductor, lane, etc.
which allows providing or obtaining a signal or information. The interface circuitry
32 may be wireless or wireline and it may be configured to communicate, e.g., transmit
or receive signals or information, with further internal or external components.
[0050] The apparatus 30 may be a computer, processor, control unit, (field) programmable
logic array ((F)PLA), (field) programmable gate array ((F)PGA), graphics processor
unit (GPU), application-specific integrated circuit (ASIC), integrated circuit (IC)
or system-on-a-chip (SoC) system.
[0051] More details and aspects are mentioned in connection with the embodiments described.
The example shown in Fig. 2 may comprise one or more optional additional features
corresponding to one or more aspects mentioned in connection with the proposed concept
or one or more examples described above (e.g., Fig. 1).
[0052] The aspects and features described in relation to a particular one of the previous
examples may also be combined with one or more of the further examples to replace
an identical or similar feature of that further example or to additionally introduce
the features into the further example.
[0053] Examples may further be or relate to a (computer) program including a program code
to execute one or more of the above methods when the program is executed on a computer,
processor or other programmable hardware component. Thus, steps, operations or processes
of different ones of the methods described above may also be executed by programmed
computers, processors or other programmable hardware components. Examples may also
cover program storage devices, such as digital data storage media, which are machine-,
processor- or computer-readable and encode and/or contain machine-executable, processor-executable
or computer-executable programs and instructions. Program storage devices may include
or be digital storage devices, magnetic storage media such as magnetic disks and magnetic
tapes, hard disk drives, or optically readable digital data storage media, for example.
Other examples may also include computers, processors, control units, (field) programmable
logic arrays ((F)PLAs), (field) programmable gate arrays ((F)PGAs), graphics processor
units (GPUs), application-specific integrated circuits (ASICs), integrated circuits
(ICs) or system-on-a-chip (SoCs) systems programmed to execute the steps of the methods
described above.
[0054] It is further understood that the disclosure of several steps, processes, operations
or functions disclosed in the description or claims shall not be construed to imply
that these operations are necessarily dependent on the order described, unless explicitly
stated in the individual case or necessary for technical reasons. Therefore, the previous
description does not limit the execution of several steps or functions to a certain
order. Furthermore, in further examples, a single step, function, process or operation
may include and/or be broken up into several sub-steps, -functions, -processes or
-operations.
[0056] If some aspects have been described in relation to a device or system, these aspects
should also be understood as a description of the corresponding method and vice versa.
For example, a block, device or functional aspect of the device or system may correspond
to a feature, such as a method step, of the corresponding method. Accordingly, aspects
described in relation to a method shall also be understood as a description of a corresponding
block, a corresponding element, a property or a functional feature of a corresponding
device or a corresponding system.
[0057] The following claims are hereby incorporated in the detailed description, wherein
each claim may stand on its own as a separate example. It should also be noted that
although in the claims a dependent claim refers to a particular combination with one
or more other claims, other examples may also include a combination of the dependent
claim with the subject matter of any other dependent or independent claim. Such combinations
are hereby explicitly proposed, unless it is stated in the individual case that a
particular combination is not intended. Furthermore, features of a claim should also
be included for any other independent claim, even if that claim is not directly defined
as dependent on that other independent claim.
References
[0059]
30 apparatus
32 interface circuitry
34 processing circuitry
40 vehicle
100 method for improving a control of a flap of the vehicle
110 obtaining position data indicative of a position of a user
120 obtaining orientation data indicative of an orientation of the user
130 comparing the position of the user with a reference area
140 comparing the orientation of the user with a reference orientation
150 generating control data for controlling a movement of the flap of the vehicle