[0001] The present disclosure relates to the field of digital vehicle access. Embodiments
relate to a method for improving a flap opening of a vehicle, an apparatus, a vehicle
and a computer program.
[0002] Existing systems for unlocking/opening a flap of a vehicle, for example a tailgate
of a trunk or a lid of a front trunk, use so-called smart openers which, for example,
enable opening by means of a gesture executed with a foot. In particular, this allows
the flap to be opened without the use of hands. Other methods detect how long a user
equipment has been in the immediate vicinity of a door that is to be opened. In
this case, the door can be opened after a predefined period of time has elapsed. Both
the gesture to be performed with a foot and waiting for a door to open can be perceived
as unpleasant by a user. Thus, there may be a need to improve a flap opening of a
vehicle.
[0003] It is therefore a finding that an opening of a flap of a vehicle can be improved
by comparing a movement profile of the user with a predefined movement profile. The
flap can be opened if the movement profile of the user matches the predefined movement
profile. In this way, false-positive events can be reduced. Thus, an experience of
the user can be improved.
[0004] Examples provide a method for improving flap opening of a vehicle. The method comprises
determining a movement profile of a user relative to the vehicle. The movement profile
is indicative of a trajectory and a movement speed of the user on the trajectory.
Further, the method comprises comparing the movement profile of the user with a predefined
movement profile. The method further comprises opening the flap of the vehicle if
the movement profile of the user matches the predefined movement profile. By comparing
the movement profile of the user with a predefined movement profile an intention of
the user can be determined. This may allow to determine an intended usage of the flap
by the user in an improved way.
[0005] In an example, the method may further comprise storing the determined movement profile
of the user and using the stored movement profile of the user to train an artificial
intelligence to compare the movement profile of the user. Thus, a comparison of the
movement profile of the user with the predefined movement profile can be improved
by the artificial intelligence. In this way, a likelihood of a false-positive event
can be further reduced.
[0006] In an example, the method may further comprise determining a distance of the user
to the flap. At least one of determining the movement profile of the user or comparing
the movement profile of the user is based on the determined distance. Thus, the movement
profile of the user can be compared with the predefined movement profile in a predefined
area. For example, the predefined area may be an area next to the vehicle where the
movement profile of the user may be more relevant for determining an intended usage
of the flap. In this way, a determination of the user intention can be improved. This
may allow to improve the flap opening of the vehicle.
[0007] In an example, the method may further comprise determining a cancellation parameter
indicative of a trigger event to cancel the determination of the movement profile
of the user. Further, the method may comprise canceling the determination of the movement
profile of the user based on the cancellation parameter. This may allow to determine
the situation where a determination of the movement profile of the user is no longer
necessary. Thus the determination of the movement profile of the user can be canceled.
In this way, an energy consumption for determining the movement profile of the user
can be reduced.
[0008] In an example, the method may further comprise receiving profile data indicative
of a general predefined movement profile and generating custom profile data indicative
of a customized predefined movement profile. The custom profile data is generated
based on the profile data. The custom profile data may be customized for the vehicle
and/or the user. This may allow to adapt the comparison to the vehicle and/or the
user. In this way, a determination of an intention of the user can be further improved.
[0009] In an example, the method may further comprise receiving environmental data indicative
of an environment of the vehicle and comparing the movement profile of the user based
on the environmental data. In this way, a predefined movement profile which is used
for comparison can be adapted and/or opening of the flap of the vehicle can be prevented
(for example, if the flap is blocked by an obstruction). Thus, opening of the flap
of the vehicle can be adapted to an environment.
[0010] In an example, the method may further comprise obtaining feedback data indicative
of a use of the open flap by the user and using the feedback data to train an artificial
intelligence to compare the movement profile of the user. By obtaining feedback data
the training of the artificial intelligence can be improved. In this way, the comparison
of the movement profile of the user with the predefined movement profile can be improved.
[0011] Examples relate to an apparatus, comprising interface circuitry and processing circuitry
configured to perform a method as described above. Examples relate to a vehicle, comprising
an apparatus as described above.
[0012] Examples further relate to a computer program having a program code for performing
the method described above, when the computer program is executed on a computer, a
processor, or a programmable hardware component.
[0013] Some examples of apparatuses, methods and/or computer programs will be described
in the following by way of example only, and with reference to the accompanying figures,
in which
Fig. 1 shows an example of a method for improving flap opening of a vehicle;
Figs. 2a-2f show a proof-of-principle; and
Fig. 3 shows a block diagram of an example of an apparatus, e.g., part of a vehicle.
[0014] As used herein, the term "or" refers to a non-exclusive or, unless otherwise indicated
(e.g., "or else" or "or in the alternative"). Furthermore, as used herein, words used
to describe a relationship between elements should be broadly construed to include
a direct relationship or the presence of intervening elements unless otherwise indicated.
For example, when an element is referred to as being "connected" or "coupled" to another
element, the element may be directly connected or coupled to the other element or
intervening elements may be present. In contrast, when an element is referred to as
being "directly connected" or "directly coupled" to another element, there are no
intervening elements present. Similarly, words such as "between", "adjacent", and
the like should be interpreted in a like fashion.
[0015] The terminology used herein is for the purpose of describing particular embodiments
only and is not intended to be limiting of example embodiments. As used herein, the
singular forms "a", "an" and "the" are intended to include the plural forms as well,
unless the context clearly indicates otherwise. It will be further understood that
the terms "comprises", "comprising", "includes", or "including", when used herein,
specify the presence of stated features, integers, steps, operations, elements or
components, but do not preclude the presence or addition of one or more other features,
integers, steps, operations, elements, components or groups thereof.
[0016] Unless otherwise defined, all terms (including technical and scientific terms) used
herein have the same meaning as commonly understood by one of ordinary skill in the
art to which example embodiments belong. It will be further understood that terms,
e.g., those defined in commonly used dictionaries, should be interpreted as having
a meaning that is consistent with their meaning in the context of the relevant art
and will not be interpreted in an idealized or overly formal sense unless expressly
so defined herein.
[0017] Fig. 1 shows an example of a method 100 for improving flap opening of a vehicle.
The method 100 comprises determining 110 a movement profile of a user relative to
the vehicle. The movement profile is indicative of a trajectory and a movement speed
of the user on the trajectory. The movement profile may be determined based on sensor
data received from a sensor, e.g., a sensor of the vehicle or a sensor of an infrastructure.
For example, the method 100 may further comprise receiving sensor data from a sensor.
The sensor of the vehicle and/or the infrastructure may be an ultra-wideband sensor,
a camera, or a RADAR sensor, for example. For example, the movement profile of the user
may be determined based on a digital key. A digital key for a vehicle is a digital
authentication method that allows a user to access and operate their vehicle without
using a physical key. It is a form of electronic key that may be stored and transmitted
through user equipment. For example, the position or the trajectory of the digital
key can be determined based on ultra-wideband sensor data. The flap may be a door
or a trunk lid, for example.
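By way of illustration only, a movement profile might be derived from timestamped position estimates of the digital key as sketched below. The data layout, the vehicle-fixed coordinate frame and the function names are assumptions made for this sketch and are not prescribed by the method.

```python
import math
from dataclasses import dataclass

@dataclass
class MovementProfile:
    """Trajectory (positions relative to the vehicle) and movement speeds along it."""
    trajectory: list   # [(x, y), ...] in metres, in an assumed vehicle-fixed frame
    speeds: list       # [m/s, ...], one value per trajectory segment

def determine_movement_profile(samples):
    """Derive a movement profile from timestamped key positions.

    `samples` is assumed to be [(t, x, y), ...] with t in seconds and x, y in
    metres, e.g., obtained from ultra-wideband ranging to the user's digital key.
    """
    trajectory = [(x, y) for _, x, y in samples]
    speeds = []
    for (t0, x0, y0), (t1, x1, y1) in zip(samples, samples[1:]):
        dt = t1 - t0
        if dt > 0:
            # distance between consecutive positions divided by the elapsed time
            speeds.append(math.hypot(x1 - x0, y1 - y0) / dt)
    return MovementProfile(trajectory, speeds)

# Hypothetical samples of a user approaching the rear flap (flap assumed at the origin).
profile = determine_movement_profile(
    [(0.0, 5.0, 2.0), (1.0, 4.0, 1.5), (2.0, 3.2, 1.0), (3.0, 2.8, 0.6)]
)
print(profile.speeds)  # decreasing speeds while approaching the flap
```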
[0018] Further, the method 100 comprises comparing 120 the movement profile of the user with
a predefined movement profile. The predefined profile may be obtained by a control
unit performing the method 100, e.g., by processing circuitry of the control unit.
The predefined movement profile may be, e.g., loaded from a database, received from
a communication device such as a network node, or determined based on previous movement
profiles of the user.
[0019] The predefined movement profile may be a movement profile associated with a user,
a vehicle, a vehicle type, or a vehicle fleet, for example. The predefined movement profile
can be associated with the user and/or the vehicle. Alternatively, the predefined
movement profile can be independent from the user or the vehicle. For example, the
predefined movement profile may be a general predefined movement profile.
[0020] The predefined movement profile can be determined based on previous movement profiles.
For example, the predefined movement profile can be determined by an artificial intelligence.
For example, a machine-learning model can determine the predefined movement profile.
[0021] The machine-learning model is a data structure and/or set of rules representing a
statistical model that processing circuitry uses to determine the predefined movement
profile without using explicit instructions, instead relying on models and inference.
The data structure and/or set of rules represents learned knowledge (e.g., based on
training performed by a machine-learning algorithm). For example, in machine-learning,
instead of a rule-based transformation of data, a transformation of data may be used
that is inferred from an analysis of historical and/or training data. In the proposed
technique, the content of movement profiles is analyzed using the machine-learning
model (e.g., a data structure and/or set of rules representing the model).
[0022] The machine-learning model is trained by a machine-learning algorithm. The term "machine-learning
algorithm" denotes a set of instructions that are used to create, train or use a machine-learning
model. For the machine-learning model to analyze the content of movement profiles,
the machine-learning model may be trained using training and/or historical movement
profiles as input and training content information as output (e.g., labels of the
predefined movement profiles). By training the machine-learning model with a large
set of training movement profiles and associated training content information (e.g.,
labels or annotations), the machine-learning model "learns" to recognize the content
of the movement profiles, so the content of movement profiles that are not included
in the training data can be recognized using the machine-learning model. By training
the machine-learning model using training movement profiles and a desired output,
the machine-learning model "learns" a transformation between the movement profiles
and the output, which can be used to provide an output based on non-training movement
profiles provided to the machine-learning model.
[0023] The machine-learning model may be trained using training input data (e.g., training
movement profiles). For example, the machine-learning model may be trained using a
training method called "supervised learning". In supervised learning, the machine-learning
model is trained using a plurality of training samples, wherein each sample may comprise
a plurality of input data values, and a plurality of desired output values, e.g.,
each training sample is associated with a desired output value. By specifying both
training samples and desired output values, the machine-learning model "learns" which
output value to provide based on an input sample that is similar to the samples provided
during the training. For example, a training sample may comprise training movement
profiles as input data and one or more labels as desired output data. The labels indicate
the predefined movement profiles.
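By way of illustration only, such a supervised training might look as sketched below. The feature choice, the toy training data and the use of a random forest classifier from scikit-learn are assumptions made for this sketch, not part of the described method.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def features(trajectory, speeds):
    """Illustrative features of a movement profile: final distance to the flap
    (flap assumed at the origin), mean speed and final speed on the trajectory."""
    final_dist = float(np.hypot(*trajectory[-1]))
    return [final_dist, float(np.mean(speeds)), float(speeds[-1])]

# Hypothetical labeled training samples: label 1 = intent to access/open the flap.
X = np.array([
    features([(4.0, 1.5), (2.0, 0.8), (0.6, 0.2)], [1.2, 0.7, 0.3]),  # slows down, stops near flap
    features([(4.0, 1.5), (2.0, 1.5), (0.5, 1.5)], [1.4, 1.4, 1.5]),  # passes by at constant speed
])
y = np.array([1, 0])

clf = RandomForestClassifier(n_estimators=10, random_state=0).fit(X, y)

# A movement profile not contained in the training data can then be classified.
print(clf.predict([features([(3.5, 1.0), (1.8, 0.5), (0.7, 0.1)], [1.1, 0.6, 0.2])]))
```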
[0024] Apart from supervised learning, semi-supervised learning may be used. In semi-supervised
learning, some of the training samples lack a corresponding desired output value.
Supervised learning may be based on a supervised learning algorithm (e.g., a classification
algorithm or a similarity learning algorithm). Classification algorithms may be used
as the desired outputs of the trained machine-learning model are restricted to a limited
set of values (categorical variables), e.g., the input is classified to one of the
limited set of values (e.g., intent or no intent of the user to access/open the flap). Similarity learning algorithms
are similar to classification algorithms but are based on learning from examples using
a similarity function that measures how similar or related two objects are.
[0025] Apart from supervised or semi-supervised learning, unsupervised learning may be used
to train the machine-learning model. In unsupervised learning, (only) input data are
supplied and an unsupervised learning algorithm is used to find structure in the input
data such as training and/or historical movement profiles (e.g., by grouping or clustering
the input data, finding commonalities in the data). Clustering is the assignment of
input data comprising a plurality of input values into subsets (clusters) so that
input values within the same cluster are similar according to one or more (predefined)
similarity criteria, while being dissimilar to input values that are included in other
clusters.
[0026] Reinforcement learning is a third group of machine-learning algorithms. In other
words, reinforcement learning may be used to train the machine-learning model. In
reinforcement learning, one or more software actors (called "software agents") are
trained to take actions in an environment. Based on the taken actions, a reward is
calculated. Reinforcement learning is based on training the one or more software agents
to choose the actions such that the cumulative reward is increased, leading to software
agents that become better at the task they are given (as evidenced by increasing rewards).
[0027] Furthermore, additional techniques may be applied to some of the machine-learning
algorithms. For example, feature learning may be used. In other words, the machine-learning
model may at least partially be trained using feature learning, and/or the machine-learning
algorithm may comprise a feature learning component. Feature learning algorithms,
which may be called representation learning algorithms, may preserve the information
in their input but also transform it in a way that makes it useful, often as a pre-processing
step before performing classification or predictions. Feature learning may be based
on principal components analysis or cluster analysis, for example.
[0028] In some examples, anomaly detection (e.g., outlier detection) may be used, which
is aimed at providing an identification of input values that raise suspicions by differing
significantly from the majority of input or training data. In other words, the machine-learning
model may at least partially be trained using anomaly detection, and/or the machine-learning
algorithm may comprise an anomaly detection component.
[0029] In some examples, the machine-learning algorithm may use a decision tree as a predictive
model. In other words, the machine-learning model may be based on a decision tree.
In a decision tree, observations about an item (e.g., a set of input movement profiles)
may be represented by the branches of the decision tree, and an output value corresponding
to the item may be represented by the leaves of the decision tree. Decision trees
support discrete values and continuous values as output values. If discrete values
are used, the decision tree may be denoted a classification tree; if continuous values
are used, the decision tree may be denoted a regression tree.
[0030] Association rules are a further technique that may be used in machine-learning algorithms.
In other words, the machine-learning model may be based on one or more association
rules. Association rules are created by identifying relationships between variables
in large amounts of data. The machine-learning algorithm may identify and/or utilize
one or more relational rules that represent the knowledge that is derived from the
data. The rules may, e.g., be used to store, manipulate or apply the knowledge.
[0031] For example, the machine-learning model may be an Artificial Neural Network (ANN).
ANNs are systems that are inspired by biological neural networks, such as can be found
in a retina or a brain. ANNs comprise a plurality of interconnected nodes and a plurality
of connections, so-called edges, between the nodes. There are usually three types
of nodes, input nodes that receive input values (e.g., the movement profiles, especially
a position or a trajectory and movement speed/velocity of the user), hidden nodes
that are (only) connected to other nodes, and output nodes that provide output values
(e.g., predefined movement profiles). Each node may represent an artificial neuron.
Each edge may transmit information from one node to another. The output of a node
may be defined as a (non-linear) function of its inputs (e.g., of the sum of its inputs).
The inputs of a node may be used in the function based on a "weight" of the edge or
of the node that provides the input. The weight of nodes and/or of edges may be adjusted
in the learning process. In other words, the training of an ANN may comprise adjusting
the weights of the nodes and/or edges of the ANN, e.g., to achieve a desired output
for a given input.
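A minimal numerical sketch of such an ANN is given below. The three input features, the layer sizes and the random placeholder weights are assumptions for illustration; a trained network would instead obtain its weights from the learning process described above.

```python
import numpy as np

# Minimal feed-forward ANN sketch: 3 input nodes (e.g., final distance to the flap,
# mean speed and final speed of the user), 4 hidden nodes and 1 output node whose
# value is read as a score for an intent to access/open the flap.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(3, 4)), np.zeros(4)   # weights of the edges input -> hidden
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)   # weights of the edges hidden -> output

def forward(x):
    hidden = np.tanh(x @ W1 + b1)                      # non-linear function of the weighted inputs
    return 1.0 / (1.0 + np.exp(-(hidden @ W2 + b2)))   # output node, squashed to (0, 1)

print(forward(np.array([0.5, 0.7, 0.2])))  # placeholder weights, illustrative output only
```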
[0032] Alternatively, the machine-learning model may be a support vector machine, a random
forest model or a gradient boosting model. Support vector machines (e.g., support
vector networks) are supervised learning models with associated learning algorithms
that may be used to analyze data (e.g., in classification or regression analysis).
Support vector machines may be trained by providing an input with a plurality of training
input values (e.g., movement profiles, especially a position or a trajectory and movement
speed/velocity of the user) that belong to one of two categories (e.g., predefined
movement profiles with a high likelihood and movement profiles with a low likelihood
of an intention to access/open the flap). The support vector machine may be trained
to assign a new input value to one of the two categories. Alternatively, the machine-learning
model may be a Bayesian network, which is a probabilistic directed acyclic graphical
model. A Bayesian network may represent a set of random variables and their conditional
dependencies using a directed acyclic graph. Alternatively, the machine-learning model
may be based on a genetic algorithm, which is a search algorithm and heuristic technique
that mimics the process of natural selection. In some examples, the machine-learning
model may be a combination of the above examples.
[0033] The method 100 further comprises opening 130 the flap of the vehicle if the movement
profile of the user matches the predefined movement profile. The predefined movement
profile may be indicative of an intention of the user to access/open the flap. Thus,
if the movement profile of the user matches the predefined movement profile it can
be assumed that the user intends to access/open the flap. Therefore, the flap can
be opened based on the match of the movement profile of the user and the predefined
movement profile. In this way, an intention of the user can be determined in an improved
way and thus an opening of the flap can be improved.
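The overall flow of the method 100 may be summarized as in the minimal sketch below, where the matching function and the flap actuator are placeholders for vehicle-specific implementations and are assumptions of this sketch only.

```python
def method_100(profile, matches_predefined, open_flap):
    """Illustrative flow: a determined movement profile (110) is compared with
    the predefined movement profile (120) and the flap is opened (130) on a match.
    `matches_predefined` and `open_flap` are assumed, vehicle-specific callables."""
    if matches_predefined(profile):   # 120: e.g., a trained machine-learning model
        open_flap()                   # 130: actuate the flap
        return True
    return False

# Hypothetical usage with stand-in callables.
method_100(
    profile={"trajectory": [(2.0, 0.8), (0.6, 0.2)], "speeds": [0.7, 0.3]},
    matches_predefined=lambda p: p["speeds"][-1] < 0.5,   # toy matching rule
    open_flap=lambda: print("opening flap"),
)
```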
[0034] By comparing the movement profile of the user with the predefined movement profile,
a forecast can be achieved. For example, an action of the vehicle, such as opening
the flap, can be automated when the user is walking around the vehicle.
In this way, a user experience can be improved. For example, a user's life can be
made easier, since no interaction with the flap may be required. The comparison can
be used to trigger the opening of the flap at the "right" time when the user is on
his way and close enough to the flap.
[0035] By combining a trajectory and a movement speed of the user a reliability of a determination
of the intention of the user can be increased. For example, some false-positive opening
events may remain when the user follows a typical trajectory towards the flap without
intending to open the flap (e.g., because the user passes by the flap). In contrast,
by combining the trajectory and the movement speed false-positive opening events can
be reduced. In addition to the trajectory of the user, the movement speed of the user
is considered. For example, the movement speed may be indicative of a velocity of
the user, an acceleration and/or a deceleration. Thus, the movement speed can be used
as further indicator for determining an intent of the user to access/open the flap.
[0036] In this way, an intention of a user to open and/or access a flap of
the vehicle can be detected as early as possible. Further, by considering the movement
speed of the user a number of false-positive opening events in cases where the user
does not intend to open the respective flap, despite approaching it, can be reduced.
[0037] For example, a deceleration of the user during an approach towards the flap, followed
by a stop near the flap may be a typical movement profile of the user indicating an
intent to access/open the flap. Thus, the movement speed can help to distinguish use
cases with and without an intent of the user to access/open the flap, respectively.
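As a simple illustration of this use of the movement speed, the heuristic below flags a deceleration followed by a (near) stop close to the flap; the threshold values are assumptions made for the sketch and not prescribed by the method.

```python
def indicates_stop_at_flap(distances, speeds, stop_speed=0.3, near_dist=1.5):
    """Heuristic sketch: a deceleration during the approach followed by a (near)
    stop close to the flap is treated as an indicator of intent. `distances` are
    the distances of the user to the flap in metres, `speeds` the movement speeds
    in m/s at the same sample times; the thresholds are illustrative values."""
    decelerating = all(v1 <= v0 for v0, v1 in zip(speeds, speeds[1:]))
    stopped_near_flap = distances[-1] < near_dist and speeds[-1] < stop_speed
    return decelerating and stopped_near_flap

print(indicates_stop_at_flap([4.0, 2.5, 1.0], [1.3, 0.8, 0.2]))  # True: intent likely
print(indicates_stop_at_flap([4.0, 2.5, 1.0], [1.3, 1.4, 1.5]))  # False: user passes by
```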
[0038] For example, the machine-learning model used to determine the predefined movement
profile can be used to compare the movement profile of the user with the predefined
movement profile. For example, an artificial intelligence may be used to compare the
movement profile of the user with the predefined movement profile. Alternatively,
an artificial intelligence different from the artificial intelligence determining
the predefined movement profile can be used to compare the movement profile of the user
with the predefined movement profile.
[0039] In an example, the method 100 may further comprise storing the determined movement
profile of the user and using the stored movement profile of the user to train an
artificial intelligence to compare the movement profile of the user. The determined
movement profile of the user may be stored in a memory device of the control unit
performing the method 100. Additionally or alternatively, the determined movement
profile of the user may be stored in the database, e.g., of another communication
device such as a server or an infrastructure. By using the stored movement profile of
the user an artificial intelligence can be trained such that the predefined movement
profile is associated with the user. In this way, the comparison between the movement
profile of the user and the predefined movement profile can be improved. For example,
an input value for the machine-learning model may be the stored movement profile of
the user. By using the stored movement profile a universal approach to determine the
intention of the user can be adapted to a user specific approach. Thus, the usual
movement speed/velocity and/or a typical approach of the user can be considered.
[0040] In an example, the method 100 may further comprise determining a distance of the
user to the flap. At least one of determining the movement profile of the user or
comparing the movement profile of the user is based on the determined distance. For
example, the distance of the user to the vehicle may be a start trigger to start determining
the movement profile of the user and/or comparing the movement profile of the user. This
allows to determine the movement profile of the user only if the user is within a
certain distance, e.g., closer than 3 m from the flap. In this case,
only input data, e.g., sensor data indicative of the movement profile, which are obtained
within a certain distance can be used to determine the movement profile of the user.
In this way, an energy consumption can be reduced. Further, an accuracy of the comparison
can be improved.
[0041] Further, only the movement profile determined within a distance of the user to the
flap may be stored. This may allow to classify the user's past behavior as soon as
the user gets closer than a threshold, e.g., 2 m, to the flap.
[0042] The comparison of the movement profile of the user with the predefined movement profile
can be performed multiple times during the determination of the movement profile of
the user. For example, the determination of the movement profile of the user can be
adjusted with any new incoming data point, e.g., part of the sensor data received
from the sensor. Thus, the comparison can also be adjusted with any new incoming data
point. The determination of the movement profile of the user and/or the comparison
can be performed until a classification output indicates an intent of the user to
access/open the flap or the user moves away from the flap.
[0043] For example, a comparison can be based on an already determined part of the movement
profile of the user. The flap can be opened as soon as a match of the (part of the)
determined movement profile with the (part of the) predefined movement profile is
detected. This may allow to open the flap as soon as an intent of the user to access/open
the flap is detected.
[0044] In an example, the method 100 may further comprise determining a cancellation parameter
indicative of a trigger event to cancel the determination of the movement profile
of the user. Further, the method may comprise canceling the determination of the movement
profile of the user based on the cancellation parameter. The cancellation parameter
may be a stop trigger to stop tracking the behavior of the user. For example, the cancellation
parameter may be utilized to cancel the determination of an intent of the user. The
cancellation parameter may be a distance of the user to the flap or a movement speed
of the user (especially if the user is close to the flap), for example. For example,
once the user is far enough from the flap, the user may not be tracked anymore, until
the next start trigger, e.g., the user is again within a certain distance to the flap.
Thus, if a distance of the user to the flap is greater than a threshold, the determination
of the movement profile and the comparison can be canceled. In this way, an energy
consumption can be reduced.
[0045] In an example, the method 100 may further comprise receiving profile data indicative
of a general predefined movement profile of the user and generating custom profile
data indicative of a customized predefined movement profile of the user. The custom
profile data is generated based on the profile data. The general predefined movement
profile of the user may be determined based on training data of multiple users and/or
vehicles. For example, the general predefined movement profile of the user can be
used for different users and/or vehicles. This may allow to use a standard predefined
movement profile of the user without resource intensive training for every user/vehicle.
[0046] The custom profile data is customized for the vehicle and/or the user. By customizing
the general predefined movement profile of the user a single classification model
can be adapted to multiple vehicle models with different dimensions and/or different
users. In this way, a determination of the predefined movement profile of the user
can be facilitated.
[0047] For example, to gain a time saving in model development and data generation, e.g.,
determination or generation of the predefined movement profile of the user, a single
model that is applicable to multiple vehicle models with different dimensions can
be developed. A trajectory of a general predefined movement profile of the user can
be pre-processed specifically to the type of the vehicle. For example, a shift of
the trajectory according to a vehicle bounding box can be done. Additionally or alternatively,
the flap can be set as the origin of the coordinate system. A bounding box of a vehicle
may be a rectangular box that is drawn around the outline of the vehicle in an image
or video frame. It is used in computer vision and machine learning applications for
object detection and tracking, and it is a way to represent the location and size
of an object in an image. The bounding box is defined by its top-left and bottom-right
coordinates, which define the corners of the rectangle. The top-left coordinate is
the coordinate of the top-left corner of the rectangle, and the bottom-right coordinate
is the coordinate of the bottom-right corner of the rectangle. The use of the bounding
box may allow to adjust a single model to a specific type of vehicle. For
example, the customized movement profile of the user may be determined based on the
general movement profile of the user and the bounding box of the vehicle.
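By way of illustration, such a pre-processing step might shift and scale a trajectory as sketched below. The bounding-box layout, the flap coordinates and the example dimensions are assumptions made for this sketch.

```python
def to_flap_frame(trajectory, flap_position, bounding_box):
    """Pre-process a trajectory for a specific vehicle type: shift the points so that
    the flap becomes the origin of the coordinate system and scale them by the
    extents of the vehicle bounding box, so a single model can serve vehicles of
    different dimensions. `bounding_box` is assumed as ((x_min, y_min), (x_max, y_max))
    and `flap_position` as (x, y) in the same frame."""
    (x_min, y_min), (x_max, y_max) = bounding_box
    span_x, span_y = x_max - x_min, y_max - y_min
    fx, fy = flap_position
    return [((x - fx) / span_x, (y - fy) / span_y) for x, y in trajectory]

# Hypothetical compact car, roughly 1.8 m wide and 4.2 m long, flap at the rear center.
shifted = to_flap_frame(
    trajectory=[(1.0, -3.5), (0.5, -2.8), (0.2, -2.3)],
    flap_position=(0.0, -2.1),
    bounding_box=((-0.9, -2.1), (0.9, 2.1)),
)
print(shifted)
```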
[0048] In an example, the method may further comprise receiving environmental data indicative
of an environment of the vehicle and comparing the movement profile of the user based
on the environmental data. In this way, a predefined movement profile which is used
for comparison can be adapted and/or opening of the flap of the vehicle can be prevented
(for example, if the flap is blocked by an obstruction). Thus, opening of the flap
of the vehicle can be adapted to an environment.
[0049] In an example, the method may further comprise obtaining feedback data indicative
of a use of the open flap by the user and using the feedback data to train an artificial
intelligence to compare the movement profile of the user. For example, the feedback
data can be used as input data for the machine-learning model. In this way, a reliability
of the comparison can be improved.
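A minimal sketch of such feedback-based retraining is given below; the feature vector and the model choice are assumptions carried over from the earlier training sketch.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def retrain_with_feedback(X_train, y_train, profile_features, flap_was_used):
    """Sketch of using feedback data: the observed use (or non-use) of the opened
    flap is appended as a label for the corresponding movement profile and the
    model is refitted on the extended training set."""
    X = np.vstack([X_train, [profile_features]])
    y = np.append(y_train, 1 if flap_was_used else 0)
    return RandomForestClassifier(n_estimators=10, random_state=0).fit(X, y)
```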
[0050] The method may be performed by a processing circuitry, e.g., part of the vehicle.
The processing circuitry may be part of a control unit, e.g., a central control unit
(such as an engine control unit) of the vehicle. The processing circuitry may be communicatively
coupled via an interface circuitry to a sensor, e.g., to receive the sensor data
or to determine the position of a digital key.
[0051] More details and aspects are mentioned in connection with the embodiments described
below. The example shown in Fig. 1 may comprise one or more optional additional features
corresponding to one or more aspects mentioned in connection with the proposed concept
or one or more examples described below (e.g., Fig. 2 - 3).
[0052] Figs. 2a-2f show a proof-of-principle. Figs. 2a-2c show a use case where the trajectory
of the user and the movement speed of the user indicate an intent to access/open a
flap, e.g., a trunk lid. Fig. 2a shows the vehicle 200 and a trajectory 210a along
the rear side of the vehicle 200. Figs. 2b and 2c show the movement speed of the user
at distinct points on the trajectory 210a. As can be seen in Figs. 2b and 2c, the movement
speed of the user decreases from point to point along the trajectory towards the
trunk. Thus, the approach to the trunk may indicate an intent to access/open the trunk lid. Therefore,
for the movement profile shown in Figs. 2a-2c an opening of the trunk should be performed.
[0053] In contrast, Figs. 2d-2f show a use case with a trajectory 210b which may indicate
an intent to access/open the flap, e.g., a trunk lid. However, the movement speed
of the user may give no indication of an intent to access/open the trunk lid. As
can be seen in Figs. 2e and 2f, the movement speed of the user increases after a
certain point N. Thus, the user may have decelerated to walk around the vehicle
200, but not to access/open the trunk lid, and the movement speed may increase again
after passing point N, since the user may have no intent to access/open the trunk
lid. Thus, by combining the movement speed with the trajectory, false-positive events
can be reduced.
[0054] More details and aspects are mentioned in connection with the embodiments described
below. The example shown in Fig. 2 may comprise one or more optional additional features
corresponding to one or more aspects mentioned in connection with the proposed concept
or one or more examples described above (e.g., Fig. 1) and/or described below (e.g.,
Fig. 3).
[0055] Fig. 3 shows a block diagram of an example of an apparatus 30, e.g., for a vehicle
40. The apparatus 30 comprises interface circuitry 32 and processing circuitry 34
configured to perform a method as described above, e.g., the method 100 as described
with reference to Fig. 1. For example, the apparatus 30 may be part of
the vehicle 40, e.g., part of a control unit of the vehicle 40.
[0056] For example, the vehicle 40 may be a land vehicle, such as a road vehicle, a car, an
automobile, an offroad vehicle, a motor vehicle, a bus, a robo-taxi, a van, a truck
or a lorry. Alternatively, the vehicle 40 may be any other type of vehicle, such as
a train, a subway train, a boat or a ship. For example, the proposed concept may be
applied to public transportation (trains, buses) and future means of mobility (e.g.,
robo-taxis).
[0057] As shown in Fig. 3 the respective interface circuitry 32 is coupled to the respective
processing circuitry 34 at the apparatus 30. In examples the processing circuitry
34 may be implemented using one or more processing units, one or more processing devices,
any means for processing, such as a processor, a computer or a programmable hardware
component being operable with accordingly adapted software. Similarly, the described
functions of the processing circuitry 34 may as well be implemented in software, which
is then executed on one or more programmable hardware components. Such hardware components
may comprise a general-purpose processor, a Digital Signal Processor (DSP), a micro-controller,
etc. The processing circuitry 34 is capable of controlling the interface circuitry
32, so that any data transfer that occurs over the interface circuitry 32 and/or any
interaction in which the interface circuitry 32 may be involved may be controlled
by the processing circuitry 34.
[0058] In an embodiment the apparatus 30 may comprise a memory and at least one processing
circuitry 34 operably coupled to the memory and configured to perform the method described
above.
[0059] In examples the interface circuitry 32 may correspond to any means for obtaining,
receiving, transmitting or providing analog or digital signals or information, e.g.,
any connector, contact, pin, register, input port, output port, conductor, lane, etc.
which allows providing or obtaining a signal or information. The interface circuitry
32 may be wireless or wireline and it may be configured to communicate, e.g., transmit
or receive signals or information, with further internal or external components.
[0060] The apparatus 30 may be a computer, processor, control unit, (field) programmable
logic array ((F)PLA), (field) programmable gate array ((F)PGA), graphics processor
unit (GPU), application-specific integrated circuit (ASIC), integrated circuit (IC)
or system-on-a-chip (SoC).
[0061] More details and aspects are mentioned in connection with the embodiments described.
The example shown in Fig. 3 may comprise one or more optional additional features
corresponding to one or more aspects mentioned in connection with the proposed concept
or one or more examples described above (e.g., Fig. 1 - 2).
[0062] The aspects and features described in relation to a particular one of the previous
examples may also be combined with one or more of the further examples to replace
an identical or similar feature of that further example or to additionally introduce
the features into the further example.
[0063] Examples may further be or relate to a (computer) program including a program code
to execute one or more of the above methods when the program is executed on a computer,
processor or other programmable hardware component. Thus, steps, operations or processes
of different ones of the methods described above may also be executed by programmed
computers, processors or other programmable hardware components. Examples may also
cover program storage devices, such as digital data storage media, which are machine-,
processor- or computer-readable and encode and/or contain machine-executable, processor-executable
or computer-executable programs and instructions. Program storage devices may include
or be digital storage devices, magnetic storage media such as magnetic disks and magnetic
tapes, hard disk drives, or optically readable digital data storage media, for example.
Other examples may also include computers, processors, control units, (field) programmable
logic arrays ((F)PLAs), (field) programmable gate arrays ((F)PGAs), graphics processor
units (GPU), application-specific integrated circuits (ASICs), integrated circuits
(ICs) or system-on-a-chip (SoCs) systems programmed to execute the steps of the methods
described above.
[0064] It is further understood that the disclosure of several steps, processes, operations
or functions disclosed in the description or claims shall not be construed to imply
that these operations are necessarily dependent on the order described, unless explicitly
stated in the individual case or necessary for technical reasons. Therefore, the previous
description does not limit the execution of several steps or functions to a certain
order. Furthermore, in further examples, a single step, function, process or operation
may include and/or be broken up into several sub-steps, -functions, -processes or
-operations.
[0065] If some aspects have been described in relation to a device or system, these aspects
should also be understood as a description of the corresponding method. For example,
a block, device or functional aspect of the device or system may correspond to a feature,
such as a method step, of the corresponding method. Accordingly, aspects described
in relation to a method shall also be understood as a description of a corresponding
block, a corresponding element, a property or a functional feature of a corresponding
device or a corresponding system.
[0067] The following claims are hereby incorporated in the detailed description, wherein
each claim may stand on its own as a separate example. It should also be noted that
although in the claims a dependent claim refers to a particular combination with one
or more other claims, other examples may also include a combination of the dependent
claim with the subject matter of any other dependent or independent claim. Such combinations
are hereby explicitly proposed, unless it is stated in the individual case that a
particular combination is not intended. Furthermore, features of a claim should also
be included for any other independent claim, even if that claim is not directly defined
as dependent on that other independent claim.
References
[0069]
30 apparatus
32 interface circuitry
34 processing circuitry
40 vehicle
100 method for improving flap opening
110 determining a movement profile
120 comparing the movement profile
130 opening the flap of the vehicle
200 vehicle
210a, 210b trajectory