[0001] The present invention is directed to a method for controlling a transparent display
configured to display content in a windshield of a vehicle, and a control unit configured
to carry out the method at least partly. An, optionally automated, vehicle comprising
the control unit may be provided. Additionally or alternatively, a computer program
comprising instructions which, when the program is executed by a computer, e.g., the
control unit, cause the computer to carry out the method at least partly, may be provided.
Additionally or alternatively, a computer-readable medium comprising instructions
which, when executed by a computer, cause the computer to carry out the method at
least partly, may be provided.
[0002] Transparent displays configured to display content in a windshield of a vehicle,
also called head-up displays, are widely used in vehicles, such as cars, to display
content/information to a user/driver of the vehicle.
[0003] For example,
WO 2019/038201 A2 relates to a head-up display comprising a display element, a projection system, a
diffusing screen and a mirror element. It is proposed that the diffusing screen has
focusing elements on its side facing the projection system and a light-blocking mask
on its side facing away from the projection system.
[0004] However, one disadvantage of the state of the art is that content displayed in the
windshield using the head-up display may irritate the driver of the vehicle since
the content may overlap with objects located in a field of view of the driver.
[0005] In the light of this prior art, the object of the present invention is to provide
a device and/or a method being suitable for overcoming at least the above-mentioned
disadvantage of the prior art, respectively.
[0006] The object is achieved by the features of the independent claims. The dependent claims
have preferred further embodiments of the invention as their subject matter.
[0007] More specifically, the object is achieved by a method for controlling a transparent
display configured to display content in a windshield of a vehicle.
[0008] The transparent display may be a head-up display. A head-up display, also known as
a HUD, is any transparent display that presents data/content without requiring users
to look away from their usual viewpoints while driving the vehicle. This allows the
user/driver of the vehicle to view information with the head positioned "up" and looking
forward, instead of angled down looking at lower instruments. A HUD also has the advantage
that the user's eyes do not need to refocus to view the outside after looking at the
optically nearer instruments. The transparent display may be configured to generate
the content, e.g., including information being relevant for driving the vehicle, by
an image source (possibly with an intermediate image plane) in a dashboard of the
vehicle and project the content via an optical system onto a combiner (i.e., a transparent
mirror), here the windshield of the vehicle. Additionally or alternatively, the windshield
itself may comprise or may be the transparent display, e.g., comprising a transparent
OLED (i.e., organic light emitting diode) display. The transparent display may be
configured to display augmented reality content.
[0009] The method comprises capturing sensor data relating to an environment of the vehicle
using a sensor system installed at the vehicle, determining at least one color included
in the sensor data, and determining at least one color of the displayed content based
on the determined at least one color included in the sensor data.
[0010] The sensor system may include one or more cameras, e.g., a front camera and/or a
surround view camera system. The method may comprise fusing the sensor data in case
more than one camera is provided. The at least one color determined based on the sensor
data may be a color distribution. The at least one color may be determined using the
sensor data of a whole field of view of the sensor system or may be determined using
solely a part of the field of view of the sensor system corresponding to solely a
part of the sensor data.
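Merely as an illustrative, non-limiting sketch, determining the at least one color from camera sensor data may look as follows, assuming the sensor data is available as an RGB image array and assuming a simple mean over a region of interest; the function and parameter names are chosen for illustration only and are not prescribed by the method.

```python
import numpy as np

def dominant_color(image: np.ndarray, roi=None) -> tuple:
    """Return an average RGB color of the whole image or of a region of interest.

    image: H x W x 3 array of uint8 RGB values (assumed camera frame).
    roi:   optional (x, y, width, height) restricting the analysis to a part
           of the field of view of the sensor system, as described above.
    """
    if roi is not None:
        x, y, w, h = roi
        image = image[y:y + h, x:x + w]
    # Simple mean over all pixels; a histogram-based color distribution could
    # be used instead when more than one color is of interest.
    return tuple(int(c) for c in image.reshape(-1, 3).mean(axis=0))

# Example: a synthetic 100 x 200 frame with a reddish patch in its center.
frame = np.zeros((100, 200, 3), dtype=np.uint8)
frame[40:60, 80:120] = (200, 30, 30)
print(dominant_color(frame, roi=(80, 40, 40, 20)))  # -> (200, 30, 30)
```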
[0011] In the following, preferred implementations of the method are described in detail.
[0012] The method may comprise detecting an object of a predefined class in the environment
of the vehicle based on the sensor data and determining a color of the detected object
as the at least one color included in the sensor data.
[0013] The predefined class may comprise one or multiple subclasses such as a pedestrian
and/or a bicyclist, i.e., a so-called vulnerable road user. However, the method is not
limited thereto, and the class may comprise any other subclass such as a vehicle and/or
a traffic sign.
[0014] The method may comprise determining a current and/or a future position of the detected
object based on the sensor data, and optionally determining the at least one color
of the displayed content based on the determined current and/or future position of
the detected object.
[0015] That is, if an object of the predefined class is detected, the method may comprise
determining a position thereof. Determining the position may be done using the above-described
camera data and may optionally include using data of other sensors, e.g., of a radar
sensor, a lidar sensor and/or an ultrasonic sensor. The determined position may be
used additionally to determine the at least one color of the displayed content, e.g.,
the at least one color of the displayed content may vary based on the determined position
of the detected object.
[0016] Determining the future position of the detected object may comprise determining an
expected trajectory of the detected object.
[0017] The trajectory may include information on an expected movement of the detected object,
i.e., the trajectory may include information on one or more future positions of the
object. The trajectory may be represented by a spline. The trajectory may comprise
time information, i.e., the trajectory may comprise information indicating when the
object will be at which position.
[0018] Determining the trajectory of the detected object may comprise determining a current
velocity, a current acceleration and/or a current direction of movement of the detected
object.
[0019] That is, the above-described trajectory may be estimated based on a current movement profile
of the detected object. However, the method is not limited thereto and other information
about the environment around the detected object may be used, such as obstacles and/or
a road guidance, optionally including traffic signs. Estimating/Determining the trajectory
may be done using a machine learning based algorithm and/or a deterministic algorithm.
The machine learning based algorithm may comprise a neural network.
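Merely as an illustrative sketch, the expected trajectory may, for example, be extrapolated from the current movement profile using a constant-acceleration model as shown below; the time horizon, the step size and the 2D ground-plane representation are assumptions made for the sake of the example.

```python
import numpy as np

def predict_positions(position, velocity, acceleration, horizon_s=2.0, step_s=0.5):
    """Extrapolate future 2D positions of a detected object.

    position, velocity, acceleration: (x, y) tuples in metres, m/s and m/s^2.
    Returns a list of (t, x, y) samples along the expected trajectory.
    """
    p = np.asarray(position, dtype=float)
    v = np.asarray(velocity, dtype=float)
    a = np.asarray(acceleration, dtype=float)
    trajectory = []
    t = step_s
    while t <= horizon_s + 1e-9:
        # Constant-acceleration kinematics: p(t) = p0 + v0*t + 0.5*a*t^2.
        x, y = p + v * t + 0.5 * a * t ** 2
        trajectory.append((t, float(x), float(y)))
        t += step_s
    return trajectory

# Example: a cyclist 10 m ahead and 5 m to the left, moving to the right.
print(predict_positions(position=(-5.0, 10.0), velocity=(2.0, 0.0),
                        acceleration=(0.0, 0.0)))
```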
[0020] The method may comprise determining a size and/or a shape of the detected object.
Determining the size and/or the shape of the detected object may comprise determining
a bounding box for the detected object, wherein the detected object may be fully located
inside the bounding box. The shape and/or the size of the object may be a shape and/or
a size of the determined bounding box.
[0021] The method may comprise determining, based on the determined current and/or future
position of the detected object, if the detected object and the content displayed
in the windshield are overlapping in at least one display area being located in a
predefined field of view of a user of the vehicle when the user is watching through
the displayed content.
[0022] In other words, the transparent display is configured to display the content in a
certain area in the windshield of the vehicle. A size and/or a position of this area
may be variable, e.g., manually using keys installed in the vehicle and/or automatically
based on a position of the user's eyes. When the user is watching through this area
in the windshield, the user may not only see the displayed content but may also see
the detected object. This might be the case if the detected object is located in front
of the vehicle. However, this depends on the size of the area where the content is
displayed, the position of the object and the field of view of the user. The predefined
field of view of the user may be a fixed field of view, i.e., a default field of view,
and/or may be determined based on a position of the user's eyes. In case the user
can see both the displayed content and the detected object at the same time, the
displayed content and the detected object are overlapping (optionally excluding the
peripheral field of view of the user). Since the object may be located farther away
from the user's eyes than the displayed content and/or since the object may have a
relatively small size, the detected object and the displayed content do not necessarily
have to overlap in the whole area where the content is displayed, but may (solely)
overlap in a certain sub area thereof, i.e., the so-called at least one display area
(or sub area of an area in the windshield where the content is displayed). This display
area may be determined by the method.
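As a non-limiting sketch, the overlap check may, for example, be reduced to an intersection of axis-aligned rectangles, assuming both the area in which the content is displayed and the detected object (e.g., its bounding box) have already been projected into a common display coordinate system; the projection itself is not part of this sketch and the coordinate convention is an assumption.

```python
def overlap_area(display_rect, object_rect):
    """Return the overlapping sub-area of two axis-aligned rectangles, or None.

    Each rectangle is given as (x_min, y_min, x_max, y_max) in a common
    display coordinate system (e.g., normalized windshield coordinates).
    """
    dx0, dy0, dx1, dy1 = display_rect
    ox0, oy0, ox1, oy1 = object_rect
    x0, y0 = max(dx0, ox0), max(dy0, oy0)
    x1, y1 = min(dx1, ox1), min(dy1, oy1)
    if x0 >= x1 or y0 >= y1:
        return None  # no overlap: default display properties can be kept
    return (x0, y0, x1, y1)  # the "at least one display area"

# Example: the object's bounding box partly reaches into the content area.
print(overlap_area((0.2, 0.6, 0.8, 0.9), (0.1, 0.55, 0.35, 0.75)))
```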
[0023] If it is determined that the detected object and the content displayed in the windshield
are overlapping in the at least one display area, the method may comprise determining
the at least one color of the displayed content in the determined at least one display
area based on the determined at least one color included in the sensor data.
[0024] That is, the method may include setting a color in the above-described sub area of
the area in the windshield where the content is displayed depending on the color of
the detected object.
[0025] However, the method is not limited thereto, and it is also possible that the color
of the whole area in the windshield where the content is displayed is changed based
on the color of the detected object in case it is determined that the detected object
and the content displayed in the windshield are overlapping in the at least one display
area.
[0026] In other words, the method may comprise determining the at least one color of the
displayed content in the whole area where the content is displayed in the windshield
based on the determined at least one color included in the sensor data, if it is determined
that the detected object and the content displayed in the windshield are overlapping
in the at least one display area.
[0027] The at least one color of the displayed content may be determined to be the inverted
color of the at least one color included in the sensor data.
[0028] In general, determining the color of the displayed content may comprise changing
or setting the at least one color of the displayed content depending on the at least
one color determined based on the sensor data. As described above, the at least one
color determined based on the sensor data may be a color of a certain detected object
in the environment of the vehicle, but is not limited thereto and may also be any
color included in the sensor data. The term "color" as used herein does not necessarily
solely include a color of a certain pixel/spot but may also include a color distribution.
The color distribution may be determined based on the sensor data. Additionally or
alternatively, a color distribution for the displayed content may be determined based
on the color determined using the sensor data. Moreover, the term "color" as used
herein may also include black and/or white. The term "color" may also include red,
blue and/or green, and optionally any kind of mix of these three colors. Here, inverting
refers to a process for "reversing colors". Specifically, this means that the "opposite
color" of the respective color space may be determined. For example, black becomes
white and vice versa. Additionally or alternatively, the RGB color space may be used.
In the RGB color space, the inverted color may be determined by subtracting a color
value (e.g., at a pixel) from the maximum value of the RGB color space. More specifically
and by way of example, a pixel may have a color value such as red = 55, green = 128 and
blue = 233 in a color resolution of 0-255 (corresponding to an 8-bit color depth). In this
example, the inverted values would therefore be red = 255 - 55 = 200, green = 255 - 128
= 127 and blue = 255 - 233 = 22.
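The numerical example above may be expressed as the following minimal sketch, assuming an 8-bit RGB representation; other color spaces or a per-pixel inversion of a color distribution may be handled analogously.

```python
def invert_rgb(color, max_value=255):
    """Invert an RGB color by subtracting each channel from the maximum value."""
    return tuple(max_value - channel for channel in color)

# The example from above: (55, 128, 233) -> (200, 127, 22).
print(invert_rgb((55, 128, 233)))
```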
[0029] The method may comprise determining a control signal based on the determined at least
one color of the displayed content and outputting the determined control signal to the
transparent display such that the transparent display displays the content with the
determined at least one color of the displayed content.
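Merely as an illustrative sketch, the control signal may, for example, be represented by a simple data structure carrying the affected display region and the determined color, assuming the transparent display accepts such a structure via some interface; the structure, the field names and the interface are assumptions made for illustration only.

```python
from dataclasses import dataclass

@dataclass
class DisplayControlSignal:
    """Illustrative control signal for the transparent display (names assumed)."""
    region: tuple  # (x_min, y_min, x_max, y_max) in display coordinates
    color: tuple   # 8-bit RGB color determined for the displayed content

def send_to_display(signal: DisplayControlSignal, display) -> None:
    """Output the determined control signal to the transparent display.

    'display' stands in for the actual display interface, which is not
    specified here; any callable accepting the signal will do for the sketch.
    """
    display(signal)

# Example: recolor the overlapping display area with the determined color.
send_to_display(DisplayControlSignal(region=(0.2, 0.6, 0.35, 0.75),
                                     color=(55, 225, 225)),
                display=print)
```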
[0030] The above-described method may be summarized in other words and with respect to a
more concrete implementation thereof as follows.
[0031] A solution is provided for improving a head-up display's capability to display information/content
such that a user thereof may easily distinguish collision relevant objects, such as
pedestrians, bicycles, motorbikes, obstacles, or any other kind of objects, from the
content/information displayed in the head-up display. More specifically,
while using a head-up display, it may happen that some collision relevant objects/obstacles
may not be clearly visible or may not be distinguishable by the driver from the displayed
content due to properties of the head-up display. Hence, it may happen that the driver
does not detect a critical object early enough to take evasive action.
This might be overcome by dynamically adapting an interface of the head-up display.
[0032] More specifically, the method may take input from different sensors such as a front
facing camera and/or a surround view camera to extract properties of collision relevant
objects or obstacles. This information may then be provided to a central display control
unit which may be configured to adapt properties of the head-up display to distinguish
the detected object(s) from the displayed content, hence increasing the driver's attention
and/or reducing the driver's mental workload.
[0033] Initially, the information may be displayed using default properties of the head-up
display during normal driving conditions.
[0034] As soon as some collision relevant objects (including obstacles) are approaching
in a field of view of the driver, e.g., a cyclist crossing a road in front of the
vehicle in an urban scenario, the object or obstacle may be detected by on-board sensors
such as the front facing camera of the vehicle and/or by the surround view camera
system for objects/obstacles which are in near range.
[0035] The on-board sensors may extract the properties of the cyclist such as color, size,
shape and/or background color thereof.
[0036] These properties may be provided to the central display controller, which may work
as an independent electronic control unit (ECU) taking input from the one or multiple
sensors.
[0037] The central display controller, as an ECU, may process the data from different sensors,
e.g., from different cameras, and may plausibilize the properties of a detected object,
optionally classified as collision critical, to ensure that qualified object properties
with higher confidence may be generated.
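As a non-limiting sketch, such a plausibilization may, for example, be implemented as a confidence-weighted fusion of per-sensor color estimates; the confidence values, the threshold and the weighting scheme are assumptions made for this example and are not prescribed by the method.

```python
def fuse_color_estimates(estimates, min_confidence=0.5):
    """Fuse per-sensor color estimates into one qualified color, or None.

    estimates: list of ((r, g, b), confidence) tuples, one per sensor/camera.
    Only estimates above the confidence threshold contribute; the result is a
    confidence-weighted average, which favours the better observations.
    """
    qualified = [(c, w) for c, w in estimates if w >= min_confidence]
    if not qualified:
        return None  # no qualified object properties could be generated
    total = sum(w for _, w in qualified)
    fused = [sum(c[i] * w for c, w in qualified) / total for i in range(3)]
    return tuple(int(round(v)) for v in fused)

# Example: front camera and surround view camera roughly agree on a red tone.
print(fuse_color_estimates([((200, 30, 30), 0.9), ((190, 40, 35), 0.7),
                            ((90, 90, 90), 0.2)]))
```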
[0038] The central display controller may compare the properties of the detected object
to the current properties of the head-up display, e.g., the color
of a cyclist to a color of the head-up display information.
[0039] In case the properties of objects/obstacles are similar to the properties of the
head-up display, the central display controller may modify the properties of the head-up
display such that it is easily possible for the user/driver to distinguish between
the information projected by the head-up display and the detected object.
[0040] This dynamic adaptation of the head-up display properties ensures that the driver is
more attentive to (optionally collision critical) objects. This enables the driver
to easily distinguish the (optionally collision critical) objects from the displayed
content.
[0041] Furthermore, a control unit is provided. The control unit is configured to carry
out the above-described method at least partly.
[0042] The control unit can be an electronic control unit (ECU) for a vehicle. The electronic
control unit can be an intelligent processor-controlled unit that can communicate
with other modules, optionally via a central gateway (CGW). The control unit can form
part of the vehicle's onboard network via fieldbuses such as the CAN bus, LIN bus,
MOST bus and/or FlexRay or via automotive Ethernet, optionally together with a telematics
control unit. The electronic control unit may be configured to control functions relevant
to a driving behavior of the vehicle, such as an engine control system, a power transmission,
a braking system and/or a tire pressure control system. In addition, some or all driver
assistance systems such as parking assistant, adaptive cruise control, lane departure
warning, lane change assistant, traffic sign recognition, light signal recognition,
approach assistant, night vision assistant, intersection assistant, and/or many others
may be controlled by the control unit.
[0043] Moreover, the description given above with respect to the method applies mutatis
mutandis to the control unit and vice versa.
[0044] Furthermore, a vehicle is provided. The vehicle comprises the above-described control
unit and a transparent display connected to the control unit.
[0045] The vehicle may be an automobile, e.g., a car. The vehicle may be automated. The
automated vehicle can be designed to take over lateral and/or longitudinal guidance
at least partially and/or temporarily during automated driving of the automated vehicle.
Therefore, inter alia the sensor data of the above-described sensor system may be
used. The control unit may be configured to control the automated driving at least
partly.
[0046] The automated driving may be such that the driving of the vehicle is (largely) autonomous.
The vehicle may be a vehicle of autonomy level 1, i.e., have certain driver assistance
systems that support the driver in vehicle operation, for example adaptive cruise
control (ACC).
[0047] The vehicle can be a vehicle of autonomy level 2, i.e., be partially automated in
such a way that functions such as automatic parking, lane keeping or lateral guidance,
general longitudinal guidance, acceleration and/or braking are performed by driver
assistance systems.
[0048] The vehicle may be an autonomy level 3 vehicle, i.e., automated in such a conditional
manner that the driver does not need to continuously monitor the system. The
vehicle autonomously performs functions such as triggering the turn signal, changing
lanes, and/or lane keeping. The driver can attend to other matters, but is prompted
by the system to take over control within a warning time if needed.
[0049] The vehicle may be an autonomy level 4 vehicle, i.e., so highly automated that the
driving of the vehicle is permanently taken over by the system. If the driving
tasks are no longer handled by the system, the driver may be requested to take over
control.
[0050] The vehicle may be an autonomy level 5 vehicle, i.e., so fully automated that the
driver is not required to complete the driving task. No human intervention is required
other than setting the destination and starting the system. The vehicle can operate
without a steering wheel or pedals.
[0051] Moreover, the description given above with respect to the method and the control
unit applies mutatis mutandis to the vehicle and vice versa.
[0052] Furthermore, a computer program is provided, comprising instructions which, when the program is
executed by a computer, optionally the control unit, cause the computer to carry out
the above-described method at least partly.
[0053] The program may comprise any program code, in particular a code suitable for control
systems of vehicles. The description given above with respect to the method, the control
unit and the vehicle applies mutatis mutandis to the computer program and vice versa.
[0054] Furthermore, a computer-readable medium is provided, comprising instructions which, when executed
by a computer, cause the computer to carry out the above-described method at least
partly.
[0055] The computer-readable medium may be any digital data storage device, such as a USB
flash drive, a hard disk, a CD-ROM, an SD card or an SSD card. The above-described
computer program may be stored on the computer-readable medium. However, the computer
program does not necessarily have to be stored on such a computer-readable medium,
but can also be obtained via the Internet.
[0056] Moreover, the description given above with respect to the method, the control unit,
the vehicle, and the computer program applies mutatis mutandis to the computer-readable
(storage) medium and vice versa.
[0057] An embodiment is described with reference to figures 1 to 3 below.
Fig. 1 shows schematically and partly a vehicle comprising a control unit configured to carry
out a method for controlling a transparent display configured to display content in
a windshield of the vehicle,
Fig. 2 shows schematically the control unit connected to the transparent display of the vehicle
of figure 1, and
Fig. 3 shows schematically a flowchart of the method.
[0058] In the following an embodiment is described with reference to figures 1 to 3, wherein
the same reference signs are used for the same objects throughout the description
of the figures and wherein the embodiment is just one specific example for implementing
the invention and does not limit the scope of the invention as defined by the claims.
[0059] In figure 1 a vehicle 1 is shown schematically and partly. As can be gathered from
figure 2, the vehicle 1 comprises a control unit 2 being connected to a front camera
3 and a surround view camera system 4 on an input side thereof, and to a transparent
display 5 on an output side thereof.
[0060] The front camera 3 and the surround view camera system 4 form part of a sensor system
of the vehicle 1. The transparent display 5 is configured to display content/information
6 in a windshield 7 of the vehicle 1, as can be gathered from figure 1. The sensor
system is configured to capture - as sensor data - images from the environment of
the vehicle 1.
[0061] Figure 1 shows a predefined field of view 11 of a user 9 of the vehicle 1, who is
looking in a main driving direction of the vehicle 1.
[0062] The control unit 2 is configured to carry out a method for controlling the transparent
display 5 based on the sensor data captured by and received from the sensor system
shown in figure 2, wherein a flowchart of the method is shown in figure 3.
[0063] As can be gathered from figure 3, the method substantially comprises six steps
S1 - S6.
[0064] In a first step S1 of the method, the sensor system installed at the vehicle
1 captures the sensor data relating to the environment of the vehicle 1.
[0065] In a second step S2 of the method, the control unit 2 detects an object of a predefined
class in the environment of the vehicle 1 based on the sensor data captured in the
first step S1 of the method. In the present embodiment the detected object is a bicycle
8. However, this is just an example and every other object, such as a pedestrian and/or
an obstacle, may be detected.
[0066] In a third step S3 of the method, the control unit 2 determines a current position
81 and future positions 82, 83 of the detected bicycle 8, as well as a size, a color
and a shape of the detected bicycle 8 based on the sensor data captured in the first
step S1 of the method. For determining the future positions 82, 83 of the bicycle
8, the control unit 2 determines an expected trajectory 12 of the detected bicycle
8. Determining the trajectory 12 of the detected bicycle 8 comprises determining a
current velocity, a current acceleration and/or a current direction of movement of
the bicycle 8. Figure 1 shows the bicycle 8 in its current position 81 on a left side
of the windshield 7, in a first future position 82 being located on a right side of
the current position 81 and in a second future position 83 being located on a right
side of the first future position 82. Therefore, the current position 81 is the start
of the trajectory 12 and the first and the second future positions 82, 83 are located
on the trajectory 12 such that the three positions 81, 82, 83 are connected by a spline
representing the trajectory 12. Here, it is determined based on the determined trajectory
12 that the bicycle 8 will cross a street in front of the vehicle 1 and thus may be
a safety critical object. Also, other scenarios are possible, e.g., a pedestrian looking
at his smartphone on a crossing in front of the vehicle 1. That is, the method may
optionally include judging, based on the class of the detected object 8 and/or its
trajectory 12, if the detected object 8 is a safety critical object, and carrying out
the following steps S4 - S6 only if the detected object 8 is a safety critical object.
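Merely as an illustrative sketch, the judgement whether the detected object 8 is a safety critical object may, for example, be based on checking whether the predicted trajectory 12 enters a corridor ahead of the vehicle 1, assuming the trajectory points are given in vehicle coordinates; the corridor half-width and the look-ahead distance are illustrative assumptions.

```python
def is_safety_critical(trajectory, corridor_half_width=1.5, look_ahead=30.0):
    """Judge whether a predicted trajectory crosses the ego vehicle's path.

    trajectory: list of (t, x, y) points in vehicle coordinates, where x is
    the lateral offset (left negative) and y the longitudinal distance ahead.
    """
    for _, x, y in trajectory:
        # Inside the corridor directly ahead of the vehicle -> potentially critical.
        if 0.0 < y <= look_ahead and abs(x) <= corridor_half_width:
            return True
    return False

# Example: a cyclist moves from the left into the corridor ahead of the vehicle.
print(is_safety_critical([(0.5, -4.0, 10.0), (1.0, -2.0, 10.0), (1.5, 0.0, 10.0)]))
```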
[0067] In a fourth step S4 of the method, the control unit 2 determines if the detected
bicycle 8 and the content 6 displayed in the windshield 7 are overlapping in at least
one display area 10 being located in a predefined field of view 11 of the user 9 of
the vehicle 1 in the situation shown in figure 1, i.e., when the user 9 is watching
through the displayed content 6 in the main driving direction of the vehicle 1.
[0068] In the present case, the field of view 11 is a default area in the windshield 7 of
the vehicle 1 corresponding to an area in which the transparent display 5 may display
the content 6, i.e., the method uses a fixed predefined field of view 11. However,
it would also be possible that the control unit 2 receives image data from a camera
installed in the vehicle capturing images from the user 9 of the vehicle 1 and determines
a current/varying field of view 11 of the user 9 of the vehicle 1.
[0069] The control unit 2 uses the trajectory 12 including the current and future positions
81 - 83 of the bicycle 8 as well as the shape and the size of the bicycle 8 to determine
a bounding box around the bicycle 8, i.e., the bounding box is the display area 10
inside which the bicycle 8 is located in its current and future positions 81 - 83.
The bounding box 10 may also be a box around the current position 81 of the bicycle
8 and may move according to the trajectory 12 together with the bicycle 8.
[0070] As can be gathered from figure 1, the display area/bounding box 10 is partly located
inside the field of view 11, i.e., the displayed content 6 and the detected object
8 overlap, such that the method proceeds with a fifth step S5. Otherwise, the method
would start again with the first step S1. That is, the fifth step S5 is only carried
out if it is determined that the detected object 8 and the content 6 displayed in
the windshield 7 are overlapping in the at least one display area 10 and optionally
if the content 6 has similar properties, such as a color, as the bicycle 8. In conclusion,
determining the color of the displayed content 6 is also done based on the determined
current and future positions 81 - 83 of the detected bicycle 8.
[0071] More specifically, in the fifth step S5 of the method, the control unit 2 determines
a color of the displayed content 6 based on the color of the bicycle 8 determined
in the third step S3 of the method. In general, two alternatives may be distinguished
in determining the color of the displayed content 6. In a first alternative, the control
unit 2 solely determines a color of the content 6 displayed in the display area/bounding
box 10 based on the determined color of the bicycle 8 (wherein the remaining part
of the content 6 outside the display area/bounding box 10 is not considered by the
control unit 2). In a second alternative, the control unit 2 determines the at least
one color of the displayed content 6 in a whole area where the content 6 is displayed
in the windshield 7, here the area defined as the field of view 11, based on the determined
color of the bicycle 8. In both alternatives, the control unit 2 determines the inverted/opposite
color of the detected object, here the bicycle 8, and sets the inverted color as the
color of the displayed content 6.
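The two alternatives of the fifth step S5 may be summarized in the following non-limiting sketch, assuming 8-bit RGB colors and rectangular regions in display coordinates; the function signature and the region representation are chosen for illustration only.

```python
def determine_content_color(object_color, display_rect, overlap_rect,
                            whole_area=False):
    """Determine the at least one color of the displayed content (step S5 sketch).

    object_color: 8-bit RGB color of the detected object (here the bicycle 8).
    display_rect: whole area in which the content 6 is displayed (field of view 11).
    overlap_rect: display area 10 in which the object and the content overlap.
    whole_area:   False -> first alternative (recolor only the display area 10),
                  True  -> second alternative (recolor the whole content area).
    Returns (region, inverted_color) used to build the control signal in step S6.
    """
    inverted = tuple(255 - channel for channel in object_color)
    region = display_rect if whole_area else overlap_rect
    return region, inverted

# Example: first alternative with a reddish bicycle -> cyan-like content color.
print(determine_content_color((200, 30, 30), (0.2, 0.6, 0.8, 0.9),
                              (0.2, 0.6, 0.35, 0.75)))
```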
[0072] In a sixth step S6 of the method, the control unit 2 determines a control signal
based on the determined at least one color of the displayed content 6 and outputs
the determined control signal to the transparent display 5 such that the transparent
display 5 displays the content 6 and/or the bicycle 8 with the determined at least
one color of the displayed content 6.
[0073] After the sixth step S6, the method starts again with the first step S1. However,
the method is not limited to the strict order of the steps S1 - S6 shown in figure
3. This is just an illustrative example and the steps S1 - S6 may also be processed
by the control unit 2 in a different order. For example, some of the steps may be
processed simultaneously, e.g., the first and the fifth step S1, S5.
Reference signs list
[0074]
1 vehicle
2 control unit
3 front camera
4 surround view camera
5 transparent display
6 displayed content
7 windshield
8 detected object, here bicycle
81 current position of the bicycle
82 first future position of the bicycle
83 second future position of the bicycle
9 user/driver of the vehicle
10 display area where the bicycle is and will be located
11 field of view of the user
12 trajectory
S1 - S6 steps of the method
1. Method for controlling a transparent display (5) configured to display content (6)
in a windshield (7) of a vehicle (1),
characterized in that the method comprises:
- capturing (S1) sensor data relating to an environment of the vehicle (1) using a
sensor system (3, 4) installed at the vehicle (1),
- determining (S3) at least one color included in the sensor data, and
- determining (S5) at least one color of the displayed content (6) based on the determined
at least one color included in the sensor data.
2. Method according to claim 1,
characterized in that the method comprises:
- detecting (S2) an object (8) of a predefined class in the environment of the vehicle
(1) based on the sensor data, and
- optionally determining (S5) a color of the detected object (8) as the at least one
color included in the sensor data.
3. Method according to claim 2,
characterized in that the method comprises:
- determining (S3) a current and/or a future position (81 - 83) of the detected object
(8) based on the sensor data, and
- determining (S5) the at least one color of the displayed content (6) based on the
determined current and/or future position (81 - 83) of the detected object (8).
4. Method according to claim 3, characterized in that determining (S3) the future position (82, 83) of the detected object (8) comprises
determining an expected trajectory (12) of the detected object (8).
5. Method according to claim 4, characterized in that determining the trajectory (12) of the detected object (8) comprises determining
a current velocity, a current acceleration and/or a current direction of movement
of the detected object (8).
6. Method according to any of claims 2 to 5, characterized in that the method comprises determining a size and/or a shape and/or other properties of
the detected object (8).
7. Method according to any of claims 3 to 6, characterized in that the method comprises determining (S4) if the detected object (8) and the content
(6) displayed in the windshield (7) are overlapping in at least one display area (10)
being located in a predefined field of view (11) of a user (9) of the vehicle (1)
when the user (9) is watching through the displayed content (6) based on the determined
current and/or future position (81 - 83) of the detected object (8).
8. Method according to claim 7,
characterized in that:
- if it is determined that the detected object (8) and the content (6) displayed in
the windshield (7) are overlapping in the at least one display area (10), the at least
one color of the displayed content (6) in the determined at least one display area
(10) is determined based on the determined at least one color included in the sensor
data, or
- if it is determined that the detected object (8) and the content (6) displayed in
the windshield (7) are overlapping in the at least one display area (10), determining
the at least one color of the displayed content (6) in a whole area where the content
(6) is displayed in the windshield (7) based on the determined at least one color
included in the sensor data.
9. Method according to any of claims 1 to 8, characterized in that the at least one color of the displayed content (6) is determined to be the inverted
color of the at least one color included in the sensor data.
10. Method according to any of claims 1 to 9, characterized in that the method comprises determining (S6) a control signal based on the determined at
least one color of the displayed content (6) and outputting the determined control signal
to the transparent display (5) such that the transparent display (5) displays the
content (6) with the determined at least one color of the displayed content (6).
11. Control unit (2), characterized in that the control unit (2) is configured to carry out the method according to any of claims
1 to 10.
12. Vehicle (1), characterized in that the vehicle comprises the control unit (2) according to claim 11 and a transparent
display (5) connected to the control unit (2).
13. A computer program comprising instructions which, when the program is executed by
a computer, cause the computer to carry out the method according to any of claims
1 to 10.
14. A computer-readable medium comprising instructions which, when executed by a computer,
cause the computer to carry out the method according to any of claims 1 to 10.