TECHNOLOGICAL FIELD
[0001] Examples of the present disclosure relate to apparatus, methods and computer programs
for providing an audio user interface. Some relate to apparatus, methods and computer
programs for providing a spatial audio user interface.
BACKGROUND
[0002] User interfaces can be provided to enable a user to interact with electronic devices
such as mobile telephones. Spatial audio user interfaces can provide audio outputs
that are rendered so that the user can perceive the audio outputs to be originating
from a particular location or direction.
BRIEF SUMMARY
[0003] According to various, but not necessarily all, examples of the disclosure there may
be provided an apparatus comprising means for: estimating a position of one or more
body parts of a user relative to a spatial audio output system; and enabling one or
more spatial audio items to be provided via the spatial audio output system so that
the one or more spatial audio items are provided at positions determined, at least
in part, by the position of the one or more body parts of the user and one or more
distraction criteria associated with the one or more spatial audio items.
[0004] The means may be for using one or more wireless signals to estimate the position
of the one or more body parts.
[0005] The wireless signals may comprise any one or more of: mmWaves, ultra wide band signals,
WiFi signals, acoustic signals.
[0006] The means may be for determining a location of the spatial audio output system and
using the determined location of the spatial audio output system to estimate positions
of one or more body parts of a user.
[0007] The one or more spatial audio items may comprise one or more interactive user interface
items that enable the user to interact with one or more applications of the apparatus.
[0008] Spatial audio items with a higher one or more distraction criteria may be positioned
at more prominent positions relative to the one or more body parts of the user compared
to spatial audio items with a lower one or more distraction criteria so that the spatial
audio items with a higher one or more distraction criteria are more audibly perceptible
by the user.
[0009] The one or more distraction criteria may be determined by one or more of: importance
of spatial audio item, application associated with spatial audio item, assigned user
preferences.
[0010] One or more of the spatial audio items may be provided within the user's peripersonal
space.
[0011] The means may be for tracking the position of the one or more body parts of the user
and, if the position of the one or more body parts of the user has changed, adjusting
the rendering of the one or more spatial audio items so that the position of the one
or more spatial audio items relative to the one or more body parts of the user is maintained.
[0012] The one or more body parts of the user may comprise any one or more of: user's arms,
legs, torso.
[0013] The spatial audio output system may comprise at least one of: ear pieces, head set,
surround sound speaker system.
[0014] According to various, but not necessarily all, examples of the disclosure there may
be provided an apparatus comprising at least one processor; and at least one memory
including computer program code, the at least one memory and the computer program
code configured to, with the at least one processor, cause the apparatus at least
to perform: estimating a position of one or more body parts of a user relative to
a spatial audio output system; and enabling one or more spatial audio items to be
provided via the spatial audio output system so that the one or more spatial audio
items are provided at positions determined, at least in part, by the position of the
one or more body parts of the user and one or more distraction criteria associated
with the one or more spatial audio items.
[0015] According to various, but not necessarily all, examples of the disclosure there may
be provided a method comprising: estimating a position of one or more body parts of
a user relative to a spatial audio output system; and enabling one or more spatial
audio items to be provided via the spatial audio output system so that the one or
more spatial audio items are provided at positions determined, at least in part, by
the position of the one or more body parts of the user and one or more distraction
criteria associated with the one or more spatial audio items.
[0016] According to various, but not necessarily all, examples of the disclosure there may
be provided a computer program comprising computer program instructions that, when
executed by processing circuitry, cause: estimating a position of one or more body
parts of a user relative to a spatial audio output system; and enabling one or more
spatial audio items to be provided via the spatial audio output system so that the
one or more spatial audio items are provided at positions determined, at least in
part, by the position of the one or more body parts of the user and one or more distraction
criteria associated with the one or more spatial audio items.
[0017] According to various, but not necessarily all, examples of the disclosure there may
be provided a user device comprising an apparatus as described herein.
BRIEF DESCRIPTION
[0018] Some examples will now be described with reference to the accompanying drawings in
which:
Fig. 1 shows an example system;
Fig. 2 shows an example apparatus;
Fig. 3 shows an example method;
Figs. 4A and 4B show an example apparatus in use; and
Fig. 5 shows another example method.
DETAILED DESCRIPTION
[0019] Examples of the disclosure provide apparatus, methods and computer programs for providing
a spatial audio user interface. The spatial audio user interface can comprise spatial
audio items that can enable a user to interact with a user device. In examples of
the disclosure the position of a limb or other part of a user's body can be determined.
The spatial audio items can then be positioned based on the positions of the user's
limbs or other parts of their body. This enables the spatial audio items to be provided
in positions that are intuitive and convenient for a user to interact with.
[0020] Fig. 1 shows an example system 101 that can be used to implement examples of the
disclosure. The system 101 comprises a user device 111 and a spatial audio output
system 113. The spatial audio output system 113 can be configured to provide spatial
audio for a user 103. It is to be appreciated that only components of the system 101
that are referred to in this description are shown in Fig. 1 and that the system 101
could comprise additional components in other examples of the disclosure.
[0021] Fig. 1 also shows a user 103. The user 103 can be the user 103 of the user device
111. The spatial audio output system 113 can be configured to provide spatial audio
to the user 103.
[0022] The user 103 has one or more body parts 105. In the example of Fig. 1 the body parts
105 are the user's arms. In other examples the body parts 105 could be the user's
legs, torso or any other suitable part of their body.
[0023] The user device 111 could be a mobile phone, a smart speaker or any other suitable
electronic device. The user device 111 could be a portable electronic device that
the user 103 could carry in their pocket, handbag, or other place such that there
might be no direct line of sight between the user device 111 and the one or more parts
105 of the user's body. In examples of the disclosure the user device 111 can be configured
to determine the position of the user's body parts 105. In some examples of the disclosure
the user device 111 could be configured to control the spatial audio system 113 to
enable a spatial audio output to be rendered for the user 103.
[0024] The user device 111 can be positioned in proximity to the user 103 so that the user
device 111 can be used to estimate positions of one or more parts of the user's body.
For example, the user device 111 can be positioned close enough to the user 103 to
enable wireless signals to be used to estimate positions of one or more parts of the
user's body.
[0025] The user device 111 comprises an apparatus 107 and a transceiver 109. Only the components
of the user device 111 referred to in the following description have been shown in
Fig. 1. It is to be appreciated that in implementations of the disclosure the user
device 111 can comprise additional components that have not been shown in Fig. 1.
For example, the user device 111 can comprise a power source, a user interface and
other suitable components.
[0026] The apparatus 107 can be a controller 203 comprising a processor 205 and memory 207
that can be as shown in Fig. 2. The apparatus 107 can be configured to enable control
of the user device 111. For example, the apparatus 107 can be configured to control
the radiofrequency beams that are transmitted by the transceiver 109 or any other
suitable functions of the user device 111.
[0027] The user device 111 also comprises at least one transceiver 109. The transceiver
109 can comprise any means that can be configured to enable radio frequency signals
to be transmitted and received by the user device 111. The transceiver 109 can be
configured to enable wireless communications.
[0028] The transceiver 109 can be configured to provide one or more wireless signals that
can be used to estimate the position of one or more body parts of the user. For example,
the transceiver 109 can provide wireless signals such as mmWaves, ultra wide band
signals, WiFi signals, acoustic signals (e.g. active sonar ranging). The transceiver
109 can be configured to transmit wireless signals and then detect the signals that
are reflected back from the user's body parts 105. The apparatus 107 can then use
the information in the reflected signals to estimate the positions of the parts of
the user's body. In some examples the positions of the body parts 105 can be detected
by detecting shadowing or blocking of the wireless signals. In some examples shadowing
or blocking of wireless signals of a communication channel between the user device
111 and the spatial audio system 113 can be used.
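By way of illustration only, the following sketch shows one way in which a reflection of a transmitted pulse could be converted into a range estimate for a body part 105. The correlation-based peak detection and all names used here are hypothetical and are not prescribed by the disclosure.

    # Illustrative sketch only: range estimation from a reflected pulse.
    # The correlation-based approach and all names are hypothetical.
    import numpy as np

    SPEED_OF_LIGHT = 3.0e8  # metres per second

    def estimate_range(tx_pulse, rx_samples, sample_rate_hz):
        """Estimate the one-way distance to the strongest reflector.

        Cross-correlates the received samples with the transmitted pulse;
        the lag of the correlation peak gives the round-trip delay.
        """
        correlation = np.correlate(rx_samples, tx_pulse, mode="valid")
        peak_lag = int(np.argmax(np.abs(correlation)))
        round_trip_s = peak_lag / sample_rate_hz
        return SPEED_OF_LIGHT * round_trip_s / 2.0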
[0029] In some examples the user device 111 can comprise additional components that are
not shown in Fig. 1. For example, an acoustic transducer could be provided that can
be configured to enable acoustic signals to be used to detect the position of the
user's body parts 105. For example, a speaker and a microphone (or speaker array and
microphone array) may be used in combination to transmit acoustic signals and then
detect the acoustic signals that are reflected back from the one or more body parts
105 of the user 103.
[0030] In the example system 101 of Fig. 1 only one user device 111 is shown. In this example
the same user device 111 can transmit the wireless signal and detect reflected wireless
signals. In other examples a plurality of different devices could be provided that
can transmit and/or detect the wireless signals. These can enable the wireless signals
to be transmitted from a first device and reflected from the user's body and then
detected by a different device.
[0031] In examples where the transceiver 109 is configured to enable wireless communication
using mmWaves, the transceiver 109 can be configured to use wavelengths below approximately
10 mm. Wavelengths below approximately 10 mm can be considered to be short wavelengths.
The transceiver 109 can be configured to enable wireless communication using a high
frequency. The high frequency can be above 24 GHz. In some examples the frequency
may be between 24 and 39 GHz.
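As a worked check of the figures above, the free-space wavelength follows from λ = c/f, so the 24 to 39 GHz band corresponds to wavelengths from approximately 12.5 mm down to approximately 7.7 mm:

    # Worked check of the wavelength/frequency relation for the mmWave band.
    c = 3.0e8  # speed of light, m/s
    for f_ghz in (24, 28, 39):
        wavelength_mm = c / (f_ghz * 1e9) * 1e3
        print(f"{f_ghz} GHz -> {wavelength_mm:.1f} mm")
    # prints: 24 GHz -> 12.5 mm, 28 GHz -> 10.7 mm, 39 GHz -> 7.7 mm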
[0032] In some examples the transceiver 109 can be configured to enable 5G communication.
The transceiver 109 can be configured to enable communication within New Radio networks.
New Radio is the 3GPP (3rd Generation Partnership Project) name for 5G technology.
[0033] The use of the wireless signals to determine the position of the one or more parts
105 of the user's body can enable the positions of the parts 105 of the user's body
to be determined even when there is no direct line of sight between the user device
111 and the parts 105 of the user's body, and therefore also no direct line of sight
between the apparatus 107 within the user device 111 and the parts of the user's body.
For instance, the user device 111 could be in the user's pocket or handbag.
[0034] The spatial audio output system 113 can comprise any means that can be configured
to provide a spatial audio output to the user 103. The spatial audio output system
113 is configured to convert an electrical input signal to an output sound signal
that can be heard by the user 103. The spatial audio output system 113 can comprise
earphones, a head set, an arrangement of loudspeakers or any other suitable system.
[0035] The spatial audio that is played back by the spatial audio output system 113 can
be configured so that spatial audio items can be perceived by the user 103 to be located
at particular positions. The audio that is provided by the spatial audio output system
113 can comprise one or more settings to control the spatial aspects of the audio.
For example, head related transfer functions (HRTFs), or other processes can be used
to create spatial characteristics that can be reproduced by the spatial audio output
system 113 when the audio is played back.
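By way of illustration only, the following sketch shows how a mono spatial audio item could be rendered binaurally by convolution with an HRTF pair. The HRTF lookup function is a hypothetical stand-in for whatever HRTF set an implementation uses.

    # Illustrative sketch only: binaural rendering of a mono item with HRTFs.
    # hrtf_for_direction() is a hypothetical stand-in for an HRTF database lookup.
    import numpy as np
    from scipy.signal import fftconvolve

    def render_spatial_item(mono_item, azimuth_deg, elevation_deg, hrtf_for_direction):
        """Return a 2-channel signal placing the item at the given direction."""
        h_left, h_right = hrtf_for_direction(azimuth_deg, elevation_deg)
        left = fftconvolve(mono_item, h_left, mode="full")
        right = fftconvolve(mono_item, h_right, mode="full")
        return np.stack([left, right], axis=0)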
[0036] The spatial processing of the audio content can be performed by any suitable device
in the system 101. In some examples the audio content can be spatially processed by
the user device 111 and can then be transmitted to the spatial audio output system
113 for playback. In other examples the spatial audio output system 113 could comprise
one or more processing modules that could be configured to process the audio signal
to provide spatial characteristics.
[0037] The user device 111 can be configured to communicate with the spatial audio output
system 113. This can enable the user device 111 to provide spatial audio content for
playback to the spatial audio output system 113. In some examples this can enable
the user device 111 to provide information to the spatial audio output system 113
that can then be used by the spatial audio output system 113 when rendering the spatial
audio content and/or spatial audio items. For example, the user device 111 can determine
the position of the user 103 or part 105 of the user's body. This information could
then be provided to the spatial audio output system 113 and used by the spatial audio
output system 113 when rendering one or more spatial audio items. The rendering can
comprise the processing of a digital signal before the digital signal is played back
by a loudspeaker.
[0038] Fig. 2 shows an example apparatus 107. The apparatus 107 illustrated in Fig. 2 can
be a chip or a chip-set. The apparatus 107 can be provided within user devices 111
such as a mobile phone, personal electronics device or any other suitable type of
user device 111. The apparatus 107 could be provided within user devices 111 as shown
in Fig. 1.
[0039] In the example of Fig. 2 the apparatus 107 comprises a controller 203. In the example
of Fig. 2 the implementation of the controller 203 can be as controller circuitry.
In some examples the controller 203 can be implemented in hardware alone, have certain
aspects in software including firmware alone or can be a combination of hardware and
software (including firmware).
[0040] As illustrated in Fig. 2 the controller 203 can be implemented using instructions
that enable hardware functionality, for example, by using executable instructions
of a computer program 209 in a general-purpose or special-purpose processor 205 that
can be stored on a computer readable storage medium (disk, memory etc.) to be executed
by such a processor 205.
[0041] The processor 205 is configured to read from and write to the memory 207. The processor
205 can also comprise an output interface via which data and/or commands are output
by the processor 205 and an input interface via which data and/or commands are input
to the processor 205.
[0042] The memory 207 is configured to store a computer program 209 comprising computer
program instructions (computer program code 211) that controls the operation of the
apparatus 107 when loaded into the processor 205. The computer program instructions,
of the computer program 209, provide the logic and routines that enable the apparatus
107 to perform the methods illustrated in Figs. 3 and 5. The processor 205 by reading
the memory 207 is able to load and execute the computer program 209.
[0043] The apparatus 107 therefore comprises: at least one processor 205; and at least one
memory 207 including computer program code 211, the at least one memory 207 and the
computer program code 211 configured to, with the at least one processor 205, cause
the apparatus 107 at least to perform:
estimating 301 a position of one or more body parts 105 of a user 103 relative to
a spatial audio output system 113; and
enabling 303 one or more spatial audio items to be provided via the spatial audio
output system 113 so that the one or more spatial audio items are provided at positions
determined, at least in part, by the position of the one or more body parts 105 of
the user 103 and one or more distraction criteria associated with the one or more
spatial audio items.
[0044] As illustrated in Fig. 2 the computer program 209 can arrive at the apparatus 107
via any suitable delivery mechanism 201. The delivery mechanism 201 can be, for example,
a machine readable medium, a computer-readable medium, a non-transitory computer-readable
storage medium, a computer program product, a memory device, a record medium such
as a Compact Disc Read-Only Memory (CD-ROM) or a Digital Versatile Disc (DVD) or a
solid-state memory, an article of manufacture that comprises or tangibly embodies
the computer program 209. The delivery mechanism can be a signal configured to reliably
transfer the computer program 209. The apparatus 107 can propagate or transmit the
computer program 209 as a computer data signal. In some examples the computer program
209 can be transmitted to the apparatus 107 using a wireless protocol such as Bluetooth,
Bluetooth Low Energy, Bluetooth Smart, 6LoWPAN (IPv6 over low power personal area
networks), ZigBee, ANT+, near field communication (NFC),
radio frequency identification, wireless local area network (wireless LAN) or any
other suitable protocol.
[0045] The computer program 209 comprises computer program instructions for causing an apparatus
107 to perform at least the following:
estimating 301 a position of one or more body parts 105 of a user 103 relative to
a spatial audio output system 113; and
enabling 303 one or more spatial audio items to be provided via the spatial audio
output system 113 so that the one or more spatial audio items are provided at positions
determined, at least in part, by the position of the one or more body parts 105 of
the user 103 and one or more distraction criteria associated with the one or more
spatial audio items.
[0046] The computer program instructions can be comprised in a computer program 209, a non-transitory
computer readable medium, a computer program product, a machine readable medium. In
some but not necessarily all examples, the computer program instructions can be distributed
over more than one computer program 209.
[0047] Although the memory 207 is illustrated as a single component/circuitry it can be
implemented as one or more separate components/circuitry some or all of which can
be integrated/removable and/or can provide permanent/semi-permanent/ dynamic/cached
storage.
[0048] Although the processor 205 is illustrated as a single component/circuitry it can
be implemented as one or more separate components/circuitry some or all of which can
be integrated/removable. The processor 205 can be a single core or multi-core processor.
[0049] References to "computer-readable storage medium", "computer program product", "tangibly
embodied computer program" etc. or a "controller", "computer", "processor" etc. should
be understood to encompass not only computers having different architectures such
as single/multi-processor architectures and sequential (Von Neumann)/parallel architectures
but also specialized circuits such as field-programmable gate arrays (FPGA), application
specific circuits (ASIC), signal processing devices and other processing circuitry.
References to computer program, instructions, code etc. should be understood to encompass
software for a programmable processor or firmware such as, for example, the programmable
content of a hardware device whether instructions for a processor, or configuration
settings for a fixed-function device, gate array or programmable logic device etc.
[0050] As used in this application, the term "circuitry" can refer to one or more or all
of the following:
- (a) hardware-only circuitry implementations (such as implementations in only analog
and/or digital circuitry) and
- (b) combinations of hardware circuits and software, such as (as applicable):
- (i) a combination of analog and/or digital hardware circuit(s) with software/firmware
and
- (ii) any portions of hardware processor(s) with software (including digital signal
processor(s)), software, and memory(ies) that work together to cause an apparatus,
such as a mobile phone or server, to perform various functions and
- (c) hardware circuit(s) and/or processor(s), such as a microprocessor(s) or a portion
of a microprocessor(s), that requires software (e.g. firmware) for operation, but
the software might not be present when it is not needed for operation.
[0051] This definition of circuitry applies to all uses of this term in this application,
including in any claims. As a further example, as used in this application, the term
circuitry also covers an implementation of merely a hardware circuit or processor
and its (or their) accompanying software and/or firmware. The term circuitry also
covers, for example and if applicable to the particular claim element, a baseband
integrated circuit for a mobile device or a similar integrated circuit in a server,
a cellular network device, or other computing or network device.
[0052] The blocks illustrated in the Figs. 3 and 5 can represent steps in a method and/or
sections of code in the computer program 209. The illustration of a particular order
to the blocks does not necessarily imply that there is a required or preferred order
for the blocks and the order and arrangement of the blocks can be varied. Furthermore,
it can be possible for some blocks to be omitted.
[0053] Fig. 3 shows an example method according to examples of the disclosure. The method
could be implemented using an apparatus 107 and system 101 as described above.
[0054] At block 301 the method comprises estimating a position of one or more body parts
105 of a user 103 relative to a spatial audio output system 113.
[0055] Any suitable process can be used to estimate the positions of the body parts 105.
In some examples one or more wireless signals can be used to estimate the position
of the one or more body parts 105. The wireless signals could comprise mmWaves, ultra
wide band signals, WiFi signals, acoustic signals or any other suitable type of signal.
The position of the user's body parts 105 can be estimated based on detected reflections
of the wireless signals. This can enable the position of the body parts 105 of the
user 103 relative to the user device 111 to be determined.
[0056] The position of the spatial audio system 113 relative to the user device 111 can
also be determined. For example, where the spatial audio system 113 comprises earbuds
or a head set, the properties of a datalink between the spatial audio system 113 and the
user device 111 can be examined to determine the position of the earbuds or headset.
In some examples the position of the earbuds or head set can be determined by estimating
a likely position of the user's head based on normal human motions. In examples where
the spatial audio system comprises loudspeakers the positions of these loudspeakers
can be determined and provided to the user device 111.
[0057] Once the position of the spatial audio system 113 relative to the user device 111
and the position of the user's body parts relative to the user device 111 are known
these can be used to estimate a position of the user's body parts relative to the
spatial audio system 113. This position can be estimated using an algorithm or any
other suitable means.
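By way of illustration only, and assuming both estimates are expressed in a common user device frame with a shared orientation, the combination described above reduces to a vector subtraction:

    # Illustrative sketch only: combining the two estimates described above.
    # Assumes both positions are expressed in the user device's frame and
    # that the frames share an orientation (no rotation is applied).
    import numpy as np

    def body_part_relative_to_audio_system(body_part_in_device_frame,
                                           audio_system_in_device_frame):
        return (np.asarray(body_part_in_device_frame)
                - np.asarray(audio_system_in_device_frame))

    # Hypothetical example: hand 0.4 m in front of the device, earbuds 0.6 m above it.
    hand_rel_earbuds = body_part_relative_to_audio_system([0.4, 0.0, 0.0],
                                                          [0.0, 0.0, 0.6])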
[0058] In some examples the positions of the body parts 105 of the user 103 could be estimated
based on a context of the user 103. For example, the position and/or activity of the
user 103 could be determined and this could enable the position of parts of the user's
body to be estimated. For instance, if it is determined that the user 103 is seated
at a desk then it is likely that the user's arms will be positioned in front of them
to enable typing or writing and that the legs of the user 103 will be positioned in
front of them in a seated position.
[0059] It is to be appreciated that a combination of methods for determining the position
of the parts 105 of the user's body could be used in some examples of the disclosure.
For instance, the apparatus 107 could first determine the context of the user 103
to determine a likely position of the user's limbs or other body parts 105. The wireless
signals, or any other suitable means could then be used to obtain a more accurate
estimate of the positions or could be used to detect movement or changes in position
of the user's limbs or other body parts 105.
[0060] At block 303 the method comprises enabling one or more spatial audio items to be
provided via the spatial audio output system 113. The spatial audio items could comprise
one or more interactive user interface items that enable the user 103 to interact
with one or more applications of the apparatus 107 and/or user device 111. For example,
the spatial audio items could comprise a notification that an event associated with
one or more applications of the apparatus 107 has occurred. In some examples the spatial
audio items could comprise items that a user 103 could select or otherwise interact
with.
[0061] The spatial audio items can be provided by the spatial audio output system 113 so
that the spatial audio items are perceived by the user 103 to originate from a determined
location or direction. Any suitable means can be used to enable the rendering of the
spatial audio items. For example HRTFs can be used where the spatial audio output
system 113 comprises a head set or earbuds. Other types of filtering or processing
could be used in other examples of the disclosure.
[0062] The spatial audio items can be provided within the user's peripersonal space. The
peripersonal space is the region of space immediately surrounding the user's body.
The location of the peripersonal space can be determined from the estimated positions
of the user's body parts 105.
[0063] The position at which the spatial audio items are to be provided can be determined,
at least in part, by the position of the one or more body parts 105 of the user 103
and one or more distraction criteria associated with the one or more spatial audio
items. Sounds that occur within some locations within the peripersonal space of a
user 103 are naturally more distracting than sounds provided at other locations. For
example, a sound that occurs near a user's hands will be naturally more distracting
than a similar sound that occurs near a user's elbow or a sound that occurs further
away from the user 103. This naturally occurring variation in how distracting a sound
will be can be used to determine where the audio items should be provided. This can
enable different levels of distraction and noticeability to be associated with different
spatial audio items.
[0064] The spatial audio can be provided so that items with a higher one or more distraction
criteria are positioned at more prominent positions relative to the body parts 105
of the user 103 compared to spatial audio items with a lower one or more distraction
criteria.
[0065] This causes the spatial audio items with a higher one or more distraction criteria
to be more audibly perceptible by the user 103. For instance, a spatial audio item
with a higher distraction criteria could be provided at a location close to a user's
hand while a spatial audio item that has a lower distraction criteria could be provided
at a location that is close to the user's elbow. Other locations for the spatial audio
items could be used in other examples of the disclosure.
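By way of illustration only, the following sketch maps a distraction score for a spatial audio item 403 to one of the positions discussed above. The 0 to 1 score, the thresholds and the candidate positions are hypothetical, not a scheme prescribed by the disclosure.

    # Illustrative sketch only: mapping distraction criteria to prominence.
    # The 0-1 score, thresholds and candidate positions are hypothetical.
    def choose_position(distraction_score, body_part_positions):
        """Return a rendering position; higher scores get more prominent positions."""
        if distraction_score >= 0.7:
            return body_part_positions["hand"]     # most prominent
        if distraction_score >= 0.3:
            return body_part_positions["elbow"]    # less prominent
        return body_part_positions["behind_user"]  # least prominent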
[0066] Any suitable factors could be used to determine the distraction criteria. In some
examples the one or more distraction criteria could be determined, at least in part,
by an importance of a spatial audio item. The importance could be an indication of
the significance of an event associated with a spatial audio item. For instance, a
notification that the user device 111 is running low on power could have a higher
importance than an incoming chat message.
[0067] In some examples the one or more distraction criteria could be determined, at least
in part, by an application associated with a spatial audio item. For instance, items
associated with an email application could be provided with a higher distraction criteria
than spatial audio items associated with a chat or gaming application or other applications.
[0068] In some examples the one or more distraction criteria could be determined, at least
in part, by assigned user preferences. In such examples the user 103 of the user device
111 could indicate the items, functions and/or applications to which they would like
to assign higher distraction criteria. For example, a user 103 could indicate the
applications of the user device 111 that they wish to associate with a higher distraction
criteria or conversely the applications that they wish to associate with a lower distraction
criteria. In some examples a user 103 could associate specific events of the user
device 111 with a higher distraction criteria. For instance, the user 103 could assign
messages or incoming communications from specific people to a higher distraction criteria.
As an example, this could enable the user 103 to assign a higher distraction criteria
to their manager or to family members than to friends.
[0069] It is to be appreciated that the method can comprise additional blocks that are not
shown in Fig. 3. For instance, in some examples the method can also comprise tracking
the position of the one or more body parts 105 of the user 103. This can enable changes
in the positions of the user 103 or body parts of the user 103 to be monitored over
time. This enables the rendering of the one or more spatial audio items to be adjusted
so that the position of the one or more spatial audio items relative to the one or
more body parts of the user 103 is maintained. For instance, if the spatial audio
item is to be rendered close to the user's hand then as the user 103 moves their hand
relative to the apparatus 107 or the spatial audio system 113 the position of the
spatial audio item relative to the apparatus 107 and/or spatial audio system 113 also
changes. The apparatus 107 can be configured to update the rendering of the spatial
audio to take into account this change.
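By way of illustration only, the adjustment described above could take the form of a tracking loop that re-renders the item whenever the tracked body part moves; the estimation and rendering callables here are hypothetical stand-ins for the stages described above.

    # Illustrative sketch only: maintaining an item's position relative to a
    # tracked body part. The callables are hypothetical stand-ins.
    import numpy as np

    def tracking_loop(item, offset, estimate_body_part_position, render_at, stop):
        """Re-render `item` at body-part position + `offset` whenever the part moves."""
        last = None
        while not stop():
            position = estimate_body_part_position()  # numpy array, device frame
            if last is None or not np.allclose(position, last, atol=0.02):
                render_at(item, position + offset)  # keep the relative position
                last = position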
[0070] The tracking of the position of the one or more body parts 105 of the user 103 can
enable the position of the peripersonal space and the locations within the peripersonal
space to be adjusted relative to the position of the apparatus 107 and/or the spatial
audio system 113. For instance, if the user's arm is in a first position at a first
time then the spatial audio output can be provided at a first position corresponding
to the position of the arm. If the arm is detected to be in a second position at a
second time then the spatial audio output can be provided in a second position corresponding
to the second position of the arm.
[0071] Figs. 4A and 4B show example implementations of the disclosure. In the example shown
in Fig. 4A the apparatus 107 has determined the position of parts 105 of the user's
body. In this example the body parts 105 are the arms 405A, 405B of the user 103.
It is to be appreciated that the positions of other parts 105 of the user's body could
be determined in other examples of the disclosure. For example, the position of the
user's legs or torso could be determined.
[0072] The positions of the user's body parts 105 can be determined using any suitable means.
In some examples the positions of the user's body parts can be determined using wireless
signals and/or from determining a context of the user 103.
[0073] In the example shown in Fig. 4A both of the user's arms 405A, 405B are positioned
in front of the user 103. In this example the user 103 could be seated at a desk
and the two arms 405A, 405B could be positioned in front of the user 103 as the user
types.
[0074] In this example the user's peripersonal space has been divided into four different
zones 401A, 401B, 401C, 401D. The positions of the four different zones 401A, 401B,
401C, 401D are determined by the positions of the user's arms 405A, 405B. In this
example a first zone 401A is provided to the left of the user's left arm 405A. A second
zone 401B is provided to the right of the user's left arm 405A. A third zone 401C
is provided to the left of the user's right arm 405B and a fourth zone 401D is provided
to the right of the user's right arm 405B. It is to be appreciated that other numbers
and arrangements of the zones could be used in other examples of the disclosure.
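By way of illustration only, the four zones 401A, 401B, 401C, 401D could be derived from the estimated arm positions as fixed lateral offsets; the 0.15 m offset is a hypothetical value.

    # Illustrative sketch only: deriving the zones of Fig. 4A from estimated
    # arm positions. The 0.15 m lateral offset is a hypothetical value.
    import numpy as np

    def build_zones(left_arm_pos, right_arm_pos, offset_m=0.15):
        """Return zone centre positions keyed as in Fig. 4A."""
        lateral = np.array([offset_m, 0.0, 0.0])  # sideways in the user's frame
        return {
            "401A": np.asarray(left_arm_pos) - lateral,   # left of the left arm
            "401B": np.asarray(left_arm_pos) + lateral,   # right of the left arm
            "401C": np.asarray(right_arm_pos) - lateral,  # left of the right arm
            "401D": np.asarray(right_arm_pos) + lateral,  # right of the right arm
        }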
[0075] The zones 401A, 401B, 401C, 401D are positioned so that they are easy and intuitive
for a user 103 to interact with. The zones 401A, 401B, 401C, 401D are positioned so
that spatial audio items 403 that are provided in these zones 401A, 401B, 401C, 401D
are highly perceptible to the user 103. The spatial audio items 403 in the zones 401A,
401B, 401C, 401D are likely to be more perceptible than spatial audio items provided
elsewhere. This makes the spatial audio items 403 provided in these zones more distracting
than spatial audio items provided outside of the user's peripersonal space, or provided
in an inconvenient location within the peripersonal space such as close to the user's
elbow.
[0076] The different zones 401A, 401B, 401C, 401D can be associated with different functions
and/or applications of the apparatus 107 or the user device 111. In the example of
Figs. 4A and 4B each zone 401A, 401B, 401C, 401D is associated with a different function
or application. In other examples some applications and functions could be associated
with more than one zone.
[0077] As shown in Fig. 4A the first zone 401A is associated with a first email account,
the second zone 401B is associated with a second email account, the third zone 401C
is associated with a first application and the fourth zone 401D is associated with
a second application.
[0078] In the example shown in Fig. 4A the user device 111 receives an email associated
with the second email account. This causes a notification to be provided to the user
103. In this example the notification is a spatial audio item 403. The apparatus 107
determines that a spatial audio item 403 is to be provided and then determines the
location of the zone 401B associated with the spatial audio item 403 that is to be
provided. In this example the apparatus 107 will determine that the spatial audio
item 403 is to be provided in a zone 401B that is located to the right of the user's
left arm 405A as shown in Fig. 4A. The apparatus 107 can then determine the location
of the user's arm so that the apparatus 107 can determine the location of the user's
arm relative to the spatial audio system 113 and/or the apparatus 107. The apparatus
107 can then enable the spatial audio item 403 to be processed so that the spatial
audio item 403 sounds as though it is located within the second zone 401B. This therefore
enables an alert to be provided to the user 103. The user 103 can perceive the alert
without having to directly interact with the user device 111.
[0079] In this example the position of the spatial audio item 403 provides information to
the user 103. In this example the position of the spatial audio item 403 provides
information indicative of the application or function that is associated with the
spatial audio item 403. This can enable the user 103 to distinguish between different
notifications or other types of spatial audio items 403 without having to look at
the user device 111.
[0080] In some examples the apparatus 107 can be configured to detect a user input or user
interaction associated with the spatial audio item 403. For instance, the user 103
could make a user input in response to the spatial audio item 403. For instance, the
user 103 could make a gesture or other user input that could cause the spatial audio
item 403 to be cancelled or could enable a function associated with the spatial audio
item 403 to be performed.
[0081] Fig. 4B shows an example in which the user 103 is interacting with the spatial audio
item 403 and so is enabling control of functions of the user device 111 and/or apparatus
107. In this example the interactions comprise the user 103 making a gesture within
the second zone 401B, or at least partly within the second zone 401B that is associated
with the spatial audio item 403. In this example the user 103 touches the right side
of their left arm 405A. The user 103 can move their right arm 405B to touch the right
side of their left arm 405A as shown in Fig. 4B.
[0082] The apparatus 107 can detect that the gesture user input has occurred. The apparatus
107 can detect this by determining the positions of the user's arms 405A, 405B and
recognizing this as a gesture user input. The positions of the user's arms 405A, 405B
and/or movement of the user's arms 405A, 405B can be detected using wireless signals
and/or any other suitable means.
[0083] The apparatus 107 can then identify that this user input is associated with the spatial
audio item 403 because the user input has been detected in the second zone 401B to
the right of the left arm 405A. The apparatus 107 therefore determines that the interaction
is associated with the second email account and so can enable a function associated
with the second email account to be performed. In this example the function could
be providing an update relating to the emails that have been received. For example,
it could provide an indication of the number of emails that have been received and
the sender of the emails. The indication could be provided as an audio output via
the spatial audio system 113. Other functions could be performed in response to the
interactions from the user 103 in other examples of the disclosure.
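By way of illustration only, the following sketch resolves a detected gesture position to the zone that contains it and invokes an associated handler; the zone radius and the handler mapping are hypothetical.

    # Illustrative sketch only: dispatching a gesture to the function of the
    # zone it falls in. The zone radius and handler mapping are hypothetical.
    import numpy as np

    def dispatch_gesture(gesture_pos, zones, handlers, zone_radius_m=0.12):
        """Run the handler of the first zone containing the gesture position."""
        for zone_id, centre in zones.items():
            if np.linalg.norm(np.asarray(gesture_pos) - centre) <= zone_radius_m:
                handler = handlers.get(zone_id)
                if handler is not None:
                    return handler()  # e.g. read out an email account summary
        return None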
[0084] The spatial audio items 403 therefore enable the user 103 to interact with functions
of the apparatus 107 and/or user device 111 without having to touch or look at or
otherwise directly interact with the apparatus 107 and/or user device 111. This can
enable the user 103 to control functions of the apparatus 107 and/or user device 111
while the user device 111 remains in their pocket and/or handbag or otherwise when
there is no line of sight between the apparatus 107 and the one or more parts of the
user's body. This can be more convenient for the user 103.
[0085] The examples of Figs. 4A and 4B show an example of how a spatial audio item 403 could
be provided to a user 103 and how the user 103 can interact with the spatial audio
item 403. In some examples the apparatus 107 can be configured to enable a spatial
audio item 403 to be provided in response to an input from the user 103. The input
could be a gesture user input that comprises the user 103 positioning their arms
405, or other parts of their body, in a specific configuration. For example, the user
103 could hold their arms 405 out in front of them or could use one arm 405 to tap
another part of their body. The positions of the user's arms 405 could be detected
and recognized as a gesture input. In response to the recognition of the gesture input
the spatial audio items 403 could be provided.
[0086] As an example, if the user 103 wishes to check whether any emails have been received
on the second email account they could move their right arm 405B to touch the right
side of their left arm 405A. This gesture could be recognized as an input associated
with the second email account. In response to this input a function associated with
the second email account could be performed. For instance, a spatial audio item 403
comprising information relating to the second email account could be provided. The
information that is provided could be an indication of the number of emails that have
been received and the sender of the emails or any other suitable information. The
spatial audio item 403 could be provided in the zone 401B associated with the second
email account or in any other suitable location within the user's peripersonal space.
[0087] Fig. 5 shows another example method that could be implemented using the apparatus
107 and systems 101 described herein.
[0088] At block 501 the method comprises requesting a spatial audio item 403. The spatial
audio item 403 could be a user interface item that can enable a user 103 to interact
with the user device 111 and/or apparatus 107. For instance, it could be a notification
that informs the user 103 of an event or an item that can be selected by a user 103
or any other suitable user interaction.
[0089] The request for the spatial audio item 403 can be received or otherwise obtained
by the apparatus 107. The request for the spatial audio item 403 could be triggered
by the occurrence of an event. The trigger event could be the receiving of a message
by an email or messaging application or any other suitable event. In some examples
the trigger event could be detected by the user device 111 or a module within the
user device 111.
[0090] The method can also comprise receiving one or more distraction criteria for the spatial
audio item 403. The distraction criteria can provide an indication of the level of
distraction that should be provided by the spatial audio item 403. The distraction
criteria can be determined by any one or more of: importance of spatial audio item,
application associated with spatial audio item, assigned user preferences or any other
suitable factors or combination of such factors.
[0091] At block 503 a position for the spatial audio item 403 can be determined. The position
can be a location relative to one or more body parts 105 of the user 103. In some
examples the location could be a position within one or more zones that are defined
relative to one or more body parts 105 of the user 103. The position for the spatial
audio item 403 can be determined based on the functions associated with the spatial
audio item 403, the positions of the body parts 105 of the user 103, the zones available
within the user's peripersonal space and/or any other suitable factor.
[0092] The position for the spatial audio item 403 can be determined, at least in part,
based on one or more distraction criteria associated with the spatial audio item.
The distraction criteria can ensure that spatial audio items 403 with a higher designated
importance value can be provided in positions that correspond to the level of importance.
This can enable spatial audio items 403 with a higher one or more distraction criteria
to be positioned at more prominent positions relative to the body parts 105 of the
user 103 compared to spatial audio items with a lower one or more distraction criteria.
[0093] The most prominent positions for the spatial audio items 403 comprise the positions
at which the user 103 would find the spatial audio items 403 most distracting. The
most prominent positions could be close to the user's hands or a position that appears
to be within the user's head. Positions with lower prominence could comprise positions
close to the user's elbow or behind the user 103. The distraction criteria can be
used so that spatial audio items 403 with one or more higher distraction criteria
are provided at more prominent positions. This causes the spatial audio items with
a higher one or more distraction criteria to be more audibly perceptible by the user
103.
[0094] The position determined at block 503 can be an ideal position. This could be an optimal,
or substantially optimal, position for the spatial audio item. The actual position
that can be achieved for the spatial audio item 403 can be limited by factors such
as the accuracy with which the position of the user's body parts 105 can be determined,
the accuracy at which the spatial audio system 113 can render audio items and any
other suitable factors.
[0095] At block 505 the position of one or more of the user's body parts 105 can be estimated.
This could comprise estimating the position of the user's legs, arms or torso or any
other suitable part of the user's body.
[0096] The position of the user's body parts 105 can be determined using wireless signals
or any other suitable means.
[0097] At block 507 the settings for the spatial audio system 113 are calculated. The settings
for the spatial audio system 113 can comprise the filters, or other processes, that
are to be used to enable the spatial audio item 403 to be rendered so that the spatial
audio item 403 is perceived to be at the position determined at block 503. The user's
limbs or other body parts 105 do not have a fixed position relative to the apparatus
107 and/or the spatial audio system 113. This means that the position at which the
spatial audio item 403 is to be rendered can change over time, even if the user device
111 and the spatial audio system 113 do not move. Therefore, the estimated position
of the body parts 105 of the user 103 is used together with the ideal position for
the spatial audio item 403 in order to calculate the settings for the spatial audio
system 113.
[0098] At block 509 the spatial audio item 403 is provided. At block 509 the spatial audio
item 403 can be provided in a digital signal. At block 511 the spatial audio item
403 is played back to a user 103. At block 511 the spatial audio system 113 converts
the digital signal comprising the spatial audio item 403 into an acoustic signal that
can be heard by the user 103. The settings that are applied to the spatial audio item
403 ensure that the user 103 perceives the spatial audio item 403 at the appropriate
position within their peripersonal space.
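By way of illustration only, blocks 501 to 511 could be sequenced as follows; every stage function is a hypothetical stand-in for the corresponding block described above.

    # Illustrative sketch only: blocks 501-511 of Fig. 5 in sequence.
    # `stages` bundles hypothetical stand-ins for the stages described above.
    def provide_spatial_audio_item(item, distraction_criteria, stages):
        # block 501: the requested item and its distraction criteria arrive as inputs
        ideal_pos = stages.determine_position(distraction_criteria)   # block 503
        body_parts = stages.estimate_body_part_positions()            # block 505
        settings = stages.calculate_settings(ideal_pos, body_parts)   # block 507
        signal = stages.render(item, settings)                        # block 509
        stages.play_back(signal)                                      # block 511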
[0099] The example method shown in Fig. 5 also shows how spatial audio items 403 can be
provided in response to a user interaction or input. At block 513 the method comprises
estimating a new position of the body parts 105 of the user 103. The new position
can be estimated using wireless signals or any other suitable means. The new position
can be estimated in the same way that the position of the body parts 105 is estimated
at block 505.
[0100] At block 515 a user interaction can be detected. The user interaction can be detected
if the new estimated position of the body parts 105 corresponds to a known input gesture.
For instance, in the example shown in Fig. 4B the user interaction is the user 103
moving one of their hands into one of the predefined zones within their peripersonal
space. In the example of Fig. 4B the user moves their hand into the zone in which
the spatial audio item 403 was provided. Other gestures and user inputs could be used
in other examples of the disclosure.
[0101] At block 517 a function associated with the detected user interaction is determined.
For example, the function and/or application associated with the zone in which the
gesture has been detected could be identified. This could be used in examples such
as those shown in Figs. 4A and 4B. It is to be appreciated that the user interaction
could comprise any recognisable gesture so that other user interactions could be used
in other examples of the disclosure. For instance, the user interaction could be a
user waving their hand or other part of their body. The user interaction need not
be a movement that takes place within a particular zone.
[0102] At block 519 the function associated with the interaction can be performed.
[0103] Examples of the disclosure provide the advantage that the position of the parts of
the user's body 105 can be determined even when there is no direct line of sight between
the apparatus 107 and the parts of the user's body. For example, if the user device
111 is within the user's pocket or handbag. The spatial user interface and the spatial
audio items 403 that are provided can therefore enable a user to interact with the
user device 111 without having to remove the user device 111 from their pocket or
handbag. This can be more convenient for the user 103.
[0104] In some examples of the disclosure the position of the spatial audio items 403 can
also enable additional information to be conveyed to a user 103. For instance, the
position of the spatial audio item can provide an indication of the application or
function associated with a notification. In some examples the position of the spatial
audio item can be used to provide an indication of importance of the spatial audio
item 403 so that spatial audio items 403 with a higher designated importance value
can be provided in positions that correspond to the level of importance.
[0105] The term 'comprise' is used in this document with an inclusive not an exclusive meaning.
That is any reference to X comprising Y indicates that X may comprise only one Y or
may comprise more than one Y. If it is intended to use 'comprise' with an exclusive
meaning then it will be made clear in the context by referring to "comprising only
one..." or by using "consisting".
[0106] In this description, reference has been made to various examples. The description
of features or functions in relation to an example indicates that those features or
functions are present in that example. The use of the term 'example' or 'for example'
or 'can' or 'may' in the text denotes, whether explicitly stated or not, that such
features or functions are present in at least the described example, whether described
as an example or not, and that they can be, but are not necessarily, present in some
of or all other examples. Thus 'example', 'for example', 'can' or 'may' refers to
a particular instance in a class of examples. A property of the instance can be a
property of only that instance or a property of the class or a property of a sub-class
of the class that includes some but not all of the instances in the class. It is therefore
implicitly disclosed that a feature described with reference to one example but not
with reference to another example, can where possible be used in that other example
as part of a working combination but does not necessarily have to be used in that
other example.
[0107] Although examples have been described in the preceding paragraphs with reference
to various examples, it should be appreciated that modifications to the examples given
can be made without departing from the scope of the claims.
[0108] Features described in the preceding description may be used in combinations other
than the combinations explicitly described above.
[0109] Although functions have been described with reference to certain features, those
functions may be performable by other features whether described or not.
[0110] Although features have been described with reference to certain examples, those features
may also be present in other examples whether described or not.
[0111] The term 'a' or 'the' is used in this document with an inclusive not an exclusive
meaning. That is any reference to X comprising a/the Y indicates that X may comprise
only one Y or may comprise more than one Y unless the context clearly indicates the
contrary. If it is intended to use 'a' or 'the' with an exclusive meaning then it
will be made clear in the context. In some circumstances the use of 'at least one'
or 'one or more' may be used to emphasize an inclusive meaning but the absence of these
terms should not be taken to infer any exclusive meaning.
[0112] The presence of a feature (or combination of features) in a claim is a reference
to that feature (or combination of features) itself and also to features that achieve
substantially the same technical effect (equivalent features). The equivalent features
include, for example, features that are variants and achieve substantially the same
result in substantially the same way. The equivalent features include, for example,
features that perform substantially the same function, in substantially the same way
to achieve substantially the same result.
[0113] In this description, reference has been made to various examples using adjectives
or adjectival phrases to describe characteristics of the examples. Such a description
of a characteristic in relation to an example indicates that the characteristic is
present in some examples exactly as described and is present in other examples substantially
as described.
[0114] Whilst endeavoring in the foregoing specification to draw attention to those features
believed to be of importance it should be understood that the Applicant may seek protection
via the claims in respect of any patentable feature or combination of features hereinbefore
referred to and/or shown in the drawings whether or not emphasis has been placed thereon.
1. An apparatus comprising means for:
estimating a position of one or more body parts of a user relative to a spatial audio
output system; and
enabling one or more spatial audio items to be provided via the spatial audio output
system so that the one or more spatial audio items are provided at positions determined,
at least in part, by the position of the one or more body parts of the user and one
or more distraction criteria associated with the one or more spatial audio items.
2. An apparatus as claimed in claim 1 wherein the means are for using one or more wireless
signals to estimate the position of the one or more body parts.
3. An apparatus as claimed in claim 2 wherein the wireless signals comprise any one or
more of: mmWaves, ultra wide band signals, WiFi signals, acoustic signals.
4. An apparatus as claimed in any preceding claim wherein the means are for determining
a location of the spatial audio output system and using the determined location of
the spatial audio output system to estimate positions of one or more body parts of
a user.
5. An apparatus as claimed in any preceding claim wherein the one or more spatial audio
items comprise one or more interactive user interface items that enable the user to
interact with one or more applications of the apparatus.
6. An apparatus as claimed in any preceding claim wherein spatial audio items with a
higher one or more distraction criteria are positioned at more prominent positions
relative to the one or more body parts of the user compared to spatial audio items
with a lower one or more distraction criteria so that the spatial audio items with
a higher one or more distraction criteria are more audibly perceptible by the user.
7. An apparatus as claimed in any preceding claim wherein the one or more distraction
criteria are determined by one or more of: importance of spatial audio item, application
associated with spatial audio item, assigned user preferences.
8. An apparatus as claimed in any preceding claim wherein one or more of the spatial
audio items are provided within the user's peripersonal space.
9. An apparatus as claimed in any preceding claim wherein the means are for tracking
the position of the one or more body parts of the user and, if the position of the
one or more body parts of the user has changed, adjusting the rendering of the one
or more spatial audio items so that the position of the one or more spatial audio
items relative to the one or more body parts of the user is maintained.
10. An apparatus as claimed in any preceding claim wherein the one or more body parts
of the user comprise any one or more of: user's arms, legs, torso.
11. An apparatus as claimed in any preceding claim wherein the spatial audio output system
comprises at least one of: ear pieces, head set, surround sound speaker system.
12. A method comprising:
estimating a position of one or more body parts of a user relative to a spatial audio
output system; and
enabling one or more spatial audio items to be provided via the spatial audio output
system so that the one or more spatial audio items are provided at positions determined,
at least in part, by the position of the one or more body parts of the user and one
or more distraction criteria associated with the one or more spatial audio items.
13. A method as claimed in claim 12 comprising using one or more wireless signals to estimate
the position of the one or more body parts.
14. A computer program comprising computer program instructions that, when executed by
processing circuitry, cause:
estimating a position of one or more body parts of a user relative to a spatial audio
output system; and
enabling one or more spatial audio items to be provided via the spatial audio output
system so that the one or more spatial audio items are provided at positions determined,
at least in part, by the position of the one or more body parts of the user and one
or more distraction criteria associated with the one or more spatial audio items.
15. A computer program as claimed in claim 14 wherein the computer program instructions also cause
using one or more wireless signals to estimate the position of the one or more body
parts.