TECHNOLOGICAL FIELD
[0001] Embodiments of the present disclosure relate to audio processing.
BACKGROUND
[0002] Spatial audio enables the capturing of audio from an audio source while retaining
information about the relative position of the audio source from an origin. The audio
can then be rendered to a listener at the same (or a different) relative position
from the listener. It is also possible to separately attenuate or amplify specific
audio sources or even remove them entirely from the rendered audio. This provides
a 'focus' on a specific audio source or specific audio sources.
[0003] One way of capturing audio while retaining information about the position of the
audio source is to use an array of microphones. The microphones have known fixed positional
differences and therefore audio from a particular audio source can reach each microphone
with a different time delay. This phase information can accurately position the audio
source if the array is carefully designed. Two omnidirectional microphones can position
a stationary audio source as lying on a locus (e.g. a circle) centered on the axis
between the microphones. A third omnidirectional microphone positions the stationary
audio source at either of two points on that locus (circle), on either side of the
plane shared by the three microphones. A fourth microphone can be used to position
the stationary audio source at a single point.
[0004] Applying phase and amplitude modulation to the outputs of the array of microphones
creates a phased microphone array that can be used to beam steer a lobe of a virtual
microphone for selective capturing of an audio source.
[0005] It will therefore be appreciated that for high quality spatial audio, special microphone
arrangements that have four or more microphones are generally used. The microphones
in such arrangements can be omnidirectional or directional.
[0006] It would be desirable to obtain some of the advantages of spatial audio without requiring
such special microphone arrangements.
BRIEF SUMMARY
[0007] According to various, but not necessarily all, embodiments there is provided an apparatus
comprising means for:
detecting user input indicating a presence of an, at least partial, user-controlled
obstruction;
receiving obstructed audio input from at least one obstructed microphone when a user
is providing the, at least partial, user-controlled obstruction between the at least
one obstructed microphone and a first region;
receiving unobstructed audio input from at least one unobstructed microphone when
the user is not providing the, at least partial, user-controlled obstruction between
the at least one unobstructed microphone and the first region;
comparing the obstructed audio input and the unobstructed audio input;
creating a frequency dependent filter in dependence upon the comparison;
filtering received audio input from the at least one microphone to create filtered
audio that amplifies or attenuates an audio source in the first region.
[0008] In some but not necessarily all examples, the frequency dependent filter is configured
as a spatially dependent filter that differentially amplifies or attenuates audio
sources at different spatial positions.
[0009] In some but not necessarily all examples, the apparatus is configured to detect as
the user input, a spatially specific hand gesture that provides a spatially dependent,
at least partial, user-controlled obstruction of audio reaching the at least one obstructed
microphone.
[0010] In some but not necessarily all examples, the apparatus comprises a camera, and the
means for detecting user input indicating a presence of an, at least partial, user-controlled
obstruction comprises means for processing an output from the camera to recognize
a presence of a hand and movement of the hand as a user hand gesture.
[0011] In some but not necessarily all examples, the frequency dependent filter is based
on a difference in spectrum of the audio input between the at least one obstructed
microphone and the at least one unobstructed microphone caused by acoustic shadowing
by the, at least partial, user-controlled obstruction between the at least one obstructed
microphone and a first region.
[0012] In some but not necessarily all examples, the apparatus comprises means for:
spectral analysis of the obstructed audio input;
spectral analysis of the unobstructed audio input;
generating the frequency dependent filter based on a difference between the spectral
analyses.
[0013] In some but not necessarily all examples, the frequency dependent filter selectively
provides a gain to spectral components corresponding to spectral components of the
obstructed audio signal.
[0014] In some but not necessarily all examples, the frequency dependent filter selectively
provides a gain to lower-frequency harmonics of spectral components that have a harmonic
structure and that are attenuated by the, at least partial, user-controlled obstruction.
[0015] In some but not necessarily all examples, the gain is controlled by the user.
[0016] In some but not necessarily all examples, the frequency dependent filter is a time-variable
filter configured to fade over time.
[0017] In some but not necessarily all examples, the apparatus is configured to prompt the
user to repeat creation of the frequency dependent filter, comprising:
receiving obstructed audio input from at least one obstructed microphone when a user
is providing an, at least partial, obstruction between the at least one obstructed
microphone and a first region;
receiving unobstructed audio input from the at least one unobstructed microphone when
the user is not providing the, at least partial, obstruction between the at least
one unobstructed microphone and the first region;
comparing the obstructed audio input and the unobstructed audio input;
creating a frequency dependent filter in dependence upon the comparison;
filtering received audio input from the at least one microphone to create filtered
audio that amplifies or attenuates an audio source in the first region.
[0018] According to various, but not necessarily all, embodiments there is provided a method
comprising:
receiving obstructed audio input from at least one obstructed microphone when a user
is providing an, at least partial, user-controlled obstruction between the at least
one microphone and a first region;
receiving unobstructed audio input from the at least one unobstructed microphone when
the user is not providing the, at least partial, user-controlled obstruction between
the at least one microphone and the first region;
comparing the obstructed audio input and the unobstructed audio input;
creating a frequency dependent filter in dependence upon the comparison;
filtering received audio input from the at least one microphone to create filtered
audio that amplifies or attenuates an audio source in the first region.
[0019] In some but not necessarily all examples, the frequency dependent filter is based
on a difference in spectrum of the audio input from the obstructed microphone and
the unobstructed microphone caused by acoustic shadowing by a hand of the user.
[0020] In some but not necessarily all examples, the frequency dependent filter provides
frequency dependent gain, and the gain is controlled by the user to be attenuation
or amplification.
[0021] According to various, but not necessarily all, embodiments there is provided a computer
program that when run on at least one processor performs:
detecting user input indicating a presence of an, at least partial, user-controlled
obstruction;
receiving obstructed audio input from at least one obstructed microphone when a user
is providing an, at least partial, user-controlled obstruction between the at least
one obstructed microphone and a first region;
receiving unobstructed audio input from the at least one unobstructed microphone when
the user is not providing the, at least partial, user-controlled obstruction between
the at least one unobstructed microphone and the first region;
comparing the obstructed audio input and the unobstructed audio input;
creating a frequency dependent filter in dependence upon the comparison;
filtering received audio input from the at least one microphone to create filtered
audio that amplifies or attenuates an audio source in the first region.
[0022] According to various, but not necessarily all, embodiments there is provided an apparatus
comprising means for:
detecting user input indicating a presence of an, at least partial, user-controlled
obstruction;
receiving obstructed audio input from at least one microphone when a user is providing
the, at least partial, user-controlled obstruction;
receiving unobstructed audio input from at least one microphone when the user is not
providing the, at least partial, user-controlled obstruction;
comparing the obstructed audio input and the unobstructed audio input;
creating a frequency dependent filter in dependence upon the comparison;
filtering received audio input from at least one microphone to create filtered audio
that amplifies or attenuates an audio source when the user is not providing the, at
least partial, user-controlled obstruction.
[0023] In one embodiment an apparatus comprises means for:
detecting user input indicating a presence of an, at least partial, user-controlled
obstruction;
receiving obstructed audio input from a microphone when a user is providing the, at
least partial, user-controlled obstruction;
receiving unobstructed audio input from the microphone when the user is not providing
the, at least partial, user-controlled obstruction;
comparing the obstructed audio input and the unobstructed audio input;
creating a frequency dependent filter in dependence upon the comparison;
filtering received audio input from the microphone to create filtered audio.
[0024] In another embodiment an apparatus comprises means for:
detecting user input indicating a presence of an, at least partial, user-controlled
obstruction;
receiving obstructed audio input from a first microphone when a user is providing
the, at least partial, user-controlled obstruction;
receiving unobstructed audio input from a second microphone;
comparing the obstructed audio input and the unobstructed audio input;
creating a frequency dependent filter in dependence upon the comparison;
filtering received audio input from the first and/or second microphone to create filtered
audio.
[0025] According to various, but not necessarily all, embodiments there is provided examples
as claimed in the appended claims.
BRIEF DESCRIPTION
[0026] Some examples will now be described with reference to the accompanying drawings in
which:
FIG. 1A & 1B shows an example of obstructing an audio source for the purpose of creating
a frequency-dependent filter;
FIG. 2A & 2B shows an example of obstructing an audio source for the purpose of creating
a frequency-dependent filter;
FIG. 2C shows an example of obstructing an audio source for the purpose of creating
a frequency-dependent filter;
FIG. 3 shows an example of a method of creating a frequency-dependent filter;
FIG. 4 shows an example of a method of creating a frequency-dependent filter after
user input;
FIGs 5A, 5B and 5C illustrate, respectively, an example of a spectrum for unobstructed
audio input, an example of a spectrum for obstructed audio input, and an example of
a difference between the spectrum for unobstructed audio input and the spectrum for
obstructed audio input;
FIG 6A illustrates an example of amplification filter based on the difference between
the spectrum for unobstructed audio input and the spectrum for obstructed audio input
illustrated in FIG 5C;
FIG 6B illustrates an example of attenuation filter based on the difference between
the spectrum for unobstructed audio input and the spectrum for obstructed audio input
illustrated in FIG 5C;
FIG 7 illustrates an example of an apparatus for creating a frequency-dependent filter;
FIG 8 illustrates an example of an apparatus for using the frequency-dependent filter;
FIG 9A illustrates an example of an implementation of the apparatus;
FIG 9B illustrates an example of a computer program that enables creation of a frequency-dependent
filter;
FIGs 10A, 10B and 10C illustrate, respectively, an example of a spectrum for unobstructed
audio input, an example of a spectrum for obstructed audio input with harmonic structure,
and an example of a difference between the spectrum for unobstructed audio input and
the spectrum for obstructed audio input;
FIG 11A illustrates an example of a harmonically-extended amplification filter based
on the difference between the spectrum for unobstructed audio input and the spectrum
for obstructed audio input illustrated in FIG 10C;
FIG 11B illustrates an example of a harmonically-extended attenuation filter based
on the difference between the spectrum for unobstructed audio input and the spectrum
for obstructed audio input illustrated in FIG 10C.
DETAILED DESCRIPTION
[0027] In this description, a group of objects or features can be identified using a reference
numeral without a sub-script. Particular members of the group (if more than one) can
be (but are not necessarily) identified using a reference numeral with a sub-script.
[0028] In this description, the reference 22' will be used to reference user-obstructed
audio input to differentiate it from other audio input 22. The reference 22 will be
used to reference audio input in the absence of a user-obstruction 30. The audio input
in the absence of a user-obstruction 30 can be for the purpose of comparison to create
a filter 50 (the audio input 22 is described at this stage as 'unobstructed' audio
input to distinguish it from the obstructed audio input 22'). The audio input in the
absence of a deliberate user-obstruction 30 can be for the purpose of filtration by
the created filter 50.
[0029] The following description describes various examples in which an apparatus 100 comprises
means for:
detecting user input 134 indicating a presence of a user-controlled obstruction 30;
receiving obstructed audio input 22' from at least one obstructed microphone 20 when
a user 130 is providing an, at least partial, obstruction 30 between the at least
one obstructed microphone 20 and a first region 16;
receiving unobstructed audio input 22 from the at least one unobstructed microphone
20 when the user 130 is not providing the, at least partial, obstruction 30 between
the at least one microphone 20 and the first region 16;
comparing the obstructed audio input 22' and the unobstructed audio input 22;
creating a frequency dependent filter 50 in dependence upon the comparison; and filtering
received audio input 22 from the at least one microphone 20 to create filtered audio
that amplifies or attenuates an audio source 10 in the first region 16.
[0030] In some but not necessarily all examples, the frequency dependent filter 50 is based
on a difference in spectrum between the audio inputs 22 from the at least one unobstructed
microphone and the at least one obstructed microphone 20 caused by acoustic shadowing
by the, at least partial, obstruction 30 between the at least one obstructed microphone
20 and the first region 16. In some examples, the, at least partial, obstruction 30
is a hand 132 of the user 130.
[0031] In some but not necessarily all examples, the frequency dependent filter 50 provides
frequency dependent gain. The gain can be negative (attenuation) or positive (amplification).
In some examples, the gain is controlled by the user 130 to be attenuation or, alternatively,
amplification. In some examples the user provides that control via a gesture 134 of
their hand 132.
[0032] Figs 1A, 1B, 2A, 2B, 2C illustrate one or more spatially distributed audio sources
10ᵢ which produce respective audio 12ᵢ. The audio sources 10ᵢ illustrated and their
arrangement are only examples. There can be more or fewer audio sources 10; for example,
there may be a single audio source 10. The positions of the audio sources 10 can be
different than illustrated. Also, the audio sources 10 can be positioned within a
three-dimensional space. The audio sources 10 can have a different spatial extent
than illustrated. In some examples, an audio source can be an ambient audio source
(for example background noise). An audio source 10 can be a localized audio source
(for example a human speaking).
[0033] In Figs 1A, 1B a frequency dependent filter 50 (not illustrated in these FIGs) is
created based on audio input 22 from a single microphone 20. The comparison of obstructed
audio input 22' (FIG 1B) and the unobstructed audio input 22 (FIG 1A) is based on
audio input 22, 22' that is captured by the microphone 20 at different times. The
comparison is a time-divided comparison.
[0034] In Figs 2A, 2B, 2C a frequency dependent filter 50 (not illustrated in these FIGs)
is created based on audio input 22 from two microphones 20₁, 20₂ that are distinct
and separated in space. The comparison of obstructed audio input 22' (FIG 2B) and
the unobstructed audio input 22 (FIG 2A) can be based on audio input 22, 22' that
is captured by the microphones 20 at different times as described for FIGs 1A, 1B.
However, the comparison of obstructed audio input 22₂' (FIG 2C) and the unobstructed
audio input 22 (FIG 2C) can be based on audio input 22₁, 22₂' that is captured by
different microphones 20₁, 20₂ at the same time (e.g. simultaneously or contemporaneously).
The comparison is space-divided as the different microphones 20₁, 20₂ are at different
positions.
[0035] At FIG 1A, the microphone 20 provides, for further processing, unobstructed audio
input 22 captured at a time (offset time) when the user is not providing the, at least
partial, obstruction 30 between the at least one microphone 20 and the first region
16.
[0036] At FIG 1B, the user provides an obstruction 30 between the at least one microphone
20 and a first region 16. The obstruction 30 can be a complete or partial obstruction.
The obstruction 30 obstructs the audio 12₂ produced by the audio source 10₂. The audio
from the audio source 10₂ that reaches the microphone 20 (if any) is obstructed audio
12₂'. The microphone 20 captures the obstructed audio 12₂' and unobstructed audio
12₁, 12₃ from other audio sources 10₁, 10₃ (if any) to produce the obstructed audio
input 22' at a time (reference time) when the user is providing the, at least partial,
obstruction 30 between the microphone 20 and a first region 16. The microphone 20
provides for further processing the obstructed audio input 22'.
[0037] As will be described later, an apparatus 100 which may or may not comprise the microphone
20 compares the obstructed audio input 22' and the unobstructed audio input 22 and
creates a frequency dependent filter 50 in dependence upon the comparison. After the
user-provided obstruction 30 has been removed, the created frequency dependent filter
50 can then be used to filter received audio input 22 from the microphone 20 to create
filtered audio that amplifies or attenuates an audio source in the first region 16.
[0038] The offset time is a time offset relative to the reference time. The offset time
can be before or after the reference time. The offset time can be immediately before
or immediately after the reference time.
[0039] Thus, at the reference time the user creates a spatially dependent, at least partial,
obstruction 30 of audio reaching the at least one microphone. A user input that indicates
the reference time can be used to indicate a presence of the user-controlled obstruction
30 at the reference time. The apparatus 100 can be configured to detect user input
that determines the reference time, compare audio input 22 from the microphone 20
received at the reference time with audio input 22 from the microphone 20 received
at the offset time (a time offset relative to the reference time), and create the
frequency dependent filter 50 in dependence upon the comparison.
[0040] At the offset time, the microphone 20 is an unobstructed microphone 20.
At the reference time, the microphone 20 is an obstructed microphone 20.
[0041] At FIG 2A, microphones 20 provide, for further processing, unobstructed audio input
22₁, 22₂ captured at a time (offset time) when the user is not providing the, at least
partial, obstruction 30 between the at least one microphone 20 and the first region
16. In this example a pair of microphones 20₁, 20₂ is used, but more could be used.
[0042] At FIG 2B, the user provides an obstruction 30 between the pair of microphones 20
and the first region 16. The obstruction 30 can be a complete or partial obstruction.
The obstruction 30 obstructs the audio 12₂ produced by the audio source 10₂.
[0043] The audio from the audio source 10₂ that reaches the microphone 20₁ (if any) is obstructed
audio 12₂'. The microphone 20₁ captures the obstructed audio 12₂' and unobstructed
audio 12₁, 12₃ from other audio sources 10₁, 10₃ (if any) to produce the obstructed
audio input 22₁' at a time (reference time) when the user is providing the, at least
partial, obstruction 30 between the microphone 20₁ and the first region 16. The microphone
20₁ provides the obstructed audio input 22₁' for further processing.
[0044] The audio from the audio source 10₂ that reaches the microphone 20₂ (if any) is obstructed
audio 12₂'. The microphone 20₂ captures the obstructed audio 12₂' and unobstructed
audio 12₁, 12₃ from other audio sources 10₁, 10₃ (if any) to produce the obstructed
audio input 22₂' at the time (reference time) when the user is providing the, at least
partial, obstruction 30 between the microphone 20₂ and the first region 16. The microphone
20₂ provides the obstructed audio input 22₂' for further processing.
[0045] As will be described later, an apparatus 100 which may or may not comprise the microphone(s)
20 compares the obstructed audio input 22' (obstructed audio input 22₁' and obstructed
audio input 22₂') and the unobstructed audio input 22 (unobstructed audio input 22₁
and unobstructed audio input 22₂) and creates a frequency dependent filter 50 in dependence
upon the comparison. In some examples, a combination (e.g. sum) of the obstructed
audio input 22₁' and obstructed audio input 22₂' is compared to a combination (e.g.
sum) of the unobstructed audio input 22₁ and unobstructed audio input 22₂.
[0046] After the user-provided obstruction 30 has been removed, the created frequency dependent
filter 50 can then be used to filter received audio input 22 from the microphone 20
to create filtered audio that amplifies or attenuates the audio source 10₂ in the
first region 16.
[0047] The offset time is a time offset relative to the reference time. The offset time
can be before or after the reference time. The offset time can be immediately before
or immediately after the reference time.
[0048] Thus, at the reference time the user creates a spatially dependent, at least partial,
obstruction 30 of audio reaching the microphones 20. A user input that indicates the
reference time can be used to indicate a presence of the user-controlled obstruction
30 at the reference time. The apparatus 100 can be configured to detect the user input
that determines the reference time, compare audio input 22 from the microphones 20
received at the reference time with audio input 22 from the microphones 20 received
at the offset time that has a time offset relative to the reference time, and create
the frequency dependent filter 50 in dependence upon the comparison.
[0049] At the offset time, the microphones 20₁, 20₂ are unobstructed microphones 20.
At the reference time, the microphones 20₁, 20₂ are obstructed microphones 20.
[0050] At FIG 2C, microphones 20 provide, for further processing, obstructed and unobstructed
audio input. In this example a pair of microphones 20₁, 20₂ is used, but more could
be used. The user provides an obstruction 30 between the microphone 20₂ and a first
region 16 (but not between the microphone 20₁ and the first region 16). The obstruction
30 can be a complete or partial obstruction. The obstruction 30 obstructs the audio
12₂ produced by the audio source 10₂ and captured by the microphone 20₂.
[0051] When the user is providing the obstruction 30, microphone 20₁ provides for further
processing unobstructed audio input 22₁ and microphone 20₂ provides for further processing
obstructed audio input 22₂'.
[0052] In this example, the user-controlled obstruction 30 is between the microphone 20₂
and the first region 16 but not between the microphone 20₁ and the first region 16.
Consequently, microphone 20₁ provides for further processing unobstructed audio input
22₁ captured when the user is not providing the, at least partial, obstruction 30
between the microphone 20₁ and the first region 16, and microphone 20₂ provides for
further processing obstructed audio input 22₂' captured when the user is providing
the, at least partial, obstruction 30 between the microphone 20₂ and the first region
16.
[0053] The audio from the audio source 10₂ that reaches the microphone 20₁ (if any) is unobstructed
audio 12₂. The microphone 20₁ captures the unobstructed audio 12₂ and unobstructed
audio 12₁, 12₃ from other audio sources 10₁, 10₃ (if any) to produce the unobstructed
audio input 22₁ at a time (reference time) when the user is providing the, at least
partial, obstruction 30 between the microphone 20₂ and the first region 16. The microphone
20₁ provides the unobstructed audio input 22₁ for further processing.
[0054] The audio from the audio source 10₂ that reaches the microphone 20₂ (if any) is obstructed
audio 12₂'. The microphone 20₂ captures the obstructed audio 12₂' and unobstructed
audio 12₁, 12₃ from other audio sources 10₁, 10₃ (if any) to produce the obstructed
audio input 22₂' at the time (reference time) when the user is providing the, at least
partial, obstruction 30 between the microphone 20₂ and the first region 16. The microphone
20₂ provides for further processing the obstructed audio input 22₂'.
[0055] As will be described later, an apparatus 100 which may or may not comprise some or
all of the microphones 20 compares the obstructed audio input 22' (obstructed audio
input 22₂') and the unobstructed audio input 22 (unobstructed audio input 22₁) and
creates a frequency dependent filter 50 in dependence upon the comparison.
[0056] After the user-provided obstruction 30 has been removed, the created frequency dependent
filter 50 can then be used to filter received audio input 22 from the microphone(s)
20 to create filtered audio that amplifies or attenuates an audio source in the first
region 16.
[0057] Thus, at the reference time the user creates a spatially dependent, at least partial,
obstruction 30 of audio reaching a microphone 20₂. A user input indicating a reference
time can be used to indicate a presence of the user-controlled obstruction 30 at the
reference time. The apparatus 100 can be configured to detect the user input that
determines the reference time, compare audio input 22₂ from the microphone 20₂ received
at the reference time with audio input 22₁ from the microphone 20₁ received at the
reference time, and create the frequency dependent filter 50 in dependence upon the
comparison.
[0058] At the reference time, the microphone 20₂ is an obstructed microphone 20 and the microphone
20₁ is an unobstructed microphone 20.
[0059] FIG 3 illustrates a method 200. The method 200 creates a frequency-dependent filter
50 (not illustrated).
[0060] At block 210, the method 200 comprises receiving obstructed audio input 22' from
at least one obstructed microphone 20 when a user is providing an, at least partial,
obstruction 30 between the at least one obstructed microphone 20 and a first region
16.
[0061] At block 212, the method 200 comprises receiving unobstructed audio input 22 from
at least one unobstructed microphone 20 when the user is not providing the, at least
partial, obstruction 30 between the at least one unobstructed microphone 20 and the
first region 16.
[0062] At block 214, the method 200 comprises comparing the obstructed audio input 22' and
the unobstructed audio input 22.
[0063] At block 216, the method 200 comprises creating a frequency dependent filter 50 in
dependence upon the comparison.
[0064] At block 218, the method 200 comprises filtering received audio input from the at
least one microphone 20 to create filtered audio that amplifies or attenuates an audio
source in the first region 16.
[0065] The filtering at block 218 occurs after removal of the obstruction 30 referenced
in block 210.
[0066] The at least one obstructed microphone 20 and the at least one unobstructed microphone
20 can be the same set of one or more microphones 20 at different times (time division).
[0067] The at least one obstructed microphone 20 and the at least one unobstructed microphone
20 can be different sets of one or more microphones 20 at the same time (spatial
division).
[0068] The frequency dependent filter 50 can, for example, be based on a difference in spectrum
of the audio input 22 from the at least one microphone 20 caused by acoustic shadowing
by a hand of the user. The difference can be a change over time for one or more microphones
20 (time division). The difference can be a difference between microphones 20 at the
same time (spatial division).
[0069] FIG 4 illustrates a method 201 similar to method 200. The method 201 creates a frequency-dependent
filter 50 (not illustrated).
[0070] At block 202, the method 201 comprises detecting user input that determines a reference
time when a user is providing an, at least partial, obstruction 30 between the at
least one obstructed microphone 20 and a first region 16 but not providing an obstruction
30 between the at least one unobstructed microphone 20 and the first region 16.
[0071] At block 210, the method 201 comprises receiving obstructed audio input 22' from
at least one obstructed microphone 20 when a user is providing an, at least partial,
obstruction 30 between the at least one obstructed microphone 20 and a first region
16.
[0072] At block 212, the method 201 comprises receiving unobstructed audio input 22 from
at least one unobstructed microphone 20 when the user is not providing the, at least
partial, obstruction 30 between the at least one unobstructed microphone 20 and the
first region 16.
[0073] At block 214, the method 201 comprises comparing the obstructed audio input 22' and
the unobstructed audio input 22.
[0074] At block 216, the method 201 comprises creating a frequency dependent filter 50 in
dependence upon the comparison.
[0075] At block 218, the method 201 comprises filtering received audio input from the at
least one microphone 20 to create filtered audio that amplifies or attenuates an audio
source in the first region 16.
[0076] The filtering at block 218 occurs after removal of the obstruction 30 referenced
in block 210.
[0077] The at least one obstructed microphone 20 and the at least one unobstructed microphone
20 can be the same set of one or more microphones 20 at different times (time division).
In this example, at block 210, the method 201 comprises receiving obstructed audio
input 22' captured at the reference time by the microphone(s) 20 when a user is providing
an, at least partial, obstruction 30 between the at least one obstructed microphone
20 and a first region 16 and at block 212, the method 201 comprises receiving unobstructed
audio input 22 captured at an offset time (a time offset relative to the reference
time) by the microphone(s) 20 when the user is not providing the, at least partial,
obstruction 30 between the at least one unobstructed microphone 20 and the first region
16.
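By way of a non-limiting sketch, this time-division variant of blocks 210 and 212 could be expressed in Python as below; the buffer layout, the one-second analysis window and the function name are illustrative assumptions rather than features of the method 201:

```python
import numpy as np

def split_reference_and_offset(samples: np.ndarray, sample_rate: int,
                               reference_time_s: float, window_s: float = 1.0):
    """Return (obstructed, unobstructed) segments of a recorded mono buffer.

    The obstructed segment starts at the user-indicated reference time; the
    unobstructed segment is taken immediately before it (the offset time).
    """
    ref = int(reference_time_s * sample_rate)
    win = int(window_s * sample_rate)
    obstructed = samples[ref:ref + win]
    unobstructed = samples[max(0, ref - win):ref]
    return obstructed, unobstructed
```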
[0078] The at least one obstructed microphone 20 and the at least one unobstructed microphone
20 can be different sets of one or more microphones 20 at the same time (spatial
division). In this example, at block 210, the method 201 comprises receiving obstructed
audio input 22' captured at the reference time by a first set of one or more microphones
20 when a user is providing an, at least partial, obstruction 30 between the first
set of microphones 20 and a first region 16 and at block 212, the method 201 comprises
receiving unobstructed audio input 22 captured at the reference time by a second set
of one or more microphones 20 when the user is not providing the, at least partial,
obstruction 30 between the second set of microphones 20 and the first region 16.
[0079] Fig 5A is an example of a spectral representation of unobstructed audio input 22
produced by microphone(s) 20 that capture unobstructed audio 12. The example is simplified
for the purpose of explanation. The FIG illustrates the energy (y-axis) that the unobstructed
audio input 22 has within frequency bands (x-axis). Although the frequency bands are
of equal size, in other examples the frequency bands can be of different sizes. There
can also be more or fewer frequency bands.
[0080] FIG 5A therefore represents a frequency spectrum of the unobstructed audio input
22.
[0081] Fig 5B is an example of a spectral representation of obstructed audio input 22' produced
by microphone(s) 20 that capture obstructed audio 12' that is an obstructed variant
of the unobstructed audio 12. FIG 5B therefore represents a frequency spectrum of
the obstructed audio input 22'.
[0082] Fig 5C illustrates a difference 40 between the frequency spectrum of the unobstructed
audio input 22 (FIG 5A) and the frequency spectrum of the obstructed audio input 22'
(FIG 5B). In this example, but not necessarily all examples, the frequency spectrum
of the obstructed audio input 22' (FIG 5B) is subtracted from the frequency spectrum
of the unobstructed audio input 22 (FIG 5A). In this example, the difference 40 represents
a frequency spectrum of the audio from the first region 16 that has been obstructed.
The frequency spectrum of the audio from the first region 16 that has been obstructed
has a range 42.
[0083] The comparison previously described can, for example, determine the difference 40
and use it to create the filter 50.
[0084] The frequency spectrum of the audio from the first region 16 that has been obstructed
(e.g. Fig 5C), is converted to a frequency filter 50 in FIG 6A and 6B. The filter
applies a gain to the range 42 of the frequency spectrum.
[0085] In FIG 6A, the gain is greater than unity (>1) within the range 42 and less than unity
(<1) outside the range 42. FIG 6A therefore illustrates an amplification filter 50
that is configured to preferentially amplify the frequency spectrum of the audio from
the first region 16 that has been obstructed.
[0086] In FIG 6B, the gain is less than unity (<1) within the range 42 and greater than unity
(>1) outside the range 42. FIG 6B therefore illustrates an attenuation filter 50 that
is configured to preferentially attenuate the frequency spectrum of the audio from
the first region 16 that has been obstructed.
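As a minimal sketch of how the difference 40 (FIG 5C) could be turned into the filters of FIGs 6A and 6B, assuming the spectra are available as per-band energy arrays; the gain values and the threshold are illustrative assumptions:

```python
import numpy as np

def make_filter_gains(unobstructed_bands: np.ndarray,
                      obstructed_bands: np.ndarray,
                      amplify: bool,
                      boost: float = 2.0,     # illustrative gain > 1
                      cut: float = 0.5,       # illustrative gain < 1
                      threshold: float = 1e-6) -> np.ndarray:
    """Build per-band gains for the frequency dependent filter 50.

    The difference between the two spectra (FIG 5C) marks the bands
    (range 42) where the obstruction removed energy. An amplification
    filter (FIG 6A) boosts those bands and cuts the rest; an attenuation
    filter (FIG 6B) does the opposite.
    """
    difference = unobstructed_bands - obstructed_bands  # FIG 5A minus FIG 5B
    in_range = difference > threshold                   # range 42
    if amplify:
        return np.where(in_range, boost, cut)
    return np.where(in_range, cut, boost)
```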
[0087] It will therefore be appreciated that the frequency dependent filter 50 is configured
as a spatially dependent filter 50 that differentially amplifies or attenuates audio
sources 10 at different spatial positions.
[0088] The frequency dependent filter 50 is based on a difference 40 in spectrum of the
obstructed audio input 22' and the unobstructed audio input 22 caused by acoustic
shadowing by an obstruction 30. The obstruction 30 can for example be a hand 132 of
the user 130, that provides the, at least partial, obstruction 30 between the obstructed
microphone 20 and the first region 16.
[0089] The frequency-dependent filter 50 selectively provides a gain to spectral components
in the range 42 corresponding to spectral components of the audio signal that has
been obstructed and not captured. The gain can be controlled by a user 130. For example,
in some but not necessarily all examples the user 130 can select whether the filter
is an amplification filter or an attenuation filter. For example, in some but not
necessarily all examples the user 130 can determine the magnitude of the gain, for
example, the magnitude of amplification/attenuation. The user input can, in some
examples, be via gesture 134 of a hand 132 (see FIG 7). That gesture can be part of
or separate to a gesture used to place the hand 132 as the obstruction 30.
[0090] In this example, the frequency dependent filter 50 can, for example, be based on
a difference in spectrum of the audio input 22 from the at least one microphone 20
caused by acoustic shadowing by a hand 132 of the user 130. The difference 40 can
be a change over time for one or more microphones 20 (time division). The difference
40 can be a difference between microphones 20 at the same time (spatial division).
[0091] FIG 7 illustrates an example of an apparatus 100 comprising:
means 112 for detecting user input 134 indicating a presence of a user-controlled
obstruction 30;
means 102, 104 for receiving obstructed audio input 22' from at least one obstructed
microphone 20 when a user 130 is providing an, at least partial, obstruction 30 between
the at least one obstructed microphone 20 and a first region 16;
means 102, 104 for receiving unobstructed audio input 22 from the at least one unobstructed
microphone 20 when the user 130 is not providing the, at least partial, obstruction
30 between the at least one microphone 20 and the first region 16;
means 104 for comparing the obstructed audio input 22' and the unobstructed audio
input 22; and
means 106 for creating a frequency dependent filter 50 in dependence upon the comparison
(e.g. in dependence upon the difference 40).
[0092] As illustrated in FIG 8, the apparatus 100 can also comprise means, e.g. the frequency-dependent
filter 50, for filtering received audio input 22 from the at least one microphone
20 to create filtered audio 24 that amplifies or attenuates an audio source in the
first region 16.
[0093] The apparatus 100 additionally comprises, in this example, spectral analysis means
102 that performs spectral analysis of the received obstructed audio input 22' from
the obstructed microphone(s) 20 and spectral analysis of the received unobstructed
audio input 22 from the unobstructed microphone(s) 20. The spectral analysis means
can, for example, be a spectrum analyzer that is configured to create an unobstructed
frequency spectrum of the unobstructed audio input 22 by converting the received unobstructed
audio input 22 from the time domain to the frequency domain (e.g. as shown in FIG
5A) and to create an obstructed frequency spectrum of the obstructed audio input 22'
by converting the received obstructed audio input 22' from the time domain to the
frequency domain (e.g. as shown in FIG 5B). The frequency-dependent filter 50 is generated
at block 106 based on the difference 40 between the obstructed and unobstructed frequency
spectra.
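One possible realization of the spectral analysis at block 102 is sketched below; the frame length, window and band count are illustrative assumptions, and any comparable time-to-frequency conversion could be substituted:

```python
import numpy as np

def band_energies(signal: np.ndarray, n_bands: int = 16,
                  frame: int = 1024) -> np.ndarray:
    """Average energy per frequency band (the bar spectra of FIGs 5A/5B)."""
    n_frames = len(signal) // frame
    spectrum = np.zeros(frame // 2 + 1)
    window = np.hanning(frame)
    for i in range(n_frames):
        x = signal[i * frame:(i + 1) * frame] * window
        spectrum += np.abs(np.fft.rfft(x)) ** 2
    spectrum /= max(n_frames, 1)
    # Group the FFT bins into n_bands equal-width bands.
    edges = np.linspace(0, len(spectrum), n_bands + 1, dtype=int)
    return np.array([spectrum[a:b].mean() for a, b in zip(edges[:-1], edges[1:])])
```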
[0094] In some time division examples, the apparatus 100 is configured to assume an obstruction
30 at the user-indicated reference time and no obstruction 30 at an offset time. The
obstructed spectrum is obtained by analyzing the audio input from microphone(s) 20
captured at the reference time and the unobstructed spectrum is obtained by analyzing
the audio input from those microphone(s) 20 captured at the offset time.
[0095] The control block 110 can be configured to control when the described methods are
used or not used.
[0096] The control block 110 can be configured to recognize a user input.
[0097] A sensor 112 can be provided to detect a user input. For example, the sensor 112
(not a microphone 20) can detect motion 134 of a hand 132 of a user 130. In some examples,
the sensor 112 can locate the hand 132 of a user 130 relative to the microphones 20.
Any suitable sensor 112 can be used.
[0098] In some examples, the control block 110 is configured to recognize that the obstruction
30 is a hand 132 of the user 130. The differential hand masking causes frequency differentiation
in the audio spectrum. In some examples, the gain is controlled by the user 130 to
be attenuation or, alternatively, amplification. In some examples the user provides
that control via gesture 134 of their hand 132.
[0099] For example, the user 130 may control the preferred behavior for the obstruction
by performing a secondary gesture 134 with the obstructing hand 132. An example may
be first placing the hand as an obstruction 30 and once the apparatus 100 provides
an output acknowledging the action, the user may either perform an amplification
gesture (e.g. pinch-out gesture performed by moving thumb and first finger away from
each other) or an attenuation gesture (e.g. pinch-in gesture performed by moving thumb
and first finger towards each other) to control, respectively, whether the filter
50 should be an amplification filter or an attenuation filter. The extent of amplification/attenuation
could be controlled by a size of the amplification gesture (e.g. a size of an increase
in a distance between thumb and first finger) or size of the attenuation gesture (e.g.
a size of a decrease in a distance between thumb and first finger). The apparatus
100 can, for example, provide feedback on the amplification/attenuation gesture to
the user 130.
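A possible mapping from such a pinch gesture to a filter gain is sketched below; the 50 mm travel and the 12 dB ceiling are illustrative assumptions, not values taken from the description:

```python
def gain_db_from_pinch(distance_change_mm: float, max_gain_db: float = 12.0) -> float:
    """Map a pinch gesture to a gain in dB for the filter 50.

    A positive distance change (pinch-out) selects amplification and a
    negative one (pinch-in) selects attenuation; the size of the gesture
    sets the magnitude of the amplification/attenuation.
    """
    full_travel_mm = 50.0  # assumed gesture size that reaches max_gain_db
    scale = max(-1.0, min(1.0, distance_change_mm / full_travel_mm))
    return scale * max_gain_db
```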
[0100] In some examples, the sensor 112 is a camera. Other examples of the sensor 112 include
a proximity sensor, a hover sensor, a positioning sensor or some other sensor.
[0101] The camera 112 can, for example, be a camera that records a visual part of an audio-visual
scene. The microphones 20 can simultaneously record the audio part of the audio-visual
scene. That audio part can be filtered by the frequency dependent filter 50 as described.
[0102] The sensor 112 can be configured to detect, as the user input, a spatially specific
hand gesture 134 that provides a spatially dependent, at least partial, obstruction
30 of audio reaching the at least one microphone.
[0103] For example, the control block 110 can, in some examples, be configured to process
an output from the camera 112 to recognize a presence of a hand 132, a position of
the hand 132 and movement of the hand 132 as a user hand gesture 134. The control
block 110 can, for example, discriminate different hand gestures 134 as different
user inputs. Computer vision processing can be used to differentiate a hand 132 from
other objects and to differentiate different hand gestures 134.
[0104] A user 130 can indicate a preferred audio focus direction by placing his/her hand
between a microphone 20 and that direction. Hand gestures 134 provide an intuitive
way of selecting where to focus (i.e. selecting the first region 16), even during
audio/video capture. The focus direction selection can be performed without manual
interaction with the apparatus 100, which is especially convenient, for example, while
wearing winter gloves.
[0105] The circumstances that resulted in the creation of a particular frequency-dependent
filter 50 can change over time. If the frequency-dependent filter 50 is not changed,
renewed or updated it can, in some situations, produce incorrect or undesirable results.
[0106] In one example, the apparatus 100 continues to perform spectral analysis of the audio
input 22 to detect a change in a configuration of the audio sources 10, for example
a change, addition, loss or movement of an audio source 10. The spectral analysis
can for example be limited to the range 42 of the frequency spectrum of the unobstructed
audio input 22 to detect a substantial change in that region of the frequency spectrum.
The apparatus can then prompt the user to perform a recalibration process. The recalibration
process is, for example, a repeat of the original process used to create the frequency-dependent
filter 50.
[0107] In this example or other examples, the apparatus 100 can vary the frequency-dependent
filter over time or stop using the frequency-dependent filter 50 automatically. For
example, the frequency dependent filter 50 can be a time-variable filter 50 configured
to fade over time. For example, the magnitude of the gain difference provided by the
filter 50 can be time-dependent and reduce over time (e.g. 10s) to zero. The diminution
of the filter 50 can, for example, be shown visually by the apparatus 100. The user
then has to re-perform the process of creating the frequency-dependent filter 50,
if it is still required.
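A minimal sketch of such a fading filter, assuming per-band gains and a linear fade to unity (no effect) over the example 10 s:

```python
import numpy as np

def faded_gains(gains: np.ndarray, elapsed_s: float,
                fade_s: float = 10.0) -> np.ndarray:
    """Fade the filter 50 toward unity gain over fade_s seconds.

    The gain difference from unity shrinks linearly with elapsed time,
    so after fade_s seconds the filter no longer has any effect.
    """
    remaining = max(0.0, 1.0 - elapsed_s / fade_s)
    return 1.0 + (gains - 1.0) * remaining
```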
[0108] The apparatus 100 can therefore be configured to prompt the user to perform 're-calibration'
by repeating the process of creating the frequency dependent filter 50. This can for
example comprise:
receiving obstructed audio input 22' from at least one microphone 20 when a user is
providing an, at least partial, obstruction 30 between the at least one microphone
20 and a region;
receiving unobstructed audio input 22 from the at least one microphone 20 when the
user is not providing the, at least partial, obstruction 30 between the at least one
microphone 20 and the first region 16;
comparing the obstructed audio input and the unobstructed audio input;
creating a frequency dependent filter 50 in dependence upon the comparison;
filtering received audio input 22 from the at least one microphone 20 to create filtered
audio that amplifies or attenuates an audio source in the first region 16.
[0109] In some of the examples described, but not necessarily all of the examples described,
the apparatus 100 is a hand-portable apparatus of a size that can fit into a jacket
pocket. In some of the examples described, but not necessarily all of the examples
described, the apparatus 100 is a flat screen tablet apparatus such as a mobile telephone,
tablet computer, personal digital assistant or similar.
[0110] FIG 8 illustrates an example of an apparatus 100 that uses the frequency-dependent
filter 50 to filter received audio input 22 from the microphone(s) 20 to create filtered
audio 24 that amplifies or attenuates an audio source in the first region 16.
[0111] The same frequency-dependent filter 50 can be used to filter all the received audio
inputs 22 from the microphones 20 to create filtered audio 24 that amplifies or attenuates
an audio source in the first region 16. Alternatively, the same frequency-dependent
filter 50 can be used to filter a sub-set of the received audio inputs 22 from the
microphones 20 to create filtered audio 24 that amplifies or attenuates an audio source
in the first region 16.
[0112] This FIG illustrates that the received audio inputs 22 can be pre-processed at pre-processing
block 120 before being filtered. In this example, the received audio inputs 22 can
also be pre-processed before spectral analysis 102 is performed (FIG 7).
[0113] The pre-processing can, for example, comprise noise reduction, equalization, spatialization
of the microphone signals, wind noise reduction etc.
[0114] The audio input 22 that is filtered can be 'live', that is real-time or can be accessed
from a memory.
[0115] The audio output, the filtered audio 24, can be rendered 'live', that is in real-time
or can be recorded in a memory for future access.
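By way of a simplified sketch (not the only possible implementation), frame-based filtering with the per-band gains could look as follows; the frame size, the 50% overlap and the band-to-bin expansion are illustrative assumptions, and the overlap-add reconstruction here is only approximate:

```python
import numpy as np

def apply_filter(signal: np.ndarray, band_gains: np.ndarray,
                 frame: int = 1024) -> np.ndarray:
    """Filter audio by scaling each frame's spectrum with per-band gains."""
    n_bins = frame // 2 + 1
    # Expand the per-band gains to one gain per FFT bin.
    repeat = int(np.ceil(n_bins / len(band_gains)))
    bin_gains = np.repeat(band_gains, repeat)[:n_bins]
    window = np.hanning(frame)  # Hann at 50% overlap sums to ~1
    hop = frame // 2
    out = np.zeros(len(signal))
    for start in range(0, len(signal) - frame + 1, hop):
        x = signal[start:start + frame] * window
        spec = np.fft.rfft(x) * bin_gains
        out[start:start + frame] += np.fft.irfft(spec, n=frame)
    return out
```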
[0116] FIG 9A illustrates an example of a controller 70 for the apparatus 100. Implementation
of a controller 70 may be as controller circuitry. The controller 70 may be implemented
in hardware alone, have certain aspects in software including firmware alone or can
be a combination of hardware and software (including firmware).
[0117] As illustrated in Fig 9A the controller 70 may be implemented using instructions
that enable hardware functionality, for example, by using executable instructions
of a computer program 76 in a general-purpose or special-purpose processor 72 that
may be stored on a computer readable storage medium (disk, memory etc.) to be executed
by such a processor 72.
[0118] The processor 72 is configured to read from and write to the memory 74. The processor
72 may also comprise an output interface via which data and/or commands are output
by the processor 72 and an input interface via which data and/or commands are input
to the processor 72.
[0119] The memory 74 stores a computer program 76 comprising computer program instructions
(computer program code) that controls the operation of the apparatus 100 when loaded
into the processor 72. The computer program instructions, of the computer program
76, provide the logic and routines that enables the apparatus to perform the methods
illustrated in the Figs. The processor 72 by reading the memory 74 is able to load
and execute the computer program 76.
[0120] The apparatus 100 therefore comprises:
at least one processor 72; and
at least one memory 74 including computer program code
the at least one memory 74 and the computer program code configured to, with the at
least one processor 72, cause the apparatus 100 at least to perform:
detecting user input indicating a presence of a user-controlled obstruction 30;
receiving obstructed audio input 22' from at least one microphone 20 when a user is
providing an, at least partial, obstruction 30 between the at least one microphone
20 and a first region 16;
receiving unobstructed audio input 22 from the at least one microphone 20 when the
user is not providing the, at least partial, obstruction 30 between the at least one
microphone 20 and the first region 16;
comparing the obstructed audio input and the unobstructed audio input;
creating a frequency dependent filter 50 in dependence upon the comparison;
filtering received audio input 22 from the at least one microphone 20 to create filtered
audio that amplifies or attenuates an audio source in the first region 16.
[0121] As illustrated in Fig 9B, the computer program 76 may arrive at the apparatus 100
via any suitable delivery mechanism 78. The delivery mechanism 78 may be, for example,
a machine readable medium, a computer-readable medium, a non-transitory computer-readable
storage medium, a computer program product, a memory device, a record medium such
as a Compact Disc Read-Only Memory (CD-ROM) or a Digital Versatile Disc (DVD) or a
solid state memory, an article of manufacture that comprises or tangibly embodies
the computer program 76. The delivery mechanism may be a signal configured to reliably
transfer the computer program 76. The apparatus 100 may propagate or transmit the
computer program 76 as a computer data signal.
[0122] Computer program instructions for causing an apparatus to perform at least the following
or for performing at least the following:
detecting user input indicating a presence of a user-controlled obstruction 30;
receiving obstructed audio input 22' from at least one microphone 20 when a user is
providing an, at least partial, obstruction 30 between the at least one microphone
20 and a first region 16;
receiving unobstructed audio input 22 from the at least one microphone 20 when the
user is not providing the, at least partial, obstruction 30 between the at least one
microphone 20 and the first region 16;
comparing the obstructed audio input and the unobstructed audio input;
creating a frequency dependent filter 50 in dependence upon the comparison;
filtering received audio input 22 from the at least one microphone 20 to create filtered
audio that amplifies or attenuates an audio source in the first region 16.
[0123] The computer program instructions may be comprised in a computer program, a non-transitory
computer readable medium, a computer program product, a machine readable medium. In
some but not necessarily all examples, the computer program instructions may be distributed
over more than one computer program.
[0124] Although the memory 74 is illustrated as a single component/circuitry it may be implemented
as one or more separate components/circuitry some or all of which may be integrated/removable
and/or may provide permanent/semi-permanent/ dynamic/cached storage.
[0125] Although the processor 72 is illustrated as a single component/circuitry it may be
implemented as one or more separate components/circuitry some or all of which may
be integrated/removable. The processor 72 may be a single core or multi-core processor.
[0126] FIGs 10A, 10B, 10C, 11A, 11B are equivalent to previous FIGs 5A, 5B, 5C, 6A, 6B.
FIGs 10A, 10B, 10C, 11A illustrate that the frequency dependent filter 50 (illustrated
in FIG 6A) can be extended to apply amplification outside the range 42 at lower-frequency
harmonics. FIGs 10A, 10B, 10C, 11B illustrate that the frequency dependent filter
50 (illustrated in FIG 6B) can be extended to apply attenuation outside the range
42 at lower-frequency harmonics.
[0127] Fig 10A is an example of an unobstructed frequency spectrum that is a frequency spectrum
of an unobstructed audio input 22 produced by microphone(s) 20 that capture unobstructed
audio 12. The unobstructed frequency spectrum comprises a harmonic structure (H).
[0128] Fig 10B is an example of an obstructed frequency spectrum that is a frequency spectrum
of an obstructed audio input 22' produced by microphone(s) 20 that capture obstructed
audio 12'.
[0129] Fig 10C illustrates a difference 40 between the unobstructed frequency spectrum of
the unobstructed audio input 22 (FIG 10A) and the obstructed frequency spectrum of
the obstructed audio input 22' (FIG 10B). In this example, but not necessarily all
examples, the obstructed frequency spectrum is subtracted from the unobstructed frequency
spectrum. In this example, the difference 40 represents a frequency spectrum of the
audio from the first region 16 that has been obstructed. The frequency spectrum of
the audio from the first region 16 that has been obstructed has a range 42.
[0130] The frequency spectrum of the audio from the first region 16 that has been obstructed,
is converted to a frequency filter 50 in FIG 11A and 11B. The filter applies a gain
to the range 42 of the frequency spectrum and to the harmonics H outside the range
42.
[0131] In FIG 11A, the gain is greater than unity (>1) within the range 42 and at the harmonics
H that have a lower frequency than the range 42; it is otherwise less than unity (<1),
including at the harmonics H that have a higher frequency than the range 42. FIG 11A
therefore illustrates an amplification filter 50 that is configured to preferentially
amplify the frequency spectrum of the audio from the first region 16 that has been
obstructed (and its lower frequency harmonics).
[0132] In FIG 11B, the gain is less than unity (<1) within the range 42 and at the harmonics
H that have a lower frequency than the range 42; it is otherwise greater than unity
(>1), including at harmonics H that have a higher frequency than the range 42. FIG
11B therefore illustrates an attenuation filter 50 that is configured to preferentially
attenuate the frequency spectrum of the audio from the first region 16 that has been
obstructed (and its lower frequency harmonics).
[0133] It will therefore be appreciated that the frequency dependent filter 50 is configured
as a spatially dependent filter 50 that differentially amplifies or attenuates audio
sources 10 at different spatial positions (and their lower frequency harmonics).
[0134] The frequency dependent filter 50 is based on a difference 40 in spectrum of the
obstructed audio input 22' and the unobstructed audio input 22 caused by acoustic
shadowing by an obstruction 30. The obstruction 30 can for example be a hand 132 of
the user 130, that provides the, at least partial, obstruction 30 between the obstructed
microphone 20 and the first region 16.
[0135] The gain can be controlled by a user 130. For example, in some but not necessarily
all examples the user can select whether the filter 50 is an amplification filter
or an attenuation filter. For example, in some but not necessarily all examples the
user 130 can determine the magnitude of the gain, for example, the magnitude of amplification/attenuation.
The user input can, in some examples, be a gesture 134 of a hand 132
(see FIG 7). That gesture 134 can be part of or separate to a gesture used to place
the hand 132 as the obstruction 30.
[0136] In this example, the frequency dependent filter 50 can, for example, be based on
a difference in spectrum of the audio input 22 from the at least one microphone 20
caused by acoustic shadowing by a hand of the user. The difference can be a change
over time for one or more microphones 20 (time division). The difference can be a
difference between microphones 20 at the same time (spatial division).
[0137] The example described in relation to FIGs 10A-10C and FIGs 11A & 11B can be useful
if the target audio at the first region 16 contains very low frequencies. The very
low frequencies may not be affected by the acoustic shadowing caused by the hand.
[0138] The extension of the filter 50 to lower frequency harmonics can occur, for example,
if the unobstructed frequency spectrum (FIG 10A) has a harmonic structure that is
absent from the obstructed frequency spectrum (FIG 10B). For example, say that the
harmonic frequencies 200, 300, 400, 500, 600, 700, 800, 900, and 1000Hz are substantially
attenuated by hand shadowing. In this case the apparatus 100 can determine that the
target audio is most likely a harmonic sound having a fundamental frequency of 100Hz.
Even if the frequency 100Hz is not affected by the hand shadowing, the system can
nevertheless pick that frequency to be included in the frequency-dependent filter 50.
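This inference can be sketched as follows, assuming the centre frequencies of the shadowed bins have already been identified; the helper name 'infer_fundamental' and the tolerance are illustrative assumptions:

import numpy as np

def infer_fundamental(attenuated_freqs, tolerance_hz=5.0):
    # Guess the fundamental of a harmonic series from its shadowed partials,
    # e.g. 200, 300, ..., 1000Hz implies a fundamental of 100Hz.
    freqs = np.sort(np.asarray(attenuated_freqs, dtype=float))
    if freqs.size < 2:
        return None
    candidate = float(np.min(np.diff(freqs)))  # smallest gap between partials
    if candidate <= 0.0:
        return None
    # Accept only if every partial is close to a multiple of the candidate.
    multiples = np.round(freqs / candidate)
    if np.all(np.abs(freqs - multiples * candidate) <= tolerance_hz):
        return candidate
    return None

print(infer_fundamental([200, 300, 400, 500, 600, 700, 800, 900, 1000]))  # 100.0

The fundamental returned in this way can then be added to the frequency-dependent filter 50 even though that bin itself showed no shadowing.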
[0139] The term "frequency" can refer either to a single frequency or to a frequency band
with a certain width.
[0140] References to 'computer-readable storage medium', 'computer program product', 'tangibly
embodied computer program' etc. or a 'controller', 'computer', 'processor' etc. should
be understood to encompass not only computers having different architectures such
as single/multi-processor architectures and sequential (Von Neumann)/parallel architectures
but also specialized circuits such as field-programmable gate arrays (FPGA), application-specific
integrated circuits (ASIC), signal processing devices and other processing circuitry.
References to computer program, instructions, code etc. should be understood to encompass
software for a programmable processor or firmware such as, for example, the programmable
content of a hardware device whether instructions for a processor, or configuration
settings for a fixed-function device, gate array or programmable logic device etc.
[0141] As used in this application, the term 'circuitry' may refer to one or more or all
of the following:
- (a) hardware-only circuitry implementations (such as implementations in only analog
and/or digital circuitry) and
- (b) combinations of hardware circuits and software, such as (as applicable):
- (i) a combination of analog and/or digital hardware circuit(s) with software/firmware
and
- (ii) any portions of hardware processor(s) with software (including digital signal
processor(s)), software, and memory(ies) that work together to cause an apparatus,
such as a mobile phone or server, to perform various functions and
- (c) hardware circuit(s) and/or processor(s), such as a microprocessor(s) or a portion
of a microprocessor(s), that requires software (e.g. firmware) for operation, but
the software may not be present when it is not needed for operation.
This definition of circuitry applies to all uses of this term in this application,
including in any claims. As a further example, as used in this application, the term
circuitry also covers an implementation of merely a hardware circuit or processor
and its (or their) accompanying software and/or firmware. The term circuitry also
covers, for example and if applicable to the particular claim element, a baseband
integrated circuit for a mobile device or a similar integrated circuit in a server,
a cellular network device, or other computing or network device.
[0142] The blocks illustrated in the Figs may represent steps in a method and/or sections
of code in the computer program 76. The illustration of a particular order to the
blocks does not necessarily imply that there is a required or preferred order for
the blocks, and the order and arrangement of the blocks may be varied. Furthermore,
it may be possible for some blocks to be omitted.
[0143] Where a structural feature has been described, it may be replaced by means for performing
one or more of the functions of the structural feature whether that function or those
functions are explicitly or implicitly described.
[0144] The recording of data may comprise only temporary recording, or it may comprise permanent
recording, or it may comprise both temporary recording and permanent recording. Temporary
recording implies the recording of data temporarily. This may, for example, occur
during sensing or image capture, at a dynamic memory, or at a buffer such
as a circular buffer, a register, a cache or similar. Permanent recording implies
that the data is in the form of an addressable data structure that is retrievable
from an addressable memory space and can therefore be stored and retrieved until deleted
or over-written, although long-term storage may or may not occur. The use of the term
'capture' in relation to an image or audio relates to temporary recording of the data.
The use of the term 'record' or 'store' in relation to an image or audio relates to
permanent recording of the data.
[0145] The above described examples find application as enabling components of:
automotive systems; telecommunication systems; electronic systems including consumer
electronic products; distributed computing systems; media systems for generating or
rendering media content including audio, visual and audio visual content and mixed,
mediated, virtual and/or augmented reality; personal systems including personal health
systems or personal fitness systems; navigation systems; user interfaces also known
as human machine interfaces; networks including cellular, non-cellular, and optical
networks; ad-hoc networks; the internet; the internet of things; virtualized networks;
and related software and services.
[0146] The term 'comprise' is used in this document with an inclusive not an exclusive meaning.
That is, any reference to X comprising Y indicates that X may comprise only one Y or
may comprise more than one Y. If it is intended to use 'comprise' with an exclusive
meaning then it will be made clear in the context by referring to "comprising only
one.." or by using "consisting".
[0147] In this description, reference has been made to various examples. The description
of features or functions in relation to an example indicates that those features or
functions are present in that example. The use of the term 'example' or 'for example'
or 'can' or 'may' in the text denotes, whether explicitly stated or not, that such
features or functions are present in at least the described example, whether described
as an example or not, and that they can be, but are not necessarily, present in some
of or all other examples. Thus 'example', 'for example', 'can' or 'may' refers to
a particular instance in a class of examples. A property of the instance can be a
property of only that instance or a property of the class or a property of a sub-class
of the class that includes some but not all of the instances in the class. It is therefore
implicitly disclosed that a feature described with reference to one example but not
with reference to another example, can where possible be used in that other example
as part of a working combination but does not necessarily have to be used in that
other example.
[0148] Although examples have been described in the preceding paragraphs with reference
to various examples, it should be appreciated that modifications to the examples given
can be made without departing from the scope of the claims.
[0149] Features described in the preceding description may be used in combinations other
than the combinations explicitly described above.
[0150] Although functions have been described with reference to certain features, those
functions may be performable by other features whether described or not.
[0151] Although features have been described with reference to certain examples, those features
may also be present in other examples whether described or not.
[0152] The term 'a' or 'the' is used in this document with an inclusive not an exclusive
meaning. That is, any reference to X comprising a/the Y indicates that X may comprise
only one Y or may comprise more than one Y unless the context clearly indicates the
contrary. If it is intended to use 'a' or 'the' with an exclusive meaning then it
will be made clear in the context. In some circumstances the use of 'at least one'
or 'one or more' may be used to emphasize an inclusive meaning, but the absence of these
terms should not be taken to imply any exclusive meaning.
[0153] The presence of a feature (or combination of features) in a claim is a reference
to that feature (or combination of features) itself and also to features that achieve
substantially the same technical effect (equivalent features). The equivalent features
include, for example, features that are variants and achieve substantially the same
result in substantially the same way. The equivalent features include, for example,
features that perform substantially the same function, in substantially the same way
to achieve substantially the same result.
[0154] In this description, reference has been made to various examples using adjectives
or adjectival phrases to describe characteristics of the examples. Such a description
of a characteristic in relation to an example indicates that the characteristic is
present in some examples exactly as described and is present in other examples substantially
as described.
[0155] Whilst endeavoring in the foregoing specification to draw attention to those features
believed to be of importance it should be understood that the Applicant may seek protection
via the claims in respect of any patentable feature or combination of features hereinbefore
referred to and/or shown in the drawings whether or not emphasis has been placed thereon.