TECHNOLOGICAL FIELD
[0001] Embodiments of the present invention relate to controlling a sound object.
BACKGROUND
[0002] Spatial audio rendering involves the rendering of a sound scene comprising one or
more sound objects. A sound scene refers to a representation of a sound space listened
to from a particular point of view within the sound space. A sound object is a sound
that may be located within the sound space. A rendered sound object represents a sound
rendered from a particular position in the sound space.
[0003] Sound objects may, for example, be defined using multichannel audio signals according
to a defined standard such as, for example, binaural coding, 5.1 surround sound coding,
7.1 surround sound coding etc.
BRIEF SUMMARY
[0004] According to various, but not necessarily all, embodiments of the invention there
is provided a method comprising: generating a sound object that when rendered repeatedly
loops an audio segment; and causing movement of the sound object within a sound space
while the sound object is being rendered thereby causing rendering of the audio segment
at different positions within the sound space.
[0005] According to various, but not necessarily all, embodiments of the invention there
is provided examples as claimed in the appended claims.
BRIEF DESCRIPTION
[0006] For a better understanding of various examples that are useful for understanding
the detailed description, reference will now be made by way of example only to the
accompanying drawings in which:
Fig 1 illustrates an example of a method for controlling generation and rendering
of a sound object that when rendered repeatedly loops an audio segment;
Fig 2 illustrates an example of an audio segment that is being looped;
Fig 3 illustrates an example of movement of a sound object through a sound space while
the sound object is being rendered;
Figs 4A, 4B, 4C illustrate an example of generating a sound object, causing movement
of the sound object and rendering of the sound object while moving;
Figs 5A, 5B, 5C illustrate another example of generating a sound object, causing
movement of the sound object and rendering of the sound object while moving;
Figs 6A and 6B illustrate an example in which a trajectory of a moving sound object
is dynamically controlled by a user;
Figs 6C and 6D illustrate an example in which a trajectory of a moving sound object
is a closed spatial loop;
Fig 7 illustrates an example of an apparatus configured to perform the methods as
described in relation to Figs 1 to 6B; and
Fig 8 illustrates an example of a record medium.
DETAILED DESCRIPTION
[0007] The Figures illustrate example embodiments of the present invention that control
the rendering of a looping (repeating) audio segment 20. A sound object 10 is generated
for an audio segment 20. When the sound object 10 is rendered it loops (repeats) the
audio segment 20. Where the sound object 10 is rendered may be controlled. The audio
segment 20 can therefore be rendered at one or more different positions within the
sound space.
[0008] The sound object 10 may be moved within a sound space 12 while it is being rendered.
The audio segment 20 may loop (repeat) independently of its movement; it may loop
while moving and loop while stationary.
[0009] In this description "rendering" means providing in a form that is perceived by a
user. The rendering of a sound object may, for example, use a personal audio output
system such as headphones or a shared audio output system such as an arrangement of
loudspeakers. It is intended that the invention will have application not only to
existing systems and methods of rendering an audio object but also to future, as yet
unknown, systems and methods of rendering an audio object.
[0010] Fig 1 illustrates a method for controlling generation and rendering of a sound object
10 that when rendered repeatedly loops an audio segment 20. Fig 2 illustrates an example
of an audio segment 20 that is being looped 22. Fig 3 illustrates an example of a
sound object 10 being repeatedly rendered while being moved within a sound space 12.
This results in the rendering of the audio segment 20 at different positions 18 within
the sound space 12.
In more detail, Fig 1 illustrates an example of a method 100 for controlling generation
and rendering of a sound object 10.
[0011] The method 100 comprises, at block 102, generating a sound object 10 that when rendered
repeatedly loops an audio segment 20.
[0012] A sound object 10 that when rendered repeatedly loops an audio segment 20 is a looped
sound object 10. Reference to a sound object in this description is generally to a
looped sound object. The term 'generated' means that a looped sound object 10 is a
newly created sound object that did not have previous existence. It has independent
existence from any sound object associated with the original audio segment 20.
[0013] The method 100 additionally comprises, at block 104, causing movement 14 of the sound
object 10 within a sound space 12 while the sound object 10 is being rendered 16 thereby
causing rendering of the audio segment 20 at different positions 18 within the sound
space 12.
[0014] Fig 2 illustrates an example rendering 16 of the sound object 10. This involves rendering
an audio segment 20, in this example a music segment 20, and the repeated looping 22
of the audio segment 20.
[0015] The audio segment 20 has a defined content. When the audio segment is rendered this
content is rendered from start to finish producing the audio segment as an audio output.
Once the content has finished, the rendering of the content re-starts. In this way
the audio segment 20 is repeatedly looped 22.
[0016] The looping may be continuous without interruption. That is, once the content has
finished, the rendering of the content re-starts immediately. In this way the audio
segment is repeatedly and continuously looped 22.
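Purely by way of illustration, and not as part of any claimed embodiment, the continuous looping described above amounts to a modulo operation over the duration of the audio segment 20. The following minimal sketch assumes illustrative function and parameter names that do not correspond to any reference numeral:

```python
# Minimal sketch of continuous looping: which offset into the audio
# segment is being rendered at a given time. All names are illustrative.

def loop_offset(t: float, t_start: float, segment_duration: float) -> float:
    """Return the playback offset (seconds) into the looped segment at time t.

    Once the segment finishes, rendering restarts immediately, so the
    offset depends only on where t falls within the repeating loop.
    """
    return (t - t_start) % segment_duration

# Example: a 4-second riff started at t=10s is 2s into its third loop at t=20s.
assert loop_offset(20.0, 10.0, 4.0) == 2.0
```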
[0017] The sound object 10 when rendered repeatedly loops the audio segment 20. Audio characteristics
of the sound object may control how the audio segment 20 is rendered. For example,
they may control volume, tone, voice, tempo of the audio segment 20. The audio characteristics
may be fixed and unchanging while the audio segment 20 is repeatedly looped or, in
some examples, they may change dynamically while the audio segment 20 is repeatedly
looped. The audio characteristics (and changes in the audio characteristics) may be
determined automatically and/or determined by user input.
[0018] The audio characteristics of the sound object 10 may also determine if and how the
rendering of the sound object 10 is terminated so that the audio segment 20 is no
longer rendered. In some, but not necessarily all, examples the sound object 10 may
have a finite life determined, for example, by time, repetitions of the loop, distance
moved by the sound object 10, or position moved to by the sound object 10 or a similar
parameter, or any conditional combination of these or similar parameters. In other
examples, the sound object 10 may continue indefinitely until it is replaced or muted.
The muting may be done by fading the sound object 10 until it is muted.
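By way of illustration only, the finite-life conditions above may be gathered into a simple structure that is queried once per rendering frame. This is a minimal sketch; the class, field and parameter names are assumptions, not part of any described embodiment:

```python
# Illustrative sketch of finite-life conditions: termination by time,
# loop repetitions, or distance moved, or any combination of these.

from dataclasses import dataclass
from typing import Optional

@dataclass
class LoopedObjectLife:
    max_time: Optional[float] = None        # seconds since generation
    max_repetitions: Optional[int] = None   # completed loops of the segment
    max_distance: Optional[float] = None    # distance moved in the sound space

    def expired(self, elapsed: float, repetitions: int, distance: float) -> bool:
        """True once any configured limit has been reached."""
        return any([
            self.max_time is not None and elapsed >= self.max_time,
            self.max_repetitions is not None and repetitions >= self.max_repetitions,
            self.max_distance is not None and distance >= self.max_distance,
        ])

# An object limited to 8 loops of its audio segment:
life = LoopedObjectLife(max_repetitions=8)
assert life.expired(elapsed=30.0, repetitions=8, distance=1.5)
```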
[0019] Fig 3 illustrates an example of movement 14 of a sound object 10 through a sound
space 12 while the sound object 10 (and audio segment 20) is being rendered 16.
[0020] Where the audio segment 20 is rendered 16 (the position at which it is rendered)
at a particular time is the same as the position of the sound object 10 at that particular
time and depends upon the movement 14 of the sound object 10.
[0021] What portion of the audio segment 20 is rendered 16 at a particular time depends
upon where that time falls within the repeating loop of the audio segment 20. In particular
it depends upon the time, according to the tempo of the audio segment, from the last
start of rendering the audio segment 20.
[0022] In some examples, the sound object 10 is repeatedly and continuously looped as it
moves. In this case, what portion of the audio segment 20 is rendered at a particular
time depends only upon where that time falls within the continuously repeating loop
of the audio segment. The looping of the audio segment 20 is then independent of its
movement; it may loop while moving and loop while stationary.
[0023] Fig 3 schematically illustrates movement 14 of the sound object 10 within the
sound space 12 while the sound object 10 is being rendered 16 thereby causing rendering
of the music at different positions 18 within the sound space 12.
[0024] At the time illustrated in Fig 3, the sound object 10 has moved along a trajectory
40 within the sound space 12 defined by positions 18 and has reached position 18n,
from where the audio segment 20 is being rendered 16.
[0025] The movement of the sound object 10 along the trajectory 40 may be determined by
selection of one or more of the positions 18 along the trajectory 40. In some but
not necessarily all examples, the movement 14 of the sound object 10 along the trajectory
40 may be determined wholly or partly by selection of the end position 18T of the
trajectory 40.
[0026] In some examples, this selection may be automatic and in other examples this selection
may be controlled by a user.
[0027] In some examples, this selection is dynamically controlled while the sound object
10 is moving 14. This allows a dynamic change in direction of movement of the sound
object 10. In other examples the sound object 10 is not dynamically controlled.
[0028] The movement 14 of the sound object 10 may be controlled to be smoothly varying.
This may, for example, be achieved by modelling the movement 14 of the sound object
10 based on movement of an inertial object of constant speed using a physics engine.
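As a minimal illustrative sketch of such smoothly varying movement, assuming a simple per-frame update rather than a full physics engine, and with function and parameter names that are assumptions only:

```python
# Sketch: move the sound object toward a target at constant speed,
# updated once per frame, approximating an inertial object.

import math

def step_towards(pos, target, speed, dt):
    """Advance pos toward target at constant speed over a time step dt."""
    dx, dy, dz = (target[i] - pos[i] for i in range(3))
    dist = math.sqrt(dx * dx + dy * dy + dz * dz)
    if dist <= speed * dt:            # would overshoot: snap to target
        return target
    scale = speed * dt / dist
    return (pos[0] + dx * scale, pos[1] + dy * scale, pos[2] + dz * scale)

pos = (0.0, 0.0, 0.0)
for _ in range(100):                  # 100 frames at 10 ms toward (1, 0, 0)
    pos = step_towards(pos, (1.0, 0.0, 0.0), speed=2.0, dt=0.01)
assert pos == (1.0, 0.0, 0.0)
```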
[0029] In some but not necessarily all examples the audio segment 20 is a music segment.
The example embodiments in Figs 4A-4C, 5A-5C and 6A-6B describe an example that renders
a music segment but other audio segments may be rendered in alternative examples,
for example as described later in the use cases.
[0030] Figs 4A-4C, 5A-5C, 6A-6D illustrate examples of generating a sound object 10 that
when rendered repeatedly loops a music segment 20.
[0031] Fig 4A illustrates an example of generating a sound object 10 that when rendered
repeatedly loops a music segment 20. The sound object 10 is generated after a user
30 performs the audio segment 20 and in response to a user input 31.
[0032] In this example, the user depresses 31 a foot pedal 32 to indicate a start of a music
segment 20 and releases the foot pedal 32 to indicate an end of a music segment 20
after playing a riff on the guitar 34. The guitar riff then becomes the music segment
20 associated with the sound object 10, so that when the sound object 10 is rendered
the music segment 20 (guitar riff) is repeatedly rendered 16.
[0033] Fig 4B illustrates how a user 30 can cause movement 14 of the sound object 10 within
the sound space 12 along a trajectory 40 while the sound object 10 is being rendered
16. As illustrated in Fig 4C this subsequently causes rendering 16 of the music segment
20 at different positions 18 within the sound space 12.
[0034] In Fig 4B, the user selects a position 18T in the sound space 12, towards which
the sound object 10 moves 14. In other examples, the user selects a direction in the
sound space 12, towards which the sound object 10 moves 14. In this example, the user
30 selects where the sound object is rendered by performing a gesture 42, during a
first repeat of the music segment 20 caused by rendering the sound object 10, towards
a position 18T. In this example, the user 30 points towards an end-position 18T using
a head of the guitar 34 during the first repeat of the music segment 20. This
determines, at least in part, the trajectory 40 of the moving sound object 10 while
it is being rendered 16.
[0035] In this example, the sound object 10 begins to move along the trajectory 40 during,
for example at the start of, a second repeat of the looped music segment 20. The movement
14 of the sound object 10 may be at a constant speed or a slowly varying speed.
[0036] There may be different responses when the sound object 10 reaches the end position
18T and, in some examples, these responses may be controlled by a user 30.
[0037] Fig 5A illustrates an example of generating a sound object 10 that when rendered
repeatedly loops a music segment 20. The sound object 10 is generated after a user
30 performs the music segment 20 and in response to a user input 31. In this example,
the user depresses 31 a foot pedal 32 to indicate a start of a music segment 20 and
releases the foot pedal 32 to indicate an end of a music segment 20 after playing
a riff on the guitar 34. The guitar riff then becomes the music segment 20 associated
with the sound object 10, so that when the sound object 10 is rendered the music segment
20 (guitar riff) is repeatedly rendered 16.
[0038] Fig 5B illustrates how movement 14 of the sound object 10 within the sound space
12 along a trajectory 40 is automatically determined while the sound object 10 is
being rendered 16 after the user input 31. As illustrated in Fig 5C this subsequently
causes automatic rendering 16 of the music segment 20 at different positions 18 within
the sound space 12 while the sound object 10 is being rendered 16.
[0039] In Fig 5B, the position 18T in the sound space 12 towards which the sound object
10 moves 14, or a direction in the sound space 12 towards which the sound object 10
moves 14, is automatically determined.
[0040] In an example embodiment of Fig 5B, the user does not select a position 18 in the
sound space 12, towards which the sound object 10 moves 14. In this example, where
the sound object is rendered is determined automatically during a first repeat of
the music segment 20. The automatic determination selects at least one position 18
in the sound space 12, towards which the sound object 10 moves 14. The determination
of the trajectory 40 and/or positions 18 may occur automatically in dependence on
the current positions of other existing rendered sound objects in the sound space
12. For example, the sound object 10 may move to avoid other sound objects, or to
have maximum separation from adjacent sound objects, or to be maximally separated
from a particular sound object or to be further from some sound objects while closer
to other sound objects.
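One illustrative realisation of this automatic determination, sketched under the assumption that candidate positions are supplied by some external means, is to choose the candidate with maximum separation from the existing rendered sound objects. All names below are assumptions:

```python
# Sketch: automatically pick a position with maximum separation from
# the current positions of other existing rendered sound objects.

import math

def separation(p, others):
    """Minimum distance from candidate position p to any existing object."""
    return min(math.dist(p, o) for o in others) if others else math.inf

def best_position(candidates, existing_positions):
    return max(candidates, key=lambda p: separation(p, existing_positions))

existing = [(0.0, 0.0, 0.0), (2.0, 0.0, 0.0)]
candidates = [(1.0, 0.0, 0.0), (1.0, 3.0, 0.0)]
# The second candidate is farther from both existing objects.
assert best_position(candidates, existing) == (1.0, 3.0, 0.0)
```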
[0041] The trajectory 40 of the moving sound object 10 while it is being rendered 16 is
determined, at least in part, automatically.
[0042] In this example, the sound object 10 begins to move along the trajectory 40 during,
for example at the start of, a second repeat of the looped music segment 20. The movement
14 of the sound object 10 may be at a constant speed or a slowly varying speed.
[0043] Figs 6A, 6B illustrate that a trajectory 40 of a moving sound object 10 may, in some
examples, be controlled by a user 30 while the sound object 10 is being rendered 16.
[0044] In the example of Fig 6A and 6B, the trajectory 40 of the moving sound object 10
while it is being rendered 16 changes dynamically in response to a gesture 42 from
the user 30. In this example, the trajectory 40 follows the head of the guitar 34.
When the guitar head points in a direction away from the current position of the sound
object 10, the sound object 10 moves 14 by changing its velocity to have an increasing
component towards where the guitar head is pointing. The greater the angular separation
between the current position of the sound object 10 and the direction in which the
guitar head points, the greater the rate at which the component of velocity toward
where the guitar head points increases. Thus in Fig 6A, the guitar head points upwards and
the sound object 10 moves upwards and when the guitar head points downwards, in Fig
6B, the sound object 10 moves downwards.
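The steering behaviour described above may, purely as an illustrative sketch, be expressed in vector form. The helper, gain constant and function names below are assumptions and not part of any described embodiment:

```python
# Sketch: add a velocity component toward the pointing direction, growing
# faster as the angular separation increases.

import math

def _norm(v):
    """Normalise a 3D vector (assumed non-zero)."""
    m = math.sqrt(sum(c * c for c in v))
    return tuple(c / m for c in v)

def steer(velocity, object_dir, pointing_dir, gain, dt):
    """Grow the velocity component toward pointing_dir at a rate
    proportional to the angle between the direction to the sound object
    and the direction in which the guitar head points."""
    od, pd = _norm(object_dir), _norm(pointing_dir)
    cos_angle = max(-1.0, min(1.0, sum(a * b for a, b in zip(od, pd))))
    rate = gain * math.acos(cos_angle)   # larger angle, faster change
    return tuple(v + rate * dt * p for v, p in zip(velocity, pd))

# Guitar head pointing up while the object moves sideways: the velocity
# acquires an increasing upward component.
v = steer((1.0, 0.0, 0.0), object_dir=(1.0, 0.0, 0.0),
          pointing_dir=(0.0, 1.0, 0.0), gain=2.0, dt=0.1)
assert v[1] > 0.0
```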
[0045] Figs 6C, 6D illustrate how the movement of a sound object 10 along a trajectory
40 may be controlled by a user 30. In this example, the sound object 10 circumscribes
(orbits) the trajectory 40 while moving along the trajectory, repeatedly looping the
music segment 20. When the sound object 10 reaches the end position 18T of the trajectory
40 it forms a spatial loop (orbit) at the end position 18T. In some but not necessarily
all examples, the time it takes the sound object 10 to complete the spatial loop may
be the same as (or an integer multiple of) the duration of the audio segment 20 that
is repeatedly looped.
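A minimal sketch of such a spatial loop, assuming a circular orbit in a fixed plane (an assumed simplification) with period equal to one loop of the audio segment; names are illustrative:

```python
# Sketch: offset the on-trajectory position by a circular orbit whose
# period matches the duration of the looped audio segment.

import math

def orbiting_position(path_point, t, segment_duration, radius):
    """Orbit the trajectory point with phase locked to the audio loop."""
    phase = 2.0 * math.pi * (t % segment_duration) / segment_duration
    x, y, z = path_point
    return (x + radius * math.cos(phase), y + radius * math.sin(phase), z)

# At t equal to a whole number of segment durations the orbit phase is zero.
p = orbiting_position((5.0, 0.0, 1.0), t=8.0, segment_duration=4.0, radius=0.5)
assert p == (5.5, 0.0, 1.0)
```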
[0046] In the example of Figs 6C and 6D, the trajectory 40 is defined as previously described,
for example, in relation to Figs 4A-4C, 5A-5C or 6A-6B. In this example, the sound
object 10 is defined by the user as a spatially looping sound object 10 by a gesture
42 from the user 30. In this example, the spatial looping of the sound object 10 follows
the head of the guitar 34. The guitar head is a pointer that writes a closed spatial
loop. The user 30 then throws the sound object 10 into the trajectory 40 with a flick
of the head of the guitar 34 and the sound object 10 moves 14 in the trajectory 40
while being rendered 16 as a spatial loop that circumscribes (orbits) the trajectory
40 as illustrated in Fig 6D.
[0047] This may be repeated a number of times by defining different music sequences and
by throwing the spatially looping sound objects 10 for those looping music sequences
into the same or different trajectories.
[0048] In some examples, some or all of the spatial loops may be partially or wholly pre-defined
in shape and/or size and/or orientation. In some examples, some or all of the spatial
loops may be partially or wholly defined in shape and/or size and/or orientation by
a user before the sound object 10 is thrown into the trajectory 40. In some examples,
some or all of the spatial loops may be partially or wholly defined in shape and/or
size and/or orientation by a user after the sound object 10 is thrown.
[0049] In the preceding examples, the movement 14 of the sound object 10 along the trajectory
40 may be determined wholly or partly by selection of the end position 18T of the
trajectory 40 or a direction in the sound space 12 towards which the sound object
10 moves 14. The end position 18T or direction may be selected by a user or selected
automatically.
[0050] In some examples, the sound space 12 is divided into predetermined spatial regions
which may, for example, be of arbitrary volume.
[0051] The user 30 selects where the sound object is rendered by performing a gesture 42
that selects a predetermined spatial region that defines the end-position 18T. The
gesture 42 may, for example, be performed during a first repeat of the music segment
20 caused by rendering the sound object 10. The gesture 42 may, for example, comprise
pointing a head of a guitar 34.
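One illustrative way of mapping such a pointing gesture to a predetermined spatial region, sketched under the assumption that each region is represented by its centre point, with all names being assumptions:

```python
# Sketch: select the predetermined spatial region whose centre is best
# aligned with the pointing direction of the gesture.

import math

def _norm(v):
    m = math.sqrt(sum(c * c for c in v))
    return tuple(c / m for c in v)

def select_region(origin, pointing_dir, region_centres):
    """Return the region centre most closely aligned with the gesture
    (largest cosine of the angle between the two directions)."""
    pd = _norm(pointing_dir)

    def alignment(centre):
        d = _norm(tuple(c - o for c, o in zip(centre, origin)))
        return sum(a * b for a, b in zip(d, pd))

    return max(region_centres, key=alignment)

regions = [(3.0, 0.0, 0.0), (0.0, 3.0, 0.0), (-3.0, 0.0, 0.0)]
# Pointing roughly along +x selects the first region as the end-position.
assert select_region((0.0, 0.0, 0.0), (1.0, 0.1, 0.0), regions) == (3.0, 0.0, 0.0)
```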
[0052] If there are no predetermined spatial regions, or the selection of a predetermined
spatial region is ambiguous, the end position 18T may, in some examples, be controlled
by a user 30 while the sound object 10 is being rendered 16. The sound object 10,
while it is being rendered 16 as a repeatedly looping music segment 20, moves along
the trajectory 40. When it has reached a desired end position 18T the user performs
a gesture that defines the end position 18T. For example, the user may rapidly move
the head of the guitar 34 up and down when the sound object 10 has reached the desired
end position 18T.
[0053] As previously described, in addition or alternatively to controlling the position
of the sound object 10 while it is being rendered, the audio characteristics of the
rendered sound object 10 as it moves through the sound space may be controlled by
the user. For example, the user may, by activating a foot pedal, select and change
an audio characteristic.
[0054] In some but not necessarily all examples, a user 30 is able to assign properties
to a region of the sound space 12. These properties are then inherited automatically
by other sound objects 10 that are rendered in that region. The properties may, for
example, include characteristics of trajectories 40 of moving sound objects 10 in
that region such as speed, trajectory size, trajectory shape etc. The properties may,
for example, also or alternatively include audio characteristics of moving sound objects
10 in that region such as volume, tone, voice, tempo etc. A user 30
may, in some examples, assign properties to a region of the sound space 12 automatically
by locating a first sound object 10 in that region. The properties of that first sound
object 10 may or may not have been determined wholly or partly by the user. Some or
all of the properties of that first sound object 10 are then inherited by sound objects
10 later positioned in that region. If a first sound object 10 has a particular trajectory
40 within a region with inheritable properties (including inherited trajectory 40),
then when a second sound object 10 joins the first sound object 10 in that region
the first and second sound objects may move with a common trajectory 40 but with different
phase offsets; they may, for example, be evenly distributed along the length of the
same trajectory 40. This may be repeated when additional sound objects 10 join the
other sound objects 10 in the region. In an alternative embodiment, it may be desirable
to place the added sound objects on top of each other (no phase offset) to augment
the sound.
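The phase-offset distribution described above can be sketched as follows, assuming (as an illustration only) that the shared trajectory is modelled as a parametric function of a phase in [0, 1); all names are assumptions:

```python
# Sketch: place n sound objects on one shared trajectory, either evenly
# phase-offset along its length or stacked with no offset to augment the
# sound.

import math

def positions_on_shared_trajectory(trajectory, n_objects, t, period,
                                   stacked=False):
    """Return the positions of n objects sharing one trajectory."""
    base_phase = (t % period) / period
    offsets = ([0.0] * n_objects if stacked
               else [i / n_objects for i in range(n_objects)])
    return [trajectory((base_phase + o) % 1.0) for o in offsets]

def circle(u):
    """A closed circular trajectory parametrised by phase u in [0, 1)."""
    return (math.cos(2 * math.pi * u), math.sin(2 * math.pi * u), 0.0)

# Three sound objects evenly distributed along the same inherited trajectory.
a, b, c = positions_on_shared_trajectory(circle, 3, t=0.0, period=8.0)
```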
[0055] In some but not necessarily all examples, the user may be able to replace an existing
sound object 10 with another sound object that when rendered repeatedly loops a different
audio segment. This may be achieved by generating a sound object as described above
in relation to Figs 4A and 4B and then removing the current sound object 10 while starting
rendering of the new sound object 10. In the examples of Figs 4A, 4B, the user may
control which sound object 10 is replaced by pointing towards that sound object 10.
The control of positioning the new sound object then continues as described above
for Figs 4B, 4C and 5B, 5C. In other examples, the replacement looped sound object
10 inherits some or all the properties of the replaced sound object such as position
and/or trajectory.
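Purely as an illustrative sketch of such replacement with inheritance, using a data structure whose name and fields are assumptions rather than any described implementation:

```python
# Sketch: replace an existing looped sound object with a new one that
# loops a different audio segment, inheriting position and trajectory.

from dataclasses import dataclass, field

@dataclass
class SoundObject:
    segment: str                        # identifier of the looped audio segment
    position: tuple = (0.0, 0.0, 0.0)   # current position in the sound space
    trajectory: list = field(default_factory=list)

def replace_sound_object(old: SoundObject, new_segment: str) -> SoundObject:
    """Generate the replacement object, inheriting position and trajectory
    so it continues rendering from where the replaced object was."""
    return SoundObject(segment=new_segment,
                       position=old.position,
                       trajectory=list(old.trajectory))

riff = SoundObject("riff_1", position=(2.0, 0.0, 1.0))
new = replace_sound_object(riff, "riff_2")
assert new.position == riff.position
```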
[0056] Examples of applications of the above described methods can be better understood
with reference to the following use cases.
[0057] In the first and second use cases the user 30 produces audio to be looped. In accordance
with the method 100, a new looped sound object 10 will be generated and the loop will
start to play automatically. After this, for example on the first replay of the loop,
the user 30 selects where the sound object will be rendered.
[0058] In a first use case the user is a musician performing live or recording a music track.
The user 30 produces music using his or her voice or a musical instrument. In the
example illustrated in Figs 4A-4C, 5A-5C & 6A-6B the user 30 plays a guitar 34. The
user performs a guitar riff to be looped. The user 30 may indicate the audio segment
20 to be looped, e.g. by pressing a foot pedal 32 or some other input. The new sound
object 10 will be generated and the loop will start to play automatically.
[0059] After this, on the first replay of the loop (Fig 4B), the user 30 selects a spatial
region (e.g. position 18
T) by pointing with his guitar. The loop will be subsequently moved to that spatial
region (Fig 4C).
[0060] During the second replay of the loop (Fig 6A, 6B), the user 30 moves his guitar to
start moving the sound object 10 in the sound space 12 dynamically.
[0061] The loop continues to play in the background while the rendering sound object 10
smoothly moves according to a trajectory 40 provided at that time by the user 30 (Fig
6A, 6B). The user 30 may provide the trajectory 40 in real time, for example, by moving
the guitar. The user 30 may select the trajectory 40 from a collection of presets
for example by making a certain gesture with the guitar. Preset gestures are mapped
to preset trajectories 40. Also by making a gesture, the guitarist can mute/remove
the loop.
[0062] In a second use case the user 30 is a person speaking, for example a reporter, presenter,
actor etc. The user 30 can say a phrase and cause it to be looped. The new looped
sound object 10 will be generated and the loop will start to play automatically. Then,
during the first replay of the loop, the user can assign a spatial location around
(or on top of) an associated object such that the looped sound object 10 dynamically
tracks the associated object, changing the trajectory 40. In other words, the sound
object 10 may follow a moving sound object or a visual object. In some examples, an
origin for the trajectory 40 of the rendering looped sound object 10 may be determined
by the associated object and the trajectory 40 may be defined relative to that moving
origin. For example, the rendering looped sound object 10 may orbit the associated
object.
[0063] In a third use case, the user 30 may compare different audio segments by looping
those audio segments while controlling spatially where those audio segments are looped.
In accordance with the method 100, the user 30 can generate a looped sound object
10 that loops an audio segment 20 for each audio segment and then render those sound
objects 10 at different positions 18. This may enable comparison between the different
audio segments 20 associated with the differently located looped sound objects 10.
[0064] This may be particularly useful for dialogue replacement. In a studio session, a
first looped sound object 10 is generated for an original phrase from an actor's original
performance and is rendered at a position 18 that is not distracting. This is done
according to the method 100. The actor hears the original looped phrase produced by
the rendering first looped sound object 10 and simultaneously speaks the phrase again,
which is recorded. A second looped sound object 10 is generated for the newly recorded
phrase and is rendered at a position 18 that allows comparison with the original phrase.
This is done according to the method 100. Thus, the actor and director can compare
multiple looped takes (audio segments 20) to determine which is the best one. Since
the actor or director is able to reposition the first take (first looped sound object
10) and the second take (second looped sound object 10) the takes have spatial separation
enabling them to be listened to simultaneously.
[0065] This may also be of particular use in a rehearsal situation where an actor is practicing
reciting lines (dialogue).
[0066] In a fourth use case, the generating of independent sound objects 10 that loop audio
segments 20 and their independent positioning according to the method 100 may also
be used in post-recording production, for example during spatial audio mixing. A user
interface may have "pop-up" track controllers for each of the looped sound objects
10, which appear in the user interface automatically when the sound object 10 is being
rendered.
[0067] Fig 7 illustrates an example of an apparatus 202 configured to perform the methods
as described above in relation to Figs 1 to 6B.
[0068] The apparatus 202 comprises a controller 200 configured to provide output to an audio
output device 204.
[0069] The audio output device 204 may be a device for rendering spatial audio or may be
an interface for communicating with another apparatus that is configured to render
spatial audio or configured to store data for subsequent rendering of spatial audio.
[0070] The audio output device 204 may be, for example, a personal audio output system such
as headphones or a shared audio output system such as an arrangement of loudspeakers
or some other system for rendering an audio object.
[0071] Sound objects 10 may be defined using multichannel audio signals according to a defined
standard such as, for example, binaural coding, 5.1 surround sound coding, 7.1 surround
sound coding etc.
[0072] The controller 200 is configured to generate the sound object 10 that when rendered
repeatedly loops an audio segment 20.
[0073] The audio output device 204 enables the controller to cause movement 14 of a sound
object 10 within a sound space 12 while the sound object 10 is being rendered 16, thereby
causing rendering of the audio segment 20 at different positions 18 within the sound
space 12.
[0074] The apparatus 202 in some but not necessarily all examples comprises an input device
206. The input device 206 may be a device for enabling a user to provide a command
input to the controller 200.
[0075] In some but not necessarily all examples the input device 206 may be or comprise
a foot pedal 32 (as described in relation to Figs 4A, 5A).
[0076] In some but not necessarily all examples the input device 206 may be or comprise
a gesture detection device for detecting a user gesture 42 (for example, as described
in relation to Fig 4B and Figs 6A & 6B). The gesture detection device may comprise
an activity sensor, movement sensor, accelerometer, gyroscope, or the like. The gesture
detection device may be wirelessly connected to the controller 200. The wireless
connection may be a low power RF connection, such as a Bluetooth™ connection.
[0077] Gestures 42 may be detected in a number of ways. For example, depth sensors may be
used to detect movement of parts of a user 30 (or instrument) and/or image sensors
may be used to detect movement of parts of a user 30 (or instrument) and/or positional/movement
sensors attached to a limb of a user 30 (or instrument) may be used to detect movement
of the limb (or instrument).
[0078] Object tracking may be used to determine when an object or user changes position.
For example, tracking the object on a large macro-scale allows one to create a frame
of reference that moves with the object. That frame of reference can then be used
to track time-evolving changes of shape of the object, by using temporal differencing
with respect to the object. This can be used to detect small scale human motion such
as gestures, hand movement, finger movement, facial movement.
[0079] The apparatus 202 may track a plurality of objects and/or points in relation to a
user's body, for example one or more joints of the user's body (or an instrument).
In some examples, the apparatus 202 may perform full body skeletal tracking of a user's
body. The tracking of one or more objects and/or points in relation to a user's body
may be used by the apparatus 202 in gesture recognition.
[0080] A gesture may be static or moving. A moving gesture may comprise a movement or a
movement pattern comprising a series of movements. For example, it could be making
a circling motion, a side-to-side or up-and-down motion, or the tracing of a sign
in space. A moving gesture may involve movement of a user input object e.g. a user
body part or parts, or a further apparatus (e.g. an instrument), relative to sensors.
The body part may comprise the user's hand or part of the user's hand such as one
or more fingers and thumbs. In other examples, the user input object may comprise
a different part of the body of the user such as their head or arm. Three-dimensional
movement may comprise motion of the user input object in any of six degrees of freedom.
The motion may comprise the user input object moving towards or away from the sensors
as well as moving in a plane parallel to the sensors or any combination of such motion.
A gesture may be a non-contact gesture. A non-contact gesture does not contact the
sensors at any time during the gesture.
[0081] A gesture may be defined as evolution of displacement, of a tracked point relative
to an origin, with time. It may, for example, be defined in terms of motion using
time variable parameters such as displacement, velocity or using other kinematic parameters.
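As one concrete, purely illustrative instance of a gesture defined by the evolution of displacement with time, the rapid up-and-down movement used in the example of paragraph [0052] may be detected by counting reversals of the vertical velocity of a tracked point. The function name and thresholds below are assumptions:

```python
# Sketch: detect a rapid up-and-down gesture from sampled vertical
# displacements of a tracked point relative to an origin.

def is_up_down_shake(heights, dt, min_reversals=4, min_speed=0.5):
    """Count reversals of fast vertical velocity in the displacement samples."""
    velocities = [(b - a) / dt for a, b in zip(heights, heights[1:])]
    fast = [v for v in velocities if abs(v) >= min_speed]
    reversals = sum(1 for u, v in zip(fast, fast[1:]) if u * v < 0)
    return reversals >= min_reversals

# Tracked heights (metres) of a guitar head sampled every 50 ms while shaking:
heights = [0.0, 0.2, 0.0, 0.2, 0.0, 0.2, 0.0]
assert is_up_down_shake(heights, dt=0.05)
```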
[0082] A gesture may be performed in one spatial dimension (1D gesture), two spatial dimensions
(2D gesture) or three spatial dimensions (3D gesture).
[0083] Implementation of the controller 200 may be as controller circuitry. The controller
200 may be implemented in hardware alone, have certain aspects in software including
firmware alone or can be a combination of hardware and software (including firmware).
[0084] In the above descriptions, reference to a user 'controlling' may be considered to
be a reference to a user providing a command input via the input device.
[0085] In the above descriptions, reference to a process being 'automatic' should be understood
to mean that the process is performed by the controller 200 without further additional
input from the user.
[0086] As illustrated in Fig 7 the controller 200 may be implemented using instructions
that enable hardware functionality, for example, by using executable instructions
of a computer program 222 in a general-purpose or special-purpose processor 210 that
may be stored on a computer readable storage medium (disk, memory etc) to be executed
by such a processor 210.
[0087] The processor 210 is configured to read from and write to the memory 220. The processor
210 may also comprise an output interface via which data and/or commands are output
by the processor 210 and an input interface via which data and/or commands are input
to the processor 210.
[0088] The memory 220 stores a computer program 222 comprising computer program instructions
(computer program code) that controls the operation of the apparatus 202 when loaded
into the processor 210. The computer program instructions, of the computer program
222, provide the logic and routines that enable the apparatus to perform the methods
illustrated in Figs 1 to 6. The processor 210 by reading the memory 220 is able to
load and execute the computer program 222.
[0089] The apparatus 202 therefore comprises:
at least one processor 210; and
at least one memory 220 including computer program code;
the at least one memory 220 and the computer program code configured to, with the
at least one processor 210, cause the apparatus 202 at least to perform:
generating a sound object 10 that when rendered repeatedly loops an audio segment
20; and
causing movement of the sound object 10 within a sound space 12 while the sound object
10 is being rendered 16 thereby causing rendering of the audio segment 20 at different
positions 18 within the sound space 12.
[0090] As illustrated in Fig 8, the computer program 222 may arrive at the apparatus 202
via any suitable delivery mechanism 230. The delivery mechanism 230 may be, for example,
a non-transitory computer-readable storage medium, a computer program product, a memory
device, a record medium such as a compact disc read-only memory (CD-ROM) or digital
versatile disc (DVD), an article of manufacture that tangibly embodies the computer
program 222. The delivery mechanism may be a signal configured to reliably transfer
the computer program 222. The apparatus 202 may propagate or transmit the computer
program 222 as a computer data signal.
[0091] Although the memory 220 is illustrated as a single component/circuitry it may be
implemented as one or more separate components/circuitry some or all of which may
be integrated/removable and/or may provide permanent/semi-permanent/ dynamic/cached
storage.
[0092] Although the processor 210 is illustrated as a single component/circuitry it may
be implemented as one or more separate components/circuitry some or all of which may
be integrated/removable. The processor 210 may be a single core or multi-core processor.
[0093] References to 'computer-readable storage medium', 'computer program product', 'tangibly
embodied computer program' etc. or a 'controller', 'computer', 'processor' etc. should
be understood to encompass not only computers having different architectures such
as single /multi- processor architectures and sequential (Von Neumann)/parallel architectures
but also specialized circuits such as field-programmable gate arrays (FPGA), application
specific circuits (ASIC), signal processing devices and other processing circuitry.
References to computer program, instructions, code etc. should be understood to encompass
software for a programmable processor or firmware such as, for example, the programmable
content of a hardware device whether instructions for a processor, or configuration
settings for a fixed-function device, gate array or programmable logic device etc.
[0094] As used in this application, the term 'circuitry' refers to all of the following:
- (a) hardware-only circuit implementations (such as implementations in only analog
and/or digital circuitry) and
- (b) to combinations of circuits and software (and/or firmware), such as (as applicable):
(i) to a combination of processor(s) or (ii) to portions of processor(s)/software
(including digital signal processor(s)), software, and memory(ies) that work together
to cause an apparatus, such as a mobile phone or server, to perform various functions
and
- (c) to circuits, such as a microprocessor(s) or a portion of a microprocessor(s),
that require software or firmware for operation, even if the software or firmware
is not physically present. This definition of 'circuitry' applies to all uses of this
term in this application, including in any claims. As a further example, as used in
this application, the term "circuitry" would also cover an implementation of merely
a processor (or multiple processors) or portion of a processor and its (or their)
accompanying software and/or firmware. The term "circuitry" would also cover, for
example and if applicable to the particular claim element, a baseband integrated circuit
or applications processor integrated circuit for a mobile phone or a similar integrated
circuit in a server, a cellular network device, or other network device.
[0095] The blocks and processes illustrated in and described with reference to the Figs
1 to 6B may represent steps in a method and/or sections of code in the computer program
222. The illustration of a particular order to the blocks does not necessarily imply
that there is a required or preferred order for the blocks and the order and arrangement
of the blocks may be varied. Furthermore, it may be possible for some blocks to be
omitted.
[0096] Where a structural feature has been described, it may be replaced by means for performing
one or more of the functions of the structural feature whether that function or those
functions are explicitly or implicitly described.
[0097] As used here 'module' refers to a unit or apparatus that excludes certain parts/components
that would be added by an end manufacturer or a user. The controller 200 may be a
module.
[0098] The term 'comprise' is used in this document with an inclusive not an exclusive meaning.
That is, any reference to X comprising Y indicates that X may comprise only one Y or
may comprise more than one Y. If it is intended to use 'comprise' with an exclusive
meaning then it will be made clear in the context by referring to "comprising only
one" or by using "consisting".
[0099] In this brief description, reference has been made to various examples. The description
of features or functions in relation to an example indicates that those features or
functions are present in that example. The use of the term 'example' or 'for example'
or 'may' in the text denotes, whether explicitly stated or not, that such features
or functions are present in at least the described example, whether described as an
example or not, and that they can be, but are not necessarily, present in some of
or all other examples. Thus 'example', 'for example' or 'may' refers to a particular
instance in a class of examples. A property of the instance can be a property of only
that instance or a property of the class or a property of a sub-class of the class
that includes some but not all of the instances in the class. It is therefore implicitly
disclosed that a feature described with reference to one example but not with reference
to another example can, where possible, be used in that other example but does not
necessarily have to be used in that other example.
[0100] Although embodiments of the present invention have been described in the preceding
paragraphs with reference to various examples, it should be appreciated that modifications
to the examples given can be made without departing from the scope of the invention
as claimed.
[0101] Features described in the preceding description may be used in combinations other
than the combinations explicitly described.
[0102] Although functions have been described with reference to certain features, those
functions may be performable by other features whether described or not.
[0103] Although features have been described with reference to certain embodiments, those
features may also be present in other embodiments whether described or not.
[0104] Whilst endeavoring in the foregoing specification to draw attention to those features
of the invention believed to be of particular importance it should be understood that
the Applicant claims protection in respect of any patentable feature or combination
of features hereinbefore referred to and/or shown in the drawings whether or not particular
emphasis has been placed thereon.
1. A method comprising:
generating a sound object that when rendered repeatedly loops an audio segment; and
causing movement of the sound object within a sound space while the sound object is
being rendered thereby causing rendering of the audio segment at a plurality of different
positions within the sound space.
2. A method as claimed in claim 1 further comprising selecting a position or direction
in the sound space, wherein movement of the sound object, within the sound space,
is towards the selected position or in the selected direction.
3. A method as claimed in claim 1 or 2, wherein the user selects at least one position
where the sound object is rendered.
4. A method as claimed in claim 1, 2 or 3, wherein the user selects at least one position
where the sound object is rendered by performing a gesture towards the position.
5. A method as claimed in any preceding claim, wherein the user selects at least one
position where the sound object is rendered by providing a user input during a first
repeated loop of the audio segment caused by rendering the sound object.
6. A method as claimed in any preceding claim, wherein at least one position where the
sound object is rendered is selected automatically.
7. A method as claimed in claim 2, wherein the selection of the position is determined
automatically in dependence on the current positions of existing rendered sound objects
in the sound space.
8. A method as claimed in any preceding claim, wherein the sound object begins to move
during a second repeated loop of the audio segment caused by rendering the sound
object.
9. A method as claimed in any preceding claim, wherein the movement of the sound object
is at a constant speed or a slowly varying speed.
10. A method as claimed in any preceding claim, wherein a trajectory of the sound object
moving through the sound space, causing repeated looping of the audio segment at different
positions within the sound space, is controlled by the user.
11. A method as claimed in any preceding claim, wherein the audio characteristics of the
rendered sound object as it moves through the sound space, causing repeated looping
of the audio segment at different positions within the sound space, are controlled
by the user.
12. A method as claimed in any preceding claim, wherein the sound object when rendered
automatically renders the audio segment in a looped fashion until it is replaced or
muted.
13. A method as claimed in any preceding claim further comprising generating an additional
sound object that when rendered repeatedly loops a different audio segment; and causing
movement of the additional sound object within the sound space while the additional
sound object is being rendered thereby causing repeated looping of the different audio
segment at different positions within the sound space.
14. A method as claimed in any preceding claim, wherein the sound object is generated
after a user performs the audio segment and in response to a user input
and/or wherein the sound object is defined by the user as a spatially looping sound
object.
15. An apparatus comprising means for performing the method of any preceding claim and/or
a computer program that when loaded into a processor enables the method of any preceding
claim.