TECHNICAL FIELD
[0001] The present application relates generally to capturing content. More specifically,
the present application relates to controlling at least one functionality of an apparatus
for capturing content.
BACKGROUND
[0002] The amount of multimedia content continues to grow. Users create and consume
multimedia content, and it plays a significant role in modern society.
SUMMARY
[0003] Various aspects of examples of the invention are set out in the claims. The scope
of protection sought for various embodiments of the invention is set out by the independent
claims. The examples and features, if any, described in this specification that do
not fall under the scope of the independent claims are to be interpreted as examples
useful for understanding various embodiments of the invention.
[0004] According to a first aspect of the invention, there is provided an apparatus comprising
means for performing: receiving orientation information relating to an orientation
of an audio device operatively connected to an apparatus, determining, based on the
orientation information, an orientation of the audio device with respect to the apparatus,
determining, based on the orientation of the audio device with respect to the apparatus,
a direction of capturing content by the apparatus, and controlling at least one functionality
of the apparatus for capturing content in the determined direction.
[0005] According to a second aspect of the invention, there is provided a method comprising
receiving orientation information relating to an orientation of an audio device operatively
connected to an apparatus, determining, based on the orientation information, an orientation
of the audio device with respect to the apparatus, determining, based on the orientation
of the audio device with respect to the apparatus, a direction of capturing content
by the apparatus, and controlling at least one functionality of the apparatus for
capturing content in the determined direction.
[0006] According to a third aspect of the invention, there is provided a computer program
comprising instructions for causing an apparatus to perform at least the following:
receiving orientation information relating to an orientation of an audio device operatively
connected to an apparatus, determining, based on the orientation information, an orientation
of the audio device with respect to the apparatus, determining, based on the orientation
of the audio device with respect to the apparatus, a direction of capturing content
by the apparatus, and controlling at least one functionality of the apparatus for
capturing content in the determined direction.
[0007] According to a fourth aspect of the invention, there is provided an apparatus comprising
at least one processor and at least one memory including computer program code, the
at least one memory and the computer program code configured to, with the at least
one processor, cause the apparatus at least to: receive orientation information relating
to an orientation of an audio device operatively connected to the apparatus, determine,
based on the orientation information, an orientation of the audio device with respect
to the apparatus, determine, based on the orientation of the audio device with respect
to the apparatus, a direction for capturing content by the apparatus, and control
at least one functionality of the apparatus for capturing content in the determined
direction.
[0008] According to a fifth aspect of the invention, there is provided a non-transitory
computer readable medium comprising program instructions for causing an apparatus
to perform at least the following: receiving orientation information relating to an
orientation of an audio device operatively connected to an apparatus, determining,
based on the orientation information, an orientation of the audio device with respect
to the apparatus, determining, based on the orientation of the audio device with respect
to the apparatus, a direction of capturing content by the apparatus, and controlling
at least one functionality of the apparatus for capturing content in the determined
direction.
[0009] According to a sixth aspect of the invention, there is provided a computer readable
medium comprising program instructions for causing an apparatus to perform at least
the following: receiving orientation information relating to an orientation of an
audio device operatively connected to an apparatus, determining, based on the orientation
information, an orientation of the audio device with respect to the apparatus, determining,
based on the orientation of the audio device with respect to the apparatus, a direction
of capturing content by the apparatus, and controlling at least one functionality
of the apparatus for capturing content in the determined direction.
BRIEF DESCRIPTION OF THE DRAWINGS
[0010] Some example embodiments will now be described with reference to the accompanying
drawings:
Figure 1 shows a block diagram of an example apparatus in which examples of the disclosed
embodiments may be applied;
Figure 2 shows a block diagram of another example apparatus in which examples of the
disclosed embodiments may be applied;
Figure 3 illustrates an example of capturing content;
Figure 4 illustrates another example of capturing content;
Figure 5 illustrates a further example of capturing content;
Figure 6 illustrates a yet further example of capturing content;
Figure 7 illustrates an example method; and
Figure 8 illustrates another example method.
DETAILED DESCRIPTION
[0011] The following embodiments are exemplifying. Although the specification may refer
to "an", "one", or "some" embodiment(s) in several locations of the text, this does
not necessarily mean that each reference is made to the same embodiment(s), or that
a particular feature only applies to a single embodiment. Single features of different
embodiments may also be combined to provide other embodiments.
[0012] Example embodiments relate to an apparatus configured to receive orientation
information relating to an orientation of an audio device operatively
connected to the apparatus, determine, based on the orientation information, an orientation
of the audio device with respect to the apparatus, determine, based on the orientation
of the audio device with respect to the apparatus, a direction for capturing content
by the apparatus, and control at least one functionality of the apparatus for capturing
content in the determined direction.
[0013] Some example embodiments relate to controlling content capture options with an audio
device such as an ear bud. Some example embodiments relate to capturing spatial audio
using an audio device such as an ear bud.
[0014] Spatial audio may comprise a full sphere surround-sound to mimic the way people perceive
audio in real life. Spatial audio may comprise audio that appears from a user's position
to be assigned to a certain direction and/or distance. Therefore, the perceived audio
may change with the movement of the user or with the user turning. Spatial audio may
comprise audio created by sound sources, ambient audio or a combination thereof. Ambient
audio may comprise audio that might not be identifiable in terms of a sound source
such as traffic humming, wind or waves, for example. The full sphere surround-sound
may comprise a spatial audio field and the position of the user or the position of
the capturing device may be considered as a reference point in the spatial audio field.
According to an example embodiment, a reference point comprises the centre of the
audio field.
[0015] Spatial audio may be captured with, for example, a capturing device comprising a
plurality of microphones configured to capture audio signals around the capturing
device. In addition to capturing audio signals, the capturing device may also be configured
to capture different types of information such as one or more parameters relating
to the captured audio signals and/or visual information. The captured parameters may
be stored with the captured audio or in a separate file. A capturing device may be,
for example, a camera, a video recorder or a smartphone.
[0016] Spatial audio may comprise one or more parameters such as an audio focus parameter
and/or an audio zoom parameter. An audio parameter may comprise a parameter value
with respect to a reference point such as the position of the user or the position
of the capturing device. Modifying a spatial audio parameter value may cause a change
in spatial audio perceived by a listener.
[0017] An audio focus feature allows a user to focus on audio in a desired direction with
respect to other directions when capturing content and/or playing back content. Therefore,
an audio focus feature also allows a user to at least partially eliminate background
noises. When capturing content, the direction of sound is captured in addition to
the audio itself. A direction of sound may be defined with respect to a reference
point. For example, a direction of sound may comprise an angle with respect to a reference
point or a discrete direction such as front, back, left, right, up and/or down with
respect to a reference point, or a combination thereof. The reference point may correspond
to, for example, a value of 0 degrees or no audio focus direction in which case, at
the reference point, the audio comprises surround sound with no audio focus. An audio
focus parameter may also comprise one or more further levels of detail such as horizontal
focus direction and/or vertical focus direction.
[0018] An audio zoom feature allows a user to zoom in on a sound. Zooming in on a sound
comprises adjusting an amount of audio gain associated with a particular direction.
Therefore, an audio zoom parameter corresponds to sensitivity to a direction of sound.
Audio zoom may be performed using audio beamforming with which a user may be able
to control, for example, the size, shape and/or direction of the audio beam. Performing
audio zooming may comprise controlling audio signals coming from a particular direction
while attenuating audio signals coming from other directions. For example, an audio
zoom feature may allow controlling audio gain. Audio gain may comprise an amount of
gain set to audio input signals coming from a certain direction. An audio zoom parameter
value may be defined with respect to a reference point. For example, an audio zoom
parameter may be a percentage value and the reference point may correspond to, for
example, a value of 0 % in which case, at the reference point, the audio comprises
surround sound with no audio zooming. As another example, an audio zoom feature may
allow delaying different microphone signals differently and then summing the signals
up, thereby enabling spatial filtering of audio.
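By way of illustration only, the following Python sketch shows one simple way in which delaying different microphone signals differently and summing them up may be realised as a delay-and-sum operation. The sketch assumes a far-field sound source, integer-sample delays and hypothetical inputs (a microphone signal matrix, microphone positions and a focus direction); it is not a definitive implementation of any embodiment.

    import numpy as np

    def delay_and_sum(mic_signals, mic_positions, focus_direction, sample_rate, speed_of_sound=343.0):
        """Sum microphone signals after time-aligning them for a chosen focus direction.

        mic_signals: array of shape (num_mics, num_samples)
        mic_positions: array of shape (num_mics, 3), metres, relative to the array centre
        focus_direction: unit vector pointing towards the sound to be emphasised
        """
        mic_signals = np.asarray(mic_signals)
        num_mics, num_samples = mic_signals.shape
        # Delay (in samples) that time-aligns each microphone for the chosen direction.
        delays = np.asarray(mic_positions) @ np.asarray(focus_direction) / speed_of_sound * sample_rate
        delays -= delays.min()                      # keep all delays non-negative
        output = np.zeros(num_samples)
        for signal, delay in zip(mic_signals, delays):
            shift = int(round(delay))
            # Shift each signal so that sound from the focus direction adds coherently.
            output[shift:] += signal[:num_samples - shift] if shift > 0 else signal
        return output / num_mics

Sound arriving from the focus direction adds coherently after the shifts, whereas sound from other directions is partially cancelled, which corresponds to the spatial filtering of audio described above.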
[0019] Audio zooming may be associated with zooming visual information. For example, if
a user records a video and zooms in on an object, the audio may also be zoomed in
on the object such that, for example, sound generated by the object is emphasized
and other sounds are attenuated. In other words, spatial audio parameters may be controlled
by controlling the video zoom.
[0020] Mobile phones are increasingly used in professional audio/video capture due to their
increased capabilities. In some situations, it would be useful to have additional microphones,
such as a close-up microphone placed near a sound source. However, using a plurality
of microphones may be challenging in terms of organizing the capture, for example
choosing which microphone is to be used or in which ratio the microphones are mixed.
[0021] Figure 1 is a block diagram depicting an apparatus 100 operating in accordance with
an example embodiment of the invention. The apparatus 100 may be, for example, an
electronic device such as a chip or a chipset. The apparatus 100 comprises one or
more control circuitries, such as at least one processor 110 and at least one memory
160, including one or more algorithms such as computer program code 120, wherein the
at least one memory 160 and the computer program code 120 are configured, with the
at least one processor 110, to cause the apparatus 100 to carry out any of the example
functionalities described below.
[0022] In the example of Figure 1, the processor 110 is a control unit operatively connected
to read from and write to the memory 160. The processor 110 may also be configured
to receive control signals via an input interface, and/or the processor 110
may be configured to output control signals via an output interface. In an example
embodiment the processor 110 may be configured to convert the received control signals
into appropriate commands for controlling functionalities of the apparatus 100.
[0023] The at least one memory 160 stores computer program code 120 which, when loaded into
the processor 110, controls the operation of the apparatus 100 as explained below. In
other examples, the apparatus 100 may comprise more than one memory 160 or different
kinds of storage devices.
[0024] Computer program code 120 for enabling implementations of example embodiments of
the invention or a part of such computer program code may be loaded onto the apparatus
100 by the manufacturer of the apparatus 100, by a user of the apparatus 100, or by
the apparatus 100 itself based on a download program, or the code can be pushed to
the apparatus 100 by an external device. The computer program code 120 may arrive
at the apparatus 100 via an electromagnetic carrier signal or be copied from a physical
entity such as a computer program product, a memory device or a record medium such
as a Compact Disc (CD), a Compact Disc Read-Only Memory (CD-ROM), a Digital Versatile
Disk (DVD) or a Blu-ray disk.
[0025] Figure 2 is a block diagram depicting an apparatus 200 in accordance with an example
embodiment of the invention. The apparatus 200 may be an electronic device such as
a hand-portable device, a mobile phone or a Personal Digital Assistant (PDA), a Personal
Computer (PC), a laptop, a desktop, a tablet computer, a wireless terminal, a communication
terminal, a game console, a music player, an electronic book reader (e-book reader),
a positioning device, a digital camera, a household appliance, a CD, DVD or Blu-ray
player, or a media player.
[0026] According to an example embodiment, the apparatus 200 comprises a mobile computing
device. According to another example embodiment, the apparatus 200 comprises a part
of a mobile computing device. The mobile computing device may comprise, for example,
a mobile phone, a tablet computer, or the like. In the examples below it is assumed
that the apparatus 200 is a mobile computing device or a part of it.
[0027] In the example embodiment of Figure 2, the apparatus 200 is illustrated as comprising
the apparatus 100, a microphone array 210, one or more loudspeakers 230 and a user
interface 220 for interacting with the apparatus 200 (e.g. a mobile computing device).
The apparatus 200 may also comprise a display configured to act as a user interface
220. For example, the display may be a touch screen display. In an example embodiment,
the display and/or the user interface 220 may be external to the apparatus 200, but
in communication with it.
[0028] The microphone array 210 comprises a plurality of microphones for capturing audio.
The plurality of microphones may be configured to work together to capture audio,
for example, at different sides of a device. For example, a microphone array comprising
two microphones may be configured to capture audio from a right side and a left side
of a device.
[0029] According to an example embodiment, the apparatus 200 is configured to apply one
or more audio focus operations to emphasize audio signals arriving from a particular
direction and/or attenuate sounds coming from other directions and/or one or more
audio zoom operations to switch between focused and non-focused audio, for example,
in conjunction with a camera.
[0030] The apparatus 200 is further configured to perform one or more spatial filtering
methods for achieving audio focus and/or audio zoom. The one or more spatial filtering
methods may comprise, for example, beamforming and/or parametric spatial audio.
[0031] Beamforming may comprise forming an audio beam by selecting a particular microphone
arrangement for capturing spatial audio information from a first direction and/or
attenuating sounds coming from a second direction and processing the received audio
information. In other words, a microphone array may be used to form a spatial filter
which is configured to extract a signal from a specific direction and/or reduce contamination
of signals from other directions.
[0032] Parametric spatial audio processing comprises analysing a spatial audio field into
a directional component associated with a direction-of-arrival parameter and an ambient
component without a direction-of-arrival parameter, and changing the direction of arrival
at which directional signal components are enhanced.
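Purely as an illustrative sketch, the following Python function shows how a single time-frequency tile of a parametrically analysed audio field might be re-synthesised so that directional components whose direction-of-arrival parameter lies close to a desired focus direction are enhanced. The tile values, angles and gain are hypothetical assumptions, not a definitive parametric spatial audio implementation.

    def enhance_direction(directional, ambient, doa_deg, focus_deg, beam_width_deg=30.0, gain=2.0):
        """Re-synthesise one time-frequency tile from its directional and ambient components.

        directional: tile value of the analysed directional component
        ambient: tile value of the ambient component (carries no direction-of-arrival parameter)
        doa_deg: direction-of-arrival parameter estimated for the directional component
        focus_deg: direction at which directional signal components are to be enhanced
        """
        # Smallest angular deviation between the estimated DOA and the focus direction.
        deviation = abs((doa_deg - focus_deg + 180.0) % 360.0 - 180.0)
        weight = gain if deviation <= beam_width_deg else 1.0 / gain
        # The ambient component is passed through unchanged.
        return weight * directional + ambient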
[0033] According to an example embodiment, the apparatus 200 is configured to control a
direction of audio focus. Controlling a direction of audio focus may comprise, for
example, changing a direction of an audio beam with respect to a reference point in
a spatial audio field. For example, changing a direction of an audio beam may comprise
changing the direction of the audio beam from a first direction to a second direction.
When the audio beam is directed to a first direction, audio signals from that direction
are emphasized and when the audio beam is directed to a second direction, audio signals
from that direction are emphasized.
[0034] The apparatus 200 may be configured to control a direction of an audio beam by switching
from a first microphone arrangement to a second microphone arrangement, by processing
the captured audio information using an algorithm with different parameters and/or
using a different algorithm for processing the captured audio information. For example,
in the case of a Delay-Sum beamformer, the beam direction steering can be accomplished
by adjusting the values of steering delays so that signals arriving from a particular
direction are aligned before they are summed.
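A minimal Python sketch of such steering-delay adjustment for a uniform linear microphone array is given below; the spacing, angles and sample rate are assumptions chosen only to show that re-pointing the beam amounts to recomputing the delays applied before summation.

    import numpy as np

    def steering_delays(mic_spacing_m, num_mics, steer_angle_deg, sample_rate, speed_of_sound=343.0):
        """Per-microphone steering delays (in samples) for a Delay-Sum beamformer.

        Signals arriving from steer_angle_deg become time-aligned when delayed by these
        values, so they sum constructively while other directions are attenuated.
        """
        angle = np.deg2rad(steer_angle_deg)
        mic_index = np.arange(num_mics)
        # Extra propagation path per microphone for a plane wave from steer_angle_deg.
        delays_s = mic_index * mic_spacing_m * np.sin(angle) / speed_of_sound
        delays = delays_s * sample_rate
        return delays - delays.min()                # non-negative sample delays

    # Changing the beam from a first direction to a second direction only changes the delays.
    first_direction = steering_delays(0.02, 3, 0.0, 48000)
    second_direction = steering_delays(0.02, 3, 60.0, 48000)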
[0035] Referring back to Figure 2, the user interface 220 may also comprise a manually operable
control such as a button, a key, a touch pad, a joystick, a stylus, a pen, a roller,
a rocker, a keypad, a keyboard or any suitable input mechanism for inputting and/or
accessing information. Further examples include a camera, a speech recognition system,
an eye movement recognition system, and acceleration-, tilt- and/or movement-based input
systems. Therefore, the apparatus 200 may also comprise different kinds of sensors
such as one or more gyro sensors, accelerometers, magnetometers, position sensors
and/or tilt sensors.
[0036] According to an example embodiment, the apparatus 200 is configured to establish
radio communication with another device using, for example, a Bluetooth, WiFi, radio
frequency identification (RFID), or a near field communication (NFC) connection. For
example, the apparatus 200 may be configured to establish radio communication with
an audio device 250 such as a wireless headphone, augmented/virtual reality device
or the like.
[0037] According to an example embodiment, the apparatus 200 is operatively connected to
an audio device 250. According to an example embodiment, the apparatus 200 is wirelessly
connected to the audio device 250. For example, the apparatus 200 may be connected
to the audio device 250 over a Bluetooth connection, or the like. Similarly to the
apparatus 200, the audio device 250 may comprise at least one microphone array for
capturing audio signals and at least one loudspeaker for playing back received audio
signals. The audio device 250 may further be configured to filter out background noise
and/or detect in-ear placement. The audio device 250 may comprise a headphone such
as a wireless headphone.
[0038] A headphone may comprise a single headphone such as an ear bud or a pair of headphones
such as a pair of ear buds configured to function as a pair. For example, the audio
device 250 may comprise a first wireless headphone and a second wireless headphone
such that the first wireless headphone and the second wireless headphone are configured
to function as a pair. Functioning as a pair may comprise, for example, providing
stereo output for a user using the first wireless headphone and the second wireless
headphone. The first wireless headphone and the second wireless headphone may also
be configured such that the first wireless headphone and the second wireless headphone
may be used separately and/or independently of each other. For example, same or different
audio information may be provided to the first wireless headphone and the second wireless
headphone, or audio information may be directed to one wireless headphone and the
second wireless headphone may act as a microphone.
[0039] According to an example embodiment, the apparatus 200 is configured to communicate
with the audio device 250. Communicating with the audio device 250 may comprise providing
to and/or receiving information from the audio device 250. According to an example
embodiment, communicating with the audio device 250 comprises providing audio signals
and/or receiving audio signals. For example, the apparatus 200 may be configured to
provide audio signals to the audio device 250 and receive audio signals from the audio
device 250.
[0040] According to an example embodiment, the apparatus 200 is configured to determine
an orientation and/or a position of the audio device 250 with respect to the apparatus
200.
[0041] According to an example embodiment, the apparatus 200 is configured to determine
the orientation and/or position of the audio device 250 using Bluetooth technology
such as Bluetooth Low Energy (BLE). According to an example embodiment, the audio
device 250 comprises at least one Bluetooth antenna for transmitting data and the
apparatus 200 comprises an array of phased antennas for receiving and/or transmitting
data. The array of phased antennas may be configured to measure the angle-of-departure
(AoD) or angle-of-arrival (AoA) comprising measuring both azimuth and elevation angles.
[0042] When performing AoA measurement, the apparatus 200 may be configured to execute antenna
switching when receiving an AoA packet from the audio device 250. The apparatus 200
may then utilize the amplitude and phase samples together with its own antenna array
information to estimate the AoA of a packet received from the audio device 250.
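As a simplified, hypothetical sketch only, the following Python function estimates a single-axis angle of arrival from the phase samples of two receive antennas. Real Bluetooth AoA processing additionally handles antenna-switching timing, frequency offsets and two-dimensional (azimuth and elevation) antenna arrays; the carrier frequency and antenna spacing used here are assumptions.

    import numpy as np

    def estimate_aoa_deg(phase_a_rad, phase_b_rad, antenna_spacing_m, frequency_hz=2.44e9, c=3.0e8):
        """Angle of arrival implied by the phase difference between two antennas.

        phase_a_rad, phase_b_rad: phases sampled from the received tone while the
        receiver switches between two antennas separated by antenna_spacing_m.
        """
        wavelength = c / frequency_hz
        # Wrap the phase difference into (-pi, pi] before converting it to a path difference.
        phase_diff = (phase_b_rad - phase_a_rad + np.pi) % (2.0 * np.pi) - np.pi
        sin_theta = phase_diff * wavelength / (2.0 * np.pi * antenna_spacing_m)
        return float(np.degrees(np.arcsin(np.clip(sin_theta, -1.0, 1.0))))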
[0043] Performing AoD measurement may be based on the apparatus 200 broadcasting the AoD
signals together with the location and properties of a Bluetooth beacon, which enables
the audio device 250 to calculate its own position.
[0044] According to another example embodiment, the apparatus 200 is configured to determine
the orientation and/or position of the audio device 250 using acoustic localization.
Acoustic localization may comprise receiving microphone signals comprising recorded
signals from the surroundings and determining a time difference of arrival (TDoA) between
microphones. The TDoA may be determined based on inter-microphone delays, which may be
estimated through correlations and the geometry of a microphone array.
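The following Python sketch illustrates, under far-field assumptions and with hypothetical inputs, how an inter-microphone delay may be obtained through correlation and converted into a bearing using the geometry (here simply the spacing) of a microphone array.

    import numpy as np

    def tdoa_seconds(signal_a, signal_b, sample_rate):
        """Delay of signal_a relative to signal_b, estimated via cross-correlation.

        A positive value means microphone B received the sound before microphone A.
        """
        correlation = np.correlate(signal_a, signal_b, mode="full")
        lag = int(np.argmax(correlation)) - (len(signal_b) - 1)
        return lag / sample_rate

    def bearing_deg(tdoa_s, mic_distance_m, speed_of_sound=343.0):
        """Far-field bearing (degrees from the array broadside) implied by a TDoA value."""
        sin_theta = np.clip(speed_of_sound * tdoa_s / mic_distance_m, -1.0, 1.0)
        return float(np.degrees(np.arcsin(sin_theta)))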
[0045] According to a further example embodiment, the apparatus 200 is configured to determine
a position of the audio device 250 using Global Positioning System (GPS) coordinates
and Bluetooth technology to determine a relative location of the audio device 250.
[0046] According to an example embodiment, the apparatus 200 is configured to receive orientation
information relating to an orientation of the audio device 250 operatively connected
to the apparatus 200. The orientation information may comprise measurement data indicating
the orientation of the audio device 250 or a signal based on which the orientation
of the audio device 250 may be determined. For example, the orientation information
may comprise information received from one or more orientation sensors comprised by
the audio device 250. The orientation information may comprise raw data or pre-processed
data. As another example, the orientation information may comprise a Bluetooth signal.
[0047] According to an example embodiment, the apparatus 200 is configured to determine,
based on the orientation information, an orientation of the audio device 250 with
respect to the apparatus 200.
[0048] Determining the orientation of the audio device 250 with respect to the apparatus
200 may comprise comparing the orientation of the audio device 250 with an orientation
of the apparatus 200. For example, the apparatus 200 may be configured to compare
measurement data indicating an orientation of the audio device 250 with measurement
data indicating an orientation of the apparatus 200.
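Reduced to a single yaw angle, such a comparison may look like the Python sketch below, in which both devices are assumed to report headings in a shared reference frame; this is an illustrative assumption rather than a requirement of any embodiment.

    def relative_orientation_deg(device_heading_deg, apparatus_heading_deg):
        """Orientation of the audio device with respect to the apparatus as a signed yaw angle.

        Both headings are assumed to be measured by the respective orientation sensors
        and expressed in the same reference frame, in degrees.
        """
        # Wrap the difference into (-180, 180] so that 350 vs 10 degrees yields -20, not 340.
        difference = (device_heading_deg - apparatus_heading_deg) % 360.0
        return difference - 360.0 if difference > 180.0 else difference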
[0049] As another example, the apparatus 200 may be configured to determine the orientation
of the audio device 250 with respect to the apparatus 200 based on characteristics
of a wireless connection between the apparatus 200 and the audio device 250. For example,
assuming the apparatus 200 and the audio device are wirelessly connected using a Bluetooth
connection, the apparatus 200 may be configured to determine the orientation of the
audio device 250 with respect to the apparatus 200 based on a Bluetooth signal using
a Bluetooth Angle of Arrival (AoA) or a Bluetooth Angle of Departure (AoD) algorithm.
[0050] According to an example embodiment, the apparatus 200 is configured to determine
a direction of the audio device 250 with respect to the apparatus 200. The apparatus
200 may be configured to determine the direction of the audio device with respect
to the apparatus 200 based on the orientation information relating to the audio device
250 and the apparatus 200 or based on characteristics of a wireless connection between
the apparatus 200 and the audio device 250.
[0051] According to an example embodiment, determining the orientation of the audio device
250 with respect to the apparatus 200 comprises determining a pointing direction of
the audio device 250.
[0052] A pointing direction comprises a direction to which the audio device 250 points.
The audio device 250 may be associated with a reference point that is used for determining
the pointing direction. For example, assuming the audio device 250 comprises a wireless
ear bud with a pointed end that is determined as a reference point, the pointing
direction may be determined based on the direction to which that end of the ear bud points.
[0053] According to an example embodiment, the apparatus 200 is configured to determine,
based on the orientation of the audio device 250 with respect to the apparatus 200,
a direction for capturing content by the apparatus 200. The direction for capturing
content by the apparatus 200 may comprise an absolute direction, a direction relative
to the apparatus 200, an approximate direction or a direction within predefined threshold
values.
[0054] According to an example embodiment, a direction for capturing content by the apparatus
200 comprises a direction corresponding to the orientation of the audio device 250.
According to an example embodiment, a direction corresponding to the orientation of
the audio device 250 comprises a pointing direction of the audio device 250.
[0055] The apparatus 200 may be configured to determine a direction for capturing content
by the apparatus 200 in response to receiving information on an activation of the
audio device 250 or in response to receiving information on a change of orientation
of the audio device. An activation of the audio device 250 may comprise a change of
mode of the audio device 250. For example, assuming the audio device 250 comprises
an ear bud, activation of the audio device based on a change of mode of the ear bud
may comprise removing the ear bud from an in-ear position. A change of orientation
may comprise a change of orientation that is above a predefined threshold value.
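One possible, purely illustrative way to express this trigger logic in Python is sketched below; the event string and the threshold value are assumptions used only to show the two conditions described above.

    def should_update_capture_direction(event, previous_yaw_deg, current_yaw_deg, threshold_deg=15.0):
        """Decide whether a new direction for capturing content should be determined.

        event: hypothetical description of what the audio device reported, e.g.
        "removed_from_ear" when an ear bud leaves the in-ear position.
        """
        if event == "removed_from_ear":              # activation via a change of mode
            return True
        change = abs((current_yaw_deg - previous_yaw_deg + 180.0) % 360.0 - 180.0)
        return change > threshold_deg                # change of orientation above a threshold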
[0056] According to an example embodiment, the apparatus 200 is configured to control at
least one functionality of the apparatus 200 for capturing content in the determined
direction. According to an example embodiment, capturing content comprises capturing
content using the apparatus 200. According to an example embodiment, capturing content
comprises capturing video content comprising audio and visual information. According
to another example embodiment, capturing content comprises capturing audio content.
According to a further example embodiment, capturing content comprises capturing visual
content.
[0057] Controlling a functionality relating to capturing content may comprise controlling
capturing audio and/or controlling capturing visual information. For example, controlling
a functionality relating to capturing content may comprise controlling one or more
microphones and/or one or more cameras.
[0058] The apparatus 200 may be configured to control at least one functionality of the
apparatus 200 by controlling a component of the apparatus 200. According to an example
embodiment, controlling the at least one functionality of the mobile computing device
comprises activating a camera. The apparatus 200 may comprise a plurality of cameras.
According to an example embodiment, the camera comprises a first camera located on
a first side of the apparatus 200. According to an example embodiment, the camera comprises
a second camera located on a second side of the apparatus 200. The first camera and
the second camera may be configured to record images/video at opposite sides of the
apparatus 200. The first camera may comprise, for example, a front camera and the
second camera may comprise, for example, a back camera.
[0059] According to an example embodiment, activating the camera comprises activating a
camera comprising a field of view in a direction corresponding to the direction for
capturing content. A field of view of a camera may comprise a scene that is visible
through the camera at a particular position and orientation when taking a picture
or recording video. Objects outside the field of view when the picture is taken may
not be recorded.
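For instance, selecting between a front camera and a back camera based on the determined direction could be sketched in Python as follows; the camera names, field-of-view values and angle convention are illustrative assumptions.

    def select_camera(capture_direction_deg, cameras):
        """Return the camera whose field of view covers the determined capture direction.

        cameras: mapping of camera name to (centre_deg, field_of_view_deg), with
        directions expressed relative to the apparatus (0 degrees = back camera axis).
        """
        for name, (centre_deg, fov_deg) in cameras.items():
            deviation = abs((capture_direction_deg - centre_deg + 180.0) % 360.0 - 180.0)
            if deviation <= fov_deg / 2.0:
                return name
        return None                                   # no camera covers the direction

    cameras = {"back": (0.0, 80.0), "front": (180.0, 80.0)}
    print(select_camera(170.0, cameras))              # -> "front"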
[0060] According to an example embodiment, controlling the at least one functionality of
the mobile computing device comprises controlling at least one microphone array.
[0061] Controlling a microphone array may comprise controlling the microphone array to capture
audio in a particular direction. Capturing audio in a particular direction may comprise
performing an audio focus operation by, for example, forming a directional beam pattern
towards the particular direction. Beamforming in terms of using a particular beam
pattern enables a microphone array to be more sensitive to sound coming from one or
more particular directions than sound coming from other directions.
[0062] According to an example embodiment, controlling the at least one microphone array
comprises controlling the at least one microphone array to focus audio in the direction
for capturing content. Focusing audio in the direction for capturing content may comprise
performing spatial filtering such as performing beamforming or parametric spatial
audio processing.
[0063] The at least one microphone array may comprise a microphone array comprised
by the apparatus 200 or a microphone array comprised by the audio device
250. According to an example embodiment, the at least one microphone array comprises
a microphone array of the audio device 250 or a microphone array of the apparatus
200.
[0064] The apparatus 200 may be configured to select a microphone array to be controlled.
For example, the apparatus 200 may be configured to select a microphone array closest
to a capturing target such as a sound source. Therefore, the apparatus 200 may be
configured to select the microphone array to be controlled based on the respective
positions of at least a first microphone array and a second microphone array.
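A minimal Python sketch of such a selection, assuming the positions of the capturing target and of the candidate microphone arrays are available in a common coordinate frame, is shown below; the names and coordinates are hypothetical.

    import math

    def closest_microphone_array(target_position, array_positions):
        """Select the microphone array closest to the capturing target.

        target_position: (x, y) position of the capturing target, e.g. a sound source
        array_positions: mapping of array name, e.g. "apparatus" or "audio_device",
        to its (x, y) position in the same coordinate frame
        """
        return min(array_positions, key=lambda name: math.dist(target_position, array_positions[name]))

    arrays = {"apparatus": (0.0, 0.0), "audio_device": (1.0, 0.5)}
    print(closest_microphone_array((1.2, 0.6), arrays))   # -> "audio_device"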
[0065] According to an example embodiment, the apparatus 200 is configured to receive position
information relating to a position of the audio device 250. Position information relating
to the position of the audio device 250 may comprise, for example, measurement data
indicating the position of the audio device 250, coordinates indicating the position
of the audio device and/or a signal such as a Bluetooth signal based on which the
position of the audio device 250 may be determined.
[0066] According to an example embodiment, the apparatus 200 is configured to determine,
based on the position information, a position of the audio device 250 with respect
to the apparatus 200.
[0067] Without limiting the scope of the claims, an advantage of determining a position
of the audio device 250 with respect to the apparatus 200 is that the apparatus 200
may determine which of the audio device 250 and the apparatus 200 is closer to a capturing
target, thereby enabling better audio quality.
[0068] According to an example embodiment, the apparatus 200 is configured to determine
the microphone array closest to the capturing target based on the position of the
audio device 250 with respect to the apparatus 200. According to an example embodiment,
the apparatus 200 is configured to control the microphone array closest to the capturing
target.
[0069] The apparatus 200 may be configured to allow a user to control audio capture
using at least one microphone array. According to an example embodiment, the apparatus
200 is configured to provide a user interface on the mobile computing device for controlling
the capturing of content.
[0070] According to an example embodiment, the apparatus 200 comprises means for performing
the features of the claimed invention, wherein the means for performing comprises
at least one processor 110, at least one memory 160 including computer program code
120, the at least one memory 160 and the computer program code 120 configured to,
with the at least one processor 110, cause the performance of the apparatus 200. The
means for performing the features of the claimed invention may comprise means for
receiving orientation information relating to an orientation of an audio device operatively
connected to an apparatus, means for determining, based on the orientation information,
an orientation of the audio device with respect to the apparatus, means for determining,
based on the orientation of the audio device with respect to the apparatus, a direction
of capturing content by the apparatus, and means for controlling at least one functionality
of the apparatus for capturing content in the determined direction.
[0071] The apparatus 200 may further comprise means for receiving position information relating
to a position of the audio device, means for determining, based on the position information,
a position of the audio device with respect to the apparatus 200 and means for determining
the microphone array closest to the capturing target based on the position of the
audio device with respect to the apparatus 200. The apparatus 200 may further comprise
means for providing a user interface on the apparatus to control capturing content.
[0072] Figure 3 illustrates an example of capturing content. In the example of Figure 3,
the apparatus 200 is a mobile computing device configured to communicate with the
audio device 250 such as an ear bud and both the apparatus 200 and the audio device
250 comprise at least one microphone array for capturing audio.
[0073] The apparatus 200 is configured to determine a position and/or orientation of the
audio device 250 using Bluetooth technology, acoustic localization and/or GPS coordinates
together with Bluetooth technology.
[0074] In the example of Figure 3, a first user 301 interviews a second user 302 that is
a capturing target. The first user 301 records the interview by capturing video of
the interview using the apparatus 200 and the audio device 250. The audio device 250
is configured to function as a pair with another audio device 304.
[0075] In the example of Figure 3, the apparatus 200 receives position information from
the audio device 250 and determines a position of the audio device 250 with respect
to the apparatus 200. The apparatus 200 further determines, based on the position
of the audio device 250 with respect to the apparatus 200, which of the microphone
array of the apparatus 200 and the microphone array of the audio device is the closest
to the capturing target (second user 302). In the example of Figure 3, the audio device
250 is closer to the capturing target (second user 302) than the apparatus 200.
[0076] As the audio device 250 is determined as the closest to the capturing target (second
user 302), the apparatus 200 determines that the microphone array of the audio device
250 is to be used for capturing audio such that the microphone array of the audio
device 250 is controlled to focus audio capturing towards the second user 302. The
audio focus is illustrated with dashed line 305.
[0077] The apparatus further receives orientation information relating to the audio device
250 and determines, based on the orientation information, the orientation of the audio
device 250 with respect to the apparatus 200. The apparatus 200 further determines,
based on the orientation of the audio device 250 with respect to the apparatus 200,
a direction for capturing content by the apparatus 200. In the example of Figure 3,
the orientation of the audio device 250 is such that the audio device 250 points towards
the second user 302. The apparatus 200 determines that the orientation of the audio
device 250 corresponds to the field of view of the back camera 303 and activates the
back camera 303.
[0078] The apparatus 200 further provides a user interface 306 for the first user 301 for
controlling the capturing of content. The user interface 306 comprises an illustration
of the audio focus 307 provided by the microphone array of the audio device 250 for
enabling the first user 301 to control the audio focus parameters.
[0079] Figure 4 illustrates an example of capturing content. In the example of Figure 4,
the apparatus 200 is a mobile computing device configured to communicate with audio
device 250. Similarly to Figure 3, both the apparatus 200 and the audio device 250
comprise a microphone array for capturing audio.
[0080] Similarly to the example of Figure 3, the apparatus 200 is configured to determine
a position and/or orientation of the audio device 250 using Bluetooth technology,
acoustic localization and/or GPS coordinates together with Bluetooth technology.
[0081] In the example of Figure 4, a first user 301 interviews a second user 302 that is
a capturing target. The first user 301 records the interview by capturing video of
the interview using the apparatus 200 and the audio device 250. The audio device 250
is configured to function as a pair with another audio device 304.
[0082] In the example of Figure 4, the apparatus 200 receives position information from
the audio device 250 and determines a position of the audio device 250 with respect
to the apparatus 200. The apparatus 200 further determines, based on the position
of the audio device 250 with respect to the apparatus 200, which of the microphone
array of the apparatus 200 and the microphone array of the audio device 250 is the
closest to the second user 302. In the example of Figure 4, the apparatus 200 is closer
to the second user 302 than the audio device 250.
[0083] As the apparatus 200 is determined as the closest to the capturing target (second
user 302), the apparatus 200 determines that the microphone array of the apparatus
200 is to be used for capturing audio such that the microphone array of the apparatus
200 is controlled to focus audio capturing towards the second user 302. The audio
focus is illustrated with dashed line 405.
[0084] The apparatus further receives orientation information relating to the audio device
250 and determines, based on the orientation information, the orientation of the audio
device 250 with respect to the apparatus 200. The apparatus 200 further determines,
based on the orientation of the audio device 250 with respect to the apparatus 200,
a direction for capturing content by the apparatus 200. In the example of Figure 4,
the orientation of the audio device 250 is such that the audio device 250 points towards
the second user 302. The apparatus 200 determines that the orientation of the audio
device 250 corresponds to the field of view of the back camera 303 and activates the
back camera 303.
[0085] The apparatus 200 further provides a user interface 306 for the first user 301 for
controlling the capturing of content. The user interface 306 comprises an illustration
of the audio focus 407 provided by the microphone array of the apparatus 200 for enabling
the first user 301 to control the audio focus parameters.
[0086] Figure 5 illustrates a further example of capturing content. In the example of Figure
5, the apparatus 200 is a mobile computing device configured to communicate with audio
device 250. Similarly to Figures 3 and 4, both the apparatus 200 and the audio device
250 comprise a microphone array for capturing audio.
[0087] Similarly to the examples of Figures 3 and 4, the apparatus 200 is configured to
determine a position and/or orientation of the audio device 250 using Bluetooth technology,
acoustic localization and/or GPS coordinates together with Bluetooth technology.
[0088] In the example of Figure 5, a first user 301 interviews a second user 302 such that
the first user 301 is a capturing target. The first user 301 records the interview
by capturing video of the interview using the apparatus 200 and the audio device 250.
The audio device 250 is configured to function as a pair with another audio device 304.
[0089] In the example of Figure 5, the apparatus 200 receives position information from
the audio device 250 and determines a position of the audio device 250 with respect
to the apparatus 200. The apparatus 200 further determines, based on the position
of the audio device 250 with respect to the apparatus 200, which of the microphone
array of the apparatus 200 and the microphone array of the audio device is the closest
to the first user 301. In the example of Figure 5, the audio device 250 is closer
to the first user 301 than the apparatus 200.
[0090] As the audio device 250 is determined as the closest to the capturing target (first
user 301), the apparatus 200 determines that the microphone array of the audio device
250 is to be used for capturing audio such that the microphone array is controlled
to focus audio capturing towards the capturing target (first user 301).
[0091] The apparatus further receives orientation information relating to the audio device
250 and determines, based on the orientation information, the orientation of the audio
device 250 with respect to the apparatus 200. The apparatus 200 further determines,
based on the orientation of the audio device 250 with respect to the apparatus 200,
a direction for capturing content by the apparatus 200. In the example of Figure 5,
the orientation of the audio device 250 is such that the audio device 250 points towards
the first user 301. The apparatus 200 determines that the orientation of the audio
device 250 corresponds to the field of view of a front camera and activates the front
camera.
[0092] The apparatus 200 further provides a user interface 306 for the first user 301 for
controlling the capturing of content. The user interface 306 comprises an illustration
of the audio focus 507 provided by the microphone array of the audio device 250 for
enabling the first user 301 to control the audio focus parameters.
[0093] Figure 6 illustrates a yet further example of capturing content. In the example of
Figure 6, the apparatus 200 is a mobile computing device configured to communicate
with audio device 250. In the example of Figure 6, both the apparatus 200 and the
audio device 250 comprise a microphone array for capturing audio.
[0094] Similarly to the examples of Figures 3 to 5, the apparatus 200 is configured to determine
a position and/or orientation of the audio device 250 using Bluetooth technology,
acoustic localization and/or GPS coordinates together with Bluetooth technology.
[0095] In the example of Figure 6, a first user 301 interviews a second user 302 such that
the first user 301 is a capturing target. The first user 301 records the interview
by capturing video of the interview using the apparatus 200 and the audio device 250.
The audio device 250 is configured to function as a pair with another audio device 304.
[0096] In the example of Figure 6, the apparatus 200 receives position information from
the audio device 250 and determines a position of the audio device 250 with respect
to the apparatus 200. The apparatus 200 further determines, based on the position
of the audio device 250 with respect to the apparatus 200, which of the microphone
array of the apparatus 200 and the microphone array of the audio device is the closest
to the capturing target (first user 301). In the example of Figure 6, the apparatus
200 is closer to the capturing target (first user 301) than the audio device 250.
[0097] As the apparatus 200 is determined as the closest to the capturing target (first
user 301), the apparatus 200 determines that the microphone array of the apparatus
200 is to be used for capturing audio such that the microphone array is controlled
to focus audio capturing towards the capturing target (first user 301).
[0098] The apparatus further receives orientation information relating to the audio device
250 and determines, based on the orientation information, the orientation of the audio
device 250 with respect to the apparatus 200. The apparatus 200 further determines,
based on the orientation of the audio device 250 with respect to the apparatus 200,
a direction for capturing content by the apparatus 200. In the example of Figure 6,
the orientation of the audio device 250 is such that the audio device 250 points towards
the first user 301. The apparatus 200 determines that the orientation of the audio
device 250 corresponds to the field of view of a front camera and activates the front
camera.
[0099] The apparatus 200 further provides a user interface 306 for the first user 301 for
controlling the capturing of content. The user interface 306 comprises an illustration
of the audio focus 607 provided by the microphone array of the apparatus 200 for enabling
the first user 301 to control the audio focus parameters.
[0100] Figure 7 illustrates an example method 700 incorporating aspects of the previously
disclosed embodiments. More specifically the example method 700 illustrates controlling
at least one functionality of the apparatus 200 for capturing content in a determined
direction. The method may be performed by the apparatus 200 such as a mobile computing
device.
[0101] The apparatus 200 is configured to determine a position and/or orientation of the
audio device 250 using Bluetooth technology, acoustic localization and/or GPS coordinates
together with Bluetooth technology.
[0102] The method starts with receiving 705 orientation information relating to an orientation
of an audio device 250 operatively connected to the apparatus 200. The orientation
information may comprise measurement data indicating the orientation of the audio
device 250 or a signal based on which the orientation of the audio device 250 may
be determined.
[0103] The method continues with determining 710, based on the orientation information,
an orientation of the audio device 250 with respect to the apparatus 200. Determining
the orientation of the audio device 250 with respect to the apparatus 200 may comprise
comparing the orientation of the audio device 250 with an orientation of the apparatus
200. The orientation of the audio device 250 with respect to the apparatus 200 may
comprise determining a direction of the audio device 250.
[0104] The method further continues with determining 715, based on the orientation of the
audio device 250 with respect to the apparatus 200, a direction for capturing content
by the apparatus 200. A direction for capturing content by the apparatus 200 comprises
a direction corresponding to the orientation of the audio device 250.
[0105] The method further continues with controlling 720 at least one functionality of the
apparatus 200 for capturing content in the determined direction. Capturing content
may comprise, for example, capturing video content comprising audio and visual information.
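Reduced to a single yaw angle and to the choice between a front camera and a back camera, one pass of the example method 700 could be sketched in Python as follows; the headings, field-of-view value and camera names are illustrative assumptions rather than features of any particular device.

    def capture_control_step(device_heading_deg, apparatus_heading_deg, back_camera_fov_deg=80.0):
        """One pass of the example method 700, reduced to a single yaw angle.

        Returns the direction for capturing content (relative to the apparatus, in
        degrees) and the name of the camera to activate for that direction.
        """
        # Determining 710: orientation of the audio device with respect to the apparatus.
        relative_deg = (device_heading_deg - apparatus_heading_deg + 180.0) % 360.0 - 180.0
        # Determining 715: the capture direction corresponds to the pointing direction.
        capture_direction_deg = relative_deg
        # Controlling 720: here, choosing which camera to activate for that direction.
        camera = "back" if abs(capture_direction_deg) <= back_camera_fov_deg / 2.0 else "front"
        return capture_direction_deg, camera

    print(capture_control_step(30.0, 20.0))            # -> (10.0, 'back')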
[0106] Figure 8 illustrates another example method 800 incorporating aspects of the previously
disclosed embodiments. More specifically the example method 800 illustrates selecting
a microphone array for capturing audio and controlling at least one functionality
of the apparatus 200 for capturing content in a determined direction. The method may
be performed by the apparatus 200 such as a mobile computing device.
[0107] The apparatus 200 is configured to determine a position and/or orientation of the
audio device 250 using Bluetooth technology, acoustic localization and/or GPS coordinates
together with Bluetooth technology.
[0108] The method starts with receiving 805 position information relating to the audio device
250 and determining 810 a position of the audio device 250 with respect to the apparatus
200.
[0109] The method continues with determining 815 a microphone array closest to a capturing
target. The microphone array closest to the capturing target is selected as a microphone
array for capturing audio.
[0110] The method further continues with receiving 820 orientation information relating
to an orientation of an audio device 250 operatively connected to the apparatus 200.
The orientation information may comprise measurement data indicating the orientation
of the audio device 250 or a signal based on which the orientation of the audio device
250 may be determined.
[0111] The method further continues with determining 825, based on the orientation information,
an orientation of the audio device 250 with respect to the apparatus 200. Determining
the orientation of the audio device 250 with respect to the apparatus 200 may comprise
comparing the orientation of the audio device 250 with an orientation of the apparatus
200. The orientation of the audio device 250 with respect to the apparatus 200 may
comprise determining a direction of the audio device 250.
[0112] The method further continues with determining 830, based on the orientation of the
audio device 250 with respect to the apparatus 200, a direction for capturing content
by the apparatus 200. A direction for capturing content by the apparatus 200 comprises
a direction corresponding to the orientation of the audio device 250.
[0113] The method further continues with controlling 835 at least one functionality of the
apparatus 200 for capturing content in the determined direction. Capturing content
may comprise, for example, capturing video content comprising audio and visual information.
[0114] Without limiting the scope of the claims, an advantage of controlling at least one
functionality of an apparatus based on an orientation of an audio device is that a
direction of a capturing target may be indicated using the audio device. Another advantage
is that audio may be captured using different microphone arrays in a controlled manner.
A further advantage is that visual information may be captured using different cameras
in a controlled manner.
[0115] Without in any way limiting the scope, interpretation, or application of the claims
appearing below, a technical effect of one or more of the example embodiments disclosed
herein is that high quality audio/video capture may be provided using distributed
content capturing. Another technical effect may be dynamic control of content capturing.
[0116] As used in this application, the term "circuitry" may refer to one or more or all
of the following: (a) hardware-only circuit implementations (such as implementations
in only analog and/or digital circuitry) and (b) combinations of hardware circuits
and software, such as (as applicable): (i) a combination of analog and/or digital
hardware circuit(s) with software/firmware and (ii) any portions of hardware processor(s)
with software (including digital signal processor(s)), software, and memory(ies) that
work together to cause an apparatus, such as a mobile phone or server, to perform
various functions and (c) hardware circuit(s) and/or processor(s), such as a microprocessor(s)
or a portion of a microprocessor(s), that requires software (e.g., firmware) for operation,
but the software may not be present when it is not needed for operation.
[0117] This definition of circuitry applies to all uses of this term in this application,
including in any claims. As a further example, as used in this application, the term
circuitry also covers an implementation of merely a hardware circuit or processor
(or multiple processors) or portion of a hardware circuit or processor and its (or
their) accompanying software and/or firmware. The term circuitry also covers, for
example and if applicable to the particular claim element, a baseband integrated circuit
or processor integrated circuit for a mobile device or a similar integrated circuit
in a server, a cellular network device, or other computing or network device.
[0118] Embodiments of the present invention may be implemented in software, hardware, application
logic or a combination of software, hardware and application logic. The software,
application logic and/or hardware may reside on the apparatus, a separate device or
a plurality of devices. If desired, part of the software, application logic and/or
hardware may reside on the apparatus, part of the software, application logic and/or
hardware may reside on a separate device, and part of the software, application logic
and/or hardware may reside on a plurality of devices. In an example embodiment, the
application logic, software or an instruction set is maintained on any one of various
conventional computer-readable media. In the context of this document, a 'computer-readable
medium' may be any media or means that can contain, store, communicate, propagate
or transport the instructions for use by or in connection with an instruction execution
system, apparatus, or device, such as a computer, with one example of a computer described
and depicted in Figure 2. A computer-readable medium may comprise a computer-readable
storage medium that may be any media or means that can contain or store the instructions
for use by or in connection with an instruction execution system, apparatus, or device,
such as a computer.
[0119] If desired, the different functions discussed herein may be performed in a different
order and/or concurrently with each other. Furthermore, if desired, one or more of
the above-described functions may be optional or may be combined.
[0120] Although various aspects of the invention are set out in the independent claims,
other aspects of the invention comprise other combinations of features from the described
embodiments and/or the dependent claims with the features of the independent claims,
and not solely the combinations explicitly set out in the claims.
[0121] It will be obvious to a person skilled in the art that, as the technology advances,
the inventive concept can be implemented in various ways. The invention and its embodiments
are not limited to the examples described above but may vary within the scope of the
claims.