TECHNICAL FIELD
[0001] The disclosure relates to a method of adjusting a configuration of a hearing device
configured to be worn at an ear of a user to the individual needs of the user, according
to the preamble of claim 1. The disclosure further relates to a system for fitting
a hearing device configured to be worn at an ear of a user to the individual needs
of a user, according to the preamble of claim 15.
BACKGROUND
[0002] Hearing devices may be used to improve the hearing capability or communication capability
of a user, for instance by compensating a hearing loss of a hearing-impaired user,
in which case the hearing device is commonly referred to as a hearing instrument such
as a hearing aid, or hearing prosthesis. A hearing device may also be used to output
sound based on an audio signal which may be communicated by a wire or wirelessly to
the hearing device. A hearing device may also be used to reproduce a sound in a user's
ear canal detected by an input transducer such as a microphone or a microphone array.
The reproduced sound may be amplified to account for a hearing loss, such as in a
hearing instrument, or may be output without accounting for a hearing loss, for instance
to provide for a faithful reproduction of detected ambient sound and/or to add audio
features of an augmented reality in the reproduced ambient sound, such as in a hearable.
A hearing device may also provide for a situational enhancement of an acoustic scene,
e.g. beamforming and/or active noise cancelling (ANC), with or without amplification
of the reproduced sound. A hearing device may also be implemented as a hearing protection
device, such as an earplug, configured to protect the user's hearing. Different types
of hearing devices configured to be worn at an ear include earbuds, earphones,
hearables, and hearing instruments such as receiver-in-the-canal (RIC) hearing aids,
behind-the-ear (BTE) hearing aids, in-the-ear (ITE) hearing aids, invisible-in-the-canal
(IIC) hearing aids, completely-in-the-canal (CIC) hearing aids, cochlear implant systems
configured to provide electrical stimulation representative of audio content to a
user, bimodal hearing systems configured to provide both amplification and electrical
stimulation representative of audio content to a user, or any other suitable hearing
prostheses. A hearing system comprising two hearing devices configured to be worn
at different ears of the user is sometimes also referred to as a binaural hearing
device. A hearing system may also comprise a hearing device, e.g., a single monaural
hearing device or a binaural hearing device, and a user device, e.g., a smartphone
and/or a smartwatch, communicatively coupled to the hearing device.
[0003] Hearing devices are often employed in conjunction with communication devices, such
as smartphones or tablets, for instance when listening to sound data processed by
the communication device and/or during a phone conversation operated by the communication
device. More recently, communication devices have been integrated with hearing devices
such that the hearing devices at least partially comprise the functionality of those
communication devices. A hearing system may comprise, for instance, a hearing device
and a communication device.
[0004] In recent times, hearing devices have also been increasingly equipped with different
sensor types. Traditionally, those sensors often include an input transducer to detect
a sound, e.g., a sound detector such as a microphone or a microphone array. An amplified
and/or signal processed version of the detected sound may then be outputted to the
user by an output transducer, e.g., a receiver, loudspeaker, or electrodes to provide
electrical stimulation representative of the outputted signal. In an effort to provide
the user with even more information about himself and/or the ambient environment,
various other sensor types are progressively implemented, in particular sensors which
are not directly related to the sound reproduction and/or amplification function of
the hearing device. Those sensors include inertial sensors, such as accelerometers,
allowing the user's movements to be monitored. Physiological sensors, such as optical sensors
and bioelectric sensors, are mostly employed for monitoring the user's health.
[0005] When a hearing device is initially provided to a user, and during follow-up tests
and checkups thereafter, it is usually necessary to "fit" the hearing device to the
user. Traditionally, fitting of a hearing device to a user is typically performed
by an audiologist, health care professional (HCP), or the like who presents, e.g.,
during a hearing device fitting session, various stimuli having different
loudness levels, e.g., at different frequencies, to the user. The audiologist relies
on subjective feedback from the user as to how such stimuli are perceived. The subjective
feedback may then be used to generate an audiogram that indicates individual hearing
thresholds and loudness comfort levels of the user. Depending on the audiogram, a
current configuration of the hearing device can be adjusted, e.g., to provide for
an amplification of sound compensating an individual hearing loss of the user. Additionally
or alternatively, a fitting of the hearing device to the individual needs of the user
can provide for an adjustment of a current configuration of the hearing device in
various other aspects including, e.g., adjusting a gain model, frequency and/or
gain compression, feedback control, beamforming, noise suppression, communication
properties such as wireless communication, speech enhancement, an enhancement of a
music content in the audio signal and/or other audio signal processing algorithms
executed by the hearing device.
[0006] In more recent times, the user has been enabled to handle at least part of the aspects
required for the fitting of the hearing device on his own. E.g., when the user is
not fully content with the fitting of his hearing device performed by the HCP, the
user may perform a readjustment and/or fine tuning of one or more configuration parameters
indicative of the current configuration of the hearing device. As another example,
some hearing devices which can be purchased over the counter (OTC) may be fitted by
the user himself, e.g., with regard to desired amplification characteristics and/or
other configuration parameters of the hearing device without requiring an additional
assistance of an HCP. Furthermore, other configuration parameters of the hearing device
such as a control of volume, noise reduction, beamforming, spectral composition and/or
the like can be individually adjusted by the user himself.
[0007] To this end, a computer implemented program, such as an App running on a smartphone,
may be provided to the user allowing the user to enter a user command indicative of
an adjustment desired by the user of at least one configuration parameter of the hearing
device. The program may provide for a graphical user interface which may be displayed,
e.g., on a screen of a smartphone. The user interface may include one or more input
interfaces each allowing input of a respective user command. However, such interaction
can be rather complex or tedious, e.g., in cases where there are numerous fitting
options to be addressed. Moreover, with such graphical user interfaces, it can be
difficult for the user to easily identify and address all of the possible fitting
options or to find one of the possible fitting options addressing his particular needs.
[0008] To mitigate those disadvantages, an input support may be provided to the user which
can facilitate entering of the user command for the user. For instance, when multiple
input interfaces for entering different user commands are displayed, one of the input
interfaces could be highlighted to attract the user's attention, or an input option
of the user command representing a possible adjustment of the configuration parameter
could be presented to the user, or a support message could be outputted to the user.
Such an input support, however, could also have negative side effects. E.g., when
the user is rather experienced in the fitting process or currently exploring a desired
adjustment by entering a dedicated user command, he may feel distracted or confused
by the input support. Generally, in some situations, the input support may be perceived
as helpful and, in other situations, it may also be perceived as useless. In particular,
providing additional information about the fitting as an input support may only facilitate
the fitting process when the user is stuck or overtaxed by the fitting procedure.
SUMMARY
[0009] It is an object of the present disclosure to avoid at least one of the above-mentioned
disadvantages and to provide for a more user-friendly adjustability of a current configuration
of a hearing device, in particular with regard to the individual needs of the user.
It is a further object to not overload the user with potentially needless and/or misleading
information during an adjustment of the hearing device configuration and/or to provide
input support only in situations in which it would be helpful and/or desired by the
user. It is another object to provide a user interface for hearing device configuration
adjustment optimized for the user's individual needs when it comes to assisting the
user in performing the adjustment. It is yet another object to provide a hearing system
which is configured to operate in such a manner.
[0010] At least one of these objects can be achieved by a method of adjusting a configuration
of a hearing device comprising the features of claim 1 and/or a hearing system comprising
the features of claim 15. Advantageous embodiments of the invention are defined by
the dependent claims and the following description.
[0011] Accordingly, the present disclosure proposes a method of adjusting a configuration
of a hearing device configured to be worn at an ear of a user to the individual needs
of the user, wherein the hearing device is communicatively coupled to a communication
device, the method comprising
- initiating querying of a user command to be entered by the user via a user interface
included in the communication device, the user command indicative of an adjustment
desired by the user of at least one configuration parameter indicative of a current
configuration of the hearing device;
- adjusting, depending on the user command, the configuration parameter;
- receiving image information representative of a facial expression of the user; and
- initiating presenting, depending on the facial expression, an input support to the
user facilitating entering of the user command.
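Purely by way of illustration, the following Python sketch shows one possible realization of this control flow; the names (fitting_step, LOAD_THRESHOLD, the command and configuration dictionaries) and the threshold value are assumptions of this sketch, not part of the claimed method.

```python
# Hypothetical sketch of the claimed steps; names and threshold are illustrative.
LOAD_THRESHOLD = 0.7  # assumed score above which input support is presented

def fitting_step(expression_score, pending_command, config):
    """One iteration: adjust the configuration from a user command and decide
    whether to present input support based on the facial-expression score.

    expression_score: 0..1 cognitive-load estimate from the image information.
    pending_command: e.g. {"parameter": "noise_reduction", "value": 0.8} or None.
    Returns the (possibly adjusted) config and the support decision."""
    if pending_command is not None:
        config[pending_command["parameter"]] = pending_command["value"]
    present_support = expression_score > LOAD_THRESHOLD
    return config, present_support

cfg, support = fitting_step(0.85, {"parameter": "volume_db", "value": 4.0},
                            {"volume_db": 0.0})
print(cfg, support)  # {'volume_db': 4.0} True
```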
[0012] In this way, by taking into account the facial expression of the user when performing
the fitting, the input support can be presented in suitable situations, e.g., when
the facial expression indicates a certain frustration and/or bafflement and/or confusion
and/or astonishment and/or helplessness and/or stress level and/or insecurity of the
user. In particular, the facial expression of the user may be taken as an indicator
of a cognitive or mental load of the user when operating the user interface. Restricting
a presenting of the input support to those situations can thus improve an ease of
operation and/or handling and/or user friendliness of the user interface.
[0013] Independently, the present disclosure also proposes a non-transitory computer-readable
medium storing instructions that, when executed by a processor, cause a hearing device
to perform operations of the method.
[0014] Independently, the present disclosure also proposes a system for adjusting a configuration
of a hearing device configured to be worn at an ear of a user to the individual needs
of a user, the system comprising a hearing device configured to be worn at an ear
of the user and a communication device communicatively coupled to the hearing device,
wherein the hearing device and/or the communication device comprises a processor configured
to
- initiate querying of a user command to be entered by the user via a user interface
included in the communication device, the user command indicative of an adjustment
desired by the user of at least one configuration parameter indicative of a current
configuration of the hearing device;
- adjust, depending on the user command, the configuration parameter;
- receive image information representative of a facial expression of the user; and
- initiate presenting, depending on the facial expression, an input support to the user
facilitating entering of the user command.
[0015] Subsequently, additional features of some implementations of the method and/or the
hearing system are described. Each of those features can be provided solely or in
combination with at least another feature. The features can be correspondingly provided
in some implementations of the method and/or the hearing system.
[0016] In some implementations, the facial expression comprises at least one of
- a position and/or orientation of the user's eyebrows, e.g. a raising and/or narrowing
of the eyebrows, and/or the like;
- a dilation and/or position and/or movement of the user's pupils;
- a wrinkling of the user's forehead, e.g. a frowning between the eyebrows, and/or the
like; and
- a shape of the user's mouth.
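As a hedged illustration of how such features could be quantified, the sketch below derives scalar cues from 2D facial landmarks; the landmark names, the image coordinate convention (y-axis pointing downwards), and the upstream landmark detector are all assumptions of this sketch.

```python
# Hypothetical landmark-based cues; landmark names and y-down convention assumed.
def expression_features(lm):
    """lm: dict of (x, y) pixel coordinates plus a relative pupil diameter, e.g.
    {"brow_l": (90, 80), "eye_l": (92, 100), ..., "pupil_d": 1.3}."""
    brow_raise = ((lm["eye_l"][1] - lm["brow_l"][1]) +
                  (lm["eye_r"][1] - lm["brow_r"][1])) / 2.0   # brow-to-eye distance
    mouth_curve = lm["mouth_c"][1] - (lm["mouth_l"][1] +
                                      lm["mouth_r"][1]) / 2.0  # >0: raised corners
    return {"brow_raise": brow_raise,
            "mouth_curve": mouth_curve,
            "pupil_dilation": lm["pupil_d"]}   # >1: dilated relative to baseline

lm = {"brow_l": (90, 80), "brow_r": (150, 80), "eye_l": (92, 100),
      "eye_r": (148, 100), "mouth_l": (100, 160), "mouth_r": (140, 160),
      "mouth_c": (120, 158), "pupil_d": 1.3}
print(expression_features(lm))  # brow_raise 20.0, mouth_curve -2.0, dilation 1.3
```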
[0017] In some implementations, the input support comprises at least one of
- modifying, on the user interface, at least one input interface for inputting the user
command, e.g., from a plurality of input interfaces for inputting different user commands;
- adding, on the user interface, at least one input interface for inputting the user
command;
- presenting, on the user interface, an input option of the user command representing
a possible adjustment of the configuration parameter;
- changing, on the user interface, a layout on which at least one input interface for
inputting the user command is presented to the user; and
- outputting a support message to the user.
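The enumerated options can be read as a small set of support actions. The sketch below maps each kind of support to a textual description of the corresponding UI action; the kind labels and field names are assumptions made for illustration only.

```python
# Illustrative dispatch over the support kinds enumerated above.
def render_support(support):
    kind = support["kind"]
    if kind == "modify":
        return f"highlight/modify input interface {support['interface_id']}"
    if kind == "add":
        return f"add input interface {support['interface_id']}"
    if kind == "propose":
        return f"propose setting {support['parameter']} to {support['value']}"
    if kind == "relayout":
        return f"switch layout to {support['layout']}"
    return f"show support message: {support['text']}"  # fallback: message output

print(render_support({"kind": "propose", "parameter": "noise_reduction",
                      "value": 0.8}))
```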
[0018] In some implementations, the input option may be presented as a proposal for a user
command representing a possible adjustment of the configuration parameter.
[0019] In some implementations, the modifying of the input interface may comprise at least
one of
- highlighting, on the user interface, at least one input interface for inputting the
user command, e.g., from a plurality of input interfaces for inputting different user
commands; and/or
- masking and/or removing, on the user interface, at least one input interface for inputting
the user command of a plurality of input interfaces.
[0020] In some implementations, the method further comprises
- selecting at least one input interface for inputting the user command, e.g., from
a plurality of input interfaces for inputting different user commands, wherein said
presenting the input support comprises a modifying of the selected input interface;
and/or
- determining an input option of the user command representing a possible adjustment
of the configuration parameter, wherein said presenting the input support comprises
presenting of the determined input option.
[0021] In some implementations, the at least one input interface is selected from a plurality
of input interfaces for inputting different user commands. In some implementations,
the input option is determined as a proposal for a user command representing a possible
adjustment of the configuration parameter.
[0022] In some implementations, the at least one input interface is selected and/or the
input option is determined depending on at least one of sensor data, an audio signal,
and at least one user command previously entered by the user and/or at least one input
interface previously employed by the user to enter the user command.
[0023] In some implementations, the method further comprises receiving sensor data from
a sensor including at least one of
- an input transducer configured to provide at least part of the sensor data as an audio
signal indicative of sound detected in the environment of the user;
- a displacement sensor configured to provide at least part of the sensor data as displacement
data indicative of a displacement of the hearing device;
- a location sensor configured to provide at least part of the sensor data as location
data indicative of a current location of the user;
- a clock configured to provide at least part of the sensor data as time data indicative
of a current time;
- a physiological sensor configured to provide at least part of the sensor data as physiological
data indicative of a physiological property of the user; and
- an environmental sensor configured to provide at least part of the sensor data as
environmental data indicative of a property of the environment of the user,
wherein the input support is presented depending on the sensor data.
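A minimal data container for the enumerated sensor data could look as follows; the field names and types are assumptions of this sketch rather than terms used by the disclosure.

```python
# Minimal container for the enumerated sensor data; field names are assumptions.
from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class SensorData:
    audio_frame: Optional[List[float]] = None        # input transducer
    displacement: Optional[Tuple[float, float, float]] = None  # displacement sensor
    location: Optional[Tuple[float, float]] = None   # location sensor (lat, lon)
    timestamp: Optional[float] = None                # clock
    heart_rate_bpm: Optional[float] = None           # physiological sensor
    ambient_noise_db: Optional[float] = None         # environmental sensor

snapshot = SensorData(timestamp=1234.5, ambient_noise_db=72.0)
print(snapshot)
```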
[0024] In some implementations, an input interface for the user command may be selected
and/or added and/or modified depending on the sensor data and/or an input option of
the user command representing a possible adjustment of the configuration parameter
may be determined and/or presented depending on the sensor data.
[0025] In some implementations, the method further comprises
- logging, in a memory, the sensor data,
wherein an input option of the user command representing a possible adjustment of
the configuration parameter is predicted based on the logged sensor data; and/or an
input interface is predicted based on the logged sensor data. In some implementations,
the predicted input option may be determined to be comprised in the input support
and/or the predicted input interface may be selected to be comprised in the input
support.
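The disclosure leaves the prediction model open. As a deliberately simple stand-in, the sketch below proposes the most frequently logged adjustment as the predicted input option; any actual implementation might use a more elaborate model.

```python
# Naive frequency-based predictor over logged user commands; a stand-in for
# whatever prediction model an implementation actually uses.
from collections import Counter

def predict_input_option(logged_commands):
    """logged_commands: e.g. ["volume_up", "noise_reduction_up", "volume_up"]."""
    if not logged_commands:
        return None
    return Counter(logged_commands).most_common(1)[0][0]

print(predict_input_option(["volume_up", "noise_reduction_up", "volume_up"]))
# -> "volume_up", which could then be offered as part of the input support
```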
[0026] In some implementations, the method further comprises
- determining an interaction time of the user with the user interface, wherein the input
support is presented depending on the interaction time. E.g., when the interaction
time exceeds a predetermined threshold, the input support may be presented.
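A minimal sketch of this interaction-time criterion follows; the concrete threshold value is an assumption, as the disclosure does not fix one.

```python
# Sketch of the interaction-time criterion; the threshold value is assumed.
import time

INTERACTION_THRESHOLD_S = 20.0  # predetermined threshold (illustrative value)

def should_present_support(interaction_start, now=None):
    now = time.monotonic() if now is None else now
    return (now - interaction_start) > INTERACTION_THRESHOLD_S

t0 = time.monotonic()
print(should_present_support(t0, t0 + 25.0))  # True: threshold exceeded
```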
[0027] In some implementations, the method further comprises
- receiving, from an audio input unit, an audio signal, wherein the input support is
presented depending on the audio signal. In some implementations, the audio input
unit is an input transducer and/or an audio signal receiver.
[0028] In some implementations, an input interface for the user command may be selected
and/or added and/or modified depending on the audio signal and/or an input option
of the user command representing a possible adjustment of the configuration parameter
may be determined and/or presented depending on the audio signal.
[0029] In some implementations, the method further comprises
- logging, in a memory, the audio signal,
wherein an input option of the user command representing a possible adjustment of
the configuration parameter is predicted based on the logged audio signal; and/or
an input interface is predicted based on the logged audio signal. In some implementations,
the predicted input option may be determined to be comprised in the input support
and/or the predicted input interface may be selected to be comprised in the input
support.
[0030] In some implementations, the method further comprises
- logging, in a memory, one or more user commands previously entered by the user,
wherein an input option of the user command representing a possible adjustment of
the configuration parameter is predicted based on the logged user commands; and/or
- logging, in a memory, one or more input interfaces for inputting the user command
which have been previously used by the user to enter the user command,
wherein an input interface is predicted based on the logged input interfaces.
[0031] In some implementations, the predicted input option may be determined to be comprised
in the input support and/or the predicted input interface may be selected to be comprised
in the input support.
[0032] In some implementations, the method further comprises
- logging, in a memory, the audio signal and one or more user commands previously entered
by the user, wherein an input option of the user command representing a possible adjustment
of the configuration parameter is predicted based on the logged audio signal and user
commands; and/or
- logging, in a memory, one or more input interfaces for inputting the user command
which have been previously used by the user to enter the user command, wherein an
input interface is predicted based on the logged input interfaces. In some implementations,
the predicted input option may be determined to be comprised in the input support
and/or the predicted input interface may be selected to be comprised in the input
support.
[0033] In some implementations, the configuration parameter comprises at least one of
- an amplification, e.g., gain, of an audio signal outputted by the hearing device,
e.g., an audio signal received by an input transducer;
- a control of a feedback of an audio signal outputted by the hearing device;
- a property of a beamforming algorithm executed by the hearing device;
- a property of a noise suppression algorithm executed by the hearing device;
- a property of a communication port included in the hearing device;
- a selection of an audio processing algorithm executed by the hearing device, e.g.,
from a plurality of different audio processing algorithms;
- an enhancement of a speech content in an audio signal outputted by the hearing device;
and
- an enhancement of a music content in an audio signal outputted by the hearing device.
[0034] In some implementations, the user interface comprises at least one of a slider, a
touch screen, a push button, and a text and/or numerical input field allowing input of
the adjustment desired by the user.
[0035] In some implementations, the image information is provided by an optical sensor included
in the communication device. In some implementations, the communication device comprises
at least one of a mobile phone; a tablet; a smartwatch; and goggles.
[0036] In some implementations, the method further comprises
- relating the image information to previously recorded image information of the user's
face and/or to previously recorded image information of people different from the
user.
[0037] In some implementations, the communication device comprises a display and the input
support is displayed on the display.
[0038] In some implementations, the input support comprises a voice message outputted to
the user by an output transducer included in the hearing device.
[0039] In some implementations, the hearing device comprises a processor configured to process
an audio signal to generate a processed audio signal; and an output transducer configured
to output an output audio signal based on the processed audio signal so as to stimulate
the user's hearing. In some implementations, the hearing device further comprises
an audio input unit configured to provide for the audio signal. In some implementations,
the audio input unit comprises an input transducer configured to provide an audio
signal indicative of a sound detected in the environment of the user. In some implementations,
the audio input unit comprises an audio signal receiver configured to receive the
audio signal from a remote location, e.g., as a radio frequency (RF) signal.
BRIEF DESCRIPTION OF THE DRAWINGS
[0040] Reference will now be made in detail to embodiments, examples of which are illustrated
in the accompanying drawings. The drawings illustrate various embodiments and are
a part of the specification. The illustrated embodiments are merely examples and do
not limit the scope of the disclosure. Throughout the drawings, identical or similar
reference numbers designate identical or similar elements. In the drawings:
- Fig. 1 schematically illustrates a hearing system comprising an exemplary hearing device
and an exemplary communication device;
- Fig. 2 schematically illustrates an exemplary sensor unit comprising one or more sensors
which may be implemented in the hearing device illustrated in Fig. 1;
- Fig. 3 schematically illustrates an embodiment of the hearing device illustrated in
Fig. 1 as a RIC hearing aid;
- Figs. 4, 5 schematically illustrate exemplary communication devices;
- Figs. 6A, 6B schematically illustrate a user interacting with a user interface included
in a communication device;
- Fig. 7 schematically illustrates a communication device querying a user command to
be entered by the user;
- Figs. 8A - 8C schematically illustrate the communication device illustrated in Fig. 7,
wherein an input support is presented to the user; and
- Figs. 9-12 schematically illustrate exemplary methods of adjusting a configuration of
a hearing device according to principles described herein.
DETAILED DESCRIPTION OF THE DRAWINGS
[0041] FIG. 1 illustrates an exemplary hearing system 101 comprising a hearing device 110
and a communication device 150. Hearing device 110 is configured to be worn at an
ear of a user. Hearing device 110 may be implemented by any type of hearing device
configured to enable or enhance hearing or a listening experience of a user wearing
hearing device 110. For example, hearing device 110 may be implemented by a hearing
aid configured to provide an amplified version of audio content to a user, a sound
processor included in a cochlear implant system configured to provide electrical stimulation
representative of audio content to a user, a sound processor included in a bimodal
hearing system configured to provide both amplification and electrical stimulation
representative of audio content to a user, or any other suitable hearing prosthesis,
or by an earbud, an earphone, or a hearable.
[0042] Different types of hearing device 110 can also be distinguished by the position at
which they are worn at the ear. Some hearing devices, such as behind-the-ear (BTE)
hearing aids and receiver-in-the-canal (RIC) hearing aids, typically comprise an earpiece
configured to be at least partially inserted into an ear canal of the ear, and an
additional housing configured to be worn at a wearing position outside the ear canal,
in particular behind the ear of the user. Some other hearing devices, as for instance
earbuds, earphones, hearables, in-the-ear (ITE) hearing aids, invisible-in-the-canal
(IIC) hearing aids, and completely-in-the-canal (CIC) hearing aids, commonly comprise
such an earpiece to be worn at least partially inside the ear canal without an additional
housing to be worn at a different position at the ear.
[0043] Communication device 150 may be implemented by any type of communication device configured
to communicate data with hearing device 110. For instance, communication device 150
may be implemented as a wearable device configured to be worn by the user, e.g., smart
glasses or a smart watch, or as a portable device configured to be carried by the user,
e.g., a smart phone, a tablet, or a laptop. Communication device 150 may also be implemented
as a stationary device, e.g., a desktop computer.
[0044] As shown, communication device 150 includes a communication port 159 which can be
communicatively coupled to a communication port 119 included in hearing device 110.
Communication ports 119, 159 may be implemented by any suitable data transmitter and/or
data receiver and/or data transducer configured to exchange data with another device.
Communication ports 119, 159 may be configured for wired and/or wireless data communication,
e.g., via a communication link 122. For instance, data may be communicated in accordance
with a Bluetooth™ protocol and/or by any other type of radio frequency (RF) communication.
[0045] Communication device 150 further comprises a user interface 157. User interface 157
may be implemented by any suitable interface allowing a user to enter a user command.
E.g., the user command may be provided as interaction data indicative of an interaction
of the user with user interface 157. For instance, user interface 157 may be implemented
as a touch sensor, e.g., a touch screen, and/or a push button and/or a slider and/or
a toggle and/or a displacement sensor such as an accelerometer and/or a keyboard and/or
a mouse and/or a speech detector configured to recognize speech and/or transform speech
into a user command. Communication device 150 may further include a processor 152
communicatively coupled to communication port 159 and user interface 157. Communication
device 150 may further include a memory communicatively coupled to processor 152.
Communication device 150 may include additional or alternative components as may serve
a particular implementation.
[0046] As shown, hearing system 101 further comprises an optical sensor 154. Optical sensor
154 may be implemented by any sensor configured to capture image data of the user,
e.g., from the user's face. For instance, optical sensor 154 may be implemented as
a camera, e.g., a video camera, a digital camera, a CCD camera, a framing camera,
a selfie camera, and/or the like. As illustrated, optical sensor 154 may comprise
an internal optical sensor 155 which may be included in communication device 150 and/or
an external optical sensor 165 which may be provided externally from communication
device 150. In some implementations, optical sensor 154 may comprise an optical sensor
included in hearing device 110. Image data provided by external optical sensor 165
and/or other information related to the image data, such as information about a facial
expression of the user, may be transmitted to communication device 150. For instance,
e.g., when communication device 150 is implemented as a smartphone, smart glasses
or a computer, internal optical sensor 155 may be implemented as a camera included
in communication device 150. Additionally or alternatively, e.g., when communication
device 150 is implemented as a smartwatch, external optical sensor 165 may be implemented
as a camera included in another communication device communicatively coupled to communication
device 150.
[0047] As shown, hearing device 110 includes a processor 112 communicatively coupled to
communication port 119, a memory 113, an audio input unit 114, and an output transducer
117. Audio input unit 114 may comprise at least one input transducer 115 and/or an
audio signal receiver 116 configured to provide an input audio signal. Hearing device
110 may further include a sensor unit 118 communicatively coupled to processor 112.
Hearing device 110 may include additional or alternative components as may serve a
particular implementation. Input transducer 115 may be implemented by any suitable
device configured to detect sound in the environment of the user and to provide an
input audio signal indicative of the detected sound, e.g., a microphone or a microphone
array. Output transducer 117 may be implemented by any suitable audio transducer configured
to output an output audio signal to the user, for instance a receiver of a hearing
aid, an output electrode of a cochlear implant system, or a loudspeaker of an earbud.
[0048] A processor of hearing system 101, which may be configured to execute one or more
of the operations described above and below, may be implemented as a single processing
device, e.g., processor 112 of hearing device 110 or processor 152 of communication
device 150, or may be implemented as a processor comprising multiple processing units.
The processing units may cooperate as a distributed processing system and/or in a
master-slave configuration and/or may perform different processing tasks independently
from one another. E.g., processor 112 of hearing device 110 may be a first processing
unit, and processor 152 of communication device 150 may be a second processing unit.
Another processing unit may be implemented in optical sensor 154. E.g., the processing
units of processor 112, 152 may communicate data via communication ports 119, 159.
Processor 112, 152 may be communicatively coupled to optical sensor 154, e.g., via
a fixed connection and/or via communication ports 119, 159.
[0049] Processor 112, 152 is configured to initiate querying of a user command to be entered
by the user via user interface 157, the user command indicative of an adjustment desired
by the user of at least one configuration parameter indicative of a current configuration
of hearing device 110; to adjust, depending on the user command, the configuration
parameter; to receive image information representative of a facial expression of the
user, which may be captured by optical sensor 154; and to initiate presenting, depending
on the facial expression, an input support to the user facilitating entering of the
user command. These and other operations, which may be performed by processor 112,
152 are described in more detail in the description that follows.
[0050] Memory 113 may be implemented by any suitable type of storage medium and is configured
to maintain, e.g., store, data controlled by processor 112, 152, in particular data
generated, accessed, modified and/or otherwise used by processor 112, 152. For example,
memory 113 may be configured to store one or more configuration parameters indicative
of a current configuration of hearing device 110. The configuration parameters may
be adjusted, e.g., after being accessed by processor 112, 152. The adjusted configuration
parameters may be stored, e.g., by overwriting previously stored configuration parameters,
in memory 113.
[0051] As another example, memory 113 may be configured to store instructions used by processor
112, 152 to modify the audio signal received from audio input unit 114, e.g., audio
processing instructions in the form of one or more audio processing algorithms. The
audio processing algorithms may comprise different audio processing instructions for
processing the input audio signal received from input transducer 115 and/or audio
signal receiver 116. For instance, the audio processing algorithms may provide for
at least one of a gain model (GM) defining an amplification characteristic, a noise
cancelling (NC) algorithm, a wind noise cancelling (WNC) algorithm, a reverberation
cancelling (RevC) algorithm, a feedback cancelling (FC) algorithm, a speech enhancement
(SE) algorithm, a gain compression (GC) algorithm, a noise cleaning algorithm, a binaural
synchronization (BS) algorithm, a beamforming (BF) algorithm, in particular static
and/or adaptive beamforming, and/or the like. A plurality of the audio processing
algorithms may be executed by processor 112, 152 in a sequence and/or in parallel
to generate a processed audio signal.
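To illustrate the sequential execution of such algorithms, the sketch below chains two toy stages over an audio block; both stages are mere stand-ins for the stored audio processing instructions and not the disclosure's actual algorithms.

```python
# Sketch of executing audio processing algorithms in sequence on an audio
# block; the two stages are toy stand-ins for the stored instructions.
def gain_model(block):                 # amplification characteristic (GM)
    return [2.0 * s for s in block]

def noise_gate(block):                 # crude stand-in for an NC algorithm
    return [s if abs(s) > 0.01 else 0.0 for s in block]

def process(block, chain):
    for stage in chain:                # algorithms applied in sequence
        block = stage(block)
    return block

print(process([0.005, 0.1, -0.2], [noise_gate, gain_model]))  # [0.0, 0.2, -0.4]
```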
[0052] Memory 113 may comprise a non-volatile memory from which the maintained data may
be retrieved even after having been power cycled, for instance a flash memory and/or
a read only memory (ROM) chip such as an electrically erasable programmable ROM (EEPROM).
A non-transitory computer-readable medium may thus be implemented by memory 113. Memory
113 may further comprise a volatile memory, for instance a static or dynamic random
access memory (RAM). A corresponding memory may be implemented in communication device
150. Processor 112, 152 may be configured to access memory 113 included in hearing
device 110 and/or the memory included in communication device 150.
[0053] As illustrated, hearing device 110 may comprise an input transducer 115. Input transducer
115 may be implemented by any suitable device configured to detect sound in the environment
of the user, e.g., a microphone or a microphone array, and/or to detect sound
inside the ear canal of the user, e.g., an ear canal microphone, and to provide an
audio signal indicative of the detected sound. As illustrated, hearing device 110
may comprise an audio signal receiver 116. Audio signal receiver 116 may be implemented
by any suitable data receiver and/or data transducer configured to receive an input
audio signal from a remote audio source. For instance, the remote audio source may
be a wireless microphone, such as a table microphone, a clip-on microphone and/or
the like, and/or a portable device, such as a smartphone, smartwatch, tablet and/or
the like, and/or any other data transceiver configured to transmit the input audio
signal to audio signal receiver 116. E.g., the remote audio source may be a streaming
source configured for streaming the input audio signal to audio signal receiver 116.
Audio signal receiver 116 may be configured for wired and/or wireless data reception
of the input audio signal. For instance, the input audio signal may be received in
accordance with a Bluetooth™ protocol and/or by any other type of radio frequency (RF) communication.
[0054] As illustrated, hearing device 110 may comprise a sensor unit 118 comprising at least
one sensor communicatively coupled to processor 112, 152, e.g., in addition to input
transducer 115. Some examples of a sensor which may be implemented in sensor unit
118 are illustrated in Fig. 2. Alternatively or additionally, sensor unit 118 may
be included in communication device 150 and/or an auxiliary device communicatively
coupled with hearing device 110 and/or communication device 150.
[0055] As illustrated in FIG. 2, sensor unit 118 may include at least one environmental
sensor configured to provide environmental data indicative of a property of the environment
of the user, e.g., in addition to the audio signal provided by input transducer 115,
for example an optical sensor 130 configured to detect light in the environment, e.g.,
a camera configured to provide image information from the user's environment, and/or
a barometric sensor 131 and/or an ambient temperature sensor 132. Sensor unit 118
may include at least one physiological sensor configured to provide physiological
data indicative of a physiological property of the user, for example an optical sensor
133 and/or a bioelectric sensor 134 and/or a body temperature sensor 135. Optical
sensor 133 may be configured to emit light at a wavelength absorbable by an analyte
contained in blood such that the physiological sensor data comprises information about
the blood flowing through tissue at the ear. E.g., optical sensor 133 can be configured
as a photoplethysmography (PPG) sensor such that the physiological sensor data comprises
PPG data, e.g. a PPG waveform. Bioelectric sensor 134 may be implemented as a skin
impedance sensor and/or an electrocardiogram (ECG) sensor and/or an electroencephalogram
(EEG) sensor and/or an electrooculography (EOG) sensor.
[0056] Sensor unit 118 may include a movement sensor 136 configured to provide movement
data indicative of a movement of the user, for example an accelerometer and/or a gyroscope
and/or a magnetometer. Sensor unit 118 may include at least one location sensor 138
configured to provide location data indicative of a current location of the user,
for instance a GPS sensor. Sensor unit 118 may include at least one clock 139 configured
to provide time data indicative of a current time. Context data may be defined as
data indicative of a local and/or temporal context of the data provided by other sensors
115, 130 - 136. Context data may comprise the location data and/or the time data provided
by location sensor 138 and/or clock 139. Context data may also be received from an
external device via communication port 119, e.g., from communication device 150. E.g.,
one or more of sensors 115, 130 - 136 may then be included in communication device
150. Sensor unit 118 may include further sensors providing sensor data indicative
of a property of the user and/or the environment and/or the context.
[0057] FIG. 3 illustrates an exemplary implementation of hearing device 110 as a RIC hearing
aid 210. RIC hearing aid 210 comprises a BTE part 220 configured to be worn at an
ear at a wearing position behind the ear, and an ITE part 240 configured to be worn
at the ear at a wearing position at least partially inside an ear canal of the ear.
BTE part 220 comprises a BTE housing 221 configured to be worn behind the ear. BTE
housing 221 accommodates processor 112 communicatively coupled to input transducer
115 and audio signal receiver 116. BTE part 220 further includes a battery 227 as
a power source. BTE part 220 may further include a user interface 257, which may be
implemented, e.g., at a surface of BTE housing 221. ITE part 240 is an earpiece comprising
an ITE housing 241 at least partially insertable in the ear canal. ITE housing 241
accommodates output transducer 117. ITE part 240 may further include another input
transducer as an in-the-ear input transducer 145, e.g., an ear canal microphone, configured
to detect sound inside the ear canal and to provide an in-the-ear audio signal indicative
of the detected sound. BTE part 220 and ITE part 240 are interconnected by a cable
251. Processor 112 is communicatively coupled to output transducer 117 and to in-the-ear
input transducer 145 of ITE part 240 via cable 251 and cable connectors 252, 253 provided
at BTE housing 221 and ITE housing 241. In some implementations, at least one of sensors
130 - 139 is included in BTE part 220 and/or ITE part 240.
[0058] FIG. 4 illustrates exemplary implementations of a communication device 410 which
may be communicatively coupled to hearing device 110, 210, e.g., via communication
port 119. For example, communication device 410 may be a portable device configured
to be carried by the user and operable at a position remote from the ear
at which hearing device 110, 210 is worn. As illustrated, portable device 410 comprises
a portable housing 411 which may be configured, e.g., to be worn by the user on the
user's body at a position remote from the ear at which hearing device 110 is worn.
E.g., portable device 410 may be implemented as a smartphone, a tablet, and/or the
like.
[0059] Portable device 410 further comprises a user interface 428 implemented as a touch
sensor allowing the user to enter a user command which can be received by processor
112 of hearing device 110 and/or a processor of the communication device 410 as user
control data. For instance, as illustrated, user interface 428 may be implemented
as a touch screen operable to display information to the user. Querying of a user
command to be entered by the user via user interface 428 may be implemented by displaying
a corresponding query on the touch screen, e.g., in the form of a text, symbol, and/or
other visual signs. In other examples, user interface 428 may be implemented by speech
recognition allowing the user to enter a user command with his voice. In other examples,
querying of a user command to be entered by the user via user interface 428 may be
implemented by outputting a voice message via output transducer 117.
[0060] Portable device 410 further comprises an optical sensor 455. In the illustrated example,
optical sensor 455 is a camera facing the same direction as user interface 428. Thus,
when the user is manipulating user interface 428, optical sensor 455 is configured
to face the user's face and/or to capture image information from the user's face.
E.g., when portable device 410 is implemented as a smartphone, optical sensor 455
may be implemented as a front camera and/or a selfie camera.
[0061] FIG. 5 illustrates further exemplary implementations of a communication device 510
which may be communicatively coupled to hearing device 110, 210, e.g., via communication
port 119. For example, communication device 510 may be a wearable device configured
to be worn by the user, e.g., on his body, and operable at a position remote from
the ear at which hearing device 110, 210 is worn. As illustrated, wearable device
510 comprises a wearable housing 511 which may be configured, e.g., to be worn by
the user on the user's body at a position remote from the ear at which hearing device
110 is worn. E.g., wearable device 510 may be implemented as a smartwatch, smart glasses,
and/or the like. In the illustrated example, wearable device 510 is implemented as
smart glasses, wherein wearable housing 511 comprises an eyeglass frame surrounding
eyeglasses 512, 513.
[0062] Wearable device 510 further comprises an optical sensor 555, 556. In the illustrated
example, optical sensor 555, 556 is a pair of cameras facing the user's face. E.g.,
optical sensor 555, 556 may be implemented in front of and/or behind eyeglasses 512,
513. Thus, when the user is manipulating user interface 157, 428, optical sensor 555,
556 is configured to face the user's face, in particular the user's eyes, and/or to
capture image information from the user's face, in particular from the user's eyes.
[0063] In some implementations, a communication device may be implemented as two or more
communication devices, e.g., at least one portable device 410 and/or at least one
wearable device 510, communicatively coupled to each other. For example, a user interface
of the communication device may include touch screen 428 of portable device 410 and
an optical sensor of the communication device may include pair of cameras 555, 556
of wearable device 510.
[0064] FIGS. 6A and 6B schematically illustrate situations in which a user 611 interacts
with user interface 428 of communication device 410 to enter a user command. During
the user interaction, image information representative of a facial expression 621,
622 of user 611 is captured by camera 455. The image information may indicate at least
one of a position and/or orientation of the user's eyebrows, e.g., a frowning, raising
and/or narrowing of the eyebrows; a dilation and/or position and/or movement of the
user's pupils; a wrinkling of the user's forehead; and a shape of the user's mouth.
To this end, the image information may be evaluated, e.g., by processor 112, 152,
to extract and/or verify a presence and/or a magnitude of at least one of those features
in facial expression 621, 622.
[0065] In the situation illustrated in FIG. 6A, facial expression 621 of user 611 comprises
features indicating an elevated cognitive effort and/or frustration and/or bafflement
and/or confusion and/or helplessness and/or elevated stress level and/or insecurity
of the user when interacting with user interface 428. Those features may include narrowed
and/or angled and/or raised eyebrows and/or a wrinkling of the user's forehead, e.g.,
a frowning between the eyebrows. Those features may further include a shape of the
user's mouth, e.g., lowered and/or rather straight corners of the mouth. Those features
may also include a property of the user's pupils. E.g., a dilation of the user's pupils,
i.e., the pupils being larger than usual, and/or a position of the pupils facing away
from user interface 428 can indicate a large cognitive effort and/or an elevated stress
level and/or frustration of the user.
[0066] In the situation illustrated in FIG. 6B, facial expression 622 of user 611 comprises
features indicating a small cognitive effort and/or confidence and/or calmness and/or
relaxation and/or low stress level and/or security of the user when interacting with
user interface 428. Those features may include rather straight and/or lowered eyebrows
and/or an absence of a wrinkling on the user's forehead. Those features may further
include a shape of the user's mouth, e.g., raised corners of the mouth and/or a smile.
Those features may also include a property of the user's pupils. E.g., an absence
of a dilation of the user's pupils can indicate a small cognitive effort and/or a
low stress level.
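As a hedged illustration, the cues described above can be combined into a single load score by simple rules; all weights and thresholds below are assumptions of this sketch, not values taught by the disclosure.

```python
# Hedged rule-based combination of the cues described above into a single
# 0..1 load score; all weights and thresholds are illustrative assumptions.
def cognitive_load(features):
    """features: e.g. output of a landmark-based extractor (brow_raise in px,
    mouth_curve in px, pupil_dilation relative to a per-user baseline)."""
    score = 0.0
    if features.get("brow_raise", 0.0) > 12.0:       # raised/narrowed eyebrows
        score += 0.35
    if features.get("forehead_wrinkle", 0.0) > 0.5:  # frowning between the brows
        score += 0.25
    if features.get("mouth_curve", 0.0) <= 0.0:      # straight/lowered corners
        score += 0.20
    if features.get("pupil_dilation", 1.0) > 1.2:    # pupils larger than usual
        score += 0.20
    return score

print(cognitive_load({"brow_raise": 20.0, "mouth_curve": -2.0,
                      "pupil_dilation": 1.3}))  # 0.75: support would be shown
```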
[0067] To identify and/or classify one or more of such features in facial expression 621, 622
of user 611, an image recognition algorithm may be applied on the image information
provided by optical sensor 154, 155, 165, 455, 555, 556. E.g., the image recognition
algorithm may be executed by processor 112, 152 and/or a computing device communicatively
coupled to communication device 150 and/or hearing device 110, e.g., a remote server
which may be accessed via an internet connection. E.g., the image recognition algorithm
can be configured to relate the image information to previously recorded image information
of the user's face and/or to previously recorded image information of people different
from the user. Image recognition algorithms which are enabled to perform such a task,
e.g., by a machine learning (ML) algorithm such as a (deep) neural network, are known
in the art. E.g., an algorithm as disclosed in Song, Z., "Facial Expression Emotion
Recognition Model Integrating Philosophy and Machine Learning Theory", Front. Psychol.
12:759485 (2021), doi: 10.3389/fpsyg.2021.759485, and/or in Kulkarni, S.S., Reddy, N.P.,
and Hariharan, S., "Facial expression (mood) recognition from facial images using
committee neural networks", BioMed Eng OnLine 8, 16 (2009), and/or in the references
cited therein may be employed.
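One hedged way to "relate" new image information to previously recorded images, without committing to a particular model from the cited literature, is a nearest-neighbour comparison of embedding vectors; the embedding network producing those vectors is assumed here.

```python
# Nearest-neighbour comparison of (assumed) face-expression embeddings; the
# embedding network itself is not specified by the disclosure.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) *
                  math.sqrt(sum(x * x for x in b)))

def classify(embedding, references):
    """references: label -> embedding of previously recorded expressions."""
    return max(references, key=lambda lbl: cosine(embedding, references[lbl]))

refs = {"confused": [0.9, 0.1], "relaxed": [0.1, 0.9]}  # toy 2-D embeddings
print(classify([0.8, 0.3], refs))  # "confused"
```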
[0068] FIG. 7 illustrates embodiments of an exemplary query of a user command to be entered
by user 611 via user interface 428 included in communication device 410. In the illustrated
example, the query is displayed on a display of the communication device, e.g., on
touch screen 428 of communication device 410. In other examples, the query may be
outputted by output transducer 117, e.g., as a voice message.
[0069] In the illustrated example, the query is displayed in the form of one or more texts
611, 612, 613, 614, 651, 652 indicative of a respective configuration parameter of
hearing device 110 and/or one or more symbolic representations 616, 617, 618, 619
of the configuration parameter. Further displayed are one or more input interfaces
631, 632, 633, 634, 641, 642 allowing the user to enter a respective user command
indicating an adjustment desired by user 611 of the respective configuration parameter.
Input interface 641, 642 is implemented as a push button allowing the user to enter
the user command by pushing the button. Input interface 631 - 634 is implemented as
a slider allowing the user to enter the user command by moving the slider. Further,
a graphical boundary and/or limit 621, 622, 623, 624 for entering the user command
via slider 631 - 634 is displayed.
[0070] Each input interface 631 - 634, 641, 642 relates to at least one configuration parameter
adjustable depending on the user command entered by user 611 via input interface 631
- 634, 641, 642. In the illustrated example, input interface 631 relates to a volume
control and/or input interface 632 relates to a beamforming adjustment and/or input
interface 633 relates to a noise reduction adjustment and/or input interface 634 relates
to a spectral balance modification.
[0071] The volume control can be configured to adjust a volume of an audio signal processed
by hearing device 110 so as to change a level of an output audio signal output by
output transducer 117 so as to stimulate the user's hearing. For example, the volume
may be adjusted during an audio signal processing performed by an audio signal processor,
e.g., by adjusting an amplitude of the audio signal, and/or during an amplification
of the audio signal performed by an audio signal amplifier, e.g., by adjusting a gain
provided by the amplifier.
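A minimal sketch of such a dB-valued volume adjustment applied to an audio block follows; the function name and block representation are illustrative only.

```python
# dB volume adjustment applied to an audio block; a minimal sketch of the
# volume-control parameter.
def apply_volume(samples, gain_db):
    factor = 10.0 ** (gain_db / 20.0)   # dB to linear amplitude factor
    return [s * factor for s in samples]

print(apply_volume([0.1, -0.2], 6.0))   # ~[0.2, -0.4]: +6 dB roughly doubles amplitude
```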
[0072] The beamforming adjustment can be configured to adjust a property of a beamforming
applied on the audio signal, e.g., during an audio signal processing performed by
an audio signal processor. For instance, the adjustment of the beamforming may comprise
at least one of turning the beamforming on or off and/or changing a beam width of
the beamforming and/or changing a directivity of the beamforming. E.g., when the directivity
of the beam points toward the front of the user, the directivity may be adjusted to
the side and/or back of the user.
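To make the directivity adjustment concrete, here is a minimal two-microphone delay-and-sum sketch; real beamformers in hearing devices are considerably more elaborate, so this is an assumption-laden illustration only.

```python
# Minimal two-microphone delay-and-sum beamformer; varying the steering delay
# changes the directivity, and bypassing it corresponds to beamforming "off".
def delay_and_sum(mic_a, mic_b, steer_delay):
    """steer_delay: integer sample delay applied to mic_b to steer the beam."""
    delayed_b = [0.0] * steer_delay + mic_b[:len(mic_b) - steer_delay]
    return [(a + b) / 2.0 for a, b in zip(mic_a, delayed_b)]

print(delay_and_sum([1.0, 0.0, 0.0], [0.0, 1.0, 0.0], 0))  # [0.5, 0.5, 0.0]
print(delay_and_sum([1.0, 0.0, 0.0], [0.0, 1.0, 0.0], 1))  # [0.5, 0.0, 0.5]
```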
[0073] The noise reduction adjustment can be configured to adjust a property of a noise
reduction, e.g., a noise cancelling (NC), applied on an audio signal, e.g., during
an audio signal processing performed by the audio signal processor. E.g., an audio
signal processing may provide for a cancelling and/or suppression and/or cleaning
of noise contained in the audio signal. For instance, the property of the NC, which
may be adjusted by the noise cancelling adjustment, may include a type and/or strength
of the NC. E.g., different types of the NC may include general noise and/or noise
caused by a non-speech audio source and/or noise at a certain noise level and/or frequency
range and/or noise emitted from a specific audio source, e.g., traffic noise, aircraft
noise, construction site noise, etc. Different strengths of the NC may indicate a
content of the noise in the modified audio signal, e.g., an amount of noise which
is removed and/or still present in the modified audio signal.
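By way of illustration, a "strength" parameter maps naturally onto spectral subtraction, one classical noise-reduction technique; the disclosure does not prescribe this method, so the sketch below is one possible choice only.

```python
# Spectral-subtraction sketch: an assumed noise magnitude estimate is removed
# from each frequency bin; "strength" controls how much noise is taken out.
import numpy as np

def spectral_subtract(frame, noise_mag, strength=1.0):
    spec = np.fft.rfft(frame)
    mag = np.maximum(np.abs(spec) - strength * noise_mag, 0.0)  # floor at zero
    return np.fft.irfft(mag * np.exp(1j * np.angle(spec)), n=len(frame))

frame = np.sin(np.arange(8)) + 0.01 * np.random.randn(8)
noise_mag = np.full(5, 0.02)  # one estimate per rfft bin of a length-8 frame
denoised = spectral_subtract(frame, noise_mag, strength=0.5)
```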
[0074] The spectral balance modification can be configured to adjust a spectral balance
of an audio signal and/or a spectral balance of a specific content in an audio signal.
The spectral balance can be indicative of a frequency content of the audio signal.
The frequency content may comprise a power of one or more frequencies and/or frequency
bands, e.g., relative to a power of one or more other frequencies and/or frequency
bands. The frequency range of the frequency content may comprise, e.g., a range of
audible frequencies, e.g., from 20 Hz to 20,000 Hz, and/or a range of inaudible frequencies.
A specific content in the audio signal, for which the spectral balance may be modified,
may include, e.g., a music content and/or a speech content, e.g., an own voice content
and/or a voice content of another person and/or a significant other.
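A minimal sketch of such a spectral-balance modification via per-band reweighting follows; the band edge and gain values are illustrative assumptions.

```python
# Per-band reweighting of the frequency content; band edge and gains assumed.
import numpy as np

def rebalance(frame, fs, split_hz, low_gain, high_gain):
    spec = np.fft.rfft(frame)
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / fs)
    spec[freqs < split_hz] *= low_gain     # e.g., soften low-frequency content
    spec[freqs >= split_hz] *= high_gain   # e.g., boost treble for clarity
    return np.fft.irfft(spec, n=len(frame))

out = rebalance(np.sin(np.arange(16)), fs=16000.0, split_hz=2000.0,
                low_gain=0.8, high_gain=1.2)
```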
[0075] Further, in the illustrated example, input interfaces 641, 642 relate to different
audio processing algorithms for the processing of an audio signal. E.g., one or more
of the different audio processing algorithms may be activated and/or deactivated by
pushing one or more of input interfaces 641, 642. As illustrated, at least one of
the audio processing algorithms may be related to a clarity of sound outputted by
output transducer 117 and/or at least one of the audio processing algorithms may be
related to a listening comfort when sound is outputted by output transducer 117. E.g.,
when the audio processing algorithm related to the clarity of sound is activated,
the audio processing may be performed in a way to provide an enhanced clarity, e.g.,
sharpness, of the outputted sound. Such a configuration of hearing device 110 may
be beneficial, e.g., to provide for a better speech intelligibility. As another example,
when the audio processing algorithm related to the listening comfort is activated,
the audio processing may be performed in a way to provide for a more comfortable listening
experience, which may be accompanied, e.g., by a reduced clarity and/or sharpness
of the outputted sound. Such a configuration of hearing device 110 may be beneficial,
e.g., to provide for a better acoustic atmosphere in daily situations, e.g., not involving
social contacts.
[0076] In the illustrated example, when multiple input interfaces 631 - 634, 641, 642 are
presented to the user, the input interfaces may be presented in a predetermined layout,
e.g., in a predetermined order and/or size, on user interface 428. E.g., input interfaces
631 - 634, 641, 642 may be spatially and/or temporally separated in the predetermined
order. In the illustrated example, the multiple input interfaces 631 - 634, 641, 642
are presented to the user on a single screen, e.g., on touch screen 428. In some examples,
the multiple input interfaces 631 - 634, 641, 642 can then be spatially separated
in the predetermined order by displaying input interfaces 631 - 634 sequentially
in a defined direction, e.g., from the top of screen 428 to the bottom of screen 428.
In other examples, at least two of the multiple input interfaces 631 - 634, 641, 642
can be presented to the user on different screens, e.g., on touch screen 428. The
different screens may be accessible to the user by entering a dedicated user command,
e.g., on user interface 428, such as performing a manual gesture on user interface
428, e.g., swiping on user interface 428 with one or more fingers. In other examples,
the input interfaces 631 - 634, 641, 642 may be presented to the user in the form
of voice messages which may be outputted to the user in a temporally separated manner.
[0077] FIGS. 8A - 8C illustrate embodiments of an exemplary input support which may be presented
to the user depending on image information about a facial expression of the user when
interacting with user interface 157, 428 included in communication device 410. In
particular, the input support may be presented in a case in which the image information
is representative of at least one of the features of facial expression 621 indicating
an elevated cognitive effort and/or frustration and/or bafflement and/or confusion
and/or helplessness and/or elevated stress level and/or insecurity of user 611. In
the illustrated example, the input support is displayed on a display of the communication
device, e.g., on touch screen 428 of communication device 410. In other examples,
the input support may be outputted by output transducer 117, e.g., as a voice message.
[0078] In the example illustrated in FIG. 8A, the input support is provided by adding and/or modifying, e.g., highlighting, at least one input interface 631 - 634, 641, 642 for entering the user command. To this end, e.g., as illustrated, one or more texts 611 - 614, 651, 652 indicative of a respective configuration parameter adjustable by input interface 631 - 634, 641, 642 and/or one or more symbolic representations 616 - 619 of the configuration parameter may be emphasized, e.g., by increasing a font size and/or changing a color. In other examples, a size and/or color of input interface 631 - 634, 641, 642 may be changed, a layout on which input interface 631 - 634, 641, 642 is presented may be changed, and/or at least one input interface 631 - 634, 641, 642 which is not to be highlighted may be masked and/or removed.
[0079] In some implementations, when the input support comprises adding and/or modifying, e.g., highlighting, of the selected input interface, the input interface to be modified
may be selected, e.g., from input interfaces 631 - 634, 641, 642. In some instances,
the input interface to be modified is selected depending on sensor data. To this end,
sensor data received from at least one of input transducer 115, displacement sensor
136, environmental sensor 130, 131, 132, physiological sensor 133, 134, 135, location
sensor 138, and clock 139 may be employed. In the illustrated example, input interface
633 for adjusting a configuration parameter related to noise reduction is highlighted.
[0080] For example, selecting input interface 633 to be added and/or modified may be based
on sensor data received from input transducer 115. To illustrate, when the sensor
data provided by input transducer 115, e.g., an audio signal indicative of a sound
in the user's environment, is determined to have a rather low signal-to-noise ratio
(SNR), input interface 633 may be selected to be highlighted. As another example,
selecting input interface 633 to be added and/or modified may be based on sensor data
received from environmental sensor 130 - 132. To illustrate, when the sensor data
provided by environmental sensor 130 - 132 is indicative of a rather noisy environment
and/or acoustic scene, input interface 633 may be selected to be highlighted. As another
example, selecting input interface 633 to be added and/or modified may be based on
sensor data received from physiological sensor 133 - 135. To illustrate, when the sensor data provided by physiological sensor 133 - 135 is indicative of a medical
emergency of the user, input interface 633 may be selected to be highlighted. As another
example, selecting input interface 633 to be added and/or modified may be based on
sensor data received from displacement sensor 136. To illustrate, when the sensor
data provided by displacement sensor 136 is indicative of the user resting in a rather
static position, e.g., being in a calm state, input interface 633 may be selected
to be highlighted. Other examples, in which input interface 633 can be selected to be highlighted, include a current location of the user, e.g., provided by location sensor 138, and/or a current time, e.g., provided by clock 139.
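A non-limiting sketch of such a sensor-dependent selection is given below; the thresholds and the way the sensor readings are condensed into scalar values are assumptions for illustration only:

```python
# Hypothetical sketch: selecting the input interface to be highlighted
# from sensor data, following the examples of paragraph [0080].
from typing import Optional

def select_interface_to_highlight(
    snr_db: float,             # from input transducer 115
    ambient_level_db: float,   # from environmental sensor 130 - 132
    medical_emergency: bool,   # from physiological sensor 133 - 135
    movement_variance: float,  # from displacement sensor 136
) -> Optional[int]:
    # Rather low SNR, or a rather noisy acoustic scene: suggest input
    # interface 633 for the noise reduction parameter.
    if snr_db < 5.0 or ambient_level_db > 75.0:
        return 633
    # Physiological data indicative of a medical emergency of the user.
    if medical_emergency:
        return 633
    # Displacement data indicative of the user resting in a static position.
    if movement_variance < 0.01:
        return 633
    return None  # no input interface selected for highlighting
```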
[0081] In some implementations, when the input support comprises presenting of a selected
input interface, the input interface may be selected by predicting the input interface,
e.g., based on logged user commands and/or based on sensor data, which may be provided
by any of sensors 130 - 136, 138, 139, and/or based on an audio signal. For instance,
previously entered user commands and/or sensor data and/or audio signals may be logged
in memory 113 and/or in a memory of communication device 150. To illustrate, the user
command may be predicted based on logged user commands and/or sensor data and/or audio
signals, which have been collected in a database. In some implementations, the database is included in a look-up table from which the predicted user command can be outputted.
In some implementations, a machine learning (ML) algorithm, which outputs the predicted
user command, may be trained with the database.
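The look-up variant of such a prediction may be sketched as follows; the context key, the CommandLog class, and the counting scheme are illustrative assumptions, and a trained ML classifier could take the place of the look-up table:

```python
# Hypothetical sketch: predicting the input interface from logged user
# commands, keyed by a coarse context derived from sensor data.
from collections import Counter, defaultdict
from typing import Hashable, Optional

class CommandLog:
    def __init__(self) -> None:
        # context key -> how often each input interface was used there
        self._log = defaultdict(Counter)

    def record(self, context: Hashable, interface_id: int) -> None:
        self._log[context][interface_id] += 1

    def predict(self, context: Hashable) -> Optional[int]:
        counts = self._log.get(context)
        return counts.most_common(1)[0][0] if counts else None

log = CommandLog()
log.record(("noisy", "evening"), 633)
log.record(("noisy", "evening"), 633)
log.record(("quiet", "morning"), 631)
assert log.predict(("noisy", "evening")) == 633  # most frequent in this context
```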
[0082] In the example illustrated in FIG. 8B, the input support is provided by presenting
an input option of the user command. The selected input option may thus represent
a possible adjustment of the configuration parameter. In particular, input option
662 may be presented as a possible position of slider 632, e.g., along graphical boundary
622. In other examples, the input option may be presented by adding and/or highlighting
one or more push buttons 641, 642 corresponding to the possible adjustment. In other
examples, the input option may be presented as a suggestion of a number and/or text to be entered by the user.
[0083] In some implementations, when the input support comprises presenting an input option
of the user command, the input option to be presented may be determined beforehand.
In some instances, the input option to be presented is determined depending on sensor data, which may be provided by any of sensors 130 - 136, 138, 139, and/or an audio
signal, as described above. In some implementations, when the input support comprises
presenting an input option of the user command, the input option may be determined
by predicting the input option, e.g., based on logged user commands and/or based on
logged sensor data, which may be provided by any of sensors 130 - 136, 138, 139, and/or
based on logged audio signals. For instance, previously entered user commands and/or
sensor data and/or audio signals may be logged in memory 113 and/or in a memory of
communication device 150. E.g., the logged user commands and/or sensor data and/or
audio signals may be collected in a database which may be included in a look-up table and/or used to train an ML algorithm, as described above.
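As a non-limiting illustration of predicting an input option, the sketch below suggests a position of slider 632 as an average of positions previously chosen under comparable logged sensor data; the tolerance and the data layout are assumptions:

```python
# Hypothetical sketch: predicting input option 662, a possible position
# of slider 632, from logged (sensor reading, slider position) pairs.
from typing import List, Optional, Tuple

def predict_slider_position(
    logged: List[Tuple[float, float]],  # (ambient level in dB, slider position)
    current_ambient_db: float,
    tolerance_db: float = 5.0,
) -> Optional[float]:
    similar = [pos for db, pos in logged if abs(db - current_ambient_db) <= tolerance_db]
    if not similar:
        return None  # no comparable context has been logged yet
    return sum(similar) / len(similar)  # suggested position for input option 662
```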
[0084] In the example illustrated in FIG. 8C, the input support is provided by outputting
one or more support messages 671, 672 to the user. Support message 671, 672 may provide
additional information, e.g., about one or more configuration parameters, for entering
the user command to the user. E.g., the additional information may include an explanation
and/or illustration of the configuration parameter. In the illustrated example, support
message 671, 672 is outputted in a text form, e.g., on a display of the communication
device, e.g., on touch screen 428 of communication device 410. In other examples,
support message 671, 672 may be outputted by output transducer 117, e.g., as a voice
message.
[0085] Furthermore, as illustrated, the input support can be provided by highlighting at
least one input interface 641, 642 by means of an allocation 673, 674 of support message
671, 672 to input interface 641, 642. As illustrated, the allocation may be outputted
on display 428, e.g., as a graphic symbol such as an arrow 673, 674. In other examples,
the allocation may be outputted by output transducer 117, e.g., as a voice message
referring to at least one input interface 641, 642. As further illustrated, highlighting
of input interface 641, 642 may comprise masking and/or removing at least another
input interface 631 - 634 from the user interface, e.g., display 428. The masked and/or
removed input interface 631 - 634 may then be made accessible to the user, e.g., by
a user command which may be entered via the user interface. E.g., as illustrated,
the user command may be implemented as a manual gesture, e.g., swiping on the user
interface. The user command may be indicated to the user, e.g., by prompting a text
message 677 on the user interface and/or a voice message outputted by output transducer
117.
[0086] FIG. 9 illustrates a block flow diagram for an exemplary method of adjusting a configuration
of a hearing device 110, 210. The method may be executed by processor 112, 152 included
in hearing device 110, 210 and/or communication device 150. At operation S11, querying
of a user command to be entered by the user via a user interface included in the communication
device is initiated. At operation S12, image information 711 representative of a facial
expression of the user is received. At operation S15, depending on the facial expression,
presenting of an input support to the user is initiated.
[0087] In particular, at operation S13, after receiving image information 711 at S12, it
may be determined whether the facial expression indicated by image information 711
corresponds to one of at least a first type of facial expression, and a second type
of facial expression. The first type of facial expression may be indicative of, e.g.,
a certain frustration and/or bafflement and/or confusion and/or astonishment and/or
helplessness and/or stress level and/or insecurity of the user. The second type of
facial expression may be indicative of a situation in which the user is in a mental state different from the first type. As another example, the first type of facial expression
may be indicative of a cognitive and/or mental load of the user above a threshold,
and the second type of facial expression may be indicative of a cognitive and/or mental
load of the user below the threshold. In a case in which the facial expression corresponds
to the first type, presenting of the input support to the user is initiated at S15.
In a case in which the facial expression corresponds to the second type, presenting
of the input support to the user is not initiated. Instead, further image information
may be received at S12.
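The flow of FIG. 9 may be summarized by the following non-limiting sketch; the classifier for the facial expression is a placeholder, and the threshold and the dictionary-based image representation are assumptions made for illustration:

```python
# Hypothetical sketch of the flow of FIG. 9: query the user command (S11),
# receive image information (S12), classify the facial expression (S13),
# and present input support (S15) only for the first type of expression.
from typing import Callable

FIRST_TYPE, SECOND_TYPE = "first", "second"

def classify_expression(image_information: dict) -> str:
    # Placeholder: a real implementation would relate the image information
    # to features such as eyebrow position, pupil movement, forehead
    # wrinkling, or mouth shape. Here, a precomputed score above a
    # threshold stands in for an elevated cognitive and/or mental load.
    return FIRST_TYPE if image_information.get("cognitive_load", 0.0) > 0.7 else SECOND_TYPE

def adjustment_flow(
    query_user_command: Callable[[], None],
    receive_image: Callable[[], dict],
    present_input_support: Callable[[], None],
) -> None:
    query_user_command()                              # S11
    while True:
        image = receive_image()                       # S12
        if classify_expression(image) == FIRST_TYPE:  # S13
            present_input_support()                   # S15
            break
        # Second type: no input support; receive further image information.
```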
[0088] FIG. 10 illustrates a block flow diagram for another exemplary method of adjusting
a configuration of a hearing device 110, 210. After querying of the user command at
S11, a user command 722 is received at operation S21. Subsequently, at operation S22,
at least one configuration parameter of the hearing device is adjusted depending on
the user command. In particular, receiving of user command 722 at S21 and adjusting
the configuration parameter depending on user command 722 at S22 may be executed in parallel with and/or independently of operations S12, S13, and S15.
[0089] FIG. 11 illustrates a block flow diagram for another exemplary method of adjusting
a configuration of a hearing device 110, 210. At operation S33, which is executed
subsequent to receiving image information 711 at S12 and receiving user command 722
at S21, it is determined whether the facial expression indicated by image information
711 corresponds to the first or second type of facial expression. Subsequently, adjusting
of the configuration parameter of the hearing device is only initiated at S22 in a
case in which the facial expression corresponds to the second type. In a case in which
it is determined at S33 that the facial expression corresponds to the first type,
adjusting of the configuration parameter at S22 is not initiated. Instead, presenting
of the input support to the user is initiated at S15.
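The gating of FIG. 11 may likewise be sketched as follows, with the expression type assumed to have been determined at S33, e.g., by a classifier as in the sketch for FIG. 9; the function signature is an illustrative assumption:

```python
# Hypothetical sketch of FIG. 11: the configuration parameter is only
# adjusted (S22) for the second type of facial expression; otherwise
# input support is presented (S15) instead.
from typing import Callable

FIRST_TYPE, SECOND_TYPE = "first", "second"

def handle_user_command(
    user_command: dict,
    expression_type: str,                      # result of the determination at S33
    adjust_parameter: Callable[[dict], None],
    present_input_support: Callable[[], None],
) -> None:
    if expression_type == SECOND_TYPE:
        adjust_parameter(user_command)         # S22: adjustment initiated
    else:
        present_input_support()                # S15: adjustment not initiated
```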
[0090] FIG. 12 illustrates a block flow diagram for another exemplary method of adjusting
a configuration of a hearing device 110, 210. At operation S44, which may be executed before or subsequent to the determining at S13, S33 whether the facial expression corresponds to the first or second type, sensor data 733 is received. The input support,
which is initiated to be presented at S15, can then also depend on the sensor data
which may be provided by any sensor 130 - 136, 138, 139. For instance, any type and/or
implementation of the input support may be determined based on the sensor data.
[0091] In some examples, an input interface is selected from a plurality of input interfaces
depending on the sensor data. E.g., the input support which is presented at S15 may
then comprise adding and/or modifying of the selected input interface. E.g., as illustrated
in FIG. 8A, the input support which is presented at S15 may then comprise modifying
the selected input interface 633 by a highlighting of the selected input interface.
[0092] In some examples, an input option of the user command representing a possible adjustment
of the configuration parameter is determined depending on the sensor data. E.g., the
input support which is presented at S15 may then comprise presenting of the determined
input option. E.g., as illustrated in FIG. 8B, the input support which is presented
at S15 may then comprise presenting the determined input option 662 as a possible position of slider 632.
[0093] In some implementations, an audio signal may be received in addition to or in place of sensor data 733. In some implementations, the input interface to be selected and/or the input option to be determined is predicted based on previously entered user commands and/or previously received sensor data and/or previously received audio signals, which may be logged, e.g., in a database. The database may then be accessed in addition to or in place of sensor data 733.
[0094] While the principles of the disclosure have been described above in connection with
specific devices and methods, it is to be clearly understood that this description
is made only by way of example and not as limitation on the scope of the invention.
The above described preferred embodiments are intended to illustrate the principles
of the invention, but not to limit the scope of the invention. Various other embodiments
and modifications to those preferred embodiments may be made by those skilled in the
art without departing from the scope of the present invention that is solely defined
by the claims. In the claims, the word "comprising" does not exclude other elements
or steps, and the indefinite article "a" or "an" does not exclude a plurality. A single
processor or controller or other unit may fulfil the functions of several items recited
in the claims. The mere fact that certain measures are recited in mutually different
dependent claims does not indicate that a combination of these measures cannot be
used to advantage. Any reference signs in the claims should not be construed as limiting
the scope.
1. A method of adjusting a configuration of a hearing device configured to be worn at
an ear of a user to the individual needs of the user, wherein the hearing device (110,
210) is communicatively coupled to a communication device (150, 410, 510), the method
comprising
- initiating querying of a user command to be entered by the user via a user interface
(157, 428) included in the communication device (150, 410, 510), the user command
indicative of an adjustment desired by the user of at least one configuration parameter
indicative of a current configuration of the hearing device (110, 210); and
- adjusting, depending on the user command, the configuration parameter, characterized by
- receiving image information (711) representative of a facial expression (621, 622)
of the user; and
- initiating presenting, depending on the facial expression (621, 622), an input support
(613, 662, 671, 672) to the user facilitating inputting of the user command.
2. The method of claim 1, wherein the facial expression (621, 622) comprises at least
one of
- a position and/or orientation of the user's eyebrows;
- a dilation and/or position and/or movement of the user's pupils;
- a wrinkling of the user's forehead; and
- a shape of the user's mouth.
3. The method of any of the preceding claims, wherein the input support (613, 662, 671, 672) comprises
at least one of
- modifying, on the user interface (157, 428), at least one input interface (631 -
634, 641, 642) for inputting the user command;
- adding, on the user interface (157, 428), at least one input interface (631 - 634,
641, 642) for inputting the user command;
- presenting, on the user interface (157, 428), an input option (662) of the user
command representing a possible adjustment of the configuration parameter;
- changing, on the user interface (157, 428), a layout on which at least one input
interface (631 - 634, 641, 642) for inputting the user command is presented to the
user; and
- outputting a support message (671, 672) to the user.
4. The method of any of the preceding claims, further comprising
- selecting at least one input interface (631 - 634, 641, 642) for inputting the user
command, wherein said presenting the input support comprises a modifying and/or adding
of the selected input interface (631 - 634, 641, 642); and/or
- determining an input option (662) of the user command representing a possible adjustment
of the configuration parameter, wherein said presenting the input support comprises
presenting of the determined input option (662).
5. The method of any of the preceding claims, further comprising
- receiving sensor data (733) from a sensor (115, 118, 130 - 136, 138, 139)
including
an input transducer (115) configured to provide at least part of the sensor data (733)
as an audio signal indicative of sound detected in the environment of the user; and/or
a displacement sensor (136) configured to provide at least part of the sensor data
(733) as displacement data indicative of a displacement of the hearing device; and/or
a location sensor (138) configured to provide at least part of the sensor data (733)
as location data indicative of a current location of the user; and/or
a clock (139) configured to provide at least part of the sensor data (733) as time
data indicative of a current time; and/or
a physiological sensor (133 - 135) configured to provide at least part of the sensor
data (733) as physiological data indicative of a physiological property of the user;
and/or
an environmental sensor (130 - 132) configured to provide at least part of the sensor
data (733) as environmental data indicative of a property of the environment of the
user,
wherein the input support (613, 662, 671, 672) is presented depending on the sensor
data (733).
6. The method of any of the preceding claims, further comprising
- determining an interaction time of the user with the user interface (157, 428),
wherein the input support (613, 662, 671, 672) is presented depending on the interaction
time.
7. The method of any of the preceding claims, further comprising
- receiving, from an audio input unit (114 - 116), an audio signal,
wherein the input support (613, 662, 671, 672) is presented depending on the audio
signal.
8. The method of any of the preceding claims, further comprising
- logging, in a memory (113), one or more user commands previously entered by the
user,
wherein an input option (662) of the user command representing a possible adjustment
of the configuration parameter is predicted based on the logged user commands; and/or
- logging, in a memory, one or more input interfaces (631 - 634, 641, 642) for inputting
the user command which have been previously used by the user to enter the user command,
wherein an input interface (631 - 634, 641, 642) is predicted based on the logged input interfaces.
9. The method of any of the preceding claims, wherein the configuration parameter comprises
at least one of
- an amplification of an audio signal outputted by the hearing device (110, 210);
- a control of a feedback of an audio signal outputted by the hearing device (110,
210);
- a property of a beamforming algorithm executed by the hearing device (110, 210);
- a property of a noise suppression algorithm executed by the hearing device (110,
210);
- a property of a communication port included in the hearing device (110, 210);
- a selection of an audio processing algorithm executed by the hearing device (110,
210);
- an enhancement of a speech content in an audio signal outputted by the hearing device
(110, 210); and
- an enhancement of a music content in an audio signal outputted by the hearing device
(110, 210).
10. The method of any of the preceding claims, wherein the user interface (157, 428) comprises
at least one of a slider, a touch screen, a push button, and a text and/or numerical input field allowing the user to input the desired adjustment.
11. The method of any of the preceding claims, wherein the image information (711) is
provided by an optical sensor (154, 155, 165, 455, 555, 556) included in the communication
device (150).
12. The method of any of the preceding claims, further comprising
- relating the image information (711) to previously recorded image information of
the user's face and/or to previously recorded image information of people different
from the user.
13. The method of any of the preceding claims, wherein the communication device (150,
410, 510) comprises a display (428) and the input support (613, 662, 671, 672) is
displayed on the display (428).
14. The method of any of the preceding claims, wherein the input support (613, 662, 671,
672) comprises a voice message outputted to the user by an output transducer (117)
included in the hearing device (110, 210).
15. A system for adjusting a configuration of a hearing device configured to be worn at
an ear of a user to the individual needs of a user, the system comprising a hearing
device (110, 210) configured to be worn at an ear of the user and a communication
device (150, 410, 510) communicatively coupled to the hearing device (110, 210), wherein
the hearing device (110, 210) and/or the communication device (150, 410, 510) comprises
a processor (112, 152) configured to
- initiate querying of a user command to be entered by the user via a user interface
(157, 428) included in the communication device (150, 410, 510), the user command
indicative of an adjustment desired by the user of at least one configuration parameter
indicative of a current configuration of the hearing device (110, 210); and
- adjust, depending on the user command, the configuration parameter, characterized in that the processor (112, 152) is further configured to
- receive image information (711) representative of a facial expression (621, 622)
of the user; and
- initiate presenting, depending on the facial expression (621, 622), an input support
(613, 662, 671, 672) to the user facilitating inputting of the user command.