TECHNICAL FIELD
[0001] The disclosure relates to a method of operating a hearing device configured to be worn
at an ear of a user, according to the preamble of claim 1. The disclosure further
relates to a hearing device, according to the preamble of claim 15.
BACKGROUND
[0002] Hearing devices may be used to improve the hearing capability or communication capability
of a user, for instance by compensating a hearing loss of a hearing-impaired user,
in which case the hearing device is commonly referred to as a hearing instrument such
as a hearing aid, or hearing prosthesis. A hearing device may also be used to output
sound based on an audio signal which may be communicated by a wire or wirelessly to
the hearing device. A hearing device may also be used to reproduce a sound in a user's
ear canal detected by an input transducer such as a microphone or a microphone array.
The reproduced sound may be amplified to account for a hearing loss, such as in a
hearing instrument, or may be output without accounting for a hearing loss, for instance
to provide for a faithful reproduction of detected ambient sound and/or to add audio
features of an augmented reality to the reproduced ambient sound, such as in a hearable.
A hearing device may also provide for a situational enhancement of an acoustic scene,
e.g. beamforming and/or active noise cancelling (ANC), with or without amplification
of the reproduced sound. A hearing device may also be implemented as a hearing protection
device, such as an earplug, configured to protect the user's hearing. Different types
of hearing devices configured to be worn at an ear include earbuds, earphones,
hearables, and hearing instruments such as receiver-in-the-canal (RIC) hearing aids,
behind-the-ear (BTE) hearing aids, in-the-ear (ITE) hearing aids, invisible-in-the-canal
(IIC) hearing aids, completely-in-the-canal (CIC) hearing aids, cochlear implant systems
configured to provide electrical stimulation representative of audio content to a
user, bimodal hearing systems configured to provide both amplification and electrical
stimulation representative of audio content to a user, or any other suitable hearing
prostheses. A hearing system comprising two hearing devices configured to be worn
at different ears of the user is sometimes also referred to as a binaural hearing
device. A hearing system may also comprise a hearing device, e.g., a single monaural
hearing device or a binaural hearing device, and a user device, e.g., a smartphone
and/or a smartwatch, communicatively coupled to the hearing device.
[0003] Hearing devices are often employed in conjunction with communication devices, such
as smartphones or tablets, for instance when listening to sound data processed by
the communication device and/or during a phone conversation operated by the communication
device. More recently, communication devices have been integrated with hearing devices
such that the hearing devices at least partially comprise the functionality of those
communication devices. A hearing system may comprise, for instance, a hearing device
and a communication device.
[0004] In recent times, some hearing devices are also increasingly equipped with different
sensor types. Traditionally, those sensors often include an input transducer to detect
a sound, e.g., a sound detector such as a microphone or a microphone array. An amplified
and/or signal processed version of the detected sound may then be outputted to the
user by an output transducer, e.g., a receiver, loudspeaker, or electrodes to provide
electrical stimulation representative of the outputted signal. In an effort to provide
the user with even more information about himself and/or the ambient environment,
various other sensor types are progressively implemented, in particular sensors which
are not directly related to the sound reproduction and/or amplification function of
the hearing device. Those sensors include inertial sensors, such as accelerometers,
allowing the user's movements to be monitored. Physiological sensors, such as optical sensors
and bioelectric sensors, are mostly employed for monitoring the user's health.
[0005] Modern hearing devices provide several features that aim to facilitate speech intelligibility,
improve sound quality, reduce noise level, etc. Many of such sound cleaning features
are designed to benefit the hearing device user's hearing performance in very specific
situations. In order to activate the functionalities only in the situations where
benefit can be expected, an automatic steering system is often implemented which activates
sound cleaning features depending on a combination of, e.g., an acoustic environment
classification, a physical activity classification, a directional classification,
etc.
[0006] To provide for the acoustic environment classification, hearing devices have been
equipped with a sound classifier to classify an ambient sound. An input transducer
can provide an audio signal representative of the ambient sound. The sound classifier
can classify the audio signal, allowing different listening situations to be identified,
by determining a characteristic from the audio signal and assigning the audio signal
to at least one relevant class from a plurality of predetermined classes depending
on the characteristic. Usually, the sound classification does not directly modify
a sound output of the hearing device. Instead, different audio processing instructions
are stored in a memory of the hearing device specifying different audio processing
parameters for a processing of the audio signal, wherein the different classes are
each associated with one of the different audio processing instructions. After assigning
the audio signal to one or more classes, the one or more associated audio processing
instructions are executed. The audio processing parameters specified by the audio
processing instructions can then provide a processing of the audio signal customized
for the particular listening situation corresponding to the at least one class identified
by the classifier. The different listening situations may comprise, for instance,
different classes of listening conditions and/or different classes of sounds. For
example, the different classes may comprise speech and/or nonspeech and/or music and/or
traffic noise and/or other ambient noise.
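To illustrate the association between attributed classes and stored audio processing instructions, the following Python sketch may be considered; the class names and parameter values are purely hypothetical and not part of the disclosure:

```python
# Sketch of class-dependent audio processing instructions; all class names
# and parameter values are hypothetical, not taken from the disclosure.
AUDIO_PROCESSING_INSTRUCTIONS = {
    "speech_in_quiet": {"gain_db": 6.0, "nc_strength": 0.0, "beamformer": "off"},
    "speech_in_noise": {"gain_db": 8.0, "nc_strength": 0.7, "beamformer": "front"},
    "music":           {"gain_db": 3.0, "nc_strength": 0.0, "beamformer": "off"},
    "traffic_noise":   {"gain_db": 4.0, "nc_strength": 0.9, "beamformer": "front"},
}

def select_instructions(attributed_classes):
    """Return the audio processing parameters associated with the attributed class(es).

    If several classes are attributed, the first one with stored instructions
    wins here; a real system could merge the per-class parameter sets instead."""
    for cls in attributed_classes:
        if cls in AUDIO_PROCESSING_INSTRUCTIONS:
            return AUDIO_PROCESSING_INSTRUCTIONS[cls]
    return {"gain_db": 0.0, "nc_strength": 0.0, "beamformer": "off"}  # neutral default

print(select_instructions(["speech_in_noise"]))  # -> parameters for a noisy speech scene
```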
[0007] The classification may be based on a statistical evaluation of the audio signal,
as disclosed in EP 3 036 915 B1. More recently, machine learning (ML) algorithms have been employed to classify the
ambient sound. The classifier can be implemented by an artificial intelligence (AI)
chip which may be configured to classify the audio signal by at least one deep neural
network (DNN). The classifier may comprise a sound source separator configured to
separate sound generated by different sound sources, for instance a conversation partner,
passengers passing by the user, vehicles moving in the vicinity of the user such as
cars, airborne traffic such as a helicopter, a sound scene in a restaurant, a sound
scene including road traffic, a sound scene during public transport, a sound scene
in a home environment, and/or the like. Examples of such a sound source separator
are disclosed in international patent application Nos. PCT/EP 2020/051 734 and PCT/EP 2020/051 735, and in German patent application No. DE 2019 206 743.3.
[0008] Besides the acoustic environment classification, since the first digital hearing
aid was created in the 1980s, hearing devices have been increasingly equipped with
the capability to execute a wide variety of increasingly sophisticated audio processing
algorithms intended not only to account for an individual hearing loss of a hearing
impaired user but also to provide for a hearing enhancement in rather challenging
environmental conditions and according to individual user preferences. Those increased
functionalities, however, also make it increasingly difficult for the user to keep
track of different adjustment options for the processing of an audio signal performed
by the hearing device. Further, a handling of those adjustment options becomes increasingly
difficult and may require continuous switching between different screens on a user
device displaying those adjustment options. This can easily lead to a frustration
of a user when he is trying to adjust the hearing device functionalities to his own
preferences.
SUMMARY
[0009] It is an object of the present disclosure to avoid at least one of the above-mentioned
disadvantages and to provide for a more user-friendly and/or more intuitive adjustability
of the audio modification capabilities of a hearing device. It is another object to
automatically provide an adjustment option for the user which takes into account current
environmental conditions, such as, e.g., the acoustic surroundings and/or other location
or time-dependent characteristics, while still providing for the user's most likely preferences.
It is a further object to provide for an adjustability of the audio processing in
different situations which facilitates the user's handling of the hearing device.
It is a further object to provide a hearing device which is configured to operate
in such a manner.
[0010] At least one of these objects can be achieved by a method of operating a hearing
device configured to be worn at an ear of a user comprising the features of claim
1 and/or a computer-readable medium comprising the features of claim 14 and/or a hearing
device comprising the features of claim 15. Advantageous embodiments of the invention
are defined by the dependent claims and the following description.
[0011] Accordingly, the present disclosure proposes a method of operating a hearing device
configured to be worn at an ear of a user, the method comprising
- receiving an audio signal;
- modifying the audio signal, wherein the modifying can be adjusted depending on a user
command received from a user interface, wherein the user interface comprises a plurality
of adjustment options each associated with a respective user command; and
- outputting, by an output transducer included in the hearing device, an output audio
signal based on the modified audio signal so as to stimulate the user's hearing, wherein
the method further comprises
- predicting, depending on the audio signal, at least one of said adjustment options
which most likely conforms to a preference of the user; and
- initiating a presenting of the predicted adjustment option to the user.
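Purely as an illustration of the order of these steps, a minimal Python sketch is given below; all function names and the trivial gain-based modification are placeholders, not the claimed implementation:

```python
def modify_audio(audio_signal, user_command):
    """Placeholder modifying step: apply a gain taken from the user command, if any."""
    gain = user_command.get("gain", 1.0) if user_command else 1.0
    return [gain * sample for sample in audio_signal]

def operate_once(receive, predict, present, poll_command, output):
    """One pass over the recited steps; every callable is injected by the caller."""
    audio_signal = receive()                  # receiving an audio signal
    predicted_option = predict(audio_signal)  # predicting a likely preferred adjustment option
    present(predicted_option)                 # initiating a presenting of the predicted option
    user_command = poll_command()             # user command associated with an adjustment option
    modified = modify_audio(audio_signal, user_command)  # adjustable modifying of the signal
    output(modified)                          # outputting the output audio signal

# Example run with trivial stand-ins for the hardware and the user interface:
operate_once(
    receive=lambda: [0.1, -0.2, 0.3],
    predict=lambda signal: "volume_control",
    present=print,
    poll_command=lambda: {"gain": 2.0},
    output=print,
)
```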
[0012] In this way, by predicting the adjustment option which is most probably preferred
by the user and initiating its presenting to the user, a handling
of the hearing device can be simplified for the user. In particular, the user preferences
can be accounted for depending on the current environmental conditions by considering
the current acoustic surroundings, based on the received audio signal, in the decision
as to which adjustment option is to be presented to the user.
[0013] Independently, the present disclosure also proposes a non-transitory computer-readable
medium storing instructions that, when executed by a processor, cause a hearing device
to perform operations of the method.
[0014] Independently, the present disclosure also proposes a hearing device configured to
be worn at an ear of a user, the hearing device comprising
an audio input unit configured to provide an audio signal;
a processor configured to modify the audio signal, wherein the modifying can be adjusted
depending on a user command received from a user interface, wherein the user interface
comprises a plurality of adjustment options each associated with a respective user
command; and
an output transducer configured to output an output audio signal based on the modified
audio signal so as to stimulate the user's hearing, wherein the processor is further
configured to
- predict, depending on the audio signal, at least one of said adjustment options which
most likely conforms to a preference of the user; and
- initiate a presenting of the predicted adjustment option to the user.
[0015] Subsequently, additional features of some implementations of the method of operating
a hearing device and/or the computer-readable medium and/or the hearing device are
described. Each of those features can be provided solely or in combination with at
least another feature. The features can be correspondingly provided in some implementations
of the method and/or the hearing device.
[0016] In some implementations, the adjustment option is predicted based on previous user
commands and/or previous adjustment options associated with the user commands and
information about the audio signal when the user command has been entered by the user,
which have been collected in a database. In some implementations, the database is
included in a look-up table from which the predicted adjustment option can be outputted.
In some implementations, the audio signal is input into a machine learning (ML) algorithm,
which outputs the predicted adjustment option, wherein the ML algorithm has been trained
with said database.
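A minimal sketch of such a database, here illustrated in Python with a look-up based on a class attributed to the audio signal, could look as follows; the record fields and class names are assumptions for illustration only:

```python
from dataclasses import dataclass, field

@dataclass
class PreferenceRecord:
    """One database entry: the audio context and the adjustment option the user chose."""
    audio_class: str        # class attributed to the audio signal at command time
    adjustment_option: str  # adjustment option associated with the entered user command

@dataclass
class PreferenceDatabase:
    records: list = field(default_factory=list)

    def add(self, audio_class, adjustment_option):
        self.records.append(PreferenceRecord(audio_class, adjustment_option))

    def lookup(self, audio_class):
        """Look-up-table style prediction: the option most often chosen by the
        user in situations with the same attributed class, or None if unknown."""
        matches = [r.adjustment_option for r in self.records if r.audio_class == audio_class]
        return max(set(matches), key=matches.count) if matches else None

db = PreferenceDatabase()
db.add("speech_in_noise", "beamformer_width")
db.add("speech_in_noise", "beamformer_width")
db.add("music", "spectral_balance")
print(db.lookup("speech_in_noise"))  # -> "beamformer_width"
```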
[0017] In some implementations, the method further comprises
- updating, when the received user command is unassociated with the predicted adjustment
option, the database and/or ML algorithm with the received user command and/or the
adjustment options associated with the received user command and information about
the audio signal when the user command has been entered by the user.
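Continuing the PreferenceDatabase sketch above, the update step could, for instance, be guarded as follows (a hypothetical policy, for illustration only):

```python
def maybe_update(db, predicted_option, received_option, audio_class):
    """Store the user's choice only when the received user command was not
    associated with the predicted adjustment option."""
    if received_option != predicted_option:
        db.add(audio_class, received_option)
```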
[0018] In some implementations, the method further comprises
- classifying the audio signal by attributing at least one class from a plurality of
predetermined classes to the audio signal, wherein the information about the audio
signal comprises the class attributed to the audio signal.
[0019] In some implementations, the method further comprises
- receiving, from a sensor included in the hearing device, sensor data, wherein the
adjustment option is predicted also depending on the sensor data.
[0021] In some implementations, the sensor comprises
a displacement sensor configured to provide at least part of the sensor data as displacement
data indicative of a displacement of the hearing device; and/or
a location sensor configured to provide at least part of the sensor data as location
data indicative of a current location of the user; and/or
a physiological sensor configured to provide at least part of the sensor data as physiological
data indicative of a physiological property of the user; and/or
an environmental sensor configured to provide at least part of the sensor data as
environmental data indicative of a property of the environment of the user.
[0022] In some implementations, the adjustment option is also predicted based on information
about previous sensor data when the user command has been entered by the user, which
is also collected in the database.
[0023] In some implementations, the method further comprises
- classifying the sensor data by attributing at least one class from a plurality of
predetermined classes to the sensor data, wherein the information about the sensor
data comprises the class attributed to the sensor data.
[0024] In some implementations, the method further comprises
- transmitting, to a user device, the predicted adjustment option, wherein the user
device is configured to present the predicted adjustment option to the user. E.g.,
the user device may be a portable device and/or a communication device such as a smartphone,
tablet, smartwatch, and/or the like.
[0025] In some implementations, the method further comprises
- receiving, after the predicted adjustment option has been transmitted to the user device,
the user command from the user device.
[0026] In some implementations, the predicted adjustment option is presented to the user
in addition to at least one remaining adjustment option different from the predicted
adjustment option, wherein the predicted adjustment option is spatially and/or temporally
separated from the remaining adjustment option.
[0027] In some implementations, the predicted adjustment option is exclusively presented
to the user at the expense of at least one remaining adjustment option different from
the predicted adjustment option, wherein the remaining adjustment option is presented
to the user upon request of the user.
[0028] In some implementations, the predicted adjustment option is presented to the user
on a screen of the user device.
[0029] In some implementations, at least two different adjustment options are presented
on separate screens on the user device. E.g., the predicted adjustment option may
be presented on a first screen on the user device, and at least one remaining adjustment
option different from the predicted adjustment option may be presented on a second
screen on the user device different from the first screen. E.g., the user device may
be configured to switch between presenting the first screen and the second screen
depending on a request by the user.
[0030] In some implementations, the adjustment options comprise at least one of
- a volume control;
- a property of a beamformer, e.g., a width and/or directivity of the beamformer;
- a property of a noise cancellation, e.g., a strength of the noise cancellation; and
- a spectral balance, e.g., a spectral balance of music content contained in the audio
signal.
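For illustration, these adjustment options could be grouped in a simple data structure, e.g. as in the following Python sketch; the field names, units, and default values are assumptions:

```python
from dataclasses import dataclass

@dataclass
class AdjustmentOptions:
    """Hypothetical container for the adjustment options named above."""
    volume_db: float = 0.0         # volume control, relative to the fitted gain
    beam_width_deg: float = 120.0  # width of the beamformer
    beam_direction: str = "front"  # directivity of the beamformer
    nc_strength: float = 0.5       # strength of the noise cancellation, 0..1
    spectral_tilt: float = 0.0     # spectral balance, e.g. bass (-1) to treble (+1)
```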
[0031] In some implementations, the audio signal is indicative of a sound in the ambient
environment of the user. In some implementations, the audio signal is received from
an input transducer, e.g., a microphone or a microphone array, included in the hearing
device. In some implementations, the audio signal is received by an audio signal receiver
included in the hearing device, e.g., via radio frequency (RF) communication. In some
implementations, the audio signal is received from a remote microphone, e.g., a table
microphone and/or a clip-on microphone. In some implementations, the hearing device
comprises an input transducer configured to provide the audio signal indicative of
a sound detected in the environment of the user. In some implementations, the hearing
device comprises an audio signal receiver configured to receive the audio signal from
a remote audio signal source.
BRIEF DESCRIPTION OF THE DRAWINGS
[0032] Reference will now be made in detail to embodiments, examples of which are illustrated
in the accompanying drawings. The drawings illustrate various embodiments and are
a part of the specification. The illustrated embodiments are merely examples and do
not limit the scope of the disclosure. Throughout the drawings, identical or similar
reference numbers designate identical or similar elements. In the drawings:
- Fig. 1
- schematically illustrates an exemplary hearing device;
- Fig. 2
- schematically illustrates an exemplary sensor unit comprising one or more sensors
which may be implemented in the hearing device illustrated in Fig. 1;
- Fig. 3
- schematically illustrates an embodiment of the hearing device illustrated in Fig.
1 as a RIC hearing aid;
- Fig. 4
- schematically illustrates a user interface implemented in a user device;
- Fig. 5
- schematically illustrates an exemplary algorithm of operating a hearing device according
to principles described herein;
- Figs. 6-8
- schematically illustrate adjustment options which may be presented to the user on
a user interface; and
- Fig. 9
- schematically illustrates an exemplary method of operating a hearing device according
to principles described herein.
DETAILED DESCRIPTION OF THE DRAWINGS
[0033] FIG. 1 illustrates an exemplary hearing device 110 configured to be worn at an ear
of a user. Hearing device 110 may be implemented by any type of hearing device configured
to enable or enhance hearing or a listening experience of a user wearing hearing device
110. For example, hearing device 110 may be implemented by a hearing aid configured
to provide an amplified version of audio content to a user, a sound processor included
in a cochlear implant system configured to provide electrical stimulation representative
of audio content to a user, a sound processor included in a bimodal hearing system
configured to provide both amplification and electrical stimulation representative
of audio content to a user, any other suitable hearing prosthesis, or an earbud,
an earphone, or a hearable.
[0034] Different types of hearing device 110 can also be distinguished by the position at
which they are worn at the ear. Some hearing devices, such as behind-the-ear (BTE)
hearing aids and receiver-in-the-canal (RIC) hearing aids, typically comprise an earpiece
configured to be at least partially inserted into an ear canal of the ear, and an
additional housing configured to be worn at a wearing position outside the ear canal,
in particular behind the ear of the user. Some other hearing devices, as for instance
earbuds, earphones, hearables, in-the-ear (ITE) hearing aids, invisible-in-the-canal
(IIC) hearing aids, and completely-in-the-canal (CIC) hearing aids, commonly comprise
such an earpiece to be worn at least partially inside the ear canal without an additional
housing worn at a different position at the ear.
[0035] As shown, hearing device 110 includes a processor 112 communicatively coupled to
a memory 113, an audio input unit 114, a user interface 129, and an output transducer
117. Audio input unit 114 may comprise at least one input transducer 115 and/or an
audio signal receiver 116 configured to provide an input audio signal. User interface
129 may comprise at least one internal user interface 127 included in hearing device
110 and/or an external user interface 128 which may be included in a user device,
e.g., a portable device.
[0036] Hearing device 110 may further include a communication port 119. Hearing device 110
may further include a sensor unit 118 communicatively coupled to processor 112. Hearing
device 110 may include additional or alternative components as may serve a particular
implementation. Input transducer 115 may be implemented by any suitable device configured
to detect sound in the environment of the user and to provide an input audio signal
indicative of the detected sound, e.g., a microphone or a microphone array. Output
transducer 117 may be implemented by any suitable audio transducer configured to output
an output audio signal to the user, for instance a receiver of a hearing aid, an output
electrode of a cochlear implant system, or a loudspeaker of an earbud.
[0037] Processor 112 is configured to receive, from audio input unit 114, an audio signal.
E.g., when the audio signal is received from input transducer 115, the audio signal
may be indicative of a sound detected in the environment of the user and/or, when
the audio signal is received from audio signal receiver 116, the audio signal may
be indicative of a sound provided from a remote audio source such as, e.g., a remote
microphone and/or an audio streaming server. Processor 112 is further configured to
modify the audio signal, wherein the modifying can be adjusted depending on a user
command received from user interface 129, wherein the user interface comprises a plurality
of adjustment options each associated with a respective user command; and to control
output transducer 117 to output an output audio signal based on the modified audio
signal so as to stimulate the user's hearing. Processor 112 is also configured to
predict, depending on the audio signal, at least one of the adjustment options which
most likely conforms to a preference of the user; and to initiate a presenting of
the predicted adjustment option to the user. These and other operations, which may
be performed by processor 112, are described in more detail in the description that
follows.
[0038] Memory 113 may be implemented by any suitable type of storage medium and is configured
to maintain, e.g. store, data controlled by processor 112, in particular data generated,
accessed, modified and/or otherwise used by processor 112. For example, memory 113
may be configured to store instructions used by processor 112 to modify the audio
signal received from audio input unit 114, e.g., audio processing instructions in
the form of one or more audio processing algorithms. The audio processing algorithms
may comprise different audio processing instructions of processing the input audio
signal received from input transducer 115 and/or audio signal receiver 116. For instance,
the audio processing algorithms may provide for at least one of a gain model (GM)
defining an amplification characteristic, a noise cancelling (NC) algorithm, a wind
noise cancelling (WNC) algorithm, a reverberation cancelling (RevC) algorithm, a feedback
cancelling (FC) algorithm, a speech enhancement (SE) algorithm, a gain compression
(GC) algorithm, a noise cleaning algorithm, a binaural synchronization (BS) algorithm,
a beamforming (BF) algorithm, in particular static and/or adaptive beamforming, and/or
the like. A plurality of the audio processing algorithms may be executed by processor
112 in a sequence and/or in parallel to generate a processed audio signal.
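A minimal sketch of such a sequential execution of audio processing algorithms, with trivial stand-ins for a gain model and a noise cleaning step, might look as follows (illustrative only):

```python
def apply_chain(audio, algorithms):
    """Apply audio processing algorithms in sequence to produce the processed signal.

    Each algorithm is a callable mapping a list of samples to a list of samples;
    real GM/NC/BF implementations are outside the scope of this sketch."""
    for algorithm in algorithms:
        audio = algorithm(audio)
    return audio

def gain_model(audio):
    """Trivial stand-in for a gain model (GM): a flat amplification."""
    return [2.0 * s for s in audio]

def noise_gate(audio):
    """Trivial stand-in for a noise cleaning step: mute very quiet samples."""
    return [s if abs(s) > 0.01 else 0.0 for s in audio]

print(apply_chain([0.005, 0.1, -0.2], [noise_gate, gain_model]))  # [0.0, 0.2, -0.4]
```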
[0039] As another example, memory 113 may be configured to store instructions used by processor
112 to classify the input audio signal received from input transducer 115 and/or audio
signal receiver 116 by attributing at least one class from a plurality of predetermined
sound classes to the input audio signal. Exemplary classes may include, but are not
limited to, low ambient noise, high ambient noise, traffic noise, music, machine noise,
babble noise, public area noise, background noise, speech, nonspeech, speech in quiet,
speech in babble, speech in noise, speech from the user, speech from a significant
other, background speech, speech from multiple sources, quiet indoor, quiet outdoor,
speech in a car, speech in traffic, speech in a reverberating environment, speech
in wind noise, speech in a lounge, car noise, applause, music, e.g. classical music,
and/or the like. In some instances, the different audio processing instructions can
be associated with different classes.
[0040] As another example, memory 113 may be configured to store a database in which previous
user commands received from user interface 129 and/or previous adjustment options
associated with the user commands and/or information about the audio signal, e.g.,
at a time at which the user command has been entered by the user, are collected. As
another example, memory 113 may be configured to store instructions used by processor
112 to predict at least one of said adjustment options, e.g., a machine learning (ML)
algorithm, which may output the predicted adjustment option.
[0041] Memory 113 may comprise a non-volatile memory from which the maintained data may
be retrieved even after having been power cycled, for instance a flash memory and/or
a read only memory (ROM) chip such as an electrically erasable programmable ROM (EEPROM).
A non-transitory computer-readable medium may thus be implemented by memory 113. Memory
113 may further comprise a volatile memory, for instance a static or dynamic random
access memory (RAM).
[0042] As illustrated, hearing device 110 may further comprise a communication port 119.
Communication port 119 may be implemented by any suitable data transmitter and/or
data receiver and/or data transducer configured to exchange data with another device.
For instance, the other device may be another hearing device configured to be worn
at the other ear of the user than hearing device 110 and/or a communication device
such as a smartphone, smartwatch, tablet and/or the like. Communication port 119 may
be configured for wired and/or wireless data communication. For instance, data may
be communicated in accordance with a Bluetooth™ protocol and/or by any other type of radio frequency (RF) communication.
[0043] As illustrated, hearing device 110 may comprise an input transducer 115. Input transducer
115 may be implemented by any suitable device configured to detect sound in the environment
of the user, e.g., a microphone or a microphone array, and/or to detect sound inside
the ear canal of the user, e.g., an ear canal microphone, and to provide an
audio signal indicative of the detected sound. As illustrated, hearing device 110
may comprise an audio signal receiver 116. Audio signal receiver 116 may be implemented
by any suitable data receiver and/or data transducer configured to receive an input
audio signal from a remote audio source. For instance, the remote audio source may
be a wireless microphone, such as a table microphone, a clip-on microphone and/or
the like, and/or a portable device, such as a smartphone, smartwatch, tablet and/or
the like, and/or any other data transceiver configured to transmit the input audio
signal to audio signal receiver 116. E.g., the remote audio source may be a streaming
source configured for streaming the input audio signal to audio signal receiver 116.
Audio signal receiver 116 may be configured for wired and/or wireless data reception
of the input audio signal. For instance, the input audio signal may be received in
accordance with a Bluetooth™ protocol and/or by any other type of radio frequency (RF) communication.
[0044] As illustrated, hearing device 110 may comprise an internal user interface 127, which
may be included in hearing device 110 and configured to provide interaction data indicative
of an interaction of the user with hearing device 110, e.g., a touch sensor and/or
a push button and/or a slide and/or a toggle and/or a displacement sensor such as
an accelerometer and/or a speech detector configured to recognize speech and/or transform
speech into a user command. As illustrated, hearing device 110 may comprise an external
user interface 128, which may be included in a user device, which may be communicatively
coupled with hearing device 110, and configured to provide interaction data indicative
of an interaction of the user with the user device, which may then be transmitted
to hearing device 110 as a user command. E.g., the user device may be a portable device
and/or a communication device such as a smartphone, tablet, smartwatch, and/or the
like. E.g., the external user interface 128 may be implemented as a touch screen and/or
a push button and/or a displacement sensor and/or a speech detector, and/or the like.
[0045] As illustrated, hearing device 110 may comprise a sensor unit 118 comprising at least
one further sensor communicatively coupled to processor 112 in addition to input transducer
115. Some examples of a sensor which may be implemented in sensor unit 118 are illustrated
in Fig. 2.
[0046] As illustrated in FIG. 2, sensor unit 118 may include at least one environmental
sensor configured to provide environmental data indicative of a property of the environment
of the user, e.g., in addition to the audio signal provided by input transducer 115,
for example an optical sensor 130 configured to detect light in the environment and/or
a barometric sensor 131 and/or an ambient temperature sensor 132. Sensor unit 118
may include at least one physiological sensor configured to provide physiological
data indicative of a physiological property of the user, for example an optical sensor
133 and/or a bioelectric sensor 134 and/or a body temperature sensor 135. Optical
sensor 133 may be configured to emit light at a wavelength absorbable by an analyte
contained in blood such that the physiological sensor data comprises information about
the blood flowing through tissue at the ear. E.g., optical sensor 133 can be configured
as a photoplethysmography (PPG) sensor such that the physiological sensor data comprises
PPG data, e.g. a PPG waveform. Bioelectric sensor 134 may be implemented as a skin
impedance sensor and/or an electrocardiogram (ECG) sensor and/or an electroencephalogram
(EEG) sensor and/or an electrooculography (EOG) sensor.
[0047] Sensor unit 118 may include a movement sensor 136 configured to provide movement
data indicative of a movement of the user, for example an accelerometer and/or a gyroscope
and/or a magnetometer. Sensor unit 118 may include at least one location sensor 138
configured to provide location data indicative of a current location of the user,
for instance a GPS sensor. Sensor unit 118 may include at least one clock 139 configured
to provide time data indicative of a current time. Context data may be defined as
data indicative of a local and/or temporal context of the data provided by other sensors
115, 130 - 136. Context data may comprise the location data and/or the time data provided
by location sensor 138 and/or clock 139. Context data may also be received from an
external device via communication port 119, e.g., from a communication device. E.g.,
one or more of sensors 115, 130 - 136 may then be included in the communication device.
Sensor unit 118 may include further sensors providing sensor data indicative of a
property of the user and/or the environment and/or the context.
[0048] FIG. 3 illustrates an exemplary implementation of hearing device 110 as a RIC hearing
aid 210. RIC hearing aid 210 comprises a BTE part 220 configured to be worn at an
ear at a wearing position behind the ear, and an ITE part 240 configured to be worn
at the ear at a wearing position at least partially inside an ear canal of the ear.
BTE part 220 comprises a BTE housing 221 configured to be worn behind the ear. BTE
housing 221 accommodates processor 112 communicatively coupled to input transducer
115 and audio signal receiver 116. BTE part 220 further includes a battery 227 as
a power source. BTE part 220 further includes internal user interface 127, which may
be implemented, e.g., at a surface of BTE housing 221. ITE part 240 is an earpiece
comprising an ITE housing 241 at least partially insertable in the ear canal. ITE
housing 241 accommodates output transducer 117. ITE part 240 may further include another
input transducer as an in-the-ear input transducer 145, e.g., an ear canal microphone,
configured to detect sound inside the ear canal and to provide an in-the-ear audio
signal indicative of the detected sound. BTE part 220 and ITE part 240 are interconnected
by a cable 251. Processor 112 is communicatively coupled to output transducer 117
and to in-the-ear input transducer 145 of ITE part 240 via cable 251 and cable connectors
252, 253 provided at BTE housing 221 and ITE housing 241. In some implementations,
at least one of sensors 130 - 139 is included in BTE part 220 and/or ITE part 240.
[0049] FIG. 4 illustrates exemplary implementations of a user device 410 which may be communicatively
coupled to hearing device 210, e.g., via communication port 119. For example, user
device 410 may be a portable device configured to be worn or carried by the user
and operable at a position remote from the ear at which hearing device 110 is worn.
User device 410 comprises a portable housing 411 which may be configured, e.g., to
be worn by the user on the user's body at a position remote from the ear at which
hearing device 110 is worn. In the illustrated example, portable device 410 is implemented
as a communication device, for example a smartphone, a tablet, a smartwatch, and/or
the like. Portable device 410 further comprises a user interface 428 implemented as
a touch sensor allowing the user to enter a user command which can be received by
processor 112 of hearing device 110 and/or a processor of the communication device
410 as user control data. For instance, as illustrated, user interface 428 may be
implemented as a touch screen operable to display information to the user. In other
examples, user interface 428 may be implemented by speech recognition allowing the
user to enter a user command with his voice.
[0050] FIG. 5 illustrates a functional block diagram of an exemplary audio signal processing
arrangement 501 that may be implemented by hearing device 110, 210. Arrangement 501
comprises at least one input transducer 502, which may be implemented by input transducer
115, and/or at least one audio signal receiver 504, which may be implemented by audio
signal receiver 116. The audio signal provided by input transducer 502 may be an analog
signal. The analog signal may be converted into a digital signal by an analog-to-digital
converter (ADC) 503. The audio signal provided by audio signal receiver 504 may be
an encoded signal. The encoded signal may be decoded into a decoded signal by a decoder
(DEC) 505. Arrangement 501 further comprises at least one output transducer 514, which
may be implemented by output transducer 117. Arrangement 501 further comprises at
least one user interface 516, which may be implemented by user interface 127, 128,
428. Arrangement 501 may further comprise a sensor unit 518, which may be implemented
by at least one of sensors 130 - 136, 138, 139 included in sensor unit 118.
[0051] Arrangement 501 may further comprise a classifier 517. Classifier 517 can be configured
to attribute at least one class to the audio signal provided by input transducer 502
and/or audio signal receiver 504 and/or at least one class to sensor data provided
by sensor unit 518. E.g., when the class is attributed to the audio signal, the class
attributed to the audio signal may include at least one of low ambient noise, high
ambient noise, traffic noise, music, machine noise, babble noise, public area noise,
background noise, speech, nonspeech, speech in quiet, speech in babble, speech in
noise, speech from the user, speech from a significant other, background speech, speech
from multiple sources, quiet indoor, quiet outdoor, speech in a car, speech in traffic,
speech in a reverberating environment, speech in wind noise, speech in a lounge, car
noise, applause, music, e.g. classical music, and/or the like. E.g., when the class
is attributed to the sensor data, which may be provided, e.g., by movement sensor 136,
the class attributed to the movement data may comprise at least one of the user walking,
running, standing, turning his head, and falling to the ground.
[0052] Arrangement 501 further comprises an adjustment option determination module 523,
a user command reception module 525, an audio modification adjustment module 527,
and an audio modification module 529. Modules 523, 525, 527, 529 may be executed by
at least one processor, e.g., by processor 112 of hearing device 110 and/or another
processor included in another hearing device, which
may be configured to be worn at a different ear of the user, and/or in a user device,
e.g., portable device 410.
[0053] As illustrated, the audio signal provided by input transducer 502, after it has been
converted into a digital signal by analog-to-digital converter 503, and/or the audio
signal provided by audio signal receiver 504, after it has been decoded by decoder
505, can be received by audio modification module 529. Audio modification module 529
is configured to modify the audio signal. E.g., audio modification module 529 may
comprise an audio signal processor, e.g., a digital signal processor (DSP) configured
to apply one or more audio processing algorithms on the audio signal to generate the
modified audio signal. As another example, audio modification module 529 may comprise
an amplifier configured to amplify the audio signal, e.g., the processed audio signal
after a processing performed by the audio signal processor, to generate the modified
audio signal. In a case in which a plurality of audio processing algorithms are applied
on the audio signal, the audio processing algorithms may be executed in a sequence
and/or in parallel to generate the modified audio signal. Based on the modified audio
signal, an output audio signal can be output by output transducer 514 so as to stimulate
the user's hearing. To this end, the modified audio signal may be converted into
an analog signal by a digital-to-analog converter (DAC) 515 before being provided
to output transducer 514.
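As a toy illustration of this signal path, the following sketch mimics the ADC, modification, and DAC stages with naive quantization; it is not intended to reflect the converters actually used:

```python
def adc(analog_samples, bits=16):
    """Toy analog-to-digital conversion: clip to [-1, 1] and quantize to integers."""
    q = (1 << (bits - 1)) - 1
    return [int(max(-1.0, min(1.0, s)) * q) for s in analog_samples]

def dac(digital_samples, bits=16):
    """Toy digital-to-analog conversion back to the [-1, 1] range."""
    q = (1 << (bits - 1)) - 1
    return [s / q for s in digital_samples]

def modify(digital_samples, gain=2):
    """Stand-in for audio modification module 529: an integer gain."""
    return [gain * s for s in digital_samples]

print(dac(modify(adc([0.1, -0.25, 0.4]))))  # roughly [0.2, -0.5, 0.8]
```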
[0054] Audio modification adjustment module 527 can be configured to adjust the modifying
of the audio signal executed by audio modification module 529, e.g., by adjusting
at least one of the audio processing algorithms which are executed by audio modification
module 529 and/or adjusting an amplification of the audio signal performed by audio
modification module 529. For instance, the adjustment of the modifying of the audio
signal may be performed depending on a user command received from user interface 516.
[0055] As illustrated, audio modification adjustment module 527 may also be configured to
receive the audio signal provided by input transducer 502, after it has been converted
into a digital signal by analog-to-digital converter 503, and/or the audio signal
provided by audio signal receiver 504, after it has been decoded by decoder 505. The
adjustment of the modifying of the audio signal may then also be performed depending
on the audio signal. In some instances, the audio signal may be classified by classifier
517. At least one class attributed to the audio signal by classifier 517 may then
be received by audio modification adjustment module 527. The adjustment of the modifying
of the audio signal may then also be based on the class attributed to the audio signal
by classifier 517.
[0056] As illustrated, audio modification adjustment module 527 may also be configured to
receive the sensor data provided by sensor unit 518. The adjustment of the modifying
of the audio signal may then also be performed depending on the sensor data. In some
instances, the sensor data may be classified by classifier 517. At least one class
attributed to the sensor data by classifier 517 may then be received by audio modification
adjustment module 527. The adjustment of the modifying of the audio signal may then
also be based on the class attributed to sensor data by classifier 517.
[0057] As illustrated, some examples of the adjustments of the modifying of the audio signal
performed by audio modification module 529 can comprise a volume control 531 and/or
a beamforming adjustment 532 and/or a noise cancelling adjustment 533 and/or a spectral
balance modification 534. Volume control 531 can be configured to adjust a volume
of the audio signal so as to change a level of the output audio signal output by output
transducer 514 so as to stimulate the user's hearing. For example, the volume may
be adjusted during an audio signal processing performed by the audio signal processor,
e.g., by adjusting an amplitude of the audio signal, and/or during an amplification
of the audio signal performed by the audio signal amplifier, e.g., by adjusting a
gain provided by the amplifier.
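For example, a volume control operating on the amplitude of the audio signal could be sketched as follows (gain expressed in dB, as an assumption):

```python
def apply_volume_db(audio, volume_db):
    """Volume control sketch: scale the signal amplitude by a gain given in dB."""
    gain = 10.0 ** (volume_db / 20.0)
    return [gain * sample for sample in audio]

print(apply_volume_db([0.1, -0.2], 6.0))  # +6 dB roughly doubles the amplitude
```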
[0058] Beamforming adjustment 532 can be configured to adjust a property of a beamforming
applied on the audio signal, e.g., during an audio signal processing performed by
the audio signal processor. For instance, the adjustment of the beamforming may comprise
at least one of turning the beamforming on or off and/or changing a beam width of
the beamforming and/or changing a directivity of the beamforming. E.g., when the directivity
of the beam points toward the front of the user, the directivity may be adjusted to
the side and/or back of the user.
[0059] Noise cancelling adjustment 533 can be configured to adjust a property of a noise
cancelling (NC) applied on the audio signal, e.g., during an audio signal processing
performed by the audio signal processor. E.g., an audio signal processing performed
by audio modification module 529 may provide for a cancelling and/or suppression and/or
cleaning of noise contained in the audio signal. For instance, the property of the
NC, which may be adjusted by noise cancelling adjustment 533, may include a type and/or
strength of the NC. E.g., different types of the NC may include general noise and/or
noise caused by a non-speech audio source and/or noise at a certain noise level and/or
frequency range and/or noise emitted from a specific audio source, e.g., traffic noise,
aircraft noise, construction site noise, etc. Different strengths of the NC may indicate
a content of the noise in the modified audio signal, e.g., an amount of noise which
is removed and/or still present in the modified audio signal.
[0060] Spectral balance modification 534 can be configured to adjust a spectral balance
of the audio signal and/or a spectral balance of a specific content in the audio signal.
The spectral balance can be indicative of a frequency content of the audio signal.
The frequency content may comprise a power of one or more frequencies and/or frequency
bands, e.g., relative to a power of one or more other frequencies and/or frequency
bands. The frequency range of the frequency content may comprise, e.g., a range of
audible frequencies, e.g., from 20 Hz to 20,000 Hz, and/or a range of inaudible frequencies.
A specific content in the audio signal, for which the spectral balance may be modified,
may include, e.g., a music content and/or a speech content, e.g., an own voice content
and/or a voice content of another person and/or a significant other.
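One possible (assumed) realization of such a spectral balance modification is a spectral tilt applied in the frequency domain, sketched below with NumPy; the 1 kHz reference frequency is an arbitrary choice:

```python
import numpy as np

def adjust_spectral_balance(audio, sample_rate, tilt_db_per_octave):
    """Spectral balance sketch: tilt the spectrum by a constant dB-per-octave slope.

    A positive tilt emphasizes treble, a negative tilt emphasizes bass; the tilt
    is applied relative to 1 kHz, which is an arbitrary reference choice."""
    spectrum = np.fft.rfft(audio)
    freqs = np.fft.rfftfreq(len(audio), d=1.0 / sample_rate)
    octaves = np.log2(np.maximum(freqs, 1.0) / 1000.0)  # octaves above/below 1 kHz
    gains = 10.0 ** (tilt_db_per_octave * octaves / 20.0)
    return np.fft.irfft(spectrum * gains, n=len(audio))

# Example: slightly emphasize treble in one 1024-sample block at 44.1 kHz.
block = np.sin(2 * np.pi * 440.0 * np.arange(1024) / 44100.0)
brighter = adjust_spectral_balance(block, 44100.0, tilt_db_per_octave=3.0)
```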
[0061] User command reception module 525 can be configured to receive, from user interface
127 - 129, 428, 516, a user command. E.g., the user command may be received from internal
user interface 127 and/or from external user interface 128, 428. E.g., after the user
has entered the user command in user device 410 via user interface 428, the user command
may be transmitted from user device 410 to hearing device 110, e.g., via communication
port 119. Depending on the user command, user command reception module 525 can be
configured to control audio modification adjustment module 527 to perform the adjusting
of the modifying of the audio signal executed by audio modification module 529, e.g.,
by adjusting at least one of the audio processing algorithms which are executed by
audio modification module 529 and/or adjusting an amplification of the audio signal
performed by audio modification module 529.
[0062] User interface 127 - 129, 428, 516 comprises a plurality of adjustment options each
associated with a respective user command. Adjustment option determination module
523 can be configured to predict, depending on the audio signal, at least one of the
adjustment options which most likely conforms to a preference of the user. To this
end, as illustrated, the audio signal provided by input transducer 502, after it has
been converted into a digital signal by analog-to-digital converter 503, and/or the
audio signal provided by audio signal receiver 504, after it has been decoded by decoder
505, can also be received by adjustment option determination module 523. After predicting
the adjustment option, adjustment option determination module 523 can be configured
to initiate a presenting of the predicted adjustment option to the user. E.g., as
illustrated, information about the predicted adjustment option may then be transmitted
to user interface 516. Correspondingly, user interface 127 - 129, 428, 516 may then
be configured, based on the transmitted information, to present the predicted adjustment
option to the user.
[0063] The adjustment option may be predicted based on previous user commands and/or previous
adjustment options associated with the user commands. The information about the previous
user commands and/or previous adjustment options may be collected, e.g., by adjustment
option determination module 523, in a database. In addition, information about the
audio signal when the user command has been entered by the user may be collected in
the database. For instance, the database may be stored in memory 113 of hearing device
110 and/or a memory of user device 410. As another example, a machine learning (ML)
algorithm may be trained with the database, which ML algorithm may be stored in memory
113 of hearing device 110 and/or a memory of user device 410. Accordingly, adjustment
option determination module 523 may be at least partially executed by processor 112
included in hearing device 110 and/or at least partially executed by a processor included
in user device 410. Further, user command reception module 525 may also be at least
partially executed by processor 112 included in hearing device 110 and/or at least
partially executed by a processor included in user device 410.
[0064] In some instances, the information about the previous user commands and/or previous
adjustment options and corresponding information about the audio signal collected
in the database may be employed, by adjustment option determination module 523, as
a look-up table. To illustrate, adjustment option determination module 523 may be
configured to determine whether the audio signal received from input transducer 502
and/or from audio signal receiver 504 may correspond to information about the audio
signal collected in the database. E.g., adjustment option determination module 523
may compare the received audio signal with the information about the audio signal
in the database and determine, based on the comparison, whether the received audio
signal is similar to the collected information about the audio signal. In such a case,
adjustment option determination module 523 may predict the adjustment option as the
previous adjustment option stored in the database which has been employed by the user
when the user command has been entered and the corresponding information about the
collected audio signal has been obtained.
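A minimal sketch of such a similarity-based look-up, here using a cosine similarity between audio feature vectors as one possible comparison measure (an assumption, not the disclosed measure), is given below:

```python
import math

def cosine_similarity(a, b):
    """One possible similarity measure between two audio feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def predict_from_lookup(features, database, threshold=0.9):
    """Return the stored adjustment option whose audio features best match the
    received signal, if the match is close enough; otherwise None.

    `database` is a list of (feature_vector, adjustment_option) pairs."""
    best = max(database, key=lambda entry: cosine_similarity(features, entry[0]),
               default=None)
    if best is not None and cosine_similarity(features, best[0]) >= threshold:
        return best[1]
    return None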
[0065] In some instances, adjustment option determination module 523 can be configured to
execute an ML algorithm configured to predict the at least one of the adjustment options
which most likely conforms to a preference of the user. In particular, the ML algorithm
may be trained with the information about the previous user commands and/or previous
adjustment options and corresponding information about the audio signal collected
in the database. E.g., when training the ML algorithm, the information about the previous
user commands and/or previous adjustment options may be labelled with the information
about the audio signal when the user command has been entered by the user. Accordingly,
when the audio signal provided by input transducer 502 and/or audio signal receiver
504 is input into the ML algorithm, the ML algorithm can output the predicted adjustment
option. E.g., the ML algorithm may be implemented as a neural network (NN), in particular
a deep neural network (DNN).
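The forward pass of such a predictor could be sketched as follows; the tiny two-layer network, the feature dimensions, and the option names are all placeholders for a trained DNN:

```python
import numpy as np

def predict_adjustment_option(features, w1, b1, w2, b2, options):
    """Forward pass of a small feed-forward network: features in, option out.

    In a real system the weights would result from training on the database of
    previous user commands; here they are random placeholders."""
    hidden = np.maximum(0.0, features @ w1 + b1)  # ReLU hidden layer
    logits = hidden @ w2 + b2
    return options[int(np.argmax(logits))]        # most likely preferred option

rng = np.random.default_rng(0)
w1, b1 = rng.normal(size=(8, 16)), np.zeros(16)
w2, b2 = rng.normal(size=(16, 4)), np.zeros(4)
options = ["volume", "beam_width", "nc_strength", "spectral_balance"]
print(predict_adjustment_option(rng.normal(size=8), w1, b1, w2, b2, options))
```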
[0066] In some instances, as illustrated, adjustment option determination module 523 can
be configured to receive information about the received audio signal from classifier
517. In this way, the information about the audio signal collected in the database
may comprise the class attributed to the audio signal and/or the ML algorithm may
also be trained with the class attributed to the audio signal. E.g., when training
the ML algorithm, the information about the previous user commands and/or previous
adjustment options may be labelled with the class attributed to the audio signal.
Furthermore, when predicting the at least one adjustment option by the ML algorithm,
the class attributed to the received audio signal may be input into the ML algorithm.
[0067] Adjustment option determination module 523 may also be configured to predict, depending
on sensor data, at least one of the adjustment options which most likely conforms
to a preference of the user. To this end, as illustrated, the sensor data provided
by sensor unit 518 may also be received by adjustment option determination module
523. E.g., the received sensor data may comprise at least one of displacement data
indicative of a displacement of hearing device 110, which may be provided by displacement
sensor 136; location data indicative of a current location of the user, which may
be provided by location sensor 138; time data indicative of a current time, which
may be provided by clock 139; physiological data indicative of a physiological property
of the user, which may be provided by at least one of physiological sensors 133 -
135; and environmental data indicative of a property of the environment of the user,
which may be provided by at least one of environmental sensors 130 - 132.
[0068] In particular, information about the sensor data when the user command has been entered
by the user may also be collected in the database. In some instances, the information
about the sensor data collected in the database may also be employed, by adjustment
option determination module 523, as a look-up table. In some instances, when adjustment
option determination module 523 is configured to execute an ML algorithm configured
to predict the at least one of the adjustment options which most likely conforms to
a preference of the user, the ML algorithm may also be trained with the information
about the sensor data collected in the database. E.g.,
when training the ML algorithm, the information about the previous user commands and/or
previous adjustment options may be labelled with the information about the sensor
data when the user command has been entered by the user. Accordingly, when the sensor
data provided by sensor unit 518 is input into the ML algorithm, the ML algorithm can output
the predicted adjustment option.
[0069] To illustrate, when the displacement data provided by displacement sensor 136
indicates a situation in which the user is walking or running, a different adjustment
option may be preferred by the user as compared to a situation in which the displacement
data would indicate the user is standing or sitting. As another example, when the
physiological data provided by physiological sensors 133 - 135 indicates the user
is in a stressful situation and/or involved in a sports activity and/or having a medical
emergency, as indicated, e.g., by heart rate data, a different adjustment option may
be preferred by the user as compared to a situation in which the physiological data
would indicate the user is in a relaxed situation and/or resting. As a further example,
when the environmental data provided by environmental sensors 130 - 132 indicates
the user is in an environment of high altitude and/or rather hot climate, a different
adjustment option may be preferred by the user as compared to a situation in which
the environmental data would indicate the user is in an environment of low altitude and/or
rather cold climate. Similarly, the location data provided by location sensor 138
and/or the time data provided by clock 139 may include information useful to predict
a preferred adjustment option of the user.
[0070] In some instances, the information about the previous user commands and/or previous
adjustment options and corresponding information about the sensor data collected in
the database may be employed, by adjustment option determination module 523, as a
look-up table. To illustrate, adjustment option determination module 523 may be configured
to determine whether the sensor data received from sensor unit 518 may correspond
to information about the sensor data collected in the database. E.g., adjustment option
determination module 523 may compare the received sensor data with the information
about the sensor data in the database and determine, based on the comparison, whether
the received sensor data is similar to the collected information about the sensor
data. In such a case, adjustment option determination module 523 may predict the adjustment
option as the previous adjustment option stored in the database which has been employed
by the user when the user command has been entered and the corresponding information
about the collected sensor data has been obtained.
[0071] In some instances, adjustment option determination module 523 can be configured to
execute an ML algorithm configured to predict the at least one of the adjustment options
which most likely conforms to a preference of the user. In particular, the ML algorithm
may be trained with the information about the previous user commands and/or previous
adjustment options and corresponding information about the sensor data collected in
the database. E.g., when training the ML algorithm, the information about the previous
user commands and/or previous adjustment options may be labelled with the information
about the sensor data when the user command has been entered by the user. Accordingly,
when the sensor data provided by sensor unit 518 is input into the ML algorithm, the
ML algorithm can output the predicted adjustment option. E.g., the ML algorithm may
be implemented as a NN, in particular a DNN.
[0072] In some instances, as illustrated, adjustment option determination module 523 can
be configured to receive information about the received sensor data from classifier
517. The information about the sensor data collected in the database may then comprise
the class attributed to the sensor data and/or the ML algorithm may also be trained
with the class attributed to the sensor data. E.g., when training the ML algorithm,
the information about the previous user commands and/or previous adjustment options
may be labelled with the class attributed to the sensor data. Furthermore, when predicting
the at least one adjustment option by the ML algorithm, the class attributed to the
received sensor data may be input into the ML algorithm.
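To illustrate how the class attributed to the sensor data could enter the ML algorithm,
the following sketch appends a one-hot encoding of the class to the sensor feature
vector; the set of predetermined classes is a hypothetical assumption:

    import numpy as np

    # Hypothetical set of predetermined classes attributable to the sensor data.
    CLASSES = ["resting", "walking", "running", "sports"]

    def features_with_class(sensor_features, attributed_class):
        """Append a one-hot encoding of the attributed class to the features."""
        one_hot = np.zeros(len(CLASSES))
        one_hot[CLASSES.index(attributed_class)] = 1.0
        return np.concatenate([sensor_features, one_hot])

    x = features_with_class(np.array([0.8, 115.0, 420.0]), "walking")
    print(x)  # [  0.8 115.  420.    0.    1.    0.    0. ]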
[0073] In some instances, adjustment option determination module 523 can be configured to
update, when the received user command is unassociated with the predicted adjustment
option, the database and/or ML algorithm with the received user command and/or the
adjustment options associated with the received user command and information about
the audio signal and/or sensor data when the user command has been entered by the
user. To illustrate, when the information about the previous user commands and/or
previous adjustment options and corresponding information about the audio signal and/or
sensor data collected in the database is employed by adjustment option determination
module 523 as a look-up table, the look-up table may be updated with the received
user command unassociated with the predicted adjustment option and/or the adjustment
options associated with this user command and information about the audio signal and/or
sensor data when this user command has been entered by the user. To this end, as illustrated,
the user command unassociated with the predicted adjustment option, as received by
user command reception module 525, and/or the adjustment option associated therewith
may be transmitted to adjustment option determination module 523.
[0074] As another example, when adjustment option determination module 523 is configured
to execute an ML algorithm configured to predict the at least one of the adjustment
options which most likely conforms to the preference of the user, the ML algorithm
may be updated with the received user command unassociated with the predicted adjustment
option and/or the adjustment options associated with this user command and information
about the audio signal and/or sensor data when this user command has been entered
by the user. This may imply a continued training of the ML algorithm when the received
user command is unassociated with the predicted adjustment option. As illustrated,
for the purpose of the training, the user command unassociated with the predicted
adjustment option, as received by user command reception module 525, and/or the adjustment
option associated therewith may then be transmitted to adjustment option determination
module 523. In particular, after continuously training the ML algorithm in such a
way, e.g., on the fly, e.g., while the ML algorithm is also being used for predicting
the at least one of the adjustment options, the prediction of the adjustment option
can be increasingly matched to the user's preferences.
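Such a continued, on-the-fly training may be sketched as follows, here assuming
scikit-learn's partial_fit as the update mechanism; the data and option names are
illustrative:

    from sklearn.neural_network import MLPClassifier

    OPTIONS = ["volume_control", "beamformer_width",
               "noise_cancelling_strength", "spectral_balance"]

    # Incrementally trained model; the classes must be declared on the first call.
    model = MLPClassifier(hidden_layer_sizes=(16,), random_state=0)
    model.partial_fit([[0.0, 60.0, 400.0]], ["volume_control"], classes=OPTIONS)

    def on_user_command(model, sensor_data, predicted_option, command_option):
        """Continue training when the command is unassociated with the prediction."""
        if command_option != predicted_option:
            # One further update step on the new example, i.e. continued training.
            model.partial_fit([sensor_data], [command_option])

    predicted = model.predict([[0.9, 120.0, 410.0]])[0]
    on_user_command(model, [0.9, 120.0, 410.0], predicted, "beamformer_width")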
[0075] FIG. 6 illustrates exemplary adjustment options 631, 632, 633, 634 allowing the user
to enter a respective user command indicating a request of the user to adjust the
modifying of the audio signal executed by audio modification module 529. In the illustrated
example, adjustment options 631, 632, 633, 634 are presented to the user on user interface
428 of user device 410. In particular, adjustment options 631 - 634 may be presented
as a slider on touch screen 428 allowing the user to enter the user command by moving
the slider. In another example, at least one of adjustment options 631 - 634 may be
presented to the user on internal user interface 127 of hearing device 110, 210 allowing
the user to enter the user command, e.g., by means of a switch, toggle, slider, movement
sensor, touch sensor, etc. In another example, at least one of adjustment options
631 - 634 may be presented to the user as a voice message, which may be output by
output transducer 117, allowing the user, e.g., to enter the user command as a voice
command, which may be detected by input transducer 115.
[0076] In the illustrated example, an adjustment limit and/or an adjustment specification
621, 622, 623, 624 is presented to the user for each adjustment option 631 - 634,
e.g., in the form of a text and/or image which may be displayed on touch screen 428.
A first adjustment option 631 is denoted by the specification 621 of a volume control.
A second adjustment option 632 is denoted by the specification 622 of a beamformer
property, e.g., a beam width of the beamformer. A third adjustment option 633 is denoted
by the specification 623 of a noise cancelling property, e.g., a strength of the noise
cancelling. A fourth adjustment option 634 is denoted by the specification 624 of
a spectral balance, e.g., a spectral balance of music content. In other examples,
adjustment limits and/or adjustment specifications 621, 622, 623, 624 may be presented
to the user as a voice message, which may be output by output transducer 117.
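Purely by way of illustration, the association between adjustment options 631 - 634
and specifications 621 - 624 may be represented as a simple data structure; the Python
names below are assumptions:

    from dataclasses import dataclass

    @dataclass
    class AdjustmentOption:
        option_id: int       # reference sign of the adjustment option, e.g. 631
        spec_id: int         # reference sign of its specification, e.g. 621
        specification: str   # text displayed on screen 428 or spoken to the user

    OPTIONS = [
        AdjustmentOption(631, 621, "volume control"),
        AdjustmentOption(632, 622, "beam width of the beamformer"),
        AdjustmentOption(633, 623, "strength of the noise cancelling"),
        AdjustmentOption(634, 624, "spectral balance of music content"),
    ]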
[0077] In some implementations, multiple adjustment options 631 - 634 are presented to the
user. The multiple adjustment options 631 - 634 may be presented to the user in a
predetermined order. E.g., adjustment options 631 - 634 may be spatially and/or temporally
separated in the predetermined order. In the illustrated example, the multiple adjustment
options 631 - 634 are presented to the user on a single screen, e.g., on touch screen
428. In some examples, the multiple adjustment options 631 - 634 can then be spatially
separated in the predetermined order by displaying adjustment options 631 - 634 successively
in a defined direction, e.g., from the top of screen 428 to the bottom of screen 428.
In other examples, the multiple adjustment options 631 - 634 may be presented to the
user in the form of voice messages which may be outputted to the user in a temporally
separated manner.
[0078] The order in which the multiple adjustment options 631 - 634 are presented to the
user on user interface 127 - 129, 428, 516 may be determined depending on the adjustment
option which most likely conforms to a preference of the user, as predicted by adjustment
option determination module 523. In the illustrated example, the predicted adjustment
option corresponds to first adjustment option 631. Predicted adjustment option 631
is presented on top of screen 428. One or more remaining adjustment options 632 -
634 are presented below predicted adjustment option 631.
[0079] In some instances, an order in which remaining adjustment options 632 - 634 are presented
may also be determined based on a prediction of the user's most likely preferred adjustment
option performed by adjustment option determination module 523. E.g., when adjustment
option determination module 523 is configured to execute an ML algorithm configured
to predict the at least one adjustment option 631 - 634, the ML algorithm may be configured
to also output a likelihood and/or probability according to which the prediction of
the respective adjustment option 631 - 634 is correct and/or can be predicted in a
confident manner. The order in which multiple adjustment options 631 - 634 are presented
to the user on user interface 127 - 129, 428, 516 may then be determined depending
on the likelihood and/or probability.
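Such an ordering by prediction confidence may be sketched as follows, assuming the
ML algorithm exposes per-option probabilities, e.g., via scikit-learn's predict_proba;
the probability values are illustrative:

    import numpy as np

    OPTIONS = ["volume_control", "beamformer_width",
               "noise_cancelling_strength", "spectral_balance"]

    def presentation_order(probabilities):
        """Return the options sorted by descending prediction probability."""
        order = np.argsort(probabilities)[::-1]
        return [OPTIONS[i] for i in order]

    # Hypothetical per-option probabilities, e.g., from model.predict_proba(...).
    print(presentation_order([0.55, 0.25, 0.15, 0.05]))
    # The most likely option is presented first, e.g., on top of screen 428.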
[0080] FIGS. 7 and 8 illustrate other examples according to which adjustment options 631,
632, 633, 634 can be presented to the user on user interface 127 - 129, 428, 516.
In those examples, predicted adjustment option 631 can be presented to the user exclusively
and/or in an isolated manner relative to remaining adjustment options 632 - 634. In
the illustrated example, predicted adjustment option 631 is presented to the user
exclusively on a single screen of touch screen 428. In other examples, predicted adjustment
option 631 may be presented to the user as a single adjustment option on user interface
127, which may be specified by one of specifications 621 - 624, which may be outputted
to the user as a single voice message.
[0081] In some implementations, remaining adjustment options 632 - 634 may also be presented
to the user, e.g., after receiving a request of the user to present them. In the
example illustrated in FIGS. 7 and 8, such a user
request can be manually entered by the user. E.g., the user may perform a manual gesture
on touch screen 428, such as swiping. After receiving of the user request, remaining
adjustment options 632 - 634 may be presented to the user on a separate screen, as
illustrated in FIG. 8. In other examples, remaining adjustment options 632 - 634 may
be presented to the user after receiving a corresponding voice command of the user
and/or a manual gesture performed on internal user interface 127.
[0082] FIG. 9 illustrates a block flow diagram for an exemplary method of operating hearing
device 110, 210. The method may be executed by processor 112 of hearing device 110,
210 and/or another processor communicatively coupled to processor 112. At operation
S11, an audio signal 911 is received. At operation S12, depending on the input audio
signal 911, at least one of adjustment options 631 - 634 which most likely conforms
to a preference of the user is predicted. At operation S13, a presenting of predicted
adjustment option 631 to the user is initiated on user interface 127 - 129, 428, 516.
At operation S14, a user command 915 is received from user interface 127 - 129, 428,
516. At operation S15, it may be optionally verified whether user command 915 is associated
with the predicted adjustment option 631. In a case in which user command 915 is
not associated with the predicted adjustment option 631, a database and/or ML algorithm
employed for predicting the adjustment option 631 - 634 may be updated with received
user command 915 and/or the adjustment option 631 - 634 associated with received user
command 915. At operation S16, the modifying of audio signal 911, which may be performed
by audio modification module 529, is adjusted depending on the user command received
from user interface 127 - 129, 428, 516. Subsequently, an output audio signal based
on the modified audio signal 912 can be output by output transducer 117 so as
to stimulate the user's hearing.
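Operations S11 - S16 may be summarized in the following illustrative sketch; all
helper functions are hypothetical stubs standing in for the modules described above:

    def receive_audio():                  # S11: receive audio signal 911
        return "audio_signal_911"

    def predict_option(audio):            # S12: predict the adjustment option
        return "volume_control"

    def present(option):                  # S13: initiate presenting to the user
        print("presenting:", option)

    def receive_command():                # S14: receive user command 915
        return ("volume_control", +3.0)   # (adjustment option, requested change)

    def operate():
        audio = receive_audio()
        predicted = predict_option(audio)
        present(predicted)
        option, change = receive_command()
        if option != predicted:           # S15: optional verification
            pass                          # update the database and/or ML algorithm
        modified = (audio, option, change)  # S16: adjust the modifying of the signal
        return modified                     # output by output transducer 117

    print(operate())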
[0083] While the principles of the disclosure have been described above in connection with
specific devices and methods, it is to be clearly understood that this description
is made only by way of example and not as limitation on the scope of the invention.
The above described preferred embodiments are intended to illustrate the principles
of the invention, but not to limit the scope of the invention. Various other embodiments
and modifications to those preferred embodiments may be made by those skilled in the
art without departing from the scope of the present invention that is solely defined
by the claims. In the claims, the word "comprising" does not exclude other elements
or steps, and the indefinite article "a" or "an" does not exclude a plurality. A single
processor or controller or other unit may fulfil the functions of several items recited
in the claims. The mere fact that certain measures are recited in mutually different
dependent claims does not indicate that a combination of these measures cannot be
used to advantage. Any reference signs in the claims should not be construed as limiting
the scope.
1. A method of operating a hearing device configured to be worn at an ear of a user, the
method comprising
- receiving an audio signal (911);
- modifying the audio signal (911), wherein the modifying can be adjusted depending
on a user command received from a user interface (127 - 129, 428, 516), wherein the
user interface (127 - 129, 428, 516) comprises a plurality of adjustment options (621
- 624) each associated with a respective user command; and
- outputting, by an output transducer (117, 514) included in the hearing device, an
output audio signal based on the modified audio signal (912) so as to stimulate the
user's hearing;
characterized by
- predicting, depending on the audio signal (911), at least one of said adjustment
options (621 - 624) which most likely conforms to a preference of the user; and
- initiating a presenting of the predicted adjustment option (621 - 624) to the user.
2. The method of claim 1, wherein the adjustment option (621 - 624) is predicted based
on previous user commands and/or previous adjustment options (621 - 624) associated
with the user commands and information about the audio signal (911) when the user
command has been entered by the user, which have been collected in a database.
3. The method of claim 2, wherein the audio signal (911) is input into a machine learning
(ML) algorithm, which outputs the predicted adjustment option (621 - 624), wherein
the ML algorithm has been trained with said database.
4. The method of claim 3, further comprising
- updating, when the received user command is unassociated with the predicted adjustment
option (621 - 624), the database and/or ML algorithm with the received user command
and/or the adjustment option (621 - 624) associated with the received user command
and information about the audio signal (911) when the user command has been entered
by the user.
5. The method of any of claims 2 to 4, further comprising
- classifying the audio signal (911) by attributing at least one class from a plurality
of predetermined classes to the audio signal (911), wherein the information about
the audio signal (911) comprises the class attributed to the audio signal (911).
6. The method of any of the preceding claims, further comprising
- receiving, from a sensor (118, 130 - 136, 138, 139, 518) included in the hearing
device, sensor data, wherein the adjustment option (621 - 624) is predicted also depending
on the sensor data.
7. The method of claim 6, wherein the sensor (118, 130 - 136, 138, 139, 518) comprises
a displacement sensor (136) configured to provide at least part of the sensor data
as displacement data indicative of a displacement of the hearing device; and/or
a location sensor (138) configured to provide at least part of the sensor data as
location data indicative of a current location of the user; and/or
a physiological sensor (133, 134, 135) configured to provide at least part of the
sensor data as physiological data indicative of a physiological property of the user;
and/or
an environmental sensor (130, 131, 132) configured to provide at least part of the
sensor data as environmental data indicative of a property of the environment of the
user.
8. The method of any of claims 2 to 5 and claim 6 or 7, wherein the adjustment option
(621 - 624) is also predicted based on information about previous sensor data when
the user command has been entered by the user, which is also collected in the database.
9. The method of claim 8, further comprising
- classifying the sensor data by attributing at least one class from a plurality of
predetermined classes to the sensor data, wherein the information about the sensor
data comprises the class attributed to the sensor data.
10. The method of any of the preceding claims, further comprising
- transmitting, to a user device (410), the predicted adjustment option (621 - 624),
wherein the user device (410) is configured to present the predicted adjustment option
(621 - 624) to the user.
11. The method of claim 10, further comprising
- receiving, after the predicted adjustment option (621 - 624) has been transmitted
to the user device (410), the user command from the user device (410).
12. The method of any of the preceding claims, wherein the predicted adjustment option
(621 - 624) is presented to the user in addition to at least one remaining adjustment
option (621 - 624) different from the predicted adjustment option (621 - 624), wherein
the predicted adjustment option (621 - 624) is spatially and/or temporally separated
from the remaining adjustment option (621 - 624).
13. The method of any of the preceding claims, wherein the predicted adjustment option
(621 - 624) is exclusively presented to the user at the expense of at least one remaining
adjustment option (621 - 624) different from the predicted adjustment option (621
- 624), wherein the remaining adjustment option (621 - 624) is presented to the user
upon request of the user.
14. The method of any of the preceding claims, wherein the adjustment options (621 - 624)
comprise at least one of
- a volume control;
- a property of a beamformer;
- a property of a noise cancellation; and
- a spectral balance.
15. A hearing device configured to be worn at an ear of a user, the hearing device comprising
an audio input unit (114) configured to provide an audio signal (911);
a processor (112) configured to modify the audio signal (911), wherein the modifying
can be adjusted depending on a user command received from a user interface (127 -
129, 428, 516), wherein the user interface (127 - 129, 428, 516) comprises a plurality
of adjustment options (621 - 624) each associated with a respective user command;
and
an output transducer (117) configured to output an output audio signal based on the
modified audio signal (912) so as to stimulate the user's hearing,
characterized in that the processor (112) is further configured to
- predict, depending on the audio signal (911), at least one of said adjustment options
(621 - 624) which most likely conforms to a preference of the user; and
- initiate a presenting of the predicted adjustment option (621 - 624) to the user.