[0001] The inventive technology relates to a method and a system for modifying audio signal
processing of a hearing device in accordance with user preferences. The inventive
technology further relates to a method for personalized audio signal processing on
a hearing device, a hearing device system and a hearing device arrangement.
Background
[0002] Hearing devices and the audio signal processing on hearing devices are known from
the prior art.
Detailed description
[0003] It is an object of the present inventive technology to provide an improved method
for modifying audio signal processing of a hearing device, in particular an easy and
user-friendly method for modifying, in particular personalizing, audio signal processing
of a hearing device.
[0004] This object is achieved by a method for modifying audio signal processing of a hearing
device in accordance with user preferences having the steps specified in claim 1.
A modification system is provided. To the modification system, a sound sample comprising
a sound feature for which a user wishes to modify the audio signal processing of the
hearing device is provided. A modification parameter which is selectable by the user
is provided to the modification system. The sound feature of the sound sample is associated
with at least one corresponding sound class wherein said associating the sound feature
of the sound sample with at least one corresponding sound class includes classifying
the sound sample by the modification system. By the modification system, an audio
processing routine for audio processing of sounds belonging to the at least one corresponding
sound class in accordance with the modification parameter is determined. The audio
signal processing of the hearing device is modified by implementing the determined
audio processing routine on the hearing device.
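The sequence of steps recited above can be summarized in a purely illustrative sketch. All names, the toy classifier and the routine format are invented for illustration and form no part of the claimed method:

```python
# Purely illustrative sketch of the method steps; the classifier and the
# routine representation are toy stand-ins, not the claimed implementation.

def classify(sound_sample):
    """Toy stand-in for the modification system's classifier."""
    catalogue = {"vacuum": ["vacuum cleaner", "household appliance", "motor noise"]}
    return catalogue.get(sound_sample, ["unknown"])

def determine_routine(sound_class, modification_parameter):
    """Determine an audio processing routine from the class and user rating."""
    action = "attenuate" if modification_parameter["rating"] < 0 else "amplify"
    return {"sound_class": sound_class, "action": action}

def modify_hearing_device(sound_sample, modification_parameter):
    # Steps 1+2: the sound sample and the modification parameter are provided.
    # Step 3: associate the sound feature with at least one sound class.
    classes = classify(sound_sample)
    chosen = modification_parameter.get("selected_class", classes[0])
    # Step 4: determine the audio processing routine.
    routine = determine_routine(chosen, modification_parameter)
    # Step 5: implement the routine on the hearing device (returned here).
    return routine

print(modify_hearing_device("vacuum", {"rating": -1.0}))
# {'sound_class': 'vacuum cleaner', 'action': 'attenuate'}
```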
[0005] The method allows for simple and user-friendly modification of the audio signal processing.
In particular, providing a sound sample comprising a sound feature leads to a precise
and user-friendly modification. The user does not have to deal with abstract parameters,
but the modification process can be steered based on the actual sound experience of
the user. Further, the provision of the sound feature improves the personalization
of the audio signal processing because relevant sound features can vary based on user
preferences and/or the concrete situation and/or location the user is in.
[0006] The steps of the method do not necessarily have to be executed in the above-specified
sequence. For example, the modification parameter may be provided to the modification
system together with the sound sample comprising the sound feature. Alternatively,
the modification parameter may be provided at a later stage of the method, for example
after classifying the sound sample comprising the sound feature by the modification
system. The provision of the modification parameter may be based on information gained
from the classifying of the sound sample by the modification system.
[0007] A hearing device as in the context of the present inventive technology can be a wearable
hearing device, in particular a wearable hearing aid, or an implantable hearing device,
in particular an implantable hearing aid, or a hearing device with implants, in particular
a hearing aid with implants. An implantable hearing aid is, for example, a middle-ear
implant, a cochlear implant or brainstem implant. A wearable hearing device is, for
example, a behind-the-ear device, an in-the-ear device, a spectacle hearing device
or a bone conduction hearing device. In particular, the wearable hearing device can
be a behind-the-ear hearing aid, an in-the-ear hearing aid, a spectacle hearing aid
or a bone conduction hearing aid. A wearable hearing device can also be a suitable
headphone, for example what is known as a hearable or smart headphone.
[0008] The hearing device can be part of a hearing device system. A hearing device system
in the sense of the present inventive technology is a system of one or more devices
being used by a user, in particular by a hearing impaired user, for enhancing his
or her hearing experience. A hearing device system can comprise one or more hearing
devices. For example, a hearing device system can comprise two hearing devices, in
particular two hearing aids. The hearing devices can be considered to be wearable
or implantable hearing devices associated with the left and right ear of a user, respectively.
[0009] Particularly suitable hearing device systems can further comprise one or more peripheral
devices. A peripheral device in the sense of the inventive technology is a device
of the hearing device system which is not a hearing device, in particular not a hearing
aid. In particular, the one or more peripheral devices may comprise a mobile device,
in particular a smartwatch, a tablet and/or a smartphone. The peripheral device may
be realized by components of the respective mobile device, in particular the respective
smartwatch, tablet and/or smartphone. Particularly preferably, the standard hardware
components of the mobile device are used for this purpose by virtue of an applicable
piece of hearing device system software, for example in the form of an app, being
installable and executable on the mobile device. Additionally or alternatively, the
one or more peripheral devices may comprise a wireless microphone. Wireless microphones
are assistive listening devices used by hearing impaired persons to improve understanding
of speech in noise and over distance. Such wireless microphones include for example
body-worn microphones or table microphones.
[0010] Different devices of the hearing device system, in particular different hearing devices
and/or peripheral devices, may be connectable in a data-transmitting manner, in particular
by a wireless data connection. The wireless data connection can be provided by a global
wireless data connection network to which the components of the hearing device system
can connect or can be provided by a local wireless data connection network which is
established within the scope of the hearing device system. The local wireless data
connection network can be connected to a global data connection network as the Internet
e.g. via a landline or it can be entirely independent. A suitable wireless data connection
may be a Bluetooth connection or a similar protocol, such as for example Asha Bluetooth.
Further exemplary wireless data connections are DM (digital modulation) transmitters,
aptX LL and/or induction transmitters (NFMI). Also other wireless data connection
technologies, e.g. broadband cellular networks, in particular 5G broadband cellular
networks, and/or WiFi wireless network protocols, can be used.
[0011] The hearing device may comprise audio processing routines for processing an audio
signal. The audio processing routines in particular comprise one or more audio processing
neural networks. At this juncture and below, the term "neural network" must be understood
to mean an artificial neural network, in particular a deep neural network (DNN). While
the usage of neural networks may be advantageous with respect to the quality of the
audio processing, such neural networks are not required for audio processing on the
hearing device. The audio processing may exclusively rely on traditional audio processing
methods. Traditional audio processing routines are to be understood as audio processing
routines which do not rely on artificial intelligence, in particular on neural networks,
but can e.g. include digital audio processing. Traditional audio processing routines
include, but are not limited to, linear signal processing routines such as e.g. Wiener
filters and/or beamforming.
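A classical, non-neural routine of the kind mentioned above can be illustrated by a single-band Wiener gain. This is a deliberately simplified formulation; real implementations operate per time-frequency bin:

```python
def wiener_gain(signal_power, noise_power):
    """Classical single-band Wiener filter gain: SNR / (SNR + 1)."""
    snr = signal_power / noise_power
    return snr / (snr + 1.0)

# A band whose signal power is three times the noise power is kept at 75 %:
print(wiener_gain(3.0, 1.0))  # 0.75
```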
[0012] The audio processing routines can be stored on a data storage of a computing device
of the hearing device. A data storage in the sense of the inventive technology is
a computer-readable medium. The computer-readable medium may be a non-transitory computer-readable
medium, in particular a data memory. Exemplary data memories include e.g. dynamic
(DRAM) or static (SRAM) random access memories (RAM) or solid state drives (SSD) as
well as hard drives and flash drives.
[0013] The hearing device may comprise a computing device for executing audio processing
routines. The computing device may execute one or more audio processing routines stored
on a data storage of the hearing device. The computing device may comprise a general
processor adapted to perform arbitrary operations, e.g. a central processing unit
(CPU). The computing device may alternatively or additionally comprise a processor
specialized in the execution of a neural network. Preferably, the computing device
may comprise an AI chip for executing the neural network. AI chips can execute neural
networks efficiently. However, a dedicated AI chip is not necessary for the execution
of a neural network.
[0014] The modification system may comprise a remote device, in particular a remote server,
a peripheral device, in particular a peripheral device of the hearing device system,
and/or components of the hearing device. The modification system may be realized by
components of the hearing device system, in particular by a peripheral device of the
hearing device system and/or components of the hearing devices. An exemplary modification
system may be realized by a remote device, in particular by a remote server, which
is directly or indirectly connectable to the hearing device. The term "remote device"
is to be understood as any device which is not a part of the hearing device system.
In particular, the remote device is positioned at a different location than the hearing
device system. The modification system may be provided as a remote server to which
a plurality of hearing device systems are connectable via a data connection, in particular
via a remote data connection. The remote device, in particular the remote server,
may in particular be connectable to the hearing device by a peripheral device of the
hearing device system, in particular in the form of a smartwatch, a smartphone and/or
a tablet. The data connection between the remote device, in particular the remote
server, and the hearing device may be established by any suitable data connection,
in particular by a wireless data connection such as the wireless data connection described
above with respect to the devices of the hearing device system. The data connection
may in particular be established via the Internet.
[0015] Exemplarily, the modification system may comprise parts realized on a remote device,
in particular a remote server, and parts realized on devices, in particular components,
of the hearing device system. For example, an association of the sound feature of
the provided sound sample with at least one sound class can be executed on a remote
server. This ensures that an association with the at least one sound class can be
performed using the computational power of a server. Alternatively or additionally,
an association of the sound feature with the respective at least one sound class can
also be performed on a peripheral device of the hearing device system, in particular
on a smartphone. Other steps of the modification method may be executed on the hearing
device system, in particular on the hearing device. In particular, it is possible
to determine the audio processing routine directly on the hearing device. For example,
the hearing device may comprise different audio processing routines, in particular
different audio processing neural networks. Different audio processing routines, in
particular different audio processing neural networks, may be pre-installed on a data
storage of the hearing device. A determination of the audio processing routine may
be performed by selecting the suitable audio processing routine, in particular the
suitable audio processing neural network, based on the at least one corresponding
sound class associated with the sound feature of the sound sample.
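The determination of the audio processing routine by selection from pre-installed routines, as described above, amounts to a lookup keyed by the corresponding sound class. The routine names below are invented solely for illustration:

```python
# Hypothetical catalogue of audio processing routines (e.g. specialized audio
# processing neural networks) pre-installed on the hearing device's data storage.
PREINSTALLED_ROUTINES = {
    "motor noise": "dnn_motor_noise_suppressor",
    "speech": "dnn_speech_enhancer",
}

def select_routine(corresponding_sound_class, fallback="generic_processing"):
    """Determine the routine by selecting a suitable pre-installed one."""
    return PREINSTALLED_ROUTINES.get(corresponding_sound_class, fallback)

print(select_routine("motor noise"))  # dnn_motor_noise_suppressor
print(select_routine("birdsong"))     # generic_processing
```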
[0016] The association of the sound feature of the sound sample with at least one corresponding
sound class includes classifying the sound sample by the modification system. Classifying
may comprise determining said at least one corresponding sound class with which the
sound feature is associated and/or determining at least one or a plurality of potentially
corresponding sound classes for associating the sound feature with the at least one
corresponding sound class, e.g. by selection by a user.
[0017] To this end, the modification system may comprise a processing unit for processing
the sound sample. The processing unit may comprise a general processor adapted to
perform arbitrary operations, e.g. a central processing unit (CPU). The processing
unit may alternatively or additionally comprise a processor specialized in the execution
of at least one neural network, in particular a classification neural network. The
processing unit may further comprise a data storage. The data storage may be a computer-readable
medium, in particular in the form of a non-transitory computer-readable medium, e.g.
a data memory. Exemplary data memories include e.g. dynamic (DRAM) or static (SRAM)
random access memories (RAM) or solid state drives (SSD) as well as hard drives and
flash drives. On the data storage, processing algorithms, in particular a classification
algorithm, may be stored.
[0018] Preferably, the processing unit and/or the data storage of the modification system
are at least partially realized on a remote device, in particular on a remote server.
This ensures high computational power for executing the classification algorithm.
A remote device, in particular a remote server, can further be used to process, in
particular to classify, sound samples from hearing device systems of a plurality of
users. Additionally or alternatively, a processing unit and/or a data storage of a
peripheral device of the hearing device system and/or the hearing device may be used
as the processing unit and/or the data storage of the modification system. The processing
unit and/or the data storage of the modification system may be distributed over several
devices.
[0019] The association of the sound feature of the sound sample with at least one sound
class may comprise classifying the sound sample using a classification algorithm.
Preferably but not necessarily, the classification algorithm may comprise one or more
classification neural networks. To this end, a classification algorithm stored on
the data storage may be executed by the processing unit of the modification system.
[0020] In the context of the inventive technology, the term "sound class" is to be understood
as a classification of the sound feature, e.g. with respect to the respective sound
source. The term "sound feature" refers to at least one sound contained in the sound
sample which can be associated with a sound class, in particular with the at least
one corresponding sound class.
[0021] The sound class may refer, for instance, to a type of at least one sound source generating
the sound feature. For example, the sound class can determine the concrete type of
the sound feature (e.g. the sound of a vacuum cleaner can be associated with the sound
class "vacuum cleaner"). Further, the sound class can characterize the sound in a
more general category and may also or alternatively refer to a type of an acoustic
scene in the environment of the user for which the sound feature is characteristic.
For example, different corresponding sound classes may represent different grades
of generalizations of corresponding sounds. For example, when the sound feature is
the sound of a vacuum cleaner, the corresponding sound class may be the sound class
"vacuum cleaner", "household appliance", "motor noise" and/or "monotonous background
noise".
[0022] The at least one corresponding sound class represents the result of the association
of the sound feature with at least one sound class according to the technology of
the invention. It is to be understood that one sound feature may be associated with
one or more sound classes.
[0023] The processing of the sound sample by the modification system may return a plurality
of potential corresponding sound classes. The association of the sound feature with
at least one corresponding sound class may further include a selection of at least
one of the potential corresponding sound classes, in particular by the user. For example,
the user may select one or more corresponding sound classes from a selection of potential
corresponding sound classes provided by the modification system.
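The selection from a plurality of potential corresponding sound classes can be sketched as follows; the confidence scores and class names are assumed values for illustration only:

```python
def potential_classes(scores, top_k=3):
    """Return the top-k candidate sound classes, best first, for user selection."""
    return [cls for cls, _ in sorted(scores.items(), key=lambda kv: -kv[1])[:top_k]]

# Hypothetical classifier confidences for a recorded sound sample:
scores = {"vacuum cleaner": 0.82, "speech": 0.05,
          "motor noise": 0.74, "household appliance": 0.61}
candidates = potential_classes(scores)
print(candidates)  # ['vacuum cleaner', 'motor noise', 'household appliance']
user_selection = [candidates[0]]  # the user picks "vacuum cleaner"
```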
[0024] The audio signal processing of the hearing device is modified in accordance with
user preferences. In the context of the inventive technology, modifying concerns every
alteration and/or mutation of how the audio signal processing takes place on the hearing
device. The modification may particularly concern the processing of sounds belonging
to the at least one corresponding sound class. For example, the modification may result
in a different attenuation and/or a different amplification and/or a different enhancement
of sounds and/or sound features belonging to the at least one corresponding sound
class. Enhancement of a sound or a sound feature is to be understood as any alteration
to that sound or sound feature which may improve the reception of that sound or sound
feature by a user, in particular which may improve intelligibility of that sound or
sound feature. Enhancement may comprise, but is not limited to, dereverberation and/or
noise suppression. It is also possible that, based on the modification, sounds or
sound features belonging to the at least one corresponding sound class are removed
from audio signals processed on the hearing device. Removal of sounds or sound features
may correspond to strongly attenuating such sounds or sound features. The modification
is determined by the corresponding at least one sound class and the modification parameter.
For example, the modification parameter may specify a selection of one of several
corresponding sound classes. The modification parameter may further govern how sounds
belonging to the at least one corresponding sound class are processed on the hearing
device.
[0025] The modification system determines an audio processing routine for processing sounds
belonging to the at least one corresponding sound class in accordance with the modification
parameter. For example, the audio processing routine is determined to specifically
attenuate and/or amplify and/or enhance and/or remove sounds belonging to the at least
one corresponding sound class. The audio processing routine may be determined to enhance
or suppress sounds belonging to the at least one corresponding sound class in accordance
with the modification parameter. Here and in the following, the term audio processing
routine is to be understood as comprising one or more processing routines needed for
processing sounds belonging to the at least one corresponding sound class in accordance
with the modification parameter. For example, the audio processing routine may comprise
a classification routine for classifying sounds as belonging to the at least one corresponding
sound class and/or to further process such sounds, e.g. to amplify, attenuate, remove
and/or enhance such sounds.
[0026] According to a preferred aspect of the inventive technology, the modification parameter
includes a rating of the sound feature by the user and/or a rating of the at least
one corresponding sound class by the user and/or at least one sound class selected
by a user for associating the sound feature with the at least one corresponding sound
class. The modification parameter allows for a user-friendly and precise modification
of the audio signal processing of the hearing device.
[0027] The rating of the sound feature and/or the at least one corresponding sound class
may reflect how unpleasant or pleasant, or how undesirable or desirable the user finds
such sounds to be. The respective rating may thus govern how strongly the sounds belonging
to the at least one corresponding sound class are attenuated or amplified. The rating
may be continuous or binary.
[0028] For example, the user may give a rating on a continuous scale from "unpleasant" to
"pleasant" or may simply categorize the sound or the at least one corresponding sound
class as "unpleasant" or "pleasant".
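How such a continuous rating could govern the strength of attenuation or amplification can be illustrated by a simple linear mapping. The scale bounds and the maximum gain are assumptions, not values from the description:

```python
def rating_to_gain_db(rating, max_gain_db=12.0):
    """Clamp a rating on [-1, 1] ("unpleasant".."pleasant") and map it
    linearly to a gain in dB (assumed illustrative mapping)."""
    rating = max(-1.0, min(1.0, rating))
    return rating * max_gain_db

print(rating_to_gain_db(-1.0))  # -12.0 : "unpleasant", strongly attenuate
print(rating_to_gain_db(0.5))   # 6.0   : fairly "pleasant", amplify
```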
[0029] The modification parameter may comprise a selection of at least one sound class from
a plurality of potential corresponding sound classes. For example, the classification
of the sound sample on the modification system may return a plurality of potential
corresponding sound classes for selection by the user. The selection associates the
sound feature with the selected at least one corresponding sound class. The selection
precisely determines for which kind of sounds the audio signal processing is modified
on the hearing device. For example, different sound classes may be related to a
specific kind of modification, e.g. an enhancement, an attenuation and/or an amplification
of sounds belonging to that sound class. The modification of the audio signal processing
may then be governed by the at least one selected sound class together with the related
kind of modification.
[0030] Preferably, the modification parameter comprises a rating of the sound feature or
of the at least one corresponding sound class and a selection of at least one sound
class from a plurality of potential corresponding sound classes. This way, the user
can precisely determine how and for which kind of sounds the audio signal processing
should be modified.
[0031] The modification parameter, in particular the rating of the sound feature, may be
provided to the modification system together with the sound sample. For example, the
user may initiate a recording of the sound sample and at the same time rate the sound
feature. Alternatively, the modification parameter may be provided to the modification
system at a later stage of the method, e.g. based on a result of the classification
of the sound sample by the modification system. It is also possible that different
elements of the modification parameter are provided to the modification system at
different stages of the method. For example, it is possible that a rating of the sound
feature is provided to the modification system together with the sound sample comprising
the sound feature. The modification system may then process, in particular classify,
the sound sample to identify potential corresponding sound classes. A selection of
at least one sound class from the potential corresponding sound classes may then be
provided to the modification system as a further element of the modification parameter.
[0032] According to a preferred aspect of the inventive technology, the modification parameter
selectable by the user is presented to the user in the form of a rating scale of the
sound feature and/or a rating scale of the at least one corresponding sound class
and/or a selection of potential corresponding sound classes. This ensures an intuitive
selection of the modification parameter. The modification of the audio signal processing
can be precisely adjusted in accordance with user preferences. In particular, a rating
scale allows the user to precisely determine how sounds belonging to the at least
one corresponding sound class should be processed in the future, e.g. by setting one
of various degrees between unpleasant and pleasant on the scale.
[0033] Preferably, possible modification parameters, in particular a rating scale and/or
a selection of potential corresponding sound classes, may be presented to the user
in graphical or text form on a screen of a peripheral device, in particular on a screen
of a smartphone and/or smartwatch. This allows for a particularly easy and intuitive
selection of the modification parameter.
[0034] Alternatively or additionally, the modification parameter selectable by the user
is presented to the user in the form of a voice message. Preferably, the voice message
can be presented to the user directly through the hearing device. This is particularly
advantageous for users having problems with the operation of a peripheral device,
in particular of a smartphone and/or smartwatch. Using a voice message to present
the selectable modification parameter is particularly intuitive. Further, this kind
of presentation does not require a hearing device system comprising a peripheral device,
in particular a peripheral device with a dedicated user interface, e.g. in the form
of a touch screen. The selection of the modification parameter may, for example, be
performed by voice input. Additionally or alternatively, the selection of the modification
parameter may be achieved by other forms of input, e.g. by gestures and/or by tapping
the hearing device.
[0035] According to a preferred aspect of the inventive technology, providing the sound
sample comprises recording the sound sample upon user initiation. This allows for
a user-friendly and precise modification of the audio signal processing. For example,
when the user encounters a sound feature for which he or she wants to modify the audio signal
processing, the user can initiate the recording of the sound sample. The user can
easily initiate the modification process on demand. For example, the user may use
a peripheral device of the hearing device system, in particular a smartwatch and/or
a smartphone, to record the sound feature. Alternatively or additionally, the sound
sample may also be recorded using the recording device, in particular a microphone,
of the hearing device. For example, the user may initiate the recording by tapping
the hearing device.
[0036] Optionally, the hearing device system may provide different alternatives for initiating
the recording of the sound sample. Different alternatives of how the recording is
initiated may be linked to a rating of the sound feature to be recorded. For example,
depending on the way the recording is initiated, the user may communicate whether
the sound feature is unpleasant and shall be attenuated or whether the sound is pleasant
and shall be amplified. For example, a user interface on the peripheral device, in
particular a touch screen on a smartwatch and/or a smartphone may present two different
recording buttons for unpleasant and pleasant sounds, respectively. When the sound
is recorded using the hearing device, different ways of initiating the recording may
be to tap either the left or the right hearing device. This way, a rating of the sound
feature as part of the modification parameter is selectable by the user upon the recording
of the sound sample. The rating may be provided to the modification system together
with the sound sample.
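The coupling of the initiation gesture to the rating described above can be sketched as follows. The trigger names and the binary rating values are assumed for illustration (two on-screen buttons on the peripheral device, or tapping the left/right hearing device):

```python
# Assumed mapping from initiation gesture to a binary rating of the
# sound feature to be recorded (illustrative, not the claimed mapping).
INITIATION_RATINGS = {
    "button_unpleasant": -1, "tap_left": -1,
    "button_pleasant": +1, "tap_right": +1,
}

def start_recording(trigger):
    """Record a sound sample; the chosen trigger doubles as the rating."""
    return {"sound_sample": "<recorded audio>", "rating": INITIATION_RATINGS[trigger]}

print(start_recording("tap_left"))
# {'sound_sample': '<recorded audio>', 'rating': -1}
```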
[0037] According to a preferred aspect of the inventive technology, associating the sound
feature of the sound sample with at least one corresponding sound class includes classifying
the sound sample by at least one classification neural network of the modification
system. The at least one classification neural network may be part of a classification
algorithm stored on a data storage of the modification system. Using a classification
neural network allows for a particularly precise classification of the sound feature.
The classification neural network may automatically associate the sound feature with
at least one corresponding sound class. The classification neural network may also
identify potential corresponding sound classes. The association of the sound feature
may then be completed by a selection of at least one corresponding sound class from
the plurality of potential corresponding sound classes.
[0038] The at least one classification neural network may be a complex neural network allowing
for classifying arbitrary sounds belonging to a large variety of different sound classes.
The at least one classification neural network may also comprise a plurality of specialized
classification neural networks, each performing different classifications. For example,
each specialized classification neural network may be specifically adapted to classify
sounds belonging to a specific sound class. The specialized classification neural
networks may then return whether the sound feature belongs to the respective sound
class or not. The plurality of specialized classification neural networks can also
be seen as sub-networks of a more complex classification neural network.
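The decomposition into specialized classifiers, each answering a binary yes/no question for one sound class, can be illustrated with trivial keyword detectors standing in for the specialized classification neural networks:

```python
def make_detector(keyword):
    """Toy stand-in for a specialized classification network:
    one sound class, binary yes/no output."""
    return lambda sample: keyword in sample

SPECIALIZED_DETECTORS = {
    "motor noise": make_detector("motor"),
    "speech": make_detector("voice"),
}

def classify_specialized(sample):
    """Run every specialized sub-network and collect the detected classes."""
    return [cls for cls, detect in SPECIALIZED_DETECTORS.items() if detect(sample)]

print(classify_specialized("motor hum"))  # ['motor noise']
```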
[0039] The at least one classification neural network may be further trained using the provided
sound sample and/or the provided modification parameter. In particular, the at least
one classification neural network may be further trained based on a selection by the
user of at least one sound class from a plurality of potential corresponding sound
classes.
[0040] According to a preferred aspect of the inventive technology, the determined audio
processing routine comprises at least one audio processing neural network adapted
for processing sounds of the at least one corresponding sound class, in particular
to enhance or attenuate sounds of the corresponding sound class. This allows for a
particularly precise processing of sounds belonging to the at least one corresponding
sound class. The audio signal processing of the hearing device can be reliably and
precisely modified. The at least one audio processing neural network is adapted for
processing sounds of the at least one corresponding sound class. This is to be understood
to mean that the audio processing neural network is specifically trained for processing
sounds of the corresponding sound class. For example, the audio processing neural
network may be trained using a plurality of sound samples of sounds belonging to
the respective at least one sound class.
[0041] The processing of sounds of the at least one corresponding sound class, in particular
the processing of sounds of the at least one corresponding sound class by the at least
one audio processing neural network, preferably comprises amplification and/or attenuation
and/or enhancement of at least one sound feature in the processed sound which belongs
to the at least one corresponding sound class. The processing of sounds of the at
least one corresponding sound class, in particular the processing of sounds of the
at least one corresponding sound class by the at least one audio processing neural
network, may also comprise superimposing another sound feature on the at least one
sound feature belonging to the at least one corresponding sound class and/or replacing
said at least one sound feature with another sound feature.
[0042] Preferably, the at least one audio processing neural network may directly influence
the sound which will be outputted to the user. In particular, the at least one audio
processing neural network may contribute to attenuation and/or amplification and/or
enhancement and/or removal of sounds belonging to the at least one corresponding sound
class. For example, the at least one audio processing neural network may compute a
filter mask which may be applied to the audio signal, in order to alter the processing
of the audio signal in accordance with the user preferences, in particular to attenuate
or amplify sounds belonging to the at least one corresponding sound class. Additionally
or alternatively, the at least one audio processing neural network may directly output
a correspondingly processed audio signal. In particular, the at least one audio processing
neural network may output an audio signal, in which sounds belonging to the at least
one corresponding sound class are processed, in particular attenuated and/or amplified
and/or enhanced, in accordance with user preferences. For example, the at least one
audio processing neural network may be configured to enhance or attenuate sounds belonging
to the at least one corresponding sound class. Particularly preferably, the at least
one audio processing neural network may be configured to separate sounds belonging
to the at least one corresponding sound class from audio signals to be processed by
the hearing device. The separated sounds may then be independently processed, in particular
amplified and/or attenuated and/or enhanced. It is even possible to remove the separated
sounds completely from processed audio signals to be outputted to the user.
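The class-specific processing of paragraph [0042] can be illustrated with a minimal sketch. It assumes that a separate audio processing neural network has already estimated a ratio mask marking the time-frequency bins of the target sound class (the mask computation itself is not shown); the user-selected gain is then applied only to that class. All names are hypothetical.

```python
import numpy as np

def apply_class_gain(stft, mask, gain):
    """Apply a per-bin gain to the sound class isolated by `mask`.

    stft : complex spectrogram (frequency x time) of the audio signal
    mask : ratio mask in [0, 1], assumed to be estimated by an audio
           processing neural network, marking bins of the target class
    gain : linear gain for the target class (0 removes it entirely,
           values > 1 amplify it)
    """
    target = stft * mask           # sounds of the target sound class
    rest = stft * (1.0 - mask)     # everything else in the signal
    return rest + gain * target    # recombine with the chosen gain
```

With `gain = 0` the separated sounds are removed completely from the processed audio signal; with `gain > 1` they are amplified relative to the rest.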
[0043] Additionally or alternatively, at least one of the at least one audio processing
neural network may be based on a classification neural network which is used to classify
the sound feature of the sound sample. The at least one audio processing neural network
may correspond to the classification neural network. Preferably, the audio processing
neural network may correspond to a specialized classification neural network which
is specifically adapted to classify sounds belonging to a specific sound class. In
particular, the at least one audio processing neural network may correspond to a subnetwork
of a general classification neural network used on the modification system for classifying
the sound feature of the sound sample. Being specialized in classifying sounds of
a specific sound class, the at least one audio processing neural network has lower
computational demands than a general classification neural network. The audio processing
neural network can advantageously be executed directly on the hearing device. The
at least one audio processing neural network may classify the audio signals to be
processed by the hearing device. When the at least one audio processing neural network
encounters sounds belonging to the at least one corresponding sound class, it can
indicate the corresponding sounds. Based on the indication, the further processing
of the audio signals on the hearing device may be adapted in order to correspond to
the user preferences. For example, audio processing routines adapted to attenuate,
amplify, enhance or remove sounds or sound features of the at least one corresponding
sound class may be turned on or off based on a prior indication of such sounds in
the audio signal to be processed. This way, computationally expensive routines may
be executed only when needed.
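The gating described above can be sketched as follows; `detector` stands in for the lightweight classification-based audio processing neural network and `expensive_routine` for the class-specific processing. Both are hypothetical placeholders, not part of the original disclosure.

```python
def process_frame(frame, detector, expensive_routine, threshold=0.5):
    """Run the computationally expensive class-specific routine only
    when the detector indicates the target sound class in the frame."""
    score = detector(frame)            # class probability for this frame
    if score >= threshold:
        return expensive_routine(frame)
    return frame                       # pass the frame through unchanged
```

In this way, the costly routine is executed only for frames in which the target sound class was indicated.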
[0044] The at least one audio processing neural network may be created by the modification
system. For example, the modification system may use the sound sample comprising the
sound feature and/or further sounds belonging to the at least one corresponding sound
class to train a neural network to process sounds of the at least one corresponding
sound class. The trained neural network may then be implemented on the hearing device
for audio signal processing.
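As an illustration of paragraph [0044], the following sketch trains a simple logistic detector, a stand-in for a full audio processing neural network, on feature frames extracted from the user's sound sample (positives) and from other sounds (negatives). The feature extraction is assumed to have happened already; the names and the training setup are illustrative assumptions.

```python
import numpy as np

def train_class_detector(positive, negative, steps=500, lr=0.5):
    """Fit a logistic detector for the user's sound class by gradient
    descent on the log-loss; returns a scoring function in [0, 1]."""
    X = np.vstack([positive, negative])
    y = np.concatenate([np.ones(len(positive)), np.zeros(len(negative))])
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # sigmoid activation
        g = p - y                               # log-loss gradient
        w -= lr * (X.T @ g) / len(y)
        b -= lr * g.mean()
    return lambda frame: float(1.0 / (1.0 + np.exp(-(frame @ w + b))))
```

The returned scoring function plays the role of the trained network that is then implemented on the hearing device.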
[0045] Preferably, the modification system comprises a plurality of different audio processing
neural networks. Different audio processing neural networks may be adapted to process
sounds belonging to different sound classes. The audio processing neural networks
of the modification system may be pre-configured for the processing of sounds of different
sound classes. Upon the determination of the audio processing routine, a suitable
audio processing neural network may be chosen, in particular an audio processing neural
network being pre-configured for the associated at least one corresponding sound class.
[0046] The at least one audio processing neural network may be stored on a data storage
of the modification system. This is particularly advantageous if the modification
system is a remote server providing sufficient storage space for a plurality of different
audio processing neural networks. It is also possible that the at least one audio
processing neural network is already stored on a device of the hearing device system,
e.g. on a peripheral device or on the hearing device. Determining the audio processing
routine may then comprise identifying a suitable audio processing neural network.
Implementing the audio processing routine on the hearing device may comprise transferring
the suitable audio processing neural network to a hearing device and/or initiating
the suitable audio processing neural network on the hearing device.
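Determining the routine as described in paragraph [0046] might then look like this sketch, where stored networks are keyed by sound class; the dictionary layout and the two possible outcomes are assumptions for illustration only.

```python
def determine_routine(sound_class, stored_networks):
    """Identify a suitable pre-configured audio processing neural
    network for the sound class; if none is stored on the device,
    it has to be transferred from the modification system first."""
    network = stored_networks.get(sound_class)
    if network is not None:
        return ("initiate_on_device", network)
    return ("transfer_to_device", sound_class)
```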
[0047] It is also possible that the at least one audio processing neural network is executed
on a computing unit of a peripheral device of the hearing device system. This is particularly
advantageous if the at least one audio processing neural network has higher computational
demands which may not be met by the processing power of a hearing device. Further,
a higher battery capacity of the peripheral device may be used to run the at least
one audio processing neural network. An output of the at least one audio processing
neural network, in particular a filter mask, may then be transferred from the peripheral
device to the hearing device.
[0048] According to a preferred aspect of the inventive technology, the at least one audio
processing neural network is trained using the sound sample and/or the modification
parameter. The at least one audio processing neural network, and with that the audio
processing routine, are precisely adapted to the user's specific needs. In particular,
training the at least one audio processing neural network using the sound sample ensures
that the sound feature, which the user encounters, is reliably detected and/or processed
by the at least one audio processing neural network. Training the at least one audio
processing neural network using the modification parameter may advantageously further
individualize the at least one audio processing neural network. In particular, the
rating of the sound feature and/or of the at least one corresponding sound class may
be considered in the audio processing by the at least one audio processing neural
network.
[0049] According to a preferred aspect of the inventive technology, the modification system
comprises a remote server for modifying audio signal processing of hearing devices
of a plurality of users. The modification system can advantageously rely on the high
computational power of a remote server. Using the remote server for modifying the
audio signal processing of the hearing devices of a plurality of users has the further
advantage that sound samples and/or modification parameters from a plurality of users
may be used to improve the modification process. A classification algorithm, in particular
at least one classification neural network, of the modification system may be improved,
in particular trained, using the data of a plurality of users. For example, the sound
samples and/or modification parameters of different users may be used to train the
at least one classification neural network. In this way, the reliability and precision
of the classification can be increased. Further, sound samples of a plurality of users
may be used to train the at least one audio processing neural network.
[0050] Preferably, implementing the determined audio processing routine comprises transmitting
implementation data from the remote server to the hearing device. Implementation data
may include any data which is necessary to implement the determined audio processing
routine on the hearing device. For example, the implementation data may contain processing
parameters, e.g. a specific filter mask which controls the audio signal processing
on the hearing device. Additionally or alternatively, the implementation data may
contain at least one audio processing neural network. The audio processing neural
network may be completely transferred from the remote server to the hearing device.
It is also possible to transfer determined network weights from the remote server
to the hearing device. The network weights can then be implemented in an existing
audio processing neural network on the hearing device to resemble the determined at
least one audio processing neural network. Further, the implementation data may contain
feature vectors or other network parameters which may control the operation of an
audio processing neural network on the hearing device.
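A sketch of applying such implementation data on the device: only layer weights arrive from the remote server and are written into the existing on-device audio processing neural network, whose architecture (layer names and shapes) stays fixed. The dictionary layout is a hypothetical example, not a defined wire format.

```python
import numpy as np

def apply_implementation_data(device_network, implementation_data):
    """Load transferred network weights into the existing on-device
    audio processing neural network; names and shapes must match."""
    for name, weights in implementation_data["weights"].items():
        if name not in device_network:
            raise KeyError(f"unknown layer: {name}")
        if device_network[name].shape != weights.shape:
            raise ValueError(f"shape mismatch for layer: {name}")
        device_network[name] = weights.copy()
    return device_network
```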
[0051] The implementation data may be directly transferred from the remote server to the
hearing device. It is also possible that the remote server is in data connection with
a peripheral device of the hearing device system. The implementation data may be
transferred from the remote server to the peripheral device and from there distributed
to the hearing device.
[0052] According to a preferred aspect of the inventive technology, at least two sound samples
of different sound features and respective modification parameters are provided to
the modification system and the modification system associates the at least two sound
samples with respective corresponding sound classes and determines the audio processing
routine for audio processing of sounds belonging to the corresponding sound classes in accordance
with the respective modification parameters. The at least two sound samples and respective
modification parameters are preferably associated with the same user. The provision
of a plurality of sound samples and modification parameters allows the audio processing
of the hearing device to be further personalized. The user may encounter different sound
features for which he wishes to modify the audio signal processing. This way, the
audio signal processing can be precisely adjusted to the user's specific demands.
The modification of the audio signal processing may be different for different sound
samples. For example, a first sound feature may correspond to a vacuum cleaner, while a
second sound feature may correspond to a news anchor. The user may choose to attenuate
sounds corresponding to the vacuum cleaner and to enhance sounds corresponding to
the news anchor.
[0053] Different sound samples and different modification parameters may be provided to
the modification system at the same time. Preferably, different sound samples and
the respective modification parameters are provided to the modification system at
different times. The user can provide a sound sample, in particular record a sound
sample, whenever he encounters a sound for which he wishes to modify the audio signal
processing.
[0054] The processing, in particular classification, of a further sound sample may take
into account the modification based on previous sound samples and respective modification
parameters. In particular, the modification system may identify overlapping sound
classes. In this regard, the modification system may suggest that the user choose
an overlapping sound class. For example, the first sound sample may be the sound of
a vacuum cleaner and the second sound sample may be the sound of an air conditioner. The classification
of the modification system may find that both sound samples belong to the more general
sound classes "motor noise" and/or "monotonous background noise". The user may then
select whether he wishes to modify the audio processing for sounds belonging to at
least one of the overlapping sound classes in the future. Alternatively, the user
may decide that only sounds belonging to the more specific sound classes "vacuum cleaner"
and "air conditioning" may be modified in the future.
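The overlapping-class suggestion of paragraph [0054] can be sketched with a toy sound class hierarchy; the hierarchy entries and names below are invented for illustration.

```python
# Hypothetical excerpt of a sound class hierarchy: each specific
# sound class maps to its more general parent classes.
PARENT_CLASSES = {
    "vacuum cleaner": ["motor noise", "monotonous background noise"],
    "air conditioning": ["motor noise", "monotonous background noise"],
    "news anchor": ["speech"],
}

def overlapping_classes(class_a, class_b):
    """Return the more general sound classes shared by two classified
    sound samples, which can then be suggested to the user."""
    parents_b = set(PARENT_CLASSES.get(class_b, []))
    return [c for c in PARENT_CLASSES.get(class_a, []) if c in parents_b]
```

The user can then select one of the returned overlapping classes, or keep the modification restricted to the more specific classes.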
[0055] It is a further object of the inventive technology to improve personalized audio
signal processing on hearing devices.
[0056] This object is achieved by a method for personalized audio signal processing on a
hearing device as specified in claim 10. A hearing device for processing audio signals
is provided. The audio signal processing of the hearing device is modified in accordance
with user preferences as described above. Audio signals are processed by the modified
audio signal processing of the hearing device thereby processing sounds belonging
to the at least one corresponding sound class in accordance with the user preferences,
in particular in accordance with the modification parameter. The method allows for
an audio signal processing which can be easily and reliably adapted to the specific
hearing situation which the user encounters. The audio processing allows for an improved
and personalized hearing experience. The further advantages and preferred features
of the method correspond to those described above with respect to the method for modifying
audio signal processing of a hearing device in accordance with user preferences.
[0057] It is a further object of the inventive technology to provide a modification system
for modifying audio processing of a hearing device in accordance with user preferences.
[0058] This object is achieved by the modification system as specified in claim 11. The
modification system comprises an input interface for receiving at least one sound
sample comprising a sound feature for which a user wishes to alter the audio signal
processing of the hearing device and for receiving at least one modification parameter.
The modification system further comprises a processing unit adapted to process and
classify the sound sample, thereby contributing to associating
the sound feature of the sound sample with at least one corresponding sound class,
and for determining an audio processing routine for audio processing of sounds belonging
to the at least one corresponding sound class in accordance with the at least one
modification parameter. The modification system further comprises an output interface
for implementing the determined audio processing routine on the hearing device. The
modification system allows for a user-friendly and reliable modification of the audio
signal processing. The modification system may comprise the further optional features
and advantages described above with reference to the method.
[0059] The input interface and the output interface of the modification system may be combined
in a single data interface. The input interface and the output interface may preferably
be configured to transmit data over an internet connection. Implementing the determined
audio processing routine on the hearing device may preferably comprise providing implementation
data to the hearing device via the output interface.
[0060] The modification system preferably comprises a classification algorithm for classifying
the sound feature of the sound sample to identify at least one corresponding sound
class. The classification algorithm preferably comprises at least one classification
neural network for classifying the sound feature of the sound sample to identify
at least one corresponding sound class. The processing unit of the modification system
may be specifically adapted for the processing of the at least one classification
neural network. For example, the processing unit may comprise a processor adapted
for the execution of neural networks, in particular an AI chip.
[0061] According to a preferred aspect of the inventive technology, the modification system
comprises at least two audio processing neural networks, wherein the at least two
audio processing neural networks are adapted to process sounds of different sound
classes. The determined audio processing routine may comprise at least one of the
at least two audio processing neural networks. For example, determining the audio
processing routine may comprise selecting a suitable audio processing neural network.
The at least one audio processing neural network may be stored on a data storage of
the modification system.
[0062] According to a preferred aspect of the inventive technology, at least parts of the
modification system, in particular the processing unit and/or the data storage, are
implemented on a remote server to which the hearing device is connectable. The modification
system thus has high computational power. Particularly advantageously, the remote server
may be used to contribute to the modification of audio signal processing of a plurality
of hearing devices of different users. Data of different users, in particular sound
samples and/or modification parameters of different users, may be used on the remote
server to improve the modification process. In particular, data of different users
may be used to train corresponding neural networks, in particular the at least one
classification neural network and/or the at least one audio processing neural network.
[0063] The modification system may be realized by the remote server. Alternatively, the
modification system may comprise further devices, in particular devices of the hearing
device system of the individual user.
[0064] It is a further object of the inventive technology to improve a hearing device system.
[0065] This object is achieved by a hearing device system as specified in claim 14. The
hearing device system comprises a hearing device for processing audio signals. The
hearing device system further comprises a recording device for recording a sound sample
comprising a sound feature for which a user wishes to modify the audio signal processing
of the hearing device. The hearing device system comprises a user interface for receiving
a modification parameter. The hearing device comprises a data interface for providing
the recorded sound sample and the modification parameter to a modification system
for further processing, in particular for classification of the sound sample, as well
as for receiving from the modification system implementation data for implementing
an audio processing routine on the hearing device. The hearing device system can be
adapted to the user's specific demands.
[0066] The hearing device system preferably comprises a plurality of hearing devices, in
particular two hearing devices. The hearing device system may further comprise a peripheral
device, in particular a mobile device, for example a smartphone, a smartwatch and/or
a tablet. The user interface may be comprised by the peripheral device, in particular
in the form of a touch screen. The data interface may be realized by the peripheral
device, in particular by establishing a data connection to the modification system,
in particular to a remote server of the modification system. It is also possible
that the data interface and/or the user interface are realized by the one or more
hearing devices.
[0067] The recording device for recording a sound sample may be the recording device of
the hearing device and/or the recording device of a peripheral device of the hearing
device system. For example, the user may initiate the recording of a sound sample
by using a dedicated application on a peripheral device, in particular on a smartphone,
a smartwatch and/or a tablet. The recording device preferably is a microphone.
[0068] The hearing device system may comprise further optional features discussed with respect
to the method for modifying audio signal processing of a hearing device in accordance
with user preferences.
[0069] It is a further object of the inventive technology to improve a hearing device arrangement.
[0070] This object is achieved by a hearing device arrangement as specified in claim 15.
The hearing device arrangement comprises a hearing device system and a modification
system as described above. The hearing device arrangement allows for an easy and flexible
modification of the audio processing of a hearing device of the hearing device system
using the modification system. The modification system may comprise one or more components
being realized on or by devices of the hearing device system. For example, the modification
system may be realized at least partially on or by a peripheral device of the hearing
device system and/or by components of the hearing device.
[0071] Preferably, the modification system comprises a remote device, in particular a remote
server. The modification system, in particular the remote device of the modification
system, may be independent from components or devices of the hearing device system.
For example, the hearing device system may be connectable via a data connection to
the modification system, in particular to a remote device of the modification system,
preferably to a remote server of the modification system. The hearing device arrangement
may comprise a plurality of hearing device systems of different users. The hearing
device systems of different users may each be connectable to the modification system
via a data connection.
[0072] As described above, the provision of the modification parameter which is selectable
by the user is a particularly advantageous aspect of the inventive technology. The
provision of the modification parameter is however not necessary to modify the audio
signal processing. It is an independent aspect of the inventive technology to provide
a method and a system for modifying the audio signal processing of a hearing device
which does not rely on the provision of a modification parameter selectable by the
user. For example, a sound sample comprising a sound feature for which the user wishes
to modify the audio signal processing of the hearing device is provided to a modification
system. The sound feature of the sound sample is associated with at least one corresponding
sound class by classifying the sound sample by the modification system. An audio processing
routine for processing of sounds belonging to the at least one corresponding sound
class is determined by the modification system based on the associated at least one
corresponding sound class. The determined audio processing routine is implemented
on the hearing device. Thus, the modification of the audio signal processing of the
hearing device may be realized solely based on the provided sound sample comprising
the sound feature. For example, each sound class may be related with a specific kind
of modification of the audio signal processing. The kind of modification may be pre-defined
for each sound class. For example, the sound class may determine whether the sounds
belonging to the sound class are attenuated or enhanced. The method and the system
according to this independent aspect of the invention may further comprise any of
the above described features.
[0073] Further details, features and advantages of the inventive technology are obtained
from the description of exemplary embodiments with reference to the figures, in which:
Fig. 1 shows a schematic depiction of a hearing device arrangement comprising a hearing device
system and a modification system,
Fig. 2 shows a schematic depiction of a method for modification of the audio signal processing
of hearing devices of the hearing device system according to Fig. 1,
Fig. 3 shows a schematic depiction of a secondary device of the hearing device system according
to Fig. 1, wherein a modification parameter is user-selectable by user input,
Fig. 4 shows a schematic depiction of a further embodiment of a hearing device arrangement
comprising a hearing device system and a modification system,
Fig. 5 shows a schematic depiction of a further embodiment of a hearing device arrangement
comprising a hearing device system and a modification system, wherein the modification
system is partially realized by components of the hearing device system,
Fig. 6 shows a schematic depiction of a further embodiment of a hearing device arrangement
comprising a hearing device system and a modification system, wherein the modification
system is realized by components of the hearing device system, and
Fig. 7 shows a schematic depiction of a further embodiment of a hearing device arrangement
comprising a plurality of hearing device systems and a modification system.
[0074] Fig. 1 schematically depicts a hearing device arrangement 1 comprising a hearing
device system 2 and a modification system 3.
[0075] The hearing device system 2 comprises two hearing devices 4L, 4R. The hearing devices
4L, 4R of the shown embodiment are hearing aids worn in the left and right ear of
a user U, respectively. Here and in the following, the suffix "L" to a reference
sign indicates that the respective component or signal is associated with the left
hearing device 4L. The suffix "R" to a reference sign indicates that the respective
component is associated with the right hearing device 4R. In case reference is made
to both hearing devices or their respective components, the respective reference sign
is used without a suffix. For example, the hearing devices 4L, 4R may commonly
be referred to as the hearing devices 4 for simplicity.
[0076] The hearing device system 2 further comprises a peripheral device 5. The peripheral
device 5 is provided in form of a smartphone. In other embodiments, the peripheral
device 5 may be another portable device, for example a mobile device, in particular
a tablet, smartwatch and/or smartphone. In yet another embodiment, the peripheral
device 5 can be a wireless microphone. In yet another embodiment, the hearing device
system 2 may comprise a plurality of peripheral devices 5, e.g. a mobile device and
a wireless microphone.
[0077] The hearing devices 4L, 4R are connected to each other by a wireless data connection
6LR. The left hearing device 4L is connected to the peripheral device 5 by a wireless
data connection 6L. The right hearing device 4R and the peripheral device 5 are connected
by a wireless data connection 6R. Any suitable protocol can be used for establishing
the wireless data connections 6LR, 6L, 6R. For example, the wireless data connections
6LR, 6L, 6R may be Bluetooth connections or may use similar protocols, such as, for
example, Asha Bluetooth. Further exemplary wireless data connection technologies are FM transmitters,
aptX LL and/or induction transmitters (NFMI) such as the Roger protocol. Different
wireless data connections may employ different data connection technologies, in particular
data connection protocols. For example, the wireless data connection 6LR between the
hearing devices 4L, 4R may be based on another data connection technology than the wireless data
connections 6L, 6R between the hearing devices 4L, 4R, respectively, and the peripheral
device 5.
[0078] In the present embodiment, the hearing device 4L comprises an audio input device
7L comprising an electroacoustic transducer, in this case in the form of a microphone,
a computing device 8L and an audio output device 9L comprising an electroacoustic
transducer which, in this case, is in the form of a receiver. Analogously, the hearing
device 4R comprises an audio input device 7R in the form of a microphone, a computing
device 8R and an audio output device 9R in the form of a receiver. The audio input
devices 7L, 7R receive ambient sound and provide corresponding audio signals for audio
signal processing to the computing devices 8L, 8R. The computing devices 8L, 8R perform
audio signal processing on the respective audio signals. The respective processed
audio signals are provided to the audio output devices 9L, 9R which provide a corresponding
output in the form of sound to the user U via their receivers. An audio signal herein
may be any electrical signal which carries acoustic information.
[0079] It has to be noted that in alternative embodiments, the audio input devices 7L, 7R
can comprise, in addition to or instead of the microphones, an interface that allows
for receiving audio signals e.g. in the form of an audio stream, for instance provided
by an external microphone. Furthermore, the audio output devices 9L, 9R can comprise,
in addition to or instead of the receivers, an interface that allows for outputting
electric audio signals e.g. in the form of an audio stream or in the form of electrical
signals that can be used for driving an electrode of a hearing aid implant.
[0080] The respective computing devices 8L, 8R of the hearing devices 4L, 4R are not depicted
in detail. Each of the computing devices 8L, 8R has a processor, in particular an AI
chip, and a main memory. Each computing device 8L, 8R can also comprise a data memory
for storing at least one audio signal processing algorithm which, when executed by
the processor, performs the audio signal processing. The data memory preferably comprises
different audio signal processing algorithms for processing different kinds of audio
signals. The at least one audio signal processing algorithm may comprise at least
one audio processing neural network for processing the audio signal.
The audio signal processing using at least one neural network is schematically depicted
by the audio processing neural networks ANL, ANR being part of the computing devices
8L, 8R of the hearing devices 4L, 4R, respectively.
[0081] The hearing device 4L further comprises a data connection interface 10L. The hearing
device 4R further comprises a data connection interface 10R. The peripheral device
5 comprises a data connection interface 11. Via the data connection interfaces 10L,
10R, 11, the wireless data connections 6LR, 6L, 6R are established. Using the wireless
data connections 6LR, 6L, 6R, the hearing device 4L, the hearing device 4R and the
peripheral device 5 can exchange data. For example, binaural data can be exchanged
between the hearing devices 4L, 4R using the wireless data connection 6LR. Using binaural
data, the audio signal processing of the hearing devices 4L, 4R can preferably preserve
binaural cues. The wireless data connections 6L, 6R can also be used to exchange data
between the hearing devices 4L, 4R, respectively, and the peripheral device 5. In
particular, the peripheral device 5 can transmit implementation data I to the hearing
device 4L and/or the hearing device 4R, in order to modify the audio signal processing
of the hearing device 4L and the hearing device 4R, respectively, as will be described
below.
[0082] The peripheral device 5 is a smartphone equipped with special application
software which allows the user U to interact with the hearing devices 4, in particular
to modify the audio signal processing of the hearing devices 4. The peripheral device 5 comprises a user interface
12. The user interface 12 is in the form of a touch screen. The user interface 12
allows information to be displayed to the user U in the form of a user output UO. The user
interface 12 can receive user input UI, for example in the form of a touch input. In particular,
the user U can select one or more of different alternatives displayed on the user
interface 12.
[0083] The peripheral device 5 comprises a recording device 13 in the form of a microphone.
[0084] The peripheral device 5 comprises a data interface 14. The data interface 14 is configured
to transmit data to and receive data from devices outside of the hearing device system
2. The data interface 14 is for example a data interface connecting to the Internet.
The data interface 14 preferably can connect to the Internet via WiFi and/or mobile
data protocols, such as 3G, 4G and/or 5G broadband cellular networks.
[0085] The modification system 3 comprises a remote server 15. The remote server 15 comprises
an input interface 16 and an output interface 17. The remote server 15 is connectable
to the hearing device system 2 via the input interface 16 and the output interface
17 in a data-transmitting manner. As shown in Fig. 1, a remote data connection 18
is established between the data interface 14 of the peripheral device 5 and the input
interface 16 and the output interface 17 of the remote server 15. In the schematic
of Fig. 1, the input interface 16 and the output interface 17 are shown as separate
interfaces in order to discuss their different functionalities in the following. The
input interface 16 and the output interface 17 may be combined in a single data interface
of the remote server 15.
[0086] In Fig. 1, a remote data connection 18 is shown between the remote server 15 and
an exemplary hearing device system 2. The remote server 15 is configured to connect
to a plurality of such hearing device systems 2. The remote server 15 connects to
hearing device systems 2 of different users U. The modification system 3 is configured
to modify the audio signal processing of the hearing devices 4L, 4R of a plurality
of hearing device systems 2 according to the preferences of the respective user U.
[0087] The input interface 16 is configured to receive input data ID. Input data ID comprises
a sound sample S and a modification parameter M. Different components of the input
data ID, in particular the sound sample S and the modification parameter M, can be
transmitted to the input interface 16 at different times, in particular at different
stages of a modification process. The modification system 3 uses the input data ID
comprising the sound sample S and the modification parameter M to modify the audio
signal processing of the hearing devices 4 in accordance with the user preferences
of the user U.
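By way of a non-limiting illustration, the input data ID with its two components can be sketched as follows in Python. The field names and types are assumptions for illustration only and do not form part of the described embodiment; the sketch merely shows that the sound sample S and the modification parameter M may arrive at different stages of the modification process.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical container for the input data ID received by the input
# interface 16: the sound sample S and the modification parameter M
# may be transmitted at different times, so both start out empty.
@dataclass
class InputData:
    sound_sample: Optional[bytes] = None         # recorded sound sample S
    modification_parameter: Optional[dict] = None  # user selection M

    def is_complete(self) -> bool:
        # The modification system can only determine an audio processing
        # routine once both components have been received.
        return (self.sound_sample is not None
                and self.modification_parameter is not None)
```

For example, `InputData(sound_sample=b"...")` received at the recording stage is not yet complete; it becomes complete once the modification parameter M arrives later in the process.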
[0088] The output interface 17 is configured to provide output data OD to the hearing device
system 2. The output data OD comprises implementation data I and classification data
C. Different components of the output data OD, in particular the implementation data
I and the classification data C, can be provided to the hearing device system 2 at
different times, in particular at different stages of the modification process.
[0089] The remote server 15 comprises a processing unit 19 and a data storage 20. The processing
unit 19 is not shown in detail. The processing unit 19 comprises a main storage and
a processor. The processor comprises an AI chip. The data storage 20 comprises program
data which can be executed on the processor. The data storage 20 comprises classification
algorithms 21 which, when executed by the processing unit 19, classify audio signals
in different sound classes SC. The classification algorithms 21 comprise a classification
neural network CN.
[0090] The data storage 20 further comprises audio processing routine data 22. Audio processing
routine data 22 comprises audio processing routines which may be implemented on the
hearing devices 4. Audio processing routine data 22 comprises a plurality of different
audio processing neural networks AN. In Fig. 1, two audio processing neural networks
AN are shown by way of example. Different audio processing neural networks AN are adapted to process
sound of different sound classes SC. For example, different audio processing neural
networks AN may be trained with sounds belonging to different sound classes SC, thereby
being specialized in the processing of sounds belonging to the respective sound class
SC. For example, one audio processing neural network AN may be adapted for processing
traffic noise. Another audio processing neural network may be adapted for processing
sounds of household appliances. Yet another audio processing neural network may be
adapted for processing speech. Different audio processing neural networks AN may also
be adapted for a different kind of audio processing of the same sound classes SC,
e.g. to attenuate or enhance sounds of the respective sound class SC.
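The selection of a specialized network per sound class SC can be illustrated by the following minimal Python sketch. The registry contents and network identifiers are purely illustrative assumptions, not part of the audio processing routine data 22 itself.

```python
# Hypothetical registry mapping sound classes SC to identifiers of
# specialized audio processing neural networks AN held in a data
# storage such as data storage 20. All names are illustrative.
PROCESSING_REGISTRY = {
    "traffic noise": "an_traffic_v1",
    "household appliance": "an_household_v1",
    "speech": "an_speech_v1",
}

def select_network(sound_class: str) -> str:
    # Fall back to a generic network when no network is specialized
    # in the requested sound class.
    return PROCESSING_REGISTRY.get(sound_class, "an_generic_v1")
```

For instance, `select_network("speech")` would return the speech-specialized network, while an unknown class falls back to the generic one.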
[0091] Fig. 2 schematically depicts a method for personalized audio signal processing 25.
The method for personalized audio signal processing 25 can preferably be implemented
on the hearing device arrangement 1 depicted in Fig. 1. The method 25 utilizes the
modification system 3 being in data connection to the hearing device system 2. The
method steps to be performed on the modification system 3 are shown in the left column.
The method steps to be performed on the hearing device system 2 are shown in the right
column. Via the remote data connection 18, data is exchanged between the two systems
of the hearing device arrangement 1.
[0092] In preparation step 26, the modification system 3 is provided. On the side of the
user U, the hearing device system 2 is provided in a preparation step 27. The preparation
step 27 can also comprise the launch of an app on the peripheral device 5, initiating
the modification process to personalize the audio signal processing of the hearing
devices 4. The preparation steps 26, 27 may further comprise establishing the remote
data connection 18 between the hearing device system 2 and the modification system
3.
[0093] The modification process starts when the user U wishes to modify the audio signal
processing of the hearing devices 4 for a sound feature SS. Fig. 1 schematically shows
a sound source 23 producing a sound feature SS. The sound source 23 is exemplarily
depicted as a fan. In a recording step 28, a sound sample S of the sound feature SS
is recorded. The recording of the sound sample S is initiated by the user U, in particular
by a user input UI, via the user interface 12.
[0094] In the present example, the sound sample S of the sound feature SS is recorded using
the recording device 13 of the peripheral device 5. Alternatively or additionally,
the sound sample S of the sound feature SS is recorded by the recording devices 7
of the hearing devices 4.
[0095] The recorded sound sample S of the sound feature SS is provided to the modification
system 3. The sound sample S is transferred from the data interface 14 to the input
interface 16 of the remote server 15.
[0096] In a classification step 29, the sound feature SS of the sound sample S is processed
by executing the classification algorithm 21 containing the classification neural
network CN on the processing unit 19 of the remote server 15. Using the classification
algorithm 21, the sound feature SS of the sound sample S is classified to belong to
one or more sound classes SC. Processing of the classification algorithm 21 returns
one or more sound classes SC to which the sound feature SS of the sound sample S belongs.
[0097] Different sound classes SC may represent alternative categories, like household appliance
or traffic noise. Further, different sound classes SC may represent a different degree
of generalization of the sound feature SS. For example, if the sound feature SS is
that of a vacuum cleaner, sound classes SC with an increasing degree of generalization
may be: vacuum cleaner, household appliance, motor noise, and monotonous background
noise. In the present example, the sound feature SS may correspond to a vacuum cleaner.
The classification algorithm 21 then may return the sound classes SC "vacuum cleaner",
"household appliance", "motor noise" and "monotonous background noise".
[0098] The identified sound classes SC are provided to the hearing device system 2 in the
form of classification data C. The classification data C is transmitted from the output
interface 17 of the remote server 15 to the data interface 14 of the peripheral device
5.
[0099] In a user selection step 30, the user U selects a modification parameter M which
further determines the modification of the audio processing by the hearing devices
4. The modification parameter M comprises a rating of the sound feature SS by the
user U, and/or a rating of the at least one corresponding sound class SC by the user
U, and/or at least one sound class SC selected by the user U for associating the sound
feature SS with the at least one corresponding sound class SC.
[0100] In the shown embodiment, the modification parameter M comprises a user selection
of at least one corresponding sound class SC. The at least one sound class SC provided
by the modification system 3 is displayed on the user interface 12 to the user U.
An exemplary user output UO on the user interface 12 of the peripheral device 5 is
shown in Fig. 3. In this specific example, the sound classes SC "vacuum cleaner",
"household appliance", "motor noise" and "monotonous background noise" are displayed
to the user U. The sound classes SC displayed to the user U form a selection of potential
corresponding sound classes from which the user can choose one or more sound classes
SC. The user U can select form the displayed sound classes SC for which kind of sounds
he wants to modify the audio signal processing on the hearing devices 4. With the
selection the user can determine, whether the modification of the audio signal processing
applies to a wider range of sounds (e.g. monotonous background noises) or to a class
of sounds, which more precisely corresponds to the sound feature SS (e.g. vacuum cleaner).
The selection by the user U associates the sound feature SS with the selected sound
classes SC. The associated sound classes SC are also referred to as corresponding
sound classes. The selection of the at least one corresponding sound class is done
in a sound class selection step 31.
[0101] In the shown embodiment, the modification parameter M further comprises a rating
of the sound feature SS. In the user selection step 30, a rating scale 32 is displayed
to the user U (cf. Fig. 3). Using the rating scale 32, the user U can indicate to what
degree he finds the sound feature SS annoying or pleasant. Based
on this selection, it is determined how the audio signal processing for sounds of
the selected sound class SC is modified. If the user U rates the sound as unpleasant,
the audio signal processing of the hearing devices 4 is modified to attenuate sound
of the selected sound class SC. If the user U rates the sound to be pleasant, the
audio signal processing of the hearing devices 4 is modified to enhance the sounds
of the selected sound class SC. The degree of attenuation or enhancement corresponds
to how annoying or pleasant the user U rates the sound. The rating of
the sound using the rating scale 32 is done in a sound rating step 33.
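A possible mapping from the rating scale 32 to a processing gain can be sketched as follows. The scale range of -5 (very annoying) to +5 (very pleasant) and the maximum gain of 12 dB are illustrative assumptions; the embodiment does not prescribe specific values.

```python
def rating_to_gain_db(rating: int, max_gain_db: float = 12.0) -> float:
    """Map a user rating on the rating scale 32 to a gain in dB.

    Negative ratings (annoying) yield attenuation, positive ratings
    (pleasant) yield enhancement. Scale range and maximum gain are
    assumptions for illustration only.
    """
    if not -5 <= rating <= 5:
        raise ValueError("rating out of range")
    # Linear mapping: -5 -> -max_gain_db, 0 -> 0 dB, +5 -> +max_gain_db.
    return rating / 5.0 * max_gain_db
```

A rating of -5 would thus request the strongest attenuation, while a neutral rating of 0 leaves the sound class unchanged.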
[0102] In the shown example, the user selection step 30 comprises the sound class selection
step 31 and the sound rating step 33. The modification parameter M contains a selection
of at least one of the sound classes SC, which have been identified by the modification
system 3, and a rating of the sound feature SS.
[0103] In other embodiments, the modification parameter M as well as the user selection
may be different. For example, it might be possible that the user U is only presented
with a selection of several potential corresponding sound classes SC. The modification
parameter M may only comprise the selection of at least one corresponding sound class
SC. The modification is then based on the at least one corresponding sound class SC,
e.g. by considering a pre-defined degree of attenuation or enhancement depending on
the respective corresponding sound class SC. For example, if the user U selects the
sound class "vacuum cleaner", the respective sound can be automatically attenuated.
In other use cases, for example when the recorded sound is identified as being a human
voice, the respective sound class SC (e.g. news anchor) may be automatically enhanced.
[0104] In yet another embodiment, the modification parameter M may only comprise a rating
of the user U of the sound feature SS. The sound feature SS may be associated automatically
with at least one corresponding sound class SC based on the outcome of the classification
algorithm 21. Sounds belonging to the at least one corresponding sound class SC are
then processed in accordance with the rating provided by the user U. In such an example,
it is possible that the user U rates the sound feature SS before the sound sample
S is transferred to the modification system 3 and/or processed by the modification
system 3. The sound sample S and the modification parameter M can be provided to the
modification system 3 at the same time.
[0105] In yet another example, the modification parameter M may comprise a rating of the
at least one corresponding sound class SC. For example, the sound feature SS may be
associated automatically with at least one corresponding sound class SC based on the
outcome of the classification algorithm 21. The at least one corresponding sound class
SC may be displayed to the user U together with a rating scale for rating the at least
one corresponding sound class SC.
[0106] The modification parameter M is provided to the modification system 3. The modification
parameter M is transferred from the data interface 14 to the input interface 16 on
the remote server 15.
[0107] In an audio processing routine determination step 34, an audio processing routine
for audio processing sounds belonging to the at least one corresponding sound class
SC in accordance with the modification parameter M is determined. In the shown embodiment,
the processing unit 19 of the remote server 15 selects the audio processing neural
network AN from the data storage 20 which is specialized on processing sounds from
the selected sound class SC. The audio processing routine determination step 34 further
comprises an adaption of the selected audio processing neural network AN. The audio
processing neural network AN is adapted by training the audio processing neural network
AN with the sound sample S of the sound feature SS. This way, the audio processing
neural network AN is specifically adapted for the audio processing based on user preferences,
in particular for audio processing according to the specific use case in which the
user U has recorded the sound sample S. Moreover, the audio processing neural network
AN may be further trained using other sound samples belonging to the at least one
corresponding sound class SC which are stored on the server. The further sound samples
may have been obtained by previous modification processes by the same or other users
U.
[0108] There are various kinds of suitable audio processing neural networks AN. For example,
the audio processing neural network AN may correspond to a classification neural network
for determining whether a sound belongs to the corresponding sound class SC or not.
If a sound belonging to the at least one corresponding sound class SC is detected,
the audio processing neural network AN may indicate (or flag) the audio signals as
containing sounds of the corresponding sound class SC. The indicated sounds can then
be easily detected and adequately processed by further audio signal processing routines
executed on the computing devices 8 of the hearing devices 4. Additionally or alternatively,
the audio processing neural network AN is configured to directly modify the sounds
belonging to the corresponding sound class SC. For example, the audio processing neural
network AN may be adapted to enhance or attenuate sounds belonging to the corresponding
sound class SC. To this end, the audio processing neural network AN may calculate
corresponding filter masks which can be employed in the further audio signal processing.
Additionally or alternatively, the at least one audio processing neural network may
be adapted for directly outputting a correspondingly processed audio signal. Other
suitable audio processing neural networks AN may further be adapted to separate sound
belonging to the corresponding sound class SC from the audio signals to be processed
by the hearing devices 4. The separated sounds can then be attenuated and/or amplified
and/or enhanced and/or removed independently of further sounds contained in the audio
signal. These separated sounds may for example be enhanced or attenuated or even completely
removed from the audio signal. Additionally or alternatively, the separated sounds
may also be superimposed and/or replaced by a different sound and/or sound feature.
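The filter-mask approach mentioned above can be illustrated by the following minimal sketch, in which a fixed per-band mask stands in for the output of an audio processing neural network AN. The band count and mask values are illustrative assumptions.

```python
def apply_mask(spectrum: list, mask: list) -> list:
    """Apply a per-band filter mask to a magnitude spectrum.

    In the described system, the mask would be calculated by the audio
    processing neural network AN; here a precomputed list of values in
    [0, 1] stands in for that output. Values near 0 attenuate a band,
    values near 1 leave it unchanged.
    """
    if len(spectrum) != len(mask):
        raise ValueError("spectrum and mask must have equal length")
    return [s * m for s, m in zip(spectrum, mask)]
```

For example, a mask of `[0.5, 0.0]` halves the first band and removes the second, corresponding to attenuating or removing sounds of the corresponding sound class SC.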
[0109] The audio processing neural network AN may be specifically adapted for processing
sounds of the at least one corresponding sound class. Being specially adapted to process
sounds of the at least one corresponding sound class, the audio processing neural
networks AN can be run on the hearing devices 4 despite their reduced size leading
to limited computational power and battery capacity. Such limitations may be significantly
reduced in the modification system 3, which therefore may be exploited to store a
variety of different audio processing neural networks AN, each suitable to be implemented
on the hearing device 4. Additionally or alternatively, the modification system 3 can
be exploited to store one or more rather complex audio processing neural networks,
which can be reduced, by the modification system 3, to a less complex audio processing
neural network AN suitable to be implemented on the hearing device 4.
[0110] It is also possible that the determined audio processing routine comprises several
audio processing neural networks AN. For example, if the user U selects a more general
sound class SC, e.g. the sound class "monotonous background noise", the processing,
in particular the classification, on the modification system 3 may determine several
audio processing neural networks AN which are specifically adapted for different sub-classes
of the selected general sound class SC, e.g. the sub-classes "vacuum cleaner", "air
conditioning" and the like.
[0111] The determined audio processing routine is implemented on the hearing devices 4.
To this end, implementation data I is transferred from the output interface 17 of
the remote server 15 to the data interface 14 of the peripheral device 5 of the hearing
device system 2. The implementation data I contains the adapted audio processing neural
network AN and/or further processing parameters.
[0112] In an implementation step 35, the determined audio processing routine is implemented
on the hearing devices 4 based on the implementation data I. The implementation data
I is received by the data interface 14 of the peripheral device 5. Via the wireless
data connections 6L, 6R, the implementation data I is transferred from the peripheral
device 5 to the respective hearing devices 4L, 4R. The implementation data I is used
to implement the determined audio processing routine on the computing devices 8 of
the hearing devices 4. For example, the provided audio processing neural network ANL
may be implemented on an AI chip of the computing device 8L of the hearing device
4L. Additionally or alternatively, the provided audio processing neural network ANR
may be implemented on an AI chip of the computing device 8R of the hearing device
4R. The provided audio processing neural networks ANR, ANL may be the same or different.
The provided audio processing routine, in particular the provided audio processing
neural networks AN, can replace or supplement an audio processing routine which has
been previously implemented on the hearing devices 4.
[0113] Additionally or alternatively, the implementation data I may contain processing parameters
which alter the processing of the hearing devices 4. For example, the implementation
data I may contain suitable network weights which update the audio processing neural
network AN on the hearing devices 4 to process sounds of the at least one corresponding
sound class in accordance with user preferences.
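A partial weight update of this kind can be sketched as follows. The layout of the weight dictionaries is an illustrative assumption; an actual hearing device would use its own update protocol for the implementation data I.

```python
def apply_weight_update(current_weights: dict, implementation_data: dict) -> dict:
    """Update the on-device network weights from implementation data I.

    Only the layers contained in the update are replaced; all other
    layers of the audio processing neural network AN on the hearing
    device remain intact. Dictionary layout is illustrative only.
    """
    updated = dict(current_weights)
    updated.update(implementation_data.get("weights", {}))
    return updated
```

This reflects the described option of transmitting suitable network weights instead of a complete replacement network, keeping the transferred implementation data I small.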
[0114] The implementation of the determined audio processing routine, in particular the
replacement of the audio processing neural network AN on the computing devices 8 of
the hearing devices 4, may require a reboot of the hearing devices 4.
[0115] After implementing the determined audio processing routine on the hearing devices
4, the hearing devices 4 are used for processing audio signals in an audio signal
processing step 36. In the audio signal processing step 36, respective audio signals
recorded by the recording devices 7 of the hearing devices 4 are processed by the
computing device 8 comprising modified audio processing algorithms containing the
determined audio processing routine, in particular the adapted audio processing neural
networks AN. In the audio processing, the audio signals belonging to the at least one
associated sound class are processed in accordance with the preferences of the user
U.
[0116] In a training step 37 on the modification system 3, the modification parameter M,
in particular the selection of the sound class SC by the user U, is used to further
train the classification neural network CN. This way, the classification of the at
least one corresponding sound class SC is improved over time. In particular, the modification
parameters M of a plurality of users U, whose hearing devices 4 are connectable to
the remote server 15 of the modification system 3, provide an increasing stock of
training data gradually improving classification of sound features SS of provided
sound samples S. Further, the modification parameters M as well as the sound samples
S of a plurality of users U may be used to further adapt, in particular further train,
the audio processing neural networks AN.
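The accumulation of training data in training step 37 can be sketched as follows. The data layout is an illustrative assumption; it shows only that each user selection of corresponding sound classes SC yields labeled examples for further training of the classification neural network CN.

```python
# Hypothetical growing stock of training data on the modification
# system: each entry pairs a sound sample identifier with a sound
# class SC selected by a user U. Layout is illustrative only.
training_set = []

def record_feedback(sound_sample_id: str, selected_classes: list) -> None:
    # Each selected corresponding sound class SC becomes one labeled
    # example for the next training round of the classification
    # neural network CN.
    for sound_class in selected_classes:
        training_set.append((sound_sample_id, sound_class))
```

As more users U contribute selections, the stock of labeled examples grows, which is the mechanism by which the classification gradually improves over time.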
[0117] As is indicated by the repetition loop in Fig. 2, the modification process can be
repeated to further personalize the audio signal processing of the hearing device
system 2. To this end, the user U can record further sound samples S of other sound
features SS for which he would like to modify the audio signal processing of the hearing
devices 4.
[0118] For example, the user U may record a sound sample S of the sound feature SS of a
drilling machine. The classification of the sound feature SS on the modification system
3 returns the sound classes "drilling machine", "motor noise" and "loud background
noise". The associated sound classes SC can then be presented to the user U. For example,
the user U can be further provided with information on the previously selected sound
classes SC. For example, if the user U has previously selected the sound class "motor
noise", it can be indicated to the user that the newly associated sound classes SC
contain the same sound class "motor noise". The user U can then choose whether to
select a further sound class SC, e.g. the sound class "drilling machine" or whether
he wishes to change the modification based on the previously selected sound class
SC. For example, the user U can increase the annoyance level in order to more strongly
attenuate sounds belonging to the sound class "motor noise".
[0119] If the user U has previously selected the sound class "vacuum cleaner", the system
can inform the user U that the newly recorded sound feature SS most likely belongs
to the sound class "drilling machines". The user U can be further informed that the
previously selected sound class "vacuum cleaner" and the sound class "drilling machine"
both belong to the more general sound class "motor noise". The user U can then decide
whether he wishes to only modify the specific sound classes "vacuum cleaner" and "drilling
machine" or whether he wishes to modify the more general sound class "motor noise".
[0120] In the above-described embodiment, the modification system 3 is realized on the remote
server 15. Thus, the hearing device system 2 and the modification system 3 are clearly
separated from each other. This system has the advantage that the modification system
3 can be utilized to modify the audio signal processing of the hearing devices 4 of
a plurality of users U. Using the data of several users U, in particular the modification
parameter M and a sound sample S, the modification system 3 can be gradually improved.
Further, the realization of the modification system 3 on the remote server 15 has
the advantage that the high computational power of the server 15 can be used for processing,
in particular classifying, the sound sample S and determining appropriate audio processing
routines. In other embodiments, the modification system 3 can be at least partially
realized on devices of the hearing device system 2 itself. Thus, the modification
system 3 and the hearing device system 2 may have common components. For example,
it is possible that the classification of the sound sample S is executed on the remote
server 15, exploiting the higher computational power of the remote server 15. Further
steps of the modification process, in particular the determination of audio processing
routines, may be performed on the hearing device system 2. For example, the determination
of an audio processing routine may comprise the selection of a suitable audio processing
neural network AN. Different audio processing neural networks AN may already be stored
on the hearing devices 4. The determination of the audio processing routine may then
be performed by choosing the appropriate audio processing neural network AN directly
on the hearing devices 4.
[0121] In further embodiments, the modification system 3 may completely be realized by components
of the hearing device system 2. For example, the peripheral device 5 may process,
in particular classify, the sound sample S to associate at least one corresponding
sound class SC to the sound feature SS of the sound sample S. In an embodiment, the
peripheral device 5 may comprise the classification algorithm 21 containing the classification
neural network CN for classifying the sound feature SS of the sound sample S. These
systems have the advantage to be independent of a remote data connection 18 to a remote
server 15.
[0122] In further embodiments, the hearing device system 2 may not comprise a peripheral
device 5. In this case, a direct remote data connection 18 may be established between
the hearing devices 4 and the remote server 15. The sound sample S may be recorded
using the recording devices 7 of the hearing devices 4. The recording of the sound
sample S may be initiated by tapping at least one of the hearing devices 4. Alternatively
or additionally, the recording may be initiated by keyword recognition, for example
if the user U says "start recording". The recorded sound sample S may be directly
transferred from the hearing devices 4 to the remote server 15. Classification data
C can be transferred from the remote server 15 directly to the hearing devices 4.
The classification data C, in particular a selection from a plurality of potential
corresponding sound classes SC and/or a rating scale, can be provided to the user
U via audio output, in particular via voice message. A selection of the modification
parameter M may be performed by gestures, e.g. by tapping the hearing devices 4, and/or
by keyword recognition, in particular if the user U repeats the label of the sound
class SC to be selected and/or provides a rating of the sound.
[0123] In other embodiments, the modification process does not require the provision of
a modification parameter M, in particular the modification process does not require
the selection of a modification parameter by the user. For example, the modification
system 3 may automatically select the corresponding sound class SC to which the sound
feature SS belongs with the highest probability. The modification is then based on
the at least one corresponding sound class SC, e.g. by considering a pre-defined degree
of attenuation or enhancement depending on the respective corresponding sound class
SC. For example, if the modification system 3 finds that the recorded sound sample
S contains the sound feature SS of a drilling machine, it may automatically modify
the audio signal processing of the hearing devices 4 to heavily attenuate sounds belonging
to the sound class "drilling machine".
[0124] Fig. 4 schematically depicts a further embodiment of a hearing device arrangement
101. The hearing device arrangement 101 comprises a hearing device system 102 and
a modification system 103. Devices belonging to the hearing device system 102 are
contained in a dashed box schematically representing the hearing device system 102.
Devices belonging to the modification system 103 are contained in a dashed box schematically
representing the modification system 103.
[0125] The hearing device system 102 comprises one or more hearing devices schematically
depicted by hearing device 104. The hearing device system 102 may optionally comprise
one or more peripheral devices, schematically depicted by peripheral device 105.
[0126] The modification system 103 comprises one or more remote devices as exemplarily shown
by remote device 115. Remote device 115 is a remote server.
[0127] Hearing device system 102 and modification system 103 are in data connection via
a remote data connection 118. The remote data connection 118 may be established between the
hearing device 104 and/or the peripheral device 105 and the remote device 115. For
example, the remote device 115 may be directly connected to the hearing device 104
via the remote data connection 118. Additionally or alternatively, it is possible
that the remote device 115 is connected to the hearing device 104 via the peripheral
device 105.
[0128] Fig. 5 schematically depicts a further embodiment of a hearing device arrangement
201. The hearing device arrangement 201 comprises a hearing device system 202 and
a modification system 203. Devices belonging to the hearing device system 202 are
contained in a dashed box schematically representing the hearing device system 202.
Devices belonging to the modification system 203 are contained in a dashed box schematically
representing the modification system 203.
[0129] The hearing device system 202 comprises one or more hearing devices as schematically
represented by the hearing device 204. The hearing device system 202 further comprises
one or more peripheral devices as schematically depicted by peripheral device 205.
[0130] The modification system 203 comprises one or more remote devices as schematically
depicted by remote device 215. Parts of the modification system 203 are realized on
the peripheral device 205. In that sense, the peripheral device 205 also belongs to the modification
system 203. The peripheral device 205 may be configured to perform one or more processing
steps of the modification system. For example, the peripheral device 205 may process a
sound sample, in particular classify the sound sample, for contributing to associating
a sound feature of the sound sample with at least one corresponding sound class. Additionally
or alternatively, the peripheral device 205 may be configured to determine an audio
processing routine for audio processing of sounds belonging to the at least one corresponding
sound class in accordance with a modification parameter. It is also possible that
the peripheral device 205 only serves to receive the sound sample and/or a modification
parameter provided to the modification system 203. The remote device 215 may be connected
to the peripheral device 205 by a remote data connection 218.
[0131] Fig. 6 schematically depicts a further embodiment of a hearing device arrangement
301. The hearing device arrangement 301 comprises a hearing device system 302. The
hearing device arrangement 301 further comprises a modification system 303.
[0132] The hearing device system 302 comprises one or more hearing devices as schematically
represented by hearing device 304. The hearing device system 302 comprises one or
more peripheral devices as exemplarily shown by peripheral device 305. The peripheral
device 305 may, e.g., be a mobile device, such as for example a smartphone.
[0133] The modification system 303 is realized by the peripheral device 305 or by components
of the peripheral device 305. For example, the modification system 303 may be realized
by standard hardware components of the peripheral device 305, in particular of a smartphone,
which are used for this purpose by virtue of an applicable piece of modification
system software, for example in form of an app being installable and executable on
the peripheral device 305.
[0134] Fig. 7 schematically depicts a further embodiment of a hearing device arrangement
401. The hearing device arrangement 401 comprises a plurality of hearing device systems,
exemplarily shown by hearing device systems 402a, 402b, 402c. Devices belonging to
the respective hearing device systems 402a, 402b, 402c are contained in a dashed box
schematically representing the respective hearing device systems 402a, 402b, 402c. The
depicted hearing device arrangement 401 comprises three hearing device systems 402a,
402b, 402c. The number of hearing device systems shown is to be understood as purely
exemplary. It is possible that the hearing device arrangement 401 comprises fewer than
three or more than three hearing device systems.
[0135] The hearing device arrangement 401 comprises a modification system 403. The modification
system 403 comprises one or more remote devices as exemplarily shown by remote device
415.
[0136] The modification system 403 is in data connection with the hearing device systems
402a, 402b, 402c via respective remote data connections 418a, 418b, 418c.
[0137] The hearing device systems 402a, 402b, 402c belong to different users. Hearing device
systems 402a, 402b, 402c may be configured differently or identically. For example,
the hearing device systems 402a, 402b, 402c may comprise different devices. By way
of example only, hearing device systems 402a, 402b, 402c are shown to comprise different
devices. Hearing device system 402a comprises a single hearing device 404a and a peripheral
device 405a. Hearing device system 402b comprises a single hearing device 404b. Hearing
device system 402c comprises two hearing devices 404cL, 404cR to be worn on the left
and right ear of a hearing device system user, respectively. Hearing device system
402c further comprises a peripheral device 405c. Of course, other combinations and
configurations of hearing device systems are possible.
[0138] Being connected to a multiplicity of hearing device systems 402a, 402b, 402c, the modification
system 403 can be used to modify the audio processing of the respective hearing devices
404a, 404b, 404cL, 404cR. Sound samples and/or modification parameters provided by
a plurality of users may be used to improve the modification process. In particular,
a classification algorithm, for example at least one classification neural network,
of the modification system 403 may be improved, in particular trained, using data
of a plurality of users. Additionally or alternatively, the sound samples and/or modification
parameters of different users may be used to train the at least one classification
neural network. Further, user data, in particular sound samples and/or modification
parameters, may be used to improve, in particular to train, audio processing routines,
in particular audio processing neural networks, before being implemented on the hearing
devices of the hearing device systems.
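Before turning to the claims, the claimed workflow of classifying a sound sample and then determining a processing routine from the modification parameter can be sketched in strongly simplified form. This is a toy illustration under invented assumptions, not the disclosed implementation: the frequency-threshold classifier merely stands in for the classification neural network (CN), and the rating-to-gain mapping is hypothetical.

```python
# Toy sketch of the claimed workflow. All names, thresholds and mappings
# are hypothetical illustrations, not part of the patent disclosure.

def classify(sound_sample):
    """Stand-in for the classification neural network (CN): maps the
    sample's dominant frequency (Hz) to a coarse sound class (SC)."""
    dominant_hz = max(sound_sample, key=sound_sample.get)
    return "speech" if dominant_hz < 4000 else "noise"

def determine_routine(sound_class, modification_parameter):
    """Derive a per-class gain from the user's rating, e.g.
    -2 = 'attenuate strongly' ... +2 = 'amplify strongly'."""
    gain_db = 3.0 * modification_parameter  # hypothetical mapping
    return {"sound_class": sound_class, "gain_db": gain_db}

# Example: a sample dominated by 6 kHz energy, which the user rates -2.
sample = {1000: 0.1, 6000: 0.9}
routine = determine_routine(classify(sample), -2)
```

In the claimed method, the resulting routine would then be implemented on the hearing device so that subsequent sounds of that sound class are processed in accordance with the user preference.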
1. Method for modifying audio signal processing of a hearing device (4L, 4R; 104; 204;
304; 404a, 404b, 404cL, 404cR) in accordance with user preferences, having the steps:
- providing a modification system (3; 103; 203; 303; 403),
- providing to the modification system (3; 103; 203; 303; 403) a sound sample (S)
comprising a sound feature (SS) for which a user (U) wishes to modify the audio signal
processing of the hearing device (4L, 4R; 104; 204; 304; 404a, 404b, 404cL, 404cR),
and
- providing to the modification system (3; 103; 203; 303; 403) a modification parameter
(M), wherein the modification parameter (M) is selectable by the user (U),
- associating the sound feature (SS) of the sound sample (S) with at least one corresponding
sound class (SC), wherein associating includes classifying the sound sample (S) by
the modification system (3; 103; 203; 303; 403),
- determining by the modification system (3; 103; 203; 303; 403) an audio processing
routine for audio processing of sounds belonging to the at least one corresponding
sound class (SC) in accordance with the modification parameter (M), and
- modifying the audio signal processing of the hearing device (4L, 4R; 104; 204; 304;
404a, 404b, 404cL, 404cR) by implementing the determined audio processing routine
on the hearing device (4L, 4R; 104; 204; 304; 404a, 404b, 404cL, 404cR).
2. Method according to claim 1, wherein the modification parameter (M) includes
- a rating of the sound feature (SS) by the user (U); and/or
- a rating of the at least one corresponding sound class (SC) by the user (U); and/or
- at least one sound class (SC) selected by the user (U) for associating the sound
feature (SS) with the at least one corresponding sound class (SC).
3. Method according to claim 2, wherein the modification parameter (M) is presented to
the user (U) in the form of
- a rating scale (32) of the sound feature (SS); and/or
- a rating scale (32) of the at least one corresponding sound class (SC); and/or
- a selection of potential corresponding sound classes (SC).
4. Method according to any one of claims 1 to 3, wherein providing the sound sample (S)
comprises recording the sound sample (S) upon user initiation.
5. Method according to any one of claims 1 to 4, wherein associating the sound feature
(SS) of the sound sample (S) with the at least one corresponding sound class (SC)
includes classifying the sound feature (SS) of the sound sample (S) by at least one
classification neural network (CN) of the modification system (3; 103; 203; 303; 403).
6. Method according to any one of claims 1 to 5, wherein the determined audio processing
routine comprises at least one audio processing neural network (AN) adapted for processing
sounds of the at least one corresponding sound class (SC), in particular to attenuate,
amplify and/or enhance sounds of the corresponding sound class (SC).
7. Method according to claim 6, wherein the at least one audio processing neural network
(AN) is trained using the sound sample (S) and/or the modification parameter (M).
8. Method according to any one of claims 1 to 7, wherein the modification system (3;
103; 203; 303; 403) comprises a remote server (15; 115; 215; 315; 415) for modifying
audio signal processing of hearing devices (4L, 4R; 104; 204; 304; 404a, 404b, 404cL,
404cR) of a plurality of users (U).
9. Method according to any one of claims 1 to 8, wherein at least two sound samples (S)
of different sound features (SS) and respective modification parameters (M) are provided
to the modification system (3; 103; 203; 303; 403) and wherein the modification system
(3; 103; 203; 303; 403) associates the at least two sound samples (S) with respective
corresponding sound classes (SC) and determines the audio processing routine to audio
process sounds belonging to the corresponding sound classes (SC) in accordance with
the respective modification parameter (M).
10. Method for personalized audio signal processing on a hearing device (4L, 4R; 104;
204; 304; 404a, 404b, 404cL, 404cR), having the steps:
- providing a hearing device (4L, 4R; 104; 204; 304; 404a, 404b, 404cL, 404cR) for
processing audio signals,
- modifying audio signal processing of the hearing device (4L, 4R; 104; 204; 304;
404a, 404b, 404cL, 404cR) according to user preferences as claimed in any one of the
preceding claims, and
- processing audio signals by the modified audio signal processing of the hearing
device (4L, 4R; 104; 204; 304; 404a, 404b, 404cL, 404cR) thereby processing sounds
belonging to the at least one corresponding sound class (SC) in accordance with the
user preferences.
11. Modification system for modifying audio processing of a hearing device (4L, 4R; 104;
204; 304; 404a, 404b, 404cL, 404cR) in accordance with user preferences, comprising
- an input interface (16) for receiving
-- at least one sound sample (S) comprising a sound feature (SS) for which a user
(U) wishes to alter the audio signal processing of the hearing device (4L, 4R; 104;
204; 304; 404a, 404b, 404cL, 404cR) and
-- at least one modification parameter (M),
- a processing unit (19) adapted to process the sound sample (S)
-- for classifying the sound sample (S), thereby contributing to associating the sound
feature (SS) of the sound sample (S) with at least one corresponding sound class (SC),
and
-- for determining an audio processing routine for audio processing of sounds belonging
to the at least one corresponding sound class (SC) in accordance with the at least
one modification parameter (M), and
- an output interface (17) for implementing the determined audio processing routine
on the hearing device (4L, 4R; 104; 204; 304; 404a, 404b, 404cL, 404cR).
12. Modification system according to claim 11, further comprising at least two audio processing
neural networks (AN), wherein the at least two audio processing neural networks (AN)
are adapted to process sounds of different sound classes (SC), in particular to attenuate,
amplify and/or enhance sounds of different sound classes (SC).
13. Modification system according to claim 11 or 12, wherein at least parts
of the modification system (3; 103; 203; 303; 403), in particular the processing unit
(19), are implemented on a remote server (15; 115; 215; 315; 415) to which the hearing
device (4L, 4R; 104; 204; 304; 404a, 404b, 404cL, 404cR) is connectable.
14. Hearing device system, comprising
- a hearing device (4L, 4R; 104; 204; 304; 404a, 404b, 404cL, 404cR) for processing
audio signals,
- a recording device (7, 13) for recording a sound sample (S) comprising a sound feature
(SS) for which a user (U) wishes to modify the audio signal processing of the hearing
device (4L, 4R; 104; 204; 304; 404a, 404b, 404cL, 404cR),
- a user interface (12) for receiving a modification parameter (M),
- a data interface (14)
-- for providing the recorded sound sample (S) and the modification parameter (M)
to a modification system (3; 103; 203; 303; 403) for further processing, and
-- for receiving from the modification system (3; 103; 203; 303; 403) implementation
data (I) for implementing an audio processing routine on the hearing device (4L, 4R;
104; 204; 304; 404a, 404b, 404cL, 404cR).
15. Hearing device arrangement, comprising
- a hearing device system (2; 102; 202; 302; 402a, 402b, 402c) as claimed in claim
14, and
- a modification system (3; 103; 203; 303; 403) as claimed in any one of claims 11
to 13.