TECHNICAL FIELD
[0001] The present disclosure relates to the field of hearing aids. More particularly, the present
disclosure relates to a method for enabling notification intelligibility of a hearing aid having
one or more audio events, and to a related hearing system.
BACKGROUND
[0002] Hearing aids can provide audible signals to a user for notifying the user of a particular
event occurring in the hearing aid. Such audible signals can take the form of spoken
messages (e.g., pre-recorded voice messages) and/or coded messages (e.g., beeps and/or
tonal combinations).
[0003] When it comes to spoken messages, a significant amount of time and effort may be
required to create them and to evaluate their usefulness to a user. The creation
and evaluation process may include generation of audio signals for each spoken message
using a speech synthesis tool, adjustment of auditory settings such as pronunciation,
pitch, and speed of the synthesized speech, and evaluation of the synthesized spoken
messages in collaboration with native speakers of the language and hearing-impaired users.
Such processes may be repeated for each language the hearing aid user may want and/or
for all languages supported by the hearing aid manufacturer.
[0004] Despite such extensive work and careful evaluation, the spoken messages may not meet
the needs and expectations of hearing-impaired users (e.g., in terms of wording, language,
length of the message and/or pronunciation). Likewise, voice prompts generated by
a smartphone speech generator may suffer from the same limitations.
SUMMARY
[0005] Accordingly, there is a need for hearing aids, hearing systems and methods for enabling
notification intelligibility of a hearing aid which may mitigate, alleviate, or address
the existing shortcomings and may provide for a more personalized hearing experience.
A method:
[0006] A method of enabling notification intelligibility of a hearing aid having one or
more audio events is disclosed. The method comprises obtaining user input indicating
at least one of the one or more audio events to be mapped. The method comprises obtaining
an audio signal indicative of a sound in an environment via a microphone. The method
comprises mapping the audio signal to the at least one of the one or more audio events
of the hearing aid. The mapping is capable of being performed throughout a time of
use of the hearing aid.
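By way of example, and not limitation, the following sketch (in Python, with hypothetical identifiers such as NotificationMapper and map_event) illustrates one possible way of holding a mapping between audio events and user-provided audio signals; it is a minimal illustration only and does not represent a required implementation.

  from dataclasses import dataclass, field
  from typing import Dict, Optional

  @dataclass
  class NotificationMapper:
      # audio event identifier -> recorded or imported audio signal (raw samples)
      mapping: Dict[str, bytes] = field(default_factory=dict)

      def map_event(self, event_id: str, audio_signal: bytes) -> None:
          # Map the obtained audio signal to the selected audio event.
          self.mapping[event_id] = audio_signal

      def signal_for(self, event_id: str) -> Optional[bytes]:
          return self.mapping.get(event_id)

  if __name__ == "__main__":
      mapper = NotificationMapper()
      # "low_battery" stands in for a user-selected audio event identifier.
      mapper.map_event("low_battery", b"\x00\x01")  # placeholder recorded samples
      print(mapper.signal_for("low_battery") is not None)  # True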
A hearing system:
[0007] Further, a hearing system comprising a hearing aid and an auxiliary device is disclosed.
The hearing system is configured to perform the method disclosed herein.
[0008] The hearing system may be adapted to establish a communication link between the hearing
aid and the auxiliary device to provide that information (e.g., one or more of: control
signals, status signals, and audio signals) can be exchanged or forwarded from one
to the other.
[0009] The auxiliary device may comprise one or more of: a remote control, a smartphone,
an electronic device, a wearable electronic device, a smartwatch, and any other suitable
auxiliary device.
[0010] The auxiliary device may comprise a remote control for controlling functionality
and operation of the hearing aid(s). The function of a remote control may be implemented
in a smartphone, the smartphone possibly running an APP that allows the user to control the functionality
of the audio processing device via the smartphone (the hearing aid(s) comprising an
appropriate wireless interface to the smartphone, e.g., based on Bluetooth or some
other standardized or proprietary scheme).
[0011] The auxiliary device may comprise an audio gateway device adapted for receiving a
multitude of audio signals (e.g., from an entertainment device, e.g., a TV or a music
player, a telephone apparatus, e.g., a mobile telephone or a computer, e.g., a PC,
a wireless microphone, etc.) and adapted for selecting and/or combining an appropriate
one of the received audio signals (or combination of signals) for transmission to
the hearing aid.
[0012] It is intended that some or all of the structural features of the hearing system
(e.g., of the hearing aid and of the auxiliary device) described above, in the `detailed
description of embodiments' or in the claims can be combined with embodiments of the
method, when appropriately substituted by a corresponding process and vice versa.
Embodiments of the method have the same advantages as the corresponding hearing systems.
[0013] It is an advantage of embodiments of the present disclosure that by obtaining the
audio signal in the form of a spoken message (e.g., a voice message) and mapping the
audio signal to the at least one of the one or more audio events, a more personalized
hearing experience is provided to a user of the hearing aid. In other words, the present
disclosure may enable an increase in individual customization and hearing aid user
satisfaction.
[0014] Embodiments of the present disclosure advantageously can provide means to the user of
the hearing aid for controlling the personal quality of the hearing aid output (e.g.,
without intervention of a Hearing Care Professional (HCP) and/or without being internally
managed by the hearing aid), which can in turn improve the adaptation period of the
user to the hearing aid and/or the overall user experience.
[0015] Embodiments of the present disclosure also can advantageously allow customized voice
communications for providing the user with information associated with one or more
audio events of the hearing aid, e.g., information associated with a status of functionality
of the hearing aid. For example, the present disclosure may enable provision of voice
communications that are adapted and/or tailored to the user's level of understanding
and/or perception.
A computer readable medium or data carrier:
[0016] Further, a tangible computer-readable medium (a data carrier) storing a computer
program comprising program code means (instructions) for causing a data processing
system (a computer) to perform (carry out) at least some (such as a majority or all)
of the (steps of the) method described above, in the `detailed description of embodiments'
and in the claims, when said computer program is executed on the data processing system
is provided.
[0017] By way of example, and not limitation, such computer-readable media can comprise
RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other
magnetic storage devices, or any other medium that can be used to carry or store desired
program code in the form of instructions or data structures and that can be accessed
by a computer. Disk and disc, as used herein, includes compact disc (CD), laser disc,
optical disc, digital versatile disc (DVD), floppy disk and Blu-ray disc where disks
usually reproduce data magnetically, while discs reproduce data optically with lasers.
Other storage media include storage in DNA (e.g., in synthesized DNA strands). Combinations
of the above should also be included within the scope of computer-readable media.
In addition to being stored on a tangible medium, the computer program can also be
transmitted via a transmission medium such as a wired or wireless link or a network,
e.g., the Internet, and loaded into a data processing system for being executed at
a location different from that of the tangible medium.
A computer program:
[0018] Further, a computer program (product) comprising instructions which, when the program
is executed by a computer, cause the computer to carry out (steps of) the method described
above, in the `detailed description of embodiments' and in the claims is provided.
A data processing system:
[0019] Further, a data processing system comprising a processor and program code means for
causing the processor to perform at least some (such as a majority or all) of the
steps of the method described above, in the `detailed description of embodiments'
and in the claims is provided.
An APP:
[0020] Further, a non-transitory application, termed an APP, is disclosed. The APP comprises
executable instructions configured to be executed on an auxiliary device to implement
a user interface for a hearing aid and/or a hearing system described above in the
`detailed description of embodiments', and in the claims. The APP may be configured
to run on a cellular phone, e.g., a smartphone, or on another portable device allowing
communication with said hearing aid or said hearing system.
Definitions:
[0021] In the present context, a hearing aid, e.g., a hearing instrument, refers to a device,
which is adapted to improve, augment and/or protect the hearing capability of a user
by receiving acoustic signals from the user's surroundings, generating corresponding
audio signals, possibly modifying the audio signals, and providing the possibly modified
audio signals as audible signals to at least one of the user's ears. Such audible
signals may e.g., be provided in the form of acoustic signals radiated into the user's
outer ears, acoustic signals transferred as mechanical vibrations to the user's inner
ears through the bone structure of the user's head and/or through parts of the middle
ear as well as electric signals transferred directly or indirectly to the cochlear
nerve of the user.
[0022] The hearing aid may be configured to be worn in any known way, e.g., as a unit arranged
behind the ear with a tube leading radiated acoustic signals into the ear canal or
with an output transducer, e.g., a loudspeaker, arranged close to or in the ear canal,
as a unit entirely or partly arranged in the pinna and/or in the ear canal, as a unit,
e.g., a vibrator, attached to a fixture implanted into the skull bone, as an attachable,
or entirely or partly implanted, unit, etc. The hearing aid may comprise a single
unit or several units communicating (e.g., acoustically, electrically or optically)
with each other. The loudspeaker may be arranged in a housing together with other
components of the hearing aid, or may be an external unit in itself (possibly in combination
with a flexible guiding element, e.g., a dome-like element).
[0023] A hearing aid may be adapted to a particular user's needs, e.g., a hearing impairment.
A configurable signal processing circuit of the hearing aid may be adapted to apply
a frequency and level dependent compressive amplification of an input signal. A customized
frequency and level dependent gain (amplification or compression) may be determined
in a fitting process by a fitting system based on a user's hearing data, e.g., an
audiogram, using a fitting rationale (e.g., adapted to speech). The frequency and
level dependent gain may e.g., be embodied in processing parameters, e.g., uploaded
to the hearing aid via an interface to a programming device (fitting system), and
used by a processing algorithm executed by the configurable signal processing circuit
of the hearing aid.
[0024] A 'hearing system' refers to a system comprising one or two hearing aids, and a `binaural
hearing system' refers to a system comprising two hearing aids and being adapted to
cooperatively provide audible signals to both of the user's ears. Hearing systems
or binaural hearing systems may further comprise one or more 'auxiliary devices',
which communicate with the hearing aid(s) and affect and/or benefit from the function
of the hearing aid(s). Such auxiliary devices may include at least one of a remote
control, a remote microphone, an audio gateway device, an entertainment device, e.g.,
a music player, a wireless communication device, e.g., a mobile phone (such as a smartphone)
or a tablet or another device, e.g., comprising a graphical interface. Hearing aids,
hearing systems or binaural hearing systems may e.g., be used for compensating for
a hearing-impaired person's loss of hearing capability, augmenting, or protecting
a normal-hearing person's hearing capability and/or conveying electronic audio signals
to a person. Hearing aids or hearing systems may e.g., form part of or interact with
public-address systems, active ear protection systems, handsfree telephone systems,
car audio systems, entertainment (e.g., TV, music playing or karaoke) systems, teleconferencing
systems, classroom amplification systems, etc.
The invention is set out in the appended set of claims.
BRIEF DESCRIPTION OF DRAWINGS
[0025] The aspects of the disclosure may be best understood from the following detailed
description taken in conjunction with the accompanying figures. The figures are schematic
and simplified for clarity, and they just show details to improve the understanding
of the claims, while other details are left out. Throughout, the same reference numerals
are used for identical or corresponding parts. The individual features of each aspect
may each be combined with any or all features of the other aspects. These and other
aspects, features and/or technical effects will be apparent from and elucidated with
reference to the illustrations described hereinafter in which:
FIGS. 1A-1B illustrate an example hearing system according to this disclosure; and
FIGS. 2A-2B show a flow chart of an example method of enabling notification intelligibility
of a hearing aid having one or more audio events according to this disclosure.
[0026] The figures are schematic and simplified for clarity, and they just show details
which are essential to the understanding of the disclosure, while other details are
left out. Throughout, the same reference signs are used for identical or corresponding
parts.
[0027] Further scope of applicability of the present disclosure will become apparent from
the detailed description given hereinafter. However, it should be understood that
the detailed description and specific examples, while indicating preferred embodiments
of the disclosure, are given by way of illustration only. Other embodiments may become
apparent to those skilled in the art from the following detailed description.
DETAILED DESCRIPTION OF EMBODIMENTS
[0028] The detailed description set forth below in connection with the appended drawings
is intended as a description of various configurations. The detailed description includes
specific details for the purpose of providing a thorough understanding of various
concepts. However, it will be apparent to those skilled in the art that these concepts
may be practiced without these specific details. Several aspects of the apparatus
and methods are described by various blocks, functional units, modules, components,
circuits, steps, processes, algorithms, etc. (collectively referred to as "elements").
Depending upon the particular application, design constraints or other reasons, these
elements may be implemented using electronic hardware, computer program, or any combination
thereof.
[0029] The electronic hardware may include micro-electronic-mechanical systems (MEMS), integrated
circuits (e.g., application specific), microprocessors, microcontrollers, digital
signal processors (DSPs), field programmable gate arrays (FPGAs), programmable logic
devices (PLDs), gated logic, discrete hardware circuits, printed circuit boards (PCB)
(e.g., flexible PCBs), and other suitable hardware configured to perform the various
functionality described throughout this disclosure, e.g., sensors, e.g., for sensing
and/or registering physical properties of the environment, the device, the user, etc.
Computer program shall be construed broadly to mean instructions, instruction sets,
code, code segments, program code, programs, subprograms, software modules, applications,
software applications, software packages, routines, subroutines, objects, executables,
threads of execution, procedures, functions, etc., whether referred to as software,
firmware, middleware, microcode, hardware description language, or otherwise.
[0030] A method of enabling notification intelligibility of a hearing aid having one or
more audio events is disclosed. The method of enabling notification intelligibility
of a hearing aid having one or more audio events may be performed by a hearing system,
e.g., a hearing aid in communication with an auxiliary device.
[0031] The method comprises obtaining user input indicating at least one of the one or more
audio events to be mapped. In other words, the method may comprise selecting at least
one of the one or more audio events to be mapped based on the user input via a graphical
interface on the auxiliary device.
[0032] The method comprises obtaining an audio signal indicative of a sound in an environment
via a microphone. In one or more example methods, the method comprises importing sound
data from, and/or recording voice data by, a user using the hearing aid and/or any other
user having access to such hearing aid (e.g., a user that is not the wearer of the
hearing aid). For example, the audio signal is spoken and/or recorded speech. The
audio signal may be obtained in the form of a voice message and/or any other sound
(e.g., a horn sound, a bark sound recorded and/or imported by the user). In one or
more example methods, the method comprises obtaining the audio signal from one or
more of: the hearing aid and the auxiliary device. In one or more example methods,
the method comprises storing the audio signal in a memory of the hearing system (e.g.,
in a notification repository of the auxiliary device).
[0033] The method comprises mapping the audio signal to the at least one of the one or more
audio events of the hearing aid. In one or more example methods, the method comprises
mapping manually the audio signal to the at least one of the one or more audio events
of the hearing aid. In other words, the mapping may be performed manually, e.g., upon
obtaining the user input and the audio signal.
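By way of example, and not limitation, such a manual mapping flow may be sketched in Python as follows; the functions get_user_selected_event and record_audio are hypothetical stand-ins for the graphical interface and the microphone, respectively, and the notification repository is a simple in-memory dictionary for illustration only.

  from typing import Dict, List

  def get_user_selected_event() -> str:
      # Stand-in for a selection made via a graphical interface on the auxiliary device.
      return "mute_status_changed"

  def record_audio(duration_s: float = 2.0, sample_rate: int = 16000) -> List[float]:
      # Stand-in for recording via a microphone; returns silent samples here.
      return [0.0] * int(duration_s * sample_rate)

  notification_repository: Dict[str, List[float]] = {}

  def manual_mapping() -> None:
      event_id = get_user_selected_event()               # obtain user input
      audio_signal = record_audio()                      # obtain audio signal via microphone
      notification_repository[event_id] = audio_signal   # map and store the mapping

  manual_mapping()
  print(sorted(notification_repository))  # ['mute_status_changed']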
[0034] Optionally, the method comprises mapping automatically the audio signal to the at
least one of the one or more audio events of the hearing aid. In other words, the
mapping may be performed automatically, e.g., upon obtaining the audio signal. For
example, the method comprises mapping the audio signal to the at least one of the
one or more audio events, e.g., without obtaining the user input indicating the at
least one of the one or more audio events to be mapped. For example, the method comprises
determining, based on the audio signal, the at least one of the one or more audio
events to be mapped. For example, the method comprises providing a request message
requesting an owner of the hearing aid, e.g., a user of the hearing aid, to either
accept or reject the mapping of the audio signal to the at least one of the one or
more audio events. For example, the method comprises, upon providing the request message,
obtaining user input acceptance indicating acceptance of the mapping of the audio
signal to the at least one of the one or more audio events. For example, the method
comprises, upon providing the request message, obtaining user input rejection indicating
rejection of the mapping of the audio signal to the at least one of the one or more
audio events.
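By way of example, and not limitation, the following sketch illustrates an automatic mapping flow in which the audio event is inferred from the audio signal and the mapping is only stored upon user acceptance; classify_event_from_audio and request_user_confirmation are hypothetical stand-ins and do not represent a required implementation.

  from typing import Dict, List

  def classify_event_from_audio(audio_signal: List[float]) -> str:
      # Stand-in for determining, based on the audio signal, the audio event to be mapped.
      return "low_battery"

  def request_user_confirmation(event_id: str) -> bool:
      # Stand-in for the request message asking the user to accept or reject the mapping.
      return True  # acceptance is assumed for this sketch

  def automatic_mapping(audio_signal: List[float], repository: Dict[str, List[float]]) -> bool:
      event_id = classify_event_from_audio(audio_signal)
      if request_user_confirmation(event_id):
          repository[event_id] = audio_signal  # mapping accepted and stored
          return True
      return False                             # mapping rejected by the user

  repo: Dict[str, List[float]] = {}
  print(automatic_mapping([0.0] * 16000, repo), sorted(repo))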
[0035] In one or more example methods, the method comprises mapping the audio signal to
the at least one of the one or more audio events via one or more of: the auxiliary
device and the hearing aid.
[0036] The mapping (e.g., manual mapping and/or automatic mapping) is capable of being performed
throughout a time of (e.g., active or passive) use of the hearing aid. In one or more
example methods, the time of use of the hearing aid is in one or more of: a turn on
state, a charging state, and a turn off state. For example, a hearing aid in a turn
off state can be construed as a hearing aid in an intentional turn off state and/or
unintentional turn off state. A hearing aid in an intentional turn off state may be
a hearing aid that has been (e.g., temporarily) turned off, e.g., to save battery.
A hearing aid in an unintentional turn off state may be a hearing aid not functioning
properly due to user error (e.g., existence of moisture in the components of the hearing
aid) and/or running out of power (e.g., a hearing aid requiring replacement of batteries,
e.g., a hearing aid with non-rechargeable batteries). The mapping of the audio signal
to the at least one of the one or more audio events may be activated upon turning
on and/or switching on the hearing aid (e.g., by a user using the hearing aid and/or
any other user having access to such hearing aid). For example, a hearing aid in a
charging state can be construed as a hearing aid placed in a charging box, e.g., a
battery-powered hearing aid and/or a hearing aid with rechargeable batteries. For
example, a hearing aid in a turn off state and/or a charging state may be seen as
a hearing aid being passively used and/or worn by a user. For example, a hearing aid
in a turn on state can be construed as a hearing aid actively being used and/or worn
by a user. In other words, a hearing aid in a turn on state may be a hearing aid in
an active state, e.g., actively improving, augmenting and/or protecting the hearing
capability of the user.
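By way of example, and not limitation, the turn on, charging and turn off states discussed above may be represented as follows; the sketch merely expresses that the mapping is permitted in any of these states and is not a required implementation.

  from enum import Enum, auto

  class HearingAidState(Enum):
      TURNED_ON = auto()
      CHARGING = auto()
      TURNED_OFF = auto()

  def mapping_allowed(state: HearingAidState) -> bool:
      # The mapping is capable of being performed throughout the time of use,
      # i.e. in any of the states listed here.
      return state in (HearingAidState.TURNED_ON,
                       HearingAidState.CHARGING,
                       HearingAidState.TURNED_OFF)

  assert all(mapping_allowed(state) for state in HearingAidState)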
[0037] For example, the mapping is capable of being performed when the hearing aid is stored
in a box (e.g., a charging box and/or a storing box), and/or temporarily removed from
the user's ear (e.g., placed outside a box), and/or when the user is actively using
the hearing aid. The present disclosure may allow the user of the hearing aid to perform
the mapping at any time of use of the hearing aid. In other words, the user may have
control over the audio signal (e.g., imported and/or recorded sound by the user) and
respective mapping, thereby not depending on an HCP (e.g., and/or on pre-recorded
sounds stored in a memory of the hearing system by the HCP).
[0038] In one or more example methods, the method comprises displaying an interface object
indicative of the at least one of the one or more audio events, e.g., via the auxiliary
device in communication with the hearing aid. In one or more example methods, the
method comprises displaying the at least one of the one or more audio events via
a graphical interface of the auxiliary device.
[0039] In one or more example methods, an audio event is associated with a functionality
of the hearing aid. In one or more example methods, an audio event may be associated
with a status of functionality of the hearing aid and/or a state of the hearing aid
(e.g., an internal state).
[0040] The status of functionality of the hearing aid may comprise one or more of: a mute/unmute
status, a pairing status and/or a connectivity status (e.g., a hearing aid and/or
connectivity devices pairing status, e.g., a TV-BOX connection status), a flight mode
status, a loudness level (e.g., volume) status, a power status (e.g., a battery power
status), a communication status (e.g., a left and right hearing aids communication
status), a program status, a self-check status (e.g., identification of a need to
replace a wax filter and/or to clean a component of the hearing aid and/or to get
assistance from a hearing care professional (HCP)), and any other suitable status.
The status of functionality of the hearing aid may comprise a status on one or more
of: identification of a left hearing aid and a right hearing aid and identification
of an end of a trial period. In one or more example methods, an audio event can be
seen as an event generated based on such status of functionality of the hearing aid,
e.g., based on a status parameter indicative of the status of functionality.
[0041] In one or more example methods, an audio event can be seen as an event on the hearing
aid that may trigger a notification to the user wearing the hearing aid when there
is a change in a corresponding status of functionality and/or corresponding state
of the hearing aid (e.g., when a value associated with a functionality of the hearing
aid is greater than, less than, equal to, greater than or equal to, or less than
or equal to a given functionality threshold). For example, an audio event can be
seen as a change in a status of functionality and/or state of the hearing aid. For
example, an audio event can be associated with a change in the power status of the
hearing aid. The audio event may trigger a notification to the user wearing the hearing
aid when a battery power value of the hearing aid is less than or equal to a power
threshold. The notification triggered may indicate to the user of the hearing aid
that the hearing aid is running out of battery.
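By way of example, and not limitation, such a threshold-triggered audio event may be sketched as follows; the power threshold value and the identifiers are hypothetical and chosen for illustration only.

  POWER_THRESHOLD = 0.15  # 15 % remaining charge; value chosen for illustration only

  def battery_event_triggered(battery_level: float, threshold: float = POWER_THRESHOLD) -> bool:
      # The audio event is triggered when the battery power value is less than or
      # equal to the power threshold.
      return battery_level <= threshold

  def notify(event_id: str, repository: dict) -> str:
      # Output the mapped (user-recorded) signal if present, otherwise a default prompt.
      return repository.get(event_id, "default low-battery prompt")

  repo = {"low_battery": "user-recorded message: charge me soon"}
  if battery_event_triggered(0.12):
      print(notify("low_battery", repo))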
[0042] Optionally, an audio event can be associated with an event reminder functionality
in the hearing aid, e.g., enabled by the auxiliary device. For example, the audio
event may be a calendar event, e.g., for notifying (e.g., reminding) the user of birthdays
and/or appointments of any nature and/or to take medication.
[0043] Embodiments of the present disclosure advantageously can provide for an improved association
between an audio event and a corresponding spoken message used to warn the user wearing
the hearing aid about a change in a corresponding status of functionality and/or corresponding
state of the hearing aid. Embodiments of the present disclosure may enable mapping
an audio signal, such as an audio signal adapted to the user's needs and recorded
and/or imported by the user, to an audio event when, for example, such audio event
has been renamed by an HCP during a fitting procedure (e.g., a fitting procedure prior
to the mapping of the audio signal). For example, renaming such audio event without
modifying a pre-synthesized spoken message mapped to it may mislead the user about
the status of functionality of the hearing aid, thereby impacting the user's perception.
[0044] In one or more example methods, the one or more audio events of the hearing aid are
pre-determined. In one or more example methods, the one or more audio events are stored
in a memory of a hearing system comprising the hearing aid from which the one or more
audio events are retrieved when the mapping is to be performed. In one or more example
methods, the method comprises obtaining the at least one of the one or more audio
events of the hearing aid from a memory of the hearing system comprising the hearing
aid, e.g., a memory of the auxiliary device in communication with the hearing aid
and/or the memory of the hearing aid.
[0045] In one or more example methods, the method comprises mapping the audio signal to
the at least one of the one or more audio events of the hearing aid via a graphical
interface. In one or more example methods, the graphical interface is a graphical
interface of the hearing system comprising the hearing aid. For example, the graphical
interface is a graphical interface on the auxiliary device in communication with the
hearing aid.
[0046] In one or more example methods, the method comprises obtaining user input request
indicative of a request to modify the mapping of the audio signal to the at least
one of the one or more audio events of the hearing aid. In one or more example methods,
the method comprises, upon obtaining the user input request, modifying the mapping
of the audio signal to the at least one of the one or more audio events of the hearing
aid. In one or more example methods, the method comprises, upon obtaining the user
input request, providing an acceptance mapping message (e.g., and/or signal) indicating
acceptance to modify the mapping of the audio signal to the at least one of the one
or more audio events. The method may comprise, upon providing the acceptance mapping
message, modifying the mapping of the audio signal to the at least one of the one or
more audio events. In one or more example methods, the method comprises, upon obtaining
the user input request, providing a rejection mapping message (e.g., and/or signal)
indicating rejection to modify the mapping of the audio signal to the at least one
of the one or more audio events.
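By way of example, and not limitation, the handling of a user input request to modify a mapping, together with the acceptance and rejection mapping messages, may be sketched as follows; the acceptance policy shown (only existing mappings may be modified) is hypothetical.

  def modification_allowed(event_id: str, repository: dict) -> bool:
      # Stand-in policy: in this sketch, only an existing mapping may be modified.
      return event_id in repository

  def handle_modification_request(event_id: str, new_signal, repository: dict) -> str:
      if modification_allowed(event_id, repository):
          repository[event_id] = new_signal    # modify the mapping
          return "acceptance mapping message"
      return "rejection mapping message"

  repo = {"low_battery": "old recording"}
  print(handle_modification_request("low_battery", "new recording", repo))
  print(handle_modification_request("flight_mode", "new recording", repo))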
[0047] It is an advantage of the present disclosure that, by modifying the mapping of the
audio signal to the at least one of the one or more audio events, a more customizable
hearing experience is provided to the user wearing the hearing aid.
[0048] In one or more example methods, modifying the mapping of the audio signal comprises
obtaining a new audio signal indicative of a new sound in an environment via the microphone.
In one or more example methods, modifying the mapping of the audio signal comprises
mapping the new audio signal to the at least one of the one or more audio events of
the hearing aid. In one or more example methods, modifying the mapping of the audio
signal comprises replacing (e.g., overriding) the audio signal by the new audio signal,
e.g., without removing (e.g., disabling and/or deactivating) the mapping of the audio
signal to the at least one of the one or more audio events.
[0049] Optionally, modifying the mapping of the audio signal comprises removing the mapping
of the audio signal to the at least one of the one or more audio events. In one or
more examples, modifying the mapping of the audio signal comprises, upon obtaining
the new audio signal, mapping the new audio signal to the at least one of the one
or more audio events.
[0050] In one or more example methods, the method comprises storing the new audio signal
in the memory of the hearing system (e.g., the memory of the auxiliary device and/or
the hearing aid). In one or more example methods, the method comprises storing the
mapping of the new audio signal to the at least one of the one or more audio events
in the memory of the hearing system (e.g., the memory of the auxiliary device and/or
the hearing aid).
[0051] In one or more example methods, modifying the mapping of the audio signal comprises
removing the mapping of the audio signal to the at least one of the one or more audio
events of the hearing aid. In other words, modifying the mapping of the audio signal
comprises disabling (e.g., deactivating) the mapping of the audio signal to the at
least one of the one or more audio events. In one or more example methods, modifying
the mapping of the audio signal comprises mapping a pre-determined signal to the at
least one of the one or more audio events of the hearing aid. In one or more example
methods, the pre-determined signal can be seen as an audio signal, e.g., a sound,
mapped to the at least one of the one or more audio events by default. In other words,
the pre-determined signal may be an audio signal mapped to the at least one of the
one or more audio events prior to acquiring the hearing aid, e.g., mapped by the manufacturer
of the hearing aid. In one or more example methods, the pre-determined signal is a
signal stored in the memory of the hearing system, e.g., in an encoded and/or compressed
version, from which the pre-determined signal is retrieved when the mapping of the
audio signal to the at least one of the one or more audio events is removed.
[0052] Optionally, the method comprises obtaining the mapping of the pre-determined signal
to the at least one of the one or more audio events from the memory of the hearing
system (e.g., the memory of the auxiliary device and/or the hearing aid). In one or
more example methods, the method comprises activating the mapping of the pre-determined
signal to the at least one of the one or more audio events. Put differently, the method
comprises applying the mapping of the pre-determined signal to the at least one of
the one or more audio events to the hearing aid. In one or more example methods, the
mapping of the pre-determined signal to the at least one of the one or more audio
events is stored in the memory of the hearing system, from which the pre-determined signal
is retrieved when the mapping of the audio signal to the at least one of the one or
more audio events is removed. In one or more example methods, the mapping of the pre-determined
signal to the at least one of the one or more audio events which is stored in the
memory of the hearing aid can be performed by an HCP, e.g., an audiologist during
a fitting procedure of the hearing aid, and/or the manufacturer of the hearing aid,
e.g., prior to current use of the hearing aid.
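By way of example, and not limitation, the replacing, removing and reverting operations described above may be sketched as follows; DEFAULT_SIGNALS stands in for the pre-determined signals mapped by the manufacturer, and all identifiers are hypothetical.

  DEFAULT_SIGNALS = {"low_battery": "factory-synthesized 'battery low' prompt"}

  def replace_mapping(event_id: str, new_signal, repository: dict) -> None:
      repository[event_id] = new_signal        # override the previously mapped audio signal

  def remove_mapping(event_id: str, repository: dict) -> None:
      repository.pop(event_id, None)           # remove (disable) the user mapping
      if event_id in DEFAULT_SIGNALS:
          # revert to the pre-determined signal mapped by the manufacturer
          repository[event_id] = DEFAULT_SIGNALS[event_id]

  repo = {"low_battery": "user recording v1"}
  replace_mapping("low_battery", "user recording v2", repo)
  remove_mapping("low_battery", repo)
  print(repo["low_battery"])  # factory-synthesized 'battery low' prompt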
[0053] In one or more example methods, modifying the mapping of the audio signal comprises
generating an updated version of the audio signal based on the audio signal and one
or more audio parameters. In one or more example methods, an audio parameter of the
one or more audio parameters is indicative of an auditory characteristic (e.g., auditory
functionality and/or auditory feature and/or a hearing setting) of the hearing aid.
In one or more example methods, an auditory characteristic of the hearing aid comprises
one or more of: background noise reduction, speed, pitch, a loudness level, timbre,
rhythm, and any other suitable characteristic. In one or more example methods, the
method comprises, upon obtaining the user input request, providing a modification
acceptance message indicating acceptance to modify the one or more audio parameters.
The method may comprise, upon providing the modification acceptance message, generating
the updated version of the audio signal. In other words, the method may comprise generating
the updated version of the audio signal based upon user interaction (e.g., in response
to the user input request). In one or more example methods, the method comprises,
upon obtaining the user input request, providing a modification rejection message
indicating rejection to modify the one or more audio parameters.
[0054] For example, modifying the one or more audio parameters can be seen as a safe modification,
e.g., a modification that is not harmful to the user wearing the hearing aid. For
example, the one or more audio parameters may be modified within an allowable range
(e.g., a pre-determined range) for preventing unintended harm to the user wearing the
hearing aid. In other words, the present disclosure may allow the user using the hearing
aid to have full control over the one or more audio parameters, such as within the
respective allowable range. Modification of the one or more audio parameters may lead
to improved notification intelligibility.
[0055] In one or more example methods, the method comprises providing the modification rejection
message when detecting a modification that can negatively impact the user wearing the hearing
aid. For example, the rejection message can be provided when such allowable ranges
need modifications (e.g., based on feedback from the user of the hearing aid) which
may require further adjustments in hearing aid and/or in the auxiliary device (e.g.,
adjustments in software and/or hardware). In other words, the rejection message may
be seen as a temporary measure when such further adjustments take more time than expected.
[0056] In one or more example methods, the method comprises storing the updated version
of the audio signal and/or the mapping of the updated version of the audio signal to the
at least one of the one or more audio events in the memory of the hearing system.
The method may comprise storing the updated version of the audio signal by replacing
the audio signal by the updated version of the audio signal, e.g., either after removing
the audio signal from the memory or without removing the audio signal from the memory.
[0057] In one or more example methods, the method comprises providing the modification acceptance
message and/or the modification rejection message perceivable to the user as sound,
e.g., in the form of a notification, such as one or more of: a spoken notification (e.g.,
"change accepted"), a non-spoken notification (e.g., a pre-determined notification
stored in the memory of the hearing system), a tonal notification (e.g., a beep and/or
sound images), and combinations thereof.
[0058] Embodiments of the present disclosure may advantageously enable the user of the hearing
aid to one or more of: record, import, update, and delete an audio signal at any time
of use of the hearing aid.
[0059] In one or more example methods, generating the updated version of the audio signal
comprises varying the one or more audio parameters, e.g., varying a level and/or value
of the one or more audio parameters within an allowable range, thereby modifying the
audio signal for provision of the updated version of the audio signal. In one or more
example methods, the user is allowed to adjust the one or more audio parameters within
such allowable range. The allowable range may be stored in the memory of the hearing
system, e.g., of the auxiliary device. The allowable range may be a pre-determined
range.
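By way of example, and not limitation, the adjustment of the one or more audio parameters within an allowable range may be sketched as follows; the parameter names and ranges are hypothetical, and the actual signal processing (e.g., resampling or pitch shifting) is omitted.

  from typing import Dict, List, Tuple

  ALLOWABLE_RANGES = {            # hypothetical pre-determined allowable ranges
      "speed": (0.8, 1.2),        # playback speed factor
      "pitch_shift": (-2.0, 2.0), # semitones
  }

  def clamp(value: float, low: float, high: float) -> float:
      return max(low, min(high, value))

  def generate_updated_version(audio_signal: List[float],
                               requested: Dict[str, float]) -> Tuple[List[float], Dict[str, float]]:
      applied = {name: clamp(value, *ALLOWABLE_RANGES[name])
                 for name, value in requested.items() if name in ALLOWABLE_RANGES}
      # A real implementation would resample and/or pitch-shift the signal here;
      # this sketch only returns the parameter values actually applied.
      return audio_signal, applied

  _, applied = generate_updated_version([0.0] * 16000, {"speed": 1.5, "pitch_shift": 1.0})
  print(applied)  # speed is clamped to 1.2; pitch_shift is kept at 1.0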
[0060] Embodiments of the present disclosure may provide for improved sound quality and/or
perception of a spoken message by allowing the users of the hearing aid themselves
to adjust the level of the one or more audio parameters. For example, embodiments
of the present disclosure enable the user of the hearing device to modify at least
one of the one or more audio parameters to their own preference (e.g., without a hearing
care professional). For example, embodiments of the present disclosure allow the
user to select a combination of at least two of the one or more audio parameters (e.g.,
levels and/or values) adjusted to their hearing impairment, thereby providing for
a more perceptible audio output signal (e.g., spoken message) to the user of the hearing
aid. In other words, embodiments of the present disclosure may provide the user with
a method for self-adjusting auditory characteristics of the hearing aid, thereby allowing
further customization of the hearing aid output to match their preferences and/or
needs. As a particular example, embodiments of the present disclosure may allow further
customization of the hearing aid output by providing a user using the hearing aid
with means to record an audio signal and subsequently to adjust the one or more audio
parameters for generation of the updated version of the audio signal, such as a signal
more adapted to their hearing preferences and/or impairments.
[0061] In one or more example methods, the method comprises determining an action required
to be carried out by the user wearing (e.g., using) the hearing aid. In one or more
example methods, the method comprises determining the action based on a status of
functionality of the hearing aid, e.g., when there is a change in the status of functionality
of the hearing aid and/or a state of the hearing aid. In one or more example methods,
the action required to be carried out by the user wearing the hearing aid can be seen
as an action in relation to the hearing aid, e.g., an action to be carried out in
response to a change in the status of functionality of the hearing aid. In one or
more example methods, the action required to be carried out by the user wearing the
hearing aid can be seen as an action in relation to an external entity, e.g., an action
to be carried out in response to a calendar reminder. In one or more example methods,
the method comprises outputting an audio output signal perceivable by the user as
sound based on one or more of: the audio signal, the pre-determined signal, and the
action. The audio output signal may indicate a message to the user wearing the hearing
aid. The audio output signal may comprise information associated with the hearing
aid, e.g., about an internal state of the hearing aid (e.g., transmitted to the user
as a "low battery" warning). For example, the audio output signal perceivable by the
user as sound can be seen as a spoken notification and/or spoken reminder to the user
wearing the hearing aid. The outputting of the audio output signal may be triggered
based on a change in the status of functionality of the hearing aid. In one or more
example methods, the method comprises outputting the audio output signal to the hearing
aid via the auxiliary device. For example, the audio output signal comprises information
about a state of the hearing aid and/or calendar dates (e.g., birthdays, appointments,
and/or meetings).
[0062] Optionally, the method comprises outputting the audio output signal based on one
or more of: the audio signal, the pre-determined signal, the action, and user input
confirmation. In one or more example methods, the method comprises providing a request
output message (e.g., a tonal indication, e.g., via beeping) requesting the user of
the hearing aid to either accept or reject the outputting of the audio output signal.
The method may comprise, upon providing the request output message, obtaining user
input acceptance (e.g., by tapping on the hearing aid) indicating acceptance of the
outputting of the audio output signal. The method may comprise, upon obtaining the
user input acceptance, outputting the audio output signal. The method may comprise,
upon providing the request output message, obtaining user input rejection (e.g., by
not tapping on the hearing aid) indicating rejection of the outputting of the audio
output signal. The method may comprise, upon obtaining the user input rejection, foregoing
the outputting the audio output signal. In other words, the method may comprise, upon
obtaining the user input rejection, not outputting the audio output signal.
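By way of example, and not limitation, the confirmation-based outputting of the audio output signal may be sketched as follows; user_tapped_within is a hypothetical stand-in for the tap detection on the hearing aid, and the fallback to a written message corresponds to the option described further below.

  def user_tapped_within(timeout_s: float = 5.0) -> bool:
      # Stand-in for detecting a tap gesture on the hearing aid within a timeout.
      return False

  def output_notification(event_id: str, repository: dict) -> str:
      # A short request tone would be played here; the mapped signal is only output
      # upon acceptance (a tap), otherwise the audio output is forgone.
      if user_tapped_within():
          return "playing: " + repository.get(event_id, "default prompt")
      return "audio output forgone; a written message may be displayed instead"

  print(output_notification("calendar_reminder", {"calendar_reminder": "take medication"}))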
[0063] Embodiments of the present disclosure may advantageously provide a confirmation functionality
in the hearing aid allowing the user of the hearing aid to confirm (e.g., via tapping
and upon hearing a short reminder tonal indication) whether the audio output signal
(e.g., a notification and/or a reminder) can be output via the hearing aid (e.g.,
via a speaker of the hearing aid). The confirmation functionality in the hearing aid
may ensure that a user will not be disturbed by a notification and/or reminder while
in conversation.
[0064] It is an advantage of embodiments of the present disclosure that, by outputting the
audio output signal, the user is notified of an external event and/or a change in
the status of the hearing aid in a manner of their choosing. In other words, embodiments
of the present disclosure may allow for the output of the audio output signal perceivable
by the user as sound in such a way that such sound meets their hearing expectations.
[0065] In one or more example methods, the method comprises outputting a written message
(e.g., text message) indicative of the action to be carried out by the user wearing
the hearing aid in addition to the audio output signal. The written message may be
indicative of a calendar reminder. In one or more example methods, the method comprises
outputting a written message together with the audio output signal. In one or more
example methods, the method comprises outputting (e.g., providing and/or transmitting)
the written message via a graphical interface, e.g., a graphical interface and/or
display on the auxiliary device. In other words, outputting the written message may
comprise displaying the written message in a pop-up window in the graphical interface
of the auxiliary device. Optionally, the method comprises outputting the written message
when foregoing the outputting of the audio output signal (e.g., when not outputting
the audio signal as sound).
[0066] In one or more example methods, the mapping is capable of being performed without
intervention of an audiologist. For example, the mapping is capable of being performed
without intervention of an HCP (e.g., an audio expert).
[0067] A hearing system comprising a hearing aid and an auxiliary device is disclosed. The
hearing system may comprise a processor (e.g., in the auxiliary device and/or hearing
aid), an interface (e.g., a graphical interface in the auxiliary device and/or an
interface in the hearing aid), and a memory (e.g., of the auxiliary device and/or
hearing aid). The interface of the hearing aid may be seen as an input unit and/or
an output unit. The processor may be seen as a signal processor (such as a digital
signal processor).
[0068] The hearing aid may be adapted to provide a frequency dependent gain and/or a level
dependent compression and/or a transposition (with or without frequency compression)
of one or more frequency ranges to one or more other frequency ranges, e.g., to compensate
for a hearing impairment of a user. The hearing aid may comprise a signal processor
for enhancing the input signals and providing a processed output signal.
[0069] The hearing aid may comprise an output unit for providing a stimulus perceived by
the user as an acoustic signal based on a processed electric signal. The output unit
may comprise a number of electrodes of a cochlear implant (for a CI type hearing aid)
or a vibrator of a bone conducting hearing aid. The output unit may comprise an output
transducer. The output transducer may comprise a receiver (loudspeaker) for providing
the stimulus as an acoustic signal to the user (e.g., in an acoustic (air conduction
based) hearing aid). The output transducer may comprise a vibrator for providing the
stimulus as mechanical vibration of a skull bone to the user (e.g., in a bone-attached
or bone-anchored hearing aid). The output unit may (additionally or alternatively)
comprise a (e.g., wireless) transmitter for transmitting sound picked up by the hearing
aid to another device, e.g., a far-end communication partner (e.g., via a network,
e.g., in a telephone mode of operation, or in a headset configuration).
[0070] The hearing aid may comprise an input unit for providing an electric input signal
representing sound. The input unit may comprise an input transducer (e.g., a microphone),
for converting an input sound to an electric input signal. The input unit may comprise
a wireless receiver for receiving a wireless signal comprising or representing sound
and for providing an electric input signal representing said sound.
[0071] The wireless receiver and/or transmitter may e.g., be configured to receive and/or
transmit an electromagnetic signal in the radio frequency range (3 kHz to 300 GHz).
The wireless receiver and/or transmitter may e.g., be configured to receive and/or
transmit an electromagnetic signal in a frequency range of light (e.g., infrared light
300 GHz to 430 THz, or visible light, e.g., 430 THz to 770 THz).
[0072] The hearing aid may comprise antenna and transceiver circuitry allowing a wireless
link to one or more of: an entertainment device (e.g., a TV-set), a communication
device (e.g., a telephone), a wireless microphone, a separate (e.g., external) processing
device, or another hearing aid, etc. The hearing aid may thus be configured to wirelessly
receive a direct electric input signal from another device. Likewise, the hearing
aid may be configured to wirelessly transmit a direct electric output signal to another
device. The direct electric input or output signal may represent or comprise an audio
signal and/or a control signal and/or an information signal.
[0073] In general, a wireless link established by antenna and transceiver circuitry of the
hearing aid can be of any type. The wireless link may be a link based on near-field
communication, e.g., an inductive link based on an inductive coupling between antenna
coils of transmitter and receiver parts. The wireless link may be based on far-field,
electromagnetic radiation. Preferably, frequencies used to establish a communication
link between the hearing aid and the other device are below 70 GHz, e.g. located in
a range from 50 MHz to 70 GHz, e.g. above 300 MHz, e.g. in an ISM range above 300
MHz, e.g. in the 900 MHz range or in the 2.4 GHz range or in the 5.8 GHz range or
in the 60 GHz range (ISM=Industrial, Scientific and Medical, such standardized ranges
being e.g. defined by the International Telecommunication Union, ITU). The wireless
link may be based on a standardized or proprietary technology. The wireless link may
be based on Bluetooth technology (e.g., Bluetooth Low-Energy technology, e.g., LE
audio), or Ultra-Wideband (UWB) technology.
[0074] The hearing system, such as the hearing aid, may be constituted by or form part of
a portable (e.g., configured to be wearable) device, e.g., a device comprising a local
energy source, e.g., a battery, e.g., a rechargeable battery. The hearing aid may
e.g., be a low weight, easily wearable, device, e.g., having a total weight less than
100 g, such as less than 20 g, such as less than 5 g.
[0075] The hearing aid may comprise a 'forward' (or `signal') path for processing an audio
signal between an input and an output of the hearing aid. A signal processor may be
located in the forward path. The signal processor may be adapted to provide a frequency
dependent gain according to a user's particular needs (e.g., hearing impairment).
The hearing aid may comprise an 'analysis' path comprising functional components for
analyzing signals and/or controlling processing of the forward path. Some or all signal
processing of the analysis path and/or the forward path may be conducted in the frequency
domain, in which case the hearing aid comprises appropriate analysis and synthesis
filter banks. Some or all signal processing of the analysis path and/or the forward
path may be conducted in the time domain.
[0076] The hearing aid may comprise an analogue-to-digital (AD) converter to digitize an
analogue input (e.g., from an input transducer, such as a microphone) with a predefined
sampling rate, e.g., 20 kHz. The hearing aid may comprise a digital-to-analogue (DA)
converter to convert a digital signal to an analogue output signal (e.g., for being
presented to a user via an output transducer).
[0077] The hearing system (e.g., hearing aid and/or auxiliary device) may comprise a voice
activity detector (VAD) for estimating whether or not (or with what probability) an
input signal comprises a voice signal (at a given point in time). A voice signal may
in the present context be taken to include a speech signal from a human being. It
may also include other forms of utterances generated by the human speech system (e.g.,
singing). The voice activity detector unit may be adapted to classify a current acoustic
environment of the user as a VOICE or NO-VOICE environment. This has the advantage
that time segments of the electric microphone signal comprising human utterances (e.g.,
speech) in the user's environment can be identified, and thus separated from time
segments only (or mainly) comprising other sound sources (e.g., artificially generated
noise). The voice activity detector may be adapted to detect as a VOICE also the user's
own voice. Alternatively, the voice activity detector may be adapted to exclude a
user's own voice from the detection of a VOICE.
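By way of example, and not limitation, a voice activity decision of the VOICE/NO-VOICE type may be sketched with a simple energy criterion as follows; the frame length and threshold are hypothetical and are not taken from the disclosure, and practical detectors may use more elaborate features.

  from typing import List

  def frame_energy(frame: List[float]) -> float:
      return sum(x * x for x in frame) / max(len(frame), 1)

  def classify_frame(frame: List[float], threshold: float = 1e-4) -> str:
      return "VOICE" if frame_energy(frame) > threshold else "NO-VOICE"

  silent_frame = [0.0] * 320
  loud_frame = [0.05] * 320
  print(classify_frame(silent_frame), classify_frame(loud_frame))  # NO-VOICE VOICE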
[0078] The hearing system (e.g., hearing aid and/or auxiliary device) may comprise an own
voice detector for estimating whether or not (or with what probability) a given input
sound (e.g., a voice, e.g., speech) originates from the voice of the user of the system.
A microphone system of the hearing aid may be adapted to be able to differentiate
between a user's own voice and another person's voice and possibly from NON-voice
sounds.
[0079] The hearing aid may comprise a classification unit configured to classify the current
situation based on input signals from (at least some of) the detectors, and possibly
other inputs as well. In the present context `a current situation' may be taken to
be defined by one or more of:
- a) the physical environment (e.g., including the current electromagnetic environment,
e.g., the occurrence of electromagnetic signals (e.g., comprising audio and/or control
signals) intended or not intended for reception by the hearing aid, or other properties
of the current environment than acoustic);
- b) the current acoustic situation (input level, feedback, etc.);
- c) the current mode or state of the user (movement, temperature, cognitive load, etc.); and
- d) the current mode or state of the hearing aid (program selected, time elapsed since
last user interaction, etc.) and/or of another device in communication with the hearing
aid.
[0080] The classification unit may be based on or comprise a neural network, e.g., a recurrent
neural network, e.g., a trained neural network.
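By way of example, and not limitation, a classification unit combining detector inputs of the kinds a) to d) above may be sketched as follows; the feature names and the simple rule are hypothetical, and a trained (e.g., recurrent) neural network may take the place of the rule.

  from typing import Any, Dict

  def classify_situation(inputs: Dict[str, Any]) -> str:
      # Inputs correspond loosely to the physical/acoustic environment, the user state
      # and the hearing aid state; the decision rule is purely illustrative.
      if inputs.get("input_level_db", 0) > 75 and inputs.get("voice_detected", False):
          return "noisy_speech"
      if inputs.get("program", "") == "music":
          return "music_listening"
      return "quiet"

  print(classify_situation({"input_level_db": 80, "voice_detected": True,
                            "program": "general", "user_moving": False}))  # noisy_speech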
[0081] The hearing aid may comprise a hearing instrument, e.g., a hearing instrument adapted
for being located at the ear or fully or partially in the ear canal of a user, e.g.,
a headset, an earphone, an ear protection device or a combination thereof. The hearing
system may further comprise a speakerphone (comprising a number of input transducers
(e.g., a microphone array) and a number of output transducers, e.g., one or more loudspeakers,
and one or more audio (and possibly video) transmitters e.g., for use in an audio
conference situation), e.g., comprising a beamformer filtering unit, e.g., providing
multiple beamforming capabilities.
[0082] The hearing system is configured to obtain user input indicating at least one of
the one or more audio events to be mapped (e.g., via the hearing aid and/or the auxiliary
device). For example, the hearing system is configured to obtain the user input via
a graphical interface on the auxiliary device and/or via the interface of the hearing
aid (e.g., via the input unit, e.g., by having a user of the hearing aid performing
a gesture on the hearing aid, e.g., by tapping the hearing aid).
[0083] The hearing system is configured to obtain an audio signal indicative of a sound
in an environment (e.g., of the hearing aid) via a microphone (e.g., of the hearing
aid and/or the auxiliary device). In one or more example hearing systems, the hearing
aid comprises an input unit configured to convert the sound to at least one electrical
input signal representative of the sound. For example, the audio signal can be seen
as an electrical input signal representative of the sound. In one or more example
hearing devices, the hearing aid may be configured to wirelessly transmit the audio
signal to the auxiliary device via a wireless transceiver.
[0084] In one or more example hearing systems, the audio signal may be seen as a voice and/or
a non-voice signal. For example, the hearing system (e.g., auxiliary device and/or
hearing aid) may comprise a VAD for estimating whether or not an input audio signal
comprises a voice signal. In one or more example hearing systems, the audio signal
may be seen as an own voice signal, such as a signal originated from the voice of
the user of the hearing system. For example, the hearing system (e.g., auxiliary device
and/or hearing aid) may comprise an own voice detector for estimating whether or not
an input audio signal (e.g., a voice, e.g., speech) originates from the voice of the
user of the hearing system. In one or more example hearing systems, an input signal
may be obtained under non-favorable environments, e.g., noisy environments. The hearing
system (e.g., auxiliary device and/or hearing aid) may comprise noise reduction techniques
to be applied to an input signal.
[0085] In one or more example hearing devices, the acquisition of the audio signal may be
performed via the interface of the auxiliary device, e.g., the graphical interface
of the auxiliary device. For example, the hearing aid is configured to wirelessly
receive (e.g., via the input unit) the audio signal from the auxiliary device. In
other words, the auxiliary device may comprise a wireless transceiver for transmitting/receiving
the audio signal to/from the hearing aid.
[0086] The hearing system is configured to map (e.g., via the interface and/or processor)
the audio signal to the at least one of the one or more audio events of the hearing
aid. In one or more examples, such mapping can be performed via the auxiliary device
and/or the hearing aid. For example, such mapping can be performed via the processor
and/or graphical interface of the auxiliary device. For example, such mapping can
be performed via the processor and/or interface of the hearing aid.
[0087] The hearing system may be configured to map the audio signal to the at least one
of the one or more audio events upon obtaining the audio signal via the microphone.
The hearing system may be configured to display an interface object in the form of
a symbol via the display of the auxiliary device indicating acceptance and/or confirmation
of such mapping. For example, the hearing system is configured to map the audio signal
to the at least one of the one or more audio events via the graphical interface of
the auxiliary device and/or via the interface of the hearing aid (e.g., by having
a user of the hearing aid performing a gesture on the hearing aid, e.g., by tapping
and/or pressing a button on the hearing aid). In one or more example hearing systems,
the mapping (e.g., assignment) of the audio signal to the at least one of the one
or more audio events may be triggered by acquisition of the recorded audio signal
(such as, without requiring acquisition of the user input indicating the at least
one of the one or more audio events to be mapped).
[0088] The mapping is capable of being performed throughout a time of use of the hearing
aid. In one or more example hearing systems, the time of use of the hearing aid is
in one or more of: a turn on state, a charging state, and a turn off state.
[0089] In one or more example hearing systems, the hearing system is configured to display
(e.g., via the interface, such as a graphical user interface (GUI)) an interface object
indicative of the at least one of the one or more audio events. In one or more example
hearing systems, the hearing system is configured to display the interface object
via the auxiliary device, e.g., via the graphical interface comprising a display (e.g.,
a touch sensitive display) displaying the one or more audio events.
[0090] In one or more example hearing systems, an audio event is associated with a functionality
of the hearing aid. In one or more example hearing systems, an audio event can be
associated with a status of functionality of the hearing aid and/or a state of the
hearing aid (e.g., an internal state). In one or more example hearing systems, an
audio event can be seen as an event on the hearing aid that may trigger a notification
to the user wearing the hearing aid when there is a change in a corresponding status
of functionality and/or corresponding state of the hearing aid.
[0091] For example, the hearing aid can comprise a classification unit configured to classify
a current situation of the hearing aid. For example, the classification unit can be
configured to detect a change in the state of functionality of the hearing aid. In
other words, the classification unit may be configured to identify the occurrence
of an audio event in the hearing aid. In one or more example hearing systems, the
hearing aid is configured to transmit, to the auxiliary device, a control signal indicative
of the audio event occurring in the hearing aid, such as the change in the state of
functionality of the hearing aid associated with such audio event.
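As a non-limiting illustration, a classification unit that reports a control signal only when a state of functionality actually changes could be sketched as follows; the state names and the send_control callback are assumptions made for this sketch.

```python
from typing import Callable

class StateWatcher:
    """Hypothetical classification-unit helper: emit a control signal on change."""

    def __init__(self, send_control: Callable[[str, str], None]):
        self._states: dict[str, str] = {}
        self._send_control = send_control

    def update(self, functionality: str, new_state: str) -> None:
        """Record the current state; signal the auxiliary device only on change."""
        if self._states.get(functionality) != new_state:
            self._states[functionality] = new_state
            self._send_control(functionality, new_state)

if __name__ == "__main__":
    watcher = StateWatcher(lambda f, s: print(f"control signal: {f} -> {s}"))
    watcher.update("battery", "ok")
    watcher.update("battery", "ok")    # no change, nothing is sent
    watcher.update("battery", "low")   # change detected, control signal emitted
```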
[0092] In one or more example hearing systems, the one or more audio events of the hearing
aid are pre-determined (e.g., stored in the memory and/or data repository of the hearing
system, e.g., in the memory of the auxiliary device and/or hearing aid).
[0093] In one or more example hearing systems, the hearing system is configured to map the
audio signal to the at least one of the one or more audio events of the hearing aid
via a graphical interface (e.g., via the graphical interface of the auxiliary device).
Optionally, the hearing system is configured to perform such mapping via the interface
of the hearing aid, e.g., by having the user press a button of the hearing aid
and/or tapping the hearing aid.
[0094] In one or more example hearing systems, the hearing system is configured to obtain
(e.g., via the interface) a user input request indicative of a request to modify the
mapping of the audio signal to the at least one of the one or more audio events of
the hearing aid. For example, such user input request may be obtained via the graphical
interface of the auxiliary device and/or the interface of the hearing aid (e.g., input
unit of the hearing aid, e.g., by having a user of the hearing aid perform a gesture
on the hearing aid). In one or more example hearing systems, the hearing system is
configured, upon obtaining the user input request, to modify (e.g., via the processor)
the mapping of the audio signal to the at least one of the one or more audio events
of the hearing aid. For example, such modification may be performed via the processor
of the hearing aid and/or the auxiliary device. For example, such modification may
be performed via the graphical interface of the auxiliary device and/or the interface
of the hearing aid.
[0095] In one or more example hearing systems, the hearing system is configured to modify
the mapping of the audio signal by obtaining (e.g., via the interface) a new audio
signal indicative of a new sound in an environment via the microphone. For example,
the acquisition of the new audio signal can be performed via the graphical interface
of the auxiliary device and/or via the interface of the hearing aid, e.g., via the input unit.
In one or more example hearing systems, the hearing system is configured to modify
the mapping of the audio signal by mapping (e.g., via the interface and/or the processor)
the new audio signal to the at least one of the one or more audio events of the hearing
aid.
[0096] In one or more example hearing systems, the hearing system is configured to modify
the mapping of the audio signal by removing (e.g., via the processor and/or interface)
the mapping of the audio signal to the at least one of the one or more audio events
of the hearing aid. For example, the hearing system can be configured to remove such
mapping via the processor of the hearing aid and/or the auxiliary device. For example,
the hearing system can be configured to remove such mapping via the graphical interface
of the auxiliary device and/or the interface of the hearing aid. In one or more example
hearing systems, the hearing system is configured to modify the mapping of the audio
signal by mapping a pre-determined signal to the at least one of the one or more audio
events of the hearing aid. The hearing system may be configured to activate (e.g.,
via the processor) the mapping of the pre-determined signal to the at least one of
the one or more audio events of the hearing aid, e.g., stored in the memory of the
hearing system, e.g., the memory of the auxiliary device and/or the hearing aid.
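One possible way of combining the modification options above (re-recording, removal, and fall-back to a pre-determined signal) is sketched below for illustration only; all class, method, and file names are assumptions and not part of the disclosure.

```python
class EditableNotificationMap:
    """Hypothetical mapping supporting re-recording, removal, and fall-back."""

    def __init__(self, factory_defaults: dict[str, str]):
        self._defaults = dict(factory_defaults)   # pre-determined signals
        self._custom: dict[str, str] = {}         # user-recorded signals

    def remap(self, event_id: str, new_recording: str) -> None:
        """Replace the current mapping with a newly obtained audio signal."""
        self._custom[event_id] = new_recording

    def remove(self, event_id: str) -> None:
        """Remove the custom mapping; the event reverts to its default signal."""
        self._custom.pop(event_id, None)

    def resolve(self, event_id: str) -> str | None:
        """Signal actually used for an event: custom if present, else default."""
        return self._custom.get(event_id, self._defaults.get(event_id))

if __name__ == "__main__":
    m = EditableNotificationMap({"LOW_BATTERY": "builtin/low_battery_beep.wav"})
    m.remap("LOW_BATTERY", "recordings/low_battery_v2.wav")
    print(m.resolve("LOW_BATTERY"))   # the user recording
    m.remove("LOW_BATTERY")
    print(m.resolve("LOW_BATTERY"))   # falls back to the pre-determined signal
```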
[0097] In one or more example hearing systems, the hearing system is configured to modify
the mapping of the audio signal by generating (e.g., via the processor) an updated
version of the audio signal based on the audio signal and one or more audio parameters.
In one or more example hearing systems, an audio parameter of the one or more audio
parameters is indicative of an auditory characteristic of the hearing aid. For example,
the generation of the updated version of the audio signal may be performed via the
processor of the auxiliary device and/or the hearing aid.
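For illustration only, the sketch below derives an updated version of a clip from two audio parameters, playback speed and loudness; real pitch, speed, and noise-reduction processing would be considerably more involved, and the parameter names are assumptions.

```python
import numpy as np

def update_signal(signal: np.ndarray, speed: float = 1.0, gain_db: float = 0.0) -> np.ndarray:
    """Time-scale by naive resampling (this also shifts pitch) and apply gain."""
    n_out = max(1, int(round(len(signal) / speed)))
    resampled = np.interp(
        np.linspace(0.0, len(signal) - 1, n_out),   # output sample positions
        np.arange(len(signal)),                     # input sample positions
        signal,
    )
    return resampled * (10.0 ** (gain_db / 20.0))   # loudness adjustment

if __name__ == "__main__":
    sample_rate = 16000
    clip = np.sin(2 * np.pi * 440 * np.arange(sample_rate) / sample_rate)
    updated = update_signal(clip, speed=1.25, gain_db=6.0)
    print(len(clip), len(updated))   # the updated version is shorter (faster)
```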
[0098] In one or more example hearing systems, the hearing system is configured to determine
(e.g., via the processor) an action required to be carried out by the user wearing
the hearing aid. In one or more example hearing systems, the determination of the
action may be performed via the processor of the auxiliary device and/or hearing aid.
The hearing system may be configured to determine the action based on the control
signal indicative of an audio event occurring in the hearing aid, such as a change
in the state of functionality of the hearing aid associated with such audio event.
[0099] In one or more example hearing systems, the hearing system is configured to output
(e.g., via the interface) an audio output signal perceivable by the user as sound
based on one or more of: the audio signal, the pre-determined signal, and the action.
In one or more example hearing systems, the hearing aid comprises an output unit configured
to output the audio output signal perceivable by the user as sound. In other words,
the output unit may be configured to output the audio signal that has been mapped
to an audio event occurring in the hearing aid (e.g., when there has been a change in
a status of functionality of the hearing aid). For example, the output unit is configured
to output a spoken message and/or a pre-recorded message perceivable to the user as
sound. In one or more example hearing systems, the audio output signal can be seen
as an analogue output signal.
[0100] The hearing aid may be configured to wirelessly receive (e.g., via the input unit)
the audio output signal from the auxiliary device and output the audio output signal
perceivable by the user as sound.
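As an illustration of the output path, the sketch below plays whichever signal is currently mapped to an occurring audio event and otherwise falls back to a pre-determined signal; the play callback stands in for the output unit, and all names are assumptions.

```python
def notify(event_id: str, custom: dict[str, str], defaults: dict[str, str], play) -> None:
    """Play the signal currently mapped to the event; fall back to the default."""
    clip = custom.get(event_id, defaults.get(event_id))
    if clip is not None:
        play(clip)   # rendered by the output unit as sound perceivable by the user

if __name__ == "__main__":
    defaults = {"LOW_BATTERY": "builtin/low_battery_beep.wav"}
    custom = {"LOW_BATTERY": "recordings/low_battery_own_voice.wav"}
    notify("LOW_BATTERY", custom, defaults, play=lambda clip: print("playing", clip))
```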
[0101] In one or more example hearing systems, the hearing system is configured to output
(e.g., via the interface) a written message indicative of the action to be carried out by the user
wearing the hearing aid in addition to the audio output signal. In one or more example
hearing systems, the hearing system is configured to output the written message via
the auxiliary device, e.g., via the graphical interface and/or display on the auxiliary
device.
[0102] In one or more example hearing systems, the mapping is capable of being performed
without intervention of an audiologist.
[0103] In one or more example hearing systems, the memory of the hearing system (e.g., the
memory of the hearing aid and/or the auxiliary device) is configured to store one
or more of: the pre-determined signal, the audio signal, the new audio signal, and
the one or more audio events. For example, the memory of the hearing system may include spoken
notifications (e.g., imported spoken messages and/or (pre-)recorded messages), and/or
non-spoken notifications (e.g., tonal indications, e.g., beeps), and/or combinations
thereof.
[0104] FIGS. 1A-1B illustrate an example hearing system 22 according to this disclosure.
[0105] FIG. 1A illustrates an example application scenario of the hearing system 22 according
to this disclosure. The scenario comprises a user 24, hearing aids 22A, 22B, and an
auxiliary device 26, the hearing aids 22A, 22B having one or more audio events. The
hearing system 22 comprises one or more of: the hearing aid 22A (e.g., right hearing
aid), the hearing aid 22B (e.g., left hearing aid) and the auxiliary device 26.
[0106] FIG. 1B illustrates an auxiliary device (e.g., auxiliary device 26) running an example
APP for enabling notification intelligibility of a hearing aid (e.g., hearing aid
22A and/or hearing aid 22B) having one or more audio events. FIG. 1B may illustrate
a representation of a graphical interface and/or display of the auxiliary device 26
when the hearing system 22 is configured to perform any of the methods disclosed in
FIGS. 2A-2B.
[0107] The auxiliary device 26 may be configured to communicate with the hearing aids 22A,
22B via a wireless link (e.g., radio access link, radio frequency (RF) link) 40A,
40B respectively. Such wireless links 40A, 40B may be implemented in the hearing aids
22A, 22B by corresponding antenna and transceiver circuitry, e.g., illustrated in
FIG. 1A as 22AA and 22BA, respectively. In one or more examples, the wireless links
40A, 40B are configured to allow an exchange of audio signals and/or audio information
and/or control signals (e.g., including information regarding a mapping of an audio
signal to at least one of the one or more audio events, and/or audio output signals
in response to an action required to be performed by the user 24 and/or acceptance/rejection
signals in response to user interactions) between the hearing aids 22A, 22B and the
auxiliary device 26 (e.g., audio signals 42AA, 42BA).
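The disclosure does not prescribe a wire format for these exchanges; purely as an illustration, a control message could be serialized as sketched below, where the field names and the use of JSON are assumptions.

```python
import json
from dataclasses import asdict, dataclass

@dataclass
class ControlMessage:
    """Hypothetical payload exchanged over the wireless link."""
    source: str      # e.g., "hearing_aid_22A" or "auxiliary_device_26"
    event_id: str    # e.g., "LOW_BATTERY"
    new_state: str   # e.g., "below_threshold"

    def to_bytes(self) -> bytes:
        return json.dumps(asdict(self)).encode("utf-8")

    @staticmethod
    def from_bytes(payload: bytes) -> "ControlMessage":
        return ControlMessage(**json.loads(payload.decode("utf-8")))

if __name__ == "__main__":
    message = ControlMessage("hearing_aid_22A", "LOW_BATTERY", "below_threshold")
    print(ControlMessage.from_bytes(message.to_bytes()))
```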
[0108] In the embodiment of FIG. 1B, the APP is a non-transitory application (APP) comprising
executable instructions configured to be executed on a processor of the auxiliary
device 26 to implement a graphical interface 26A (e.g., a user interface) for the
hearing system 22. In the embodiment of FIG.1B, the APP is configured to run on a
smartphone, or on another portable device allowing communication with the hearing
aid 22A and/or hearing aid 22B. The auxiliary device 26 comprising the graphical interface
26A may be adapted for being held in a hand of a user (e.g., user 24). In one or more
examples, the APP can be configured to run in one or more of: an electronic device,
a wearable electronic device, and a smartwatch.
[0109] The hearing system 22 is configured to obtain user input indicating at least one
of the one or more audio events 16A, 16B, 16C, 16D to be mapped. In the embodiment
of FIGS. 1A-1B, the hearing system 22 is configured to obtain the user input via the
auxiliary device 26. In other words, the user 24 may indicate the at least one of
the one or more audio events 16A, 16B, 16C, 16D to be mapped via a touch gesture,
e.g., by touching at least one of graphical representations of the one or more audio
events 16A, 16B, 16C, 16D. In the embodiment of FIGS. 1A-1B, the hearing system 22
is configured to obtain the user input indicating that the audio event 16C is to be
mapped to the audio signal.
[0110] In one or more examples, the hearing system 22 is configured to obtain a request
from the user 24 to record and/or import the audio signal to the hearing system 22
(e.g., to the auxiliary device 26). For example, the user 24 may record the audio
signal using the auxiliary device. The recording of the audio signal may be illustrated
by graphical representation 12. The audio signal may be stored in a memory of the
auxiliary device 26. In one or more examples, the hearing system 22 is configured
to obtain the request from the user 24 upon user interaction, e.g., by a touch gesture
on a graphical representation of a recording button 10 in the graphical interface
26A of the auxiliary device 26. Optionally, the hearing system 22 is configured to
obtain the request from the user 24 via a pressure gesture, e.g., by having the user
24 press a button on the hearing aid 22A and/or 22B (e.g., button not shown in
FIGS. 1A-1B). The hearing system 22 is configured to obtain an audio signal indicative
of a sound in an environment via a microphone (e.g., a microphone in hearing aid 22A
and/or hearing aid 22B and/or auxiliary device 26). The hearing system 22 may be configured
to obtain the audio signal upon accepting the request from the user 24.
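For illustration only, the recording flow triggered by the recording button 10 could resemble the sketch below, which captures a few seconds of audio from an abstracted microphone source and stores it for later mapping; the capture API is platform-dependent and is therefore represented by a callable, and all names are assumptions.

```python
import time
from pathlib import Path

def record_clip(read_block, seconds: float, out_path: Path) -> Path:
    """Collect raw audio blocks from read_block() for `seconds`, then persist."""
    deadline = time.monotonic() + seconds
    captured = bytearray()
    while time.monotonic() < deadline:
        captured.extend(read_block())
    out_path.parent.mkdir(parents=True, exist_ok=True)
    out_path.write_bytes(bytes(captured))
    return out_path

if __name__ == "__main__":
    fake_microphone = lambda: b"\x00\x01" * 160   # stand-in for a real capture API
    path = record_clip(fake_microphone, 0.05, Path("recordings/clip.raw"))
    print(path, path.stat().st_size, "bytes")
```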
[0111] Optionally, the hearing system 22 is configured to obtain the user input indicating
the at least one of the one or more audio events 16A, 16B, 16C, 16D to be mapped to
the audio signal after obtaining the audio signal.
[0112] In one or more examples, the hearing system 22 is configured to map the audio signal
to the at least one of the one or more audio events 16A, 16B, 16C, 16D of the hearing
aid 22A, 22B. In the embodiment of FIGS. 1A-1B, the hearing system 22 is configured
to map the audio signal to the audio event 16C, as illustrated by symbol 17. In other
words, symbol 17 may illustrate the actual mapping of the audio signal with the audio
event 16C. For example, the user 24 assigns the audio signal to the audio event 16C.
[0113] The mapping is capable of being performed throughout a time of use of the hearing
aid 22A and/or hearing aid 22B. In one or more examples, the time of use of the hearing
aid 22A and/or hearing aid 22B is in one or more of: a turn on state, a charging state,
and a turn off state.
[0114] A screen of an example graphical interface 26A of the auxiliary device 26 is illustrated
in FIG. 1B. The graphical interface 26A comprises a display, e.g., a touch sensitive
display, displaying the one or more audio events 16A, 16B, 16C, 16D of the hearing
aid 22A and/or hearing aid 22B for enabling the user 24 to map an audio signal to
at least one of the one or more audio events 16A, 16B, 16C, 16D. In one or more examples,
the hearing system 22 is configured to map the audio signal to the at least one of
the one or more audio events 16A, 16B, 16C, 16D of the hearing aid 22A, 22B via the
graphical interface 26A. For example, the user 24 can select the at least one of the
one or more audio events 16A, 16B, 16C, 16D of the hearing aid 22A, 22B via the graphical
interface 26A to be assigned to the audio signal.
[0115] In one or more examples, the hearing system 22 is configured to display an interface
object indicative of the at least one of the one or more audio events 16A, 16B, 16C,
16D. In one or more examples, the hearing system 22 is configured to display the interface
object via the auxiliary device 26. In one or more examples, the hearing system 22
is configured to display the one or more audio events 16A, 16B, 16C, 16D via the graphical
interface 26A of the auxiliary device 26.
[0116] In one or more examples, an audio event (e.g., audio events 16A, 16B, 16C, 16D) is
associated with a functionality of the hearing aid 22A and/or hearing aid 22B. In
one or more examples, the one or more audio events 16A, 16B, 16C, 16D of the hearing
aid 22A and/or hearing aid 22B are pre-determined (e.g., stored in a memory and/or
data repository of the auxiliary device 26).
[0117] In one or more examples, the hearing system 22 is configured to obtain a user input
request indicative of a request to modify the mapping of the audio signal to the at
least one of the one or more audio events 16A, 16B, 16C, 16D of the hearing aid 22A
and/or hearing aid 22B. In one or more examples, the hearing system 22 is configured
to, upon obtaining the user input request, determine whether the user 24 is allowed
to perform such modification. In one or more examples, the hearing system 22 is configured
to obtain the user input request when detecting a touch gesture on a graphical representation
of a button 14 (e.g., a "Save as" button) in the graphical interface 26A of the auxiliary
device 26.
[0118] In one or more examples, the hearing system 22 is configured to, upon obtaining the
user input request, provide an acceptance mapping message indicating acceptance to
modify the mapping of the audio signal to the at least one of the one
or more audio events 16A, 16B, 16C, 16D. In one or more examples, the hearing system
22 is configured to, upon providing the acceptance mapping message, modify the mapping
of the audio signal to the at least one of the one or more audio events 16A, 16B,
16C, 16D of the hearing aid 22A and/or hearing aid 22B. Optionally, the hearing system
22 may be configured to modify the mapping of the audio signal to the at least one
of the one or more audio events 16A, 16B, 16C, 16D of the hearing aid 22A and/or hearing
aid 22B without obtaining the user input request. The hearing system 22 may be configured
to provide the acceptance mapping message perceivable to the user 24 as sound (e.g.,
in form of a spoken message and/or a tonal indication, e.g., a beep).
[0119] In one or more example methods, the hearing system 22 is configured to, upon obtaining
the user input request, provide a rejection mapping message indicating rejection
to modify the mapping of the audio signal to the at least one of the one or more audio
events 16A, 16B, 16C, 16D. The hearing system 22 may be configured to provide the
rejection mapping message perceivable to the user 24 as sound (e.g., in form of a
spoken message and/or a tonal indication, e.g., a beep). The hearing system 22 may
be configured to, upon providing the rejection mapping message, maintain the mapping
of the audio signal to the at least one of the one or more audio events 16A, 16B,
16C, 16D.
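The accept/reject behavior described above could, for illustration only, be organized as in the following sketch, where the permission check and the announce callback (e.g., a spoken message or a beep) are assumptions made for the sketch.

```python
def handle_modification_request(mapping: dict, event_id: str, new_clip: str,
                                user_allowed: bool, announce) -> None:
    """Apply the modification and confirm it, or reject it and keep the mapping."""
    if user_allowed:
        mapping[event_id] = new_clip
        announce("acceptance: mapping modified")   # e.g., spoken message or beep
    else:
        announce("rejection: mapping kept")        # existing mapping is maintained

if __name__ == "__main__":
    mapping = {"LOW_BATTERY": "recordings/v1.wav"}
    handle_modification_request(mapping, "LOW_BATTERY", "recordings/v2.wav",
                                user_allowed=False, announce=print)
    print(mapping)   # unchanged, because the request was rejected
```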
[0120] In one or more example methods, the hearing system 22 is configured to obtain new
user input indicating an audio event (e.g., of the one or more audio events 16A, 16B,
16C, 16D) to be mapped. In one or more examples, the hearing system 22 is configured
to modify the mapping of the audio signal by obtaining a new audio signal indicative
of a new sound in an environment via the microphone. Put differently, the user 24
may select an audio event to be mapped to the new audio signal, e.g., by touching
one of the graphical representations of the one or more audio events 16A, 16B, 16C,
16D. The user 24 may select an audio event of the one or more audio events 16A, 16B,
16C, 16D to be mapped to the new audio signal after obtaining the new audio signal.
The user 24 may select an audio event of the one or more audio events 16A, 16B, 16C,
16D to be mapped to the new audio signal before obtaining the new audio signal.
[0121] In one or more examples, the hearing system 22 is configured to modify the mapping
of the audio signal by mapping the new audio signal to the at least one of the one
or more audio events 16A, 16B, 16C, 16D of the hearing aid (e.g., without removing
the mapping of the audio signal).
[0122] For example, the hearing system 22 is configured to modify the mapping of the audio
signal to the audio event 16C, e.g., previously recorded by the user 24 and/or stored
in the memory of the auxiliary device 26. The mapping of the audio signal to the audio
event 16C may be stored in the memory of the auxiliary device 26. In one or more examples,
the hearing system 22 is configured to modify the mapping of the audio signal to the
audio event 16C by mapping the new audio signal to the audio event 16C. In one or
more examples, the hearing system 22 is configured to modify the mapping of the audio
signal to the audio event 16C by mapping the audio signal to at least one of the audio
events 16A, 16B, 16D.
[0123] In one or more examples, the hearing system 22 is configured to modify the mapping
of the audio signal by removing the mapping of the audio signal to the at least one
of the one or more audio events 16A, 16B, 16C, 16D of the hearing aid 22A and/or hearing
aid 22B. In one or more example methods, the hearing system 22 is configured to modify
the mapping of the audio signal by mapping a pre-determined signal to the at least
one of the one or more audio events 16A, 16B, 16C, 16D of the hearing aid 22A and/or
hearing aid 22B. In one or more example methods, the hearing system 22 is configured
to modify the mapping of the audio signal by retrieving from a memory of the hearing
system 22 (e.g., the memory of the auxiliary device 26) the mapping of the pre-determined
signal to the at least one of the one or more audio events 16A, 16B, 16C, 16D.
[0124] For example, the hearing system 22 is configured to modify the mapping of the audio
signal to the audio event 16C by removing the mapping of the audio signal to the audio
event 16C. In other words, the hearing system 22 may be configured to remove the mapping
of the audio signal to the audio event 16C by disabling and/or deactivating such mapping.
Optionally, the hearing system 22 may be configured to remove the mapping of the audio
signal to the audio event 16C by deleting the audio signal from the memory of the
hearing system 22, e.g., from the memory of the auxiliary device 26.
[0125] In one or more examples, the hearing system 22 is configured to modify the mapping
of the audio signal by generating an updated version of the audio signal based on
the audio signal and one or more audio parameters 14A, 14B, 14C. In one or more examples,
an audio parameter of the one or more audio parameters 14A, 14B, 14C is indicative
of an auditory characteristic of the hearing aid 22A and/or hearing aid 22B. An auditory
characteristic of the hearing aid 22A and/or hearing aid 22B may comprise one or more
of: background noise reduction, speed, pitch, a loudness level, timbre, rhythm, and
any other suitable characteristic.
[0126] The audio parameters 14A, 14B, 14C may be indicative of auditory characteristics
such as pitch, speed, and noise reduction (NR) respectively. In one or more examples,
each audio parameter is associated with a modification range (e.g., modification range
15A, 15B, 15C).
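Purely as an illustration, the modification ranges 15A, 15B, 15C could be represented as clamped parameter ranges as sketched below; the concrete limits are assumptions and not part of the disclosure.

```python
from dataclasses import dataclass

@dataclass
class ParamRange:
    """A modification range limiting how far a slider value may be moved."""
    minimum: float
    maximum: float

    def clamp(self, value: float) -> float:
        return min(self.maximum, max(self.minimum, value))

# Hypothetical limits for the three illustrated parameters.
RANGES = {
    "pitch_semitones": ParamRange(-4.0, 4.0),
    "speed_factor": ParamRange(0.5, 2.0),
    "noise_reduction_db": ParamRange(0.0, 12.0),
}

if __name__ == "__main__":
    print(RANGES["speed_factor"].clamp(3.0))   # a slider cannot exceed its range -> 2.0
```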
[0127] In one or more examples, the hearing system 22 is configured to modify the mapping
of the audio signal (e.g., and/or the new audio signal) upon user interaction. In
other words, the hearing system 22 may be configured to detect a change in the value
of the one or more audio parameters 14A, 14B, 14C.
[0128] For example, the hearing system 22 is configured to detect a change in the value
of the one or more audio parameters 14A, 14B, 14C when detecting a touch gesture on
a modification range 15A, 15B, 15C in the graphical interface 26A of the auxiliary
device 26. The hearing system 22 may be configured to detect a change in the value
of the one or more audio parameters 14A, 14B, 14C when the user 24 moves object 15AA,
15BA, 15CA towards a value and/or level desired by the user 24 within the modification
range 15A, 15B, 15C respectively.
[0129] In one or more examples, the hearing system 22 is configured to obtain the user input
request, e.g., when detecting a touch gesture on the graphical representation of a
button 14 (e.g., a "Save as" button) in the graphical interface 26A of the auxiliary
device 26. The hearing system 22 may be configured to obtain the user input request
(e.g., as a touch gesture in the button 14), after detecting a change in the value
of the one or more audio parameters 14A, 14B, 14C (e.g., when detecting a touch gesture
on at least one of the modification ranges 15A, 15B, 15C in the graphical interface
26A of the auxiliary device 26). In one or more examples, the hearing system 22 is
configured to, upon obtaining the user input request, provide a modification acceptance
message indicating acceptance to modify the one or more audio parameters. In one or
more example methods, the hearing system 22 is configured to, upon obtaining the user
input request, provide a modification rejection message indicating rejection to
modify the one or more audio parameters. The hearing system 22 may be configured to,
upon providing the modification rejection message, not generate the updated
version of the audio signal (e.g., and/or of the new audio signal).
[0130] For example, the hearing system 22 can be configured to modify the audio signal mapped
to the audio event 16C by modifying at least one of the one or more audio parameters
14A, 14B, 14C.
[0131] For example, the hearing system 22 can be configured to modify the mapping of the
audio signal to the audio event 16C by mapping the new audio signal to the audio event
16C and modifying at least one of the one or more audio parameters 14A, 14B, 14C associated
with the new audio signal. The user 24 may request validation of such modification
by touching the graphical representation of the button 14. The hearing system 22 may
approve or reject such modification.
[0132] In one or more examples, button 14 may enable validation or rejection of one or
more modifications. Such one or more modifications may include one or more of: removal
of the audio signal from the mapping, mapping of a new audio signal to an audio event
already mapped to another audio signal, mapping of the audio signal to another audio
event, and modification of one or more audio parameters associated with a hearing
perception of the user 24 (e.g., for easing perception of the audio signal and/or
the new audio signal).
[0133] In one or more examples, the hearing system 22 is configured to determine an action
required to be carried out by the user wearing the hearing aid. For example, the hearing
system 22 is configured to determine the action based on a status of functionality
of the hearing aid 22A and/or the hearing aid 22B, e.g., when there is a change in
the status of functionality of the hearing aid 22A and/or hearing aid 22B. In the
embodiment of FIGS. 1A-1B, the hearing system 22 may be configured to determine a
change in the battery level of the hearing aid 22A and/or the hearing aid 22B. In
other words, the hearing system may be configured to determine that the battery level
of the hearing aid 22A and/or hearing aid 22B is below a power threshold, e.g., that
the hearing aid 22A and/or hearing aid 22B are running out of battery.
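For illustration only, the determination of the action for the low-battery example could be as simple as the sketch below; the threshold value and the wording of the action are assumptions made for the sketch.

```python
POWER_THRESHOLD = 0.15   # hypothetical: 15 % battery remaining

def determine_action(battery_level: float) -> str | None:
    """Return the action required from the user, or None if none is needed."""
    if battery_level < POWER_THRESHOLD:
        return "Please recharge or replace the hearing aid battery."
    return None

if __name__ == "__main__":
    print(determine_action(0.10))   # action required
    print(determine_action(0.80))   # None, battery level is fine
```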
[0134] In one or more examples, the hearing system 22 is configured to output an audio output
signal perceivable by the user 24 as sound based on one or more of: the audio signal,
the pre-determined signal, and the action. The hearing system 22 may be configured
to output the audio output signal based on the new audio signal and the action when
modifying the mapping of the audio signal to at least one of the one or more audio
events 16A, 16B, 16C, 16D. The hearing system 22 may be configured to output the audio
output signal based on the updated version of the audio signal and the action when
modifying the one or more audio parameters. The hearing system 22 may be configured
to output the audio output signal based on the pre-determined signal and the action
when removing the audio signal from the memory of the hearing system (e.g., the memory
of the auxiliary device 26). In one or more examples, the hearing system 22 is configured
to output the audio output signal perceivable by the user 24 as sound via the hearing
aid 22A and/or hearing aid 22B. In one or more examples, the audio output signal perceivable
by the user 24 as sound can be seen as a recorded message in the auxiliary device
26 and/or an imported message to the auxiliary device 26. Put differently, the audio
output signal perceivable by the user 24 as sound may be seen as a spoken notification.
[0135] In one or more examples, the hearing system 22 is configured to output a written
message 18 indicative of the action to be carried out by the user wearing the hearing
aid in addition to the audio output signal. In the embodiment of FIGS. 1A-1B, the
hearing system 22 is configured to output the written message 18 for notifying the
user 24 of the low battery level of the hearing aid 22A and/or hearing aid 22B.
[0136] In one or more examples, the mapping is capable of being performed without intervention
of an audiologist. In one or more examples, the mapping is performed by the user 24.
The present disclosure may enable the user 24 to adjust the audio signal and/or to
modify the mapping of the audio signal to an audio event of the one or more audio
events to his or her own preference, thereby improving notification perception. In other
words, the user 24 may easily (e.g., without any effort) understand that there has
been a change in the level of battery (e.g., a change in the status of the battery
of the hearing aid 22A and/or hearing aid 22B).
[0137] The hearing system 22 may be configured to perform any of the methods disclosed in
FIGS. 2A-2B.
[0138] The hearing system 22 (e.g., the auxiliary device 26 and/or hearing aid 22A, 22B)
may comprise a processor (e.g., a signal processing unit), the processor being optionally
configured to perform any of the operations disclosed in FIGS. 2A-2B (such as any
one or more of: S101, S102, S104, S106, S106A, S108, S110, S110A, S110B, S110C, S110D,
S110E, S112, S114, S116). The operations of the hearing system 22 (e.g., the auxiliary
device 26 and/or hearing aid 22A, 22B) may be embodied in the form of executable logic
routines (e.g., lines of code, software programs, etc.) that are stored on a non-transitory
computer readable medium (for example, a memory) and are executed by such processor.
[0139] The hearing system 22 (e.g., the auxiliary device 26 and/or hearing aid 22A, 22B)
may comprise a memory, the memory being one or more of a buffer, a flash memory, a
hard drive, a removable media, a volatile memory, a non-volatile memory, a random-access
memory (RAM), and any other suitable device. In a typical arrangement, such memory
may include a non-volatile memory for long term data storage and a volatile memory
that functions as system memory for the processor. Such memory may exchange data with
the processor over a data bus. Such memory may be a non-transitory computer readable
medium.
[0140] Such memory may be configured to store information such as one or more audio events,
at least one of the one or more audio events to be mapped, an audio signal, a mapping
of the audio signal to at least one of the one or more audio events, a mapping of
a new audio signal to the at least one of the one or more audio events, a pre-determined
signal, a mapping of the pre-determined signal to at least one of the one or more
audio events, an updated version of the audio signal, an action required to be carried
out by a user, and the audio output signal, in a part of the memory.
[0141] FIGS. 2A-2B are a flow-chart of an example method 100 of enabling notification intelligibility
of a hearing aid having one or more audio events according to this disclosure.
[0142] The method 100 comprises obtaining S102 user input indicating at least one of the
one or more audio events to be mapped. The method 100 comprises obtaining S104 an
audio signal indicative of a sound in an environment via a microphone. The method
100 comprises mapping S106 the audio signal to the at least one of the one or more
audio events of the hearing aid. The mapping is capable of being performed throughout
a time of use of the hearing aid.
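Purely as an illustration, the core flow S102, S104, S106 of method 100 can be strung together as sketched below; the helper callables stand in for the interface, microphone, and memory operations and are assumptions made for this sketch.

```python
def method_100(get_user_event_choice, record_audio, store_mapping):
    """End-to-end sketch of S102 -> S104 -> S106 with injected helpers."""
    event_id = get_user_event_choice()      # S102: obtain user input
    audio_signal = record_audio()           # S104: obtain audio signal via microphone
    store_mapping(event_id, audio_signal)   # S106: map the signal to the audio event
    return event_id, audio_signal

if __name__ == "__main__":
    mapping = {}
    method_100(lambda: "LOW_BATTERY",
               lambda: "recordings/low_battery_own_voice.wav",
               mapping.__setitem__)
    print(mapping)
```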
[0143] In one or more example methods, the time of use of the hearing aid is in one or more
of: a turn on state, a charging state, and a turn off state.
[0144] In one or more example methods, the method 100 comprises displaying S101 an interface
object indicative of the at least one of the one or more audio events.
[0145] In one or more example methods, an audio event is associated with a functionality
of the hearing aid.
[0146] In one or more example methods, the one or more audio events of the hearing aid are
pre-determined.
[0147] In one or more example methods, the method 100 comprises mapping S106A the audio
signal to the at least one of the one or more audio events of the hearing aid via
a graphical interface.
[0148] In one or more example methods, the method 100 comprises obtaining S108 a user input
request indicative of a request to modify the mapping of the audio signal to the at
least one of the one or more audio events of the hearing aid. In one or more example
methods, the method 100 comprises, upon obtaining S108 the user input request, modifying
S110 the mapping of the audio signal to the at least one of the one or more audio
events of the hearing aid.
[0149] In one or more example methods, modifying S110 the mapping of the audio signal comprises
obtaining S110A a new audio signal indicative of a new sound in an environment via
the microphone. In one or more example methods, modifying S110 the mapping of the
audio signal comprises mapping S110B the new audio signal to the at least one of the
one or more audio events of the hearing aid.
[0150] In one or more example methods, modifying S110 the mapping of the audio signal comprises
removing S110C the mapping of the audio signal to the at least one of the one or more
audio events of the hearing aid. In one or more example methods, modifying S110 the
mapping of the audio signal comprises mapping S110D a pre-determined signal to the
at least one of the one or more audio events of the hearing aid.
[0151] In one or more example methods, modifying S110 the mapping of the audio signal comprises
generating S110E an updated version of the audio signal based on the audio signal
and one or more audio parameters. In one or more example methods, an audio parameter
of the one or more audio parameters is indicative of an auditory characteristic of
the hearing aid.
[0152] In one or more example methods, the method 100 comprises determining S112 an action
required to be carried out by the user wearing the hearing aid. In one or more example
methods, the method 100 comprises outputting S114 an audio output signal perceivable
by the user as sound based on one or more of: the audio signal, the pre-determined
signal, and the action.
[0153] In one or more example methods, the method 100 comprises outputting S116 a written
message indicative of the action to be carried out by the user wearing the hearing
aid in addition to the audio output signal.
[0154] In one or more example methods, the mapping is capable of being performed without
intervention of an audiologist.
[0155] It is intended that the structural features of the devices described above, either
in the detailed description and/or in the claims, may be combined with steps of the
method, when appropriately substituted by a corresponding process.
[0156] As used, the singular forms "a," "an," and "the" are intended to include the plural
forms as well (i.e., to have the meaning "at least one"), unless expressly stated
otherwise. It will be further understood that the terms "includes," "comprises," "including,"
and/or "comprising," when used in this specification, specify the presence of stated
features, integers, steps, operations, elements, and/or components, but do not preclude
the presence or addition of one or more other features, integers, steps, operations,
elements, components, and/or groups thereof. It will also be understood that when
an element is referred to as being "connected" or "coupled" to another element, it
can be directly connected or coupled to the other element, but an intervening element
may also be present, unless expressly stated otherwise. Furthermore, "connected" or
"coupled" as used herein may include wirelessly connected or coupled. As used herein,
the term "and/or" includes any and all combinations of one or more of the associated
listed items. The steps of any disclosed method are not limited to the exact order
stated herein, unless expressly stated otherwise.
[0157] It should be appreciated that reference throughout this specification to "one embodiment"
or "an embodiment" or "an aspect" or features included as "may" means that a particular
feature, structure or characteristic described in connection with the embodiment is
included in at least one embodiment of the disclosure. Furthermore, the particular
features, structures or characteristics may be combined as suitable in one or more
embodiments of the disclosure. The previous description is provided to enable any
person skilled in the art to practice the various aspects described herein. Various
modifications to these aspects will be readily apparent to those skilled in the art.
[0158] The claims are not intended to be limited to the aspects shown herein but are to
be accorded the full scope consistent with the language of the claims, wherein reference
to an element in the singular is not intended to mean "one and only one" unless specifically
so stated, but rather "one or more". Unless specifically stated otherwise, the term
"some" refers to one or more.