[0001] A new hearing aid is provided with improved localization of a monaural signal source.
[0002] Hearing impaired individuals often experience at least two distinct problems:
- 1) A hearing loss, which is an increase in hearing threshold level, and
- 2) A loss of ability to understand speech in noise in comparison with normal hearing individuals.
For most hearing impaired patients, the performance in speech-in-noise intelligibility
tests is worse than for normal hearing people, even when the audibility of the incoming
sounds is restored by amplification. The speech reception threshold (SRT) is a performance
measure for the loss of ability to understand speech, and is defined as the signal-to-noise
ratio required in a presented signal to achieve 50 percent correct word recognition
in a hearing-in-noise test.
[0003] In order to compensate for hearing loss, today's digital hearing aids typically use
multi-channel amplification and compression signal processing to restore audibility
of sound for a hearing impaired individual. In this way, the patient's hearing ability
is improved by making previously inaudible speech cues audible.
[0004] However, loss of ability to understand speech in noise, including speech in an environment
with multiple speakers, remains a significant problem of most hearing aid users.
[0005] One tool available to a hearing aid user in order to increase the signal to noise
ratio of speech originating from a specific speaker, is to equip the speaker in question
with a microphone, often referred to as a spouse microphone, that picks up speech
from the speaker in question with a high signal to noise ratio due to its proximity
to the speaker. The spouse microphone converts the speech into a corresponding audio
signal with a high signal to noise ratio and transmits the signal, preferably wirelessly,
to the hearing aid for hearing loss compensation. In this way, a speech signal is
provided to the user with a signal to noise ratio well above the SRT of the user in
question.
[0006] Another way of increasing the signal to noise ratio of speech from a speaker that
a hearing aid user desires to listen to, such as a speaker addressing a number of
people in a public place, e.g. in a church, an auditorium, a theatre, a cinema, etc.,
or through a public address system, such as in a railway station, an airport, a shopping
mall, etc., is to use a telecoil to magnetically pick up audio signals generated,
e.g., by telephones, FM systems (with neck loops), and induction loop systems (also
called "hearing loops"). In this way, sound may be transmitted to hearing aids with
a high signal to noise ratio well above the SRT of the hearing aid users.
[0007] In all of the above-mentioned examples a monaural audio signal is transmitted wirelessly
to the hearing aid.
[0008] Hearing aids, and in particular binaural hearing aid systems, typically reproduce
sound in such a way that the user perceives sound sources to be localized inside the
head. The sound is said to be internalized rather than being externalized. A common
complaint for hearing aid users when referring to the "hearing speech in noise problem"
is that it is very hard to follow anything that is being said even though the signal
to noise ratio (SNR) should be sufficient to provide the required speech intelligibility.
A significant contributor to this fact is that the hearing aid reproduces an internalized
sound field. This adds to the cognitive loading of the hearing aid user and may result
in listening fatigue and ultimately that the user removes the hearing aid(s).
[0009] Thus, there is a need for a new hearing aid with improved localization of sound sources
emitting sound signals that are transmitted wirelessly as monaural sound signals to
a user, i.e. there is a need for a new hearing aid capable of adding spatial cues
to a monaural sound signal corresponding to a direction and possibly distance of a
sound source from which the monaural signal originates, with relation to the orientation
of the head of a user of the hearing aid.
[0010] With improved localization, different sound sources will typically be perceived to
be positioned in different spatial positions in the sound environment of the user.
In this way, the user's auditory system's binaural signal processing is utilized to
improve the user's capability of separating signals from different sound sources and
of focussing his or her listening to a desired one of the sound sources, or even to
simultaneously listen to and understand more than one of the sound sources.
[0011] Human beings detect and localize sound sources in three-dimensional space by means
of the human binaural sound localization capability.
[0012] The input to the human auditory system consists of two signals, namely the sound
pressures at each of the eardrums, in the following termed the binaural sound signals. Thus, if sound
pressures at the eardrums that would have been generated by a given spatial sound
field are accurately reproduced at the eardrums, the human auditory system will not
be able to distinguish the reproduced sound from the actual sound generated by the
spatial sound field itself.
[0013] The transmission of a sound wave from a sound source positioned at a given direction
and distance in relation to the left and right ears of the listener is described in
terms of two transfer functions, one for the left ear and one for the right ear, that
include any linear distortion, such as coloration, interaural time differences and
interaural spectral differences. Such a set of two transfer functions, one for the
left ear and one for the right ear, is called a Head Related Transfer Function (HRTF).
Each transfer function of the HRTF is defined as the ratio between a sound pressure
p generated by a plane wave at a specific point in or close to the appertaining ear
canal (p_L in the left ear canal and p_R in the right ear canal) and a reference. The
reference traditionally chosen is the sound pressure p_I that would have been generated
by a plane wave at a position right in the middle of the head with the listener absent.
[0014] The HRTF contains all information relating to the sound transmission to the ears
of the listener, including diffraction around the head, reflections from shoulders,
reflections in the ear canal, etc., and therefore, the HRTF varies from individual
to individual.
[0015] In the following, one of the transfer functions of the HRTF will also be termed the
HRTF for convenience.
[0016] The HRTF changes with direction and distance of the sound source in relation to the
ears of the listener. It is possible to measure the HRTF for any direction and distance
and simulate the HRTF, e.g. electronically, e.g. by filters. If such filters are inserted
in the signal path between an audio signal source, such as a microphone, and headphones
used by a listener, the listener will achieve the perception that the sounds generated
by the headphones originate from a sound source positioned at the distance and in
the direction as defined by the transfer functions of the filters simulating the HRTF
in question, because of the true reproduction of the sound pressures in the ears.
[0017] Binaural processing by the brain, when interpreting the spatially encoded information,
results in several positive effects, namely better signal source segregation, direction
of arrival (DOA) estimation, and depth/distance perception.
[0018] It is not fully known how the human auditory system extracts information about distance
and direction to a sound source, but it is known that the human auditory system uses
a number of cues in this determination. Among the cues are spectral cues, reverberation
cues, interaural time differences (ITD), interaural phase differences (IPD) and interaural
level differences (ILD).
[0019] The most important cues in binaural processing are the interaural time differences
(ITD) and the interaural level differences (ILD). The ITD results from the difference
in distance from the source to the two ears. This cue is primarily useful up to
approximately 1.5 kHz; above this frequency the auditory system can no longer resolve
the ITD cue.
[0020] The level difference is a result of diffraction and is determined by the relative
position of the ears compared to the source. This cue is dominant above 2 kHz but
the auditory system is equally sensitive to changes in ILD over the entire spectrum.
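By way of illustration only, the magnitude of the ITD cue for a given direction of arrival may be estimated with Woodworth's classical spherical-head approximation; the head radius and speed of sound below are assumed typical values and do not form part of the present disclosure:

```python
import math

def woodworth_itd(azimuth_deg, head_radius_m=0.0875, c=343.0):
    """Approximate interaural time difference (seconds) for a rigid
    spherical head (Woodworth's formula); azimuth 0 deg is straight
    ahead, 90 deg is fully lateral."""
    theta = math.radians(azimuth_deg)
    return (head_radius_m / c) * (theta + math.sin(theta))

# A fully lateral source yields the maximum ITD of roughly 0.65 ms,
# while a frontal source yields no ITD at all.
max_itd = woodworth_itd(90.0)
frontal_itd = woodworth_itd(0.0)
```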
[0021] It has been argued that hearing impaired subjects benefit the most from the ITD cue
since the hearing loss tends to be less severe in the lower frequencies.
[0022] A new method of processing a monaural audio signal in a hearing aid is provided,
wherein a monaural audio signal originating from a sound source, such as a monaural
signal received from a spouse microphone, a loudspeaker, a hearing loop system, a
teleconference system, a radio, a TV, a telephone, a device with an alarm, etc., is
filtered in such a way that the user perceives the received monaural audio signal
to be emitted by the sound source positioned in its current position and/or arriving
from a direction towards its current position.
[0023] The perceived externalization and perceived spatial positioning of the sound source
assists the user in understanding speech from the sound source, and in focussing the
user's listening on the sound source, if desired.
[0024] For example, in a binaural hearing aid, a binaural filter may be configured to output
signals based on the monaural audio signal and intended for the right ear and left
ear of the user of the binaural hearing aid system, wherein the output signals are
phase shifted with a phase shift with relation to each other in order to introduce
an interaural time difference based on and corresponding to the position of the sound
source from which the monaural audio signal originates, whereby the perceived position
of the corresponding sound source is shifted outside the head and laterally with relation
to the orientation of the head of the user of the binaural hearing aid system.
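A minimal sketch of such an ITD-introducing phase shift, realized here as a whole-sample delay of the far-ear channel, may look as follows; the sampling rate, function name and sign convention are illustrative assumptions:

```python
import numpy as np

def apply_itd(mono, itd_seconds, fs=16000):
    """Derive left/right output signals from a monaural signal by
    delaying the far-ear channel; a positive ITD places the source
    on the left, so the right-ear signal lags."""
    delay = int(round(abs(itd_seconds) * fs))          # delay in whole samples
    lagged = np.concatenate([np.zeros(delay), mono])[:len(mono)]
    if itd_seconds >= 0:
        return mono.copy(), lagged                     # left leads, right lags
    return lagged, mono.copy()

fs = 16000
mono = np.random.default_rng(0).standard_normal(fs // 10)  # 100 ms test signal
left, right = apply_itd(mono, 0.0007, fs)                  # ~0.7 ms ITD
```

A practical implementation would use fractional-delay filtering for ITDs that do not fall on whole samples.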
[0025] In a monaural hearing aid, a filter may be configured to output a signal based on
the monaural audio signal and intended for the right ear or left ear of the user of
the monaural hearing aid, wherein the output signal is phase shifted with relation
to the monaural signal in order to introduce an interaural time difference with respect
to the naturally received sound at the other ear of the user, corresponding to the
position of the sound source from which the monaural audio signal originates, whereby
the perceived position of the corresponding sound source is shifted outside the head
and laterally with relation to the orientation of the head of the user of the monaural
hearing aid.
[0026] Alternatively, or additionally, in the binaural hearing aid, the binaural filter
may be configured to output signals based on the monaural audio signal and intended
for the right ear and left ear, respectively, of the user of the binaural hearing
aid system, wherein the output signals are equal to the monaural audio signal multiplied
with a right gain and a left gain, respectively; in order to obtain an interaural
level difference based on and corresponding to the position of the sound source from
which the monaural audio signal originates, whereby the perceived position of the
corresponding sound source is shifted laterally with relation to the orientation of
the head of the user of the binaural hearing aid system.
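The corresponding ILD-introducing gain pair may be sketched as follows; the symmetric split of the level difference between the two ears is an illustrative assumption, not part of the disclosure:

```python
import numpy as np

def apply_ild(mono, ild_db):
    """Derive left/right output signals from a monaural signal by
    multiplying with a right gain and a left gain that realize the
    given interaural level difference in dB (positive ILD: left ear
    louder).  The level difference is split symmetrically."""
    half = 10.0 ** (ild_db / 40.0)
    return mono * half, mono / half

left, right = apply_ild(np.ones(8), 6.0)   # left ends up 6 dB above right
```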
[0027] In the monaural hearing aid, the filter may be configured to output a signal based
on the monaural audio signal and intended for the right or left ear of the user of
the monaural hearing aid, wherein the output signal is equal to the monaural audio
signal multiplied with a right gain or a left gain, respectively, in order to obtain
an interaural level difference with respect to the naturally received sound at the
other ear of the user, based on and corresponding to the position of the sound source
from which the monaural audio signal originates, whereby the perceived position of
the corresponding sound source is shifted laterally with relation to the orientation
of the head of the user of the monaural hearing aid.
[0028] For example, in the binaural hearing aid, the binaural filter may have a selected
HRTF of the direction and distance towards the sound source from which the monaural
signal originates so that the user perceives the received monaural audio signal to
be emitted by the sound source at its current position with relation to the user.
[0029] In the monaural hearing aid, the filter may have the right ear part or the left ear
part of the HRTF of the direction and distance towards the sound source from which
the monaural signal originates so that the user perceives the received monaural audio
signal to be emitted by the sound source at its current position with relation to
the user, since the other part of the HRTF is naturally performed by the other ear.
[0030] In accordance with the new method, the monaural audio signal may be filtered with
approximations to respective HRTFs. For example, HRTFs may be determined using a manikin,
such as KEMAR. In this way, an approximation to the individual HRTFs is provided that
can be of sufficient accuracy for the hearing aid user to maintain sense of direction
when wearing the hearing aid. Sufficient accuracy is obtained when a user perceives
a sensation of direction towards a sound source from which the monaural audio signal
originates; or, a user perceives localization of the sound source. For example, based
on the monaural signal, the user may receive acoustic signals at his or her eardrums
with an interaural time difference and/or an interaural level difference sufficient
for the perceived position of the sound source from which the monaural signal originates,
to be shifted outside the head and laterally with relation to the orientation of the
head of the user of the binaural hearing aid system, preferably into a perceived position
corresponding to the actual position of the sound source, e.g. laterally within ±
45° of the actual position.
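Filtering with an HRTF approximation amounts to convolving the monaural signal with a left and a right head-related impulse response (HRIR), e.g. measured on a manikin such as KEMAR. The following sketch uses a toy HRIR pair for illustration; in practice the HRIRs would be measured data:

```python
import numpy as np

def spatialize(mono, hrir_left, hrir_right):
    """Filter a monaural signal with a left/right head-related impulse
    response pair; the binaural output then carries the ITD, ILD and
    spectral cues of the direction the HRIRs were measured for."""
    left = np.convolve(mono, hrir_left)[:len(mono)]
    right = np.convolve(mono, hrir_right)[:len(mono)]
    return left, right

# Toy HRIR pair: the right-ear response is delayed (ITD) and attenuated (ILD).
hrir_l = np.array([1.0])
hrir_r = np.zeros(12)
hrir_r[11] = 0.5
left, right = spatialize(np.random.default_rng(0).standard_normal(1000),
                         hrir_l, hrir_r)
```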
[0031] A panel of listeners may assess the perceived sense of direction in a listening test,
e.g. a three-alternative-forced-choice test.
[0032] The filtering of the monaural audio signal performed by the filter may be determined
based on a signal provided by one microphone, or a combination of microphones, located
in position(s) with relation to a user of the hearing aid, wherein spatial cues of
sounds arriving at these position(s) are substantially the same as the spatial cues
of sound that would have been received at the user's eardrum with the hearing aid
absent. A microphone may for example be positioned in the outer ear of the user in
front of the pinna, for example at the entrance to the ear canal; or, inside the ear
canal, in which positions spatial cues of sounds are substantially identical to the
corresponding spatial cues of sounds arriving at the ear drum with the hearing aid
absent, to a much larger extent than what is possible with e.g. the microphone behind
the ear as with a conventional BTE hearing aid. A position below the triangular fossa
has also proven advantageous with relation to preservation of spatial cues.
[0033] Thus, a new hearing aid is provided in which a monaural signal that does not originate
from a microphone accommodated in a hearing aid housing, but rather originates from
another sound source external to the hearing aid housing, such as
a spouse microphone, a media player, a hearing loop system, a teleconference system,
a radio, a TV, a telephone, a device with an alarm, etc., is filtered with a filter
in such a way that a user can locate the position of the sound source from which the
monaural signal originates.
[0034] The new hearing aid may comprise
an electronic input for provision of a monaural audio signal received at the electronic
input, the monaural audio signal representing sound output by a sound source located
in a position with relation to a user of the hearing aid,
an ITE microphone housing accommodating at least one ITE microphone and configured
to be positioned in the outer ear of the user for fastening and retaining the at
least one ITE microphone in its operating position, the at least one ITE microphone
being configured to provide an output signal,
a filter for filtering the monaural audio signal and configured to output an output
signal, wherein the filter is configured to
phase shift the monaural audio signal based on the output signal of the at least one
ITE microphone,
apply a gain to the monaural audio signal based on the output signal of the at least
one ITE microphone, or
phase shift and apply the gain to the monaural audio signal based on the output signal
of the at least one ITE microphone.
[0035] The at least one ITE microphone may be constituted by one ITE microphone.
[0036] The hearing aid may form part of a binaural hearing aid system.
[0037] The hearing aid may further have a processor configured to generate a hearing loss
compensated output signal based on the output signal of the filter.
[0038] The hearing aid may further have a receiver for conversion of the hearing loss compensated
output signal into an acoustic signal for transmission towards an eardrum of the user
of the hearing aid.
[0039] The processor may control the filter based on an output signal of the at least one
ITE microphone in such a way that at least one spatial cue contained in the acoustic
sound received by the at least one ITE microphone and indicating the position of the
sound source from which the monaural audio signal originates, is transferred to the
monaural audio signal and included in the output signal of the filter.
[0040] In this way, the at least one ITE microphone is utilized to obtain spatial cues relating
to the sound source from which the monaural audio signal originates, and the filter
is utilized to transfer at least one of the spatial cues relating to the position of
the sound source, to the monaural audio signal. For example, the acoustic speech of
a person speaking into a spouse microphone, or a hearing loop system, providing the
monaural audio signal, is also received by the at least one ITE microphone, typically
with a relatively low signal-to-noise ratio, but nevertheless including at least one
spatial cue relating to the position of the person.
[0041] The processor may be configured to calculate a cross-correlation between the monaural
audio signal and an output signal of the at least one ITE microphone and to determine
the phase shift based on the calculated cross-correlation.
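Such a cross-correlation based delay estimate may, under simplifying assumptions (a single dominant source, negligible reverberation), be sketched as:

```python
import numpy as np

def estimate_delay(monaural, ite_mic):
    """Estimate the delay (in samples) of the ITE microphone signal
    relative to the monaural audio signal from the location of the
    peak of their cross-correlation."""
    xcorr = np.correlate(ite_mic, monaural, mode="full")
    return int(np.argmax(xcorr)) - (len(monaural) - 1)

# The ITE microphone signal is the monaural signal delayed by 40 samples:
mono = np.random.default_rng(0).standard_normal(16000)
mic = np.concatenate([np.zeros(40), mono])[:len(mono)]
lag = estimate_delay(mono, mic)   # → 40
```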
[0042] The filter may be a digital filter having an input that is configured for reception
of the monaural audio signal, and filter coefficients that are adapted to reduce a
difference between the output signal of the at least one ITE microphone and an output
signal of the filter.
[0043] For example, the filter coefficients may be adapted towards a solution of:

min over G(f, t) of: Σ_f W(f) · |S_IEC(f, t) − G(f, t) · S(f, t)|^p

wherein
S_IEC(f, t) is a short time spectrum at time t of the output signal of the at least one ITE microphone,
S(f, t) is a short time spectrum at time t of the monaural audio signal,
G(f, t) is the transfer function of the processing filter,
p is a norm factor, and
W(f) is a frequency weighting factor, e.g. in one embodiment W(f) = 1.
[0044] The algorithm controlling the adaptation may, for example, be based on least mean
squares (LMS) or recursive least squares (RLS), possibly normalized, optimization
methods in which p = 2.
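A minimal normalized LMS (p = 2) sketch of this adaptation, with an FIR filter acting on the monaural signal and the ITE microphone signal as the desired signal, might look as follows; the tap count and step size are illustrative assumptions:

```python
import numpy as np

def nlms(monaural, ite_mic, n_taps=8, mu=0.5, eps=1e-8):
    """Normalized LMS: adapt FIR coefficients so that the filtered
    monaural signal approaches the ITE microphone signal."""
    w = np.zeros(n_taps)
    x_buf = np.zeros(n_taps)                 # most recent sample first
    for x, d in zip(monaural, ite_mic):
        x_buf = np.roll(x_buf, 1)
        x_buf[0] = x
        e = d - w @ x_buf                    # error: mic minus filter output
        w += mu * e * x_buf / (x_buf @ x_buf + eps)
    return w

# With a known 'acoustic' response h, the coefficients converge toward h:
rng = np.random.default_rng(1)
x = rng.standard_normal(20000)
h = np.array([0.9, 0.0, -0.4, 0.2])
d = np.convolve(x, h)[:len(x)]
w = nlms(x, d)
```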
[0045] Various weights may be incorporated into the minimization problem above so that
the solution is optimized as specified by the values of the weights. For example,
the frequency weights W(f) may optimize the solution in one or more selected frequency
ranges.
[0046] The filter may be prevented from further adapting when the filter coefficient values
have ceased changing significantly.
[0047] Further, in one or more selected frequency ranges, only the magnitude of the transfer
functions may be taken into account during minimization while the phase is disregarded,
i.e. in the one or more selected frequency ranges, the transfer function is substituted
by its absolute value.
[0048] The processor may be configured for
determination of signal magnitudes of an output signal of the at least one ITE microphone
at a plurality of frequencies, and
determination of signal magnitudes of the monaural audio signal at the plurality of
frequencies, and
determination of gain values of the filter at respective frequencies of the plurality
of frequencies based on the determined signal magnitudes.
[0049] Signal magnitudes at the plurality of frequencies may be determined as absolute values
of the Fourier transformed signal, or as rms-values, absolute values, amplitude values,
etc., of the signal, appropriately bandpass filtered and averaged, etc.
[0050] The monaural audio signal may be processed so that differences in signal magnitudes
between the monaural audio signal and the output signal of the at least one ITE microphone
are reduced. The processing may be performed in a selected frequency range, or in
a plurality of selected frequency ranges, or in the entire frequency range in which
the hearing aid circuitry is capable of operating.
[0051] For example, in the selected frequency range(s), spectrum analysis is performed whereby
the absolute value B(f) as a function of frequency of the monaural audio signal and
the absolute value A(f) as a function of frequency of the output signal of the at
least one ITE microphone are determined. Then, multiplier gain values G(f) as a function
of frequency are determined G(f) = A(f)/B(f), and the multiplier with the determined
gain values G(f) is inserted in the signal path of the monaural audio signal.
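The magnitude-matching step described above may be sketched as follows; the FFT length and the small regularization constant are illustrative assumptions:

```python
import numpy as np

def magnitude_match_gains(monaural, ite_mic, n_fft=256, eps=1e-12):
    """Per-frequency multiplier gains G(f) = A(f)/B(f), with A(f) the
    magnitude spectrum of the ITE microphone signal and B(f) that of
    the monaural audio signal."""
    A = np.abs(np.fft.rfft(ite_mic, n_fft))
    B = np.abs(np.fft.rfft(monaural, n_fft))
    return A / (B + eps)

# Applying G(f) in the frequency domain matches the monaural magnitudes
# to those observed at the ITE microphone:
n = np.arange(256)
mono = np.sin(2 * np.pi * 1000 * n / 16000)   # 1 kHz tone, exactly on an FFT bin
mic = 0.5 * mono                              # microphone picks it up 6 dB weaker
G = magnitude_match_gains(mono, mic)
matched = np.fft.irfft(G * np.fft.rfft(mono, 256), 256)
```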
[0052] In general, determined gain values at the plurality of frequencies may be converted
to corresponding filter coefficients of a linear phase filter inserted into the signal
path of the monaural audio signal; or, the gain values may be applied directly to
the monaural audio signal in the frequency domain.
[0053] In general, determined gain values may be compared to the respective maximum stable
gain values at each of the plurality of frequencies, and gain values that are larger
than the respective maximum stable gain values may be substituted by the respective
maximum stable gain value, possibly minus a margin, to avoid risk of feedback.
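This gain limiting may be sketched as a per-frequency clamp; the 3 dB safety margin below is an assumed value:

```python
import numpy as np

def limit_gains(gains, max_stable_gains, margin_db=3.0):
    """Clamp per-frequency gain values to the maximum stable gain
    minus a safety margin (in dB) to avoid the risk of feedback."""
    margin = 10.0 ** (-margin_db / 20.0)
    return np.minimum(gains, max_stable_gains * margin)

gains = np.array([1.0, 4.0, 10.0])
msg = np.array([8.0, 8.0, 8.0])
limited = limit_gains(gains, msg)   # only the 10.0 entry is reduced
```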
[0054] The new hearing aid may be a BTE hearing aid of the type disclosed in
EP 2 611 218 A1.
[0055] Thus, the new hearing aid may further comprise a BTE hearing aid housing to be worn
behind the pinna of a user and accommodating at least one BTE sound input transducer,
such as an omni-directional microphone, a directional microphone, a transducer for
an implantable hearing aid, etc., for conversion of a sound signal into respective
audio sound signals, and
a processor configured to generate a hearing loss compensated output signal based
on the audio sound signals, an output signal of the at least one ITE microphone, and
the monaural audio signal.
[0056] The new hearing aid may further comprise a sound signal transmission member for transmission
of a signal representing the hearing loss compensated output signal from a sound output
of the BTE hearing aid housing at a first end of the sound signal transmission member
to the ear canal of the user at a second end of the sound signal transmission member,
and
an earpiece configured to be inserted in the ear canal of the user for fastening and
retaining the sound signal transmission member in its intended position in the ear
canal of the user.
[0057] The ITE microphone housing accommodating at least one ITE microphone may be combined
with, or be constituted by, the earpiece so that the at least one microphone is positioned
proximate the entrance to the ear canal when the earpiece is fastened in its intended
position in the ear canal.
[0058] The ITE microphone housing may be connected to the earpiece with an arm, possibly
a flexible arm that is intended to be positioned inside the pinna, e.g. around the
circumference of the conchae abutting the antihelix and at least partly covered by
the antihelix for retaining its position inside the outer ear of the user. The arm
may be pre-formed during manufacture, preferably into an arched shape with a curvature
slightly larger than the curvature of the antihelix, for easy fitting of the arm into
its intended position in the pinna. In one example, the arm has a length and a shape
that facilitate positioning of the at least one ITE microphone in an operating position
immediately below the triangular fossa.
[0059] The processor may be accommodated in the BTE hearing aid housing, or in the ear piece,
or part of the processor may be accommodated in the BTE hearing aid housing and part
of the processor may be accommodated in the ear piece. There is a one-way or two-way
communication link between circuitry of the BTE hearing aid housing and circuitry
of the earpiece. The link may be wired or wireless.
[0060] Likewise, there is a one-way or two-way communication link between circuitry of the
BTE hearing aid housing and the at least one ITE microphone. The link may be wired
or wireless.
[0061] The new hearing aid may be a multi-channel hearing aid in which signals to be processed
are divided into a plurality of frequency channels, and wherein signals, including
the monaural audio signal, are processed individually in each of the frequency channels.
[0062] The processor may be configured for processing the output signals of the at least
one ITE microphone and the monaural audio signal in such a way that the hearing loss
compensated output signal substantially preserves spatial cues of the output signals
of the at least one ITE microphone in a selected frequency band.
[0063] Throughout the present disclosure, spatial cues are said to be substantially preserved
when a user perceives a sensation of direction towards a sound source from which the
monaural audio signal originates; or, a user perceives localization of the sound source.
For example, based on the monaural signal, the user may receive acoustic signals at
his or her eardrums with an interaural time difference and/or an interaural level
difference sufficient for the perceived position of the sound source from which the
monaural signal originates, to be shifted outside the head and laterally with relation
to the orientation of the head of the user of the binaural hearing aid system, preferably
into a perceived position corresponding to the actual position of the sound source,
e.g. laterally within ± 45° of the actual position.
[0064] A panel of listeners may assess the preservation of spatial cues in a listening test,
e.g. a three-alternative-forced-choice test.
[0065] The selected frequency band may comprise one or more of the frequency channels, or
all of the frequency channels. The selected frequency band may be fragmented, i.e.
the selected frequency band need not comprise consecutive frequency channels.
[0066] The plurality of frequency channels may include warped frequency channels, for example
all of the frequency channels may be warped frequency channels.
[0067] When the user does not listen to the monaural audio signal, the at least one ITE
microphone may be connected conventionally in the hearing aid circuitry as is well-known
in the art of hearing aids.
[0068] Throughout the present disclosure, the "output signals of the at least one ITE microphone"
may be used to identify any analogue or digital signal forming part of the signal
path from the output of the at least one ITE microphone to an input of the processor,
including pre-processed output signals of the at least one ITE microphone.
[0069] Likewise, the "output signals of the at least one BTE sound input transducer" may
be used to identify any analogue or digital signal forming part of the signal path
from the at least one BTE sound input transducer to an input of the processor, including
pre-processed output signals of the at least one BTE sound input transducer.
[0070] In use, the at least one ITE microphone is positioned so that the output signal of
the at least one ITE microphone generated in response to the incoming sound has a
transfer function that constitutes a good approximation to the HRTFs of the user.
The filter conveys the directional information contained in the output signal of the
at least one ITE microphone to the resulting hearing loss compensated output signal
of the processor so that the hearing aid transfer function constitutes a good approximation
to the HRTFs of the user whereby improved localization is provided to the user.
[0071] The output signal of the at least one ITE microphone of the earpiece may be a combination
of several pre-processed ITE microphone signals or the output signal of a single ITE
microphone of the at least one ITE microphone. The short time spectrum for a given
time instance of the output signal of the at least one ITE microphone of the earpiece
is denoted S_IEC(f, t) (IEC = In-the-Ear Component).
[0072] One or more output signals of the at least one BTE sound input transducer are provided.
The spectra of these signals are denoted S_BTEC,1(f, t), S_BTEC,2(f, t), etc. (BTEC =
Behind-The-Ear Component). The output signals may be pre-processed. Pre-processing may
include, without excluding any other form of processing: adaptive and/or static feedback
suppression, adaptive or fixed beamforming, and pre-filtering.
[0073] As disclosed in more detail in
EP 2 611 218 A1, adaptive filters may be configured to adaptively filter the electronic output signals
of the at least one BTE sound input transducer so that they correspond to the output
signal of the at least one ITE microphone as closely as possible. The adaptive filters
G_1, G_2, ..., G_n have the respective transfer functions G_1(f, t), G_2(f, t), ..., G_n(f, t).
[0074] The at least one ITE microphone operates as monitor microphone(s) for generation
of an electronic sound signal with the desired spatial information of the current
sound environment.
[0075] For example, in a hearing aid with one ITE microphone, and in the event that the
incident sound field consists of sound emitted by a single speaker, the emitted sound
having the short time spectrum X(f,t); then, under the assumption that the ITE microphone
reproduces the actual HRTF perfectly, the following signals are provided:

S_IEC(f, t) = HRTF(f) · X(f, t)

and

S(f, t) = H(f) · X(f, t),

where S(f, t) is the short time spectrum of the monaural audio signal, and H(f) is
the related transfer function of the transmission path of the monaural audio signal
from the speaker to the electronic input.
[0076] After sufficient adaptation, the transfer function G(f, t) of the filter fulfils
that

G(f, t) · S(f, t) ≈ S_IEC(f, t), i.e. G(f, t) ≈ HRTF(f) / H(f).
[0077] If the speaker moves and thereby changes the HRTF, the filter, i.e. the algorithm
adjusting the filter coefficients, adapts towards the new HRTF. The time constants
of the adaptation are set to appropriately respond to changes of the current sound
environment.
[0078] Throughout the present disclosure, one signal is said to represent another signal
when the one signal is a function of the other signal, for example the one signal
may be formed by analogue-to-digital conversion, or digital-to-analogue conversion
of the other signal; or, the one signal may be formed by conversion of an acoustic
signal into an electronic signal or vice versa; or the one signal may be formed by
analogue or digital filtering or mixing of the other signal; or the one signal may
be formed by transformation, such as frequency transformation, etc., of the other
signal; etc.
[0079] Further, signals that are processed by specific circuitry, e.g. in a processor, may
be identified by a name that may be used to identify any analogue or digital signal
forming part of the signal path of the signal in question from its input of the circuitry
in question to its output of the circuitry. For example, an output signal of a microphone,
i.e. the microphone audio signal, may be used to identify any analogue or digital
signal forming part of the signal path from the output of the microphone to the input
of the receiver, including any processed microphone audio signals.
[0080] The new monaural hearing aid and the new binaural hearing aid system may additionally
provide circuitry used in accordance with other conventional methods of hearing loss
compensation so that the new circuitry or other conventional circuitry can be selected
for operation as appropriate in different types of sound environment. The different
sound environments may include speech, babble speech, restaurant clatter, music, traffic
noise, etc.
[0081] The new monaural hearing aid and the new binaural hearing aid system may for example
comprise a Digital Signal Processor (DSP), the processing of which is controlled by
selectable signal processing algorithms, each of which has various parameters for
adjustment of the actual signal processing performed. The gains in each of the frequency
channels of a multi-channel hearing aid are examples of such parameters.
[0082] One of the selectable signal processing algorithms operates in accordance with the
new method.
[0083] For example, various algorithms may be provided for conventional noise suppression,
i.e. attenuation of undesired signals and amplification of desired signals.
[0084] Signal processing in the new hearing aid may be performed by dedicated hardware or
may be performed in a signal processor, or performed in a combination of dedicated
hardware and one or more signal processors.
[0085] As used herein, the terms "processor", "signal processor", "controller", "system",
etc., are intended to refer to CPU-related entities, either hardware, a combination
of hardware and software, software, or software in execution. The term processor may
also refer to any integrated circuit that includes some hardware, which may or may
not be a CPU-related entity. For example, in some embodiments, a processor may include
a filter.
[0086] For example, a "processor", "signal processor", "controller", "system", etc., may
be, but is not limited to being, a process running on a processor, a processor, an
object, an executable file, a thread of execution, and/or a program.
[0087] By way of illustration, the terms "processor", "signal processor", "controller",
"system", etc., designate both an application running on a processor and a hardware
processor. One or more "processors", "signal processors", "controllers", "systems"
and the like, or any combination hereof, may reside within a process and/or thread
of execution, and one or more "processors", "signal processors", "controllers", "systems",
etc., or any combination hereof, may be localized on one hardware processor, possibly
in combination with other hardware circuitry, and/or distributed between two or more
hardware processors, possibly in combination with other hardware circuitry.
[0088] Also, a processor (or similar terms) may be any component or any combination of components
that is capable of performing signal processing. For example, the signal processor
may be an ASIC processor, an FPGA processor, a general purpose processor, a microprocessor,
a circuit component, or an integrated circuit.
[0089] In the following, preferred embodiments of the invention are explained in more detail
with reference to the drawing, wherein
- Fig. 1 shows in perspective a new BTE hearing aid with an ITE microphone residing in the
outer ear of a user,
- Fig. 2 shows a schematic block diagram of the new hearing aid,
- Fig. 3 shows a schematic block diagram of an exemplary new hearing aid with an adaptive filter,
and
- Fig. 4 shows a schematic block diagram of another exemplary new hearing aid.
[0090] The new method and hearing aid will now be described more fully hereinafter with
reference to the accompanying drawings, in which various examples of the new binaural
hearing aid system are shown. The new method and binaural hearing aid system may,
however, be embodied in different forms and should not be construed as limited to
the examples set forth herein. Rather, these examples are provided so that this disclosure
will be thorough and complete, and will fully convey the scope of the invention to
those skilled in the art.
[0091] It should be noted that the accompanying drawings are schematic and simplified for
clarity, and they merely show details which are essential to the understanding of
the invention, while other details have been left out.
[0092] Like reference numerals refer to like elements throughout. Like elements will, thus,
not be described in detail with respect to the description of each figure.
[0093] Fig. 1 shows a BTE hearing aid 10 in its operating position with the BTE housing
12 behind the ear, i.e. behind the pinna 100, of the user. The BTE housing 12 conventionally
accommodates a front microphone (not visible) and a rear microphone (not visible)
for conversion of a sound signal into respective audio sound signals.
[0094] The illustrated BTE hearing aid 10 has an ITE microphone 26 positioned in the outer
ear of the user outside the ear canal at the free end of an arm 30. The arm 30 is
flexible and is intended to be positioned inside the pinna 100, e.g. around
the circumference of the concha 102 behind the tragus 104 and antitragus 106,
abutting the antihelix 108 and at least partly covered by the antihelix for retaining
its position inside the outer ear of the user. The arm may be pre-formed during manufacture,
preferably into an arched shape with a curvature slightly larger than the curvature
of the antihelix 108, for easy fitting of the arm 30 into its intended position in
the pinna. The arm 30 contains electrical wires (not visible) for interconnection
of the ITE microphone 26 with other parts of the BTE hearing aid circuitry.
[0095] In one example, the arm 30 has a length and a shape that facilitate positioning of
the ITE microphone 26 in an operating position below the triangular fossa.
[0096] An earpiece 24 may alternatively, or additionally, hold one ITE microphone that is
positioned at the entrance to the ear canal when the earpiece is positioned in its
intended position in the ear canal of the user.
[0097] The ITE microphone 26 is connected to an A/D converter (not shown) and optionally to
a pre-filter (not shown) in the BTE housing 12, with electrical wires (not visible)
contained in a sound signal transmission member 20.
[0098] A processor is also accommodated in the BTE housing 12 and configured to generate
a hearing loss compensated output signal based on the audio sound signals, an output
signal of the at least one ITE microphone, and a monaural audio signal.
[0099] The hearing loss compensated output signal is transmitted through electrical wires
contained in the sound signal transmission member 20 to a receiver (not visible) for
conversion of the hearing loss compensated output signal to an acoustic output signal
for transmission towards the eardrum of the user. The receiver (not visible) is contained
in the earpiece 24 that is shaped (not shown) to be comfortably positioned in the
ear canal of the user for fastening and retaining the sound signal transmission member
20 in its intended position in the ear canal of the user as is well-known in the art
of BTE hearing aids.
[0100] Fig. 2 is a block diagram illustrating one example of signal processing in the new
hearing aid 10, e.g. the hearing aid shown in Fig. 1. The hearing aid 10 has an ITE
microphone 26 to be positioned in the outer ear of the user. An output signal 28 of
the ITE microphone 26 is digitized and optionally pre-processed, such as pre-filtered,
in a pre-processor 30, and an output 32 of the pre-processor 30 is input to a processor
34.
[0101] The hearing aid 10 also comprises an electronic input 36, such as an antenna, a telecoil,
etc., for provision of a received signal 38 representing sound emitted by a sound
source (not shown) and received at the input 36; the input 36 is not coupled to a
microphone accommodated in a housing of the hearing aid 10.
[0102] The sound emitted by the sound source may be recorded with a spouse microphone (not
shown) carried by a person that the hearing aid user desires to listen to. The output
signal of the spouse microphone is encoded for transmission to the hearing aid 10
using wireless or wired data transmission, preferably wireless data transmission.
The receiver and decoder 40 receives the transmitted data representing the spouse microphone
output signal and decodes the received signal 38 into the monaural audio signal 42.
[0103] The monaural audio signal 42 is filtered with a filter 44 in such a way that a user
can locate the position of the sound source from which the monaural signal 42 originates.
[0104] The filter 44 is controlled by the processor 34 based on the, optionally pre-processed,
output signal 32 of the ITE microphone 26 and the monaural audio signal 42, and possibly
an output signal 46 of the filter 44 providing feedback to the processor 34. The processor
34 controls the filter 44 in such a way that spatial cues in the acoustic sound signal
received by the ITE microphone 26 are transferred, or substantially transferred, to
the filtered monaural audio signal 46, so that a user perceives a sensation of direction
towards the sound source from which the monaural audio signal originates; or, a user
perceives localization of the sound source. For example, based on the monaural signal,
the user may receive acoustic signals at his or her eardrums with an interaural time
difference and/or an interaural level difference sufficient for the perceived position
of the sound source from which the monaural signal originates to be shifted outside
the head and laterally with relation to the orientation of the head of the user of
the binaural hearing aid system, preferably into a perceived position corresponding
to the actual position of the sound source, e.g. laterally within ± 45° of the actual
position.
[0105] The filtered monaural audio signal 46 is input to a processor 48 for hearing loss
compensation. The hearing loss compensated signal 50 is output to a receiver 52 that
converts the signal 50 into an acoustic signal for transmission towards the ear drum
of the user.
[0106] The processor 34 may for example control the filter 44 to phase shift the monaural
audio signal 42 with a phase shift θ, wherein θ is based on the output signal 32 of
the ITE microphone 26, and/or to multiply the monaural audio signal 42 with a gain
based on the output signal 32 of the ITE microphone.
[0107] For example, the processor 34 may be configured to calculate a cross-correlation
between the monaural audio signal 42 and the output signal 32 of the ITE microphone
26, and to determine the phase shift θ corresponding to the maximum value of the
cross-correlation, i.e. the phase shift between the monaural audio signal 42 and the
output signal 32 of the ITE microphone 26, and/or to determine the gain as the ratio
between the monaural signal, phase shifted with the determined phase shift θ, and the
output signal 32 of the ITE microphone 26. In this way, the output signal 46 of the filter
44 will contribute to the interaural time difference and/or the interaural level difference,
respectively, in substantially the same way as the acoustic signal received by the
ITE microphone 26 would have done in the absence of the hearing aid.
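The cross-correlation based estimation of the delay (phase shift) and gain can be sketched in the time domain as follows. This is a minimal illustration under the assumption of a simple delay-and-attenuation relation between the monaural signal and the ITE microphone signal; the function and variable names are illustrative and not taken from the disclosure.

```python
import numpy as np

def estimate_delay_and_gain(monaural, ite, fs):
    """Estimate the lag (in samples) and gain that best align the monaural
    audio signal with the ITE microphone signal, by locating the peak of
    their cross-correlation (a sketch; names are not from the disclosure)."""
    # Full cross-correlation between the two signals
    xcorr = np.correlate(ite, monaural, mode="full")
    # Lag at the cross-correlation maximum; lag 0 sits at index len(monaural)-1
    lag = int(np.argmax(np.abs(xcorr))) - (len(monaural) - 1)
    # Shift the monaural signal by the estimated lag (circular shift for brevity)
    shifted = np.roll(monaural, lag)
    # Gain as the ratio of RMS magnitudes after alignment
    gain = np.sqrt(np.mean(ite**2) / np.mean(shifted**2))
    # Corresponding time difference in seconds
    itd = lag / fs
    return lag, gain, itd
```

Applying the estimated delay and gain to the monaural signal then reproduces, approximately, the interaural time and level cues carried by the ITE microphone signal.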
[0108] For example, in a binaural hearing aid system with a hearing aid for the left ear
and a hearing aid for the right ear as shown in Fig. 2, the monaural audio signal
is received in both hearing aids and the respective filters 44 may output signals
intended for the right ear and left ear of the user of the binaural hearing aid system
that are phase shifted and/or amplified based on the respective cross-correlations
as disclosed above, whereby the filtered monaural signals 46 in the hearing aids obtain
substantially the same interaural time difference and/or substantially the same interaural
level difference as the corresponding acoustic signals arriving at the ears in absence
of the hearing aids so that the perceived position of the sound source from which
the monaural signal originates is shifted outside the head and laterally with relation
to the orientation of the head of the user of the binaural hearing aid system into
a perceived position corresponding to the actual position of the sound source.
[0109] Likewise, if the hearing aid shown in Fig. 2 is used as a monaural hearing aid, the
phase shift and/or amplification of the filter 44 introduce an interaural time difference
and/or interaural level difference with respect to the naturally received sound at
the other ear of the user, corresponding to the position of the sound source from
which the monaural audio signal originates.
[0110] Additionally, the processor 34 may control the transfer function of the filter 44
to be an appropriate one of the right ear part or left ear part of a selected HRTF
with the interaural time difference and/or interaural level difference corresponding
to the phase shift θ and/or gain, respectively, determined with the cross-correlation
so that the user perceives the received monaural audio signal to be emitted by the
sound source at its current position with relation to the user.
[0111] The new hearing aid circuitry shown in Fig. 2 may operate in the entire frequency
range of the hearing aid 10.
[0112] The hearing aid 10 shown in Fig. 2 may be a multi-channel hearing aid in which the
ITE microphone audio signal 28 and the monaural audio signal to be processed are divided
into a plurality of frequency channels, and wherein the signals are processed individually
in each of the frequency channels.
[0113] For a multi-channel hearing aid 10, Fig. 2 may illustrate the circuitry and signal
processing in a single frequency channel. The circuitry and signal processing may
be duplicated in a plurality of the frequency channels, e.g. in all of the frequency
channels.
[0114] For example, the signal processing illustrated in Fig. 2 may be performed in a selected
frequency band, e.g. selected during fitting of the hearing aid to a specific user
at a dispenser's office.
[0115] The selected frequency band may comprise one or more of the frequency channels, or
all of the frequency channels. The selected frequency band may be fragmented, i.e.
the selected frequency band need not comprise consecutive frequency channels.
[0116] The plurality of frequency channels may include warped frequency channels, for example
all of the frequency channels may be warped frequency channels.
[0117] The ITE microphone 26 may be connected conventionally as an input source to the processor
48 of the hearing aid so that in some situations, conventional hearing loss compensation
may be selected, and in other situations the filtered monaural audio signal 46 may
be selected for hearing loss compensation in processor 48.
[0118] An arbitrary number N of ITE microphones may substitute the ITE microphone 26, and
the output signals from the N ITE microphones may be combined in an ITE signal combiner
to form the, optionally pre-processed, output signal 32, e.g. as a weighted sum. The
weights may be frequency dependent.
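The frequency-dependent weighted sum of N ITE microphone signals can be sketched as follows (an illustrative fragment; the names and array shapes are assumptions, not taken from the disclosure):

```python
import numpy as np

def combine_ite_microphones(mic_spectra, weights):
    """Combine short-time spectra of N ITE microphones into a single signal
    as a frequency-dependent weighted sum (a sketch; names are assumed).

    mic_spectra: array of shape (N, F) -- one spectrum per microphone
    weights:     array of shape (N, F) -- per-microphone, per-frequency weights
    """
    mic_spectra = np.asarray(mic_spectra)
    weights = np.asarray(weights)
    # Weighted sum across the microphone axis, per frequency bin
    return np.sum(weights * mic_spectra, axis=0)
```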
[0119] Fig. 3 shows a hearing aid 10 similar to the hearing aid of Fig. 2; however, with
an example of filtering of the monaural audio signal 42 that is different from the
examples explained in connection with Fig. 2. The explanation of similar components
and features is not repeated, but reference is made to the description of Fig. 2.
[0120] In the hearing aid 10 of Fig. 3, the filter 44 is a digital adaptive filter with
filter coefficients controlled by the processor 34 including adaptive controller 54.
The controller 54 controls the adaptation of the filter coefficients to minimize the
difference 56 between the filtered monaural audio signal 46 and the, optionally pre-processed,
output signal 32 of the ITE microphone 26. The difference 56 is provided by subtractor
58 of the processor 34.
[0121] In this way, the filtered monaural audio signal 46 approximates the, optionally pre-processed,
output signal 32 of the ITE microphone 26, and thus also substantially attains a transfer
function corresponding to an HRTF of the user, since the ITE microphone 26 is positioned
in a position in the outer ear of the user, wherein the hearing aid transfer functions
are substantially equal to the right ear part or the left ear part of the HRTFs of
the user.
[0122] The, optionally pre-processed, output signal 32 of the ITE microphone 26 has a short
time spectrum denoted S_IEC(f,t) (IEC = In-the-Ear Component).
[0123] The short time spectrum of the monaural audio sound signal 42 is denoted S(f,t).
Preprocessing may include, without excluding any other form of processing: adaptive and/or
static feedback suppression, adaptive or fixed beamforming, and pre-filtering.
[0124] The adaptive controller 54 is configured to control the filter coefficients of adaptive
filter 44 so that the filter output 46 corresponds to the, optionally pre-processed,
output signal 32 of the ITE microphone 26 as closely as possible.
[0125] The filter 44 has the transfer function G(f,t).
[0126] The ITE microphone 26 operates as monitor microphone for generation of an electronic
sound signal 46 with the desired spatial information of the current sound environment.
[0127] Thus, the filter coefficients are adapted to obtain an exact or approximate solution
to the following minimization problem:

min over G(f,t) of ‖ W(f) · ( S_IEC(f,t) − G(f,t) · S(f,t) ) ‖_p

[0128] wherein p is the norm factor, and W(f) is a frequency weighting factor, e.g. W(f)
= 1.
[0129] The algorithm controlling the adaptation may, without being restricted to, be based
on least mean squares (LMS) or recursive least squares (RLS) optimization methods,
possibly normalized, in which p = 2.
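For the p = 2 case, the normalized LMS (NLMS) adaptation named above can be sketched in the time domain as follows. This is a generic NLMS sketch, not the disclosed implementation; all function names and parameter values are assumptions.

```python
import numpy as np

def nlms_adapt(monaural, ite, num_taps=8, mu=0.5, eps=1e-8):
    """Normalized LMS adaptation of the filter coefficients so that the
    filtered monaural signal tracks the ITE microphone signal
    (the p = 2 case; a sketch, names and defaults are assumed)."""
    w = np.zeros(num_taps)       # adaptive filter coefficients
    x_buf = np.zeros(num_taps)   # delay line for the monaural signal
    out = np.zeros(len(monaural))
    for n in range(len(monaural)):
        # Shift the newest monaural sample into the delay line
        x_buf = np.concatenate(([monaural[n]], x_buf[:-1]))
        y = w @ x_buf            # filtered monaural signal
        e = ite[n] - y           # error against the ITE microphone signal
        # NLMS update: step size normalized by the input power
        w += mu * e * x_buf / (x_buf @ x_buf + eps)
        out[n] = y
    return w, out
```

When the ITE signal is a fixed filtering of the monaural signal, the adapted coefficients converge towards that filtering, which is the mechanism by which the filter attains the HRTF-related transfer function described above.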
[0130] For example, in the event that the incident sound field consists of sound emitted
by a single speaker, the emitted sound having the short time spectrum X(f,t); then,
under the assumption that the ITE microphone 26 reproduces the actual HRTF perfectly,
the following signals are provided:

S_IEC(f,t) = HRTF(f) · X(f,t) and S(f,t) = H(f) · X(f,t)

where H(f) is the transfer function of the monaural audio signal 42.
[0131] After sufficient adaptation, the hearing aid transfer function of the monaural audio
signal 42 will equal the actual HRTF so that

G(f,t) · H(f) ≈ HRTF(f), whereby G(f,t) · S(f,t) ≈ S_IEC(f,t).
[0132] If the speaker moves and thereby changes the HRTF, the adaptive filter 44, i.e. the
controller 54 adjusting the filter coefficients, adapts towards the new HRTF. The time
constants of the adaptation are set to appropriately respond to changes of the current
sound environment.
[0133] The new hearing aid circuitry shown in Fig. 3 may operate in the entire frequency
range of the hearing aid 10.
[0134] The hearing aid 10 shown in Fig. 3 may be a multi-channel hearing aid in which the
ITE microphone audio signal 28 and the monaural audio signal to be processed are divided
into a plurality of frequency channels, and wherein the signals are processed individually
in each of the frequency channels.
[0135] For a multi-channel hearing aid 10, Fig. 3 may illustrate the circuitry and signal
processing in a single frequency channel. The circuitry and signal processing may
be duplicated in a plurality of the frequency channels, e.g. in all of the frequency
channels.
[0136] For example, the signal processing illustrated in Fig. 3 may be performed in a selected
frequency band, e.g. selected during fitting of the hearing aid to a specific user
at a dispenser's office.
[0137] The selected frequency band may comprise one or more of the frequency channels, or
all of the frequency channels. The selected frequency band may be fragmented, i.e.
the selected frequency band need not comprise consecutive frequency channels.
[0138] The plurality of frequency channels may include warped frequency channels, for example
all of the frequency channels may be warped frequency channels.
[0139] The ITE microphone 26 may be connected conventionally as an input source to the processor
48 of the hearing aid so that in some situations, conventional hearing loss compensation
may be selected, and in other situations the filtered monaural audio signal 46 may
be selected for hearing loss compensation in processor 48.
[0140] An arbitrary number N of ITE microphones may substitute the ITE microphone 26, and
the output signals from the N ITE microphones may be combined in an ITE signal combiner
to form the, optionally pre-processed, output signal 32, e.g. as a weighted sum. The
weights may be frequency dependent.
[0141] Fig. 4 shows a hearing aid 10 similar to the hearing aids of Figs. 2 and 3, respectively;
however, with an example of filtering of the monaural audio signal 42 that is different
from the examples explained in connection with Figs. 2 and 3. The explanation of similar
components and features is not repeated, but reference is made to the descriptions
of Figs. 2 and 3.
[0142] In Fig. 4, the filter 44 amplifies the monaural audio signal 42 with gain values
that are determined so that the signal magnitudes of the filtered monaural audio signal
46 are identical to, or substantially identical to, the signal magnitudes of the,
optionally pre-processed, output signal 32 of the ITE microphone 26 at a plurality
of frequencies, whereby spatial cues in the, optionally pre-processed, output signal
32 of the ITE microphone 26, are transferred to the filtered monaural audio signal
46.
[0143] The processor 60 performs a spectral analysis of the, optionally pre-processed, output
signal 32 of the ITE microphone 26, and the signal magnitude calculator 62 calculates
signal magnitudes of the, optionally pre-processed, output signal 32 of the ITE microphone
26 at a plurality of frequencies.
[0144] Likewise, the processor 64 performs a spectral analysis of the monaural audio signal
42, and the signal magnitude calculator 66 determines signal magnitudes of the monaural
audio signal 42 at the plurality of frequencies.
[0145] The gain processor 68 calculates gain values at respective frequencies of the plurality
of frequencies based on the ratio between the signal magnitudes of the, optionally
pre-processed, output signal 32 of the ITE microphone 26 and the calculated signal
magnitudes of the monaural audio signal 42, and outputs the determined gain values
to the filter 44 that is connected for multiplying the monaural audio signal 42 with
the determined gain values at the respective frequencies.
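The per-frequency magnitude matching performed by the signal magnitude calculators 62, 66 and the gain processor 68 can be sketched on a single signal frame as follows. The gain is taken here as the ratio of ITE to monaural magnitudes so that the filtered magnitudes match those of the ITE signal; the names are assumptions, not from the disclosure.

```python
import numpy as np

def magnitude_matching_gains(monaural_frame, ite_frame, eps=1e-12):
    """Compute per-frequency gain values that make the magnitudes of the
    filtered monaural signal match those of the ITE microphone signal,
    and apply them in the frequency domain (a sketch; names are assumed)."""
    S = np.fft.rfft(monaural_frame)   # monaural short-time spectrum
    S_iec = np.fft.rfft(ite_frame)    # ITE microphone short-time spectrum
    # Gain per frequency bin: ratio of ITE to monaural magnitudes
    gains = np.abs(S_iec) / (np.abs(S) + eps)
    # Apply the gains to the monaural spectrum, keeping the monaural phase
    filtered = np.fft.irfft(gains * S, n=len(monaural_frame))
    return gains, filtered
```

After this step the filtered frame carries the magnitude spectrum, and thereby the level-based spatial cues, of the ITE microphone signal.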
[0146] The monaural audio signal 42 is processed so that differences in signal magnitudes
between the monaural audio signal 42 and the ITE audio sound signal 32 are reduced.
The processing may be performed in a selected frequency range, or in a plurality of
selected frequency ranges, or in the entire frequency range in which the hearing aid
circuitry is capable of operating.
[0147] The determined gain values at the plurality of frequencies may be converted to corresponding
filter coefficients of a linear phase filter inserted into the signal path of the
monaural sound signal 42, or, the gain values may be applied directly to the monaural
sound signal 42 in the frequency domain.
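The conversion of the determined gain values into coefficients of a linear phase filter can be sketched via frequency sampling, for example as follows. This is one possible construction under the stated assumptions, not the disclosed implementation; an odd filter length is assumed so that the impulse response is symmetric about its centre tap, which gives linear phase.

```python
import numpy as np

def gains_to_linear_phase_fir(gains, num_taps):
    """Convert per-frequency gain values into linear-phase FIR coefficients
    via frequency sampling: real inverse FFT of the zero-phase gain response,
    delayed by half the (odd) filter length (a sketch; names are assumed).

    gains: desired magnitudes on the np.fft.rfft bins of a num_taps-point
           grid, i.e. len(gains) == num_taps // 2 + 1, with num_taps odd.
    """
    # Zero-phase impulse response from the real-valued desired magnitudes
    h = np.fft.irfft(gains, n=num_taps)
    # Circular shift by half the length makes the filter causal and its
    # impulse response symmetric about the centre tap (linear phase)
    h = np.roll(h, num_taps // 2)
    # Symmetric window to control leakage from the sharp frequency sampling
    h *= np.hanning(num_taps)
    return h
```

Alternatively, as stated above, the gain values may simply be applied directly to the monaural signal in the frequency domain.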
[0148] The new hearing aid circuitry shown in Fig. 4 may operate in the entire frequency
range of the hearing aid 10.
[0149] The hearing aid 10 shown in Fig. 4 may be a multi-channel hearing aid in which the
ITE microphone audio signal 28 and the monaural audio signal to be processed are divided
into a plurality of frequency channels, and wherein the signals are processed individually
in each of the frequency channels.
[0150] For a multi-channel hearing aid 10, Fig. 4 may illustrate the circuitry and signal
processing in a single frequency channel. The circuitry and signal processing may
be duplicated in a plurality of the frequency channels, e.g. in all of the frequency
channels.
[0151] For example, the signal processing illustrated in Fig. 4 may be performed in a selected
frequency band, e.g. selected during fitting of the hearing aid to a specific user
at a dispenser's office.
[0152] The selected frequency band may comprise one or more of the frequency channels, or
all of the frequency channels. The selected frequency band may be fragmented, i.e.
the selected frequency band need not comprise consecutive frequency channels.
[0153] The plurality of frequency channels may include warped frequency channels, for example
all of the frequency channels may be warped frequency channels.
[0154] An arbitrary number N of ITE microphones may substitute the ITE microphone 26, and
the output signals from the N ITE microphones may be combined in an ITE signal combiner
to form the, optionally pre-processed, output signal 32, e.g. as a weighted sum. The
weights may be frequency dependent.