SUMMARY
[0001] Hearing care is about restoring the ability to hear. A dominant element of this restoration
is to regain the ability to understand speech in various sound environments.
[0002] Another less explored element of rehabilitation is the importance of daily use of
the hearing aids. Currently we can track if the instruments are being worn, but not
if they are actively used to re-engage the wearer in conversation.
[0003] The ability to monitor, nudge, and guide the wearer to continuously challenge and
improve hearing ability and social interaction is important to regain an active social
lifestyle.
[0004] The proposed solution may track and estimate the activity level of the user and derive
a score for a relative objective in a rehabilitation plan. The proposed solution may
be used to document treatment outcome for users frequently connected as well as users
offline between consultations.
[0005] Using an own voice detector in the hearing aids, the hearing instruments may be configured
to log the time a wearer is speaking and to combine this with data concerning the sound
environment, such as SNR or the activity level of other identified speech sources.
[0006] The result may help a hearing care professional to prescribe specific targets to
aid in the rehabilitation for a specific user (e.g. active participation in conversations,
its frequency and/or duration).
[0007] By observing a window of variable time, the tracking of own voice activity and other
talker activity can be combined with a ratio of pauses. From this ratio it can be
derived whether the user is engaging in a conversation rather than passively listening
to, for example, the TV. The ratio (or equivalent data) for a given time window may
be stored in the instrument for periodic retrieval.
[0008] In the present context, 'a window of variable time' is taken to mean an 'observation
window' that may vary in time. The data that are observed in the observation window
may e.g. include voice activity detection data (e.g. own voice and other voice activity
or no-activity). An indication of the voice activity in the environment of a user
may e.g. be provided by a ratio of the sum of time periods with voice activity and
the total time period of the observation window (the total being the sum of time periods
with voice activity and the sum of speech pauses). Further sub-indicators may be provided
by a) the ratio of the sum of time periods with own voice activity and the total time
period of the observation window, and b) the ratio of the sum of time periods with
other voice activity and the total time period of the observation window. The pauses
may be further classified as 'pauses before own voice' and 'pauses before other voice',
which may additionally be used to evaluate the user's conversation participation pattern.
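By way of example, and not limitation, these ratios may be computed from per-time-unit
detector outputs as in the following sketch (Python; the frame representation and all
names are illustrative assumptions, and further sub-indicators, e.g. the pause
classification, may be derived analogously):

    def activity_ratios(frames):
        # frames: list of (own_voice, other_voice) booleans, one pair per
        # time unit of the observation window
        total = len(frames)
        own = sum(1 for a, b in frames if a)
        oth = sum(1 for a, b in frames if b)
        voice = sum(1 for a, b in frames if a or b)
        return {
            "voice_ratio": voice / total,            # overall voice activity
            "own_ratio": own / total,                # sub-indicator a)
            "other_ratio": oth / total,              # sub-indicator b)
            "pause_ratio": (total - voice) / total,  # speech pauses
        }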
[0009] The ratio for one or more time windows may be transferred to a different apparatus
which is capable of further processing the data and/or presenting the data in a user
interface. Such apparatus or processing device may be constituted by or comprise a
fitting system, or a smartphone or a remote control device for the hearing aid, etc.
[0010] US2006222194A1 deals with a hearing aid comprising a datalogger and with learning from these
data. The hearing aid comprises an input unit, a signal processing unit, and a user
interface for converting user interaction to a control signal thereby controlling
a processing setting of the signal processing unit. The hearing aid further comprises
a memory unit comprising a control section storing a set of control parameters associated
with the acoustic environment, and a datalogger section receiving data from the input
unit, the signal processing unit, and the user interface. The signal processing unit
configures the setting according to the set of control parameters and comprises a
learning controller adapted to adjust the set of control parameters according to the
data in the data logging section.
[0011] Compared to previous wearing time tracking, the proposed solution enriches the result
with the social activity and active participation of the wearer.
A hearing aid:
[0012] In an aspect of the present application, a hearing aid configured to be worn at or
in an ear of a user is provided. The hearing aid comprises
- An input unit comprising at least one input transducer, e.g. a microphone, for picking
up sound from the environment of the hearing aid and configured to provide at least
one electric input signal representing said sound;
- An own voice detector configured to detect whether or not, or with what probability,
said at least one electric input signal, or a processed version thereof, comprises
a voice from the user of the hearing aid, and to provide an own voice control signal
indicative thereof; and
- A datalogger for logging data related to the use of said hearing aid over time.
The hearing aid may be configured to log data - in said datalogger - representative
of absolute or relative time periods of own voice activity in dependence of said own
voice control signal.
[0013] Thereby an improved hearing aid may be provided.
[0014] The hearing aid may be configured to further log data concerning a sound environment,
at least during said time periods of own voice activity. The data concerning a sound
environment may be logged with the same (or lower) frequency as the own voice activity
is logged. The hearing aid may comprise one or more detectors of the acoustic environment.
The hearing aid may be configured to receive data from one or more detectors of the
acoustic environment located in other devices or systems, e.g. an external device,
such as a smartphone, or a charging station or other auxiliary device in communication
with the hearing aid.
[0015] The data concerning a sound environment may include a measure of sound quality, e.g.
a signal to noise ratio (SNR). The hearing aid may comprise a detector for providing
a measure of sound quality of the electric input signal or a signal originating therefrom.
The hearing aid may comprise a detector for estimating an SNR of the at least one
electric input signal, or a processed version thereof. The hearing aid may comprise
a level detector for estimating a current level of the at least one electric input
signal or of a signal derived therefrom.
[0016] The data concerning a sound environment may include an activity level of other (identified)
speech sources. 'An activity level' of a sound source (an external source or the user) may
e.g. be a duration of activity in an absolute or relative time scale (e.g. in seconds
or in a number of time units (absolute or arbitrary) relative to the number of time
units of a total period of observation). 'An activity level' may e.g. include a number
of distinct events of activity (e.g. separated by a certain minimum time period) and
a total duration of activity in an absolute or relative scale of the sound source
in question.
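By way of example, and not limitation, an 'activity level' in the above sense may be
derived from a binary activity trace as in the following sketch (Python; the
minimum-separation parameter and names are illustrative assumptions):

    MIN_GAP = 10  # time units; assumed minimum separation between distinct events

    def activity_level(active):
        # active: iterable of booleans, one per time unit, for one sound source
        events, duration, gap = 0, 0, MIN_GAP
        for a in active:
            if a:
                if gap >= MIN_GAP:
                    events += 1     # a new distinct event starts
                duration += 1       # total duration of activity
                gap = 0
            else:
                gap += 1
        return events, duration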
[0017] The data may comprise a requested gain from a compressive amplification algorithm
of the hearing aid. The compressive amplification algorithm may be configured to compensate
for the user's hearing impairment. The compressive amplification algorithm may be
configured to provide frequency and level dependent gain (amplification or attenuation)
to the at least one electric input signal or to a signal derived therefrom.
[0018] The hearing aid may comprise a voice activity detector configured to detect whether
or not, or with what probability, said at least one electric input signal, or a processed
version thereof, comprises a human voice and to provide a voice control signal indicative
thereof. The voice activity detector may be configured to detect speech. The voice
activity detector may be configured to differentiate between the voice of the user
wearing the hearing aid and other voices (e.g. using a level differentiation, and/or
a trained algorithm, e.g. a neural network), in which case the voice activity detector
may include the own voice detector. The voice activity detector may, however, also
be configured NOT to differentiate between the voice of the user wearing the hearing
aid and other voices. In such case, time periods where a voice other than the user's
voice is present may be determined from a (e.g. logic) combination of the own voice
control signals and the (other) voice control signals, e.g. OTHER VOICE = VOICE AND
(NOT OWN VOICE).
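By way of illustration, with binary control signals this combination may be expressed
as in the following trivial sketch (Python; names are illustrative):

    def other_voice(voice: bool, own_voice: bool) -> bool:
        # OTHER VOICE = VOICE AND (NOT OWN VOICE)
        return voice and not own_voice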
[0019] The hearing aid may be configured to further log absolute or relative time periods
of NO own voice activity. The tracking of own voice activity can e.g. be combined
with the logging of speech pauses, and/or the logging of total (absolute or relative)
time elapsed in the observation window in question. A ratio of time periods of own
voice activity to speech pauses may be logged. A ratio of time periods of voice activity
to speech pauses may be logged. A ratio of time periods of other voice activity (than
own voice) to speech pauses may be logged. From this ratio it can be derived if the
user is engaging in a conversation rather than passively listening to for example
the TV. The ratio for a given (observation) time window can be stored in the instrument
for periodic retrieval. The ratio for one or more windows can be transferred to a
different apparatus (e.g. a smartphone, a similar processing device, or a fitting
system) which is capable of further processing the data and/or presenting the data in
a user interface.
[0020] The datalogger may be configured to log data in successive observation windows of
variable time, e.g. of increasing length over time, but with a constant or decreasing
number of logged data values of successive observation windows. By observing a time
window, e.g. an observation window of variable time, e.g. an observation window of
increasing length over time (but with a constant (or decreasing) number of logged
data values of successive windows), data can be logged over an extended time even
with a limited storage capacity of a memory of the datalogger of the hearing aid,
see e.g. FIG. 6. This may be advantageous in cases where the time between opportunities
for off-loading data from the datalogger is unknown. Data stored by the datalogger
may e.g. be off-loaded during charging of a rechargeable battery of the hearing aid
in a charging station, see e.g. FIG. 7.
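One possible, purely illustrative realization of such a constant-memory log is sketched
below (Python; the capacity and the pairwise-averaging merge rule are assumptions, not
the disclosed scheme): whenever the fixed-size buffer is full, adjacent values are
averaged pairwise, halving the number of stored values while doubling the time span
each value represents.

    class GrowingWindowLog:
        def __init__(self, capacity=64):       # capacity should be even
            self.capacity = capacity
            self.values = []                   # logged values, oldest first
            self.time_units_per_value = 1      # time span of one stored value

        def log(self, value):
            if len(self.values) == self.capacity:
                # buffer full: merge adjacent pairs, halving the value count
                # and doubling the time span each stored value represents
                self.values = [(a + b) / 2.0 for a, b in
                               zip(self.values[0::2], self.values[1::2])]
                self.time_units_per_value *= 2
            self.values.append(value)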
[0021] The hearing aid may comprise a communication interface allowing data to be exchanged
with another device or system. The communication interface may be based on a cabled
connection, e.g. comprising appropriate connectors, allowing easy connection (and
dis-connection) of the hearing aid to/from the 'another device or system'. The communication
interface may be based on a wireless connection to the 'another device or system',
e.g. via a network.
[0022] The hearing aid may comprise an output unit, wherein the output unit comprises
a number of electrodes of a cochlear implant type hearing aid or a vibrator of a bone
conducting hearing aid, or a loudspeaker of an air conduction hearing aid, or a combination
thereof.
[0023] The hearing aid may be adapted to provide a frequency dependent gain and/or a level
dependent compression and/or a transposition (with or without frequency compression)
of one or more frequency ranges to one or more other frequency ranges, e.g. to compensate
for a hearing impairment of a user. The hearing aid may comprise a signal processor
for applying one or more processing algorithms to enhance the input signals and providing
a processed output signal.
[0024] The hearing aid may comprise an output unit for providing a stimulus perceived by
the user as an acoustic signal based on a processed electric signal. The output unit
may comprise a number of electrodes of a cochlear implant (for a CI type hearing aid)
or a vibrator of a bone conducting hearing aid. The output unit may comprise an output
transducer. The output transducer may comprise a receiver (loudspeaker) for providing
the stimulus as an acoustic signal to the user (e.g. in an acoustic (air conduction
based) hearing aid). The output transducer may comprise a vibrator for providing the
stimulus as mechanical vibration of a skull bone to the user (e.g. in a bone-attached
or bone-anchored hearing aid).
[0025] The hearing aid may comprise an input unit for providing an electric input signal
representing sound. The input unit may comprise an input transducer, e.g. a microphone,
for converting an input sound to an electric input signal. The input unit may comprise
a wireless receiver for receiving a wireless signal comprising or representing sound
and for providing an electric input signal representing said sound. The wireless receiver
may e.g. be configured to receive an electromagnetic signal in the radio frequency
range (3 kHz to 300 GHz). The wireless receiver may e.g. be configured to receive
an electromagnetic signal in a frequency range of light (e.g. infrared light 300 GHz
to 430 THz, or visible light, e.g. 430 THz to 770 THz).
[0026] The hearing aid may comprise a directional system adapted to spatially filter sounds
from the environment, and thereby enhance a target acoustic source among a multitude
of acoustic sources in the local environment of the user wearing the hearing aid.
The directional system may be adapted to detect (such as adaptively detect) from which
direction a particular part of the input signal originates. This can be achieved in
various different ways as e.g. described in the prior art. In hearing aids, a microphone
array beamformer is often used for spatially attenuating background noise sources.
Many beamformer variants can be found in literature. The minimum variance distortionless
response (MVDR) beamformer is widely used in microphone array signal processing. Ideally
the MVDR beamformer keeps the signals from the target direction (also referred to
as the look direction) unchanged, while attenuating sound signals from other directions
maximally. The generalized sidelobe canceller (GSC) structure is an equivalent representation
of the MVDR beamformer offering computational and numerical advantages over a direct
implementation in its original form.
[0027] Communication between the hearing aid and other devices or systems may be wired or
wireless. Wireless communication may e.g. be in the base band (audio frequency range,
e.g. between 0 and 20 kHz). Preferably, communication between the hearing aid and
the other device is based on some sort of modulation at frequencies above 100 kHz.
Preferably, frequencies used to establish a communication link between the hearing
aid and the other device are below 70 GHz, e.g. located in a range from 50 MHz to 70
GHz, e.g. above 300 MHz, e.g. in an ISM range above 300 MHz, e.g. in the 900 MHz range
or in the 2.4 GHz range or in the 5.8 GHz range or in the 60 GHz range (ISM=Industrial,
Scientific and Medical, such standardized ranges being e.g. defined by the International
Telecommunication Union, ITU). The wireless link may be based on a standardized or
proprietary technology. The wireless link may be based on Bluetooth technology (e.g.
Bluetooth Low-Energy technology).
[0028] The hearing aid may be or form part of a portable (i.e. configured to be wearable)
device, e.g. a device comprising a local energy source, e.g. a battery, e.g. a rechargeable
battery. The hearing aid may e.g. be a low weight, easily wearable, device, e.g. having
a total weight less than 100 g, such as less than 20 g.
[0029] The hearing aid may comprise a forward or signal path between an input unit (e.g.
an input transducer, such as a microphone or a microphone system and/or direct electric
input (e.g. a wireless receiver)) and an output unit, e.g. an output transducer. The
signal processor may be located in the forward path. The signal processor may be adapted
to provide a frequency dependent gain according to a user's particular needs. The
hearing aid may comprise an analysis path comprising functional components for analyzing
the input signal (e.g. determining a level, a modulation, a type of signal, an acoustic
feedback estimate, etc.). Some or all signal processing of the analysis path and/or
the signal path may be conducted in the frequency domain. Some or all signal processing
of the analysis path and/or the signal path may be conducted in the time domain.
[0030] The hearing aid may be configured to operate in different modes, e.g. a normal mode
and one or more specific modes, e.g. selectable by a user, or automatically selectable.
A mode of operation may be optimized to a specific acoustic situation or environment.
A mode of operation may include a low-power mode, where functionality of the hearing
aid is reduced (e.g. to save power), e.g. to disable wireless communication, and/or
to disable specific features of the hearing aid.
[0031] The hearing aid may comprise a number of detectors configured to provide status signals
relating to a current physical environment of the hearing aid (e.g. the current acoustic
environment), and/or to a current state of the user wearing the hearing aid, and/or
to a current state or mode of operation of the hearing aid. Alternatively or additionally,
one or more detectors may form part of an
external device in communication (e.g. wirelessly) with the hearing aid. An external device
may e.g. comprise another hearing aid, a remote control, an audio delivery device,
a telephone (e.g. a smartphone), an external sensor, etc.
[0032] One or more of the number of detectors may operate on the full band signal (time
domain). One or more of the number of detectors may operate on band split signals
((time-) frequency domain), e.g. in a limited number of frequency bands.
[0033] The number of detectors may comprise a level detector for estimating a current level
of a signal of the forward path. The detector may be configured to decide whether
the current level of a signal of the forward path is above or below a given (L-)threshold
value. The level detector may operate on the full band signal (time domain) or on
band split signals ((time-) frequency domain).
[0034] The hearing aid may comprise a voice activity detector (VAD) for estimating whether
or not (or with what probability) an input signal comprises a voice signal (at a given
point in time).
[0035] A voice signal may in the present context be taken to include a speech signal from
a human being. It may also include other forms of utterances generated by the human
speech system (e.g. singing). The voice activity detector unit may be adapted to classify
a current acoustic environment of the user as a VOICE or NO-VOICE environment. This
has the advantage that time segments of the electric microphone signal comprising
human utterances (e.g. speech) in the user's environment can be identified, and thus
separated from time segments only (or mainly) comprising other sound sources (e.g.
artificially generated noise). The voice activity detector may be adapted to detect
as a VOICE also the user's own voice. Alternatively, the voice activity detector may
be adapted to exclude a user's own voice from the detection of a VOICE.
[0036] The hearing aid may comprise an own voice detector for estimating whether or not
(or with what probability) a given input sound (e.g. a voice, e.g. speech) originates
from the voice of the user of the system. A microphone system of the hearing aid may
be adapted to be able to differentiate between a user's own voice and another person's
voice and possibly from NON-voice sounds.
[0037] The number of detectors may comprise a movement detector, e.g. an acceleration sensor.
The movement detector may be configured to detect movement of the user's facial muscles
and/or bones, e.g. due to speech or chewing (e.g. jaw movement) and to provide a detector
signal indicative thereof.
[0038] The hearing aid may comprise a classification unit configured to classify the current
situation based on input signals from (at least some of) the detectors, and possibly
other inputs as well. In the present context 'a current situation' may be taken to
be defined by one or more of
- a) the physical environment (e.g. including the current electromagnetic environment,
e.g. the occurrence of electromagnetic signals (e.g. comprising audio and/or control
signals) intended or not intended for reception by the hearing aid, or other properties
of the current environment than acoustic);
- b) the current acoustic situation (input level, feedback, etc.);
- c) the current mode or state of the user (movement, temperature, cognitive load, etc.); and
- d) the current mode or state of the hearing aid (program selected, time elapsed since
last user interaction, etc.) and/or of another device in communication with the hearing
aid.
[0039] The hearing aid may comprise a multi-level data storage system to distil conversation
history across current, recent, and past conversations. The data storage scheme provides
that more data are stored for the current conversation and made available for other
classifiers, and less data are stored for conversations that are no longer active.
E.g. data are aggregated and stored in memory bins representing shorter or longer time intervals.
For each storage bin, the data may be represented by a single numeric counter or a
ratio value, in place of the time-domain classifier result that is logged in an active
conversation. The hearing aid may be designed in a way that the available data storage
and availability of means for data transport to other apparatus determine the degree
of data aggregation. This dynamic aggregation may allow the hearing aid to store conversation
tracking data for an arbitrary time period without sacrificing the detailed time-domain
data for a specific number of conversation minutes, see e.g. FIG. 6.
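A minimal sketch of such a multi-level scheme is given below (Python; the storage size
and the chosen summary statistic are illustrative assumptions): detailed time-domain
classifier results are kept for the active conversation only, and are reduced to a
single counter/ratio summary when the conversation closes.

    from collections import deque

    class ConversationStore:
        def __init__(self, detail_capacity=6000):
            # detailed time-domain classifier results, active conversation only
            self.active = deque(maxlen=detail_capacity)
            self.history = []   # aggregated bins for closed conversations

        def log_frame(self, own_voice, other_voice):
            self.active.append((own_voice, other_voice))

        def close_conversation(self):
            # reduce the detailed data to a single counter/ratio summary
            n = len(self.active)
            if n:
                own = sum(1 for ov, _ in self.active if ov)
                self.history.append({"frames": n, "own_voice_ratio": own / n})
                self.active.clear()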
[0040] The classification unit may be based on or comprise a neural network, e.g. a trained
neural network.
[0041] The hearing aid may further comprise other relevant functionality for the application
in question, e.g. compression, noise reduction, feedback control, etc.
[0042] The hearing aid may comprise a hearing instrument, e.g. a hearing instrument adapted
for being located at the ear or fully or partially in the ear canal of a user, e.g.
a headset, an earphone, an ear protection device or a combination thereof.
Use:
[0043] In an aspect, use of a hearing aid as described above, in the 'detailed description
of embodiments' and in the claims, is moreover provided. Use may be provided in a
system comprising audio distribution. Use may be provided in a system comprising one
or more hearing aids (e.g. hearing instruments), headsets, ear phones, active ear
protection systems, etc.
A method of operating a hearing aid:
[0044] In an aspect, a method of operating a hearing aid configured to be worn at or in
an ear of a user is provided. The method may comprise
- providing at least one electric input signal representing sound;
- detecting whether or not, or with what probability, said at least one electric input
signal, or a processed version thereof, comprises a voice from the user of the hearing
aid, and providing an own voice control signal indicative thereof;
- logging data related to the use of said hearing aid over time.
The logging of data may comprise logging absolute or relative time periods of own voice
activity in dependence of said own voice control signal.
[0045] It is intended that some or all of the structural features of the device described
above, in the 'detailed description of embodiments' or in the claims can be combined
with embodiments of the method, when appropriately substituted by a corresponding
process and vice versa. Embodiments of the method have the same advantages as the
corresponding devices.
[0046] The method may apply progressive abstraction of data over a period of time. The hearing
aid may preserve detailed conversation data logging for a number of minutes, after which
the data are aggregated into more abstract, usable counters and ratios. This allows
the hearing aid to track a high degree of data resolution while the user is connected
to a connected apparatus, and still allows the hearing aid to maintain relevant data
between visits to a clinic if the user is offline the entire time, see e.g. FIG. 6.
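Continuing the illustrative GrowingWindowLog sketch given further above, the progressive
abstraction may e.g. be exercised as follows (placeholder data; not part of the disclosure):

    # Placeholder data: e.g. one own-voice ratio logged per minute of use.
    log = GrowingWindowLog(capacity=64)
    for ratio in [0.2, 0.35, 0.1] * 100:
        log.log(ratio)
    # log.values still spans the whole period, now at coarser resolution;
    # log.time_units_per_value indicates the current granularity (minutes here).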
A method of extracting information about a hearing aid user's conversations:
[0047] In an aspect, a method of extracting information about a hearing aid user's social
engagement in conversations is provided.
[0048] The method comprises
- Logging data in a hearing aid worn by the user over an extended period of time (e.g.
weeks or months);
- Wherein the logged data includes data representative of
∘ the user's own voice activity over time, and
∘ a general voice activity in an environment of the user over time;
- Analyzing the logged data with a view to the user's own voice activity and the voice
activity in the environment, to estimate the user's engagement in conversations.
[0049] The user's engagement in conversations may e.g. be estimated by identifying a conversation
in the combined data from an own voice detector and a general voice detector (or a
dedicated 'not-own voice' detector). A conversation is detected if a user's voice
followed by another voice is detected, one voice following the other, without longer
speech pauses between them (i.e. pauses not larger than a threshold value ΔtPAUSE,
e.g. 5-10 seconds), see e.g. FIG. 3, 4.
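A purely illustrative implementation of this criterion, operating on speech events with
start/end times and a speaker label, might look as follows (Python; the event
representation, names and the chosen threshold are assumptions):

    PAUSE_MAX = 7.5  # seconds; threshold ΔtPAUSE, within the 5-10 s range above

    def detect_conversations(events):
        # events: list of (start, end, speaker) with speaker in {"own", "other"}
        conversations, current = [], []
        for ev in sorted(events, key=lambda e: e[0]):
            if current and ev[0] - current[-1][1] > PAUSE_MAX:
                conversations.append(current)   # pause too long: segment ends
                current = []
            current.append(ev)
        if current:
            conversations.append(current)
        # a conversation requires both the user's voice and another voice
        return [seg for seg in conversations
                if {spk for _s, _e, spk in seg} >= {"own", "other"}]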
A computer readable medium or data carrier:
[0050] In an aspect, a tangible computer-readable medium (a data carrier) storing a computer
program comprising program code means (instructions) for causing a data processing
system (a computer) to perform (carry out) at least some (such as a majority or all)
of the (steps of the) method described above, in the 'detailed description of embodiments'
and in the claims, when said computer program is executed on the data processing system
is furthermore provided by the present application.
[0051] By way of example, and not limitation, such computer-readable media can comprise
RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other
magnetic storage devices, or any other medium that can be used to carry or store desired
program code in the form of instructions or data structures and that can be accessed
by a computer. Disk and disc, as used herein, includes compact disc (CD), laser disc,
optical disc, digital versatile disc (DVD), floppy disk and Blu-ray disc where disks
usually reproduce data magnetically, while discs reproduce data optically with lasers.
Other storage media include storage in DNA (e.g. in synthesized DNA strands). Combinations
of the above should also be included within the scope of computer-readable media.
In addition to being stored on a tangible medium, the computer program can also be
transmitted via a transmission medium such as a wired or wireless link or a network,
e.g. the Internet, and loaded into a data processing system for being executed at
a location different from that of the tangible medium.
A computer program:
[0052] A computer program (product) comprising instructions which, when the program is executed
by a computer, cause the computer to carry out (steps of) the method described above,
in the 'detailed description of embodiments' and in the claims is furthermore provided
by the present application.
A data processing system:
[0053] In an aspect, a data processing system comprising a processor and program code means
for causing the processor to perform at least some (such as a majority or all) of
the steps of the method described above, in the 'detailed description of embodiments'
and in the claims is furthermore provided by the present application.
A hearing system:
[0054] In a further aspect, a hearing system comprising a hearing aid as described above,
in the 'detailed description of embodiments', and in the claims, AND an auxiliary
device is moreover provided.
[0055] The hearing system may be adapted to establish a communication link between the hearing
aid and the auxiliary device to provide that information (e.g. control and status
signals, possibly audio signals) can be exchanged or forwarded from one to the other.
[0056] The auxiliary device may comprise a remote control, a smartphone, or other portable
or wearable electronic device, such as a smartwatch or the like.
[0057] The auxiliary device may be constituted by or comprise a programming device (e.g.
running a fitting software for adapting processing of the hearing aid to the needs,
e.g. a hearing impairment, of the user of the hearing aid).
[0058] The auxiliary device may be constituted by or comprise a remote control for controlling
functionality and operation of the hearing aid(s). The function of a remote control
may be implemented in a smartphone, the smartphone possibly running an APP allowing
to control the functionality of the audio processing device via the smartphone (the
hearing aid(s) comprising an appropriate wireless interface to the smartphone, e.g.
based on Bluetooth or some other standardized or proprietary scheme).
[0059] The auxiliary device may be constituted by or comprise another hearing aid. The hearing
system may comprise two hearing aids adapted to implement a binaural hearing system,
e.g. a binaural hearing aid system.
[0060] The auxiliary device may e.g. be or comprise a programming device, e.g. implementing
a fitting system of the hearing aid. The auxiliary device may comprise a charging
station comprising a memory (e.g. acting as an intermediate storage medium, e.g. of
'day-to-day data' from the datalogger, cf. e.g. FIG. 7). The auxiliary device may
comprise a communication interface allowing the (wired or wireless) communication
link to the hearing aid to be established. The communication interface(s) may comprise
appropriate antenna and transceiver circuitry to implement a wireless link, e.g. based
on Bluetooth or similar technology. The auxiliary device may comprise a communication
interface allowing a connection to a server on a network, e.g. the Internet, e.g.
'in the cloud', to be established. Thereby data from the datalogger received from
the hearing aid may be relayed from the auxiliary device (e.g. a cellphone or a charging
station for the hearing aid) to a server accessible for analysis of the data, e.g.
by a fitting system for the hearing aid.
[0061] The hearing system may be configured to download data from said datalogger to said
auxiliary device. The auxiliary device may comprise a memory for storing data from
the datalogger of the hearing aid. The auxiliary device may comprise an analyzing
unit for analyzing data stored in the datalogger of the hearing aid and/or stored
in the memory of the auxiliary device originating from the datalogger of the hearing
aid. Data in the memory may originate from different time periods, e.g. time periods
that together span more than one week, such as more than one month, such as more than
6 months. The auxiliary device may be configured to extract changes over time of said
data originating from the datalogger of the hearing aid. The changes over time may
relate to the user's vocal activity, e.g. in connection with other persons' vocal
activity (e.g. related to conversations vs. passive listening). The auxiliary device
may comprise a user interface, allowing a user to interact with the auxiliary device,
e.g. via a touch sensitive display and/or a keyboard. The user interface may be configured
to allow a user to display results of an analysis of the data from the datalogger. The
user interface may e.g. allow a user (e.g. a hearing care professional) to access
data from the datalogger originating from previous observation periods, thereby allowing
a development or trend in user behavior to be extracted from the data.
An APP:
[0062] In a further aspect, a non-transitory application, termed an APP, is furthermore
provided by the present disclosure. The APP comprises executable instructions configured
to be executed on an auxiliary device to implement a user interface for a hearing
aid or a hearing system described above in the 'detailed description of embodiments',
and in the claims. The APP may be configured to run on a cellular phone, e.g. a smartphone,
or on another portable (or stationary) electronic device allowing communication with
said hearing aid or said hearing system (e.g. a charging station).
[0063] The APP may implement a Datalogging APP, from which a user may configure the datalogger.
The user may e.g. select the data that should be logged, e.g. own-voice data, other
voice data, internal and external
sensor data (e.g. sensors or detectors related to an acoustic environment, and/or
to a state of the user, e.g. a mental state). The sensors or detectors that may be
selected for logging together with the voice activity data may include a movement sensor,
a sound quality detector, a detector of body signals, e.g. brainwaves (e.g. EEG),
a PPG sensor, etc. The APP may further allow the user to off-load logged data to another
device or system, e.g. to a fitting system, to a smartphone or to a charging station,
etc. (see e.g. FIG. 7). The APP may further allow the user to select a strategy or
scheme for off-loading logged data to another device or system (e.g. among a number
of predefined schemes).
Definitions:
[0064] In the present context, a hearing aid, e.g. a hearing instrument, refers to a device,
which is adapted to improve, augment and/or protect the hearing capability of a user
by receiving acoustic signals from the user's surroundings, generating corresponding
audio signals, possibly modifying the audio signals and providing the possibly modified
audio signals as audible signals to at least one of the user's ears. Such audible
signals may e.g. be provided in the form of acoustic signals radiated into the user's
outer ears, acoustic signals transferred as mechanical vibrations to the user's inner
ears through the bone structure of the user's head and/or through parts of the middle
ear as well as electric signals transferred directly or indirectly to the cochlear
nerve of the user.
[0065] The hearing aid may be configured to be worn in any known way, e.g. as a unit arranged
behind the ear with a tube leading radiated acoustic signals into the ear canal or
with an output transducer, e.g. a loudspeaker, arranged close to or in the ear canal,
as a unit entirely or partly arranged in the pinna and/or in the ear canal, as a unit,
e.g. a vibrator, attached to a fixture implanted into the skull bone, as an attachable,
or entirely or partly implanted, unit, etc. The hearing aid may comprise a single
unit or several units communicating (e.g. acoustically, electrically or optically)
with each other. The loudspeaker may be arranged in a housing together with other
components of the hearing aid, or it may be an external unit in itself (possibly in
combination with a flexible guiding element, e.g. a dome-like element).
[0066] More generally, a hearing aid comprises an input transducer for receiving an acoustic
signal from a user's surroundings and providing a corresponding input audio signal
and/or a receiver for electronically (i.e. wired or wirelessly) receiving an input
audio signal, a (typically configurable) signal processing circuit (e.g. a signal
processor, e.g. comprising a configurable (programmable) processor, e.g. a digital
signal processor) for processing the input audio signal and an output unit for providing
an audible signal to the user in dependence on the processed audio signal. The signal
processor may be adapted to process the input signal in the time domain or in a number
of frequency bands. In some hearing aids, an amplifier and/or compressor may constitute
the signal processing circuit. The signal processing circuit typically comprises one
or more (integrated or separate) memory elements for executing programs and/or for
storing parameters used (or potentially used) in the processing and/or for storing
information relevant for the function of the hearing aid and/or for storing information
(e.g. processed information, e.g. provided by the signal processing circuit), e.g.
for use in connection with an interface to a user and/or an interface to a programming
device. In some hearing aids, the output unit may comprise an output transducer, such
as e.g. a loudspeaker for providing an air-borne acoustic signal or a vibrator for
providing a structure-borne or liquid-borne acoustic signal. In some hearing aids,
the output unit may comprise one or more output electrodes for providing electric
signals (e.g. to a multi-electrode array) for electrically stimulating the cochlear
nerve (cochlear implant type hearing aid).
[0067] In some hearing aids, the vibrator may be adapted to provide a structure-borne acoustic
signal transcutaneously or percutaneously to the skull bone. In some hearing aids,
the vibrator may be implanted in the middle ear and/or in the inner ear. In some hearing
aids, the vibrator may be adapted to provide a structure-borne acoustic signal to
a middle-ear bone and/or to the cochlea. In some hearing aids, the vibrator may be
adapted to provide a liquid-borne acoustic signal to the cochlear liquid, e.g. through
the oval window. In some hearing aids, the output electrodes may be implanted in the
cochlea or on the inside of the skull bone and may be adapted to provide the electric
signals to the hair cells of the cochlea, to one or more hearing nerves, to the auditory
brainstem, to the auditory midbrain, to the auditory cortex and/or to other parts
of the cerebral cortex.
[0068] A hearing aid may be adapted to a particular user's needs, e.g. a hearing impairment.
A configurable signal processing circuit of the hearing aid may be adapted to apply
a frequency and level dependent compressive amplification of an input signal. A customized
frequency and level dependent gain (amplification or compression) may be determined
in a fitting process by a fitting system based on a user's hearing data, e.g. an audiogram,
using a fitting rationale (e.g. adapted to speech). The frequency and level dependent
gain may e.g. be embodied in processing parameters, e.g. uploaded to the hearing aid
via an interface to a programming device (fitting system), and used by a processing
algorithm executed by the configurable signal processing circuit of the hearing aid.
[0069] A 'hearing system' refers to a system comprising one or two hearing aids, and a 'binaural
hearing system' refers to a system comprising two hearing aids and being adapted to
cooperatively provide audible signals to both of the user's ears. Hearing systems
or binaural hearing systems may further comprise one or more 'auxiliary devices',
which communicate with the hearing aid(s) and affect and/or benefit from the function
of the hearing aid(s). Such auxiliary devices may include at least one of a remote
control, a remote microphone, an audio gateway device, an entertainment device, e.g.
a music player, a wireless communication device, e.g. a mobile phone (such as a smartphone)
or a tablet or another device, e.g. comprising a graphical interface. Hearing aids,
hearing systems or binaural hearing systems may e.g. be used for compensating for
a hearing-impaired person's loss of hearing capability, augmenting or protecting a
normal-hearing person's hearing capability and/or conveying electronic audio signals
to a person. Hearing aids or hearing systems may e.g. form part of or interact with
public-address systems, active ear protection systems, handsfree telephone systems,
car audio systems, entertainment (e.g. TV, music playing or karaoke) systems, teleconferencing
systems, classroom amplification systems, etc.
[0070] Embodiments of the disclosure may e.g. be useful in applications such as hearing
aids for compensation for a user's hearing impairment.
BRIEF DESCRIPTION OF DRAWINGS
[0071] The aspects of the disclosure may be best understood from the following detailed
description taken in conjunction with the accompanying figures. The figures are schematic
and simplified for clarity, and they just show details to improve the understanding
of the claims, while other details are left out. Throughout, the same reference numerals
are used for identical or corresponding parts. The individual features of each aspect
may each be combined with any or all features of the other aspects. These and other
aspects, features and/or technical effect will be apparent from and elucidated with
reference to the illustrations described hereinafter in which:
FIG. 1 shows a first embodiment of a hearing aid comprising a datalogger according
to the present disclosure,
FIG. 2 shows a second embodiment of a hearing aid comprising a datalogger according
to the present disclosure,
FIG. 3 shows a first time sequence reflecting a conversation of the user of the hearing
aid with another person as detected by an own voice detector and a voice activity
detector,
FIG. 4 shows a second time sequence reflecting a varying acoustic environment of the
user of the hearing aid, including sub-sequences reflecting a varying degree of speech-participation
by the user,
FIG. 5 shows an embodiment of a hearing system comprising a hearing aid and a programming
device according to the present disclosure,
FIG. 6 schematically illustrates an example of data aggregation according to the present
disclosure,
FIG. 7 schematically illustrates a hearing aid system according to the present disclosure,
wherein an external processor and memory are built into a charging station for a hearing
aid or a pair of hearing aids, which can be used to offload data from a datalogger
of the hearing aid(s), and
FIG. 8 shows an embodiment of a hearing aid according to the present disclosure comprising
a BTE-part located behind an ear of a user and an ITE part located in an ear canal
of the user, and an auxiliary device comprising a user interface.
[0072] The figures are schematic and simplified for clarity, and they just show details
which are essential to the understanding of the disclosure, while other details are
left out. Throughout, the same reference signs are used for identical or corresponding
parts.
[0073] Further scope of applicability of the present disclosure will become apparent from
the detailed description given hereinafter. However, it should be understood that
the detailed description and specific examples, while indicating preferred embodiments
of the disclosure, are given by way of illustration only. Other embodiments may become
apparent to those skilled in the art from the following detailed description.
DETAILED DESCRIPTION OF EMBODIMENTS
[0074] The detailed description set forth below in connection with the appended drawings
is intended as a description of various configurations. The detailed description includes
specific details for the purpose of providing a thorough understanding of various
concepts. However, it will be apparent to those skilled in the art that these concepts
may be practiced without these specific details. Several aspects of the apparatus
and methods are described by various blocks, functional units, modules, components,
circuits, steps, processes, algorithms, etc. (collectively referred to as "elements").
Depending upon particular application, design constraints or other reasons, these
elements may be implemented using electronic hardware, computer program, or any combination
thereof.
[0075] The electronic hardware may include micro-electronic-mechanical systems (MEMS), integrated
circuits (e.g. application specific), microprocessors, microcontrollers, digital signal
processors (DSPs), field programmable gate arrays (FPGAs), programmable logic devices
(PLDs), gated logic, discrete hardware circuits, printed circuit boards (PCB) (e.g.
flexible PCBs), and other suitable hardware configured to perform the various functionality
described throughout this disclosure, e.g. sensors, e.g. for sensing and/or registering
physical properties of the environment, the device, the user, etc. Computer program
shall be construed broadly to mean instructions, instruction sets, code, code segments,
program code, programs, subprograms, software modules, applications, software applications,
software packages, routines, subroutines, objects, executables, threads of execution,
procedures, functions, etc., whether referred to as software, firmware, middleware,
microcode, hardware description language, or otherwise.
[0076] The present application relates to the field of hearing aids, in particular to data
logging.
[0077] FIG. 1 shows a first embodiment of a hearing aid comprising a datalogger according
to the present disclosure. FIG. 1 schematically illustrates a hearing aid (HA) configured
to be worn at or in an ear of a user (or for being partially or fully implanted in
the head at an ear of the user). The hearing aid (HA) comprises an input unit (IU).
The input unit may e.g. comprise one or more input transducers, e.g. one or more microphones,
configured to pick up sound (Acoustic input) from the environment of the hearing aid
and to provide at least one electric input signal (IN) representing the sound. The
input unit (IU) may comprise an analogue to digital converter for converting an analogue
signal to a digitized signal (e.g. with a specific sampling frequency, e.g. fs = 20 kHz).
The input unit (IU) may further comprise an analysis filter bank for converting
a (e.g. digitized) time domain signal to a time-frequency domain signal (e.g. represented
as a multitude of frequency sub-band signals, each representing a frequency sub-range
of the frequency range of operation of the hearing aid). The hearing aid (HA) further
comprises an own voice detector (OVD) configured to detect whether or not, or with
what probability, the at least one electric input signal (IN), or a processed version
thereof, comprises a voice from the user of the hearing aid, and to provide a user
voice control signal (UVC) indicative thereof. The hearing aid (HA) further comprises
a datalogger (DLOG) for logging over time data related to the use of the hearing aid,
including absolute or relative time periods of own voice activity in dependence of
the own voice control signal. The hearing aid may be configured to log parameters
of the current acoustic environment, including the own voice control signal, over
time according to a predefined or adaptively determined scheme. The hearing aid may
be configured to log parameters of the current acoustic environment with a specific
log frequency, e.g. with a frequency larger than 0.1 Hz. The logged data may be (temporarily)
stored in a memory of the datalogger. The hearing aid (HA) further comprises a processor
(PRO) for applying one or more processing algorithms to the at least one electric
input signal (IN). The one or more processing algorithms may include one or more of
a compressive amplification algorithm configured to compensate for a hearing impairment
of the user, a noise reduction algorithm, a feedback control algorithm, a directional
beamforming algorithm, etc. The processor (PRO) provides a processed signal (OUT)
representing sound (e.g. the sound picked up by the input unit (IU), and/or sound
received from another device), which is fed to an output unit (OU). The output unit
is configured to provide stimuli perceivable as sound to the user based on the processed
signal (OUT). The output unit (OU) may comprise an output transducer, e.g. a loudspeaker
for providing air-conducted sound, or a vibrator for providing bone-conducted sound.
The output unit (OU) may comprise a multi-electrode array for directly stimulating
the cochlear nerve of an ear of the user. The output unit may further comprise a synthesis
filter bank in case the processed output signal (OUT) comprises a multitude of frequency sub-band
signals (time-frequency domain signal) and/or a digital to analogue converter for
converting a digitized signal to an analogue signal according to the specific application.
The signal path from the input unit (IU) to the output unit (OU) via the processor
(PRO) defines a forward path of the hearing aid (for processing the input sound to
an output signal perceivable as sound to the user). The hearing aid further comprises
a communication interface (IF), e.g. comprising an appropriate connector or antenna
and transceiver circuitry, allowing data to be exchanged between the hearing aid and
another device or system, e.g. via a network. The communication interface (IF) may
be based on near-field (e.g. inductive) communication or on far-field communication
(e.g. based on Bluetooth or similar technologies).
[0078] FIG. 2 shows a second embodiment of a hearing aid comprising a datalogger according
to the present disclosure. The embodiment of a hearing aid (HA) in FIG. 2 comprises
the same elements as the embodiment described in connection with FIG. 1. In addition,
the embodiment of FIG. 2 comprises further detectors (DET) including e.g. an SNR estimator
and/or a level estimator to monitor the acoustic environment. The embodiment of FIG.
2 comprises separate own voice (OVD) and voice activity detectors (VAD) providing
respective indicators OVC and VAC regarding the presence of the user's voice ('own
voice') and other voices, respectively. Other voices may include or exclude the user's
voice as considered practical in the specific application in question. As mentioned,
the user's voice may (if necessary) be excluded from the other-voice indication by an
appropriate combination of the two indicators (OVC, VAC). The hearing aid may thereby
be configured to log the voice activity (e.g. a level of activity) of the user as
well as of other persons in the environment of the user. The hearing aid may e.g.
be configured to log data concerning a sound environment (e.g. level, SNR) at least
during specific time periods, e.g. periods of (a certain) own voice activity (e.g.
OVC=1, or ≥ 50%), and/or triggered by specific events, e.g. during changes of an acoustic
environment. The embodiment of a hearing aid (HA) of FIG. 2 comprises a detector unit
(DET), which may comprise a level estimator for estimating a current level of the
at least one electric input signal (IN) or of a signal derived therefrom. The hearing
aid (e.g. the detector unit (DET)) may comprise an estimator of signal quality, e.g.
of SNR, of the at least one electric input signal (IN) or of a signal derived therefrom.
The hearing aid may comprise an estimator of an ambient noise level, which may be
estimated using the level detector and available voice activity detector(s), e.g.
by making a noise estimate during speech pauses as determined by the voice indicator(s)
(OVC, VAC). A crude SNR may then be estimated by Level(voice)/Level(noise), the mentioned
levels being e.g. determined during speech (e.g. Signal level = Level(VAC = 1)) and
speech pauses (e.g. Noise level = Level(VAC = 0)). By monitoring the acoustic environment
(including a noise level) the hearing aid may be configured to log the
conditions for engaging in a conversation.
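By way of example, such a crude per-window SNR estimate may be sketched as follows
(Python; dB-domain levels are assumed, so the ratio Level(voice)/Level(noise) becomes a
difference; all names are illustrative):

    def crude_snr_db(levels_db, vac):
        # levels_db: per-frame level estimates (dB); vac: per-frame voice flags
        speech = [l for l, v in zip(levels_db, vac) if v]       # VAC = 1
        noise = [l for l, v in zip(levels_db, vac) if not v]    # VAC = 0
        if not speech or not noise:
            return None   # cannot estimate without both speech and pauses
        # Level(voice)/Level(noise) corresponds to a subtraction in dB
        return sum(speech) / len(speech) - sum(noise) / len(noise)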
[0079] The hearing aid (HA) may be configured to log data representing a currently requested
gain from a compressive amplification algorithm of the hearing aid (cf. signal GRQ
from the processor (PRO) to the datalogger (DLOG) in FIG. 2). The compressive amplification
algorithm may be configured to provide frequency and level dependent gain (amplification
or attenuation) to the at least one electric input signal or to a signal derived therefrom.
The requested gain or changes to the requested gain reflects properties of the current
acoustic environment of the user.
[0080] The hearing aid may be configured to further log absolute or relative time periods
of NO own voice activity. In the embodiment of FIG. 2, the datalogger comprises or
interfaces to a timing unit (cf. unit TIME in FIG. 2) providing an absolute time or
a relative time elapsed, e.g. since the last power up of the hearing aid (the latter
may be relatively easily determined by an appropriate counter and knowledge of the
relevant clock frequency of the hearing device). By observing a specific time window,
the tracking of own voice activity as the sum of time segments wherein the own voice
indicator is high (e.g. = 1, user voice detected) and no own voice activity as the
sum of time segments wherein the own voice indicator is low (e.g. = 0, no user voice
detected) can be combined, e.g. to define a ratio of vocal activity to vocal pauses
of the user (or vocal activity to total time elapsed). From this ratio, a degree of
active user engagement in conversations and a degree of passive listening (e.g.
to the TV) can be estimated.
[0081] The logged data for a given time window (e.g. from power on of the hearing aid to
power off, e.g. corresponding to a single day of normal operation) or for several
time windows, e.g. corresponding to a larger period of time, e.g. a week or a month,
or the like, can be stored in a memory of the hearing aid. The logged data (DATA)
can e.g. - via the communication interface (IF) - be transferred to a different apparatus
(e.g. a smartphone, a similar processing device, or a fitting system) which is capable
of analyzing and/or possibly further processing the data and/or of presenting the
data in a user interface. The hearing aid may e.g. be configured to off-load logged
data to another device or server (e.g. in the cloud) according to a specific or adaptive
scheme, e.g. in dependence of a current amount of logged data (or rest-capacity of
a memory), or of a measure of a time elapsed.
[0082] An absolute timing (e.g. a time of day) may e.g. be obtained from specific timing-circuitry,
e.g. included in the hearing aid, e.g. in communication with a time standard (e.g.
the DCF77 in Frankfurt), or from another device (e.g. a smartphone or similar device,
e.g. a watch) or from a network, e.g. including from a server.
[0083] The logging of data related to the user's (active) participation in conversations
is illustrated in FIG. 3 and 4.
[0084] FIG. 3 shows a first time sequence reflecting a conversation of the user of the hearing
aid with another person as detected by an own voice detector and a voice activity
detector. FIG. 3 shows values of different voice indicators (here control signals
UVC (representing the user's voice) and OVC (representing other voice(s))) versus
time (Time) for a time segment of an electric input signal of the hearing aid. FIG.
3 shows an output of a voice activity detector that is capable of differentiating
a user's voice from other voices in an environment of the user wearing the hearing
aid. The vocal activity or inactivity of the user or other persons is implied by control
signals UVC or OVC, respectively, being 1 or 0 (could also or alternatively be indicated
by a speech presence probability (SPP) being above or below a threshold, respectively).
In the time sequence depicted in FIG. 3, the top graph represents vocal activity of
the user (between time tu,1 and tu,2 (time period ΔtUser(1) = tu,2 - tu,1) and between
time tu,3 and tu,4 (time period ΔtUser(2) = tu,4 - tu,3)). The middle graph represents
vocal activity of other persons (between time to,1 and to,2 (time period ΔtOther(1) =
to,2 - to,1)), and the lower graph represents vocal activity of the user and other persons
in combination (at the times and time periods indicated for the top and middle graphs). In the bottom
graph time periods of user voice and other persons' voice are indicated by different
filling. An analysis of the combination of indicators (UVC and OVC, respectively)
of the presence or absence of user voice and other persons' voice may reveal a possible
conversation with participation of the user. A conversation involving the user may
be identified by a sequential (alternating) occurrence of user voice (UVC) and other
voice (OVC) indicators over a time period. In the simplified example of FIG. 3, a
conversation involving the user from time tu,1 to tu,4 (i.e. over a total time period
of tu,4 - tu,1 = ΔtUser(1) + ΔtOther(1) + ΔtUser(2)) can be identified. During analysis,
a criterion regarding the distance ΔtUser-Other in time between the user voice indicator
(UVC) shifting from active to inactive and the other person's voice indicator (OVC)
shifting from inactive to active (or vice versa) may be applied. For the two 'transitions'
of FIG. 3, ΔtUser-Other = to,1 - tu,2 and to,2 - tu,3, respectively. Such a criterion
may e.g. be ΔtUser-Other ≤ 2 s. A slight overlap may be accepted, and a further criterion
may e.g. be ΔtUser-Other ≥ -2 s (thereby accepting a small period of 'double-talk').
A further criterion regarding the time period of each single period of active voice
of the user (and/or the other person(s)) may be imposed, e.g. ΔtUser(j) ≥ Δtu,min,
j = 1, ..., J, where J is the number of 'contributions' of the user in a given conversation
(in FIG. 3, J = 2). The minimum duration Δtu,min may e.g. be 5 s. Other analysis criteria may
relate to the average length of the 'contributions' of the user <ΔtUser(j)> in a given
conversation (j=1, ..., J) and/or over all conversations of a given time period (e.g.
a day or a week).
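As an illustration only, the criteria above may be expressed in executable form. The following Python sketch assumes that voice activity has been reduced to lists of (start, end) intervals; the function name and the threshold values (2 s and 5 s, taken from the example above) are illustrative assumptions, not a prescribed implementation.

```python
# Minimal sketch of the conversation-identification criteria of [0084]:
# alternating user/other turns, turn-taking gap within [-2 s, 2 s],
# and a minimum duration per contribution of 5 s.

MAX_GAP = 2.0      # Δt_User-Other ≤ 2 s (pause between turns)
MAX_OVERLAP = 2.0  # Δt_User-Other ≥ -2 s (tolerated 'double-talk')
MIN_TURN = 5.0     # Δt_u,min: minimum duration of a single contribution

def is_conversation(user_turns, other_turns):
    """Each argument is a list of (t_start, t_end) tuples in seconds."""
    turns = sorted([(s, e, 'user') for s, e in user_turns] +
                   [(s, e, 'other') for s, e in other_turns])
    if len(turns) < 2:
        return False
    if any(e - s < MIN_TURN for s, e, _ in turns):
        return False                      # a contribution is too short
    for (_, e1, who1), (s2, _, who2) in zip(turns, turns[1:]):
        if who1 == who2:                  # no alternation of speakers
            return False
        gap = s2 - e1                     # negative gap means overlap
        if gap > MAX_GAP or gap < -MAX_OVERLAP:
            return False
    return True

# FIG. 3 pattern: user turn, other turn, user turn (J = 2 user contributions).
print(is_conversation([(0.0, 6.0), (14.0, 20.0)], [(6.5, 13.5)]))  # True
```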
[0085] FIG. 4 shows a second time sequence reflecting a varying acoustic environment of
the user of the hearing aid, including sub-sequences reflecting a varying degree of
speech-participation by the user. FIG. 4 schematically illustrates a time window wherein time-dependent values of indicators of the user's voice (UVC) and other persons' voice (VAC) are shown (an 'active indication' of the respective UVC and VAC indicators is shown by different fillings, as in FIG. 3, bottom). The time window comprises two time periods that indicate a user in conversation with another person, two time periods that indicate silence (or no significant voice activity), and one time period of another person's voice (without user participation, e.g. reflecting another person talking without the user replying, e.g. voice from a radio, TV or other audio delivery device, or a person talking in the environment of the user). The time window of FIG. 4 has a range from t_1 to t_6, i.e. spans a time period of duration Δt_w = t_6 - t_1. The time window of FIG. 4 comprises in consecutive order: (a 1st period of) 'conversation', (a 1st period of) 'silence', (a 1st period of) 'one way speech', (a 2nd period of) 'silence', and (a 2nd period of) 'conversation'. The individual time periods of each acoustic event ('conversation' (user voice, another voice), 'one way speech' (either the user or another voice), 'silence' (no voice)) may e.g. be estimated based on the logged data, either in the hearing aid or in another device or system to which the data are transferred.
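A hedged sketch of how such labelling could be performed on logged indicator data is given below. The block granularity and the 0/1 flag encoding are assumptions, and a simple co-occurrence test is used in place of the full alternation analysis (cf. the sketch after paragraph [0084]) for brevity.

```python
# Illustrative labelling of one analysis block into the acoustic events of
# FIG. 4 from logged voice-activity flags (0/1). Co-occurrence of user and
# other voice is taken as 'conversation' here; a finer analysis could apply
# the alternation criteria of [0084] instead.

def label_block(uvc_flags, vac_flags):
    user_active = any(uvc_flags)    # user's own voice seen in the block
    other_active = any(vac_flags)   # other voice(s) seen in the block
    if user_active and other_active:
        return 'conversation'
    if user_active or other_active:
        return 'one way speech'
    return 'silence'

print(label_block([0, 1, 1, 0], [1, 0, 0, 1]))  # -> 'conversation'
print(label_block([0, 0, 0, 0], [1, 1, 1, 0]))  # -> 'one way speech'
print(label_block([0, 0, 0, 0], [0, 0, 0, 0]))  # -> 'silence'
```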
[0086] The data logged over time (cf. time windows as illustrated in FIG. 3, 4, in practice comprising more acoustic events and representing longer time periods, e.g. days or weeks) and their subsequent analysis may allow extraction of information regarding the user's (voiced) social activity, e.g. in dependence of the acoustic environment (noisy environments may result in decreased activity), or in dependence of the time of day (a decrease with time of day, or with time from power-on of the hearing aid, may e.g. reflect some sort of cognitive fatigue). The analysis may result in changes being made to the processing of the hearing aid (e.g. increased noise reduction and/or more directionality in noisy environments). The logged data may e.g. be used to extract information about the complexity (and length) of conversations engaged in by the user, and in particular about changes in such parameters.
[0087] The repeated logging over time of own voice activity, other voice activity, input
signal level (e.g. low, medium, high), noise level and/or signal-to-noise ratio may
e.g. allow such information to be extracted.
[0088] Corresponding values of the parameters (P_1, ..., P_Q), Q being the number of logged parameters, e.g. Q ≤ 5, may e.g. be logged with a predefined frequency f_L, e.g. every 100 ms (i.e. f_L = 10 Hz). The logged data may e.g. be up-loaded (off-loaded) to another device or server at predefined intervals, e.g. every 5 minutes, every hour, or once a day (e.g. as part of a power-off procedure).
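A minimal sketch of such a log, assuming Q = 4 parameters (own voice, other voice, level, SNR) sampled at f_L = 10 Hz; the row format and field names are illustrative assumptions.

```python
# Minimal sketch of periodic parameter logging at f_L = 10 Hz (one row per
# 100 ms). Field names (ovd, vad, level_db, snr_db) are illustrative.

F_L = 10.0   # logging frequency in Hz (one sample every 1/F_L = 0.1 s)
LOG = []     # stands in for the datalogger memory of the hearing aid

def log_once(t_s, own_voice, other_voice, level_db, snr_db):
    LOG.append({'t_s': t_s, 'ovd': own_voice, 'vad': other_voice,
                'level_db': level_db, 'snr_db': snr_db})

# Called from a 1/F_L-second timer; here two hand-made samples:
log_once(0.0, 1, 0, 62.0, 4.5)
log_once(0.1, 1, 0, 63.0, 4.7)
print(len(LOG))  # 2 rows, each holding Q = 4 logged parameters
```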
[0089] The hearing aid may be configured to take specific measures in case the intended (planned) off-loading of the logged parameters (to empty the memory and make room for new data) cannot be performed, e.g. due to lack of a communication link, lack of power of the hearing aid, lack of availability of the receiving device or system, etc. Such specific measures may be to minimize the amount of data (and thus be able to cover a longer time window) by averaging values of the parameters (P_1, ..., P_Q) over time. The parameters may be averaged over different time periods, e.g. so that voice detection data (in particular own voice detection data) are prioritized over other parameters, e.g. level or SNR, which may be assumed to vary more slowly than the dynamic events of a conversation.
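One possible realization of such parameter-dependent averaging, operating on the row format of the previous sketch, is outlined below; the block sizes (5 samples for voice flags, 50 for level/SNR) are assumptions chosen only to illustrate the prioritization.

```python
# Hedged sketch of the fall-back compression of [0089]: when off-loading
# fails, slowly varying parameters (level, SNR) are averaged over longer
# blocks than the voice-activity flags, preserving conversation dynamics.

def block_means(values, size):
    return [sum(values[i:i + size]) / len(values[i:i + size])
            for i in range(0, len(values), size)]

def compress(rows, fast_keys=('ovd', 'vad'), slow_keys=('level_db', 'snr_db'),
             fast_block=5, slow_block=50):
    """rows: equally spaced logged samples (dicts, as in the sketch above)."""
    fast = {k: block_means([r[k] for r in rows], fast_block) for k in fast_keys}
    slow = {k: block_means([r[k] for r in rows], slow_block) for k in slow_keys}
    return fast, slow   # 5x / 50x fewer values than the raw log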
[0090] FIG. 5 shows an embodiment of a hearing system comprising a hearing aid and a programming
device according to the present disclosure. In the hearing system of FIG. 5, the hearing
aid (HA) is in communication with a programming device (PD, e.g. a fitting system
or other processing device, e.g. a smartphone). The communication is e.g. via a direct
link (LINK) or via a network. The programming device (PD) comprises a communication
interface (IF) allowing to establish a communication link to the hearing aid and to
receive data from and transmit data to the hearing aid. The programming device (PD)
may e.g. receive logged data from the hearing aid and store the data in a memory (MEM)
for analysis by an analyser (ANA) and possibly further processing of the data in a
processing unit (COMP). The processing unit may comprise a digital signal processor
configured to run fitting software of the hearing aid, e.g. to adapt processing parameters
of the processor (PRO) of the hearing aid to the needs of a particular user (cf. double-arrowed line between the processor (PRO) and the communication interface (IF) of the hearing aid (HA)). The programming device (PD) further comprises a user interface coupled
to the processing unit (COMP), the memory (MEM) and the analyser (ANA). The user interface
comprises a visual display (DISP) and a keyboard (KEYB) allowing data to be displayed,
e.g. graphically (DISP) and data to be entered (KEYB). The programming device may
(via the memory) e.g. have access to logged data from several time windows, e.g. representing
observations over a time span of weeks or months. The programming device may have
access to corresponding data from the available detectors in a time series spanning the
mentioned period of weeks or months, e.g. including voice activities of the user and
other persons in the environment of the user in a time resolution that allows an analysis
of changes in the user's social vocal activity to be identified. In the screen of
the display, a schematic comparison of logged data for a particular user for two different
time periods is shown. A development in the user's active participation in conversations
is (schematically) indicated from less in time period TP#1 to more in time period
TP#2. This may be a result of changed parameter settings of the hearing aid (or the
fitting of another (improved) hearing aid model) between time period TP#1 and TP#2,
or it may be the result of a deliberate effort of the user to be more active (or both).
The results of the analysis may be inputs to a discussion with the user about his
or her satisfaction with the hearing aid, and/or to the changing of parameter settings,
fitting of a new hearing aid with improved features, etc. Important learnings from the data are possible changes (over time) in the length and complexity of the user's conversations with other people, which may be taken as an indication of improved social engagement (decreased self-isolation).
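By way of illustration, a comparison such as the one shown on the display could be derived from per-period summary statistics of the logged conversations; the record format below ('duration_s', 'user_turns') is an illustrative assumption.

```python
# Illustrative analysis behind the FIG. 5 comparison screen: summary
# statistics of logged conversations for two time periods (TP#1, TP#2).

def summarize(conversations):
    n = len(conversations)
    if n == 0:
        return {'count': 0, 'mean_duration_s': 0.0, 'mean_user_turns': 0.0}
    return {'count': n,
            'mean_duration_s': sum(c['duration_s'] for c in conversations) / n,
            'mean_user_turns': sum(c['user_turns'] for c in conversations) / n}

tp1 = [{'duration_s': 40, 'user_turns': 2}, {'duration_s': 65, 'user_turns': 3}]
tp2 = [{'duration_s': 90, 'user_turns': 5}, {'duration_s': 120, 'user_turns': 6},
       {'duration_s': 75, 'user_turns': 4}]
print(summarize(tp1))  # fewer, shorter conversations in TP#1
print(summarize(tp2))  # more, longer conversations in TP#2
```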
[0091] FIG. 6 schematically illustrates an example of data aggregation according to the
present disclosure. FIG. 6 shows values of averaged parameters ('Parameters averaged
over Δt', normalized scale), e.g. voice activity detection, over time ('Time [s]').
[0092] Each 'vertical box' represents a data container (DataC). Each data container holds a value (e.g. an average value) of one or more 'conversation parameters' intended for being logged by the hearing aid (or an external device connected to the hearing aid). A conversation parameter may e.g. be a ratio of time periods with voice activity
(e.g. own voice activity) to time periods of speech pauses.
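As a worked illustration of such a conversation parameter, assuming the logged voice-activity durations for one observation window are available (argument names are assumptions):

```python
# Illustrative computation of one 'conversation parameter': the ratio of
# time with own-voice activity to time with speech pauses in one window.

def voice_pause_ratio(own_voice_s, other_voice_s, window_s):
    pause_s = window_s - own_voice_s - other_voice_s   # no-voice remainder
    if pause_s <= 0:
        return float('inf')   # window fully voiced: no pauses to divide by
    return own_voice_s / pause_s

print(voice_pause_ratio(own_voice_s=120.0, other_voice_s=150.0,
                        window_s=600.0))  # 120 / 330 ≈ 0.36
```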
[0093] FIG. 6 shows a multitude of observation windows of variable duration in time, here t_1, t_2, ..., t_n, t_n+1, .... Each data container (DataC) of a given observation window t_n has a common width Δt_n representing the time range that the data of that data container represent, e.g. a single value sampled in the time range Δt_n or an average of values sampled in the time range Δt_n. In the embodiment of FIG. 6, Δt_n, the time range of the data containers of observation window n, is indicated to be smaller than or equal to A, B, N, and INF for observation windows t_1, t_2, t_n, and t_n+1, respectively. It may be assumed that A < B < N < INF.
[0094] The observation windows (indexed by n) may e.g. be of increasing duration in time (t_1, t_2, ...). The duration in time may e.g. increase with increasing n, e.g. for n larger than a first threshold value n_th1. The duration in time of the observation windows may be different for different hearing aid models or styles (e.g. dependent on processor clock frequency, memory, processing algorithms, etc.). The duration in time of the observation windows may e.g. vary from t_1 being of the order of milliseconds to durations of the order of minutes or larger.
[0095] Each observation window (t_1, t_2, ...) contains a number N_DC,n of data containers (DataC). All observation windows (t_1, t_2, ...) may contain the same number N_DC of data containers (DataC).
[0096] The observation windows may comprise different numbers N_DC,n of data containers, e.g. decreasing with increasing n, e.g. for n larger than a second threshold value n_th2. The first and second threshold values of n (n_th1, n_th2) may be equal or different.
[0097] Regarding memory space, it is here assumed that each data container (DataC), irrespective of its width in time Δt_n, occupies the same space in the memory (because it holds the same number of data values).
[0098] The duration t_1 of the first observation window may e.g. be the shortest of the multitude of observation windows (n = 1, 2, ..., N_W) of variable duration in time. The 'time width' Δt_1 of the data containers of the first observation window may correspond to a sampling time (t_s = 1/f_s, where f_s is a sampling frequency), or a down-sampled version thereof, e.g. corresponding to the length of a time frame (e.g. Δt_1 = 3.2 ms for f_s = 20 kHz and 64 samples per time frame).
[0099] After N_DC,1 parameter values have been stored during the first observation window, the storage frequency is reduced in the second observation window (n = 2), e.g. in that a multitude (e.g. 5 or more) of successive sample values corresponding to Δt_2 are averaged and stored in each successive data container (DataC) of the second observation window. Thereby the use of memory for storage of the relevant data can be reduced. Likewise, after N_DC,2 (averaged) parameter values have been stored in the respective N_DC,2 data containers of the second observation window, the storage frequency is further reduced in the third observation window (e.g. by another factor of 5), etc. The reduction of the storage frequency can be repeated an arbitrary number of times. The reduction of the storage frequency can be terminated after a number of observation windows, after which the storage frequency is kept constant.
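The progressive reduction may be sketched as follows, assuming (for illustration) N_DC = 100 containers per window, a reduction factor of 5 as in the example above, and simple arithmetic averaging:

```python
# Sketch of the progressive storage-frequency reduction of [0093]-[0099]:
# observation window n stores averages of REDUCTION**n raw samples, so each
# later window covers a longer time range in the same number of containers.
# After the last window, the storage frequency is kept constant.

N_DC = 100       # data containers per observation window (assumed)
REDUCTION = 5    # factor between successive storage frequencies (cf. text)
N_WINDOWS = 4    # number of windows with distinct storage frequencies

class ProgressiveLogger:
    def __init__(self):
        self.windows = [[] for _ in range(N_WINDOWS)]  # container values
        self.block = []     # raw samples awaiting averaging
        self.n = 0          # index of the observation window in use

    def add_sample(self, value):
        self.block.append(value)
        if len(self.block) == REDUCTION ** self.n:     # container width Δt_n
            self.windows[self.n].append(sum(self.block) / len(self.block))
            self.block = []
            if len(self.windows[self.n]) == N_DC and self.n < N_WINDOWS - 1:
                self.n += 1   # next window: 5x coarser containers
```

With these assumed values, the four windows span 100, 500, 2500, and 12500 samples respectively for the same memory footprint per window, which is the point of the scheme.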
[0100] The strategy for successively reducing the storage frequency can be controlled by
a storage controller, e.g. in dependence of one or more of a memory size, a battery
status of the hearing aid, an estimated time to the next possible off-loading of logged
data, etc.
[0101] Thereby, it is possible to provide that even a memory of a relatively small size (as in a hearing aid) can hold data representing a relatively long time period, and thus capture relevant data representative of the time between data off-loads (nearly irrespective of how long that time is).
[0102] The logged data may e.g. be off-loaded to an external device (e.g. via an APP or directly, e.g. automatically, when the hearing aid is connected to the external device), e.g. to a memory of a portable device, e.g. a smartphone, or to a fitting system of the hearing aid. A reason for applying such a storage strategy is that it may be difficult to predict the time between data off-loads. A successful data off-load may depend on connectivity conditions at a given time (e.g. whether the data receiver (e.g. a smartphone or a fitting system) is within reach). A successful data off-load may depend on the hearing aid having sufficient power to establish a link to the receiving device or system, etc. A successful data off-load may depend on the receiving device or system being ready to receive data from the hearing aid (there may be other tasks that have higher priority than the reception of logged data from the hearing aid).
[0103] FIG. 7 schematically illustrates a hearing aid system according to the present disclosure,
wherein an external processor (PRO) and memory (MEM) are built into a charging station
(CHAS) for a hearing aid (HA1) or a pair of hearing aids (HA1, HA2). The charging
station (CHAS) can thereby be used to off-load data (LOGD) from a datalogger (cf.
DLOG in FIG. 1, 2) of the hearing aid(s). The charging station comprises a memory
(MEM) for receiving data from the datalogger.
[0104] The charging station (CHAS) comprises (antenna (ANT) and) transceiver circuitry (WLIF) for establishing a communication link (WL) to the hearing aid(s). The charging station may e.g. comprise one or more sensors for classifying the environment around the charging station, e.g. a microphone or other sensor, e.g. for monitoring background noise. The sensor data may be added to the logged data (LOGD) while the hearing aids are located in the charging station (and/or as long as the charging station (CHAS) and the hearing aids (HA1, HA2) are in communication via the communication link (WL), e.g. as long as the distance D between them is smaller than a maximum transmission/reception range of the link (WL)). The charging station may e.g. comprise an absolute clock whose time may be added to the logged data when the hearing aids are located in the charging station.
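As an illustrative sketch (field names are assumptions, not prescribed by the disclosure), the annotation of off-loaded records with the charging station's absolute clock and sensor data could look as follows:

```python
# Hedged sketch of the charging-station annotation of [0104]: while the
# hearing aids are docked (or within link range), off-loaded records may be
# stamped with the station's absolute clock and ambient-sensor readings.

from datetime import datetime, timezone

def annotate_offloaded(records, ambient_noise_db):
    """records: list of dicts received from the hearing aid datalogger."""
    now_utc = datetime.now(timezone.utc).isoformat()
    for r in records:
        r['offload_time_utc'] = now_utc            # absolute clock (CHAS)
        r['charger_noise_db'] = ambient_noise_db   # microphone at the station
    return records
```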
[0105] The processor (PRO) of the charging station may have larger processing power than a processor of the hearing aid (the wearable device). The processor may be configured to analyze the
logged data from the hearing aid(s). The charging station may be located on a support
(Support), e.g. a table, in an appropriate place with a view to being accessible to
the hearing aids when the user moves around.
[0106] The charging station may be a pocket-size, portable device comprising an interface
(PSIF), e.g. including a connector, to an electricity network, and/or a local (e.g.
rechargeable) battery (BAT) for charging a battery or batteries of the hearing aids
(HA1, HA2). Thereby the charging station can provide its function for a limited time,
even in the absence of access to the electricity network. The battery of the charging
station is assumed to have a significantly larger capacity than a battery of the hearing
aid.
[0107] The charging station may further comprise an interface (DIF) to a data network. The
interface is configured to establish a (here wireless) connection to the data network
(cf. link WLDL, e.g. WiFi) e.g. to provide access to servers, e.g. a fitting system,
on the Internet (cloud computing). Thereby the off-loaded data may be uploaded to
the fitting system via the data network.
[0108] FIG. 8 shows an embodiment of a hearing aid (HA) according to the present disclosure
comprising a BTE-part located behind an ear of a user and an ITE-part located in an
ear canal of the user, and an auxiliary device (AD) in communication with the hearing
aid comprising a user interface (UI). Together, the hearing aid (HA) and the auxiliary
device (AD) may constitute a hearing system according to the present disclosure.
[0109] FIG. 8 illustrates an exemplary hearing aid (HA) formed as a receiver in the ear
(RITE) type hearing aid comprising a BTE-part (BTE) adapted for being located behind pinna and a part (ITE) comprising an output transducer
(OT, e.g. a loudspeaker/receiver) adapted for being located in an ear canal (Ear canal)
of the user (e.g. exemplifying a hearing aid (HA) as shown in FIG. 1, 2). The BTE-part
(BTE) and the ITE-part (ITE) are connected (e.g. electrically connected, e.g. via a cable comprising a multitude of conductors, e.g. three or more, such as six or more) by a connecting element (IC). In the embodiment of a hearing aid of FIG. 8, the BTE part (BTE) comprises two input transducers (here microphones) (M_BTE1, M_BTE2), each providing an electric input audio signal representative of an input sound signal (S_BTE) from the environment (in the scenario of FIG. 8, from sound source S, e.g. a communication
partner). The hearing aid (HA) of FIG. 8 further comprises two wireless transceivers
(WLR_1, WLR_2) for receiving and/or transmitting signals (e.g. comprising audio and/or information,
e.g. logged data according to the present disclosure). The hearing aid (HA) further
comprises a substrate (SUB) whereon a number of electronic components are mounted,
functionally partitioned according to the application in question (analogue, digital,
passive components, etc.), but including a configurable digital signal processor (DSP),
a front-end chip (FE), and a memory unit (MEM) coupled to each other and to input
and output units via electrical conductors W_x. The mentioned functional units (as well as other components) may be partitioned in
circuits and components according to the application in question (e.g. with a view
to size, power consumption, analogue vs digital processing, etc.), e.g. integrated
in one or more integrated circuits, or as a combination of one or more integrated
circuits and one or more separate electronic components (e.g. inductor, capacitor,
etc.). The configurable signal processor (DSP) provides an enhanced audio signal (cf.
signal OUT in FIG. 1, 2), which is intended to be presented to a user. The front-end
integrated circuit (FE) is adapted for providing an interface between the configurable signal processor (DSP) and the input and output transducers, etc., and typically comprises interfaces between analogue and digital signals. The input and output transducers
may be individual separate components, or integrated (e.g. MEMS-based) with other
electronic circuitry. In the embodiment of a hearing aid device in FIG. 8, the ITE
part (ITE) comprises an output unit in the form of a loudspeaker (receiver) (SPK)
for converting the electric signal (OUT) to an acoustic signal (providing, or contributing to, acoustic signal S_ED at the ear drum (Ear drum)). The ITE-part further comprises an input unit comprising an input transducer (e.g. a microphone) (M_ITE) for providing an electric input audio signal representative of an input sound signal S_ITE from the environment at or in the ear canal. In another embodiment, the hearing aid may comprise only the BTE-microphones (M_BTE1, M_BTE2). In yet another embodiment, the hearing aid may comprise an input unit located elsewhere
than at the ear canal in combination with one or more input units located in the BTE-part
and/or the ITE-part. The ITE-part further comprises a guiding element, e.g. a dome,
(DO) for guiding and positioning the ITE-part in the ear canal of the user.
[0110] The hearing aid (HA) exemplified in FIG. 8 is a portable device and further comprises
a battery (BAT) for energizing electronic components of the BTE- and ITE-parts. The
hearing aid (HA) may be identical to the hearing aid(s) illustrated in FIG. 7.
[0111] The hearing aid (HA) may comprise a directional microphone system (e.g. a beamformer
filter) adapted to enhance a target acoustic source among a multitude of acoustic
sources in the local environment of the user wearing the hearing aid.
[0112] The memory unit (MEM) may form part of the datalogger and comprise logged data according
to the present disclosure.
[0113] The hearing aid of FIG. 8 may constitute or form part of a binaural hearing aid system
according to the present disclosure.
[0114] The hearing aid (HA) according to the present disclosure may comprise a user interface
UI, e.g. as shown in the bottom part of FIG. 8 implemented in an auxiliary device
(AD), e.g. a remote control, e.g. implemented as an APP in a smartphone or other portable
(or stationary) electronic device (e.g. a charging station). In the embodiment of FIG. 8, the screen of the user interface (UI) illustrates a 'Datalogging APP'. The user may configure the datalogger via the APP. The user may e.g. select the data
that should be logged, e.g. Own-voice data, Other voice data, internal and external
sensor data (termed HA-sensors and External sensors, respectively, in the exemplified
screen of FIG. 8). In the embodiment of FIG. 8, Own-voice, Other voice, and HA-sensors
have been selected (as indicated by the filled square symbols ■). The user may further
off-load logged data to another device or system, e.g. to a fitting system, a Smartphone
or to a Charging station (see e.g. FIG. 7). In the embodiment of FIG. 8, connection
to the smartphone is selected (as indicated by the filled square symbol ■). Unselected
options are indicated by open square symbols (□).
[0115] The auxiliary device (AD) and the hearing aid (HA) are adapted to allow communication
of data representative of the currently selected options (if deviating from a predetermined configuration (already stored in the hearing aid)) to the hearing aid via a, e.g. wireless,
communication link (cf. dashed arrow WL2 in FIG. 8). The communication link WL2 may
e.g. be based on far field communication, e.g. Bluetooth or Bluetooth Low Energy (or
similar technology), implemented by appropriate antenna and transceiver circuitry
in the hearing aid (HA) and the auxiliary device (AD), indicated by transceiver unit WLR_2 in the hearing aid.
[0116] It is intended that the structural features of the devices described above, either
in the detailed description and/or in the claims, may be combined with steps of the
method, when appropriately substituted by a corresponding process.
[0117] As used, the singular forms "a," "an," and "the" are intended to include the plural
forms as well (i.e. to have the meaning "at least one"), unless expressly stated otherwise.
It will be further understood that the terms "includes," "comprises," "including,"
and/or "comprising," when used in this specification, specify the presence of stated
features, integers, steps, operations, elements, and/or components, but do not preclude
the presence or addition of one or more other features, integers, steps, operations,
elements, components, and/or groups thereof. It will also be understood that when
an element is referred to as being "connected" or "coupled" to another element, it
can be directly connected or coupled to the other element but an intervening element
may also be present, unless expressly stated otherwise. Furthermore, "connected" or
"coupled" as used herein may include wirelessly connected or coupled. As used herein,
the term "and/or" includes any and all combinations of one or more of the associated
listed items. The steps of any disclosed method are not limited to the exact order stated herein, unless expressly stated otherwise.
[0118] It should be appreciated that reference throughout this specification to "one embodiment"
or "an embodiment" or "an aspect" or features included as "may" means that a particular
feature, structure or characteristic described in connection with the embodiment is
included in at least one embodiment of the disclosure. Furthermore, the particular
features, structures or characteristics may be combined as suitable in one or more
embodiments of the disclosure. The previous description is provided to enable any
person skilled in the art to practice the various aspects described herein. Various
modifications to these aspects will be readily apparent to those skilled in the art,
and the generic principles defined herein may be applied to other aspects.
[0119] The claims are not intended to be limited to the aspects shown herein but are to
be accorded the full scope consistent with the language of the claims, wherein reference
to an element in the singular is not intended to mean "one and only one" unless specifically
so stated, but rather "one or more." Unless specifically stated otherwise, the term
"some" refers to one or more.
[0120] Accordingly, the scope should be judged in terms of the claims that follow.
REFERENCES