(11) EP 3 525 487 A1

(12) EUROPEAN PATENT APPLICATION

(43) Date of publication:
14.08.2019 Bulletin 2019/33

(21) Application number: 19152034.5

(22) Date of filing: 16.01.2019
(51) International Patent Classification (IPC): 
H04R 25/00(2006.01)
H04R 3/00(2006.01)
(84) Designated Contracting States:
AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR
Designated Extension States:
BA ME
Designated Validation States:
KH MA MD TN

(30) Priority: 09.02.2018 US 201862628495 P

(71) Applicant: WIDEX A/S
3540 Lynge (DK)

(72) Inventors:
  • Pihl, Michael Johannes
    3540 Lynge (DK)
  • Thomsen, Sven Creutz
    3540 Lynge (DK)
  • Andersen, Johan Myhre
    3540 Lynge (DK)
  • Ungstrup, Michael
    3450 Lynge (DK)
  • Söderlind, Carl Martin Hald
    3540 Lynge (DK)
  • Moeller, Nanna Elkjaer
    3540 Lynge (DK)

   


(54) A COMMUNICATION CHANNEL BETWEEN A REMOTE CONTROL AND A HEARING ASSISTIVE DEVICE


(57) A remote-control unit (10) for controlling a hearing assistive device (20) by sending a control signal with instructions as an acoustic signal, has an input transducer (14), an output transducer (15), and a processor (11) adapted for setting the volume of the output from the output transducer (15). The processor (11) is adapted for activating the input transducer (14) for receiving environmental sound, analyzing the environmental sound, determining and setting the volume of the output from the output transducer (15) based on the environmental sound, and outputting the control signal at the set volume via the output transducer (15).




Description


[0001] The present invention relates to a communication channel between a remote control and a hearing assistive device, and more particularly, an audio-based communication channel. The invention furthermore relates to a method of controlling a hearing assistive device remotely from a remote-control unit, and a computer-readable storage medium having computer-executable instructions, which, when executed by a processor of a remote-control unit, provides an app having a user interface being adapted for user interaction.

[0002] The purpose of the invention is to provide a remote-control unit for controlling a hearing assistive device by sending an acoustic signal containing the control signal with instructions, wherein the remote-control unit provides a user-friendly operating range even in noisy environments. If the volume of the acoustic signal containing the control signal is too low, the signal quality may be poor, and if the volume is too high, the speaker of the remote control may be overdriven, or the acoustic signal may annoy persons in the environment. For a smartphone, a low playing volume is also more discreet than a loud playing volume.

[0003] According to the invention, this purpose is achieved by a remote-control unit for controlling a hearing assistive device by sending an acoustic signal containing the control signal with instructions, and having an input transducer, a processor, and an output transducer providing an acoustic output. The processor is adapted for activating the input transducer for receiving environmental sound, analyzing the environmental sound, determining and setting the volume of the output from the output transducer based on the environmental sound, and outputting the control signal at the set volume via the output transducer. Hereby it is possible to adapt the acoustic signal containing the control signal with instructions to have a predefined Signal-to-Noise Ratio relative to the background noise. This improves the user experience, as the operating range for the remote control may be maintained even in noisy environments without having the acoustic remote-control signal continuously at maximum power. As the acoustic remote-control signal lies in the upper part of the audible acoustic spectrum, it may be perceived by some persons as annoying noise. This annoying effect is hereby reduced according to the invention.

[0004] In one embodiment, the analyzing of the environmental sound comprises determination of the sound level for the environmental sound. In one embodiment, the control signal with instructions may be modulated according to a frequency modulation scheme in a frequency band above 10 kHz, preferably above 15 kHz. When outputting the control signal comprising instructions as an acoustic signal from a smartphone, the app controlling the signaling may not know the characteristics of the loudspeaker of the smartphone. It is desired to use a flat part of the output characteristic of the loudspeaker. This puts an upper limit on the frequencies applied. Furthermore, it is desired to place the control signal in the upper part of the audio band, as this part of the audio band is audible for persons with normal hearing but inaudible for many persons.

[0005] In one embodiment, the processor sets the volume for the acoustic output in accordance with a predetermined Signal-to-Noise Ratio.

[0006] In one embodiment, the analyzing of the environmental sound comprises classifying the environmental sound. Some acoustic environments may adversely affect the reception of the acoustic remote-control signal, and a higher Signal-to-Noise Ratio may improve the signaling quality.

[0007] In one embodiment, the remote-control unit is provided as a smartphone, and a software component (app) is running on the processor of the smartphone. The software component (app) generates the control signal with instructions for being output via the output transducer as the acoustic signal containing the control signal with instructions.

[0008] According to a second aspect of the invention there is provided a method of controlling a hearing assistive device remotely from a remote-control unit. The method comprises setting the volume for the acoustic output by activating the input transducer for receiving environmental sound, analyzing the environmental sound, determining and setting the volume of the acoustic output from the output transducer based on the environmental sound, and outputting the control signal at the set volume via the output transducer.

[0009] According to a third aspect of the invention there is provided a computer-readable storage medium having computer-executable instructions. The computer-executable instructions provide an app having a user interface being adapted for user interaction, when executed by a processor of a remote-control unit. The app is adapted for activating the input transducer for receiving environmental sound, analyzing the environmental sound, determining and setting the volume of the output from the output transducer based on the environmental sound, and outputting a remote-control signal at the set volume via the output transducer.

[0010] The invention will be described in further detail with reference to preferred aspects and the accompanying drawing, in which:

fig. 1 illustrates the communication paths between a smartphone and two hearing assistive devices according to one embodiment of the invention;

fig. 2 illustrates an embodiment of a smartphone having a processor for running an application program according to the invention;

fig. 3 illustrates an embodiment of a hearing assistive device according to the invention having an audio signaling block;

fig. 4 illustrates a flow chart for one implementation of an auto-calibration method according to the invention;

fig. 5 illustrates a flow chart for one implementation of a volume setting of the audio signaling method according to the invention;

fig. 6 illustrates the distribution of the tone signal in the acoustic signaling during the auto-calibration, and

fig. 7 illustrates a flow chart for a second embodiment of an auto-calibration method according to the invention.


DETAILED DESCRIPTION



[0011] In one embodiment, the remote-control unit according to the invention is provided by a smartphone. A smartphone is a handheld personal computer with a mobile operating system and an integrated mobile broadband cellular network connection for voice and Internet data communication. Smartphones can run a variety of software components, known as "apps". Most basic apps are pre-installed with the system, while others are available for download from websites such as app stores.

[0012] The current invention relates to a remote control, e.g. a smartphone 10, controlling one or two hearing assistive devices 20 (left and right). In the illustrated embodiment, the hearing assistive devices 20 are adapted to at least partly fit into the ear of the wearer and amplify sound, either sound from the environment or streamed sound. Hearing assistive devices include Personal Sound Amplification Products (PSAPs) and hearing aids. Both PSAPs and hearing aids are small electroacoustic devices designed to process, amplify or limit sound for the wearer. PSAPs are mostly off-the-shelf amplifiers for people with normal hearing or slightly reduced hearing who need a little adjustment in volume (such as during hunting, concerts or bird watching).

[0013] Fig. 1 illustrates the communication paths between the smartphone 10 and the two hearing assistive devices 20. According to one embodiment of the invention, each of the two hearing assistive devices 20 includes a magnetic induction radio responsible for the inter-ear communication 5 between the two hearing assistive devices 20.

[0014] An acoustic communication link 8 and 9 between the smartphone 10 and the respective one of the two hearing assistive devices 20 is according to the invention provided by an audio modulator application software (App) stored in the smartphone 10 and an audio transceiver implemented in a signal processor of the respective hearing assistive devices 20. In one embodiment, there may be provided a short-range radio link (not shown), e.g. Bluetooth®, between the smartphone 10 and the two hearing assistive devices 20. According to the invention, the smartphone 10 may act as a remote control while the two hearing assistive devices 20 are in a flight mode or a power saving mode. This is very important when changing mode or settings with the Bluetooth® radio disabled.

[0015] Some types of hearing assistive devices 20 may, due to size constraints, have been manufactured without a Bluetooth® radio, and therefore a remote control would need to incorporate a magnetic induction radio compatible with the one used for the inter-ear communication 5. According to the invention, there is no need for a dedicated remote control, as the remote-control functionality may be provided by means of smartphones available on the market and appropriate software providing the required acoustic signaling functionality.

[0016] In an embodiment where the sole communication link between the smartphone 10 and the two hearing assistive devices 20 is provided by the acoustic communication link 8 and 9, the inter-ear communication link 5 based upon an inductive link may improve robustness, as the two hearing assistive devices 20 may detect the same acoustically transmitted data, and the transmitted data may be verified and/or corrected via the inter-ear communication link 5. This may reduce the head shadow effect.

[0017] Fig. 2 illustrates the basic elements of a smartphone 10. The smartphone 10 includes a general-purpose processor 11, which is a central processing unit (CPU) that carries out the instructions of a computer program by performing the basic arithmetic, logical, control and input/output (I/O) operations specified by the instructions. The general-purpose processor 11 is associated with memory 16 forming a computer-readable storage medium having computer-executable instructions.

[0018] The smartphone 10 includes a microphone 14 for picking up audio, e.g. speech, and generating an electronic representation of the audio signal to be fed to the general-purpose processor 11. The smartphone 10 is a multi-radio device having radio interfaces towards cellular networks such as GSM, WCDMA and LTE, short-range networks such as WLAN and Bluetooth™, and positioning systems such as GPS. A connectivity manager 18 manages telephone calls, data transmission and data reception via a multi-mode radio 13. The smartphone 10 has a user interface 12, such as a touchscreen, enabling the user to interact directly with what is displayed.

[0019] Fig. 2 illustrates that the user interface 12 displays a screen shot for an acoustic remote-control app 19a including an audio modulator and an audio demodulator for sending and receiving control signals, respectively. The screen shot for the acoustic remote-control app 19a includes a header 12a informing the user that the currently active app is the Acoustic Remote Control, "ARC". A volume control area 12b indicates the current volume level relative to the volume range permitted for user adjustment by means of a movable column marked by a triangle, which the user can slide between the minimum and maximum of the permitted volume range. A hearing aid program control area 12c permits the user to shift the hearing aid program. The user can select the appropriate program by swiping and tapping the hearing aid program control area 12c.

[0020] The smartphone 10 includes a speaker 15 for output delivered from the general-purpose processor 11. The memory 16 is illustrated as one unit, but a person skilled in the art is aware that a computer memory comprises a volatile memory part acting as working memory (Random-Access Memory) and requiring power to maintain the stored information, and a non-volatile memory part (e.g. Read-Only Memory, flash memory) in which stored information is persistent after the smartphone 10 has been powered off.

[0021] The memory 16 may contain computer-executable instructions for a plurality of application programs 19 (apps) including an acoustic remote-control app 19a. The application programs 19 may be downloaded from an app store on a remote server or pre-stored in the smartphone 10 when delivered from the factory. The general-purpose processor 11 runs the computer-executable instructions for the acoustic remote-control app 19a and provides an application program having a user interface 12 being adapted for user interaction. The acoustic remote-control app 19a includes computer-executable instructions for generating a control signal with instructions, often in response to a user interaction, and for outputting the control signal with instructions on an audio carrier via the output transducer 15 targeted for the hearing assistive device 20.

[0022] The remote control is according to one embodiment an Internet-enabled smartphone 10. The smartphone 10 is connected to the Internet via an access point 6. The connection may be a wireless connection (e.g. WLAN such as 802.11x) or a cellular connection (e.g. WCDMA or LTE). The smartphone 10 may access a remote server 7 containing hearing aid user accounts.

[0023] Fig. 3 illustrates an embodiment of a hearing assistive device 20 according to the invention comprising a control signal receiver 28 and a control signal transmitter 29. A microphone 24 picks up an acoustic signal, and an analog-to-digital converter 22 converts the signal picked up into a digital representation. The digital input signal is fed to a processing unit 26 comprising a digital signal processing path 21 for alleviating a hearing loss by amplifying sound at frequencies in those parts of the audible frequency range where the user suffers a hearing deficit. From the digital signal processing path 21, a signal is branched to the control signal receiver 28.

[0024] In one embodiment, the control signal with instructions is frequency modulated by means of Frequency-Shift Keying (FSK). Frequency-Shift Keying is a frequency modulation scheme in which digital information is transmitted through discrete frequency changes of a carrier signal. The simplest Frequency-Shift Keying concept is Binary Frequency-Shift Keying (BFSK). Binary Frequency-Shift Keying uses a pair of discrete frequencies to transmit binary (0 and 1) information. In one embodiment, the control signal with instructions is contained in a frequency band above 10 kHz, preferably above 15 kHz.
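
As a minimal illustration of such a scheme, the following Python sketch generates a BFSK waveform from a bit sequence using two tones above 15 kHz. It is only a sketch under stated assumptions: the sample rate, tone frequencies, symbol duration and windowing are illustrative choices and not values prescribed by the application.

```python
import numpy as np

FS = 44100             # sample rate (Hz); assumed, typical for smartphone audio output
F0, F1 = 17000, 18000  # illustrative BFSK tone frequencies above 15 kHz
SYMBOL_LEN = 0.02      # illustrative symbol duration: 20 ms per bit

def bfsk_modulate(bits, amplitude=0.5):
    """Map a bit sequence onto discrete tone bursts (binary FSK)."""
    t = np.arange(int(FS * SYMBOL_LEN)) / FS
    bursts = []
    for bit in bits:
        f = F1 if bit else F0
        # Hann window softens the burst edges to limit audible clicks (design choice for the sketch)
        bursts.append(amplitude * np.sin(2 * np.pi * f * t) * np.hanning(t.size))
    return np.concatenate(bursts)

# Example: encode an illustrative 8-bit command word
waveform = bfsk_modulate([1, 0, 1, 1, 0, 0, 1, 0])
```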

[0025] At the input of the control signal receiver 28, a band-pass filter removes noise present outside the frequency band of the control signal. By means of a mixer, the FSK signal is down-converted to baseband. Preferably, the mixer creates an in-phase (I) component as well as a quadrature (Q) component shifted 90° in phase.

[0026] The quadrature signal is demodulated by using a conventional matched-filter approach for detecting the frequency of the incoming signal, and the data content is detected and error-corrected. Hereafter the data content is supplied to a controller 27 translating the data received from the control signal receiver 28 into commands to perform predetermined actions or into instructions to store transmitted data in specified memory locations of the hearing assistive device 20.
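
A hedged, simplified illustration of the detection principle is given below. Instead of the I/Q down-conversion described above, the sketch compares the matched-filter (correlation) energy at the two candidate frequencies for each symbol block; it is not the receiver implemented in the hearing assistive device, and the function and parameter names are introduced here for illustration only.

```python
import numpy as np

def tone_energy(x, freq, fs):
    """Correlate a symbol-long block with a complex exponential at freq (matched-filter style)."""
    n = np.arange(x.size)
    return np.abs(np.sum(x * np.exp(-2j * np.pi * freq * n / fs))) ** 2

def bfsk_demodulate(x, fs, f0, f1, symbol_len):
    """Non-coherent BFSK detection: per symbol, pick the candidate frequency with the most energy."""
    spb = int(fs * symbol_len)  # samples per symbol
    bits = []
    for start in range(0, x.size - spb + 1, spb):
        block = x[start:start + spb]
        bits.append(1 if tone_energy(block, f1, fs) > tone_energy(block, f0, fs) else 0)
    return bits

# Example (relies on the modulator sketch above): round-trip the 8-bit command word
# bits = bfsk_demodulate(waveform, 44100, 17000, 18000, 0.02)
```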

[0027] When the controller 27 identifies a need for sending a message to the smartphone 10, a control signal transmitter 29 is instructed to prepare data for transmission. The data is modulated according to the used audio FSK modulation scheme. The audio FSK modulated data is added to data in the digital signal processing path 21 in a summation point, and thereafter converted to sound by means of the output stage 23 and the speaker 25.

[0028] Multiple Frequency-Shift Keying (MFSK) denotes a family of related FSK modulation schemes based on multi-frequency shift keying digital transmission modes in which discrete audio tone bursts of various frequencies convey digital data. Binary FSK is a first transmission mode using two frequencies. Another transmission mode uses tones of 16 frequencies and may be called MFSK16. Further transmission modes are available. The tones are transmitted successively, and each tone lasts for a fraction of a second.
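
Purely as an illustration of how 4-bit symbols could be mapped onto sixteen tones, the sketch below builds an MFSK16-style tone alphabet; the start frequency and tone spacing are assumptions made for the example, not values from the application.

```python
import numpy as np

# Illustrative MFSK16 alphabet: sixteen tones spaced 100 Hz apart above 15 kHz
# (spacing and start frequency are assumptions for the sketch)
MFSK16_TONES = 15500 + 100 * np.arange(16)

def symbols_to_frequencies(data_bytes):
    """Split each byte into two 4-bit symbols and map them to tone frequencies."""
    freqs = []
    for byte in data_bytes:
        freqs.append(MFSK16_TONES[byte >> 4])
        freqs.append(MFSK16_TONES[byte & 0x0F])
    return freqs

print(symbols_to_frequencies(b"\x3A"))  # two tones, one per 4-bit symbol
```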

[0029] Once the user has loaded the acoustic remote-control app 19a onto the smartphone 10, the acoustic remote-control app 19a starts testing the hardware of the smartphone 10. The acoustic remote-control app 19a will notify the user about the testing via the user interface 12, and the user is prompted to place the smartphone 10 in a silent environment with limited background noise and in a physically soft environment without reflecting surfaces in the vicinity. Hereafter the remote control or smartphone 10 initiates an auto-calibration method according to the invention. The purpose of the auto-calibration method described with reference to fig. 4 is to ensure that the smartphone 10 has a substantially flat output characteristic in the signaling band used by the acoustic signal containing the control signal with instructions.

[0030] The acoustic remote-control app 19a will automatically start the auto-calibration process in step 30 as shown in fig. 4 when opened for the first time. The auto-calibration could also be started from the settings of the app in case the acoustic remote-control app 19a has failed.

Auto-calibration using smartphone as transmitter and receiver



[0031] Upon start of the auto-calibration process, the acoustic remote-control app 19a activates the microphone 14 for listening to the environment. At step 32, the processor 11 sets the parameter N to the value "1". In step 33, the acoustic remote-control app 19a generates and plays the N'th (starting with N=1) discrete audio tone burst via the speaker 15 of the smartphone 10. In step 34, the acoustic remote-control app 19a detects and records the sound level of the N'th (starting with N=1) discrete audio tone burst via the microphone 14 of the smartphone 10. In case MFSK16 is the preferred and default frequency modulation scheme, N is compared to a pre-set value (16 due to the default frequency modulation scheme) in step 36. By incrementing N by one in step 35, the acoustic remote-control app 19a will run through the play-and-record sub-routine for all sixteen frequencies predefined for the MFSK16 frequency modulation scheme, or for another pre-set value for another default frequency modulation scheme.
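
The play-and-record loop of steps 32 to 36 may be sketched as follows. The helpers play_tone_burst() and record_level() are hypothetical placeholders standing in for the platform audio API, which the application does not specify.

```python
# Sketch of the play-and-record loop of steps 32-36 (fig. 4); play_tone_burst() and
# record_level() are hypothetical stand-ins for the smartphone's audio API.
def auto_calibrate(tone_frequencies, play_tone_burst, record_level):
    """Play each signaling tone in turn and record the level picked up by the microphone."""
    recorded_levels = {}
    for n, freq in enumerate(tone_frequencies, start=1):  # N = 1 .. pre-set value (16 for MFSK16)
        play_tone_burst(freq)                             # step 33: output the N'th tone burst
        recorded_levels[freq] = record_level(freq)        # step 34: measure its level via the microphone
    return recorded_levels                                # evaluated in step 37
```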

[0032] Once the acoustic remote-control app 19a in step 36 finds that N has reached the pre-set value (all signaling frequencies have been tested), the acoustic remote-control app 19a starts in step 37 the evaluation of the recorded sound levels for the signaling frequencies. Furthermore, the acoustic remote-control app 19a deactivates the microphone 14 as the testing of the speaker 15 has been completed. The evaluation has the purpose of ensuring that the discrete audio tone bursts output by the speaker 15 have substantially the same sound level. If some of the discrete audio tone bursts output by the speaker 15 are detected to have sound levels falling outside a predetermined range of sound levels, the acoustic remote-control app 19a may have to modify the frequency modulation scheme based on the analyzed sound levels in step 38.

[0033] The modification of the frequency modulation scheme in step 38 may comprise adjusting the balance between frequency components present in the frequency modulation scheme. Hereby the processor 11 uses equalization of the frequency components present in the frequency modulation scheme to compensate for the lack of flatness of the output from the speaker 15 in the frequency band used by the control signal according to the applied frequency modulation scheme.
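
A minimal sketch of how such per-tone equalization gains might be derived from the sound levels recorded in step 34 is given below; the use of dB values, the median reference and the 3 dB tolerance are assumptions made for the example.

```python
import numpy as np

def equalization_gains(recorded_levels_db, tolerance_db=3.0):
    """Derive per-tone gain corrections (in dB) that flatten the recorded tone levels.

    recorded_levels_db: dict mapping tone frequency -> level measured in step 34 (dB).
    Tones within +/- tolerance_db of the median are left untouched; the tolerance is an
    assumption for the sketch.
    """
    reference = np.median(list(recorded_levels_db.values()))
    gains = {}
    for freq, level in recorded_levels_db.items():
        deviation = reference - level
        gains[freq] = deviation if abs(deviation) > tolerance_db else 0.0
    return gains
```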

[0034] Another option would be to apply a frequency modulation scheme occupying a narrower frequency band. This is done by changing the transmission mode. Finally, it would be possible to change the carrier frequency and thereby use a lower or a higher frequency band. The cost may be that the control signal becomes more audible to more people.

[0035] The auto-calibration process is now completed, and the acoustic remote-control app 19a may hereafter be used for remote-controlling an appropriate hearing assistive device 20 by means of the applied frequency modulation scheme. In one embodiment, the remote control or smartphone 10 sends a pre-defined sequence to the hearing assistive device 20 containing information about the applied frequency modulation scheme. The hearing assistive device 20 stores this information and starts to apply the frequency modulation scheme for decoding the acoustic remote-control signals.

[0036] Fig. 6 shows an example for the auto-calibration process as disclosed above. The acoustic remote-control app 19a uses a frequency band 52 for the audio signaling. The auto-calibration process according to one embodiment of the invention uses a plurality of audio tone bursts 51.1 - 51.N at N discrete frequencies contained in the frequency band 52. During the auto-calibration, the N discrete frequencies are successively tested by outputting the audio tone bursts 51.1 - 51.N one by one. The signal level picked up by the microphone 14 of the smartphone 10 is a signal level curve 53. It is seen that the signal level curve 53 is not flat over the entire frequency band 52. The acoustic remote-control app 19a then must choose a narrower frequency band for an alternative frequency modulation scheme or selectively increase the gain for tones or frequencies reproduced at too low levels.

[0037] In one embodiment of the invention, step 33 (fig. 4), which includes generating and playing the discrete tone at a specific frequency, comprises successively generating and playing the discrete tone at a plurality of multimedia volume settings, e.g. at three different volume settings. The multimedia volume setting is normally used by the user to control the output sound of the speaker 15 in a multimedia application. By allowing the acoustic remote-control app 19a to test the signal sound level for the discrete tone at a plurality of multimedia volume settings, the acoustic remote-control app 19a will afterwards be able to use interpolation to identify a multimedia volume setting providing the desired signal sound level.
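
The interpolation mentioned above may, for example, be carried out as in the following sketch; the volume settings, measured levels and target level used in the example are illustrative numbers only.

```python
import numpy as np

def volume_for_target_level(volume_settings, measured_levels_db, target_level_db):
    """Interpolate between tested multimedia volume settings to hit a desired output level.

    volume_settings: the (e.g. three) multimedia volume settings tested in step 33.
    measured_levels_db: the corresponding sound levels recorded via the microphone.
    Assumes the output level grows monotonically with the volume setting.
    """
    return float(np.interp(target_level_db, measured_levels_db, volume_settings))

# Illustrative numbers only: tones played at volume settings 5, 10 and 15 were measured at
# 52, 61 and 68 dB SPL; a 65 dB SPL target then maps to a setting of roughly 12.9.
print(volume_for_target_level([5, 10, 15], [52, 61, 68], 65))
```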

[0038] According to one embodiment of the auto-calibration process according to the invention, the flatness of the speaker 15 is tested by outputting a white noise signal covering the entire frequency band 52. The audio signal picked up by the microphone 14 of the smartphone 10 is used to generate a signal level curve including the frequencies for the audio tone bursts used for the audio signaling. In case the signal level curve is not flat over the entire frequency band 52, the acoustic remote-control app 19a must either selectively adjust the gain for tones or frequencies reproduced at too low (or too high) levels, or choose a narrower or shifted frequency band for an alternative frequency modulation scheme.

[0039] Fig. 5 illustrates a flow chart for one embodiment of the volume setting of the audio signaling method according to the invention. When the user, in step 40, activates the acoustic remote-control app 19a on the smartphone 10, the acoustic remote-control app 19a activates, in step 41, the microphone 14 and starts listening to the environment of the smartphone 10. During step 42, the smartphone 10 classifies the environment, as some environments may have many spikes and fluctuations in noise level at the frequencies used for the audio signaling, whereby the audio transmission from the acoustic remote-control app 19a may be challenged. In challenging environments, it is beneficial to increase the Signal-to-Noise Ratio to keep the Bit Error Rate (BER) low. The Bit Error Rate (BER) is the number of bit errors per unit time. The Signal-to-Noise Ratio (SNR) is a measure that compares the level of a desired signal to the level of background noise. The Signal-to-Noise Ratio (SNR) is defined as the ratio of the signal power (meaningful information) to the power of the background noise (unwanted signal): SNR = Psignal/Pnoise. The acoustic remote-control app 19a includes a look-up table from which it in step 43 reads a predetermined Signal-to-Noise Ratio associated with the classified sound environment.
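
The look-up in step 43 may be pictured as in the sketch below; the environment class names and the associated SNR values are assumptions made for the example, not figures from the application.

```python
# Sketch of the look-up from classified sound environment to target SNR (step 43).
# Class names and SNR values are illustrative assumptions.
TARGET_SNR_DB = {
    "quiet": 15.0,
    "speech": 20.0,
    "fluctuating_noise": 30.0,  # challenging environments get a higher SNR to keep the BER low
}

def target_snr_for(environment_class):
    """Return the predetermined SNR for the classified environment (assumed 20 dB default)."""
    return TARGET_SNR_DB.get(environment_class, 20.0)
```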

[0040] In one embodiment, the control signal has a signaling rate up to 100 single symbols per second.

[0041] In one embodiment, the Signal-to-Noise Ratio (SNR) is set to a fixed value at manufacturing.

[0042] If the background noise is fluctuating (having many spikes and a varying Sound Pressure Level (SPL) in the frequency band 52 used by the control signal), the robustness or the Bit Error Rate (BER) for the control signal will be improved by increasing the volume for the control signal and thereby the Sound Pressure Level (SPL) for the output acoustic signal.

[0043] The sound level or the Sound Pressure Level (SPL) of the sound output by the speaker 15 of the smartphone is controlled by adjusting the volume of the smartphone.

[0044] In step 44, the smartphone 10 detects the sound level (Pnoise) of the background noise, and in step 45 the smartphone 10 sets the signal level (Psignal) for the discrete audio tone bursts generated by the acoustic remote-control app 19a based on the applied Signal-to-Noise Ratio (SNR).
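
Expressed in decibels, the relation SNR = Psignal/Pnoise means that the signal level of step 45 follows from the noise level measured in step 44 plus the target SNR, as in this sketch (the numbers in the example are illustrative only).

```python
import math

def signal_level_db(p_noise, snr_db):
    """Steps 44-45: given the measured background noise power and the target SNR,
    return the signal level (dB) at which the tone bursts should be output.

    SNR = Psignal / Pnoise, so in decibels: Lsignal = Lnoise + SNR_dB.
    """
    l_noise_db = 10.0 * math.log10(p_noise)
    return l_noise_db + snr_db

# Example with illustrative numbers: a noise power of 1e-4 (-40 dB) and a 20 dB target SNR
# call for a signal level of -20 dB relative to the same reference.
print(signal_level_db(1e-4, 20.0))
```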

[0045] Hereafter, the smartphone 10, in step 46, outputs an acoustic signal containing the acoustic remote-control signal with instructions for the hearing assistive device 20 at the volume set at step 45. In step 47, the acoustic remote-control app 19a evaluates whether further instructions need to be sent. If so, the acoustic remote-control app 19a goes to step 42 for reclassification of the environment and detection of the changed sound level prior to sending the further instructions.

[0046] If no further instruction is to be sent in step 47, the acoustic remote-control app 19a deactivates the microphone 14 as the sending of the acoustic remote-control signal with instructions has been completed. The acoustic remote-control app 19a is terminated in step 48. The classification of the environmental sound (step 42) and the detection of the sound level (step 44) may take place as concurrent activities.

[0047] In one embodiment, the sound environment classification of step 42, the detection of the environmental sound level of step 44, the volume adjustment of step 45, and the outputting of control signals in step 46 are concurrent processes. This means that the smartphone 10 outputs a train of single symbols and simultaneously monitors the background noise. If the background noise changes, the processor 11 adjusts the volume of the speaker 15 during the ongoing outputting of the single symbols. The volume is preferably adjusted in between the single symbols.

[0048] By using a frequency band 52 for the audio signaling above the normal speech spectrum, e.g. above 10 kHz, it is possible to isolate the control signal from a speech signal by means of high-pass filtering in the hearing assistive device. By using a carrier signal above the normal speech spectrum, e.g. at 15 kHz or above, it is possible to use a smartphone for the signaling without the control signal becoming too annoying for persons close to the hearing aid user.
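
A minimal sketch of such high-pass filtering is shown below, using an offline Python environment with SciPy purely for illustration; the actual hearing assistive device implements the filtering in its signal processor, and the cutoff frequency, filter order and synthetic test signal are illustrative choices only.

```python
import numpy as np
from scipy.signal import butter, sosfilt

def isolate_control_band(x, fs, cutoff_hz=10000.0, order=6):
    """High-pass filter separating the control band (above ~10 kHz) from speech.
    Cutoff and order are illustrative choices for the sketch."""
    sos = butter(order, cutoff_hz, btype="highpass", fs=fs, output="sos")
    return sosfilt(sos, x)

# Example: filter one second of a synthetic mix of a 1 kHz "speech" tone and a 17 kHz control tone
fs = 44100
t = np.arange(fs) / fs
mix = np.sin(2 * np.pi * 1000 * t) + 0.3 * np.sin(2 * np.pi * 17000 * t)
control_only = isolate_control_band(mix, fs)
```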

[0049] In one embodiment, the processor 11 of the smartphone sets the volume for the control signal with instructions in accordance with a predetermined Signal-to-Noise Ratio (SNR), e.g. 20 dB. Hereby the app software run by the smartphone processor 11 ensures that the volume for the control signal across various smartphone platforms is sufficiently high relative to the current background noise picked up by the hearing aid. In one embodiment, the Signal-to-Noise Ratio (SNR) is set higher, e.g. 30 dB, due to the noise environment classification.

[0050] In one embodiment, the smartphone 10 is paired with the hearing assistive device 20 prior to the auto-calibration discussed with reference to fig. 4. The pairing has the advantage that the acoustic remote-control app 19a running on the smartphone 10 may gain knowledge about the hearing assistive device 20 and use this knowledge when modifying the frequency modulation scheme in step 38.

[0051] The smartphone 10 may access the remote server 7 containing hearing aid user accounts. By means of an ID for the hearing assistive device 20 or identification of the hearing aid user, the smartphone 10 may retrieve information about the hearing assistive device 20 from the remote server 7. This information may include which transmission modes the hearing assistive device 20 supports, and whether the hearing assistive device 20 serves two or more carrier frequencies.

[0052] The pairing of the smartphone 10 and the hearing assistive device 20 may be provided by using the acoustic remote-control app 19a for scanning a QR code, e.g. on a packaging label (sales package) of the hearing assistive device 20, to read the hearing aid ID. Then the smartphone 10 may retrieve information about the hearing assistive device 20 from the remote server 7.

[0053] In another embodiment, the user of the hearing assistive device 20 may enter the hearing aid ID or identify himself via the acoustic remote-control app 19a, whereby the smartphone 10 may retrieve the information about the hearing assistive device 20 from the remote server 7.

Auto-calibration using hearing assistive device as audio receiver



[0054] Fig. 7 illustrates a flow chart for a second embodiment of an auto-calibration method according to the invention. A two-way auto-calibration method for the speaker volume is described, and the method also includes equalization of the frequencies used. The acoustic remote-control app 19a will automatically start a two-way auto-calibration process in step 60 when opened for the first time. The user is requested in step 61 to place the smartphone 10 and the hearing assistive device 20 in an environment with limited background noise and without reflecting surfaces in the vicinity. The acoustic remote-control app 19a will bring the hearing assistive device 20 into a two-way auto-calibration mode by means of a control signal instruction output by the speaker 15.

[0055] In step 62, the acoustic remote-control app 19a creates a test plan of the tones applied by the frequency modulation scheme; in step 63, the tones are arranged as tone pairs by the acoustic remote-control app 19a, and a counter, m, identifies the position of the tone pair in the test plan. In step 64, the smartphone 10 outputs the m'th tone pair, which is received and evaluated by the control signal receiver 28 of the hearing assistive device 20 in step 65. The simplest evaluation is the detection of the loudest tone. The hearing assistive device 20 uses the control signal transmitter 29 for communicating the outcome of the evaluation back to the smartphone 10 in step 66.

[0056] The acoustic remote-control app 19a receives the evaluation for the m'th tone pair and adjusts the relative volume of the two tones in the m'th tone pair in step 67. Based upon the latest evaluation from the hearing assistive device 20 and the progress of the test plan, the acoustic remote-control app 19a decides whether the auto-calibration has been completed in step 68. In case the auto-calibration has not been completed yet, the counter, m, is incremented in step 69 and steps 64 to 67 are repeated for the next tone pair.
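
The tone-pair loop of steps 62 to 69 may be sketched as follows; play_tone_pair() and receive_evaluation() are hypothetical stand-ins for the acoustic link to the hearing assistive device, and the 1 dB adjustment step is an assumption made for the example.

```python
from itertools import combinations

def two_way_calibration(tones, play_tone_pair, receive_evaluation, step_db=1.0):
    """Adjust the relative volume of each tone pair until the device reports them equally loud."""
    offsets_db = {freq: 0.0 for freq in tones}                      # per-tone volume corrections
    test_plan = list(combinations(tones, 2))                        # steps 62/63: tone pairs, indexed by m
    for m, (f_a, f_b) in enumerate(test_plan):
        play_tone_pair(f_a, offsets_db[f_a], f_b, offsets_db[f_b])  # step 64
        louder = receive_evaluation(m)                              # steps 65/66: device reports the loudest tone
        if louder == f_a:                                           # step 67: attenuate the louder tone slightly
            offsets_db[f_a] -= step_db
        elif louder == f_b:
            offsets_db[f_b] -= step_db
    return offsets_db                                               # stored in step 70
```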

[0057] Once the acoustic remote-control app 19a decides that the auto-calibration has been completed in step 68, the acoustic remote-control app 19a stores the achieved settings for the volume of the individual tones in step 70, and the auto-calibration procedure is deemed to be completed in step 71. One success criterion may be that all tones are played at an equal sound level. The settings for the volume of the individual tones may now be used in the acoustic remote-control app 19a for remote controlling the hearing assistive device 20 as explained with reference to fig. 5.

[0058] Hereby, the acoustic remote-control decoder (the control signal receiver 28 and the controller 27) of the hearing assistive device 20 may be regarded as the final judge determining what it "hears" and what it detects. By playing the two competing symbols at different relative levels, the point where the two symbols are detected equally well by the hearing assistive device 20 may be found. This procedure is repeated for the different combinations of competing symbols. A sending volume at which all symbols are perceived as equally loud may hereby be achieved.

[0059] To avoid playing some symbols unnecessarily loud, the smartphone 10 listens to the background noise and adjusts the sending volume to be as low as possible while still being louder than the background noise. The above-discussed two-way auto-equalization is assumed to take place with the hearing assistive device 20 lying on the smartphone 10 or adjacent to it. The equalization compensates for the frequency response of the entire signal path from the acoustic remote-control app 19a to the acoustic remote-control decoder (the control signal receiver 28 and the controller 27) of the hearing assistive device 20, including the transmission environment. This will be a good starting point, though the signal path from the smartphone 10 to the hearing assistive device 20 will be different when the hearing assistive device is sitting on the user's ear.

In-situ fine-tuning



[0060] When the auto-calibration process as described with reference to fig. 4 or fig. 7 has been completed, the signaling quality may be further improved in an in-situ fine-tuning session. The hearing assistive device 20 is placed in the user's ear, and the acoustic remote-control app 19a in the smartphone 10 sends a command to the hearing assistive device 20 about initiating a fine-tuning session. With the hearing assistive device 20 mounted in the ear, the acoustic remote-control app 19a in the smartphone 10 sends competing symbols at varying relative volume, two symbols at a time, just like during the first equalization or calibration. Since the equalizing is almost in place, the fine-tuning process only needs to vary the relative volume a little, and only a few packets (consisting of a plurality of tones) need to be sent. The hearing assistive device 20 has a memory in which it records or logs what it hears.

[0061] When the transmission of the few packets has been completed, the user is requested to remove the hearing assistive device 20, place the smartphone 10 on a plane surface with the screen facing upwards, and place the hearing assistive device 20 on top of or adjacent to the smartphone 10. Then a two-way acoustic signaling session is initialized by operating the user interface of the acoustic remote-control app 19a, asking the hearing assistive device 20 to output what was stored in the memory log during the in-situ part of the session. The acoustic remote-control app 19a then calculates a fine-tuning based on the received log data, i.e. on what the hearing assistive device 20 received during the in-situ part of the session.


Claims

1. A remote-control unit (10) for controlling a hearing assistive device (20) by sending a control signal with instructions as an acoustic signal, and having an input transducer (14), an output transducer (15), and a processor (11) adapted for setting the volume of the output from the output transducer (15), wherein the processor (11) is adapted for:

- activating the input transducer (14) for receiving environmental sound,

- analyzing the environmental sound,

- determining and setting the volume of the output from the output transducer (15) based on the environmental sound, and

- outputting the control signal at the set volume via the output transducer (15).


 
2. The remote-control unit according to claim 1, wherein the processor (11) sets the volume for the acoustic output triggered by user manipulation.
 
3. The remote-control unit according to claim 1, wherein the analyzing of the environmental sound comprises determination of the sound level for the environmental sound.
 
4. The remote-control unit according to any of the preceding claims, wherein the remote-control unit (10) is provided as a smartphone, and wherein a software component (app) is running on the processor (11) and is generating the control signal with instructions for being output via the output transducer (15) as the acoustic signal containing the control signal with instructions.
 
5. A method of controlling a hearing assistive device remotely from a remote-control unit, wherein the method comprises setting the volume for the acoustic output by:

- activating the input transducer for receiving environmental sound,

- analyzing the environmental sound,

- determining and setting the volume of the acoustic output from the output transducer based on the environmental sound, and

- outputting the control signal at the set volume via the output transducer.


 
6. The method according to claim 5, wherein the analyzing of the environmental sound comprises determining the sound level for the environmental sound.
 
7. The method according to claim 5, comprising modulating the control signal with instructions according to a frequency modulation scheme in a frequency band above 10 kHz, preferably above 15 kHz.
 
8. The method according to claim 7, wherein the analyzing of the environmental sound comprises determining the sound level for the frequency band containing the control signal with instructions in the environmental sound.
 
9. The method according to claim 5 and comprising setting the volume for the acoustic output in accordance with a predetermined Signal-to-Noise Ratio (SNR).
 
10. The method according to any of the preceding claims and comprising loading an app into a smartphone for providing the remote-control unit for controlling the hearing assistive device and generating the control signal with instructions for being output via the output transducer as the acoustic signal containing the control signal with instructions.
 
11. A computer-readable storage medium having computer-executable instructions, which, when executed by a processor (11) of a remote-control unit (10), provides an app having a user interface (12) being adapted for user interaction, wherein the app is adapted for:

- activating the input transducer (14) for receiving environmental sound,

- analyzing the environmental sound,

- determining and setting the volume of the output from the output transducer (15) based on the environmental sound, and

- outputting a remote-control signal at the set volume via the output transducer (15).


 
12. The computer-readable storage medium having computer-executable instructions according to claim 11, wherein the app is adapted to determine the sound level for the environmental sound.
 
13. The computer-readable storage medium having computer-executable instructions according to claim 11, wherein the app is adapted to modulate the control signal with instructions according to a frequency modulation scheme in a frequency band above 10 kHz, preferably above 15 kHz.
 
14. The computer-readable storage medium having computer-executable instructions according to claim 13, wherein the app is adapted to determine the sound level for the frequency band containing the control signal with instructions in the environmental sound.
 
15. The computer-readable storage medium having computer-executable instructions according to claim 11, wherein the app is adapted to set the volume for the acoustic output in accordance with a predetermined Signal-to-Noise Ratio (SNR).
 




Drawing

Search report