(19) European Patent Office
(11) EP 4 068 805 A1

(12) EUROPEAN PATENT APPLICATION

(43) Date of publication:
05.10.2022 Bulletin 2022/40

(21) Application number: 21166351.3

(22) Date of filing: 31.03.2021
(51) International Patent Classification (IPC): 
H04R 25/00(2006.01)
(52) Cooperative Patent Classification (CPC):
H04R 25/505; H04R 2225/41; H04R 25/558
(84) Designated Contracting States:
AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR
Designated Extension States:
BA ME
Designated Validation States:
KH MA MD TN

(71) Applicant: Sonova AG
8712 Stäfa (CH)

(72) Inventors:
  • MÜLLER, Stephan
    8712 Staefa (CH)
  • KLOCKGETHER, Stefan
    8634 Hombrechtikon (CH)

(74) Representative: Qip Patentanwälte Dr. Kuehn & Partner mbB 
Goethestraße 8
80336 München (DE)

   


(54) METHOD, COMPUTER PROGRAM, AND COMPUTER-READABLE MEDIUM FOR CONFIGURING A HEARING DEVICE, CONTROLLER FOR OPERATING A HEARING DEVICE, AND HEARING SYSTEM


(57) A method for configuring a hearing device (12) is provided. The hearing device (12) comprises at least one sound input component (20), at least one sound output component (24), and a sound processor (22), which is coupled to the sound output component (24) and which is configured in accordance with a first sound program for modifying a sound output of the hearing device (12). The method comprises: receiving an audio signal from the at least one sound input component (20) and/or a sensor signal from at least one further sensor (50) as a sound input; determining at least one classification value characterizing the sound input by evaluating the audio signal and/or the sensor signal; determining a second sound program, which is different from the first sound program and which is adapted in accordance with the determined classification value; configuring the sound processor (22) in accordance with the second sound program such that the sound output is modified according to the second sound program; receiving a predetermined user input indicating that a user listening to the sound output does not agree with the configuration in accordance with the second sound program; and reconfiguring the sound processor (22) in accordance with the first sound program.




Description

FIELD OF THE INVENTION



[0001] The invention relates to a method, a computer program, and a computer-readable medium, in which the computer program is stored, for configuring a hearing device. Furthermore, the invention relates to a controller for operating the hearing device, and to a hearing system comprising at least one hearing device and optionally a connected user device, such as a smartphone.

BACKGROUND OF THE INVENTION



[0002] Hearing devices are generally small and complex devices. Hearing devices can include a processor, a microphone as a sound input component, an integrated loudspeaker as a sound output component, a memory, a housing, and other electronic and mechanical components. Some example hearing devices are Behind-The-Ear (BTE), Receiver-In-Canal (RIC), In-The-Ear (ITE), Completely-In-Canal (CIC), and Invisible-In-The-Canal (IIC) devices. A user can prefer one of these hearing devices over another based on hearing loss, aesthetic preferences, lifestyle needs, and budget.

[0003] In modern hearing devices, numerous features are implemented to facilitate speech intelligibility or to improve hearing comfort for the user. However, the benefit of these features varies strongly depending on the acoustic environment of the user. Therefore, conventional hearing aids continuously classify the acoustic situation, e.g. the acoustic environment, of the wearer in order to automatically adapt the feature parameters, such as the Noise Canceller or the Beamformer Strength, if the acoustic situation changes. Depending on the classified acoustic situation, a set of feature parameters is selected as a determined sound program. Because of the adaptation to the acoustic situation, the user might perceive a switch into a new sound program as sudden and/or unexpected. Further, the accuracy of the classification system may be limited, which may lead to a misclassification of the situation. Additionally, the hearing intention of the user may not be considered by the classifier; e.g. the user wants to communicate at a concert, but the hearing aid adapts to the music instead of to the conversation partner.

[0004] A common way to consider the user's intention is to provide him or her with manual programs with predefined sets of feature parameters that the user can switch between by pressing a button on the hearing instrument. A more modern approach is to allow the user to adjust the individual parameters directly via a mobile application. However, these solutions require a certain degree of understanding of how the features affect the listening impression. Also, the benefit of many features comes with compromises (e.g. stronger noise reduction leads to reduced sound quality). Understanding these compromises is a complex matter for users without a technical affinity. It may also take too long until the user has the paired smartphone available or has found the right manual program on the hearing aid, so the user experience may not be convenient for the user. Further, since the manual programs have to be set up in advance, an adequate manual program for a specific situation might simply not be available quickly.

DESCRIPTION OF THE INVENTION



[0005] It is an objective of the invention to provide a method, a computer program, and a computer-readable medium, in which the computer program is stored, for configuring a hearing device, as well as a controller for operating the hearing device and a system comprising the hearing device. It is a further objective of the invention to provide a convenient user experience to the user wearing the hearing device.

[0006] These objectives are achieved by the subject-matter of the independent claims. Further exemplary embodiments are evident from the dependent claims and the following description.

[0007] A first aspect of the invention relates to a method for configuring a hearing device. The hearing device comprises at least one sound input component, at least one sound output component, and a sound processor, which is coupled to the sound output component and which is configured in accordance with a first sound program for modifying a sound output of the hearing device. The method comprises: receiving an audio signal from the at least one sound input component and/or a sensor signal from at least one further sensor as a sound input; determining at least one classification value characterizing the sound input by evaluating the audio signal and/or the sensor signal; determining a second sound program, which is different from the first sound program and which is adapted in accordance with the determined classification value; configuring the sound processor in accordance with the second sound program such that the sound output is modified according to the second sound program; receiving a predetermined user input indicating that a user listening to the sound output does not agree with the configuration in accordance with the second sound program; and reconfiguring the sound processor in accordance with the first sound program.
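Purely by way of illustration, the sequence of steps above may be sketched as follows in Python. All names (SoundProgram, classify, adapt_program, SoundProcessor) and all parameter values are invented for this sketch and are not taken from the application; the sketch only shows one possible reading of the claimed flow, including the revert on the predetermined user input.

```python
from dataclasses import dataclass

# Hypothetical parameter set representing a sound program (invented for this sketch).
@dataclass
class SoundProgram:
    name: str
    noise_canceller: float      # 0.0 .. 1.0
    beamformer_strength: float  # 0.0 .. 1.0

def classify(audio_rms: float, speech_detected: bool) -> str:
    """Toy classifier: derive a classification value from simple signal features."""
    if speech_detected and audio_rms > 0.5:
        return "SpeechInNoise"
    if speech_detected:
        return "SpeechInQuiet"
    return "Quiet"

def adapt_program(classification: str, base: SoundProgram) -> SoundProgram:
    """Derive a second sound program adapted to the classification value."""
    if classification == "SpeechInNoise":
        return SoundProgram("auto", noise_canceller=0.8, beamformer_strength=0.9)
    return base

class SoundProcessor:
    """Stand-in for the sound processor; here it only stores the active program."""
    def __init__(self, program: SoundProgram):
        self.program = program

    def configure(self, program: SoundProgram) -> None:
        self.program = program

# Example run of the claimed sequence.
first = SoundProgram("calm", noise_canceller=0.2, beamformer_strength=0.1)
processor = SoundProcessor(first)

classification = classify(audio_rms=0.7, speech_detected=True)  # determine classification value
second = adapt_program(classification, first)                   # determine second sound program
processor.configure(second)                                     # configure with second program

user_disagrees = True                                           # predetermined user input received
if user_disagrees:
    processor.configure(first)                                  # reconfigure with first program
print(processor.program.name)  # -> "calm"
```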

[0008] The method may be a computer-implemented method, which may be performed automatically by a hearing system of which the user's hearing device is a part. The hearing system may, for instance, comprise one or two hearing devices used by the same user. One or both of the hearing devices may be worn on and/or in an ear of the user. A hearing device may be a hearing aid, which may be adapted for compensating a hearing loss of the user. A cochlear implant may also be a hearing device. The hearing system may optionally further comprise at least one connected user device, such as a smartphone, smartwatch or other device carried by the user and/or a personal computer etc. The further sensor(s) may be any type(s) of physical sensor(s) - e.g. an accelerometer and/or optical and/or temperature sensor - integrated in the hearing device or possibly also in a connected user device such as a smartphone or a smartwatch. The first and/or second sound program may be referred to as a sound processing feature. The sound processing feature may for example be a Noise Canceller or a Beamformer Strength. The sound input may correspond to the user's speaking activity and/or the user's acoustic environment.

[0009] The reconfiguration of the sound processor in accordance with the first sound program provides a revert function that allows the user to immediately return to the previous automatic setting, i.e. the first sound program. The revert function empowers the user to revert automatic changes that are not in agreement with his hearing intention. When the user notices an undesired change to the acoustics of his surroundings, he provides the user input indicating that he does not agree with the configuration in accordance with the second sound program to return to the preferred previous setting.

[0010] The major advantage of the above revert function over common interfaces is that the user can make changes to the hearing system, in particular the hearing device, without needing knowledge about the technical details. The user only expresses his disagreement with the classification of his environment. This is considered a great facilitation compared to common methods of interacting with the hearing instrument.

[0011] According to an embodiment of the invention, a determination algorithm for determining whether the first sound program is adapted to the determined classification value is adapted depending on the feedback of the user represented by the predetermined user input, such that the hearing device is able to learn the preferences of the user and to consider them in a future determination process. Thus, the revert function also may deliver real-life feedback data on how happy the user is with the current classifier system comprising the determination algorithm and/or for which situations the determination algorithm and the corresponding automatic sound program steering procedures may be adapted. In one or more embodiments, the adaptation of the determination algorithm may only be carried out if the predetermined user input has been given a predetermined number of times under a similar speaking activity and/or acoustic environment.
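The counting condition mentioned above could, for instance, be realized as sketched below. RevertLearner, the threshold of three reverts, and the chosen adaptation (suppressing further automatic switches for that situation) are invented assumptions for illustration only.

```python
from collections import Counter

class RevertLearner:
    """Sketch: adapt the determination algorithm only after the user has reverted
    a given classification value a predetermined number of times."""

    def __init__(self, revert_threshold: int = 3):
        self.revert_threshold = revert_threshold
        self.revert_counts = Counter()  # reverts per classification value
        self.suppressed = set()         # classifications no longer switched automatically

    def register_revert(self, classification: str) -> None:
        self.revert_counts[classification] += 1
        if self.revert_counts[classification] >= self.revert_threshold:
            # One possible adaptation: stop automatic switching for this situation.
            self.suppressed.add(classification)

    def allows_automatic_switch(self, classification: str) -> bool:
        return classification not in self.suppressed

learner = RevertLearner(revert_threshold=3)
for _ in range(3):
    learner.register_revert("Music")                         # user keeps reverting in "Music" situations
print(learner.allows_automatic_switch("Music"))          # -> False
print(learner.allows_automatic_switch("SpeechInNoise"))  # -> True
```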

[0012] According to an embodiment of the invention, the predetermined user input is input via input means of the hearing device, an application of a mobile device, and/or a gesture detection. The gesture detection may be carried out by the hearing device, e.g. by a tap control with an accelerometer or pressure sensor of the hearing device. Alternatively, the gesture detection may be carried out by the connected user device.
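A tap-based gesture detection with an accelerometer could, in the simplest case, resemble the following toy sketch. The threshold, spacing values and the notion of a double tap are invented for illustration and do not describe the detection of any actual device.

```python
def detect_double_tap(accel_samples, threshold=2.5, min_gap=3, max_gap=20):
    """Toy double-tap detector over accelerometer magnitude samples.

    A 'tap' is a sample exceeding the threshold; a double tap is two taps whose
    spacing (in samples) lies between min_gap and max_gap. Purely illustrative.
    """
    taps = [i for i, a in enumerate(accel_samples) if a > threshold]
    for first, second in zip(taps, taps[1:]):
        if min_gap <= second - first <= max_gap:
            return True
    return False

# Quiet signal with two short spikes about 10 samples apart -> recognized as the user input.
samples = [0.1] * 40
samples[10], samples[20] = 3.0, 3.1
print(detect_double_tap(samples))  # -> True
```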

[0013] According to an embodiment of the invention, if the predetermined user input is input by the user, although the sound program has not been changed, the hearing device provides a predetermined output to the user, which informs the user that the sound program has not been changed. For example, the acoustic environment of the user changes and the user perceives a change of his listening experience. Then, the user may believe that this change was induced by an automatic change of the sound program and may provide the predetermined user input indicating that he does not agree with this alleged change of the sound program. In this case, the predetermined output provides the user with the information that the sound program has not been changed automatically. So, the user knows that the change of the listening experience has an external cause and is not induced by an internal change of the hearing device. So, the predetermined output may enable a differentiation between the internal and the external change.
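The behaviour described above may be illustrated by the following minimal sketch. Processor, handle_user_disagreement and the wording of the predetermined output are hypothetical names and texts introduced only for this example.

```python
class Processor:
    """Minimal stand-in for the sound processor; only tracks the active program name."""
    def __init__(self, program):
        self.program = program

    def configure(self, program):
        self.program = program

def handle_user_disagreement(processor, first_program, program_was_changed, notify):
    """Hypothetical handler for the predetermined user input: revert to the first
    program if an automatic switch took place, otherwise emit the predetermined
    output informing the user that no program change has occurred."""
    if program_was_changed:
        processor.configure(first_program)
    else:
        notify("Sound program has not been changed; the perceived change has an external cause.")

proc = Processor("second")
handle_user_disagreement(proc, "first", program_was_changed=True, notify=print)
print(proc.program)  # -> "first"
handle_user_disagreement(proc, "first", program_was_changed=False, notify=print)  # prints the notice
```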

[0014] According to an embodiment of the invention, the at least one classification value is determined by characterizing the user's speaking activity and/or the user's acoustic environment.

[0015] According to an embodiment of the invention, the at least one classification value is determined by identifying a predetermined state characterizing the user's speaking activity and/or the user's acoustic environment by evaluating the audio signal, and by determining the at least one classification value depending on the identified state.

[0016] According to an embodiment of the invention, the one or more predetermined states are one or more of the following: Speech In Quiet; Speech In Noise; Being In Car; Reverberant Speech; Noise; Music; Quiet; Speech In Loud Noise.
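For illustration only, these predetermined states could be represented by a simple enumeration; the mapping to a classification value shown below is an invented example and not part of the application.

```python
from enum import Enum, auto

class AcousticState(Enum):
    """Hypothetical enumeration of the predetermined states listed above."""
    SPEECH_IN_QUIET = auto()
    SPEECH_IN_NOISE = auto()
    BEING_IN_CAR = auto()
    REVERBERANT_SPEECH = auto()
    NOISE = auto()
    MUSIC = auto()
    QUIET = auto()
    SPEECH_IN_LOUD_NOISE = auto()

def classification_value(state: AcousticState) -> dict:
    """Map an identified state to an (illustrative) classification value."""
    return {"state": state.name, "speech_present": "SPEECH" in state.name}

print(classification_value(AcousticState.SPEECH_IN_NOISE))
# -> {'state': 'SPEECH_IN_NOISE', 'speech_present': True}
```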

[0017] According to an embodiment of the invention, two or more classification values characterizing the user's speaking activity and/or the user's acoustic environment are determined by evaluating the audio signal and/or the sensor signal; and the second sound program is adapted to the corresponding determined classification values.

[0018] According to an embodiment of the invention, the one or more predetermined classification values are identified based on the audio signal from the at least one sound input component and/or the sensor signal from the at least one further sensor received over a predetermined time interval.

[0019] According to an embodiment of the invention, the one or more predetermined classification values are identified based on the audio signal from the at least one sound input component and/or the sensor signal from the at least one further sensor received over two identical predetermined time intervals separated by a predetermined pause interval.
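A minimal sketch of such an evaluation over two identical intervals separated by a pause is given below; the window and pause lengths and the simple level feature are invented for illustration under the assumption that a classifier might require both intervals to agree.

```python
def mean_level(window):
    return sum(abs(x) for x in window) / len(window)

def features_over_split_windows(samples, window_len, pause_len):
    """Illustrative evaluation over two identical time intervals separated by a pause
    (all lengths in samples, values invented). A classifier could, for example,
    require both windows to yield a similar level before accepting a classification."""
    first = samples[:window_len]
    second = samples[window_len + pause_len:2 * window_len + pause_len]
    return mean_level(first), mean_level(second)

# Loudness rises after the pause, so the two intervals disagree.
signal = [0.2] * 100 + [0.0] * 20 + [0.8] * 100
print(features_over_split_windows(signal, window_len=100, pause_len=20))
# -> approximately (0.2, 0.8)
```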

[0020] Further aspects of the invention relate to a computer program for configuring a hearing device for a user, with the hearing device comprising at least one sound input component, at least one sound output component, and a sound processor, which is coupled to the sound output component and which is configured in accordance with a first sound program of the hearing device, wherein the program, when being executed by a processor, is adapted to carry out the steps of the method described above and below.

[0021] For example, the computer program may be executed in a processor of a hearing device, which hearing device, for example, may be carried by the person behind the ear. The computer-readable medium may be a memory of this hearing device. The computer program also may be executed by a processor of a connected user device, such as a smartphone or any other type of mobile device, which may be a part of the hearing system, and the computer-readable medium may be a memory of the connected user device. It also may be that some steps of the method are performed by the hearing device and other steps of the method are performed by the connected user device.

[0022] Further aspects of the invention relate to a computer-readable medium, in which the computer program is stored. In general, the computer-readable medium may be a floppy disk, a hard disk, a USB (Universal Serial Bus) storage device, a RAM (Random Access Memory), a ROM (Read Only Memory), an EPROM (Erasable Programmable Read Only Memory) or a FLASH memory. The computer-readable medium may also be a data communication network, e.g. the Internet, which allows downloading a program code. The computer-readable medium may be a non-transitory or transitory medium.

[0023] A further aspect of the invention relates to a controller for operating the hearing device, the controller comprising a processor, which is adapted to carry out the steps of the above method.

[0024] A further aspect of the invention relates to a hearing system comprising the hearing device worn by the hearing device user and a connected user device, wherein the hearing system comprises: a sound input component; a processor for processing a signal from the sound input component; a sound output component for outputting the processed signal to an ear of the user of the hearing device; a transceiver for exchanging data with the connected user device; at least one classifier configured to identify one or more predetermined classification values based on a signal from the at least one sound input component and/or from at least one further sensor; and wherein the hearing system is adapted for performing the above method.

[0025] The hearing system may further include, by way of example, a second hearing device worn by the same user and/or a connected user device, such as a smartphone or other mobile device or personal computer, used by the same user.

[0026] According to an embodiment, the hearing system further comprises a mobile device, which includes the classifier.

[0027] It has to be understood that features of the method as described above and in the following may be features of the computer program, the computer-readable medium, the controller and/or the hearing system as described above and in the following, and vice versa.

[0028] These and other aspects of the invention will be apparent from and elucidated with reference to the embodiments described hereinafter.

BRIEF DESCRIPTION OF THE DRAWINGS



[0029] Below, embodiments of the present invention are described in more detail with reference to the attached drawings.

Fig. 1 schematically shows a hearing device according to an embodiment of the invention.

Fig. 2 schematically shows a hearing system according to an embodiment of the invention.

Fig. 3 shows a flow diagram of a method for configuring a hearing device, according to an embodiment of the invention.



[0030] The reference symbols used in the drawings, and their meanings, are listed in summary form in the list of reference symbols. In principle, identical parts are provided with the same reference symbols in the figures.

DETAILED DESCRIPTION OF EXEMPLARY EMBODIMENTS



[0031] Fig. 1 schematically shows a hearing device 12 according to an embodiment of the invention. The hearing device 12 is formed as a behind-the-ear device carried by a hearing device user (not shown). It has to be noted that the hearing device 12 is a specific embodiment and that the method described herein also may be performed with other types of hearing devices, such as an in-the-ear device.

[0032] The hearing device 12 comprises a part 15 behind the ear and a part 16 to be put in the ear channel of the user. The part 15 and the part 16 are connected by a tube 18. In the part 15, at least one sound input component 20, e.g. a microphone, a sound processor 22 and a sound output component 24, such as a loudspeaker, are provided. The sound input component 20 may acquire environmental sound of the user and may generate a sound signal. The sound processor 22 may amplify the sound signal. The sound output component 24 may generate sound from the amplified sound signal and the sound may be guided through the tube 18 and the in-the-ear part 16 into the ear channel of the user.

[0033] The hearing device 12 may comprise a processor 26 which is adapted for adjusting parameters of the sound processor 22, e.g. such that an output volume of the sound signal is adjusted based on an input volume. These parameters may be determined by a computer program, referred to as a sound program, run in the processor 26. For example, with an input means 28, e.g. a knob, of the hearing device 12, a user may select a modifier (such as bass, treble, noise suppression, dynamic volume, etc.), and levels and/or values of these modifiers may be selected. From this modifier, an adjustment command may be created and processed as described above and below. In particular, processing parameters may be determined based on the adjustment command and, based on this, for example, the frequency dependent gain and the dynamic volume of the sound processor 22 may be changed. All these functions may be implemented as different sound programs stored in a memory 30 of the hearing device 12, which sound programs may be executed by the processor 22.

[0034] The hearing device 12 further comprises a transceiver 32 which may be adapted for wireless data communication with a transceiver 34 of a connected user device 70 (see figure 2).

[0035] The hearing device 12 further comprises at least one classifier 48 configured to identify one or more predetermined classification values based on a signal from the sound input component 20 and/or from at least one further sensor 50 (see figure 2), e.g. an accelerometer and/or an optical and/or temperature sensor. The classification value may be used to determine a sound program, which may be automatically used by the hearing device 12, in particular depending on a sound input received via the sound input component 20 and/or the sensor 50. The sound input may correspond to a speaking activity and/or acoustic environment of the user.

[0036] The hearing device 12 is configured for performing a method for configuring the hearing device 12 according to the present invention.

[0037] Fig. 2 schematically shows a hearing system 60 according to an embodiment of the invention. The hearing system 60 includes a hearing device, e.g. the above hearing device 12 and a connected user device 70, such as a smartphone or a tablet computer. The connected user device 70 may comprise the transceiver 34, a processor 36, a memory 38, a graphical user interface 40 and a display 42. Alternatively or additionally to the classifier 48 of the hearing device 12, the connected user device 70 may comprise the classifier 48 or a further classifier 48.

[0038] With the hearing system 60 it is possible that the above-mentioned modifiers and their levels and/or values are adjusted with the connected user device 70 and/or that the adjustment command is generated with the connected user device 70. This may be performed with a computer program run in the processor 36 of the connected user device 70 and stored in the memory 38 of the connected user device 70. The computer program may provide the graphical user interface 40 on the display 42 of the connected user device 70.

[0039] For example, for adjusting the modifier, such as volume, the graphical user interface 40 may comprise a control element 44, such as a slider. When the user adjusts the slider, an adjustment command may be generated, which will change the sound processing of the hearing device 12 as described above and below. Alternatively or additionally, the user may adjust the modifier with the hearing device 12 itself, for example via the input means 28.

[0040] Fig. 3 shows an example of a flow diagram of a method for configuring a hearing device, according to an embodiment of the invention. The method may be a computer-implemented method performed automatically in the hearing device 12 of Fig. 1 and/or the hearing system 60 of Fig. 2.

[0041] In optional step S2 of the method, in case the hearing device 12 currently provides a sound output to the user, the sound output may be modified in accordance with a first sound program. In general, a sound program may be referred to as a sound processing feature; the sound processing feature may for example be a Noise Canceller or a Beamformer Strength.

[0042] In step S4 of the method, an audio signal from the at least one sound input component 20 and/or a sensor signal from the at least one further sensor 50 is received, e.g. by the sound processor 22 and the processor 26 of the hearing device 12.

[0043] In step S6 of the method, the signal(s) received in step S4 are evaluated by the one or more classifiers 48 implemented in the hearing device 12 and/or the connected user device 70 so as to identify a state corresponding to the user's speaking activity and/or the user's acoustic environment, and at least one classification value is determined depending on the identified state. The one or more classification values characterize the identified state. The identified classification value(s) may be, for example, output by one of the classifiers 48 to one or both of the processors 26, 36. It also may be that at least one of the classifiers 48 is implemented in the corresponding processor 26, 36 itself or is stored as a program module in the memory 30, 38 so as to be performed by the corresponding processor 26, 36. As already mentioned herein above, all or some of the steps of the method are performed by the processor 26 of the hearing device 12 and/or by the processor 36 of the connected user device 70.
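Step S6 could be illustrated by the following toy state identification. The features, thresholds and the use of an accelerometer signal are invented assumptions; a real classifier 48 in the hearing device 12 or the connected user device 70 would be considerably more elaborate.

```python
import math

def identify_state(audio, accel, speech_flag):
    """Toy state identification from an audio frame and an accelerometer frame.

    All thresholds are invented for illustration only.
    """
    rms = math.sqrt(sum(x * x for x in audio) / len(audio))    # simple level feature
    motion = sum(abs(a) for a in accel) / len(accel)           # simple motion feature
    if motion > 1.0 and not speech_flag:
        return "BeingInCar"
    if speech_flag:
        return "SpeechInNoise" if rms > 0.5 else "SpeechInQuiet"
    return "Noise" if rms > 0.5 else "Quiet"

print(identify_state(audio=[0.7, -0.6, 0.8], accel=[0.1, 0.0], speech_flag=True))
# -> "SpeechInNoise"
```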

[0044] The identified state may be one or more of the group of Speech In Quiet, Speech In Noise, Being In Car, Reverberant Speech, Noise, Music, Quiet, and Speech In Loud Noise. Optionally, two or more classification values characterizing the user's speaking activity and/or the user's acoustic environment may be determined by evaluating the audio signal and/or the sensor signal. In this case, the second sound program may be adapted to the corresponding two or more determined classification values. The one or more predetermined classification values may be identified based on the audio signal from the at least one sound input component 20 and/or the sensor signal from the at least one further sensor 50 received over one or more predetermined time intervals, e.g. over two identical predetermined time intervals separated by a predetermined pause interval.

[0045] In step S8 of the method, a second sound program is determined. The second sound program is different from the first sound program and is adapted in accordance with the determined classification value in order to provide the optimal listening experience to the user based on the identified speaking activity and/or acoustic environment of the user. For example, in the second sound program the settings of the Noise Canceller and/or the Beamformer Strength differ from those in the first sound program.
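Purely as an illustration of step S8, a table-driven determination of the second sound program could look as follows. The parameter values are invented and only show that the second program's feature parameters differ from those of the first program.

```python
# Hypothetical lookup of feature parameters per classification value (values invented).
PROGRAM_TABLE = {
    "Quiet":             {"noise_canceller": 0.1, "beamformer_strength": 0.0},
    "SpeechInQuiet":     {"noise_canceller": 0.2, "beamformer_strength": 0.2},
    "SpeechInNoise":     {"noise_canceller": 0.7, "beamformer_strength": 0.8},
    "SpeechInLoudNoise": {"noise_canceller": 0.9, "beamformer_strength": 1.0},
    "Music":             {"noise_canceller": 0.0, "beamformer_strength": 0.0},
}

def determine_second_program(classification, first_program):
    """Return a parameter set for the classification, or keep the first program
    if the table has no better match."""
    return PROGRAM_TABLE.get(classification, first_program)

first = {"noise_canceller": 0.2, "beamformer_strength": 0.2}
print(determine_second_program("SpeechInLoudNoise", first))
# -> {'noise_canceller': 0.9, 'beamformer_strength': 1.0}
```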

[0046] In step S10 of the method, the sound processor 22 is configured in accordance with the second sound program.

[0047] In step S12 of the method, a predetermined user input is received. The predetermined user input indicates that the user listening to the sound output does not agree with the configuration in accordance with the second sound program. The predetermined user input may be input via the input means 28 of the hearing device 12, an application of the connected user device 70, and/or a gesture detection, which may be carried out by the hearing device 12 and/or the connected user device 70.

[0048] In step S14 of the method, the sound processor 22 is reconfigured in accordance with the first sound program. However, if the predetermined user input is input by the user although the sound program has not been changed, the hearing device 12 may provide a predetermined output to the user, which informs the user that the sound program has not been changed.

[0049] In an optional step S16 of the method, a determination algorithm for determining whether the first sound program is adapted to the determined classification value is adapted depending on the feedback of the user represented by the predetermined user input, such that the hearing system 60 is able to learn the preferences of the user and to consider them in a future determination process. For example, an artificial intelligence may be integrated in the hearing system 60, which learns the preferences of the user in order to provide the optimal listening experience to the user.
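One possible, purely illustrative realization of such a preference-learning rule is sketched below. PreferenceModel, the per-classification confidence threshold and the update constants are invented assumptions and do not describe the determination algorithm of the application.

```python
class PreferenceModel:
    """Sketch of a simple preference-learning rule: raise the switching threshold
    for classification values the user keeps reverting, lower it slightly when a
    switch is accepted. All constants are invented for illustration."""

    def __init__(self):
        self.threshold = {}  # per-classification confidence needed to switch

    def should_switch(self, classification: str, confidence: float) -> bool:
        return confidence >= self.threshold.get(classification, 0.5)

    def feedback(self, classification: str, reverted: bool) -> None:
        t = self.threshold.get(classification, 0.5)
        # Reverts push the threshold up (more reluctant), acceptance pulls it down.
        self.threshold[classification] = min(1.0, t + 0.1) if reverted else max(0.3, t - 0.02)

model = PreferenceModel()
for _ in range(4):
    model.feedback("Music", reverted=True)                    # user keeps reverting in "Music"
print(model.should_switch("Music", confidence=0.8))           # -> False
print(model.should_switch("SpeechInNoise", confidence=0.8))   # -> True
```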

[0050] While the invention has been illustrated and described in detail in the drawings and foregoing description, such illustration and description are to be considered illustrative or exemplary and not restrictive; the invention is not limited to the disclosed embodiments. Other variations to the disclosed embodiments can be understood and effected by those skilled in the art and practicing the claimed invention, from a study of the drawings, the disclosure, and the appended claims. In the claims, the word "comprising" does not exclude other elements or steps, and the indefinite article "a" or "an" does not exclude a plurality. A single processor or controller or other unit may fulfill the functions of several items recited in the claims. The mere fact that certain measures are recited in mutually different dependent claims does not indicate that a combination of these measures cannot be used to advantage. Any reference signs in the claims should not be construed as limiting the scope.

LIST OF REFERENCE SYMBOLS



[0051] 
12 hearing device
15 part behind the ear
16 part in the ear
18 tube
20 sound input component
22 sound processor
24 sound output component
26 processor of hearing device
28 input means
30 memory of hearing device
32 transceiver of hearing device
34 transceiver of connected user device
36 processor of connected user device
38 memory of connected user device
40 graphical user interface
42 display
44 control element, slider
48 classifier
50 further sensor
60 hearing system
70 connected user device



Claims

1. A method for configuring a hearing device (12), the hearing device (12) comprising at least one sound input component (20), at least one sound output component (24), and a sound processor (22), which is coupled to the sound output component (24) and which is configured in accordance with a first sound program for modifying a sound output of the hearing device (12), the method comprising:

receiving an audio signal from the at least one sound input component (20) and/or a sensor signal from at least one further sensor (50) as a sound input;

determining at least one classification value characterizing the sound input by evaluating the audio signal and/or the sensor signal;

determining a second sound program, which is different from the first sound program and which is adapted in accordance with the determined classification value;

configuring the sound processor (22) in accordance with the second sound program such that the sound output is modified according to the second sound program;

receiving a predetermined user input indicating that a user listening to the sound output does not agree with the configuration in accordance with the second sound program; and

reconfiguring the sound processor (22) in accordance with the first sound program.


 
2. The method of claim 1, wherein
a determination algorithm for determining, whether the first sound program is adapted to the determined classification value, is adapted depending on the feedback of the user represented by the predetermined user input such that the hearing device (12) is able to learn the preferences of the user and to consider them in a future determination process.
 
3. The method of one of the previous claims, wherein
the predetermined user input is input via input means (28) of the hearing device (12), an application of a connected user device (70), and/or a gesture detection.
 
4. The method of one of the previous claims, wherein,
if the predetermined user input is input by the user, although the sound program has not been changed, the hearing device (12) provides a predetermined output to the user, which informs the user that the sound program has not been changed.
 
5. The method of one of the previous claims, wherein
the at least one classification value is determined by characterizing the user's speaking activity and/or the user's acoustic environment.
 
6. The method of claim 5, wherein
the at least one classification value is determined by identifying a predetermined state characterizing the user's speaking activity and/or the user's acoustic environment by evaluating the audio signal, and by determining the at least one classification value depending on the identified state.
 
7. The method of one of the previous claims, wherein
the one or more predetermined states are one or more of the following:

Speech In Quiet;

Speech In Noise;

Being In Car;

Reverberant Speech;

Noise;

Music;

Quiet;

Speech In Loud Noise.


 
8. The method of one of the previous claims, wherein
two or more classification values characterizing the user's speaking activity and/or the user's acoustic environment are determined by evaluating the audio signal and/or the sensor signal; and
the second sound program is adapted to the corresponding determined classification values.
 
9. The method of one of the previous claims, wherein
the one or more predetermined classification values are identified based on the audio signal from the at least one sound input component (20) and/or the sensor signal from the at least one further sensor (50) received over a predetermined time interval.
 
10. The method of claim 9, wherein
the one or more predetermined classification values are identified based on the audio signal from the at least one sound input component (20) and/or the sensor signal from the at least one further sensor (50) received over two identical predetermined time intervals separated by a predetermined pause interval.
 
11. A computer program for configuring a hearing device (12) for a user, with the hearing device (12) comprising at least one sound input component (20), at least one sound output component (24), and a sound processor (22), which is coupled to the sound output component (24) and which is configured in accordance with a first sound program of the hearing device (12), wherein the program, when being executed by a processor (26, 36), is adapted to carry out the steps of the method of one of the previous claims.
 
12. A computer-readable medium, in which a computer program according to claim 11 is stored.
 
13. A controller for operating a hearing device (12), the controller comprising a processor (26, 36), which is adapted to carry out the steps of the method of one of claims 1 to 10.
 
14. A hearing system (60) comprising a hearing device (12) worn by a hearing device user and a connected user device (70), wherein the hearing system (60) comprises:

a sound input component (20);

a processor (26) for processing a signal from the sound input component (20);

a sound output component (24) for outputting the processed signal to an ear of the user of the hearing device (12);

a transceiver (32) for exchanging data with the connected user device (70);

at least one classifier (48) configured to identify one or more predetermined classification values based on a signal from the at least one sound input component (20) and/or from at least one further sensor (50); and

wherein the hearing system (60) is adapted for performing the method of one of claims 1 to 10.


 
15. Hearing system (60) in accordance with claim 14, further comprising a mobile device, which includes the classifier (48).
 




Drawing

Search report