<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE ep-patent-document PUBLIC "-//EPO//EP PATENT DOCUMENT 1.7//EN" "ep-patent-document-v1-7.dtd">
<!-- This XML data has been generated under the supervision of the European Patent Office -->
<ep-patent-document id="EP23191289A1" file="EP23191289NWA1.xml" lang="en" country="EP" doc-number="4510624" kind="A1" date-publ="20250219" status="n" dtd-version="ep-patent-document-v1-7">
<SDOBI lang="en"><B000><eptags><B001EP>ATBECHDEDKESFRGBGRITLILUNLSEMCPTIESILTLVFIROMKCYALTRBGCZEEHUPLSKBAHRIS..MTNORSMESMMAKHTNMD..........</B001EP><B005EP>J</B005EP><B007EP>0009012-RPUB02</B007EP></eptags></B000><B100><B110>4510624</B110><B120><B121>EUROPEAN PATENT APPLICATION</B121></B120><B130>A1</B130><B140><date>20250219</date></B140><B190>EP</B190></B100><B200><B210>23191289.0</B210><B220><date>20230814</date></B220><B250>en</B250><B251EP>en</B251EP><B260>en</B260></B200><B400><B405><date>20250219</date><bnum>202508</bnum></B405><B430><date>20250219</date><bnum>202508</bnum></B430></B400><B500><B510EP><classification-ipcr sequence="1"><text>H04R  25/00        20060101AFI20240119BHEP        </text></classification-ipcr></B510EP><B520EP><classifications-cpc><classification-cpc sequence="1"><text>H04R  25/505       20130101 FI20240112BHEP        </text></classification-cpc><classification-cpc sequence="2"><text>H04R  25/70        20130101 LI20240112BHEP        </text></classification-cpc><classification-cpc sequence="3"><text>H04R2225/39        20130101 LA20240112BHEP        </text></classification-cpc><classification-cpc sequence="4"><text>H04R2225/41        20130101 LA20240112BHEP        </text></classification-cpc></classifications-cpc></B520EP><B540><B541>de</B541><B542>VERFAHREN ZUR ERMÖGLICHUNG DER ANPASSUNG EINER HÖRGERÄTEKONFIGURATION FÜR EINEN BENUTZER</B542><B541>en</B541><B542>METHOD TO FACILITATE ADJUSTING A HEARING DEVICE CONFIGURATION FOR A USER</B542><B541>fr</B541><B542>PROCÉDÉ POUR FACILITER LE RÉGLAGE D'UNE CONFIGURATION DE DISPOSITIF AUDITIF POUR UN UTILISATEUR</B542></B540><B590><B598>1</B598></B590></B500><B700><B710><B711><snm>Sonova AG</snm><iid>101535993</iid><irf>E22072.EP</irf><adr><str>Laubisrütistrasse 28</str><city>8712 Stäfa</city><ctry>CH</ctry></adr></B711></B710><B720><B721><snm>Griepentrog, Sebastian</snm><adr><city>8712 Stäfa</city><ctry>CH</ctry></adr></B721><B721><snm>von Holten, Daniel</snm><adr><city>8634 Hombrechtikon</city><ctry>CH</ctry></adr></B721></B720></B700><B800><B840><ctry>AL</ctry><ctry>AT</ctry><ctry>BE</ctry><ctry>BG</ctry><ctry>CH</ctry><ctry>CY</ctry><ctry>CZ</ctry><ctry>DE</ctry><ctry>DK</ctry><ctry>EE</ctry><ctry>ES</ctry><ctry>FI</ctry><ctry>FR</ctry><ctry>GB</ctry><ctry>GR</ctry><ctry>HR</ctry><ctry>HU</ctry><ctry>IE</ctry><ctry>IS</ctry><ctry>IT</ctry><ctry>LI</ctry><ctry>LT</ctry><ctry>LU</ctry><ctry>LV</ctry><ctry>MC</ctry><ctry>ME</ctry><ctry>MK</ctry><ctry>MT</ctry><ctry>NL</ctry><ctry>NO</ctry><ctry>PL</ctry><ctry>PT</ctry><ctry>RO</ctry><ctry>RS</ctry><ctry>SE</ctry><ctry>SI</ctry><ctry>SK</ctry><ctry>SM</ctry><ctry>TR</ctry></B840><B844EP><B845EP><ctry>BA</ctry></B845EP></B844EP><B848EP><B849EP><ctry>KH</ctry></B849EP><B849EP><ctry>MA</ctry></B849EP><B849EP><ctry>MD</ctry></B849EP><B849EP><ctry>TN</ctry></B849EP></B848EP></B800></SDOBI>
<abstract id="abst" lang="en">
<p id="pa01" num="0001">The disclosure relates to a method of adjusting a configuration of a hearing device configured to be worn at an ear of a user to the individual needs of the user, wherein the hearing device (110, 210) is communicatively coupled to a communication device (150, 410, 510), the method comprising<br/>
- initiating querying of a user command to be entered by the user via a user interface (157, 428) included in the communication device (150, 410, 510), the user command indicative of an adjustment desired by the user of at least one configuration parameter indicative of a current configuration of the hearing device (110, 210); and<br/>
- adjusting, depending on the user command, the configuration parameter.</p>
<p id="pa02" num="0002">The disclosure further relates to a hearing system configured to perform the method.</p>
<p id="pa03" num="0003">To provide for a user-friendlier adjustability of a current configuration of the hearing device, the disclosure proposes that the method further comprises<br/>
- receiving image information (711) representative of a facial expression (621, 622) of the user; and<br/>
- initiating presenting, depending on the facial expression (621, 622), an input support (613, 662, 671, 672) to the user facilitating inputting of the user command.
<img id="iaf01" file="imgaf001.tif" wi="133" he="90" img-content="drawing" img-format="tif"/></p>
</abstract>
<description id="desc" lang="en"><!-- EPO <DP n="1"> -->
<heading id="h0001">TECHNICAL FIELD</heading>
<p id="p0001" num="0001">The disclosure relates to method of adjusting a configuration of a hearing device configured to be worn at an ear of a user to the individual needs of the user, according to the preamble of claim 1. The disclosure further relates to a system for fitting a hearing device configured to be worn at an ear of a user to the individual needs of a user, according to the preamble of claim 15.</p>
<heading id="h0002">BACKGROUND</heading>
<p id="p0002" num="0002">Hearing devices may be used to improve the hearing capability or communication capability of a user, for instance by compensating a hearing loss of a hearing-impaired user, in which case the hearing device is commonly referred to as a hearing instrument such as a hearing aid, or hearing prosthesis. A hearing device may also be used to output sound based on an audio signal which may be communicated by a wire or wirelessly to the hearing device. A hearing device may also be used to reproduce a sound in a user's ear canal detected by an input transducer such as a microphone or a microphone array. The reproduced sound may be amplified to account for a hearing loss, such as in a hearing instrument, or may be output without accounting for a hearing loss, for instance to provide for a faithful reproduction of detected ambient sound and/or to add audio features of an augmented reality in the reproduced ambient sound, such as in a hearable. A hearing device may also provide for a situational enhancement of an acoustic scene, e.g. beamforming and/or active noise cancelling (ANC), with or without amplification of the reproduced sound. A hearing device may also be implemented as a hearing protection device, such as an earplug, configured to protect the user's hearing. Different types of hearing devices configured to be be worn at an ear include earbuds, earphones, hearables, and hearing instruments such as receiver-in-the-canal (RIC) hearing aids, behind-the-ear (BTE) hearing aids, in-the-ear (ITE) hearing aids, invisible-in-the-canal (IIC) hearing aids, completely-in-the-canal (CIC) hearing aids, cochlear implant systems configured to provide electrical stimulation representative of audio content to a user, a bimodal hearing system configured to provide both amplification and electrical stimulation representative of audio content to a user, or any other suitable hearing prostheses. A hearing system comprising two hearing devices configured to be worn at<!-- EPO <DP n="2"> --> different ears of the user is sometimes also referred to as a binaural hearing device. A hearing system may also comprise a hearing device, e.g., a single monaural hearing device or a binaural hearing device, and a user device, e.g., a smartphone and/or a smartwatch, communicatively coupled to the hearing device.</p>
<p id="p0003" num="0003">Hearing devices are often employed in conjunction with communication devices, such as smartphones or tablets, for instance when listening to sound data processed by the communication device and/or during a phone conversation operated by the communication device. More recently, communication devices have been integrated with hearing devices such that the hearing devices at least partially comprise the functionality of those communication devices. A hearing system may comprise, for instance, a hearing device and a communication device.</p>
<p id="p0004" num="0004">In recent times, some hearing devices are also increasingly equipped with different sensor types. Traditionally, those sensors often include an input transducer to detect a sound, e.g., a sound detector such as a microphone or a microphone array. An amplified and/or signal processed version of the detected sound may then be outputted to the user by an output transducer, e.g., a receiver, loudspeaker, or electrodes to provide electrical stimulation representative of the outputted signal. In an effort to provide the user with even more information about himself and/or the ambient environment, various other sensor types are progressively implemented, in particular sensors which are not directly related to the sound reproduction and/or amplification function of the hearing device. Those sensors include inertial sensors, such as accelerometers, allowing to monitor the user's movements. Physiological sensors, such as optical sensors and bioelectric sensors, are mostly employed for monitoring the user's health.</p>
<p id="p0005" num="0005">When a hearing device is initially provided to a user, and during follow-up tests and checkups thereafter, it is usually necessary to "fit" the hearing device to the user. Traditionally, fitting of a hearing device to a user is typically performed by an audiologist, health care professional (HCP), or the like who presents, e.g., during a during a hearing device fitting session, various stimuli having different loudness levels, e.g., at different frequencies, to the user. The audiologist relies on subjective feedback from the user as to how such stimuli are perceived. The subjective feedback may then be used to generate an audiogram that indicates individual hearing thresholds and loudness comfort levels of the<!-- EPO <DP n="3"> --> user. Depending on the audiogram, a current configuration of the hearing device can be adjusted, e.g., to provide for an amplification of sound compensating an individual hearing loss of the user. Additionally or alternatively, a fitting of the hearing device to the individual needs of the user can provide for an adjustment of a current configuration of the hearing device in various other aspects including, e.g., adjusting of a gain model, frequency and/or gain compression, feedback control, beamforming, noise suppression, communication properties such as wireless communication, speech enhancement, an enhancement of a music content in the audio signal and/or other audio signal processing algorithms executed by the hearing device.</p>
<p id="p0006" num="0006">In more recent times, the user has been enabled to handle at least part of the aspects required for the fitting of the hearing device on his own. E.g., when the user is not fully content with the fitting of his hearing device performed by the HCP, the user may perform a readjustment and/or fine tuning of one or more configuration parameters indicative of the current configuration of the hearing device. As another example, some hearing devices which can be purchased over the counter (OTC) may be fitted by the user himself, e.g., with regard to a desired amplification characteristics and/or other configuration parameters of the hearing device without requiring an additional assistance of an HCP. Furthermore, other configuration parameters of the hearing device such as a control of volume, noise reduction, beamforming, spectral composition and/or the like can be individually adjusted by the user himself.</p>
<p id="p0007" num="0007">To this end, a computer implemented program, such as an App running on a smartphone, may be provided to the user allowing the user to enter a user command indicative of an adjustment desired by the user of at least one configuration parameter of the hearing device. The program may provide for a graphical user interface which may be displayed, e.g., on a screen of a smartphone. The user interface may include one or more input interfaces each allowing to input a respective user command. However, such interaction can be rather complex or tedious, e.g., in cases where there are numerous fitting options to be addressed. Moreover, with such graphical user interfaces, it can be difficult for the user to easily identify and address all of the possible fitting options or finding one of the possible fitting options addressing his particular needs.<!-- EPO <DP n="4"> --></p>
<p id="p0008" num="0008">To mitigate those disadvantages, an input support may be provided to the user which can facilitate entering of the user command for the user. For instance, when multiple input interfaces for entering different user commands are displayed, one of the input interfaces could be highlighted to attract the users attention, or an input option of the user command representing a possible adjustment of the configuration parameter could be presented to the user, or a support message could be outputted to the user. Such an input support, however, could also have negative side effects. E.g., when the user is rather experienced in the fitting process or currently exploring a desired adjustment by entering a dedicated user command, he may feel distracted or confused by the input support. Generally, in some situations, the input support may be perceived as helpful and, in other situations, it may also be perceived as useless. In particular, providing additional information about the fitting as an input support may only facilitate the fitting process when the user is stuck or overtaxed by the fitting procedure.</p>
<heading id="h0003">SUMMARY</heading>
<p id="p0009" num="0009">It is an object of the present disclosure to avoid at least one of the above mentioned disadvantages and to provide for a user-friendlier adjustability of a current configuration of a hearing device, in particular with regard to the individual needs of the user. It is a further object to not overload the user with potentially needless and/or misleading information during an adjustment of the hearing device configuration and/or to provide input support only in situations in which it would be helpful and/or desired by the user. It is another object to provide a user interface for hearing device configuration adjustment optimized for the user's individual needs when it comes to assisting the user in performing the adjustment. It is yet another object to provide a hearing system which is configured to operate in such a manner.</p>
<p id="p0010" num="0010">At least one of these objects can be achieved by a method of adjusting a configuration of a hearing device comprising the features of claim 1 and/or a hearing system comprising the features of claim 15. Advantageous embodiments of the invention are defined by the dependent claims and the following description.</p>
<p id="p0011" num="0011">Accordingly, the present disclosure proposes a method of adjusting a configuration of a hearing device configured to be worn at an ear of a user to the individual needs of the<!-- EPO <DP n="5"> --> user, wherein the hearing device is communicatively coupled to a communication device, the method comprising
<ul id="ul0001" list-style="dash">
<li>initiating querying of a user command to be entered by the user via a user interface included in the communication device, the user command indicative of an adjustment desired by the user of at least one configuration parameter indicative of a current configuration of the hearing device;</li>
<li>adjusting, depending on the user command, the configuration parameter;</li>
<li>receiving image information representative of a facial expression of the user; and</li>
<li>initiating presenting, depending on the facial expression, an input support to the user facilitating entering of the user command.</li>
</ul></p>
<p id="p0012" num="0012">In this way, by taking into account the facial expression of the user when performing the fitting, the input support can be presented in suitable situations, e.g., when the facial expression indicates a certain frustration and/or bafflement and/or confusion and/or astonishment and/or helplessness and/or stress level and/or insecurity of the user. In particular, the facial expression of the user may be taken as an indicator of a cognitive or mental load of the user when operating the user interface. Restricting a presenting of the input support to those situations can thus improve an ease of operation and/or handling and/or user friendliness of the user interface.</p>
<p id="p0013" num="0013">Independently, the present disclosure also proposes a non-transitory computer-readable medium storing instructions that, when executed by a processor, cause a hearing device to perform operations of the method.</p>
<p id="p0014" num="0014">Independently, the present disclosure also proposes a system for adjusting a configuration of a hearing device configured to be worn at an ear of a user to the individual needs of a user, the system comprising a hearing device configured to be worn at an ear of the user and a communication device communicatively coupled to the hearing device, wherein the hearing device and/or the communication device comprises a processor configured to
<ul id="ul0002" list-style="dash">
<li>initiate querying of a user command to be entered by the user via a user interface included in the communication device, the user command indicative of an adjustment<!-- EPO <DP n="6"> --> desired by the user of at least one configuration parameter indicative of a current configuration of the hearing device;</li>
<li>adjust, depending on the user command, the configuration parameter;</li>
<li>receive image information representative of a facial expression of the user; and</li>
<li>initiate presenting, depending on the facial expression, an input support to the user facilitating entering of the user command.</li>
</ul></p>
<p id="p0015" num="0015">Subsequently, additional features of some implementations of the method and/or the hearing system are described. Each of those features can be provided solely or in combination with at least another feature. The features can be correspondingly provided in some implementations of the method and/or the hearing system.</p>
<p id="p0016" num="0016">In some implementations, the facial expression comprises at least one of
<ul id="ul0003" list-style="dash">
<li>a position and/or orientation of the user's eyebrows, e.g. a raising and/or narrowing of the eyebrows, and/or the like;</li>
<li>a dilation and/or position and/or movement of the user's pupils;</li>
<li>a wrinkling of the user's forehead, e.g. a frowning between the eyebrows, and/or the like; and</li>
<li>a shape of the user's mouth.</li>
</ul></p>
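For illustration only, the facial-expression features listed above could, once extracted from the image information, be represented as a simple data structure such as the following; the field names and value ranges are assumptions.
```python
from dataclasses import dataclass

@dataclass
class FacialExpressionFeatures:
    eyebrow_raise: float       # raising/narrowing of the eyebrows, 0..1
    pupil_dilation: float      # relative pupil dilation, 0..1
    pupil_off_screen: bool     # pupils facing away from the user interface
    forehead_wrinkling: float  # e.g., frowning between the eyebrows, 0..1
    mouth_corner_angle: float  # > 0 raised corners, < 0 lowered corners
```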
<p id="p0017" num="0017">In some implementations, the input support comprises at least one of
<ul id="ul0004" list-style="dash">
<li>modifying, on the user interface, at least one input interface for inputting the user command, e.g., from a plurality of input interfaces for inputting different user commands;</li>
<li>adding, on the user interface, at least one input interface for inputting the user command;</li>
<li>presenting, on the user interface, an input option of the user command representing a possible adjustment of the configuration parameter;</li>
<li>changing, on the user interface, a layout on which at least one input interface for inputting the user command is presented to the user; and</li>
<li>outputting a support message to the user.</li>
</ul><!-- EPO <DP n="7"> --></p>
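A sketch of how the input-support variants listed above might be enumerated in software follows; the names are illustrative assumptions.
```python
from enum import Enum, auto

class InputSupport(Enum):
    MODIFY_INPUT_INTERFACE = auto()  # e.g., highlight or mask an interface
    ADD_INPUT_INTERFACE = auto()
    PRESENT_INPUT_OPTION = auto()    # propose a possible adjustment
    CHANGE_LAYOUT = auto()
    OUTPUT_SUPPORT_MESSAGE = auto()
```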
<p id="p0018" num="0018">In some implementations, the input option may be presented as a proposal for a user command representing a possible adjustment of the configuration parameter.</p>
<p id="p0019" num="0019">In some implementations, the modifying the input interface may comprise at least of
<ul id="ul0005" list-style="dash">
<li>highlighting, on the user interface, at least one input interface for inputting the user command, e.g., from a plurality of input interfaces for inputting different user commands; and/or</li>
<li>masking and/or removing, on the user interface, at least one input interface for inputting the user command of a plurality of input interfaces.</li>
</ul></p>
<p id="p0020" num="0020">In some implementations, the method further comprises
<ul id="ul0006" list-style="dash">
<li>selecting at least one input interface for inputting the user command, e.g., from a plurality of input interfaces for inputting different user commands, wherein said presenting the input support comprises a modifying of the selected input interface; and/or</li>
<li>determining an input option of the user command representing a possible adjustment of the configuration parameter, wherein said presenting the input support comprises presenting of the determined input option.</li>
</ul></p>
<p id="p0021" num="0021">In some implementations, the at least one input interface is selected from a plurality of input interfaces for inputting different user commands. In some implementations, the input option is determined as a proposal for a user command representing a possible adjustment of the configuration parameter.</p>
<p id="p0022" num="0022">In some implementations, the at least one input interface is selected and/or the input option is determined depending on at least one of sensor data, an audio signal, and at least one user command previously entered by the user and/or at least one input interface previously employed by the user to enter the user command.</p>
<p id="p0023" num="0023">In some implementations, the method further comprises receiving sensor data from a sensor including at least one of
<ul id="ul0007" list-style="none">
<li>an input transducer configured to provide at least part of the sensor data as an audio signal indicative of sound detected in the environment of the user;<!-- EPO <DP n="8"> --></li>
<li>a displacement sensor configured to provide at least part of the sensor data as displacement data indicative of a displacement of the hearing device;</li>
<li>a location sensor configured to provide at least part of the sensor data as location data indicative of a current location of the user;</li>
<li>a clock configured to provide at least part of the sensor data as time data indicative of a current time;</li>
<li>a physiological sensor configured to provide at least part of the sensor data as physiological data indicative of a physiological property of the user; and</li>
<li>an environmental sensor configured to provide at least part of the sensor data as environmental data indicative of a property of the environment of the user,</li>
<li>wherein the input support is presented depending on the sensor data.</li>
</ul></p>
<p id="p0024" num="0024">In some implementations, an input interface for the user command may be selected and/or added and/or modified depending on the sensor data and/or an input option of the user command representing a possible adjustment of the configuration parameter may be determined and/or presented depending on the sensor data.</p>
<p id="p0025" num="0025">In some implementations, the method further comprises
<ul id="ul0008" list-style="dash" compact="compact">
<li>logging, in a memory, the sensor data,</li>
</ul>
wherein an input option of the user command representing a possible adjustment of the configuration parameter is predicted based on the logged sensor data; and/or an input interface is predicted based on the logged sensor data. In some implementations, the predicted input option may be determined to be comprised in the input support and/or the predicted input interface may be selected to be comprised in the input support.</p>
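One conceivable predictor, sketched below under the assumption that the logged sensor data is stored as (context, command) pairs, proposes the adjustment the user has entered most often in a similar sensor context; this is one possibility among many.
```python
from collections import Counter

def predict_input_option(log, current_context, similar):
    """log: iterable of (sensor_context, user_command) pairs;
    similar: predicate deciding whether two sensor contexts match."""
    matches = [cmd for ctx, cmd in log if similar(ctx, current_context)]
    return Counter(matches).most_common(1)[0][0] if matches else None
```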
<p id="p0026" num="0026">In some implementations, the method further comprises
<ul id="ul0009" list-style="dash" compact="compact">
<li>determining an interaction time of the user with the user interface, wherein the input support is presented depending on the interaction time. E.g., when the interaction time exceeds a predetermined threshold, the input support may be presented.</li>
</ul></p>
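The interaction-time criterion could be realized as sketched below; the threshold of 30 seconds is an assumed example value.
```python
import time

INTERACTION_TIMEOUT_S = 30.0  # assumed predetermined threshold

class InteractionTimer:
    def __init__(self):
        self.start = time.monotonic()

    def support_due(self) -> bool:
        # True once the user has interacted with the user interface
        # longer than the predetermined threshold.
        return time.monotonic() - self.start > INTERACTION_TIMEOUT_S
```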
<p id="p0027" num="0027">In some implementations, the method further comprises<!-- EPO <DP n="9"> -->
<ul id="ul0010" list-style="dash" compact="compact">
<li>receiving, from an audio input unit, an audio signal, wherein the input support is presented depending on the audio signal. In some implementations, the audio input unit is an input transducer and/or an audio signal receiver.</li>
</ul></p>
<p id="p0028" num="0028">In some implementations, an input interface for the user command may be selected and/or added and/or modified depending on the audio signal and/or an input option of the user command representing a possible adjustment of the configuration parameter may be determined and/or presented depending on the audio signal.</p>
<p id="p0029" num="0029">In some implementations, the method further comprises
<ul id="ul0011" list-style="dash" compact="compact">
<li>logging, in a memory, the audio signal,</li>
</ul>
wherein an input option of the user command representing a possible adjustment of the configuration parameter is predicted based on the logged audio signal; and/or an input interface is predicted based on the logged audio signal. In some implementations, the predicted input option may be determined to be comprised in the input support and/or the predicted input interface may be selected to be comprised in the input support.</p>
<p id="p0030" num="0030">In some implementations, the method further comprises
<ul id="ul0012" list-style="dash">
<li>logging, in a memory, one or more user commands previously entered by the user,<br/>
wherein an input option of the user command representing a possible adjustment of the configuration parameter is predicted based on the logged user commands; and/or</li>
<li>logging, in a memory, one or more input interfaces for inputting the user command which have been previously used by the user to enter the user command,</li>
</ul>
wherein an input interface is predicted based on the logged input interfaces.</p>
<p id="p0031" num="0031">In some implementations, the predicted input option may be determined to be comprised in the input support and/or the predicted input interface may be selected to be comprised in the input support.</p>
<p id="p0032" num="0032">In some implementations, the method further comprises
<ul id="ul0013" list-style="dash" compact="compact">
<li>logging, in a memory, the audio signal and one or more user commands previously entered by the user, wherein an input option of the user command representing a possible<!-- EPO <DP n="10"> --> adjustment of the configuration parameter is predicted based on the logged user commands; and/or</li>
<li>logging, in a memory, one or more input interfaces for inputting the user command which have been previously used by the user to enter the user command, wherein an input interface is predicted based on the logged input interfaces. In some implementations, the predicted input option may be determined to be comprised in the input support and/or the predicted input interface may be selected to be comprised in the input support.</li>
</ul></p>
<p id="p0033" num="0033">In some implementations, the configuration parameter comprises at least one of
<ul id="ul0014" list-style="dash" compact="compact">
<li>an amplification, e.g., gain, of an audio signal outputted by the hearing device, e.g., an audio signal received by an input transducer;</li>
<li>a control of a feedback of an audio signal outputted by the hearing device;</li>
<li>a property of a beamforming algorithm executed by the hearing device;</li>
<li>a property of a noise suppression algorithm executed by the hearing device;</li>
<li>a property of a communication port included in the hearing device;</li>
<li>a selection of an audio processing algorithm executed by the hearing device, e.g., from a plurality of different audio processing algorithms;</li>
<li>an enhancement of a speech content in an audio signal outputted by the hearing device; and</li>
<li>an enhancement of a music content in an audio signal outputted by the hearing device.</li>
</ul></p>
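Purely as an illustration, some of the configuration parameters listed above could be held in a structure such as the following; field names and default values are assumptions.
```python
from dataclasses import dataclass

@dataclass
class HearingDeviceConfig:
    gain_db: float = 0.0               # amplification of the output signal
    feedback_control_on: bool = True
    beam_width_deg: float = 90.0       # property of the beamforming algorithm
    noise_suppression_strength: float = 0.5
    program: str = "default"           # selected audio processing algorithm
```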
<p id="p0034" num="0034">In some implementations, the user interface comprises at least one of a slider, a touch screen, a push button, and a text and/or numerical input field allowing to input the adjustment desired by the user.</p>
<p id="p0035" num="0035">In some implementations, the image information is provided by an optical sensor included in the communication device. In some implementations, the communication device comprises at least one of a mobile phone; a tablet; a smartwatch; and goggles.</p>
<p id="p0036" num="0036">In some implementations, the method further comprises
<ul id="ul0015" list-style="dash" compact="compact">
<li>relating the image information to previously recorded image information of the user's face and/or to previously recorded image information of people different from the user.</li>
</ul><!-- EPO <DP n="11"> --></p>
<p id="p0037" num="0037">In some implementations, the communication device comprises a display and the input support is displayed on the display.</p>
<p id="p0038" num="0038">In some implementations, the input support comprises a voice message outputted to the user by an output transducer included in the hearing device.</p>
<p id="p0039" num="0039">In some implementations, the hearing device comprises a processor configured to process an audio signal to generate a processed audio signal; and an output transducer configured to output an output audio signal based on the processed audio signal so as to stimulate the user's hearing. In some implementations, the hearing device further comprises an audio input unit configured to provide for the audio signal. In some implementations, the audio input unit comprises an input transducer configured to provide an audio signal indicative of a sound detected in the environment of the user. In some implementations, the audio input unit comprises an audio signal receiver configured to receive the audio signal from a remote location, e.g., as a radio frequency (RF) signal.</p>
<heading id="h0004">BRIEF DESCRIPTION OF THE DRAWINGS</heading>
<p id="p0040" num="0040">Reference will now be made in detail to embodiments, examples of which are illustrated in the accompanying drawings. The drawings illustrate various embodiments and are a part of the specification. The illustrated embodiments are merely examples and do not limit the scope of the disclosure. Throughout the drawings, identical or similar reference numbers designate identical or similar elements. In the drawings:
<dl id="dl0001">
<dt>Fig. 1</dt><dd>schematically illustrates a hearing system comprising an exemplary hearing device and an exemplary communication device;</dd>
<dt>Fig. 2</dt><dd>schematically illustrates an exemplary sensor unit comprising one or more sensors which may be implemented in the hearing device illustrated in <figref idref="f0001">Fig. 1</figref>;</dd>
<dt>Fig. 3</dt><dd>schematically illustrates an embodiment of the hearing device illustrated in <figref idref="f0001">Fig. 1</figref> as a RIC hearing aid;</dd>
<dt>Figs. 4, 5</dt><dd>schematically illustrate exemplary communication devices;</dd>
<dt>Figs. 6A, 6B</dt><dd>schematically illustrate a user interacting with a user interface included in a communication device;<!-- EPO <DP n="12"> --></dd>
<dt>Fig. 7</dt><dd>schematically illustrates a communication device querying a user command to be entered by the user;</dd>
<dt>Figs. 8A - 8C</dt><dd>schematically illustrate the communication device illustrated in <figref idref="f0003">Fig. 7</figref>, wherein an input support is presented to the user; and</dd>
<dt>Figs. 9-12</dt><dd>schematically illustrate exemplary methods of adjusting a configuration of a hearing device according to principles described herein.</dd>
</dl></p>
<heading id="h0005">DETAILED DESCRIPTION OF THE DRAWINGS</heading>
<p id="p0041" num="0041"><figref idref="f0001">FIG. 1</figref> illustrates an exemplary hearing system 101 comprising a hearing device 110 and a communication device 150. Hearing device 110 is configured to be worn at an ear of a user. Hearing device 110 may be implemented by any type of hearing device configured to enable or enhance hearing or a listening experience of a user wearing hearing device 110. For example, hearing device 110 may be implemented by a hearing aid configured to provide an amplified version of audio content to a user, a sound processor included in a cochlear implant system configured to provide electrical stimulation representative of audio content to a user, a sound processor included in a bimodal hearing system configured to provide both amplification and electrical stimulation representative of audio content to a user, or any other suitable hearing prosthesis, or an earbud or an earphone or a hearable.</p>
<p id="p0042" num="0042">Different types of hearing device 110 can also be distinguished by the position at which they are worn at the ear. Some hearing devices, such as behind-the-ear (BTE) hearing aids and receiver-in-the-canal (RIC) hearing aids, typically comprise an earpiece configured to be at least partially inserted into an ear canal of the ear, and an additional housing configured to be worn at a wearing position outside the ear canal, in particular behind the ear of the user. Some other hearing devices, as for instance earbuds, earphones, hearables, in-the-ear (ITE) hearing aids, invisible-in-the-canal (IIC) hearing aids, and completely-in-the-canal (CIC) hearing aids, commonly comprise such an earpiece to be worn at least partially inside the ear canal without an additional housing for wearing at the different ear position.</p>
<p id="p0043" num="0043">Communication device 150 may be implemented by any type of communication device configured to communicate data with hearing device 110. For instance, communication device 150 may be implemented as a wearable device configured to be worn<!-- EPO <DP n="13"> --> be the user, e.g., smart glasses or a smart watch, or as a portable device configured to be ported by the user, e.g., a smart phone, a tablet, or a laptop. Communication device 150 may also be implemented as a stationary device, e.g., a desktop computer.</p>
<p id="p0044" num="0044">As shown, communication device 150 includes a communication port 159 which can be communicatively coupled to a communication port 119 included in hearing device 110. Communication ports 119, 159 may be implemented by any suitable data transmitter and/or data receiver and/or data transducer configured to exchange data with another device. Communication ports 119, 159 may be configured for wired and/or wireless data communication, e.g., via a communication link 122. For instance, data may be communicated in accordance with a Bluetooth<sup>TM</sup> protocol and/or by any other type of radio frequency (RF) communication.</p>
<p id="p0045" num="0045">Communication device 150 further comprises a user interface 157. User interface 157 may be implemented by any suitable interface allowing a user to enter a user command. E.g., the user command may be provided as interaction data indicative of an interaction of the user with user interface 157. For instance, user interface 157 may be implemented as a touch sensor, e.g., a touch screen, and/or a push button and/or a slide and/or a toggle and/or a displacement sensor such as an accelerometer and/or a keyboard and/or a mouse and/or a speech detector configured to recognize speech and/or transform speech into a user command. Communication device 150 may further include a processor 152 communicatively coupled to communication port 159 and user interface 157. Communication device 150 may further include a memory communicatively coupled to processor 152. Communication device 150 may include additional or alternative components as may serve a particular implementation.</p>
<p id="p0046" num="0046">As shown, hearing system 101 further comprises an optical sensor 154. Optical sensor 154 may be implemented by any sensor configured to capture image data of the user, e.g., from the user's face. For instance, optical sensor 154 may be implemented as a camera, e.g., a video camera, a digital camera, a CCD camera, a framing camera, a selfie camera, and/or the like. As illustrated, optical sensor 154 may comprise an internal optical sensor 155 which may be included in communication device 150 and/or an external optical sensor 165 which may be provided externally from communication device 150. In some implementations, optical sensor 154 may comprise an optical sensor included in hearing device 110. Image data provided by external optical sensor 165 and/or other information related to the image<!-- EPO <DP n="14"> --> data, such as information about a facial expression of the user, may be transmitted to communication device 150. For instance, e.g., when communication device 150 is implemented as a smartphone, smart glasses or a computer, internal optical sensor 155 may be implemented as a camera included in communication device 150. Additionally or alternatively, e.g., when communication device 150 is implemented as a smartwatch, external optical sensor 165 may be implemented as a camera included in another communication device communicatively coupled to communication device 150.</p>
<p id="p0047" num="0047">As shown, hearing device 110 includes a processor 112 communicatively coupled to communication port 119, a memory 113, an audio input unit 114, and an output transducer 117. Audio input unit 114 may comprise at least one input transducer 115 and/or an audio signal receiver 116 configured to provide an input audio signal. Hearing device 110 may further include a sensor unit 118 communicatively coupled to processor 112. Hearing device 110 may include additional or alternative components as may serve a particular implementation. Input transducer 115 may be implemented by any suitable device configured to detect sound in the environment of the user and to provide an input audio signal indicative of the detected sound, e.g., a microphone or a microphone array. Output transducer 117 may be implemented by any suitable audio transducer configured to output an output audio signal to the user, for instance a receiver of a hearing aid, an output electrode of a cochlear implant system, or a loudspeaker of an earbud.</p>
<p id="p0048" num="0048">A processor of hearing system 101, which may be configured to execute on or more of the operations described above and below, may be implemented as a single processing device, e.g., processor 112 of hearing device 110 or processor 152 of communication device 150, or may be implemented as a processor comprising multiple processing units. The processing units may cooperate as a distributed processing system and/or in a master-slave configuration and/or may perform different processing tasks independently from one another. E.g., processor 112 of hearing device 110 may be a first processing unit, and processor 152 of communication device 150 may be a second processing unit. Another processing unit may be implemented in optical sensor 154. E.g., the processing units of processor 112, 152 may communicate data via communication ports 119, 159. Processor 112, 152 may be communicatively coupled to optical sensor 154, e.g., via a fixed connection and/or via communication ports 119, 159.<!-- EPO <DP n="15"> --></p>
<p id="p0049" num="0049">Processor 112, 152 is configured to initiate querying of a user command to be entered by the user via user interface 157, the user command indicative of an adjustment desired by the user of at least one configuration parameter indicative of a current configuration of hearing device 100; to adjust, depending on the user command, the configuration parameter; to receive image information representative of a facial expression of the user, which may be captured by optical sensor 154; and initiate presenting, depending on the facial expression, an input support to the user facilitating entering of the user command. These and other operations, which may be performed by processor 112, 152 are described in more detail in the description that follows.</p>
<p id="p0050" num="0050">Memory 113 may be implemented by any suitable type of storage medium and is configured to maintain, e.g. store, data controlled by processor 112, 152 in particular data generated, accessed, modified and/or otherwise used by processor 112, 152. For example, memory 113 may be configured to store one or more configuration parameters indicative of a current configuration of hearing device 110. The configuration parameters may be adjusted, e.g., after being accessed by processor 112, 152. The adjusted configuration parameters may be stored, e.g., by overwriting previously stored configuration parameters, in memory 113.</p>
<p id="p0051" num="0051">As another example, memory 113 may be configured to store instructions used by processor 112, 152 to modify the audio signal received from audio input unit 114, e.g., audio processing instructions in the form of one or more audio processing algorithms. The audio processing algorithms may comprise different audio processing instructions of processing the input audio signal received from input transducer 115 and/or audio signal receiver 116. For instance, the audio processing algorithms may provide for at least one of a gain model (GM) defining an amplification characteristic, a noise cancelling (NC) algorithm, a wind noise cancelling (WNC) algorithm, a reverberation cancelling (RevC) algorithm, a feedback cancelling (FC) algorithm, a speech enhancement (SE) algorithm, a gain compression (GC) algorithm, a noise cleaning algorithm, a binaural synchronization (BS) algorithm, a beamforming (BF) algorithm, in particular static and/or adaptive beamforming, and/or the like. A plurality of the audio processing algorithms may be executed by processor 112, 152 in a sequence and/or in parallel to generate a processed audio signal.</p>
<p id="p0052" num="0052">Memory 113 may comprise a non-volatile memory from which the maintained data may be retrieved even after having been power cycled, for instance a flash memory and/or a<!-- EPO <DP n="16"> --> read only memory (ROM) chip such as an electrically erasable programmable ROM (EEPROM). A non-transitory computer-readable medium may thus be implemented by memory 113. Memory 113 may further comprise a volatile memory, for instance a static or dynamic random access memory (RAM). A corresponding memory may be implemented in communication device 150. Processor 112, 152 may be configured to access memory 113 included in hearing device 110 and/or the memory included in communication device 150.</p>
<p id="p0053" num="0053">As illustrated, hearing device 110 may comprise an input transducer 115. Input transducer 115 may be implemented by any suitable device configured to detect sound in the environment of the user, e.g., a microphone or a microphone array, and/or to detect sound in the inside the ear canal of the user, e.g., an ear canal microphone, and to provide an audio signal indicative of the detected sound. As illustrated, hearing device 110 may comprise an audio signal receiver 116. Audio signal receiver 116 may be implemented by any suitable data receiver and/or data transducer configured to receive an input audio signal from a remote audio source. For instance, the remote audio source may be a wireless microphone, such as a table microphone, a clip-on microphone and/or the like, and/or a portable device, such as a smartphone, smartwatch, tablet and/or the like, and/or any another data transceiver configured to transmit the input audio signal to audio signal receiver 116. E.g., the remote audio source may be a streaming source configured for streaming the input audio signal to audio signal receiver 116. Audio signal receiver 116 may be configured for wired and/or wireless data reception of the input audio signal. For instance, the input audio signal may be received in accordance with a Bluetooth<sup>™</sup> protocol and/or by any other type of radio frequency (RF) communication.</p>
<p id="p0054" num="0054">As illustrated, hearing device 110 may comprise a sensor unit 118 comprising at least one sensor communicatively coupled to processor 112, 152, e.g., in addition to input transducer 115. Some examples of a sensor which may be implemented in sensor unit 118 are illustrated in <figref idref="f0001">Fig. 2</figref>. Alternatively or additionally, sensor unit 118 may be included in communication device 150 and/or an auxiliary device communicatively coupled with hearing device 110 and/or communication device 150.</p>
<p id="p0055" num="0055">As illustrated in <figref idref="f0001">FIG. 2</figref>, sensor unit 118 may include at least one environmental sensor configured to provide environmental data indicative of a property of the environment of the user, e.g., in addition to the audio signal provided by input transducer 115, for example an<!-- EPO <DP n="17"> --> optical sensor 130 configured to detect light in the environment, e.g., a camera configured to provide image information from the user's environment, and/or a barometric sensor 131 and/or an ambient temperature sensor 132. Sensor unit 118 may include at least one physiological sensor configured to provide physiological data indicative of a physiological property of the user, for example an optical sensor 133 and/or a bioelectric sensor 134 and/or a body temperature sensor 135. Optical sensor 133 may be configured to emit the light at a wavelength absorbable by an analyte contained in blood such that the physiological sensor data comprises information about the blood flowing through tissue at the ear. E.g., optical sensor 133 can be configured as a photoplethysmography (PPG) sensor such that the physiological sensor data comprises PPG data, e.g. a PPG waveform. Bioelectric sensor 134 may be implemented as a skin impedance sensor and/or an electrocardiogram (ECG) sensor and/or an electroencephalogram (EEG) sensor and/or an electrooculography (EOG) sensor.</p>
<p id="p0056" num="0056">Sensor unit 118 may include a movement sensor 136 configured to provide movement data indicative of a movement of the user, for example an accelerometer and/or a gyroscope and/or a magnetometer. Sensor unit 118 may include at least one location sensor 138 configured to provide location data indicative of a current location of the user, for instance a GPS sensor. Sensor unit 118 may include at least one clock 139 configured to provide time data indicative of a current time. Context data may be defined as data indicative of a local and/or temporal context of the data provided by other sensors 115, 131 - 137. Context data may comprise the location data and/or the time data provided by location sensor 138 and/or clock 139. Context data may also be received from an external device via communication port 119, e.g., from communication device 150. E.g., one or more of sensors 115, 131 - 137 may then be included in communication device 150. Sensor unit 118 may include further sensors providing sensor data indicative of a property of the user and/or the environment and/or the context.</p>
<p id="p0057" num="0057"><figref idref="f0001">FIG. 3</figref> illustrates an exemplary implementation of hearing device 110 as a RIC hearing aid 210. RIC hearing aid 210 comprises a BTE part 220 configured to be worn at an ear at a wearing position behind the ear, and an ITE part 240 configured to be worn at the ear at a wearing position at least partially inside an ear canal of the ear. BTE part 220 comprises a BTE housing 221 configured to be worn behind the ear. BTE housing 221 accommodates processor 112 communicatively coupled to input transducer 115 and audio signal receiver<!-- EPO <DP n="18"> --> 116. BTE part 220 further includes a battery 227 as a power source. BTE part 220 may further include a user interface 257, which may be implemented, e.g., at a surface of BTE housing 221. ITE part 240 is an earpiece comprising an ITE housing 241 at least partially insertable in the ear canal. ITE housing 241 accommodates output transducer 117. ITE part 240 may further include another input transducer as an in-the-ear input transducer 145, e.g., an ear canal microphone, configured to detect sound inside the ear canal and to provide an in-the-ear audio signal indicative of the detected sound. BTE part 220 and ITE part 240 are interconnected by a cable 251. Processor 112 is communicatively coupled to output transducer 117 and to in-the-ear input transducer 145 of ITE part 240 via cable 251 and cable connectors 252, 253 provided at BTE housing 221 and ITE housing 241. In some implementations, at least one of sensors 130 - 139 is included in BTE part 220 and/or ITE part 240.</p>
<p id="p0058" num="0058"><figref idref="f0002">FIG. 4</figref> illustrates exemplary implementations of a communication device 410 which may be communicatively coupled to hearing device 110, 210, e.g., via communication port 119. For example, communication device 410 may be a portable device configured to be worn stationary with the user and operable at a position remote from the ear at which hearing device 110, 210 is worn. As illustrated, portable device 410 comprises a portable housing 411 which may be configured, e.g., to be worn by the user on the user's body at a position remote from the ear at which hearing device 110 is worn. E.g., portable device 410 may be implemented as a smartphone, a tablet, and/or the like.</p>
<p id="p0059" num="0059">Portable device 410 further comprises a user interface 428 implemented as a touch sensor allowing the user to enter a user command which can be received by processor 112 of hearing device 110 and/or a processor of the communication device 410 as user control data. For instance, as illustrated, user interface 428 may be implemented as a touch screen operable to display information to the user. Querying of a user command to be entered by the user via user interface 428 may be implemented by displaying a corresponding query on the touch screen, e.g., in the form of a text, symbol, and/or other visual signs. In other examples, user interface 428 may be implemented by speech recognition allowing the user to enter a user command with his voice. In other examples, querying of a user command to be entered by the user via user interface 428 may be implemented by outputting a voice message via output transducer 117.<!-- EPO <DP n="19"> --></p>
<p id="p0060" num="0060">Portable device 410 further comprises an optical sensor 455. In the illustrated example, optical sensor 455 is a camera facing the same direction as user interface 428. Thus, when the user is manipulating user interface 428, optical sensor 455 is configured to face the user's face and/or to capture image information from the user's face. E.g., when portable device 410 is implemented as a smartphone, optical sensor 455 may be implemented as a front camera and/or a selfie camera.</p>
<p id="p0061" num="0061"><figref idref="f0002">FIG. 5</figref> illustrates further exemplary implementations of a communication device 510 which may be communicatively coupled to hearing device 110, 210, e.g., via communication port 119. For example, communication device 510 may be a wearable device configured to be worn by the user, e.g., on his body, and operable at a position remote from the ear at which hearing device 110, 210 is worn. As illustrated, wearable device 510 comprises a wearable housing 511 which may be configured, e.g., to be worn by the user on the user's body at a position remote from the ear at which hearing device 110 is worn. E.g., wearable device 510 may be implemented as a smartwatch, smart glasses, and/or the like. In the illustrated example, wearable device 510 is implemented as smart glasses, wherein wearable housing 511 comprises an eyeglass frame surrounding eyeglasses 512, 513.</p>
<p id="p0062" num="0062">Wearable device 510 further comprises an optical sensor 555, 556. In the illustrated example, optical sensor 555, 556 is a pair of cameras facing the user's face. E.g., optical sensor 555, 556 may be implemented in front of and/or behind eyeglasses 512, 513. Thus, when the user is manipulating user interface 157, 428, optical sensor 555, 556 is configured to face the user's face, in particular the user's eyes, and/or to capture image information from the user's face, in particular from the user's eyes.</p>
<p id="p0063" num="0063">In some implementations, a communication device may be implemented as two or more communications devices, e.g., at least one portable device 410 and/or at least one wearable device 510, communicatively coupled to each other. For example, a user interface of the communication device may include touch screen 428 of portable device 410 and an optical sensor of the communication device may include pair of cameras 555, 556 of wearable device 510.</p>
<p id="p0064" num="0064"><figref idref="f0002">FIGS. 6A and 6B</figref> schematically illustrate situations in which a user 611 interacts with user interface 428 of communication device 410 to enter a user command. During the user<!-- EPO <DP n="20"> --> interaction, image information representative of a facial expression 621, 622 of user 611 is captured by camera 455. The image information may indicate at least one of a position and/or orientation of the user's eyebrows, e.g., a frowning, raising and/or narrowing of the eyebrows; a dilation and/or position and/or movement of the user's pupils; a wrinkling of the user's forehead; and a shape of the user's mouth. To this end, the image information may be evaluated, e.g., by processor 112, 152, to extract and/or verify a presence and/or a magnitude of at least one of those features in facial expression 621, 622.</p>
<p id="p0065" num="0065">In the situation illustrated in <figref idref="f0002">FIG. 6A</figref>, facial expression 621 of user 611 comprises features indicating an elevated cognitive effort and/or frustration and/or bafflement and/or confusion and/or helplessness and/or elevated stress level and/or insecurity of the user when interacting with user interface 428. Those features may include narrowed and/or angled and/or raised eyebrows and/or a wrinkling of the user's forehead, e.g., a frowning between the eyebrows. Those features may further include a shape of the user's mouth, e.g., lowered and/or rather straight corners of the mouth. Those features may also include a property of the user's pupils. E.g., a dilation of the user's pupils, i.e., the pupils being larger than usual, and/or a position of the pupils facing away from user interface 428 can indicate a large cognitive effort and/or an elevated stress level and/or frustration of the user.</p>
<p id="p0066" num="0066">In the situation illustrated in <figref idref="f0002">FIG. 6B</figref>, facial expression 622 of user 611 comprises features indicating a small cognitive effort and/or confidence and/or calmness and/or relaxation and/or low stress level and/or security of the user when interacting with user interface 428. Those features may include rather straight and/or lowered eyebrows and/or an absence of a wrinkling on the user's forehead. Those features may further include a shape of the user's mouth, e.g., raised corners of the mouth and/or a smile. Those features may also include a property of the user's pupils. E.g., an absence of a dilation of the user's pupils can indicate a small cognitive effort and/or a low stress level.</p>
<p id="p0067" num="0067">To identify and/or classify one or more of such features in facial expression 622 of user 611, an image recognition algorithm may be applied on the image information provided by optical sensor 154, 155, 165, 455, 555, 556. E.g., the image recognition algorithm may be executed by processor 112, 152 and/or a computing device communicatively coupled to communication device 150 and/or hearing device 110, e.g., a remote server which may be accessed via an internet connection. E.g., the image recognition algorithm can be configured<!-- EPO <DP n="21"> --> to relate the image information to previously recorded image information of the user's face and/or to previously recorded image information of people different from the user. Image recognition algorithms which are enabled to perform such a task, e.g., by a machine learning (ML) algorithm such as a (deep) neural network, are known in the art. E.g., an algorithm as disclosed in <nplcit id="ncit0001" npl-type="s"><text>Front. Psychol. 12:759485 (2021) by Song, Z., entitled "Facial Expression Emotion Recognition Model Integrating Philosophy and Machine Learning Theory ", doi: 10.3389/fpsyg.2021.759485</text></nplcit><i>,</i> and/or in <nplcit id="ncit0002" npl-type="s"><text>BioMed Eng OnLine 8, 16 (2009) by Kulkarni, S.S., Reddy, N.P., and Hariharan, S., entitled "Facial expression (mood) recognition from facial images using committee neural networks</text></nplcit><i>"</i> and/or in the references cited therein may be employed.</p>
<p id="p0068" num="0068"><figref idref="f0003">FIG. 7</figref> illustrates embodiments of an exemplary query of a user command to be entered by user 611 via user interface 428 included in communication device 410. In the illustrated example, the query is displayed on a display of the communications device, e.g., on touch screen 428 of communication device 410. In other examples, the query may be outputted by output transducer 117, e.g., as a voice message.</p>
<p id="p0069" num="0069">In the illustrated example, the query is displayed in the form of one or more texts 611, 612, 613, 614, 651, 652 indicative of a respective configuration parameter of hearing device 110 and/or one or more symbolic representations 616, 617, 618, 619 of the configuration parameter. Further displayed are one or more input interfaces 631, 632, 633, 634, 641, 642 allowing the user to enter a respective user command indicating an adjustment desired by user 611 of the respective configuration parameter. Input interface 641, 642 is implemented as a push button allowing the user to enter the user command by pushing the button. Input interface 631 - 634 is implemented as a slider allowing the user to enter the user command by moving the slider. Further, a graphical boundary and/or limit 621, 622, 623, 624 for entering the user command via slider 631 - 634 is displayed.</p>
<p id="p0070" num="0070">Each input interface 631 - 634, 641, 642 relates to at least one configuration parameter adjustable depending on the user command entered by user 611 via input interface 631 - 634, 641, 642. In the illustrated example, input interface 631 relates to a volume control and/or input interface 632 relates to a beamforming adjustment and/or input interface 631<!-- EPO <DP n="22"> --> relates to a noise reduction adjustment and/or input interface 632 relates to a spectral balance modification.</p>
<p id="p0071" num="0071">The volume control can be configured to adjust a volume of an audio signal processed by hearing device 110 so as to change a level of an output audio signal output by output transducer 117 so as to stimulate the user's hearing. For example, the volume may be adjusted during an audio signal processing performed by an audio signal processor, e.g., by adjusting an amplitude of the audio signal, and/or during an amplification of the audio signal performed by an audio signal amplifier, e.g., by adjusting a gain provided by the amplifier.</p>
<p id="p0072" num="0072">The beamforming adjustment can be configured to adjust a property of a beamforming applied on the audio signal, e.g., during an audio signal processing performed by an audio signal processor. For instance, the adjustment of the beamforming may comprise at least one of turning the beamforming on or off and/or changing a beam width of the beamforming and/or changing a directivity of the beamforming. E.g., when the directivity of the beam points toward the front of the user, the directivity may be adjusted to the side and/or back of the user.</p>
<p id="p0073" num="0073">The noise reduction adjustment can be configured to adjust a property of a noise reduction, e.g., a noise cancelling (NC), applied on an audio signal, e.g., during an audio signal processing performed by the audio signal processor. E.g., an audio signal processing may provide for a cancelling and/or suppression and/or cleaning of noise contained in the audio signal. For instance, the property of the NC, which may be adjusted by the noise cancelling adjustment, may include a type and/or strength of the NC. E.g., different types of the NC may include general noise and/or noise caused by a non-speech audio source and/or noise at a certain noise level and/or frequency range and/or noise emitted from a specific audio source, e.g., traffic noise, aircraft noise, construction site noise, etc. Different strengths of the NC may indicate a content of the noise in the modified audio signal, e.g., an amount of which noise is removed and/or still present in the modified audio signal.</p>
<p id="p0074" num="0074">The spectral balance modification can be configured to adjust a spectral balance of an audio signal and/or a spectral balance of a specific content in an audio signal. The spectral balance can be indicative of a frequency content of the audio signal. The frequency content may comprise a power of one or more frequencies and/or frequency bands, e.g., relative to a<!-- EPO <DP n="23"> --> power of one or more other frequencies and/or frequency bands. The frequency range of the frequency content may comprise, e.g., a range of audible frequencies, e.g., from 20 Hz to 20.000 Hz, and/or a range of inaudible frequencies. A specific content in the audio signal, for which the spectral balance may be modified, may include, e.g., a music content and/or a speech content, e.g., an own voice content and/or a voice content of another person and/or a significant other.</p>
<p id="p0075" num="0075">Further, in the illustrated example, input interfaces 641, 642 relate to different audio processing algorithms for the processing of an audio signal. E.g., one or more of the different audio processing algorithms may be activated and/or deactivated by pushing one or more of input interfaces 641, 642. As illustrated, at least one of the audio processing algorithms may be related to a clarity of sound outputted by output transducer 117 and/or at least one of the audio processing algorithms may be related to a listening comfort when sound is outputted by output transducer 117. E.g., when the audio processing algorithm related to the clarity of sound is activated, the audio processing may be performed in a way to provide an enhanced clarity, e.g., sharpness, of the outputted sound. Such a configuration of hearing device 110 may be beneficial, e.g., to provide for a better speech intelligibility. As another example, when the audio processing algorithm related to the listening comfort is activated, the audio processing may be performed in a way to provide for a more comfortable listening experience, which may be accompanied, e.g., by a reduced clarity and/or sharpness of the outputted sound. Such a configuration of hearing device 110 may be beneficial, e.g., to provide for a better acoustic atmosphere in daily situations, e.g., not involving social contacts.</p>
<p id="p0076" num="0076">In the illustrated example, when multiple input interfaces 631 - 634, 641, 642 are presented to the user, the input interfaces may be presented in a predetermined layout, e.g., in a predetermined order and/or size, on user interface 428. E.g., input interfaces 631 - 634, 641, 642 may be spatially and/or temporally separated in the predetermined order. In the illustrated example, the multiple input interfaces 631 - 634, 641, 642 are presented to the user on a single screen, e.g., on touch screen 428. In some examples, the multiple input interfaces 631 - 634, 641, 642 can then be spatially separated in the predetermined order by displaying adjustment options 631 - 634 subsequently in a defined direction, e.g., from the top of screen 428 to the bottom of screen 428. In other examples, at least two of the multiple input interfaces 631 - 634, 641, 642 can be presented to the user on a different screens, e.g.,<!-- EPO <DP n="24"> --> on touch screen 428. The different screens may by accessible to the user by entering a dedicated user command, e.g., on user interface 428, such as performing a manual gesture on user interface 428, e.g., swiping on user interface 428 with one or more fingers. In other examples, the input interfaces 631 - 634, 641, 642 may be presented to the user in the form of voice messages which may be outputted to the user in a temporally separated manner.</p>
<p id="p0077" num="0077"><figref idref="f0003 f0004">FIGS. 8A - 8C</figref> illustrate embodiments of an exemplary input support which may be presented to the user depending on image information about a facial expression of the user when interacting with user interface 157, 428 included in communication device 410. In particular, the input support may be presented in a case in which the image information is representative of at least one of the features of facial expression 621 indicating an elevated cognitive effort and/or frustration and/or bafflement and/or confusion and/or helplessness and/or elevated stress level and/or insecurity of user 611. In the illustrated example, the input support is displayed on a display of the communications device, e.g., on touch screen 428 of communication device 410. In other examples, the input support may be outputted by output transducer 117, e.g., as a voice message.</p>
<p id="p0078" num="0078">In the example illustrated in <figref idref="f0003">FIG. 8A</figref>, the input support is provided by adding and/or modifying e.g., highlighting, at least one input interface 631 - 634, 641, 642 for entering the user command. To this end, e.g., as illustrated, one or more texts 611 - 614, 651, 652 indicative of a respective configuration parameter adjustable by input interface 631 - 634, 641, 642 and/or one or more symbolic representations 616 - 619 of the configuration parameter may be emphasized, e.g., by increasing a font size and/or changing a color. In other examples, a size and/or color of input interface 631 - 634, 641, 642 may be changed and/or a layout on which input interface 631 - 634, 641, 642 is presented and/or masking and/or removing at least one input interface 631 - 634, 641, 642 which is not to be highlighted.</p>
<p id="p0079" num="0079">In some implementations, when the input support comprises adding and/or modifying e.g., highlighting, of the selected input interface, the input interface to be modified may be selected, e.g., from input interfaces 631 - 634, 641, 642. In some instances, the input interface to be modified is selected depending on sensor data. To this end, sensor data received from at least one of input transducer 115, displacement sensor 136, environmental sensor 130, 131, 132, physiological sensor 133, 134, 135, location sensor 138, and clock 139 may be employed.<!-- EPO <DP n="25"> --> In the illustrated example, input interface 633 for adjusting a configuration parameter related to noise reduction is highlighted.</p>
<p id="p0080" num="0080">For example, selecting input interface 633 to be added and/or modified may be based on sensor data received from input transducer 115. To illustrate, when the sensor data provided by input transducer 115, e.g., an audio signal indicative of a sound in the user's environment, is determined to include a rather low signal to noise ratio (SNR), input interface 633 may be selected to be highlighted. As another example, selecting input interface 633 to be added and/or modified may be based on sensor data received from environmental sensor 130 - 132. To illustrate, when the sensor data provided by environmental sensor 130 - 132 is indicative of a rather noisy environment and/or acoustic scene, input interface 633 may be selected to be highlighted. As another example, selecting input interface 633 to be added and/or modified may be based on sensor data received from physiological sensor 133 - 134. To illustrate, when the sensor data provided by physiological sensor 133 - 134 is indicative of a medical emergency of the user, input interface 633 may be selected to be highlighted. As another example, selecting input interface 633 to be added and/or modified may be based on sensor data received from displacement sensor 136. To illustrate, when the sensor data provided by displacement sensor 136 is indicative of the user resting in a rather static position, e.g., being in a calm state, input interface 633 may be selected to be highlighted. Other examples, in which input interface 633 can be selected to be highlighted, include a location of the user and/or time.</p>
<p id="p0081" num="0081">In some implementations, when the input support comprises presenting of a selected input interface, the input interface may be selected by predicting the input interface, e.g., based on logged user commands and/or based on sensor data, which may be provided by any of sensors 130 - 136, 138, 139, and/or based on an audio signal. For instance, previously entered user commands and/or sensor data and/or audio signals may be logged in memory 113 and/or in a memory of communication device 150. To illustrate, the user command may be predicted based on logged user commands and/or sensor data and/or audio signals, which have been collected in a database. In some implementations, the database is included a look-up table from which the predicted user command can be outputted. In some implementations, a machine learning (ML) algorithm, which outputs the predicted user command, may be trained with the database.<!-- EPO <DP n="26"> --></p>
<p id="p0082" num="0082">In the example illustrated in <figref idref="f0004">FIG. 8B</figref>, the input support is provided by presenting an input option of the user command. The selected input option may thus represent a possible adjustment of the configuration parameter. In particular, input option 662 may be presented as a possible position of slider 632, e.g., along graphical boundary 622. In other examples, the input option may be presented by adding and/or highlighting one or more push buttons 641, 642 corresponding to the possible adjustment. In other examples, the input option may presented by presenting suggestions of a number and/or text to be entered by the user.</p>
<p id="p0083" num="0083">In some implementations, when the input support comprises presenting an input option of the user command, the input option to be presented may be determined beforehand. In some instances, the input interface to be modified is determined depending on sensor data, which may be provided by any of sensors 130 - 136, 138, 139, and/or an audio signal, as described above. In some implementations, when the input support comprises presenting an input option of the user command, the input option may be determined by predicting the input option, e.g., based on logged user commands and/or based on logged sensor data, which may be provided by any of sensors 130 - 136, 138, 139, and/or based on logged audio signals. For instance, previously entered user commands and/or sensor data and/or audio signals may be logged in memory 113 and/or in a memory of communication device 150. E.g., the logged user commands and/or sensor data and/or audio signals may be collected in a database which may be included a look-up table and/or to train an ML algorithm, as described above.</p>
<p id="p0084" num="0084">In the example illustrated in <figref idref="f0004">FIG. 8C</figref>, the input support is provided by outputting one or more support messages 671, 672 to the user. Support message 671, 672 may provide additional information, e.g., about one or more configuration parameters, for entering the user command to the user. E.g., the additional information may include an explanation and/or illustration of the configuration parameter. In the illustrated example, support message 671, 672 is outputted in a text form, e.g., on a display of the communications device, e.g., on touch screen 428 of communication device 410. In other examples, support message 671, 672 may be outputted by output transducer 117, e.g., as a voice message.</p>
<p id="p0085" num="0085">Furthermore, as illustrated, the input support can be provided by highlighting at least one input interface 641, 642 by means of an allocation 673, 674 of support message 671, 672 to input interface 641, 642. As illustrated, the allocation may be outputted on display 428, e.g., as a graphic symbol such as an arrow 641, 642. In other examples, the allocation may<!-- EPO <DP n="27"> --> be outputted by output transducer 117, e.g., as a voice message referring to at least one input interface 641, 642. As further illustrated, highlighting of input interface 641, 642 may comprise masking and/or removing at least another input interface 631 - 634 from the user interface, e.g., display 428. The masked and/or removed input interface 631 - 634 may then be made accessible to the user, e.g., by a user command which may be entered via the user interface. E.g., as illustrated, the user command may be implemented as a manual gesture, e.g., swiping on the user interface. The user command may be indicated to the user, e.g., by prompting a text message 677 on the user interface and/or a voice message outputted by output transducer 117.</p>
<p id="p0086" num="0086"><figref idref="f0005">FIG. 9</figref> illustrates a block flow diagram for an exemplary method of adjusting a configuration of a hearing device 110, 210. The method may be executed by processor 112, 152 included in hearing device 110, 210 and/or communication device 150. At operation S11, querying of a user command to be entered by the user via a user interface included in the communication device is initiated. At operation S12, image information 711 representative of a facial expression of the user is received. At operation S15, depending on the facial expression, presenting of an input support to the user is initiated.</p>
<p id="p0087" num="0087">In particular, at operation S13, after receiving image information 711 at S12, it may be determined whether the facial expression indicated by image information 711 corresponds to one of at least a first type of facial expression, and a second type of facial expression. The first type of facial expression may be indicative of, e.g., a certain frustration and/or bafflement and/or confusion and/or astonishment and/or helplessness and/or stress level and/or insecurity of the user. The second type of facial expression may be indicative of a situation in which the user is in a mental state from the first type. As another example, the first type of facial expression may be indicative of a cognitive and/or mental load of the user above a threshold, and the second type of facial expression may be indicative of a cognitive and/or mental load of the user below the threshold. In a case in which the facial expression corresponds to the first type, presenting of the input support to the user is initiated at S15. In a case in which the facial expression corresponds to the second type, presenting of the input support to the user is not initiated. Instead, further image information may be received at S12.</p>
<p id="p0088" num="0088"><figref idref="f0005">FIG. 10</figref> illustrates a block flow diagram for another exemplary method of adjusting a configuration of a hearing device 110, 210. After querying of the user command at S11, a<!-- EPO <DP n="28"> --> user command 722 is received at operation S21. Subsequently, at operation S22, at least one configuration parameter of the hearing device is adjusted depending on the user command. In particular, receiving of user command 722 at S21 and adjusting the configuration parameter depending on user command 722 at S22 may be executed in parallel and/or independently from operations S12, S13, and S15.</p>
<p id="p0089" num="0089"><figref idref="f0005">FIG. 11</figref> illustrates a block flow diagram for another exemplary method of adjusting a configuration of a hearing device 110, 210. At operation S33, which is executed subsequent to receiving image information 711 at S12 and receiving user command 722 at S21, it is determined whether the facial expression indicated by image information 711 corresponds to the first or second type of facial expression. Subsequently, adjusting of the configuration parameter of the hearing device is only initiated at S22 in a case in which the facial expression corresponds to the second type. In a case in which it is determined at S33 that the facial expression corresponds to the first type, adjusting of the configuration parameter at S22 is not initiated. Instead, presenting of the input support to the user is initiated at S15.</p>
<p id="p0090" num="0090"><figref idref="f0005">FIG. 12</figref> illustrates a block flow diagram for another exemplary method of adjusting a configuration of a hearing device 110, 210. At operation S44, which may be executed subsequent to the determining whether the facial expression corresponds to the first or second type at S13, S33 or before, sensor data 733 is received. The input support, which is initiated to be presented at S15, can then also depend on the sensor data which may be provided by any sensor 130 - 136, 138, 139. For instance, any type and/or implementation of the input support may be determined based on the sensor data.</p>
<p id="p0091" num="0091">In some examples, an input interface is selected from a plurality of input interfaces depending on the sensor data. E.g., the input support which is presented at S15 may then comprise adding and/or modifying of the selected input interface. E.g., as illustrated in <figref idref="f0003">FIG. 8A</figref>, the input support which is presented at S15 may then comprise modifying the selected input interface 633 by a highlighting of the selected input interface.</p>
<p id="p0092" num="0092">In some examples, an input option of the user command representing a possible adjustment of the configuration parameter is determined depending on the sensor data. E.g., the input support which is presented at S15 may then comprise presenting of the determined<!-- EPO <DP n="29"> --> input option. E.g., as illustrated in <figref idref="f0004">FIG. 8B</figref>, the input support which is presented at S15 may then comprise presenting the determined input option 622 as a possible position of slider 632</p>
<p id="p0093" num="0093">In some implementations, an audio signal may be received in addition or in place of sensor data 733. In some implementations, the input interface to be selected and/or the input option to be determined is predicted based on previously entered user commands and/or previously received sensor data and/or previously received audio signals, which may be logged, e.g., in a database. The database may then be accessed in addition or in place of sensor data 733</p>
<p id="p0094" num="0094">While the principles of the disclosure have been described above in connection with specific devices and methods, it is to be clearly understood that this description is made only by way of example and not as limitation on the scope of the invention. The above described preferred embodiments are intended to illustrate the principles of the invention, but not to limit the scope of the invention. Various other embodiments and modifications to those preferred embodiments may be made by those skilled in the art without departing from the scope of the present invention that is solely defined by the claims. In the claims, the word "comprising" does not exclude other elements or steps, and the indefinite article "a" or "an" does not exclude a plurality. A single processor or controller or other unit may fulfil the functions of several items recited in the claims. The mere fact that certain measures are recited in mutually different dependent claims does not indicate that a combination of these measures cannot be used to advantage. Any reference signs in the claims should not be construed as limiting the scope.</p>
</description>
<claims id="claims01" lang="en"><!-- EPO <DP n="30"> -->
<claim id="c-en-0001" num="0001">
<claim-text>A method of adjusting a configuration of a hearing device configured to be worn at an ear of a user to the individual needs of the user, wherein the hearing device (110, 210) is communicatively coupled to a communication device (150, 410, 510), the method comprising
<claim-text>- initiating querying of a user command to be entered by the user via a user interface (157, 428) included in the communication device (150, 410, 510), the user command indicative of an adjustment desired by the user of at least one configuration parameter indicative of a current configuration of the hearing device (110, 210); and</claim-text>
<claim-text>- adjusting, depending on the user command, the configuration parameter, <b>characterized by</b></claim-text>
<claim-text>- receiving image information (711) representative of a facial expression (621, 622) of the user; and</claim-text>
<claim-text>- initiating presenting, depending on the facial expression (621, 622), an input support (613, 662, 671, 672) to the user facilitating inputting of the user command.</claim-text></claim-text></claim>
<claim id="c-en-0002" num="0002">
<claim-text>The method of claim 1, wherein the facial expression (621, 622) comprises at least one of
<claim-text>- a position and/or orientation of the user's eyebrows;</claim-text>
<claim-text>- a dilation and/or position and/or movement of the user's pupils;</claim-text>
<claim-text>- a wrinkling of the user's forehead; and</claim-text>
<claim-text>- a shape of the user's mouth.</claim-text></claim-text></claim>
<claim id="c-en-0003" num="0003">
<claim-text>The method of any of the preceding claims, wherein the input support (621, 622) comprises at least one of
<claim-text>- modifying, on the user interface (157, 428), at least one input interface (631 - 634, 641, 642) for inputting the user command;</claim-text>
<claim-text>- adding, on the user interface (157, 428), at least one input interface (631 - 634, 641, 642) for inputting the user command;<!-- EPO <DP n="31"> --></claim-text>
<claim-text>- presenting, on the user interface (157, 428), an input option (662) of the user command representing a possible adjustment of the configuration parameter;</claim-text>
<claim-text>- changing, on the user interface (157, 428), a layout on which at least one input interface (631 - 634, 641, 642) for inputting the user command is presented to the user; and</claim-text>
<claim-text>- outputting a support message (671, 672) to the user.</claim-text></claim-text></claim>
<claim id="c-en-0004" num="0004">
<claim-text>The method of any of the preceding claims, further comprising
<claim-text>- selecting at least one input interface (631 - 634, 641, 642) for inputting the user command, wherein said presenting the input support comprises a modifying and/or adding of the selected input interface (631 - 634, 641, 642); and/or</claim-text>
<claim-text>- determining an input option (662) of the user command representing a possible adjustment of the configuration parameter, wherein said presenting the input support comprises presenting of the determined input option (662).</claim-text></claim-text></claim>
<claim id="c-en-0005" num="0005">
<claim-text>The method of any of the preceding claims, further comprising
<claim-text>- receiving sensor data (733) from a sensor (115, 118, 130 - 136, 138, 139)</claim-text>
including
<claim-text>an input transducer (115) configured to provide at least part of the sensor data (733) as an audio signal indicative of sound detected in the environment of the user; and/or</claim-text>
<claim-text>a displacement sensor (136) configured to provide at least part of the sensor data (733) as displacement data indicative of a displacement of the hearing device; and/or</claim-text>
<claim-text>a location sensor (138) configured to provide at least part of the sensor data (733) as location data indicative of a current location of the user; and/or</claim-text>
<claim-text>a clock (139) configured to provide at least part of the sensor data (733) as time data indicative of a current time; and/or</claim-text>
<claim-text>a physiological sensor (133 - 135) configured to provide at least part of the sensor data (733) as physiological data indicative of a physiological property of the user; and/or</claim-text>
<claim-text>an environmental sensor (130 - 132) configured to provide at least part of the sensor data (733) as environmental data indicative of a property of the environment of the user,<!-- EPO <DP n="32"> --></claim-text>
<claim-text>wherein the input support (613, 662, 671, 672) is presented depending on the sensor data (733).</claim-text></claim-text></claim>
<claim id="c-en-0006" num="0006">
<claim-text>The method of any of the preceding claims, further comprising
<claim-text>- determining an interaction time of the user with the user interface (157, 428),</claim-text>
wherein the input support (613, 662, 671, 672) is presented depending on the interaction time.</claim-text></claim>
<claim id="c-en-0007" num="0007">
<claim-text>The method of any of the preceding claims, further comprising
<claim-text>- receiving, from an audio input unit (114 - 116), an audio signal,</claim-text>
wherein the input support (613, 662, 671, 672) is presented depending on the audio signal.</claim-text></claim>
<claim id="c-en-0008" num="0008">
<claim-text>The method of any of the preceding claims, further comprising
<claim-text>- logging, in a memory (113), one or more user commands previously entered by the user,<br/>
wherein an input option (662) of the user command representing a possible adjustment of the configuration parameter is predicted based on the logged user commands; and/or</claim-text>
<claim-text>- logging, in a memory, one or more input interfaces (631 - 634, 641, 642) for inputting the user command which have been previously used by the user to enter the user command,</claim-text>
wherein an input interface (631 - 634, 641, 642) is predicted based on the logged user commands.</claim-text></claim>
<claim id="c-en-0009" num="0009">
<claim-text>The method of any of the preceding claims, wherein the configuration parameter comprises at least one of
<claim-text>- an amplification of an audio signal outputted by the hearing device (110, 210);</claim-text>
<claim-text>- a control of a feedback of an audio signal outputted by the hearing device (110, 210);<!-- EPO <DP n="33"> --></claim-text>
<claim-text>- a property of a beamforming algorithm executed by the hearing device (110, 210);</claim-text>
<claim-text>- a property of a noise suppression algorithm executed by the hearing device (110, 210);</claim-text>
<claim-text>- a property of a communication port included in the hearing device (110, 210);</claim-text>
<claim-text>- a selection of an audio processing algorithm executed by the hearing device (110, 210);</claim-text>
<claim-text>- an enhancement of a speech content in an audio signal outputted by the hearing device (110, 210); and</claim-text>
<claim-text>- an enhancement of a music content in an audio signal outputted by the hearing device (110, 210).</claim-text></claim-text></claim>
<claim id="c-en-0010" num="0010">
<claim-text>The method of any of the preceding claims, wherein the user interface (157, 428) comprises at least one of a slider, a touch screen, a push button, and a text and/or numerical input field allowing to input the adjustment desired by the user.</claim-text></claim>
<claim id="c-en-0011" num="0011">
<claim-text>The method of any of the preceding claims, wherein the image information (711) is provided by an optical sensor (154, 155, 165, 455, 555, 556) included in the communication device (150).</claim-text></claim>
<claim id="c-en-0012" num="0012">
<claim-text>The method of any of the preceding claims, further comprising
<claim-text>- relating the image information (711) to previously recorded image information of the user's face and/or to previously recorded image information of people different from the user.</claim-text></claim-text></claim>
<claim id="c-en-0013" num="0013">
<claim-text>The method of any of the preceding claims, wherein the communication device (150, 410, 510) comprises a display (428) and the input support (613, 662, 671, 672) is displayed on the display (428).</claim-text></claim>
<claim id="c-en-0014" num="0014">
<claim-text>The method of any of the preceding claims, wherein the input support (613, 662, 671, 672) comprises a voice message outputted to the user by an output transducer (117) included in the hearing device (110, 210).</claim-text></claim>
<claim id="c-en-0015" num="0015">
<claim-text>A system for adjusting a configuration of a hearing device configured to be worn at an ear of a user to the individual needs of a user, the system comprising a hearing device (110,<!-- EPO <DP n="34"> --> 210) configured to be worn at an ear of the user and a communication device (150, 410, 510) communicatively coupled to the hearing device (110, 210), wherein the hearing device (110, 210) and/or the communication device (150, 410, 510) comprises a processor (112, 152) configured to
<claim-text>- initiate querying of a user command to be entered by the user via a user interface (157, 428) included in the communication device (150, 410, 510), the user command indicative of an adjustment desired by the user of at least one configuration parameter indicative of a current configuration of the hearing device (110, 210); and</claim-text>
<claim-text>- adjust, depending on the user command, the configuration parameter, <b>characterized in that</b> the processor (112, 152) is further configured to</claim-text>
<claim-text>- receive image information (711) representative of a facial expression (621, 622) of the user; and</claim-text>
<claim-text>- initiate presenting, depending on the facial expression (621, 622), an input support (613, 662, 671, 672) to the user facilitating inputting of the user command.</claim-text></claim-text></claim>
</claims>
<drawings id="draw" lang="en"><!-- EPO <DP n="35"> -->
<figure id="f0001" num="1,2,3"><img id="if0001" file="imgf0001.tif" wi="131" he="232" img-content="drawing" img-format="tif"/></figure><!-- EPO <DP n="36"> -->
<figure id="f0002" num="4,5,6A,6B"><img id="if0002" file="imgf0002.tif" wi="165" he="195" img-content="drawing" img-format="tif"/></figure><!-- EPO <DP n="37"> -->
<figure id="f0003" num="7,8A"><img id="if0003" file="imgf0003.tif" wi="107" he="218" img-content="drawing" img-format="tif"/></figure><!-- EPO <DP n="38"> -->
<figure id="f0004" num="8B,8C"><img id="if0004" file="imgf0004.tif" wi="107" he="221" img-content="drawing" img-format="tif"/></figure><!-- EPO <DP n="39"> -->
<figure id="f0005" num="9,10,11,12"><img id="if0005" file="imgf0005.tif" wi="161" he="207" img-content="drawing" img-format="tif"/></figure>
</drawings>
<search-report-data id="srep" lang="en" srep-office="EP" date-produced=""><doc-page id="srep0001" file="srep0001.tif" wi="161" he="240" type="tif"/><doc-page id="srep0002" file="srep0002.tif" wi="160" he="240" type="tif"/></search-report-data><search-report-data date-produced="20240115" id="srepxml" lang="en" srep-office="EP" srep-type="ep-sr" status="n"><!--
 The search report data in XML is provided for the users' convenience only. It might differ from the search report of the PDF document, which contains the officially published data. The EPO disclaims any liability for incorrect or incomplete data in the XML for search reports.
 -->

<srep-info><file-reference-id>E22072.EP</file-reference-id><application-reference><document-id><country>EP</country><doc-number>23191289.0</doc-number></document-id></application-reference><applicant-name><name>Sonova AG</name></applicant-name><srep-established srep-established="yes"/><srep-invention-title title-approval="yes"/><srep-abstract abs-approval="yes"/><srep-figure-to-publish figinfo="by-applicant"><figure-to-publish><fig-number>1</fig-number></figure-to-publish></srep-figure-to-publish><srep-info-admin><srep-office><addressbook><text>DH</text></addressbook></srep-office><date-search-report-mailed><date>20240124</date></date-search-report-mailed></srep-info-admin></srep-info><srep-for-pub><srep-fields-searched><minimum-documentation><classifications-ipcr><classification-ipcr><text>H04R</text></classification-ipcr><classification-ipcr><text>G06F</text></classification-ipcr><classification-ipcr><text>A61B</text></classification-ipcr></classifications-ipcr></minimum-documentation></srep-fields-searched><srep-citations><citation id="sr-cit0001"><patcit dnum="EP3614695A1" id="sr-pcit0001" url="http://v3.espacenet.com/textdoc?DB=EPODOC&amp;IDX=EP3614695&amp;CY=ep"><document-id><country>EP</country><doc-number>3614695</doc-number><kind>A1</kind><name>OTICON AS [DK]</name><date>20200226</date></document-id></patcit><category>X</category><rel-claims>1-15</rel-claims><rel-passage><passage>* paragraph [0006] - paragraph [0038]; claims 1-7; figures 1-3 *</passage></rel-passage></citation><citation id="sr-cit0002"><patcit dnum="EP3456259A1" id="sr-pcit0002" url="http://v3.espacenet.com/textdoc?DB=EPODOC&amp;IDX=EP3456259&amp;CY=ep"><document-id><country>EP</country><doc-number>3456259</doc-number><kind>A1</kind><name>OTICON AS [DK]</name><date>20190320</date></document-id></patcit><category>X</category><rel-claims>1-15</rel-claims><rel-passage><passage>* paragraph [0008] - paragraph [0094]; claims 1-18; figures 1-6 *</passage></rel-passage></citation><citation id="sr-cit0003"><patcit dnum="US2019265802A1" id="sr-pcit0003" url="http://v3.espacenet.com/textdoc?DB=EPODOC&amp;IDX=US2019265802&amp;CY=ep"><document-id><country>US</country><doc-number>2019265802</doc-number><kind>A1</kind><name>PARSHIONIKAR UDAY [US]</name><date>20190829</date></document-id></patcit><category>A</category><rel-claims>1-15</rel-claims><rel-passage><passage>* paragraph [0007] - paragraph [0236]; claims 1-24; figures 1-27 *</passage></rel-passage></citation><citation id="sr-cit0004"><patcit dnum="US2021235203A1" id="sr-pcit0004" url="http://v3.espacenet.com/textdoc?DB=EPODOC&amp;IDX=US2021235203&amp;CY=ep"><document-id><country>US</country><doc-number>2021235203</doc-number><kind>A1</kind><name>ANDERSEN O; BENDSEN H ET AL.</name><date>20210729</date></document-id></patcit><category>A</category><rel-claims>1-15</rel-claims><rel-passage><passage>* paragraph [0001] - paragraph [0222]; claims 1-18; figures 1-15 *</passage></rel-passage></citation></srep-citations><srep-admin><examiners><primary-examiner><name>Timms, Olegs</name></primary-examiner></examiners><srep-office><addressbook><text>The Hague</text></addressbook></srep-office><date-search-completed><date>20240115</date></date-search-completed></srep-admin><!--							The annex lists the patent family members relating to the patent documents cited in the above mentioned European search report.							
The members are as contained in the European Patent Office EDP file on							The European Patent Office is in no way liable for these particulars which are merely given for the purpose of information.							For more details about this annex : see Official Journal of the European Patent Office, No 12/82						--><srep-patent-family><patent-family><priority-application><document-id><country>EP</country><doc-number>3614695</doc-number><kind>A1</kind><date>20200226</date></document-id></priority-application><text>NONE</text></patent-family><patent-family><priority-application><document-id><country>EP</country><doc-number>3456259</doc-number><kind>A1</kind><date>20190320</date></document-id></priority-application><family-member><document-id><country>CN</country><doc-number>109729485</doc-number><kind>A</kind><date>20190507</date></document-id></family-member><family-member><document-id><country>EP</country><doc-number>3456259</doc-number><kind>A1</kind><date>20190320</date></document-id></family-member><family-member><document-id><country>US</country><doc-number>2019090073</doc-number><kind>A1</kind><date>20190321</date></document-id></family-member></patent-family><patent-family><priority-application><document-id><country>US</country><doc-number>2019265802</doc-number><kind>A1</kind><date>20190829</date></document-id></priority-application><text>NONE</text></patent-family><patent-family><priority-application><document-id><country>US</country><doc-number>2021235203</doc-number><kind>A1</kind><date>20210729</date></document-id></priority-application><family-member><document-id><country>US</country><doc-number>2021235203</doc-number><kind>A1</kind><date>20210729</date></document-id></family-member><family-member><document-id><country>US</country><doc-number>2022141601</doc-number><kind>A1</kind><date>20220505</date></document-id></family-member></patent-family></srep-patent-family></srep-for-pub></search-report-data>
<ep-reference-list id="ref-list">
<heading id="ref-h0001"><b>REFERENCES CITED IN THE DESCRIPTION</b></heading>
<p id="ref-p0001" num=""><i>This list of references cited by the applicant is for the reader's convenience only. It does not form part of the European patent document. Even though great care has been taken in compiling the references, errors or omissions cannot be excluded and the EPO disclaims all liability in this regard.</i></p>
<heading id="ref-h0002"><b>Non-patent literature cited in the description</b></heading>
<p id="ref-p0002" num="">
<ul id="ref-ul0001" list-style="bullet">
<li><nplcit id="ref-ncit0001" npl-type="s"><article><author><name>SONG, Z.</name></author><atl>Facial Expression Emotion Recognition Model Integrating Philosophy and Machine Learning Theory</atl><serial><sertitle>Front. Psychol.</sertitle><pubdate><sdate>20210000</sdate><edate/></pubdate><vid>12</vid></serial><location><pp><ppf>759485</ppf><ppl/></pp></location></article></nplcit><crossref idref="ncit0001">[0067]</crossref></li>
<li><nplcit id="ref-ncit0002" npl-type="s"><article><author><name>KULKARNI, S.S.</name></author><author><name>REDDY, N.P.</name></author><author><name>HARIHARAN, S.</name></author><atl>Facial expression (mood) recognition from facial images using committee neural networks</atl><serial><sertitle>BioMed Eng OnLine</sertitle><pubdate><sdate>20090000</sdate><edate/></pubdate><vid>8</vid></serial><location><pp><ppf>16</ppf><ppl/></pp></location></article></nplcit><crossref idref="ncit0002">[0067]</crossref></li>
</ul></p>
</ep-reference-list>
</ep-patent-document>
