(19)
(11)EP 3 370 442 B1

(12)EUROPEAN PATENT SPECIFICATION

(45)Mention of the grant of the patent:
13.05.2020 Bulletin 2020/20

(21)Application number: 18159777.4

(22)Date of filing:  02.03.2018
(51)Int. Cl.: 
H04S 7/00  (2006.01)
H04R 5/033  (2006.01)

(54)

HEARING DEVICE INCORPORATING USER INTERACTIVE AUDITORY DISPLAY

HÖRGERÄT MIT INTERAKTIVER AKUSTISCHER BENUTZERANZEIGE

DISPOSITIF AUDITIF COMPRENANT UN AFFICHAGE AUDITIF INTERACTIF D'UTILISATEUR


(84)Designated Contracting States:
AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

(30)Priority: 02.03.2017 US 201715447735

(43)Date of publication of application:
05.09.2018 Bulletin 2018/36

(73)Proprietor: Starkey Laboratories, Inc.
Eden Prairie, MN 55344 (US)

(72)Inventors:
  • HELWANI, Karim
    Eden Prairie, MN 55344 (US)
  • ZHANG, Tao
    Eden Prairie, MN 55344 (US)
  • CARLILE, Simon
    Berkeley, CA 94704 (US)

(74)Representative: Dentons UK and Middle East LLP 
One Fleet Place
London EC4M 7WS (GB)


(56)References cited:
EP-A2- 2 293 598
US-A1- 2011 153 044
WO-A1-2016/131064
US-A1- 2013 158 856
  
      
    Note: Within nine months from the publication of the mention of the grant of the European patent, any person may give notice to the European Patent Office of opposition to the European patent granted. Notice of opposition shall be filed in a written reasoned statement. It shall not be deemed to have been filed until the opposition fee has been paid. (Art. 99(1) European Patent Convention).


    Description

    TECHNICAL FIELD



    [0001] This application relates generally to hearing devices, including hearing aids, personal amplification devices, and other hearables.

    BACKGROUND



    [0002] Hearing devices can incorporate a number of electromechanical switches and controls that allow a user to interact with the hearing device. Because the switches and controls are limited in number and are often out of sight while wearing the hearing devices, conventional approaches to interacting with the hearing device are cumbersome and limited in functionality.

    [0003] US2011/153044 A1 discloses technology for a user to interact with and control a portable media device through an audio user interface. The audio user interface includes one or more audible control nodes perceived by the user to be spatially located at different points about the user of the portable media device. A sensor in the portable media device senses a movement of the portable media device by the user toward one or more of the audible control nodes. The operation of the portable device is modified in accordance with the sensed movement of the portable media device.

    [0004] EP 2 293 598 A2 discloses an audial menu system, where sounds represent the options. For a certain implementation of the invention, choices may be made with head movements that are detected by motion sensors contained in a headset.

    SUMMARY



    [0005] According to a first aspect of the invention, there is provided a method implemented by a hearing device arrangement adapted to be worn by a wearer, the method comprising: generating, by the hearing device arrangement, a virtual auditory user interface comprising a sound field, a plurality of disparate sound field zones, and a plurality of quiet zones that provide acoustic contrast between the sound field zones, the sound field zones and the quiet zones remaining positionally stationary within the sound field, wherein each one of the plurality of quiet zones is generated at the location where one of the plurality of sound field zones is created and is configured to attenuate sound emanating from another one of the plurality of sound field zones; sensing an input from the wearer via one or more sensors at the hearing device arrangement; facilitating movement of the wearer within the sound field in response to a navigation input received from the one or more sensors; and selecting one of the sound field zones for playback to the wearer or actuating a function by the hearing device arrangement in response to a selection input received from the one or more sensors.

    [0006] According to a second aspect of the invention, there is provided an apparatus, comprising: a pair of hearing devices configured to be worn by a wearer, each hearing device comprising: a processor configured to generate a virtual auditory user interface comprising a sound field, a plurality of disparate sound field zones, and a plurality of quiet zones that provide acoustic contrast between the sound field zones, the sound field zones and the quiet zones remaining positionally stationary within the sound field, wherein each one of the plurality of quiet zones is generated at the location where one of the plurality of sound field zones is created and is configured to attenuate sound emanating from another one of the plurality of sound field zones; one or more sensors configured to sense a plurality of inputs from the wearer, the processor configured to facilitate movement of the wearer within the sound field in response to a navigation input received from the one or more sensors; and a speaker; wherein the processor is configured to select one of the sound field zones for playback via the speaker or actuate a hearing device function in response to a selection input received from the one or more sensors.

    [0007] The above summary is not intended to describe each disclosed embodiment or every implementation of the present disclosure. The figures and the detailed description below more particularly exemplify illustrative embodiments.

    BRIEF DESCRIPTION OF THE DRAWINGS



    [0008] Throughout the specification reference is made to the appended drawings wherein:

    Figure 1 illustrates an auditory display in time comprising three audio icons for the purpose of adjusting loudness in accordance with various embodiments;

    Figures 2A and 2B illustrate an auditory display in time comprising three audio icons for the purpose of selecting audio content for playback in accordance with various embodiments;

    Figure 3 illustrates an example sound field and quiet zone configuration for a virtual auditory display in accordance with various embodiments;

    Figure 4 is a flow diagram of a method implemented by an auditory display in accordance with various embodiments;

    Figure 5 is a flow diagram of a method implemented by an auditory display in accordance with various embodiments;

    Figures 6A-6D illustrate a process of generating sound field zones and quiet zones of an auditory display in accordance with various embodiments;

    Figure 7 is a flow diagram of a method implemented by an auditory display in accordance with various embodiments;

    Figure 8 illustrates a rendering setup for generating a sound field of an auditory display with a quiet zone in accordance with various embodiments;

    Figure 9 shows a representative synthesized sound field in accordance with various embodiments;

    Figure 10 shows the level distribution in dB of the synthesized sound field of Figure 9;

    Figure 11 is an illustration of a traditional setup for measuring head-related transfer functions (HRTFs);

    Figure 12 illustrates a strategy for obtaining a set of HRTFs taking into account the relative movement of the wearer with respect to a virtual loudspeaker array in accordance with various embodiments;

    Figure 13 is a block diagram showing various components of a hearing device that can be configured to implement an auditory display in accordance with various embodiments; and

    Figure 14 is a block diagram showing various components of a hearing device that can be configured to implement an auditory display in accordance with various embodiments.



    [0009] The figures are not necessarily to scale. Like numbers used in the figures refer to like components. However, it will be understood that the use of a number to refer to a component in a given figure is not intended to limit the component in another figure labeled with the same number.

    DETAILED DESCRIPTION



    [0010] It is understood that the embodiments described herein may be used with any hearing device without departing from the scope of this disclosure. The devices depicted in the figures are intended to demonstrate the subject matter, but not in a limited, exhaustive, or exclusive sense. It is also understood that the present subject matter can be used with a device designed for use in or on the right ear or the left ear or both ears of the wearer.

    [0011] Hearing devices, such as hearing aids and hearables (e.g., wearable earphones), typically include an enclosure, such as a housing or shell, within which internal components are disposed. Typical components of a hearing device can include a digital signal processor (DSP), memory, power management circuitry, one or more communication devices (e.g., a radio, a near-field magnetic induction device), one or more antennas, one or more microphones, and a receiver/speaker, for example. More advanced hearing devices can incorporate a long-range communication device, such as a Bluetooth® transceiver or other type of radio frequency (RF) transceiver.

    [0012] Hearing devices of the present disclosure can incorporate an antenna arrangement coupled to a high-frequency radio, such as a 2.4 GHz radio. The radio can conform to an IEEE 802.11 (e.g., WiFi®) or Bluetooth® (e.g., BLE, Bluetooth® 4.2 or 5.0) specification, for example. It is understood that hearing devices of the present disclosure can employ other radios, such as a 900 MHz radio.

    [0013] Hearing devices of the present disclosure are configured to receive streaming audio (e.g., digital audio data or files) from an electronic or digital source. Representative electronic/digital sources (also referred to herein as accessory devices) include an assistive listening system, a TV streamer, a radio, a smartphone, a cell phone/entertainment device (CPED) or other electronic device that serves as a source of digital audio data or files. An electronic/digital source may also be another hearing device, such as a second hearing aid. Wireless assistive listening systems, for example, are useful in a variety of situations and venues where persons with impaired hearing have difficulty discerning sound (e.g., a person speaking or an audio broadcast or presentation). Wireless assistive listening systems can be useful at venues such as theaters, museums, convention centers, music halls, classrooms, restaurants, conference rooms, bank teller stations or drive-up windows, point-of-purchase locations, and other private and public meeting places.

    [0014] The term hearing device refers to a wide variety of devices that can aid a person with impaired hearing. The term hearing device also refers to a wide variety of devices that can produce optimized or processed sound for persons with normal hearing. Hearing devices of the present disclosure include hearables (e.g., wearable earphones, headphones, earbuds, virtual reality headsets), hearing aids (e.g., hearing instruments), cochlear implants, and bone-conduction devices, for example. Hearing devices include, but are not limited to, behind-the-ear (BTE), in-the-ear (ITE), in-the-canal (ITC), invisible-in-canal (IIC), receiver-in-canal (RIC), receiver-in-the-ear (RITE) or completely-in-the-canal (CIC) type hearing devices or some combination of the above. Hearing devices can also be referred to as assistive listening devices in the context of assistive listening systems. Throughout this disclosure, reference is made to a "hearing device," which is understood to refer to a single hearing device or a pair of hearing devices.

    [0015] Embodiments of the disclosure are directed to hearing devices that incorporate a user interactive auditory display. The term "auditory display" refers to a system that synthesizes a sound field comprising spatial representations of separate audio signals. These spatial representations can be referred to as sound field zones. Some sound field zones are associated with a specified sound, such as speech or music. Some sound field zones are zones of quiet (e.g., zones lacking or substantially lacking sound). Some sound field zones, such as those associated with a specified sound, serve as sound icons. Sound icons can be activatable by a wearer of the hearing device, resulting in playback of a specified sound (e.g., a song) or actuation of a function performed by or in cooperation with the hearing device. An auditory display of the present disclosure incorporates one or more sensors that allow a wearer of a hearing device to interact with the sound field which can present an audio menu of sound icons. The wearer can navigate through the sound field and select between different sound icons presented in the sound field for purposes of playing back desired audio signals or actuating different functions of the hearing device.

    [0016] According to some embodiments, an auditory display of a hearing device arrangement is implemented as an on-demand interface. The auditory display can be activated and deactivated by the wearer of the hearing device as desired. Activation and deactivation of the auditory display can be implemented in response to a user input. For example, the hearing device can be programmed to listen for a specified voice command (e.g., "activate display," "deactivate display") and activate or deactivate the auditory display in response thereto. One or more sensors of the hearing device can be used to detect an activation or deactivation input from the wearer. By way of further example, nodding of the head twice in a left direction in quick succession can be detected by an accelerometer of the hearing device as an activation input. Nodding of the head twice in a right direction in quick succession can be detected by the accelerometer as a deactivation input.

    [0017] A wearer of a hearing device often needs to adjust operation of the hearing device in the field. Existing hearing devices typically do not have any visual display and can only afford a couple of miniature mechanical or optical controls for user adjustment due to space limitations. In addition, manipulating such miniature controls on hearing devices without the ability to see the controls is challenging. As hearing device functionality becomes more complex and sophisticated, demands for more user options are increasing. Remote controls and smartphones have been offered to meet such demands. However, such approaches require the wearer to carry an extra device which adds cost and inconvenience. As a result, it is desirable to create a user interface for hearing devices that does not require an extra device, is easy to use, and supports the increasing needs for a more sophisticated user interface.

    [0018] In a traditional human-computer user interface, user options are presented as different visual icons on a computer screen. Here, a visual icon refers to one or more visual images grouped into different zones logically on the screen. In contrast to visual displays, an auditory display of the present disclosure presents user options in the form of sequential or simultaneous sound icons. A sound icon refers to one or more sounds associated with an independent spatial zone in a binaurally rendered sound field. A collection of spatially organized sound icons can be referred to as a soundscape; such a soundscape represents an auditory display according to various embodiments.

    [0019] Figure 1 illustrates an auditory display in accordance with various embodiments. In Figure 1, a wearer of a hearing device is presented with an auditory display 100 configured to facilitate user adjustment of the volume of the device. The auditory display 100 generates a sound field 101 that contains three different sound icons 102, 104, and 106. Sound icon 102 (louder), when selected, allows the wearer to increase the volume of the hearing device. Sound icon 104 (same), when selected, allows the wearer to maintain the same volume of the hearing device. Sound icon 106 (softer), when selected, allows the wearer to decrease the volume of the hearing device.

    [0020] Each of the sound icons 102, 104, and 106 is presented at a different spatial location (e.g., defined by a corresponding elevation and azimuth) in the sound field 101 and at a different time. For example, the louder icon 102 is presented at spatial location (45°, -90°), such that the wearer will hear the word "louder" from an upper left direction at time instant 1. At time instant 2, the same icon 104 is presented at spatial location (45°, 0°), such that the wearer will hear the word "same" from an upper middle direction. At time instant 3, the softer icon 106 is presented at spatial location (45°, 90°), such that the wearer will hear the word "softer" from an upper right direction. This sequence of sound icon presentation is repeated until the wearer responds to one of the options. In response to selecting one of the sound icon options, the hearing device automatically adjusts the volume in accordance with the selected option.
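    The presentation loop of paragraph [0020] can be sketched as follows. This is a minimal Python illustration, assuming a hypothetical render_at() routine that binauralizes an icon at its (elevation, azimuth) and a hypothetical selection_made() routine that polls the device sensors; neither name is specified by this disclosure.

```python
from dataclasses import dataclass
from itertools import cycle

@dataclass
class SoundIcon:
    label: str            # e.g., "louder"
    elevation_deg: float  # elevation of the icon in the sound field
    azimuth_deg: float    # azimuth; negative values are to the wearer's left

# Hypothetical volume menu corresponding to Figure 1.
VOLUME_MENU = [
    SoundIcon("louder", 45.0, -90.0),  # upper left, time instant 1
    SoundIcon("same",   45.0,   0.0),  # upper middle, time instant 2
    SoundIcon("softer", 45.0,  90.0),  # upper right, time instant 3
]

def present_menu(menu, render_at, selection_made):
    """Repeat the spatialized icon sequence until the wearer responds.

    render_at(icon) is assumed to binauralize the icon's sound at its
    (elevation, azimuth); selection_made() is assumed to poll the device
    sensors and return the chosen icon, or None while no input is sensed.
    """
    for icon in cycle(menu):
        render_at(icon)
        choice = selection_made()
        if choice is not None:
            return choice
```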

    [0021] The sound icons populating a sound field of an auditory display can be organized and presented in a hierarchical manner. Some sound icons can trigger a subset of sound icons. For example, and with reference to Figures 2A and 2B, an auditory display 200 is presented allowing the wearer to select between a number of different sound icons. In Figure 2A, a top-level menu of sound icons is presented which allows the wearer to select between a music icon 202, a weather icon 204, and a news icon 206. As in the case of the embodiment shown in Figure 1, each of the different sound icons 202, 204, 206 is presented at a different spatial location within the sound field 201 and at a different time. In this illustrative example, the wearer selects the music icon 202.

    [0022] In response to selecting the music icon 202, a subset of music icons 210, 212, and 214 is presented at different spatial locations within the sound field 201 and at different times. As shown in Figure 2B, the subset of music icons includes a jazz icon 210, a classical icon 212, and a rock icon 214. After selecting the jazz icon 210, for example, a subset of jazz icons can be presented in the sound field 201. For example, a number of different jazz artists or jazz genres can be displayed for selection and actuation by the wearer. Additional icon subsets for individual jazz songs can be presented in one or more subsequent menus presented in the sound field 201.

    [0023] A sound icon presented in a sound field can be any sound perceivable by the wearer. For example, a sound icon can be a natural voice or a synthetic voice from a familiar or preferred person. A sound icon can be a natural or synthetic sound, such as bird sounds, ocean wave sounds, stream sounds, or computer-generated sounds. A sound icon can also be music. It is understood that a sound icon is not limited to the sounds listed above. In some embodiments, the preferences, favorites, and feeds for the auditory display can be optionally synchronized with a different device that cooperates with the hearing device, such as a mobile phone or a PC with a dedicated application installed on such devices.

    [0024] The arrangement of sound icons in the sound field of an auditory display can be implemented in an adaptive manner, such that the most frequently selected icons are closest in terms of spatial location to the "user attention resting point" within the sound field. This way, the average scroll effort of the wearer is minimized. The number of sound icons rendered in the sound field can be optimized based on the cognitive load of the wearer. For example, the hearing device can incorporate an electroencephalographic (EEG) sensor that can sense an EEG signal from the wearer, from which the wearer's cognitive load can be inferred. The sound icons can also be arranged based on the wearer's mood, inferred from the wearer's voice using emotion-detection algorithms. For example, if the wearer is sad, a sound icon with the wearer's favorite melancholic music can be rendered next to the wearer's current position in the sound field.
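    One minimal realization of this adaptive arrangement, sketched in Python under the assumption that past selections are simply counted per icon (the counter and slot structures are illustrative, not specified herein), sorts the icons by selection frequency and assigns the nearest spatial slots first:

```python
def arrange_icons(icons, selection_counts, slots_by_distance):
    """Assign the nearest spatial slots to the most frequently chosen icons.

    icons: iterable of icon identifiers.
    selection_counts: dict mapping icon -> number of past selections.
    slots_by_distance: spatial slots sorted by distance from the wearer's
    "user attention resting point", nearest first.
    Returns a dict mapping each icon to its assigned slot.
    """
    ranked = sorted(icons, key=lambda i: selection_counts.get(i, 0), reverse=True)
    return dict(zip(ranked, slots_by_distance))
```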

    [0025] To improve efficiency, the sound icons within a sound field can be selected and arranged based on the wearer's intention via one of the following means: a keyword spoken by the wearer and recognized by the hearing device via automatic speech recognition; or a keyword thought/imagined by the wearer and recognized by the hearing device via brain decoding including, but not limited to, use of an EEG signal. It is important to recognize that the above efficiency measure should be used judiciously, as excessive use can result in confusion or a sense of being lost.

    [0026] In a traditional human-computer user interface, the user navigates through the user interface by visually browsing through the different visual icons on the computer screen either automatically or with the aid of mouse scrolling and clicking. In accordance with various embodiments, user navigation of an auditory display is based on analyzing one or more of a bioelectrical signal, biomechanical movement or voice command. For example, a wearer can navigate an auditory display by listening and recognizing the different sound icons in the soundscape either automatically with an adjustable speed or by evaluating one of the following wearer inputs. One wearer input involves detection of deliberate eye movement or eyelid movement using an electrooculogram (EOG) signal sensed by an EOG sensor in the hearing device. Another wearer input involves detection of deliberate head movement via one or more inertial sensors of the hearing device, such as an accelerometer or a gyroscope. A further wearer input involves recognition of a voice command from the wearer via a microphone and a voice recognition algorithm implemented by a processor of the hearing device. Another wearer input involves recognition of a command thought/imagined by the wearer via brain decoding including, but not limited to, use of an EEG sensor of the hearing device.

    [0027] When navigating a wearer's actual or virtual movement through a sound field by evaluating wearer commands, it is possible to present more sound icons within the sound field in an organized way so as not to overwhelm the wearer with too many sound icons at a given time. For example, an eye movement from right to left by the wearer can trigger the presentation of another set of sound icons in the given context.

    [0028] In a traditional human-computer interface, the user indicates his or her selection by pressing a key on the keyboard or implementing a mouse click on a visual icon presented on the computer screen. The computer responds to the user selection by providing a visual change on the selected visual icon, a sound, or both. According to various embodiments, the wearer selects an option presented by the auditory display by one of the following means. The wearer can utter a keyword which is recognized by the hearing device via automatic speech recognition. A keyword thought/imagined by the wearer can be recognized by the hearing device via brain decoding including, but not limited to, use of an EEG sensor of the hearing device. Wearer selection of an option can be implemented by detection of a fixation-dependent microsaccade pattern or intentional gazes in the EOG signal produced by an EOG sensor of the hearing device. Detection of a wearer's attention can be based on an EEG signal. More particularly, a wearer's EEG signal can be analyzed, frequency shifted, and filtered for equalization. The envelope of the sound stream of each sound icon can be correlated with that of the equalized EEG signal. The sound stream with the highest correlation is selected.
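    The EEG-based attention detection described above amounts to picking the sound stream whose envelope best correlates with the equalized EEG signal. A minimal Python sketch follows, assuming the EEG signal has already been analyzed, frequency shifted, and filtered for equalization, and that the audio streams have been resampled to the EEG rate (an assumption; the rates are not fixed herein):

```python
import numpy as np
from scipy.signal import hilbert

def select_attended_icon(eeg_equalized, icon_streams):
    """Return the index of the sound stream whose envelope best matches the EEG.

    eeg_equalized: 1-D array holding the analyzed, frequency-shifted,
    equalized EEG signal.
    icon_streams: list of 1-D audio arrays, one per sound icon, assumed to
    be resampled to the EEG rate.
    """
    correlations = []
    for stream in icon_streams:
        envelope = np.abs(hilbert(stream))       # amplitude envelope
        n = min(len(envelope), len(eeg_equalized))
        r = np.corrcoef(envelope[:n], eeg_equalized[:n])[0, 1]
        correlations.append(r)
    return int(np.argmax(correlations))          # highest correlation wins
```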

    [0029] Embodiments of the disclosure are directed to auditory displays that provide more user options not only by representing the sound spatially, e.g., binaurally, but also in manners inspired by code-, time- or frequency-multiplexing, particularly in embodiments that use an EEG signal. For example, two sound icons within a sound field can have the same spatial location, but the sound content is distinguishable. This can be achieved via frequency-multiplexing such that the two sound icons have different distinct spectra (e.g., one icon is playing a male voice and another is playing a female voice). This can be achieved via code-multiplexing such that the two sound icons have different audio content (e.g., one icon is music while another is speech, or one icon is an English signal and another is a German signal). This can be achieved via time-multiplexing such that the two sound icons are placed at the same location but are time interleaved such that they never emit sound at the same time.

    [0030] Spatializing the sound in an auditory display has recently gained interest. However, existing spatialization approaches are based on a free field assumption. That is, existing spatialization approaches rely on the wearer's ability to solve the "cocktail party problem" in a binaurally rendered sound field, limit the number of the wearer options in the auditory display, and often lead to a confusing soundscape for the wearer. For existing binaurally rendered sound sources, the residuals of a source while facing another one are determined by the free field propagation of the sources and the head scattering.

    [0031] To ensure a clear and easy perception of different sound sources in the soundscape, it is important to control the spatial extension of these sound sources by rendering a multizone sound field and to control the potential cross-talk among different sound field zones. Embodiments of the disclosure are directed to synthesizing a sound field with zones of quiet using an array of virtual loudspeakers, which is equivalent to synthesizing a sound field with hard sound boundaries. As a result, an adjustable crosstalk capability can be achieved by varying the admittance of the virtual boundaries of each sound field zone. Because only binaural rendering is feasible in a hearable device, the sound field synthesis can be accomplished in two steps according to various embodiments: (1) synthesize the sound field using virtual loudspeakers; and (2) filter the virtual loudspeaker signals with a set of head-related transfer functions (HRTFs). Details of these and other processes involving various embodiments of an auditory display are provided hereinafter.

    [0032] Turning now to Figure 3, there is illustrated a representative sound field and quiet zone configuration for a virtual auditory display in accordance with various embodiments. More particularly, Figure 3 graphically illustrates various features of an auditory display that are sonically perceivable by a wearer of a hearing device. According to various embodiments, a hearing device arrangement is configured to generate a virtual auditory display 300 comprising a sound field 302. The sound field 302 includes a number of disparate sound field zones, sf1, sf2, and sf3. It is understood that three sound field zones are shown in Figure 3 for purposes of illustration and not of limitation. Each of the sound field zones, sf1, sf2, and sf3, is associated with a separate sound, such as any of the sounds described herein. For example, one or more of the sound field zones sf1, sf2, and sf3 can be associated with a separate audio stream (e.g., music, speech, natural sounds, synthesized sounds) received by the hearing device via a transceiver of the device.

    [0033] In addition to a number of different sound field zones sf1, sf2, and sf3, the sound field 302 includes a number of quiet zones (qzi,j), where i represents the i-th quiet zone and j represents the j-th sound field zone. In the representative embodiment shown in Figure 3, the sound field 302 includes two quiet zones for each of the sound field zones sf1, sf2, and sf3. In general, a sound field 302 can include N disparate sound field zones and at least N-1 quiet zones. The quiet zones (qzi,j) provide increased acoustic contrast between the sound field zones sf1, sf2, and sf3. The quiet zones can be viewed as hard sound boundaries between the sound field zones sf1, sf2, and sf3. As will be described in greater detail, the sound field 302, which includes sound field zones and quiet zones, is synthesized by the hearing device using an array of virtual loudspeakers. A detailed description concerning the generation of sound field zones and quiet zones of a sound field is provided below with reference to Figures 6A-6D.

    [0034] As is discussed above, Figure 3 is a visual representation of an auditory display of the present disclosure. The sound field zones sf1, sf2, and sf3 represent spatially non-interfering acoustic zones. In each sound field zone, a sound is synthesized with individual characteristics such as waveform propagation (e.g., plane wave with a specific angle of incidence), frequency range or specific content (speech, music, tonal signals). Each sound field zone corresponds to specific audio content or a menu option. According to various embodiments, the sound field zones sf1, sf2, and sf3, and quiet zones (qzi,j) remain positionally stationary within the sound field 302. Rather than fixing the wearer at the center of the sound field 302, the auditory display is configured to facilitate free movement of the wearer within the sound field 302.

    [0035] One or more sensors of the hearing device sense a user input corresponding to a navigation input or a selection input. The wearer of the hearing device can navigate through the sound field 302 by appropriate gestures, voice commands or thought sensed by the sensors of the hearing device. The wearer may select a particular sound field zone by an appropriate gesture, voice command or thought, which activates the selected zone. The selected zone may be a menu option or a sound (e.g., a song or verbal podcast). Accordingly, the auditory display 300 does not necessarily place the listener in the center of the synthesized sound field 302 as is the case using conventional spatialization techniques. Rather, the wearer can effectively move through (virtually or actually) the perspective-free sound field 302. Through navigation, the wearer chooses his or her perspective.

    [0036] The auditory display 300 is implemented to expose the wearer of a hearing device to different spatial audio content with controlled acoustic contrast. The experience of the wearer when navigating the sound field 302 can be compared to the synthesis of the sound in a corridor of a music conservatorium comprising separate music rooms. In this illustrative scenario, the sound from each separated room is mixed with different level and character at the listener's ear. The listener can walk through the corridor and listen to the different played materials and finally choose the room he or she prefers to enter. Choosing a room in this regard is equivalent to choosing a menu option that can result in selecting a specific hearing device configuration or starting a specific activity with the hearing device, such as playing an audio book or listening to the news.

    [0037] In Figure 3, the three sound field zones sf1, sf2, and sf3 can represent three different genres of music, such as jazz (sf1), classical (sf2), and rock (sf3). As is shown in Figure 3, the wearer enters the sound field 302 at location A. At location A, the wearer can hear each of the sound field zones sf1, sf2, and sf3 at different spatial locations of the sound field 302 and at different amplitudes based on the spatial distance of each zone from location A. The sounds emanating from the sound field zones sf1, sf2, and sf3 are spatially non-interfering with each other due to the presence of quiet zones (qzi,j) that provide acoustic contrast between the sound field zones sf1, sf2, and sf3. The three different genres of music can be played back to the wearer sequentially or simultaneously. From location A, the wearer can provide a user input to select the desired sound field zone, such as jazz sound field zone sf1.

    [0038] In response to selecting the sound field zone sf1, a subset of sound icons can replace the sound field zones sf1, sf2, and sf3 in the sound field 302. This subset of sound icons can represent different sub-genres of jazz, such as traditional, swing, smooth, West Coast, New Orleans, big band, and modern. The wearer can select a sound icon of a desired sub-genre of jazz, and another subset of sound icons representing different jazz artists can populate the sound field 302. The jazz artist sound icons may be implemented to play back the name of each jazz artist in a sequential or simultaneous manner. After selecting a desired jazz artist, sound icons for individual songs associated with the selected jazz artist can be presented in the sound field 302. The wearer may then select a specific song icon for playback.

    [0039] The functionality described hereinabove regarding the selection of desired music can be implemented for configuring the hearing device. For example, an initial set of sound icons for controlling different functions of the hearing device can be presented in the sound field 302. The wearer can provide a user input to select a desired function, such as volume adjustment. The sound field 302 can then be populated by sound icons that allow for the adjustment of volume, such as the icons shown in Figure 1.

    [0040] In addition to the functionality described hereinabove, the auditory display 300 can be implemented to facilitate movement of the wearer within the sound field 302 via one or more sensors of the hearing device. Movement within the sound field 302 can be virtual or actual. As the wearer moves within the sound field 302, the wearer-perceived amplitude and directionality of sound emanating from the sound field zones sf1, sf2, and sf3 is adjusted in response to the wearer's movement. As the wearer moves from location A to location B, the sound emanating from sound field zone sf1 increases in amplitude relative to the sound emanating from sound field zones sf2 and sf3. As the wearer moves from location B to location C, the wearer perceives a diminishment in the amplitude of sound from sound field zone sf1, and an increase in the amplitude of sound from sound field zones sf2 and sf3. Moving from location C, the wearer decides to select the sound field zone sf2 (classical), which is indicated as location D of the wearer. Additional menus of sound icons can then be presented in the sound field 302 in response to selecting the sound field zone sf2.

    [0041] Figure 4 is a flow diagram of a method implemented by an auditory display in accordance with various embodiments. The processes shown in Figure 4 summarize those discussed above with reference to Figure 3. The method of Figure 4 involves generating 402, by a wearable hearing device arrangement, a virtual audio display having a sound field comprising stationary sound field zones and stationary quiet zones. The method involves sensing 404 an input from the wearer via a sensor of the hearing device arrangement. The method also involves facilitating movement 406 of the wearer within the sound field of the audio display in response to a navigation input received from the sensor. The method further involves selecting 408 one of the sound field zones for playback to the wearer or actuation of a function in response to a selection input received from the sensor.

    [0042] Figure 5 is a flow diagram of a method implemented by an auditory display in accordance with other embodiments. The method shown in Figure 5 involves generating 502 a sound field of a virtual audio display for use by a hearing device. The method involves generating 504 N sound field zones of a virtual audio display, such that each sound field zone is associated with a separate sound or audio stream. The method also involves generating 506, for each sound field zone, at least N-1 quiet zones between the sound field zones. The method further involves co-locating 508 the quiet zones with their corresponding sound field zones.

    [0043] Figures 6A-6D illustrate a process of generating sound field zones and quiet zones of an auditory display in accordance with various embodiments. Figure 6A shows an auditory display 300 having a sound field 302. The sound field 302 is designed to include three sound field zones sf1, sf2, and sf3 at the specific locations shown in Figure 6A. Figures 6B-6D illustrate how each of the sound field zones and associated quiet zones are generated to create the three sound field zones shown in Figure 6A. The sound field zones and quiet zones are generated by the hearing device using a virtual loudspeaker array and audio processing (via a sound processor or digital signal processor (DSP)), as will be described hereinbelow.

    [0044] Figure 6B shows details concerning the generation of sound field zone sf1. The sound field zone sf1 is generated at the location shown in Figure 6A. A first quiet zone, qz1,2, is generated at the location where the sound field zone sf2 will be created. The number 1 after the term qz in qz1,2 identifies the sound field zone whose sound is being attenuated, and the number 2 identifies the sound field zone where the quiet zone is located. As such, the first quiet zone, qz1,2, is located where the sound field zone sf2 will be created and is configured to attenuate sound emanating from the first sound field zone sf1. A second quiet zone, qz1,3, is generated at the location where the sound field zone sf3 will be created. The second quiet zone, qz1,3, is located where the sound field zone sf3 will be positioned and is configured to attenuate sound emanating from the first sound field zone sf1.

    [0045] Figure 6C shows details concerning the generation of sound field zone sf2. The sound field zone sf2 is generated at the location shown in Figure 6A. A third quiet zone, qz2,1, is generated at the location of the first sound field zone sf1. The third quiet zone, qz2,1, is configured to attenuate sound emanating from the second sound field zone sf2. A fourth quiet zone, qz2,3, is generated at the location of the third sound field zone sf3. The fourth quiet zone, qz2,3, is configured to attenuate sound emanating from the second sound field zone sf2.

    [0046] Figure 6D shows details concerning the generation of sound field zone sf3. The sound field zone sf3 is generated at the location shown in Figure 6A. A fifth quiet zone, qz3,1, is generated at the location of the first sound field zone sf1. The fifth quiet zone, qz3,1, is configured to attenuate sound emanating from the third sound field zone sf3. A sixth quiet zone, qz3,2, is generated at the location of the second sound field zone sf2. The sixth quiet zone, qz3,2, is configured to attenuate sound emanating from the third sound field zone sf3. The arrangement of quiet zones between the sound field zones provides acoustic contrast between the sound field zones of the sound field 302.
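    The iterative construction of Figures 6B-6D generalizes to N sound field zones: each zone j is rendered as a bright zone at its own location together with quiet zones at the locations of all other zones, and the N partial fields are superposed. A schematic Python sketch follows, in which synthesize_zone() is a placeholder for the sound field synthesis operations described below:

```python
def build_sound_field(zone_locations, synthesize_zone):
    """Create N non-interfering sound field zones per Figures 6A-6D.

    For each zone j, request a bright zone at its own location and a quiet
    zone qz(j, k) at every other zone location k, so that sound emanating
    from zone j is attenuated where the other zones live. Superposing the
    N partial fields yields the complete auditory display.
    """
    partial_fields = []
    for j, bright_location in enumerate(zone_locations):
        quiet_locations = [loc for k, loc in enumerate(zone_locations) if k != j]
        partial_fields.append(synthesize_zone(bright_location, quiet_locations))
    return sum(partial_fields)  # superposition principle
```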

    [0047] Figure 7 is a flow diagram of a method implemented by an auditory display in accordance with various embodiments. The method shown in Figure 7 involves generating 702, by a wearable hearing device arrangement, a virtual audio display having a sound field comprising stationary sound field zones and stationary quiet zones, such as those shown in Figure 6D. The method of Figure 7 involves synthesizing 704 binaural sound emanating from the sound field zones using a set of head-related transfer functions (HRTFs). The method involves sensing 706 an input from the wearer via a sensor of the hearing device arrangement. The method also involves facilitating 708 movement of the wearer within the sound field of the audio display in response to a navigation input received from the sensor. The method further involves adjusting 710 the acoustic contrast between the sound field zones and the quiet zones in response to wearer movement within the sound field. The method also involves selecting 712 one of the sound field zones for playback to the wearer or actuation of a function in response to a selection input received from the sensor. The method further involves binauralizing 714 sound emanating from the selected sound field zone using a set of HRTFs.

    [0048] Figure 8 illustrates a rendering setup for generating a sound field of a virtual auditory display with a quiet zone in accordance with various embodiments. Sound field synthesis techniques implemented in accordance with the disclosure aim at controlling a sound field within a bounded region 802 using actuators 803 at the boundary of this region. The elements of an array that contribute to synthesize a sound field are referred to as secondary sources 803, e.g., virtual loudspeakers. Although shown partially encircling the rendering region 802, it is understood that secondary sources 803 are typically distributed around the entirety of the rendering region 802. The primary source is the target of the synthesis, e.g., a point source emitting a speech signal or an audio stream of music.

    [0049] A synthetic spatially diverse sound field in a certain 2- or 3-dimensional region that is bounded by a distribution of secondary sources 803, such as a circular array in the 2-dimensional case or a spherical loudspeaker array in the 3-dimensional case, can be described by finite impulse response (FIR) filters that, together with the signal of the primary source and via a convolution operation, determine the output signal of each loudspeaker. These FIR filters are called driving functions and are obtained analytically or numerically by deconvolving the desired sound field at a certain distribution of points by the Green's function describing the sound propagation in the rendering region 802 between the secondary sources 803 and the desired points. In Figure 8, the region 802 bounded by the distribution of secondary sources 803 is referred to as the rendering region. The rendering region 802 can contain several zones bounded by closed boundaries. Each zone can be a superposition of the sound fields of several primary sources or a zone of quiet.
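    In the frequency domain, the deconvolution that yields the driving functions can be posed per frequency bin as a regularized least-squares problem: find driving weights d such that G·d approximates the desired pressure at a set of control points, where G samples the Green's function from each secondary source 803 to each control point. A minimal numpy sketch under this formulation follows; the regularization value is an illustrative assumption:

```python
import numpy as np

def driving_weights(green_matrix, desired_pressure, reg=1e-3):
    """Solve for secondary-source driving weights at one frequency bin.

    green_matrix: (num_control_points, num_sources) complex transfer matrix
    sampled from the Green's function of the rendering region.
    desired_pressure: (num_control_points,) complex target sound field.
    Tikhonov regularization keeps the inversion well behaved.
    """
    g = green_matrix
    gram = g.conj().T @ g + reg * np.eye(g.shape[1])
    return np.linalg.solve(gram, g.conj().T @ desired_pressure)
```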

    [0050] A synthesized sound field can contain virtual rigid boundaries. Synthesizing a sound field under the conditions of a virtual rigid boundary within the rendering region 802 allows the creation of a zone of quiet 804. Practically, to synthesize a sound field with zones of quiet 804 within a rendering region 802, the pressure (P) and velocity (V) are controlled along the boundary of a desired zone of quiet 804. The velocity is described mathematically by the following equation:

\vec{V}(\vec{x},\omega) = -\frac{1}{j\omega\rho}\,\operatorname{grad} P(\vec{x},\omega),

    where j denotes the imaginary unit, ρ denotes the density of the propagation medium, and ω denotes the radial frequency.

    [0051] To control the velocity at a certain predefined boundary lying within the rendering region 802, an approximation can be made by considering the boundary as a two-layer boundary comprising layers 806 and 808. This approximation allows the computation of the normal component of the velocity as a weighted finite difference of the pressure between the two layers 806 and 808, as depicted in Figure 8.
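    Concretely, denoting the pressures on the two layers 806 and 808 by P1 and P2 and the layer spacing by d (the spacing is not specified herein), the normal component of the velocity can be approximated by the standard finite-difference form

V_n(\vec{x},\omega) \approx -\frac{1}{j\omega\rho}\,\frac{P_2(\vec{x},\omega) - P_1(\vec{x},\omega)}{d},

    consistent with the velocity equation of paragraph [0050].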

    [0052] A virtual rigid boundary should fulfill the Neumann boundary conditions constraining the normal velocity to be zero. To optimally control the sound field in the non-quiet (bright) zones (e.g., a sound field zone), techniques such as local sound field synthesis can be applied. Preferably, a soft scatterer fulfilling the Dirichlet boundary condition is virtually emulated along the desired zone.

    [0053] As was discussed previously with reference to Figures 6A-6D , creating a sound field containing non-interfering zones (e.g., sound field zones) can be done iteratively, by creating one local bright zone jointly with zones of quiet which coincide with the locations of the desired other non-interfering bright zones. By exploiting the superposition principle, in a second iteration, the optimization goal is to create another bright zone coinciding with one of the already created quiet zones jointly with zones of quiet covering the other zones. This is repeated for each desired zone.

    [0054] It should be noted that the synthesis of zones of quiet using 2-dimensional arrays is limited to synthesizing either non-intersecting bright and quiet zones or zones of quiet which are entirely included in a bright zone. More flexibility can be achieved in a 3-dimensional rendering setup, as shown in Figures 9 and 10. Figure 9 shows a synthesized sound field and Figure 10 shows the level distribution in dB of the synthesized sound field of Figure 9. In Figures 9 and 10, at a plane of interest, the bright zone 904 (the leaf-shaped sound field zone) is included in a zone of quiet 902. This can be achieved by simulating a 3-dimensional virtual rigid scatterer with the shape of the desired zone of quiet 902 in the x-y-plane, limiting its height (extension along the z-axis) to a length smaller than the array dimensions.

    [0055] To auralize the synthesized diverse sound field according to various embodiments, a set of HRTFs is used, which is measured in an anechoic chamber. As is shown in Figure 11, a traditional HRTF measurement is obtained by either placing a loudspeaker 1106 at a predefined distance from a dummy head 1103 and rotating the dummy head 1103, or by rotating the loudspeaker 1106 along a circle 1104 whose origin coincides with the center of the dummy head 1103. For 3-dimensional measurements, two axes are typically used to obtain a virtual sphere of loudspeakers. This circle (sphere) is referred to as the measurement circle (sphere). Alternatively, HRTFs can be recorded from individual listeners or calculated from images of the listener's ear.

    [0056] For purposes of explanation, a 2-dimensional setup is described with reference to Figure 11. The measurement circle 1104 can be thought of as the distribution of the secondary sources (e.g., virtual loudspeakers) used for sound field synthesis. To allow the hearing device wearer to scan the synthesized sound field from different perspectives, which differ not only due to a head rotation but also due to a translation in space, several measurements of HRTFs, with the dummy head not only in the center of the measurement circle but also scanning the space, would be needed. According to various embodiments, however, this can be accomplished with a single HRTF measurement with the dummy head 1103 in the center of the measurement circle 1104.

    [0057] Relaxing (untightening) the perspective of the wearer towards the soundscape from the perspective of the dummy head 1103 during the HRTF measurement can be achieved by exploiting the fact that the sound field synthesis is HRTF independent. For example, assume that a virtual sound field can be synthesized using an array of 360 virtual loudspeakers that encircle the sound field. A set of indexed HRTFs (e.g., a set of 360x2 filters {(h1,L, h1,R), ..., (hP,L, hP,R)}) can describe the sound propagation from each loudspeaker on a circle indexed by {1, ..., P} to the two microphones in the ears of the dummy head 1103, indexed by {L, R}, at a resolution of 1 degree. The array of 360 virtual loudspeakers provides for the synthesis of the sound of a virtual point source at any position in space, which can in turn be achieved using a set of single-input/multiple-output (SIMO) FIR filters. These filters can be obtained using sound field synthesis techniques such as wave-field synthesis or (near-field compensated) higher-order Ambisonics. Convolving the SIMO filters with the original HRTF dataset, viewed as a 360x2 multiple-input/multiple-output (MIMO) FIR filter, results in a 1x2 SIMO filter describing a new HRTF set that characterizes the propagation from the new virtual sound source to the ears of the dummy head 1103.
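    The convolution of the point-source synthesis filters with the measured HRTF dataset reduces to a sum of per-loudspeaker convolutions. A minimal numpy sketch follows, with array shapes assumed to match the 360-loudspeaker example above:

```python
import numpy as np

def virtual_source_hrtf(simo_filters, hrtf_left, hrtf_right):
    """Derive the HRTF pair of a new virtual point source.

    simo_filters: (360, filter_len) FIR filters driving each virtual
    loudspeaker to synthesize the point source.
    hrtf_left/hrtf_right: (360, hrtf_len) measured HRTFs from each
    loudspeaker to the left/right ear of the dummy head.
    Returns one (left, right) impulse-response pair for the new source.
    """
    num_sources = simo_filters.shape[0]
    left = sum(np.convolve(simo_filters[p], hrtf_left[p]) for p in range(num_sources))
    right = sum(np.convolve(simo_filters[p], hrtf_right[p]) for p in range(num_sources))
    return left, right
```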

    [0058] As such, the soundscape synthesis can be termed HRTF independent. Moreover, a wearer movement is equivalent to a translation of the inertial system whose origin was the wearer at the initial point. An example of this translation is illustrated in Figure 12. Figure 12 illustrates the strategy of obtaining a set of HRTFs taking into account the relative movement of the wearer (from location 1202 to 1202') with respect to the virtual loudspeaker array 1204. The wearer's movement is equivalent to a movement of the array 1204 in the opposite direction (from location 1206 to 1206'). The new position (1206') of the array 1204 is used to synthesize virtual loudspeakers as point sources.

    [0059] Facilitating movement (untightening the perspective) of the wearer within the synthesized sound field involves determining the HRTF between the old loudspeaker positions 1206 and the new wearer position (e.g., 1202'). To do so, the old loudspeaker positions 1206 are synthesized as point sources in the translated inertial system using a single-input/multiple-output FIR filter, which can be expressed as a vector Dp of dimension RLx1, where R is the HRTF dataset resolution and L is the required filter length of the synthesis operator. The new set of HRTFs for each ear is obtained as a convolution of each filter Dp for a loudspeaker p with the original HRTF. Similarly, head rotations of the wearer are equivalent to a rotation of the array 1204 in the opposite direction. Hence, to obtain the effect of rotating the head, the HRTF filters' indices are circularly shifted in the opposite direction to the head rotation.
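    Because the HRTFs in the example above are indexed on a circle at 1-degree resolution, the head rotation handling of paragraph [0059] maps to a circular shift of the dataset indices, as in the following sketch (interpolation for non-integer angles is omitted):

```python
import numpy as np

def rotate_hrtf_set(hrtf_set, head_rotation_deg):
    """Emulate a wearer head rotation by re-indexing the HRTF dataset.

    hrtf_set: (360, 2, hrtf_len) array of left/right HRTFs on a 1-degree
    circular grid. Rotating the head by +r degrees is equivalent to
    rotating the virtual array by -r degrees, i.e., a circular shift of
    the indices in the opposite direction.
    """
    shift = -int(round(head_rotation_deg)) % 360
    return np.roll(hrtf_set, shift, axis=0)
```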

    [0060] Figures 13 and 14 are block diagrams showing various components of a hearing device that can be configured to implement an auditory display in accordance with various embodiments. The block diagram of Figure 13 represents a generic hearing device for purposes of illustration. The circuitry shown in Figure 13 can be implemented to incorporate the circuitry shown in Figure 14. It is understood that a hearing device may exclude some of the components shown in Figure 13 and/or include additional components.

    [0061] The hearing device 1302 shown in Figure 13 includes several components electrically connected to a mother flexible circuit 1303. A battery 1305 is electrically connected to the mother flexible circuit 1303 and provides power to the various components of the hearing device 1302. One or more microphones 1306 are electrically connected to the mother flexible circuit 1303, which provides electrical communication between the microphones 1306 and a DSP 1304. Among other components, the DSP 1304 incorporates or is coupled to audio signal processing circuitry configured to implement an auditory display of the disclosure. In some embodiments, the DSP 1304 can incorporate the circuitry shown in Figure 14 . In other embodiments, the DSP 1304 is coupled to a processor or other circuitry that incorporates the circuitry shown in Figure 14 . One or more user switches 1308 (e.g., on/off, volume, mic directional settings) are electrically coupled to the DSP 1304 via the flexible mother circuit 1303. In some embodiments, some or all of the user switches 1308 can be excluded and their functions replaced by wearer interaction with an auditory display of the hearing device 1302.

    [0062] An audio output device 1310 is electrically connected to the DSP 1304 via the flexible mother circuit 1303. In some embodiments, the audio output device 1310 comprises a speaker (coupled to an amplifier). In other embodiments, the audio output device 1310 comprises an amplifier coupled to an external receiver 1312 adapted for positioning within an ear of a wearer. The hearing device 1302 may incorporate a communication device 1307 coupled to the flexible mother circuit 1303 and to an antenna 1309 directly or indirectly via the flexible mother circuit 1303. The communication device 1307 can be a Bluetooth® transceiver, such as a BLE (Bluetooth® low energy) transceiver or other transceiver (e.g., an IEEE 802.11 compliant device). The communication device 1307 can be configured to receive a multiplicity of audio streams that can serve as primary sources of a sound field synthesized in accordance with various embodiments.

    [0063] Figure 14 shows various components of a hearing device that cooperate to provide a user interactive auditory display in accordance with various embodiments. As was discussed previously, the components shown in Figure 14 can be integral to the DSP 1304 or coupled to the DSP 1304. In the example shown in Figure 14, an auditory display 1402 is depicted which includes a sound field 1404 comprising two independent sound field zones 1403, sf1 and sf2. Although not explicitly shown in Figure 14, the sound field 1404 includes a number of quiet zones that provide acoustic contrast between the two sound field zones 1403. The perspective (location) of the hearing device wearer 1406 within the sound field 1404 is also shown. The sound field 1404 is synthesized using an array of virtual loudspeakers 1408.

    [0064] The auditory display circuitry includes a set of synthesis FIR filters 1410 and a set of binauralizing FIR filters 1420. The synthesis FIR filters 1410 include a first filter set 1412 for N independent sound field zones 1403. The synthesis FIR filters 1410 include a second filter set 1414 for P virtual loudspeakers 1408. The binauralizing FIR filters 1420 include a left (L) filter set 1422 and a right (R) filter set 1424 for P virtual loudspeakers 1408. For simplicity, the number of synthesis FIR filters 1410 and binauralizing FIR filters 1420 shown in Figure 14 is based on connections to two of P virtual loudspeakers 1408 and two of N sound field zones 1403. Two inputs (z1 and zN) of the synthesis FIR filters 1410 are shown, each of which receives a different primary electronic/digital source input (e.g., a sound, an audio stream, a voice prompt). The input z1 corresponds to the sound field zone sf1, and the input zN corresponds to the sound field zone sf2 in this illustrative example. The circuitry also includes HRTF calculation circuitry 1430 and a memory of HRTF data sets 1432. A wearer is shown wearing a hearing device arrangement 1440 which includes left (L) and right (R) speakers. Although the hearing device arrangement 1440 is illustrated as a headset, it is understood that the hearing device arrangement 1440 represents any kind of hearable device (e.g., left and right hearing instruments or hearables).

    [0065] The target sound field 1404 has N independent sound field zones 1403 described by a MIMO system with N inputs and P outputs. In each of the sound field zones 1403, specific audio material, represented as a mono audio channel, can be played independently of the other sound field zones 1403. The sound field 1404 describing the auditory display 1402 is synthesized by filtering the N mono channels with the MIMO filter of size NxP, obtaining P virtual loudspeaker channels. P is also the number of virtual loudspeakers used during the HRTF measurement via the HRTF calculation circuit 1430. Hence, the HRTF data set 1432 is described by a MIMO system with P inputs and 2 outputs.
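    The signal path of paragraph [0065] can thus be summarized as two cascaded matrix filters per frequency bin: the NxP synthesis MIMO filter maps the N zone signals to P virtual loudspeaker channels, and the Px2 HRTF filter bank maps those to left and right ear signals. A frequency-domain numpy sketch follows; the array shapes are illustrative assumptions:

```python
import numpy as np

def render_auditory_display(zone_spectra, synthesis_mimo, hrtf_mimo):
    """Render N zone signals to a binaural pair, one frequency bin at a time.

    zone_spectra: (num_bins, N) spectra of the N mono zone signals.
    synthesis_mimo: (num_bins, P, N) sound field synthesis filters.
    hrtf_mimo: (num_bins, 2, P) HRTF filters to the left and right ear.
    Returns (num_bins, 2) binaural spectra for the hearing device speakers.
    """
    num_bins = zone_spectra.shape[0]
    binaural = np.empty((num_bins, 2), dtype=complex)
    for k in range(num_bins):
        loudspeaker_channels = synthesis_mimo[k] @ zone_spectra[k]  # N -> P
        binaural[k] = hrtf_mimo[k] @ loudspeaker_channels           # P -> 2
    return binaural
```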

    [0066] According to the wearer's position in space, which can be inferred using localization algorithms (e.g., audio-based localization of the microphones in the hearing device), a new HRTF dataset is calculated according to the methods described hereinabove. Additionally, according to the wearer's head orientation, which can be obtained by dedicated sensors (e.g., an accelerometer and a gyroscope integrated in the hearing device), the indices of the synthesized HRTF are circularly shifted accordingly, and the P original virtual loudspeaker channels are filtered with the newly calculated HRTF, offering 2 signals which are finally presented to the wearer as left and right ear signals via the hearing device arrangement 1440.


    Claims

    1. A method implemented by a hearing device arrangement adapted to be worn by a wearer, the method comprising:

    generating, by the hearing device arrangement, a virtual auditory user interface (300) comprising a sound field (302), a plurality of disparate sound field zones (sf1, sf2, sf3), and a plurality of quiet zones (qz2,1, qz3,1, qz1,2, qz3,2, qz1,3, qz2,3) that provide acoustic contrast between the sound field zones, the sound field zones and the quiet zones remaining positionally stationary within the sound field, wherein each one of the plurality of quiet zones is generated at the location where one of the plurality of sound field zones is created and is configured to attenuate sound emanating from another one of the plurality of sound field zones;

    sensing an input from the wearer via one or more sensors at the hearing device arrangement;

    facilitating movement of the wearer within the sound field in response to a navigation input received from the one or more sensors; and

    selecting one of the sound field zones for playback to the wearer or actuating a function by the hearing device arrangement in response to a selection input received from the one or more sensors.


     
    2. The method of claim 1, wherein facilitating wearer movement comprises facilitating virtual or actual movement of the wearer within the sound field.
     
    3. The method of claim 1 or claim 2, wherein generating the virtual auditory user interface comprises simultaneously or sequentially playing back sound from each of the sound field zones.
     
    4. The method of any preceding claim, comprising adjusting wearer-perceived amplitude and directionality of sound emanating from the sound field zones in response to movement of the wearer within the sound field.
     
    5. The method of any preceding claim, comprising binauralizing sound emanating from the sound field zones using a set of head related transfer functions (HRTFs).
     
    6. The method of any preceding claim, wherein the one or more sensors comprise one or both of an electrooculographic (EOG) sensor and an electroencephalographic (EEG) sensor.
     
    7. The method of any preceding claim, wherein at least some of the sound field zones are associated with a separate audio stream.
     
    8. An apparatus, comprising:
    a pair of hearing devices configured to be worn by a wearer, each hearing device comprising:

    a processor configured to generate a virtual auditory user interface (300) comprising a sound field (302), a plurality of disparate sound field zones (sf1, sf2, sf3), and a plurality of quiet zones (qz2,1, qz3,1, qz1,2, qz3,2, qz1,3, qz2,3) that provide acoustic contrast between the sound field zones, the sound field zones and the quiet zones remaining positionally stationary within the sound field, wherein each one of the plurality of quiet zones is generated at the location where one of the plurality of sound field zones is created and is configured to attenuate sound emanating from another one of the plurality of sound field zones;

    one or more sensors configured to sense a plurality of inputs from the wearer, the processor configured to facilitate movement of the wearer within the sound field in response to a navigation input received from the one or more sensors; and

    a speaker;

    wherein the processor is configured to select one of the sound field zones for playback via the speaker or actuate a hearing device function in response to a selection input received from the one or more sensors.


     
    9. The apparatus of claim 8, wherein the processor is configured to facilitate virtual or actual movement of the wearer within the sound field in response to the navigation input received from the one or more sensors.
     
    10. The apparatus of claim 8 or claim 9, wherein:

    the processor is operable in a navigation mode and a selection mode;

    the processor is configured to simultaneously or sequentially play back sound from each of the sound field zones via the speaker in the navigation mode; and

    the processor is configured to play back sound from the selected sound field zone via the speaker or actuate a hearing device function in the selection mode.


     
    11. The apparatus of any one of claims 8 to 10, wherein the processor is configured to adjust wearer-perceived amplitude and directionality of sound emanating from the sound field zones in response to movement of the wearer within the sound field.
     
    12. The apparatus of any one of claims 8 to 11, wherein the processor is configured to binauralize sound emanating from the sound field zones using a set of head related transfer functions (HRTFs).
     
    13. The apparatus of any one of claims 8 to 12, wherein the processor is configured to modify a set of filters for adjusting the acoustic contrast between the sound field zones and the quiet zones in response to movement of the wearer within the sound field.
     
    14. The apparatus of any one of claims 8 to 13, wherein the one or more sensors comprise one or both of an electrooculographic (EOG) sensor and an electroencephalographic (EEG) sensor.
     
    15. The apparatus of any one of claims 8 to 10, wherein at least some of the sound field zones are associated with a separate audio stream received by a wireless transceiver of the hearing device.
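
    Purely as an illustration of the interaction model recited in claims 1 and 10 (no code appears in the patent, and all names below are hypothetical), the navigation and selection modes can be read as a small state machine:

```python
# Hypothetical sketch of the navigation/selection modes of claims 1 and 10.
from enum import Enum, auto

class Mode(Enum):
    NAVIGATION = auto()   # zones play back; wearer moves focus through the field
    SELECTION = auto()    # one zone is chosen for playback or a device function

class AuditoryUI:
    def __init__(self, zones):
        self.zones = zones            # e.g., ["sf1", "sf2", "sf3"]
        self.mode = Mode.NAVIGATION
        self.focused = 0

    def on_navigation_input(self, step):
        """Advance the wearer's focus among the stationary zones (navigation input)."""
        if self.mode is Mode.NAVIGATION:
            self.focused = (self.focused + step) % len(self.zones)

    def on_selection_input(self):
        """Enter selection mode and return the zone to play back or act on."""
        self.mode = Mode.SELECTION
        return self.zones[self.focused]
```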
     






    Drawing