Field
[0001] The present disclosure relates to a device and a method for setting up a hearing
instrument.
Background
[0002] When programming a hearing device via programming software, e.g. running on a hearing-device
programming/setup apparatus, the apparatus/software will, once a connection
to the hearing device is established, be able to identify the type (and the setting)
of the device.
[0003] However, the complete hearing-device system may also include one or more (additional)
exchangeable parts, which are not detected automatically. These could for example be
a speaker link and a dome in a receiver-in-the-ear (RITE) hearing device.
[0004] The information about these additional parts has to be entered manually in the fitting
software by the person who fits the device. If the information is entered incorrectly,
e.g. indicating an incorrect type of dome, an error will be introduced in the estimation
of the acoustic performance. The consequence may be an erroneous gain prescription
and setting of the device.
[0005] Therefore, there is a need to provide a solution that allows for using machine vision
to detect and verify a hearing-device configuration.
[0006] Furthermore, when performing acoustical verification measurements on hearing instruments,
e.g. real-ear measurements (REM) and hearing-instrument testing (HIT), the hearing
instruments have to be positioned correctly, either on the ear (in case of REM) or
in a test box (in case of HIT).
[0007] Furthermore, just as mentioned above, also for the measurement scenario, the hearing
instruments have to be configured correctly, e.g. be fitted with the correct earpiece
(dome) and speaker link (in case of a RITE hearing instrument).
[0008] The verification of the hearing-instrument position and configuration is normally
based on a subjective evaluation made by the person who performs the verification
measurement. An error made during the positioning or configuration of the hearing
instrument may lead to erroneous measurements.
[0009] Therefore, there is a need to provide a solution that allows for using machine vision
to verify hearing-instrument testing and real-ear measurement setups.
[0010] The present disclosure provides at least an alternative to the prior art.
Summary
[0011] According to an aspect of the present disclosure, there is provided a setup device
for a hearing instrument. The setup device comprises a receiving unit configured to
receive, from at least one image pickup device, at least one image of said hearing
instrument. The setup device further comprises a processing unit configured to process
said at least one image of said hearing instrument. Further, the setup device comprises
an identifying unit configured to identify a property of said hearing instrument based
on a result of said processing unit. In addition, the setup device comprises a controlling
unit configured to control a setup processing for said hearing instrument based on
said property of said hearing instrument.
[0012] The property of said hearing instrument may comprise a hardware configuration of
said hearing instrument.
[0013] The hardware configuration of said hearing instrument may be a type of an exchangeable
part of said hearing instrument. The hardware configuration of said hearing instrument
may be a type of an exchangeable part attached to said hearing instrument.
[0014] The exchangeable part of said hearing instrument may bear a type-unique mark. The
exchangeable part attached to said hearing instrument may bear a type-unique mark.
[0015] The property of said hearing instrument may comprise a location and/or an orientation
of said hearing instrument. The location may for example be a location in relation
to a loudspeaker of REM equipment, or may for example be a location in relation
to a user's ear, or may for example be a location in relation to (a marker of) a desired
position in a test box of HIT equipment.
[0016] The processing unit may be configured to process said at least one image of said
hearing instrument utilizing at least one of a machine-vision technique and an image
recognition algorithm.
[0017] The controlling unit may be configured to compare said property of said hearing instrument
with a preset property.
[0018] The controlling unit may be configured to halt or interrupt said setup processing
for said hearing instrument, if said property of said hearing instrument does not
match said preset property. Correspondingly, the controlling unit may
be configured to start or resume said setup processing for said hearing instrument,
if said property of said hearing instrument matches said preset property.
[0019] The controlling unit may be configured to output a warning, if said property of said
hearing instrument does not match said preset property.
[0020] The preset property may comprise a target state and a range of an acceptable variation
from said target state.
[0021] The controlling unit may be configured to enter said property of said hearing instrument
into a reserved memory or program section.
[0022] The hearing instrument may be one of a hearing aid device, an earphone, and a headset.
[0023] The setup device may comprise said at least one image pickup device.
[0024] Further, the at least one image pickup device may be one of a web camera, a smartphone
camera, and a camera integrated with said setup device.
[0025] According to another aspect of the present disclosure, there is provided a setup
method for a hearing instrument. The setup method comprises receiving, from at least
one image pickup device, at least one image of said hearing instrument. The setup
method further comprises processing said at least one image of said hearing instrument.
Further, the setup method comprises identifying a property of said hearing instrument
based on a result of said processing. Still further, the setup method comprises
controlling a setup processing for said hearing instrument based on said property
of said hearing instrument.
[0026] According to another aspect of the present disclosure, there is provided a computer
program product comprising computer-executable program code, which, when the program
is run on a computer, is configured to cause the computer to carry out the method
according to the aspect above.
Brief description of the drawings
[0027] The aspects of the disclosure may be best understood from the following detailed
description taken in conjunction with the accompanying figures. The figures are schematic
and simplified for clarity, and they just show details to improve the understanding
of the claims, while other details are left out. Throughout, the same reference numerals
are used for identical or corresponding parts. The individual features of each aspect
may each be combined with any or all features of the other aspects. These and other
aspects, features and/or technical effects will be apparent from and elucidated with
reference to the illustrations described hereinafter in which:
Figure 1 illustrates a setup device according to an embodiment of the disclosure,
Figure 2 illustrates a method according to an embodiment of the disclosure,
Figure 3 illustrates details of a method according to an embodiment of the disclosure,
Figure 4 illustrates details of a method according to an embodiment of the disclosure,
Figure 5 illustrates a test box according to an embodiment of the disclosure, and
Figure 6 illustrates a scenario of a verification process according to an embodiment
of the disclosure.
Detailed description of drawings and embodiments of the present invention
[0028] The detailed description set forth below in connection with the appended drawings
is intended as a description of various configurations. The detailed description includes
specific details for the purpose of providing a thorough understanding of the various
concepts. However, it will be apparent to those skilled in the art that these concepts
may be practiced without these specific details. Several aspects of the apparatus
and methods are described by various blocks, functional units, modules, components,
circuits, steps, processes, algorithms, etc. (collectively referred to as "elements").
Depending upon the particular application, design constraints or other reasons, these
elements may be implemented using electronic hardware, computer program, or any combination
thereof.
[0029] The electronic hardware may include microprocessors, microcontrollers, digital signal
processors (DSPs), field programmable gate arrays (FPGAs), programmable logic devices
(PLDs), gated logic, discrete hardware circuits, and other suitable hardware configured
to perform the various functionality described throughout this disclosure. Computer
program shall be construed broadly to mean instructions, instruction sets, code, code
segments, program code, programs, subprograms, software modules, applications, software
applications, software packages, routines, subroutines, objects, executables, threads
of execution, procedures, functions, etc., whether referred to as software, firmware,
middleware, microcode, hardware description language, or otherwise.
[0030] A hearing device (or hearing instrument) may include a hearing aid that is adapted
to improve or augment the hearing capability of a user by receiving an acoustic signal
from a user's surroundings, generating a corresponding audio signal, possibly modifying
the audio signal and providing the possibly modified audio signal as an audible signal
to at least one of the user's ears. The "hearing device" may further refer to a device
such as an earphone or a headset adapted to receive an audio signal electronically,
possibly modifying the audio signal and providing the possibly modified audio signal
as an audible signal to at least one of the user's ears. Such audible signals may
be provided in the form of an acoustic signal radiated into the user's outer ear,
or an acoustic signal transferred as mechanical vibrations to the user's inner ears
through bone structure of the user's head and/or through parts of the middle ear of
the user, or electric signals transferred directly or indirectly to the cochlear nerve
and/or to the auditory cortex of the user.
[0031] The hearing device is adapted to be worn in any known way. This may include i) arranging
a unit of the hearing device behind the ear with a tube leading air-borne acoustic
signals into the ear canal or with a receiver/ loudspeaker arranged close to or in
the ear canal such as in a Behind-the-Ear type hearing aid, and/or ii) arranging the
hearing device entirely or partly in the pinna and/or in the ear canal of the user
such as in an In-the-Ear type hearing aid or In-the-Canal/Completely-in-the-Canal
type hearing aid, or iii) arranging a unit of the hearing device attached to a fixture
implanted into the skull bone such as in Bone Anchored Hearing Aid or Cochlear Implant,
or iv) arranging a unit of the hearing device as an entirely or partly implanted unit
such as in Bone Anchored Hearing Aid or Cochlear Implant.
[0032] A "hearing system" refers to a system comprising one or two hearing devices, and
a "binaural hearing system" refers to a system comprising two hearing devices where
the devices are adapted to cooperatively provide audible signals to both of the user's
ears. The hearing system or binaural hearing system may further include auxiliary
device(s) that communicate(s) with at least one hearing device, the auxiliary device
affecting the operation of the hearing devices and/or benefitting from the functioning
of the hearing devices. A wired or wireless communication link between the at least
one hearing device and the auxiliary device is established that allows for exchanging
information (e.g. control and status signals, possibly audio signals) between the
at least one hearing device and the auxiliary device. Such auxiliary devices may include
at least one of remote controls, remote microphones, audio gateway devices, mobile
phones, public-address systems, car audio systems or music players, or a combination
thereof. The audio gateway is adapted to receive a multitude of audio signals such
as from an entertainment device like a TV or a music player, a telephone apparatus
like a mobile telephone or a computer, or a personal computer (PC). The audio gateway
is further adapted to select and/or combine an appropriate one of the received audio
signals (or combination of signals) for transmission to the at least one hearing device.
The remote control is adapted to control functionality and operation of the at least
one hearing device. The function of the remote control may be implemented in a SmartPhone
or other electronic device, the SmartPhone/electronic device possibly running an application
that controls functionality of the at least one hearing device.
[0033] In general, a hearing device includes i) an input unit such as a microphone for receiving
an acoustic signal from a user's surroundings and providing a corresponding input
audio signal, and/or ii) a receiving unit for electronically receiving an input audio
signal. The hearing device further includes a signal-processing unit for processing
the input audio signal and an output unit for providing an audible signal to the user
in dependence on the processed audio signal.
[0034] The input unit may include multiple input microphones, e.g. for providing direction-dependent
audio signal processing. Such a directional microphone system is adapted to enhance
a target acoustic source among a multitude of acoustic sources in the user's environment.
In one aspect, the directional system is adapted to detect (such as adaptively detect)
from which direction a particular part of the microphone signal originates. This may
be achieved by using conventionally known methods. The signal-processing unit may
include an amplifier that is adapted to apply a frequency dependent gain to the input
audio signal. The signal processing unit may further be adapted to provide other relevant
functionality such as compression, noise reduction, etc. The output unit may include
an output transducer such as a loudspeaker/receiver for providing an air-borne acoustic
signal to the ear canal, or a vibrator for providing a structure-borne or liquid-borne
acoustic signal either transcutaneously or percutaneously. In some hearing devices,
the output unit may include one or more output electrodes for providing the electric
signals such as in a Cochlear Implant.
[0035] Reference is now made to Figure 1, which illustrates a setup device according to an embodiment
of the disclosure. In general, such a setup device 10 may comprise a receiving unit
11, a processing unit 12, an identifying unit 13, and a controlling unit 14. The receiving
unit 11 receives, from at least one image pickup device, at least one image of said
hearing instrument. The processing unit 12 processes said at least one image of said
hearing instrument. The identifying unit 13 identifies a property of said hearing
instrument based on a result of said processing unit. The controlling unit 14 controls
a setup processing for said hearing instrument based on said property of said hearing
instrument.
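By way of a non-limiting illustration, the four units of the setup device 10 may be sketched in Python as follows. The class and function names, the property fields, and the use of a caller-supplied part classifier are assumptions made purely for illustration and do not form part of the disclosure.

```python
# Illustrative sketch only: a minimal model of the setup device 10 of Figure 1,
# assuming a hypothetical part classifier supplied by the caller.
from dataclasses import dataclass
from typing import Callable, Sequence


@dataclass(frozen=True)
class InstrumentProperty:
    """A property identified from the image(s), e.g. a hardware configuration."""
    earpiece: str       # e.g. "Power Dome" (assumed example value)
    speaker_link: str   # e.g. "Speaker link, fitting level 100" (assumed)


class SetupDevice:
    def __init__(self, classify: Callable[[Sequence[bytes]], InstrumentProperty]):
        # 'classify' stands in for the machine-vision / image-recognition step
        # performed by the processing unit 12 and identifying unit 13.
        self._classify = classify

    def receive_images(self, images: Sequence[bytes]) -> list:
        # Receiving unit 11: accept one or more images from an image pickup device.
        return list(images)

    def identify_property(self, images: Sequence[bytes]) -> InstrumentProperty:
        # Processing unit 12 / identifying unit 13: derive a property of the
        # hearing instrument from the received image(s).
        return self._classify(images)

    def control_setup(self, identified: InstrumentProperty,
                      preset: InstrumentProperty) -> bool:
        # Controlling unit 14: compare the identified property with a preset
        # property and only let the setup processing proceed on a match.
        if identified != preset:
            print("Warning: identified configuration does not match the preset.")
            return False
        return True
```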
[0036] Figure 2 is a schematic diagram of a method according to an embodiment of the disclosure.
The device according to Figure 1 may perform the method of Figure 2 but is not limited
to this method. The method of Figure 2 may be performed by the device of Figure 1
but is not limited to being performed by this device.
[0037] As shown in Figure 2, in general, such method comprises an operation of receiving
(S21), from at least one image pickup device, at least one image of said hearing instrument,
an operation of processing (S22) said at least one image of said hearing instrument,
an operation of identifying (S23) a property of said hearing instrument based on a
result of said processing, and an operation of controlling (S24) a setup processing
for said hearing instrument based on said property of said hearing instrument.
[0038] Reference is now made to Figure 3, which illustrates details of a method according to an
embodiment of the disclosure.
[0039] According to the illustrated basic principle of the disclosure, one or more photos
are taken (S31) of the hearing device (instrument) with all additional parts in place,
and machine-vision techniques (image-recognition algorithms) are used to identify
(S32) all the specific additional parts of the device, which affect the gain calculations
done by the fitting software when fitting the particular type of device.
[0040] The identified parts can be entered automatically in the relevant empty sections
of the software.
[0041] Alternatively, the identified parts can be compared (S33) with information already
entered (by the fitter), and then an error message can be generated in case of a mismatch
between already entered and automatically identified data (S34). The mismatch may
indicate that information was entered incorrectly by the fitter, or that the fitter
attached an incorrect part to the device (e.g. wrong type of earpiece).
[0042] Once accepted, changed settings corresponding to the identified parts can be entered
(S35) in the relevant empty sections of the software.
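By way of a non-limiting illustration, the comparison step (S33) and the generation of an error message on a mismatch (S34) may be sketched as follows; the field names and message texts are assumptions made for illustration.

```python
# Illustrative sketch of steps S33/S34: compare automatically identified parts
# with the entries already made by the fitter and report any mismatches.
# The field names ("Fitting Level", "Earpiece") are assumed for illustration.
def find_mismatches(identified: dict, entered: dict) -> list:
    messages = []
    for field, detected_value in identified.items():
        entered_value = entered.get(field)
        if entered_value != detected_value:
            messages.append(
                f"Mismatch for '{field}': fitting software says '{entered_value}', "
                f"but '{detected_value}' was detected in the image."
            )
    return messages
```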
[0043] Reference is now made to Figure 4, which illustrates details of a method according to an
embodiment of the disclosure.
[0044] In particular, Figure 4 illustrates a specific example of a preferable/intended use
of the principles of the present disclosure.
[0045] Namely, in short, according to Figure 4, the additional parts of the Receiver-In-The-Ear
hearing device are identified and compared to existing software settings. One mismatch
is found and communicated to the fitter of the device.
[0046] In detail, in the picture shown on the left of Figure 4, a speaker link 41 and a
power dome 42 are detected, and from the detected speaker link 41 a fitting level
of "100" is derived.
[0047] When comparing the detected additional parts (and the derived properties thereof)
with respective entries in the fitting software ("Fitting Level = 100"; "Earpiece
= Bass Dome, single") shown in the middle portion of Figure 4, a mismatch in relation
to the dome is found and notified, as shown in the right of Figure 4. The notification
may provide guidance to the fitter on how to solve the mismatch and/or may prompt
confirmation to automatically solve the mismatch.
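Continuing the illustrative sketch given above in connection with Figure 3, the example values of Figure 4 would lead to a mismatch report for the dome; the exact strings below are assumptions made for illustration.

```python
# Continuing the find_mismatches sketch with the example values of Figure 4
# (the exact strings are assumptions made for illustration).
identified = {"Fitting Level": "100", "Earpiece": "Power Dome"}
entered = {"Fitting Level": "100", "Earpiece": "Bass Dome, single"}

for message in find_mismatches(identified, entered):
    print(message)
# -> Mismatch for 'Earpiece': fitting software says 'Bass Dome, single',
#    but 'Power Dome' was detected in the image.
```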
[0048] The image recognition algorithm is preferably able to recognize all possible additional
parts of the hearing devices, which are to be fitted with the fitting software. To
maximize the likelihood of correct recognition, some additional visual cues (unique
marks) may be added to different parts, which have (almost) the same shape and color
(e.g. some types of earpieces). These cues could, for example, be letters or symbols
printed on the parts, possibly in specific colors.
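By way of a non-limiting illustration, such a type-unique mark, once read from the image (e.g. by an optical character recognition step), may be mapped to a part type by a simple lookup; the mark-to-part table below is invented for illustration and does not reflect actual product markings.

```python
# Illustrative sketch only: map a recognized mark (printed letter plus colour)
# to a part type. The table is invented for illustration.
MARK_TO_PART = {
    ("B", "blue"): "Bass Dome, single",
    ("P", "red"): "Power Dome",
}


def part_from_mark(letter: str, colour: str) -> str:
    return MARK_TO_PART.get((letter, colour), "unknown part")
```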
[0049] Besides fitting of hearing devices/instruments, the principles of the present disclosure
can also be used for setup processing in relation to earphones and headsets, in
particular, in-ear types with different earpiece options.
[0050] The picture of the hearing device is taken with a camera, which allows easy and instantaneous
transfer of the picture to the computer (in general, setup device) hosting the fitting
software and the machine-vision software (which could be an integrated part of the
fitting software).
[0051] The camera could, for example, be a smartphone camera (used via a dedicated app,
which automatically connects to the fitting software and transfers the picture), it
could be a personal computer (PC) webcam (controlled by the fitting software), or
it could be a camera integrated in another piece of equipment, e.g. real-ear measurement
(REM) equipment or hearing-instrument testing (HIT) equipment.
[0052] According to a concept of the present disclosure, an automatized verification of
the position and/or configuration of a hearing instrument is performed based on a
use of machine-vision techniques (e.g. image recognition) applied to one or more
pictures taken with a camera (or multiple cameras) built into the measurement equipment.
[0053] Reference is now made to Figure 5, which illustrates a test box, i.e. an example of such
measurement equipment, according to an embodiment of the disclosure.
[0054] The test box 51 may comprise a compartment or bay 52, which can be closed or covered
by a lid or cover 54.
[0055] For measuring, the hearing instrument 53 may be placed within the compartment or
bay 52.
[0056] The test box may comprise a speaker 55 and/or microphone. The speaker 55 and/or microphone
may be arranged in the lid/cover 54 so as to be directed toward the inside of the
compartment/bay 52 when closed.
[0057] The test box may further comprise a camera 56. The camera 56 may be arranged in the
lid/cover 54 so as to be directed toward the inside of the compartment/bay 52 when
closed as well. The camera 56 may further be arranged in the lid/cover 54 so as to
be directed toward a wearer of a hearing instrument when the lid/cover 54 is opened.
In such case, the same camera may be used for REM as well as HIT processing.
[0058] Namely, in a measurement scenario, measurement equipment (e.g. test box 51) may have
the camera built into the movable/hinged lid 54 as illustrated. When the lid is open
(e.g. for REM), the camera 56 is pointing at the person wearing the hearing instrument.
When the lid is closed (for HIT), the camera is pointing at the hearing instrument
positioned in the box.
[0059] The camera 56 is however not limited to the position shown in Figure 5.
[0060] Further, the test box is not limited to having (only) one camera. In other words,
systems with multiple cameras are also possible.
[0061] The test box may further comprise a light element 57 arranged such that the light
element is able to emit light (at least) into the direction in which the camera operates
to pick up an image.
[0062] The measurement equipment is not limited to the test box 51 illustrated in Figure
5.
[0063] According to this concept of the present disclosure, a specified state (e.g. the
correct position/orientation of a hearing instrument 53 within a test box 51) is compared
with a detected state (as described by image recognition or another type of processing
of a picture taken by the (built-in) camera 56).
[0064] The camera 56 and associated machine-vision software (incl. image-recognition algorithms)
may according to the concept of the present disclosure be used for monitoring the
position of the hearing instrument 53 (in the test box 51) when a coupler measurement
is performed. This measurement is performed with the lid 54 closed and, assuming the
use of a standard camera, provision of sufficient additional light by the light element 57 (to obtain a
photo of the required quality) may therefore preferably be part of the system (test
box).
[0065] Figure 6 illustrates a scenario of a verification process according to an embodiment
of the disclosure, namely the imaging scenario when a coupler measurement is performed.
[0066] In particular, Figure 6 shows images of an exemplary correct position (on the
left of Figure 6) and of an exemplary incorrect position (on the right of that Figure)
of a BTE hearing instrument 61 during coupler measurement (using a measurement coupler
62).
[0067] The incorrect (displaced) position of the hearing instrument 61, e.g. in relation
to a reference microphone 63, can be detected in the image and can be interpreted to
give rise to a warning in the test software.
[0068] The camera 56 and associated machine-vision software (incl. image-recognition algorithms)
may according to the concept of the present disclosure further be used for monitoring
the position of the REM probe microphone during a calibration measurement, where the
probe microphone is to be held in a certain position (distance and orientation) relative
to the loudspeaker 55 (in the lid).
[0069] The camera 56 and associated machine-vision software (incl. image-recognition algorithms)
may according to the concept of the present disclosure further be used for monitoring
the distance from the loudspeaker 55 to the test subject, and the orientation of the
test subject (who is supposed to face the loudspeaker 55), when a REM measurement
is performed.
[0070] The camera 56 and associated machine-vision software (incl. image-recognition algorithms)
may according to the concept of the present disclosure further be used for monitoring
the configuration of the hearing instrument, e.g., verifying that the actual earpiece
(dome) 42 and speaker link 41 of a RITE hearing instrument are identical to those indicated
in the fitting software, see Figure 4.
[0071] The procedures involving estimates of distance and orientation (in connection with
REM) may be optimized by including some easily visually recognizable landmarks on
the probe microphones or on a headband which holds the microphones in place.
[0072] The verification process - taking a picture, image processing, and providing feedback
to the tester - is preferably built into the relevant software (for HIT, REM, or hearing-instrument
fitting), such that the verification process runs automatically, either initiated
by the user as a separate task or integrated as the first step of the actual measurement.
[0073] The image-processing algorithms are preferably based on a definition of the ideal
state (e.g. the ideal position of the hearing instrument in the test box) with an
accompanying range of acceptable variation.
[0074] If the acceptable range is exceeded, which would indicate a violation of the specification,
a warning may be given to the user of the equipment, e.g. "The hearing instrument
is placed incorrectly in the test box. Please change position to mimic the recommended
position shown in this picture [...]", where a picture corresponding to the image
on the left of Figure 6 may be displayed.
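By way of a non-limiting illustration, the comparison of the detected state with the ideal state and its accompanying range of acceptable variation may be sketched as follows. The assumed inputs (an estimated position and orientation of the hearing instrument in the test box) and all numeric tolerances are placeholders invented for illustration.

```python
# Illustrative sketch only: "ideal state plus acceptable variation" check,
# assuming the image processing has already produced an estimated position
# (x, y in millimetres) and orientation (degrees) of the hearing instrument
# in the test box. All numeric tolerances are invented placeholders.
from dataclasses import dataclass
from typing import Optional


@dataclass
class IdealState:
    x_mm: float
    y_mm: float
    angle_deg: float
    pos_tolerance_mm: float = 5.0      # assumed acceptable positional deviation
    angle_tolerance_deg: float = 10.0  # assumed acceptable angular deviation


def verify_placement(x_mm: float, y_mm: float, angle_deg: float,
                     ideal: IdealState) -> Optional[str]:
    """Return a warning text if the detected state violates the ideal state."""
    pos_dev = ((x_mm - ideal.x_mm) ** 2 + (y_mm - ideal.y_mm) ** 2) ** 0.5
    angle_dev = abs(angle_deg - ideal.angle_deg)
    if pos_dev > ideal.pos_tolerance_mm or angle_dev > ideal.angle_tolerance_deg:
        return ("The hearing instrument is placed incorrectly in the test box. "
                "Please change position to mimic the recommended position.")
    return None
```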
[0075] The position of the hearing instrument 61 may for example be a location in relation
to a loudspeaker of REM equipment, or a location in relation to a user's ear, or
a location in relation to (a marker of) a test box of HIT equipment.
[0076] In the case of HIT, by simply presenting the images (photos) taken inside the test box
51 on a screen, the resulting system may be used without automatically outputting warnings
(as described above), but may instead give the tester the option to verify (by inspection)
that the hearing instrument 53 did not change position when the lid 54 was closed,
such a change being a likely cause of an error in relation to the hearing-instrument position.
[0077] In the test box scenario described in relation to Figure 5, such screen (and a related
input unit) may be provided on or (communicatively) connected to the test box 51.
[0078] In an aspect, the functions may be stored on or encoded as one or more instructions
or code on a tangible computer-readable medium. The computer readable medium includes
computer storage media adapted to store a computer program comprising program code,
which, when run on a data processing system, causes the data processing system to perform
at least some (such as a majority or all) of the steps of the method described above
and in the claims.
[0079] By way of example, and not limitation, such computer-readable media can comprise
RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other
magnetic storage devices, or any other medium that can be used to carry or store desired
program code in the form of instructions or data structures and that can be accessed
by a computer. Disk and disc, as used herein, includes compact disc (CD), laser disc,
optical disc, digital versatile disc (DVD), floppy disk, and Blu-ray disc where disks
usually reproduce data magnetically, while discs reproduce data optically with lasers.
Combinations of the above should also be included within the scope of computer-readable
media. In addition to being stored on a tangible medium, the computer program can
also be transmitted via a transmission medium such as a wired or wireless link or
a network, e.g. the Internet, and loaded into a data processing system for being executed
at a location different from that of the tangible medium.
[0080] In an aspect, a data processing system is provided comprising a processor adapted
to execute the computer program for causing the processor to perform at least some
(such as a majority or all) of the steps of the method described above and in the
claims.
[0081] It is intended that the structural features of the devices described above, either
in the detailed description and/or in the claims, may be combined with steps of the
method, when appropriately substituted by a corresponding process.
[0082] As used herein, the singular forms "a," "an," and "the" are intended to include the plural
forms as well (i.e. to have the meaning "at least one"), unless expressly stated otherwise.
It will be further understood that the terms "includes," "comprises," "including,"
and/or "comprising," when used in this specification, specify the presence of stated
features, integers, steps, operations, elements, and/or components, but do not preclude
the presence or addition of one or more other features, integers, steps, operations,
elements, components, and/or groups thereof. It will also be understood that when
an element is referred to as being "connected" or "coupled" to another element, it
can be directly connected or coupled to the other element, but intervening elements
may also be present, unless expressly stated otherwise. Furthermore, "connected" or
"coupled" as used herein may include wirelessly connected or coupled. As used herein,
the term "and/or" includes any and all combinations of one or more of the associated
listed items. The steps of any disclosed method are not limited to the exact order
stated herein, unless expressly stated otherwise.
[0083] It should be appreciated that reference throughout this specification to "one embodiment"
or "an embodiment" or "an aspect" or "features" included as "may" means that a particular
feature, structure or characteristic described in connection with the embodiment is
included in at least one embodiment of the disclosure. Furthermore, the particular
features, structures or characteristics may be combined as suitable in one or more
embodiments of the disclosure. The previous description is provided to enable any
person skilled in the art to practice the various aspects described herein. Various
modifications to these aspects will be readily apparent to those skilled in the art,
and the generic principles defined herein may be applied to other aspects.
[0084] The claims are not intended to be limited to the aspects shown herein, but are to
be accorded the full scope consistent with the language of the claims, wherein reference
to an element in the singular is not intended to mean "one and only one" unless specifically
so stated, but rather "one or more." Unless specifically stated otherwise, the term
"some" refers to one or more.
[0085] Accordingly, the scope should be judged in terms of the claims that follow.
1. A setup device for a hearing instrument, comprising
a receiving unit configured to receive, from at least one image pickup device, at
least one image of said hearing instrument,
a processing unit configured to process said at least one image of said hearing instrument,
an identifying unit configured to identify a property of said hearing instrument based
on a result of said processing unit, and
a controlling unit configured to control a setup processing for said hearing instrument
based on said property of said hearing instrument.
2. The setup device according to claim 1, wherein
said property of said hearing instrument comprises a hardware configuration of said
hearing instrument.
3. The setup device according to claim 2, wherein
said hardware configuration of said hearing instrument is a type of an exchangeable
part of said hearing instrument.
4. The setup device according to claim 3, wherein
said exchangeable part of said hearing instrument bears a type-unique mark.
5. The setup device according to claim 1, wherein
said property of said hearing instrument comprises a location and/or an orientation
of said hearing instrument.
6. The setup device according to any of claims 1 to 5, wherein
said processing unit is configured to process said at least one image of said hearing
instrument utilizing at least one of a machine-vision technique and an image recognition
algorithm.
7. The setup device according to any of claims 1 to 6, wherein
said controlling unit is configured to compare said property of said hearing instrument
with a preset property.
8. The setup device according to claim 7, wherein
said controlling unit is configured to halt or interrupt said setup processing for
said hearing instrument, if said property of said hearing instrument does not match
said preset property, and
said controlling unit is configured to start or resume said setup processing for said
hearing instrument, if said property of said hearing instrument matches said preset
property.
9. The setup device according to claim 7 or 8, wherein
said controlling unit is configured to output a warning, if said property of said
hearing instrument does not match said preset property.
10. The setup device according to any of claims 7 to 9, wherein
said preset property comprises a target state and a range of an acceptable variation
from said target state.
11. The setup device according to any of claims 1 to 6, wherein
said controlling unit is configured to enter said property of said hearing instrument
into a reserved memory or program section.
12. The setup device according to any of claims 1 to 11, wherein
said hearing instrument is one of a hearing aid device, an earphone, and a headset.
13. The setup device according to any of claims 1 to 12, wherein
said setup device comprises said at least one image pickup device, and/or
said at least one image pickup device is one of a web camera, a smartphone camera,
and a camera integrated with said setup device.
14. A setup method for a hearing instrument, comprising
receiving, from at least one image pickup device, at least one image of said hearing
instrument,
processing said at least one image of said hearing instrument,
identifying a property of said hearing instrument based on a result of said processing, and
controlling a setup processing for said hearing instrument based on said property
of said hearing instrument.
15. A computer program product comprising computer-executable program code which,
when the program is run on a computer, is configured to cause the computer to carry
out the method according to claim 14.