FIELD
[0001] The present invention relates to a medical apparatus having a communication device
for allowing communication between a patient and an operator, and a program stored
in the medical apparatus.
BACKGROUND
[0002] An X-ray CT apparatus is known as a medical apparatus for non-invasively imaging
the inside of a patient. Because of its ability to image a body part to be imaged
in a short time, the X-ray CT apparatus is widely used in medical institutions, such
as hospitals.
[0003] A CT apparatus has a gantry and a table as its main components. The gantry and table
are disposed in a scan room. The gantry is provided with a rotating section on which
an X-ray tube and a detector are mounted. In imaging a patient, a scan is performed
while rotating the rotating section. The CT apparatus also has an operator console
for operating the gantry and table, the operator console being disposed in an operation
room provided separately from the scan room. The operator can control the gantry and
table by operating the operator console disposed in the operation room.
[0004] The CT apparatus moreover has a communication device allowing the operator in the
operation room to communicate with the patient in the scan room. The communication
device has a microphone for receiving an operator's voice, and a speaker for transmitting
the voice received by the microphone to the patient in the scan room. One example
of the communication device is disclosed in
Japanese Patent Application KOKAI No. 2000-070256.
[0005] When the operator utters a voice, the microphone receives the operator's voice, and
the operator's voice is then output from the speaker. Accordingly, the patient can
hear the operator's voice while in the scan room.
[0006] However, in the case that the communication device is set to an "OFF" mode in which
no communication is made, or in the case that some failure occurs in the communication
device, the operator's voice is not output from the speaker in the scan room even when
the operator in the operation room talks into the microphone about, for example,
requirements for an examination. Accordingly, when talking into the microphone and
receiving no response from the patient, the operator may sometimes be worried that
his/her own voice is not being output from the speaker in the scan room. At other times,
the operator may not be aware that his/her own voice is not being output from the speaker
in the scan room.
[0007] Accordingly, it is desirable to enable the operator, when talking into a microphone,
to recognize whether or not his/her own voice is being output from the speaker in the
scan room.
BRIEF SUMMARY
[0008] The present invention, in its first aspect, is a medical apparatus having:
a first microphone installed in a first room for receiving a voice of an operator;
a second microphone installed in a second room for receiving a voice of a patient;
a first speaker installed in said first room for outputting the voice of said patient
received by said second microphone;
a second speaker installed in said second room for outputting the voice of said operator
received by said first microphone; and
means for informing, in a case that said second microphone has received the voice
of said operator output from said second speaker, said operator that the voice of
said operator is being output from said second speaker.
[0009] The present invention, in its second aspect, is a program stored in a medical apparatus,
said apparatus having: a first microphone installed in a first room for receiving
a voice of an operator; a second microphone installed in a second room for receiving
a voice of a patient; a first speaker installed in said first room for outputting
the voice of said patient received by said second microphone; a second speaker installed
in said second room for outputting the voice of said operator received by said first
microphone; and means for informing, in a case that said second microphone has received
the voice of said operator output from said second speaker, said operator that the
voice of said operator is being output from said second speaker, said program being
for causing one or more processors to execute:
processing of receiving a first digital signal containing sound data representing
a sound that said first microphone has received, and a second digital signal containing
sound data representing a sound that said second microphone has received, generating
from said second digital signal a third digital signal representing signal components
corresponding to noise, and generating a fourth digital signal containing sound data
representing the voice of said operator by subtracting said third digital signal from
said second digital signal; and
control processing of controlling said means for informing said operator based on
said fourth digital signal.
[0010] The present invention, in its third aspect, is a non-transitory, computer-readable
recording medium provided in a medical apparatus, said apparatus having: a first microphone
installed in a first room for receiving a voice of an operator; a second microphone
installed in a second room for receiving a voice of a patient; a first speaker installed
in said first room for outputting the voice of said patient received by said second
microphone; a second speaker installed in said second room for outputting the voice
of said operator received by said first microphone; and means for informing, in a
case that said second microphone has received the voice of said operator output from
said second speaker, said operator that the voice of said operator is being output
from said second speaker,
in said recording medium are stored one or more instructions executable by one or
more processors, said one or more instructions, when executed by said one or more
processors, causing said one or more processors to execute an operation comprising
the acts of:
receiving a first digital signal containing sound data representing a sound that said
first microphone has received;
receiving a second digital signal containing sound data representing a sound that
said second microphone has received;
generating from said second digital signal a third digital signal representing signal
components corresponding to noise;
generating a fourth digital signal containing sound data representing the voice of
said operator by subtracting said third digital signal from said second digital signal;
and
controlling said means for informing said operator based on said fourth digital signal.
[0011] The second speaker outputs a voice of the operator in the first room. When the operator's
voice is output from the second speaker, the second microphone receives the voice
output from the second speaker. The medical apparatus in the present invention has
means for informing, in the case that said second microphone has received the voice
of said operator output from said second speaker, said operator that the voice of
said operator is being output from said second speaker. Accordingly, when the operator's
voice is output from the second speaker, the operator can recognize, via said means for
informing, that the voice is being output from the second speaker while remaining in the
first room. Thus, the operator is freed from the worry that his/her own voice may not be
heard by the patient, and can therefore concentrate on his/her work to smoothly perform
a scan on the patient.
BRIEF DESCRIPTION OF THE DRAWINGS
[0012]
FIG. 1 is an external view of an X-ray CT apparatus in one embodiment of the present
invention.
FIG. 2 is a diagram schematically showing a hardware configuration of an X-ray CT
apparatus 1 in accordance with a first embodiment.
FIG. 3 is a circuit diagram of a communication device 500.
FIG. 4 is a perspective view of an appearance of an intercom module 4.
FIG. 5 is a diagram showing the intercom module 4 set to a first communication mode.
FIG. 6 is a diagram showing the intercom module 4 set to a second communication mode.
FIG. 7 is an explanatory diagram for the first communication mode.
FIG. 8 is an explanatory diagram for the second communication mode.
FIG. 9 is a diagram briefly showing an example of a basic configuration of a function
for informing an operator 81 whether or not a voice of his/her own is being output
from a speaker 5 in a scan room R1.
FIG. 10 is an explanatory diagram of a light-emitting section 31.
FIG. 11 is an explanatory diagram of the light-emitting section 31 in t0≤t<t1.
FIG. 12 is an explanatory diagram of the light-emitting section 31 in t1≤t<t2.
FIG. 13 is an explanatory diagram of the light-emitting section 31 in t2≤t<t3.
FIG. 14 is an explanatory diagram of the light-emitting section 31 in t3≤t<t4.
FIG. 15 is an explanatory diagram of the light-emitting section 31 in t4≤t<t5.
FIG. 16 is an explanatory diagram of the light-emitting section 31 in t5≤t<t7.
FIG. 17 is an explanatory diagram of a filter block.
FIG. 18 is an explanatory diagram for an operation of the intercom module 4 in the
first communication mode.
FIG. 19 is an explanatory diagram for an operation of the intercom module 4 in the
second communication mode.
FIG. 20 is an explanatory diagram for a case in which the operator 81 is informed
that his/her voice is being output from the speaker 5 by a display section 33 on a
gantry 100.
FIG. 21 is an enlarged view of the display section 33 on the gantry 100.
FIG. 22 is a diagram showing a case in which the operator is informed that the speaker
5 has output the voice of the operator by a display device 302.
DETAILED DESCRIPTION
[0013] Now embodiments for practicing the invention will be described hereinbelow; however,
the present invention is not limited thereto.
[0014] FIG. 1 is an external view of an X-ray CT apparatus in one embodiment of the present
invention.
[0015] As shown in FIG. 1, an X-ray CT apparatus 1 comprises a gantry 100, a table 200,
and an operator console 300.
[0016] The gantry 100 and table 200 are installed in a scan room R1. The operator console
300 is installed in an operation room R2 separate from the scan room R1. In FIG. 1,
ceilings and some sidewalls of the scan room R1 and operation room R2 are omitted
from the drawing for convenience of explanation.
[0017] The scan room R1 and operation room R2 are separated from each other by a wall 101.
The wall 101 is provided with a window 102 allowing an operator 81 to view the scan
room R1 from the operation room R2. The wall 101 is also provided with a door 103
for allowing the operator 81 to move between the scan room R1 and operation room R2.
[0018] The wall 101 and window 102 lying between the scan room R1 and operation room R2
can have any shape, and various materials may be used for the wall and window, insofar
as sufficient safety of the human body can be ensured.
[0019] The gantry 100 is provided on its front surface with a display section 33. The display
section 33 is capable of displaying patient information, information helpful for preparation
for a scan, and/or the like. Accordingly, the operator can smoothly prepare for a
scan on a patient 80 while checking over the display on the display section 33.
[0020] FIG. 2 is a diagram schematically showing a hardware configuration of the X-ray CT
apparatus 1 in accordance with a first embodiment.
[0021] The gantry 100 has a bore 11 for forming space through which the patient 80 can be
moved.
[0022] The gantry 100 also has an X-ray tube 12, an aperture 13, a collimator device 14,
an X-ray detector 15, a data acquisition system 16, a rotating section 17, a high-voltage
power source 18, an aperture driving apparatus 19, a rotation driving apparatus 20,
a GT (Gantry Table) control section 21, etc.
[0023] The rotating section 17 is constructed to be rotatable around the bore 11.
[0024] The rotating section 17 has the X-ray tube 12, aperture 13, collimator device 14,
X-ray detector 15, and data acquisition system 16 mounted thereon.
[0025] The X-ray tube 12 and X-ray detector 15 are disposed to face each other across the
bore 11 of the gantry 100.
[0026] The aperture 13 is disposed between the X-ray tube 12 and bore 11. The aperture 13
shapes X-rays emitted from an X-ray focus of the X-ray tube 12 toward the X-ray detector
15 into a fan beam or a cone beam.
[0027] The collimator device 14 is disposed between the bore 11 and X-ray detector 15. The
collimator device 14 removes scatter rays entering the X-ray detector 15.
[0028] The X-ray detector 15 has a plurality of X-ray detector elements two-dimensionally
arranged in directions of the extent and thickness of the fan-shaped X-ray beam emitted
from the X-ray tube 12. Each X-ray detector element detects X-rays passing through
the patient 80, and outputs an electrical signal depending upon the intensity thereof.
[0029] The data acquisition system 16 receives electrical signals output from the X-ray
detector elements in the X-ray detector 15, and converts them into X-ray data for
acquisition.
[0030] The table 200 has a cradle 201 and a driving apparatus 202. The patient 80 lies on
the cradle 201. The driving apparatus 202 drives the table 200 and cradle 201 so that
the cradle 201 can move in y- and z-directions.
[0031] The high-voltage power source 18 supplies high voltage and electric current to the
X-ray tube 12.
[0032] The aperture driving apparatus 19 drives the aperture 13 to modify the shape of its
opening.
[0033] The rotation driving apparatus 20 rotationally drives the rotating section 17.
[0034] The GT control section 21 executes processing for controlling several apparatuses/devices
and several sections in the gantry 100, the driving apparatus 202 for the table 200,
etc. The GT control section 21 also supplies to a light-emission control section 32
a signal carrying thereon information necessary for controlling a light-emitting section
31. The light-emission control section 32 and light-emitting section 31 will be discussed
later.
[0035] The operator console 300 accepts several kinds of operations from the operator. The
operator console 300 has an input device 301, a display device 302, a storage device
303, a processing device 304, and an intercom module 4.
[0036] The input device 301 may comprise buttons and a keyboard for accepting an input of
a command and information from the operator, and a pointing device, such as a mouse.
The display device 302 is an LCD (Liquid Crystal Display), an organic EL (Electro-Luminescence)
display, or the like.
[0037] The storage device 303 may comprise a HDD (Hard Disk Drive), semiconductor memory
such as RAM (Random Access Memory) and ROM (Read Only Memory), etc. The operator console
300 may have all of the HDD, RAM, and ROM as the storage device 303. The storage device
303 may also comprise a portable storage medium 305, such as a CD (Compact Disk) or
a DVD (Digital Versatile Disk).
[0038] The processing device 304 comprises a processor for executing several kinds of processing.
[0039] The intercom module 4 is used when the operator 81 communicates with the patient
80. The intercom module 4 will be described in detail later.
[0040] The CT apparatus 1 moreover has a communication device 500 for allowing the operator
81 in the operation room R2 and the patient 80 in the scan room R1 to communicate
with each other.
[0041] The communication device 500 has a patient microphone 2, an amplifier board 3, an
intercom module 4, and a speaker 5. Now the communication device 500 will be described
with reference to FIG. 3, as well as FIG. 2.
[0042] FIG. 3 is a circuit diagram of the communication device 500.
[0043] In FIG. 3, the scan room R1 and operation room R2 are each designated by dashed lines.
[0044] In the scan room R1 are disposed the patient microphone 2, speaker 5, and amplifier
board 3 of the communication device 500.
[0045] The patient microphone 2 is for receiving a voice of the patient 80. The patient
microphone 2 can be installed in the proximity of the bore 11 of the gantry 100, as
shown in FIG. 1. The patient microphone 2 is, however, not necessarily installed in
the gantry 100, and may be installed at a different place from the gantry 100 (e.g.,
in the table 200, or on the wall or ceiling of the scan room R1) insofar as it can
receive the voice of the patient 80.
[0046] The speaker 5 is for outputting a voice of the operator 81 in the operation room
R2. The speaker 5 may be installed under the cradle 201 of the table 200, as shown
in FIG. 1. The speaker 5 is, however, not necessarily installed in the table 200,
and may be installed at a different place from the table 200 (e.g., in the gantry
100, or on the wall or ceiling of the scan room R1) insofar as the patient 80 can
hear the voice from the speaker 5.
[0047] Returning to FIG. 3, the description will be continued.
[0048] The amplifier board 3 amplifies a signal of a sound received by the patient microphone
2. The amplifier board 3 may be installed in the inside of the gantry 100.
[0049] On the other hand, in the operation room R2 is disposed the intercom module 4 of
the communication device 500.
[0050] As shown in FIG. 3, the intercom module 4 has an operator microphone 41, a preamplifier
42, an ADC (Analog-to-Digital Converter) 43, a DAC (Digital-to-Analog Converter) 44,
a power amplifier 45, a buffer amplifier 46, an ADC 47, a DAC 48, a power amplifier
49, a speaker 50, a microphone switch 51, and a switch section 52. While the intercom
module 4 comprises circuit parts, several kinds of switches, and several kinds of
buttons in addition to the components 41 to 52, they are omitted in the drawings because
they are not needed for the description of the present invention.
[0051] The switch section 52 has two switching elements 52a and 52b.
[0052] The switching element 52a is provided between the ADC 43 and DAC 44. When the switching
element 52a is set to "ON," the switching element 52a electrically connects the ADC
43 and the DAC 44 together, and when the switching element 52a is set to "OFF," the
ADC 43 is electrically disconnected from the DAC 44.
[0053] On the other hand, the switching element 52b is provided between the ADC 47 and DAC
48. When the switching element 52b is set to "ON," the switching element 52b electrically
connects the ADC 47 and the DAC 48 together, and when the switching element 52b is
set to "OFF," the ADC 47 is electrically disconnected from the DAC 48.
[0054] FIG. 4 is a perspective view of an appearance of the intercom module 4. The intercom
module 4 has a generally rectangular parallelepiped-shaped housing 4a. The housing
4a has the components 41 to 51 (see FIG. 3) of the intercom module 4 incorporated
therein. In FIG. 4, three of the components 41 to 51, i.e., the operator microphone
41, speaker 50, and microphone switch 51, are shown. The microphone switch 51 is provided
on an upper surface 4b of the housing 4a. The microphone switch 51 is a switch for
changing the communication mode of the intercom module 4. Here, the microphone switch
51 is constructed to allow the operator 81 to press it, and the communication mode
of the intercom module 4 can be changed by the operator 81 pressing the microphone
switch 51 as needed. Now the communication mode of the intercom module 4 will be described
hereinbelow.
[0055] When the operator 81 has pressed the microphone switch 51, the mode is set to a first
communication mode in which the voice of the operator 81 can be transmitted to the
patient 80. FIG. 5 is a diagram showing the intercom module 4 set to the first communication
mode. When the microphone switch 51 is pressed, the switching element 52a in the switch
section 52 is set to "ON," and thus, the operator microphone 41 and speaker 5 are
electrically connected with each other. Accordingly, by continuously pressing the
microphone switch 51, the operator 81 can transmit the voice of his/her own to the
patient 80 while the microphone switch 51 is pressed.
[0056] While the microphone switch 51 is pressed, the switching element 52b is set to "OFF."
Accordingly, the patient microphone 2 is electrically disconnected from the speaker
50, and thus, no sound is output from the speaker 50 in the first communication mode.
[0057] On the other hand, when the operator 81 is not pressing the microphone switch 51,
the mode is set to a second communication mode in which the voice of the patient 80
can be transmitted to the operator 81. FIG. 6 is a diagram showing the intercom module
4 set to the second communication mode. In the present embodiment, the switching element
52b is in an "ON" state when the operator 81 is not pressing the microphone switch 51.
Accordingly, in the second communication mode, the patient microphone 2 and speaker
50 are in an electrically connected state. Thus, when the patient 80 utters a voice,
the voice of the patient 80 is received by the patient microphone 2 and is output
to the speaker 50, so that the operator 81 can hear the voice of the patient 80 while
in the operation room R2.
[0058] The switching element 52a is in an "OFF" state when the operator 81 is not pressing
the microphone switch 51. Accordingly, the operator microphone 41 is in a state electrically
disconnected from the speaker 5. Thus, in the second communication mode, no sound is output from
the speaker 5.
[0059] As described above, the intercom module 4 has two communication modes, and the operator
81 can change the communication mode by the microphone switch 51 to thereby allow
communication between the operator 81 and patient 80.
[0060] Now an operation of the communication device 500 in the first communication mode
and that in the second communication mode will be described one by one hereinbelow.
(On the First Communication Mode)
[0061] FIG. 7 is an explanatory diagram for the first communication mode.
[0062] To set the intercom module 4 to the first communication mode, the operator 81 continuously
presses the microphone switch 51. While the operator 81 is pressing the microphone
switch 51, the switching element 52a is set to "ON" and the switching element 52b
is set to "OFF," as shown in FIG. 7. Accordingly, in the first communication mode,
the patient microphone 2 is set to a state electrically disconnected from the speaker
50 while the operator microphone 41 is set to a state electrically connected to the
speaker 5. Thus, in the first communication mode, by the operator 81 talking into
the operator microphone 41, the patient 80 can hear the voice of the operator 81 from
the speaker 5. In the first communication mode, the intercom module 4 operates as
follows.
[0063] When the operator 81 utters a voice v1, the operator microphone 41 receives the voice
v1 of the operator 81. Upon receiving the voice v1, the operator microphone 41 outputs
an analog signal d1(t) representing the received voice v1.
[0064] The preamplifier 42 receives the analog signal d1(t) output from the operator microphone
41, and amplifies the received analog signal d1(t). The preamplifier 42 amplifies
the analog signal d1(t) from the operator microphone 41 up to an input voltage range
for the ADC 43 at the following stage.
[0065] The ADC 43 converts an analog signal d2(t) output from the preamplifier 42 into a
digital signal D(n).
[0066] Accordingly, a circuitry part constituted by the preamplifier 42 and ADC 43 operates
as a circuitry part that generates the digital signal D(n) based on the analog signal
d1(t).
[0067] The DAC 44 converts the digital signal D(n) from the ADC 43 into an analog signal
c1(t).
[0068] The power amplifier 45 receives the analog signal c1(t) from the DAC 44, amplifies
the received analog signal c1(t), and outputs the resulting signal as an analog signal
c2(t). The analog signal c2(t) is supplied to the speaker 5. Accordingly, a circuitry
part constituted by the DAC 44 and power amplifier 45 operates as a circuitry part
that generates the analog signal c2(t) to be supplied to the speaker 5 based on the
digital signal D(n).
[0069] The speaker 5 receives the analog signal c2(t) output from the power amplifier 45,
and outputs a sound corresponding to the received analog signal c2(t).
[0070] Accordingly, the patient 80 can hear the voice of the operator 81.
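As a rough, non-limiting sketch of the chain described above, the following fragment simulates how the preamplifier 42 and ADC 43 could produce the digital signal D(n) from d1(t), and how the DAC 44 and power amplifier 45 could produce the speaker drive signal c2(t). The gain values and the 16-bit resolution are assumptions chosen only for the example, not values taken from the embodiment.

#include <stdint.h>
#include <stdio.h>
#include <math.h>

/* Illustrative simulation of the first-communication-mode chain:
 * d1(t) -> preamplifier 42 -> ADC 43 -> D(n) -> DAC 44 -> power
 * amplifier 45 -> c2(t).  All numeric values are assumed. */
#define PREAMP_GAIN   20.0   /* assumed gain of preamplifier 42    */
#define POWER_GAIN     5.0   /* assumed gain of power amplifier 45 */
#define FULL_SCALE_V   1.0   /* assumed ADC/DAC full-scale voltage */

static int16_t adc43(double d2)                /* analog d2(t) -> D(n) */
{
    double clipped = fmax(-FULL_SCALE_V, fmin(FULL_SCALE_V, d2));
    return (int16_t)lrint(clipped / FULL_SCALE_V * 32767.0);
}

static double dac44(int16_t D)                 /* D(n) -> analog c1(t) */
{
    return (double)D / 32767.0 * FULL_SCALE_V;
}

int main(void)
{
    double d1 = 0.01;                          /* small microphone voltage */
    double d2 = PREAMP_GAIN * d1;              /* preamplifier 42          */
    int16_t D = adc43(d2);                     /* digital signal D(n)      */
    double c1 = dac44(D);                      /* DAC 44                   */
    double c2 = POWER_GAIN * c1;               /* power amplifier 45       */
    printf("D(n)=%d  c2(t)=%.4f V\n", D, c2);
    return 0;
}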
(On the Second Communication Mode)
[0071] FIG. 8 is an explanatory diagram for the second communication mode.
[0072] In the case that the operator 81 is not pressing the microphone switch 51, the switching
element 52b is in an "ON" state and the switching element 52a is in an "OFF" state,
as shown in FIG. 8. Accordingly, in the second communication mode, the operator microphone
41 is in a state electrically disconnected from the speaker 5 while the patient microphone
2 is in a state electrically connected to the speaker 50. Thus, in the second communication
mode, when the patient 80 utters a voice, the operator 81 can hear the voice of the
patient 80. In the second communication mode, the intercom module 4 operates as follows.
[0073] When the patient 80 utters a voice v3, the patient microphone 2 receives the voice
v3 of the patient 80. Upon receiving the voice v3, the patient microphone 2 outputs
an analog signal m1(t) representing the received voice v3.
[0074] The amplifier board 3 receives the analog signal m1(t) output from the patient microphone
2, amplifies the received analog signal m1(t), and outputs an analog signal m2(t).
The amplifier board 3 amplifies the analog signal m1(t) so that noise, if any, mixed
on a signal line can be ignored.
[0075] The buffer amplifier 46 is for performing impedance conversion. Moreover, the buffer
amplifier 46 adjusts the analog signal m2(t) received from the amplifier board 3 to
fall within a voltage range of the ADC 47 at the following stage, and outputs the
resulting signal as an analog signal m3(t).
[0076] The ADC 47 converts the analog signal m3(t) output from the buffer amplifier 46 into
a digital signal M(n).
[0077] Accordingly, a circuitry part constituted by the amplifier board 3, buffer amplifier
46, and ADC 47 operates as a circuitry part that generates the digital signal M(n)
based on the analog signal m1(t).
[0078] The DAC 48 converts the digital signal M(n) from the ADC 47 into an analog signal
f1(t).
[0079] The power amplifier 49 receives the analog signal f1(t) from the DAC 48, amplifies
the received analog signal f1(t), and outputs an analog signal f2(t).
[0080] The speaker 50 receives the analog signal f2(t) from the power amplifier 49, and
outputs a sound corresponding to the received analog signal f2(t).
[0081] Accordingly, when the patient 80 utters the voice v3, the operator 81 can hear the
voice v3 of the patient 80 via the speaker 50.
[0082] It can be seen from the explanation of FIGS. 7 and 8 that the intercom module 4 is
in the second communication mode for transmitting the voice of the patient 80 to the
operator 81 unless the operator 81 presses the microphone switch 51. Accordingly,
when the patient 80 utters something, the operator 81 can hear the voice of the patient
80 from the speaker 50 while in the operation room R2. The operator 81 may continuously
press the microphone switch 51 only when (s)he has to talk to the patient 80, whereby
(s)he can set the intercom module 4 to the first communication mode for transmitting
his/her voice to the patient 80. Having finished communicating necessary information
to the patient 80, the operator 81 gets his/her hand off from the microphone switch
51. This causes the intercom module 4 to be changed from the first communication mode
for transmitting the voice of the operator 81 to the patient 80 to the second communication
mode for transmitting the voice of the patient 80 to the operator 81, and thus, the
operator 81 can hear a response from the patient 80 via the speaker 50.
[0083] It is sometimes encountered, however, that when the operator 81 talks to the patient
80, the patient 80 does not give a prompt response. In this case, the operator 81
may be worried that the voice of the operator 81 is not output from the speaker 5
in the scan room R1 because of some problem occurring in the communication device
500. At that time, the operator 81 may talk to the patient 80 many times in order
to confirm whether or not the voice of the operator 81 is being output from the speaker
5, which may disadvantageously cause unwanted work stress to the operator 81.
[0084] Moreover, there is a risk that a voice uttered by the operator 81 is not output
from the speaker 5 in the scan room R1 due to, for example, a failure or the like in the
communication device 500, and that the operator 81 is unaware of this. In this case,
although what the operator 81 has said is not transmitted to the patient 80, the operator
81 may assume that it has been transmitted to the patient 80, and thus, the patient 80
may suffer discomfort.
[0085] Hence, the CT apparatus 1 of the present embodiment is configured so that when uttering
a voice, the operator 81 him/herself can recognize whether or not the voice of his/her
own is being output from the speaker 5 in the scan room R1. Specifically, the CT apparatus
1 has a function of, when the operator 81 utters a voice, informing the operator 81
whether or not the voice of his/her own is output from the speaker 5 in the scan room
R1. Now a basic configuration of the function will be described hereinbelow.
[0086] FIG. 9 is a diagram briefly showing an example of the basic configuration of the
function for informing the operator 81 whether or not the voice of his/her own is
being output from the speaker 5 in the scan room R1.
[0087] To describe the basic configuration of this function, in FIG. 9 are shown the GT
control section 21, light-emitting section 31, and light-emission control section
32, although not shown in FIGS. 5 to 8.
[0088] The GT control section 21 receives the digital signal M(n) output from the ADC 47.
The light-emitting section 31 is connected to the GT control section 21 through the
light-emission control section 32.
[0089] The light-emitting section 31 is provided on the front surface of the gantry 100,
as shown in FIG. 1. The operator 81 can see the light-emitting section 31 in the gantry
100 via the window 102 while in the operation room R2. In the present embodiment,
the light-emitting section 31 has a right light-emitting section 31R and a left light-emitting
section 31L. The right light-emitting section 31R is provided on the right side of
the bore 11, while the left light-emitting section 31L is provided on the left side
of the bore 11.
[0090] FIG. 10 is an explanatory diagram of the light-emitting section 31.
[0091] The light-emitting section 31 has the left light-emitting section 31L and right light-emitting
section 31R. The basic structure of the left light-emitting section 31L and that of
the right light-emitting section 31R are identical. Accordingly, the left one of the
left light-emitting section 31L and right light-emitting section 31R will be taken
here as a representative to describe the light-emitting section 31.
[0092] Referring to FIG. 10, the left light-emitting section 31L is shown in close-up. The
left light-emitting section 31L comprises a plurality of light-emitting elements.
While the left light-emitting section 31L having five light-emitting elements e1 to
e5 is exemplified here for convenience of explanation, the number of the light-emitting
elements may be less than or more than five. As the light-emitting element, an LED
may be used, for example. The right light-emitting section 31R also has the same number
of the light-emitting elements as that in the left light-emitting section 31L.
[0093] Returning to FIG. 9, the description will be continued.
[0094] The light-emission control section 32 controls the light-emitting section 31 to inform
the operator 81 whether or not the voice of the operator 81 is being output from the
speaker 5 in the scan room R1. Now a method of controlling the light-emitting section
31 will be described hereinbelow.
[0095] In FIG. 9, when the operator 81 utters the voice v1, the voice v2 of the operator
81 is output from the speaker 5, as described earlier with reference to FIG. 7.
[0096] The patient microphone 2 receives the voice v2 of the operator 81 output from the
speaker 5. Upon receiving the voice v2, the patient microphone 2 outputs the analog
signal m1(t) representing the received voice v2. The amplifier board 3 processes the
analog signal m1(t) to output the analog signal m2(t). The analog signal m2(t) is
processed by the buffer amplifier 46, and the analog signal m3(t) output from the
buffer amplifier 46 is converted into the digital signal M(n) by the ADC 47. While
the digital signal M(n) is output toward the DAC 48, it is not supplied to the DAC
48 because the switching element 52b at the previous stage of the DAC 48 is "OFF."
[0097] However, since the ADC 47 is connected to the GT control section 21, the digital
signal M(n) is supplied to the GT control section 21.
[0098] The GT control section 21 converts the digital signal M(n) into a digital signal
Q(n) compatible with a CAN (Controller Area Network) communication, and outputs the
digital signal Q(n) to the light-emission control section 32.
[0099] The light-emission control section 32 outputs a control signal L(n) to the light-emitting
section 31 based on the digital signal Q(n), for energizing the light-emitting section
31 depending upon the loudness of the voice of the operator 81.
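The embodiment does not specify the CAN frame layout used for the digital signal Q(n) or the control signal L(n). As a purely hypothetical sketch of this step, the fragment below reduces a block of samples to a single loudness byte and places it in an 8-byte, CAN-style data field; the identifier, scaling, and structure are all assumptions.

#include <stdint.h>
#include <stddef.h>
#include <stdio.h>
#include <string.h>
#include <math.h>

/* Hypothetical sketch only: the frame layout, the identifier 0x0C, and
 * the loudness scaling are assumptions; the embodiment merely states
 * that the digital signal is converted into a CAN-compatible signal Q(n). */
typedef struct {
    uint32_t id;       /* CAN identifier (assumed)   */
    uint8_t  dlc;      /* number of valid data bytes */
    uint8_t  data[8];  /* classic CAN data field     */
} q_frame_t;

static q_frame_t make_Q_frame(const int16_t *samples, size_t n)
{
    double sum = 0.0;
    for (size_t i = 0; i < n; i++)
        sum += (double)samples[i] * (double)samples[i];
    double rms = sqrt(sum / (double)(n ? n : 1));

    q_frame_t q;
    memset(&q, 0, sizeof q);
    q.id      = 0x0C;                                /* assumed ID      */
    q.dlc     = 1;
    q.data[0] = (uint8_t)fmin(255.0, rms / 128.0);   /* coarse loudness */
    return q;
}

int main(void)
{
    int16_t block[4] = { 1000, -1200, 800, -900 };
    q_frame_t q = make_Q_frame(block, 4);
    printf("loudness byte carried in Q(n): %d\n", (int)q.data[0]);
    return 0;
}

In such a scheme, the light-emission control section 32 would read the loudness byte from the received frame when generating the control signal L(n).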
[0100] Now a method of energizing the light-emitting section 31 depending upon the loudness
of the voice of the operator 81 will be described hereinbelow with reference to FIGS.
11 to 16.
[0101] First, FIG. 11 will be described.
[0102] FIG. 11 shows in its lower portion a waveform of the voice of the operator 81 between
time points t0 and t8. A vertical axis represents the time, and a horizontal axis
represents the loudness of the voice of the operator 81. Although the voice waveform
changes with time in a complex manner in practice, it is shown as a simple waveform
in FIG. 11 to facilitate understanding of the operation of the light-emitting section
31. Here, the loudness of the voice is assumed to linearly increase with time from
time point t0 to point t6 and linearly decrease with time from time point t6 to point
t8.
[0103] The light-emission control section 32 identifies which of regions w1 to w6 the loudness
of the voice at a time point t falls within. The light-emission control section 32
then determines those to be energized and those not to be energized among the light-emitting
elements (LED) e1 to e5 depending upon which region the loudness of the voice falls
within.
[0104] In the present embodiment, the light-emitting elements to be energized and those
not to be energized are determined according to (1) to (6) below (a brief sketch of this
mapping follows the list).
- (1) In the case that the loudness of the voice falls within the region w1, it is determined
that the light-emitting elements to be energized do not exist, i.e., no light-emitting
element is energized.
- (2) In the case that the loudness of the voice falls within the region w2, it is determined
that the light-emitting element e1 is energized and the other light-emitting elements
e2 to e5 are not energized.
- (3) In the case that the loudness of the voice falls within the region w3, it is determined
that the light-emitting elements e1 and e2 are energized and the other light-emitting
elements e3 to e5 are not energized.
- (4) In the case that the loudness of the voice falls within the region w4, it is determined
that the light-emitting elements e1, e2, and e3 are energized and the other light-emitting
elements e4 and e5 are not energized.
- (5) In the case that the loudness of the voice falls within the region w5, it is determined
that the light-emitting elements e1 to e4 are energized and the other light-emitting
element e5 is not energized.
- (6) In the case that the loudness of the voice falls within the region w6, it is determined
that all the light-emitting elements e1 to e5 are energized.
[0105] Now which one(s) of the light-emitting elements e1 to e5 is/are energized at each
time point t will be described hereinbelow.
(On t0≤t<t1)
[0106] In t0≤t<t1, the light-emission control section 32 decides that the loudness of the
voice falls within the region w1. Accordingly, the light-emission control section
32 outputs the control signal L(n) not to energize any of the light-emitting elements
(LEDs) e1 to e5 following (1) above. Accordingly, none of the light-emitting elements
e1 to e5 emits light in t0≤t<t1.
(On t1≤t<t2)
[0107] FIG. 12 is an explanatory diagram of the light-emitting section 31 in t1≤t<t2.
[0108] In t1≤t<t2, the light-emission control section 32 decides that the loudness of the
voice falls within the region w2. Accordingly, the light-emission control section
32 outputs the control signal L(n) to energize the light-emitting element e1 and not
to energize the other light-emitting elements e2 to e5 following (2) above. Accordingly,
only the light-emitting element e1 among the light-emitting elements e1 to e5 emits
light in t1≤t<t2.
(On t2≤t<t3)
[0109] FIG. 13 is an explanatory diagram of the light-emitting section 31 in t2≤t<t3.
[0110] In t2≤t<t3, the light-emission control section 32 decides that the loudness of the
voice falls within the region w3. Accordingly, the light-emission control section
32 outputs the control signal L(n) to energize the light-emitting elements e1 and
e2 and not to energize the other light-emitting elements e3 to e5 following (3) above.
Accordingly, the light-emitting elements e1 and e2 among the light-emitting elements
e1 to e5 emit light in t2≤t<t3.
(On t3≤t<t4)
[0111] FIG. 14 is an explanatory diagram of the light-emitting section 31 in t3≤t<t4.
[0112] In t3≤t<t4, the light-emission control section 32 decides that the loudness of the
voice falls within the region w4. Accordingly, the light-emission control section
32 outputs the control signal L(n) to energize the light-emitting elements e1, e2,
and e3 and not to energize the other light-emitting elements e4 and e5 following (4)
above. Accordingly, the light-emitting elements e1, e2, and e3 among the light-emitting
elements e1 to e5 emit light in t3≤t<t4.
(On t4≤t<t5)
[0113] FIG. 15 is an explanatory diagram of the light-emitting section 31 in t4≤t<t5.
[0114] In t4≤t<t5, the light-emission control section 32 decides that the loudness of the
voice falls within the region w5. Accordingly, the light-emission control section
32 outputs the control signal L(n) to energize the light-emitting elements e1, e2,
e3, and e4 and not to energize the other light-emitting element e5 following (5) above.
Accordingly, the light-emitting elements e1, e2, e3, and e4 among the light-emitting
elements e1 to e5 emit light in t4≤t<t5.
(On t5≤t<t7)
[0115] FIG. 16 is an explanatory diagram of the light-emitting section 31 in t5≤t<t7.
[0116] In t5≤t<t7, the light-emission control section 32 decides that the loudness of the
voice falls within the region w6. Accordingly, the light-emission control section
32 outputs the control signal L(n) to energize all the light-emitting elements e1
to e5 following (6) above. Accordingly, all the light-emitting elements e1 to e5 emit
light in t5≤t<t7.
(On t7≤t<t8)
[0117] In t7≤t<t8, the loudness of the voice falls within the region w5, as in t4≤t<t5,
and therefore, the light-emitting elements e1, e2, e3, and e4 among the light-emitting
elements e1 to e5 emit light, as shown in FIG. 15.
[0118] Accordingly, in the case that the loudness of the voice of the operator 81 exceeds
a threshold between the regions w1 and w2, the light-emitting section 31 emits light,
and therefore, the operator 81 can see the light-emitting section 31 while uttering
a voice to thereby visually confirm whether or not the voice of the operator 81 is
being output from the speaker 5 (see FIG. 1).
[0119] Moreover, when the operator 81 utters a voice, the number of energized light-emitting
elements changes depending upon the loudness of the voice. In the present embodiment,
the number of energized light-emitting elements increases as the loudness of the voice
increases. For example, in the case that the voice waveform changes as shown in FIGS.
11 to 16, the number of energized light-emitting elements is incremented by one with
time in t0≤t<t7. On the other hand, the number of energized light-emitting elements
decreases as the loudness of the voice decreases. For example, in t7≤t<t8, the number
of energized light-emitting elements is decremented by one. Accordingly, the light-emitting
section 31 functions as a level meter in which the number of energized light-emitting
elements increases or decreases depending upon the loudness of the voice of the operator
81, and thus, volume information indicating the loudness of the voice of the operator
81 can be given to the operator 81. Therefore, the operator 81 can watch the light-emitting
section 31 while uttering a voice to thereby visually recognize how loudly his/her voice
is heard by the patient 80.
[0120] Moreover, to avoid a situation in which the voice of the operator 81 is so low that
the patient 80 is unaware of the voice of the operator 81, the light-emitting section
31 is set not to emit light in the case that the loudness of the voice that the operator
81 has uttered is lower than the threshold between the regions w1 and w2. Accordingly,
in the case that the light-emitting section 31 emits no light in spite of the fact
that the operator 81 utters a voice, the operator 81 can become aware that the voice
of his/her own may be too low, and therefore, the operator 81 can immediately utter
the voice again so as to be heard by the patient 80.
[0121] Sometimes the patient 80 may utter the voice v3 (see FIG. 9) while the intercom module
4 is set to the first communication mode. In this case, since the patient 80 utters
the voice v3 when the operator 81 does not utter the voice v1, the patient microphone
2 receives the voice v3 of the patient 80. However, the light-emitting section 31
emitting light in response to the voice v3 of the patient 80 received by the patient
microphone 2 in spite of the fact that the operator 81 does not utter the voice v1,
may confuse the operator 81. Moreover, in CT imaging the patient 80, the operation
of machinery, such as the gantry 100 and/or table 200, generates operating noise,
and furthermore, another operator, if any, in the scan room R1, may utter a voice.
Again, in these cases, the light-emitting section 31 emitting light in response to
the operating noise from machinery or the voice of the operator in the scan room R1,
may confuse the operator 81.
[0122] Hence, the CT apparatus 1 in the present embodiment has a filter block for energizing
the light-emitting section 31 in response only to the voice of the operator 81 even
when a sound (e.g., the voice of the patient 80, operating noise from machinery, and/or
voice of the operator in the scan room R1) other than the voice of the operator 81
is generated while the intercom module 4 is set to the first communication mode. Now
the filter block will be described hereinbelow.
[0123] FIG. 17 is an explanatory diagram of the filter block.
[0124] The intercom module 4 has a filter block 60. The filter block 60 is constructed from
a DSP (Digital Signal Processor). The filter block 60 has an adaptive filter 61, a
subtracting section 62, and a subtracting section 63.
[0125] The adaptive filter 61 has an input section 61a connected to a node 64 between the
switching element 52a and DAC 44. The digital signal D(n) output from the ADC 43 is
input to the input section 61a of the adaptive filter 61.
[0126] The subtracting section 62 is connected to an output section 61b of the adaptive
filter 61 and to the ADC 47. The subtracting section 62 receives the digital signal
M(n) from the ADC 47 and a digital signal D'(n) from the adaptive filter 61, subtracts
the digital signal D'(n) from the digital signal M(n), and outputs a digital signal
M'(n) resulting from the subtraction.
[0127] Moreover, the adaptive filter 61 has an input section 61c for receiving the digital
signal M'(n) output from the subtracting section 62. The adaptive filter 61 adjusts
its coefficients based on the digital signal M'(n) so that a difference between the
digital signal D(n) received at the input section 61a and the digital signal D'(n)
output from the output section 61b is as close to zero as possible.
[0128] The subtracting section 63 receives the digital signal M'(n) output from the subtracting
section 62 and also receives the digital signal M(n) output from the ADC 47. The subtracting
section 63 subtracts the digital signal M'(n) from the digital signal M(n), and outputs
a digital signal P(n) resulting from the subtraction.
[0129] The filter block 60 is thus configured as described above.
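To make the data flow of the filter block 60 concrete, the following is a minimal per-sample sketch. The filter length, step size, and the normalized-LMS update rule are assumptions; the embodiment only states that the adaptive filter 61 adjusts its coefficients so that the difference between D(n) and D'(n) approaches zero.

#include <stddef.h>
#include <stdio.h>

/* Minimal sketch of filter block 60 (adaptation rule assumed to be NLMS).
 * Per sample: D   = sample of digital signal D(n) from ADC 43,
 *             M   = sample of digital signal M(n) from ADC 47,
 *             *Mp = M'(n), the output of subtracting section 62,
 *             return value = P(n), the output of subtracting section 63. */
#define TAPS 64                      /* assumed length of adaptive filter 61 */

typedef struct {
    double w[TAPS];                  /* adaptive filter coefficients */
    double x[TAPS];                  /* recent samples of D(n)       */
} filter_block_t;                    /* zero-initialize before use   */

static double filter_block_step(filter_block_t *fb, double D, double M,
                                double *Mp)
{
    /* Shift the reference delay line and insert the new D(n). */
    for (size_t i = TAPS - 1; i > 0; i--)
        fb->x[i] = fb->x[i - 1];
    fb->x[0] = D;

    /* Adaptive filter 61: D'(n) = w^T x. */
    double Dp = 0.0, energy = 1e-6;
    for (size_t i = 0; i < TAPS; i++) {
        Dp     += fb->w[i] * fb->x[i];
        energy += fb->x[i] * fb->x[i];
    }

    /* Subtracting section 62: M'(n) = M(n) - D'(n), the noise estimate. */
    *Mp = M - Dp;

    /* Assumed NLMS update: w <- w + mu * M'(n) * x / (x^T x + eps),
     * which drives D'(n) toward D(n) as described in paragraph [0127]. */
    const double mu = 0.1;
    for (size_t i = 0; i < TAPS; i++)
        fb->w[i] += mu * (*Mp) * fb->x[i] / energy;

    /* Subtracting section 63: P(n) = M(n) - M'(n). */
    return M - *Mp;
}

int main(void)
{
    filter_block_t fb = {0};
    double Mp;
    double P = filter_block_step(&fb, 0.5 /* D(n) */, 0.7 /* M(n) */, &Mp);
    printf("M'(n) = %f, P(n) = %f\n", Mp, P);
    return 0;
}

In the first communication mode, this yields P(n) ≈ D(n) once the filter has converged; in the second communication mode, the reference D(n) is effectively zero, so M'(n) ≈ M(n), consistent with the derivation given below for the two modes.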
[0130] Next, an operation of the intercom module 4 provided with the filter block 60 will
be described separately for the first communication mode and for the second communication
mode.
(On the First Communication Mode)
[0131] FIG. 18 is an explanatory diagram for an operation of the intercom module 4 in the
first communication mode.
[0132] To set the intercom module 4 to the first communication mode, the operator 81 continuously
presses the microphone switch 51. As shown in FIG. 18, the switching element 52a is
set to "ON" and the switching element 52b is set to "OFF" while the operator 81 is
pressing the microphone switch 51. Accordingly, in the first communication mode, the
patient microphone 2 is set to a state electrically disconnected from the speaker
50, while the operator microphone 41 is in a state electrically connected to the speaker
5.
[0133] When the operator 81 utters the voice v1, the operator microphone 41 receives the
voice v1 of the operator 81. Upon receiving the voice v1, the operator microphone
41 outputs the analog signal d1(t) representing the received voice v1. The preamplifier
42 receives the analog signal d1(t), amplifies the analog signal d1(t), and outputs
the analog signal d2(t).
[0134] The ADC 43 converts the analog signal d2(t) into the digital signal D(n). The digital
signal D(n) is a signal containing sound data representing the sound that the operator
microphone 41 has received (the voice v1 of the operator 81 here). The digital signal
D(n) is supplied to the DAC 44.
[0135] The DAC 44 converts the digital signal D(n) into the analog signal c1(t). The power
amplifier 45 receives the analog signal c1(t), and outputs the analog signal c2(t).
The analog signal c2(t) is input to the speaker 5, which in turn outputs the voice
v2 of the operator 81 corresponding to the received analog signal c2(t).
[0136] The operation described above is identical to that described with reference to FIG.
9, where the filter block 60 is omitted in the drawing. However, since the CT apparatus
1 comprises the filter block 60 as shown in FIG. 18, the digital signal D(n) is supplied
to the filter block 60, in addition to the DAC 44. In the case that the sound received
by the patient microphone 2 contains the voice v2 (voice of the operator 81 output
from the speaker 5), and in addition, another sound v4 (e.g., the voice of the patient
80, operating noise from machinery, and/or voice of the operator in the scan room
R1), the filter block 60 executes processing of removing the sound v4 from a sound
(v2+v4) received by the patient microphone 2. Now the processing will be particularly
described hereinbelow.
[0137] As shown in FIG. 18, the patient microphone 2 receives the voice v2 of the operator
81 output from the speaker 5, and in addition, another sound v4 (e.g., including the
voice of the patient 80, operating noise from machinery, and/or the voice of the operator
in the scan room R1). The sound v4 will be referred to hereinbelow as noise. Upon
receiving the sound containing the voice v2 of the operator 81 and noise v4, the patient
microphone 2 outputs an analog signal m1(t) representing the received sound.
[0138] The analog signal m1(t) is input to the amplifier board 3. The amplifier board 3
processes the analog signal m1(t), and outputs an analog signal m2(t). The buffer
amplifier 46 processes the analog signal m2(t), and outputs an analog signal m3(t).
The analog signal m3(t) may be expressed by the following equation:

        m3(t) = d3(t) + e(t)   ... (1)

wherein the signal component d3(t) is a signal component corresponding to the voice
v2 of the operator 81 output from the speaker 5, and the signal component e(t) is
a signal component corresponding to the noise v4.
[0139] The analog signal m3(t) is converted into a digital signal M(n) at the ADC 47. The
digital signal M(n) is a signal containing sound data representing the sound (sound
containing the voice v2 and noise v4 here) that the patient microphone 2 has received.
The digital signal M(n) may be expressed by the following equation:

        M(n) = D3(n) + E(n)   ... (2)

wherein D3(n) and E(n) correspond, respectively, to the signal components d3(t) and
e(t) of the analog signal m3(t) input to the ADC 47 (see the right side of EQ. (1)).
Accordingly, the signal component D3(n) of the digital signal M(n) represents the
signal component corresponding to the voice v2 of the operator 81 output from the
speaker 5, and the signal component E(n) of the digital signal M(n) represents the
signal component corresponding to the noise v4.
[0140] Comparing the signal component D3(n) with the digital signal D(n) described earlier,
the signal component D3(n) represents the voice v2 received by the patient microphone
2 in the scan room R1, while the digital signal D(n) represents the voice v1 received
by the operator microphone 41 in the operation room R2. Since the voice v2 may be
considered to be substantially the same as the voice v1, the signal component D3(n)
may be considered to be substantially the same as the digital signal D(n). Hence,
representing D3(n) = D(n), EQ. (2) may be expressed by the following equation:

        M(n) = D(n) + E(n)   ... (3)

[0141] Accordingly, in the present embodiment, the digital signal M(n) is considered to
be expressed by a sum of the two signal components D(n) and E(n), as given by EQ.
(3).
[0142] The digital signal M(n) is supplied to the subtracting section 62.
[0143] Moreover, as described earlier, the filter block 60 has the adaptive filter 61. The
adaptive filter 61 receives the digital signal D(n), and outputs the digital signal
D'(n). The digital signal D'(n) is output to the subtracting section 62.
[0144] The subtracting section 62 subtracts the digital signal D'(n) that the adaptive filter
61 has output, from the digital signal M(n) that the ADC 47 has output, and outputs
a digital signal M'(n). The digital signal M'(n) may be expressed by the following
equation:

        M'(n) = M(n) - D'(n)   ... (4)

[0145] Substituting M(n) expressed by EQ. (3) into EQ. (4), the following equation results:

        M'(n) = D(n) + E(n) - D'(n)   ... (5)

[0146] As described earlier, the adaptive filter 61 receives the digital signal M'(n) from
the subtracting section 62 via the input section 61c. The adaptive filter 61 adjusts
its coefficients based on the digital signal M'(n) received from the subtracting section
62 so that a difference between the digital signal D(n) received at the input section
61a and the digital signal D'(n) output from the output section 61b is as close to
zero as possible. This allows us to regard D'(n) as D'(n)≈D(n). Thus, EQ. (5) may
be expressed by the following equation:

        M'(n) ≈ E(n)   ... (6)

[0147] As described earlier, E(n) represents the signal component corresponding to the noise
v4. Accordingly, by the subtracting section 62 subtracting the digital signal D'(n)
that the adaptive filter 61 has output, from the digital signal M(n) that the ADC
47 has output, the digital signal M'(n) representing the signal component corresponding
to the noise v4 can be generated from the digital signal M(n).
[0148] The filter block 60 also has another subtracting section 63. The subtracting section
63 subtracts the digital signal M'(n) from the digital signal M(n) that the ADC 47
has output, and outputs a digital signal P(n). The digital signal P(n) is expressed
by the following equation:

        P(n) = M(n) - M'(n)   ... (7)

wherein M(n) is expressed by EQ. (3), and M'(n) is expressed by EQ. (6); therefore,
EQ. (7) may be changed into the following equation:

        P(n) ≈ (D(n) + E(n)) - E(n) = D(n)   ... (8)

[0149] As given by EQ. (6), the digital signal M'(n) represents the signal component E(n)
substantially corresponding to the noise v4. Accordingly, by the subtracting section
63 subtracting the digital signal M'(n) from the digital signal M(n), a digital signal
P(n)≈D(n) containing sound data representing the voice v2 of the operator can be generated.
[0150] In this way, the filter block 60 can remove signal components substantially corresponding
to the noise v4 from the digital signal M(n) containing the voice v2 of the operator
and noise v4 to extract the digital signal P(n)≈D(n) corresponding to the voice v2
of the operator.
[0151] The digital signal P(n)≈D(n) output from the filter block 60 is input to the GT control
section 21.
[0152] The GT control section 21 executes processing of converting the digital signal P(n)≈D(n)
into a digital signal Q(n) compatible with a CAN (Controller Area Network) communication.
The GT control section 21 has a storage section storing therein a program for executing
the processing of converting the digital signal P(n) into the digital signal Q(n)
compatible with a CAN communication, and a processor for loading the program stored
in the storage section and executing the aforesaid conversion processing. The storage
section in the GT control section 21 may be a non-transitory, computer-readable recording
medium storing therein one or more processor-executable instructions. The one or more
instructions, when executed by the processor, cause the processor to execute the
operation of converting the digital signal P(n) into the digital signal Q(n).
[0153] The GT control section 21 outputs the digital signal Q(n) to the light-emission control
section 32.
[0154] The light-emission control section 32 executes processing of controlling the light-emitting
section 31 based on the digital signal Q(n). The light-emission control section 32
has a storage section storing therein a program for controlling the light-emitting
section 31 based on the digital signal Q(n), and a processor for loading the program
stored in the storage section and executing the aforesaid control processing. The
storage section in the light-emission control section 32 may be a non-transitory,
computer-readable recording medium storing therein one or more processor-executable
instructions. The one or more instructions, when executed by the processor, cause
the processor to execute the operation of controlling the light-emitting section 31
based on the digital signal Q(n).
[0155] As described earlier with reference to FIGS. 11 to 16, the light-emitting section
31 changes the number of energized light-emitting elements depending upon the loudness
of the voice of the operator 81.
[0156] As described above, when noise v4 occurs in the first communication mode, the patient
microphone 2 receives the voice v2 of the operator 81, and in addition, the noise v4.
However, since the operation of the filter block 60 can remove the noise v4 from the
sound (v2+v4) received by the patient microphone 2, the GT control section 21 is supplied
with the digital signal P(n) containing substantially only the voice of the operator
81. Accordingly, even when the noise v4 occurs in the first communication mode, the
light-emitting section 31 can be energized in response to the loudness of the voice
of the operator 81.
[0157] The main operations of the adaptive filter 61, and subtracting sections 62 and 63
in the first communication mode shown in FIG. 18 are as follows.
(a1) The input section 61a of the adaptive filter 61 receives the digital signal D(n)
containing sound data representing a sound that the operator microphone 41 has received.
(a2) The subtracting section 62 receives the digital signal M(n) containing sound
data representing a sound that the patient microphone 2 has received.
(a3) The subtracting section 62 generates the digital signal M'(n) representing signal
components corresponding to the noise v4 from the digital signal M(n).
(a4) The adaptive filter 61 generates the digital signal D'(n) based on the digital
signal D(n) and digital signal M'(n). The adaptive filter 61 also adjusts its coefficients
based on the digital signal M'(n) so that a difference between the digital signal
D(n) and digital signal D'(n) is as close to zero as possible.
(a5) The subtracting section 63 subtracts the digital signal M'(n) from the digital
signal M(n) to thereby generate the digital signal P(n) containing sound data representing
the voice of the operator 81.
[0158] Moreover, the intercom module 4 has a storage section 53 storing therein a program
for executing the processing of the filter block 60 described above with reference
to FIG. 18. The filter block 60 is configured as a processor for loading the program
stored in the storage section 53 and executing the aforesaid processing. The storage
section 53 may be a non-transitory, computer-readable recording medium storing therein
one or more processor-executable instructions. The one or more instructions, when
executed by the processor, cause the processor to execute the operations comprising
the processing of (b1) - (b5) below:
(b1) receiving the digital signal D(n) containing sound data representing a sound
that the operator microphone 41 has received;
(b2) receiving the digital signal M(n) containing sound data representing a sound
that the patient microphone 2 has received;
(b3) generating, from the digital signal M(n), the digital signal M'(n) representing
signal components corresponding to noise;
(b4) generating the digital signal D'(n) based on the digital signal D(n) and digital
signal M'(n); and
(b5) subtracting the digital signal M'(n) from the digital signal M(n) to thereby
generate the digital signal P(n) containing sound data representing the voice of the
operator 81.
[0159] In the present embodiment, the program for executing the operations comprising the
processing (b1) - (b5) above is stored in the storage section 53 of the intercom module
4. The program, however, may be stored in a storage section different from the storage
section 53, or only part of the program may be stored in a storage section different
from the storage section 53.
[0160] In FIG. 18, the operation of the CT apparatus 1 in the first communication mode is
described. Next, the operation of the CT apparatus 1 in the second communication mode
will be described hereinbelow.
(On the Second Communication Mode)
[0161] FIG. 19 is an explanatory diagram for an operation of the intercom module 4 in the
second communication mode.
[0162] When the operator 81 is not pressing the microphone switch 51, the switching element
52b is in an "ON" state and the switching element 52a is in an "OFF" state, as shown
in FIG. 19. Accordingly, in the second communication mode, the operator microphone
41 is in a state electrically disconnected from the speaker 5 while the patient microphone
2 is in a state electrically connected to the speaker 50.
[0163] When the patient 80 utters the voice v3, the patient microphone 2 receives the voice
v3 of the patient 80. Upon receiving the voice v3, the patient microphone 2 outputs
the analog signal m1(t) representing the received voice v3.
[0164] The amplifier board 3 receives the analog signal m1(t) output from the patient microphone
2, amplifies the received analog signal m1(t), and outputs the analog signal m2(t).
The buffer amplifier 46 processes the analog signal m2(t) received from the amplifier
board 3, and outputs the analog signal m3(t).
[0165] The ADC 47 converts the analog signal m3(t) output from the buffer amplifier 46 into
the digital signal M(n).
[0166] The digital signal M(n) is supplied to the subtracting section 62.
[0167] The subtracting section 62 subtracts the digital signal D'(n) that the adaptive filter
61 has output, from the digital signal M(n) that the ADC 47 has output, and outputs
the digital signal M'(n). The digital signal M'(n) may be expressed by the following
equation:

M'(n) = M(n) - D'(n) ... (9)
[0168] In the second communication mode, the switching element 52a is "OFF," and therefore
the digital signal D'(n) may be regarded as D'(n)≈0. Accordingly, EQ. (9) may be expressed
by the following equation:

M'(n) ≈ M(n) ... (10)
[0169] Since the digital signal M(n) represents the voice v3 of the patient 80, it can be
seen that the digital signal M'(n) output by the subtracting section 62 substantially
represents the voice v3 of the patient 80.
[0170] The digital signal M'(n) is input to the DAC 48. The DAC 48 converts the digital
signal M'(n) into the analog signal f1(t). The power amplifier 49 receives the analog
signal f1(t) from the DAC 48, amplifies the received analog signal f1(t), and outputs
the analog signal f2(t). The analog signal f2(t) is supplied to the speaker 50. Accordingly,
the circuitry part constituted by the DAC 48 and the power amplifier 49 operates to generate,
based on the digital signal M'(n), the analog signal f2(t) to be supplied to the speaker 50.
[0171] The speaker 50 receives the analog signal f2(t) from the power amplifier 49, and
outputs a sound corresponding to the received analog signal f2(t).
[0172] Accordingly, when the patient 80 utters the voice v3, the operator 81 can hear the
voice v3 of the patient 80 through the speaker 50.
[0173] The digital signal M'(n) is also supplied to the subtracting section 63. The subtracting
section 63 subtracts the digital signal M'(n) from the digital signal M(n) that the
ADC 47 has output, and outputs the digital signal P(n). The digital signal P(n) is
expressed by the following equation:

P(n) = M(n) - M'(n) ... (11)

Since M'(n)≈M(n) (see EQ. (10)), EQ. (11) may be rewritten as the following equation:

P(n) ≈ 0 ... (12)
[0174] Accordingly, in the second communication mode, the digital signal P(n) is P(n)≈0.
Thus, the light-emission control section 32 decides that the operator 81 is uttering
substantially no voice, and therefore, the light-emitting section 31 can be prevented
from emitting light when the patient 80 utters the voice v3.
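By way of a non-limiting illustration only, the per-sample signal flow of the second
communication mode may be sketched in Python as follows, assuming that the adaptive
filter 61 contributes D'(n)≈0 because the switching element 52a is in the "OFF" state.

    # Sketch of one sample of the second communication mode (FIG. 19).
    def second_mode_step(M_n, D_prime_n=0.0):
        M_prime_n = M_n - D_prime_n   # EQ. (9); with D'(n) = 0 this gives EQ. (10)
        P_n = M_n - M_prime_n         # EQ. (11); reduces to EQ. (12), P(n) = 0
        return M_prime_n, P_n         # M'(n) drives the speaker 50, P(n) drives the level indication

    # The patient's voice reaches the speaker 50 unchanged, while P(n) stays at
    # zero, so the light-emitting section 31 remains unlit.
    print(second_mode_step(0.3))      # prints (0.3, 0.0)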
[0175] As described above, in the first communication mode (see FIG. 18), when the noise
v4 occurs, it can be removed from the sound that the patient microphone 2 has received.
Accordingly, the operator 81 can see the light-emitting section 31 while uttering
a voice and thereby visually confirm, in real time, a change in the number of energized
light-emitting elements depending upon the loudness of the voice of the operator 81,
and therefore can visually recognize how loudly the voice of the operator 81 is being
heard by the patient 80.
[0176] In the second communication mode (see FIG. 19), when the patient 80 utters the voice
v3, the operator 81 can hear the voice of the patient 80. Moreover, the light-emitting
section 31 emits no light when the patient 80 utters the voice v3, and therefore it is
possible to avoid a situation in which the light-emitting section 31 emits light in spite
of the fact that the voice of the operator 81 is not being output from the speaker 5.
[0177] In the present embodiment, the GT control section 21 and light-emission control section
32 are used to generate the control signal L(n) for controlling the light-emitting
section 31 from the digital signal P(n). The GT control section 21 and light-emission
control section 32, however, may be constructed as a single control section, which
may be used to generate the control signal L(n) for controlling the light-emitting
section 31 from the digital signal P(n).
[0178] The present embodiment describes a case in which the light-emitting section 31 is used
to inform the operator 81 that his/her voice is being output from the speaker 5. The
method of informing the operator 81 that his/her voice is being output from the speaker
5 is not limited to the case above, and the operator 81 may be informed by a different
method. As another such method, a case in which the display section 33 (see FIG. 1) on
the gantry 100 is used will be described hereinbelow.
[0179] FIG. 20 is an explanatory diagram for the case in which the display section 33 on the
gantry 100 is used to inform the operator 81 that his/her voice is being output from
the speaker 5.
[0180] The GT control section 21 receives the digital signal P(n), based on which it generates
a control signal T(n) for controlling the display section 33. The display section
33 informs the operator 81 that his/her voice is being output from the speaker 5 based
on the control signal T(n) (see FIG. 21).
[0181] FIG. 21 is an enlarged view of the display section 33 on the gantry 100.
[0182] On the display section 33 is displayed a level meter 34. The level meter 34 is divided
into a plurality of areas. While the level meter 34 is shown to be divided into five
areas in FIG. 21 for convenience of explanation, the level meter 34 may be divided
into more or fewer than five areas. Each area corresponds to a respective light-emitting
element in the light-emitting section 31 (see FIG. 10). The level meter 34 indicates
the loudness of the voice of the operator 81 by five levels: level 1 to level 5. In
FIG. 21, a case in which the loudness of the voice of the operator 81 is at level
4 is shown.
[0183] In response to the control signal T(n), the display section 33 changes the level
indicated by the level meter 34 so that the level corresponds to the loudness of the
voice of the operator 81. Accordingly, the operator 81 can confirm whether or not
his/her voice is being output from the speaker 5 by visually confirming the display
section 33.
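By way of a non-limiting illustration only, the manner in which the level meter 34 may
reflect the control signal T(n) can be sketched in Python as follows; the assumption that
T(n) directly carries an integer level from 0 to 5 is illustrative and not specified by
the present embodiment.

    # Illustrative sketch only: T(n) is assumed to carry an integer level 0 to 5.
    def render_level_meter(level, areas=5):
        # Return a text rendering of the level meter 34 for a given level.
        level = max(0, min(areas, level))
        return "[" + "#" * level + "-" * (areas - level) + "]"

    print(render_level_meter(4))      # prints [####-], corresponding to FIG. 21 (level 4)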
[0184] In FIG. 20 (and FIG. 21), the control signal T(n) is transmitted to the display section
33 on the gantry 100 to display the level meter 34 on the display section 33. The
control signal T(n), however, may be transmitted to the display device 302 on the
operator console 300 to display the level meter on the display device 302, as shown
in FIG. 22.
[0185] Moreover, two or more of the light-emitting section 31, the display section 33 on the
gantry 100, and the display device 302 on the operator console 300 may be used in
combination to inform the operator 81 that his/her voice is being output from the speaker
5. Furthermore, the intercom module 4 may be provided with its own display section, on
which information informing the operator 81 whether or not his/her voice is being output
from the speaker 5 may be displayed.
[0186] While the present embodiment describes a case in which the light-emitting section
31 functioning as a level meter is used to inform the operator 81 that his/her voice is
being output from the speaker, a manner different from the level meter may be used insofar
as it can inform the operator 81 that his/her voice is being output from the speaker.
[0187] Moreover, in the present embodiment, the number of energized light-emitting elements
among the light-emitting elements e1 to e5 in the light-emitting section 31 is changed
depending upon the loudness of the voice of the operator 81. The light-emitting section
31, however, may be constructed from only one light-emitting element, which is energized
when the operator 81 utters a voice and not energized when the operator 81 is not
uttering a voice.
[0188] Furthermore, in the present embodiment, communication between the operator 81 and
the patient 80 is implemented using the intercom module 4, which is capable of switching
between the first communication mode and the second communication mode with the microphone
switch 51. The present invention is, however, not limited to the case in which the intercom
module 4 described above is used, and it may be applied to a case in which a communication
system capable of performing communication from the operator 81 to the patient 80 and
communication from the patient 80 to the operator 81 at the same time is used.
[0189] In the present embodiment, the filter block 60 is constructed from the adaptive filter
61, and subtracting sections 62 and 63. The filter block 60 is, however, not limited
to this construction, and it may have a construction different from that of the adaptive
filter 61 and the subtracting sections 62 and 63, insofar as noise can be removed from
the sound received by the patient microphone 2. For example, the filter block 60 may be
constructed using a computing section (e.g., an adding section, a multiplying section,
or a dividing section) different from the subtracting section.
[0190] Moreover, in the present embodiment, a DSP is used as the filter block 60. In the
present invention, however, the filter block 60 is not limited to the DSP and it may
be implemented using circuitry, such as, for example, an FPGA (field-programmable
gate array), different from the DSP.
[0191] Furthermore, in the present embodiment, the operator 81 visually confirms the light-emitting
section 31 via the window 102 (see FIG. 1). The operator 81, however, may confirm
the light emission state of the light-emitting section 31 by a method different from
that of visually confirming the light-emitting section 31 via the window 102. For
example, a camera for monitoring the inside of the scan room R1 may be provided to
display the camera image on the display device in the operation room R2 so as to allow
the operator 81 to visually confirm the light emission state of the light-emitting
section 31.
[0192] In addition, while the scan room R1 and operation room R2 are separated from each
other by the wall 101 in the present embodiment, the present invention is not limited
to the case in which the scan room R1 and operation room R2 are separated from each
other by the wall 101. For example, instead of separating the scan room R1 and the
operation room R2 by the wall 101, a corridor may be provided between the scan room
R1 and the operation room R2 so that the operator can walk therethrough to move between
the two rooms. In this case, in order that the operator 81 can visually
confirm the light emission state of the light-emitting section 31, windows for allowing
the operator 81 to visually confirm the light emission state of the light-emitting
section 31 may be provided in both the scan room R1 and operation room R2. Alternatively,
a camera for monitoring the inside of the scan room R1 may be provided to display
the camera image on the display device in the operation room R2 so as to allow the
operator 81 to visually confirm the light emission state of the light-emitting section
31.
[0193] Moreover, in the present embodiment, the gantry 100 is provided with the light-emitting
section 31 for visually informing the operator that his/her voice is being output
from the speaker. The operator, however, need not necessarily be informed visually insofar
as the operator can recognize that his/her voice is being output from the speaker, and
the operator may be informed in another way, for example, audibly.
[0194] Furthermore, the present embodiment describes the case of the CT apparatus 1. The
present invention, however, may be applied to any medical apparatus, such as an MRI
apparatus or a SPECT apparatus, other than the CT apparatus 1, that requires communication
between an operator and a patient.