[0001] The present invention relates to a sound source processing method in a musical tone
waveform generation apparatus and, more particularly, to a musical tone waveform generation
apparatus capable of mixing a plurality of sound source methods.
[0002] Along with the development of digital signal processing techniques and LSI processing
techniques, various high-performance electronic musical instruments have been realized.
[0003] Since a musical tone waveform generation apparatus for an electronic musical instrument
requires large-volume, high-speed digital calculations, a conventional apparatus is
constituted by a special-purpose sound source circuit which realizes an architecture
equivalent to a musical tone generation algorithm based on a required sound source
method by hardware components. Such a sound source circuit generates a musical tone
waveform on the basis of a PCM or modulation method.
[0004] The above-mentioned sound source circuit has a large circuit scale regardless of
the sound source method adopted. When the sound source circuit is formed in an LSI,
it has a scale about twice that of a versatile data processing microprocessor since
the sound source circuit requires complicated address control for accessing waveform
data on the basis of various performance data. Registers or the like for temporarily
storing intermediate data obtained in the process of sound source generation processing
must be arranged everywhere in the architecture corresponding to the sound source
method. Furthermore, in order to realize a polyphonic arrangement capable of simultaneously
generating a plurality of musical tones, shift registers or the like for time-divisionally
executing sound source processing in a hardware manner must be arranged everywhere.
[0005] As described above, since the conventional musical tone waveform generation apparatus
is constituted by the special-purpose sound source circuit corresponding to the sound
source method, its hardware scale is undesirably increased. This results in an increase
in manufacturing cost in terms of, e.g., a yield in the manufacture of LSI chips,
when the sound source circuit is realized by an LSI. This also results in an increase
in size of the musical tone waveform generation apparatus.
[0006] When a sound source method is to be changed, or when the number of polyphonic channels
is to be increased, the sound source circuit must be considerably modified, resulting
in an increase in development cost.
[0007] When the conventional musical tone waveform generation apparatus is realized as an
electronic musical instrument, a control circuit, comprising, e.g., a microprocessor,
for generating, based on performance data corresponding to a performance operation,
data which can be processed by the sound source circuit, and for communicating performance
data with another musical instrument, is required. The control circuit requires a
sound source control program, corresponding to the sound source circuit, for supplying
data corresponding to performance data to the sound source circuit in addition to
a performance data processing program for processing performance data. In addition,
these two programs must be synchronously operated. The development of such complicated
programs causes a considerable increase in cost.
[0008] On the other hand, in recent years, a large number of high-performance microprocessors
for performing versatile data processing have been developed, and a musical tone waveform
generation apparatus for executing sound source processing in a software manner using
such a microprocessor may be realized. However, no technique for synchronously operating
a performance data processing program for processing performance data, and a sound
source processing program for executing sound source processing on the basis of the
performance data is available. In particular, since a processing time in the sound
source processing program varies depending on the sound source method, a complicated
timing control program for outputting generated musical tone data to a D/A converter
is required. When the sound source processing is merely performed in a software manner,
the processing programs become very complicated, and a high-speed sound source method
such as a modulation method cannot be executed because of limitations on processing
speed and program capacity. In particular, high-grade sound source processing for
switching sound source methods in units of tone generation channels, and generating
tones in different sound source methods in accordance with performance data so as
to generate a real musical tone waveform having a complicated frequency structure
like musical tones generated by an acoustic instrument cannot be performed.
[0009] Furthermore, a player sometimes wants to perform with a plurality of instrument
tone colors to meet his or her performance requirements. In this case, the following
processing is required. That is, a split point is determined for the tone ranges or
velocities of ON keys of an electronic musical instrument, so that musical tones in
a plurality of instrument tone colors can be generated in accordance with the range,
bounded by the split point, to which the tone range or velocity belongs, thus attaining
complicated, colorful musical expressions. However, simple software processing cannot
attain such high-grade sound source method processing.
It is also difficult to execute processing for generating tones in different instrument
tone colors in units of music parts.
[0010] It is an object of the present invention to attain high-grade sound source processing
which can assign different sound source methods to a plurality of tone generation
channels under the program control of a microprocessor without requiring a special-purpose
sound source circuit.
[0011] It is another object of the present invention to allow generation of musical tone
signals in different tone colors or different sound source methods in units of regions,
operation velocities, or music parts having a split point as a boundary under the program
control of a microprocessor without requiring a special-purpose sound source circuit.
[0012] According to the first aspect of the present invention, there is provided a musical
tone waveform generation apparatus comprising: storage means for storing a plurality
of sound source processing programs corresponding to a plurality of types of sound
source methods; musical tone signal generation means for generating musical tone signals
in arbitrary sound source methods in tone generation channels by executing the plurality
of sound source processing programs stored in the storage means; and musical tone signal output
means for outputting the musical tone signals generated by the musical tone signal
generation means at predetermined output time intervals.
[0013] According to the musical tone waveform generation apparatus of the first aspect of
the present invention, high-grade sound source processing which can assign different
sound source methods to a plurality of tone generation channels without using a special-purpose
sound source circuit can be performed. Since a constant output rate of a musical tone
signal can be maintained upon operation of the musical tone signal output means, a
musical tone waveform will not be distorted.
[0014] According to the second aspect of the present invention, there is provided a musical
tone waveform generation apparatus comprising: program storage means for storing a
performance data processing program for processing performance data, and a plurality
of sound source processing programs corresponding to a plurality of sound source methods
for obtaining a musical tone signal; address control means for controlling an address
of the program storage means; data storage means for storing musical tone generation
data necessary for generating a musical tone signal by an arbitrary one of the plurality
of sound source methods in units of tone generation channels; arithmetic processing
means for performing a predetermined arithmetic operation; program execution means
for executing the performance data processing program and the sound source processing
program stored in the program storage means while controlling the address control
means, the data storage means, and the arithmetic processing means, for normally executing
the performance data processing program to control musical tone generation data on
the data storage means, for executing the sound source processing program at predetermined
time intervals, for executing the performance data processing program again upon completion
of the sound source processing program, and for executing time-divisional processing
on the basis of musical tone generation data on the data storage means upon execution
of the sound source processing program so as to generate musical tone signals by the
sound source methods assigned to the tone generation channels; and musical tone signal
output means for holding the musical tone signals obtained upon execution of the sound
source processing programs by the program execution means, and outputting the held
musical tone signals at predetermined output time intervals.
[0015] In the musical tone waveform generation apparatus according to the second aspect
of the present invention, the program storage means, the address control means, the
data storage means, the arithmetic processing means, and the program execution means
have the same arrangement as a versatile microprocessor, and no special-purpose sound
source circuit is required at all. The musical tone signal output means, although it
has an arrangement different from that of a versatile microprocessor, is versatile
within the category of musical tone waveform generation apparatuses.
[0016] The circuit scale of the overall musical tone waveform generation apparatus can be
greatly reduced, and when the apparatus is realized by an LSI, the same manufacturing
technique as that of a normal processor can be adopted. Since the yield of chips can
be increased, manufacturing cost can be greatly reduced. Since the musical tone signal
output means can be constituted by simple latch circuits, addition of this circuit
portion causes almost no increase in manufacturing cost.
[0017] When a modulation method is required to be switched, or when the number of polyphonic
channels is required to be changed, a sound source processing program stored in the
program storage means need only be changed to meet the above requirements. Therefore,
the development cost of a new musical tone waveform generation apparatus can be greatly
reduced, and a new modulation method can be presented to a user by means of, e.g.,
a ROM card.
[0018] The above-mentioned effects can be provided since the second aspect of the present
invention can realize the following program and data architectures.
[0019] More specifically, the musical tone waveform generation apparatus according to the
second aspect of the present invention realizes a data architecture in which musical
tone generation data necessary for generating musical tones are stored on the data
storage means. When a performance data processing program is executed, corresponding
musical tone generation data on the data storage means are controlled, and when a
sound source processing program is executed, musical tone signals are generated on
the basis of the corresponding musical tone generation data on the data storage means.
In this manner, a data communication between the performance data processing program
and the sound source processing program is performed via musical tone generation data
on the data storage means, and access of one program to the data storage means can
be performed regardless of an execution state of the other program. Therefore, the
two programs can have substantially independent module arrangements, and hence, a
simple and efficient program architecture can be attained.
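The data architecture of paragraph [0019] can be sketched as follows. This is a minimal illustration, not the specification's implementation; all names (`channel_data`, the record fields, the placeholder waveform) are hypothetical. The point shown is that the two programs never call each other: the performance data processing program only writes per-channel musical tone generation data into the shared store, and the sound source processing program only reads it.

```python
# Hypothetical shared data store on the data storage means:
# one record of musical tone generation data per tone generation channel.
channel_data = [dict(active=False, pitch=0.0, phase=0.0) for _ in range(8)]

def performance_data_processing(key_on_events):
    # Driven by e.g. keyboard scanning: updates tone generation data only.
    for ch, pitch in key_on_events:
        channel_data[ch].update(active=True, pitch=pitch, phase=0.0)

def sound_source_processing():
    # Reads the same records, independent of the other program's state.
    sample = 0.0
    for rec in channel_data:
        if rec["active"]:
            rec["phase"] += rec["pitch"]
            sample += rec["phase"] % 1.0  # trivial placeholder waveform
    return sample

performance_data_processing([(0, 0.01), (1, 0.02)])
out = sound_source_processing()
```

Because each program accesses only the shared records, either module can be replaced without changing the other, which is the "substantially independent module arrangements" property described above.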
[0020] In addition to the data architecture, the musical tone waveform generation apparatus
according to the second aspect of the present invention realizes the following program
architecture. That is, the performance data processing program is normally executed
to execute, e.g., scanning of keyboard keys and various setting switches, demonstration
performance control, and the like. During execution of this program, the sound source
processing program is executed at predetermined time intervals, and upon completion
of the processing, the control returns to the performance data processing program.
Thus, the sound source processing program forcibly interrupts the performance data
processing program on the basis of an interrupt signal generated from the interrupt
control means at predetermined time intervals. For this reason, the performance data
processing program and the sound source processing program need not be synchronized.
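The program architecture of paragraph [0020] can be simulated in a few lines. In this sketch the timer interrupt is replaced by an iteration counter, and `INTERRUPT_PERIOD` is an assumed value; in the actual arrangement the interrupt control means would fire asynchronously at the predetermined time intervals.

```python
# Simulated interrupt-driven control flow: the performance data
# processing program runs continuously, and every INTERRUPT_PERIOD
# steps the sound source processing program forcibly interrupts it,
# after which control returns to the main program automatically.
INTERRUPT_PERIOD = 4  # hypothetical interrupt interval
log = []

def performance_data_processing_step(i):
    log.append(("perf", i))    # e.g. key scanning, switch scanning

def sound_source_processing(i):
    log.append(("source", i))  # generate musical tone signals

for step in range(12):
    if step % INTERRUPT_PERIOD == 0:    # timer interrupt fires
        sound_source_processing(step)   # interrupts the main program
    performance_data_processing_step(step)  # control returns here
```

No synchronization code appears in either routine: the periodic interrupt alone determines when sound source processing runs, which is why the two programs need not be synchronized.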
[0021] When the program execution means executes the sound source processing program, its
processing time changes depending on sound source methods. However, the change in
processing time can be absorbed by the musical tone signal output means. Therefore,
no complicated timing control program for outputting musical tone signals to, e.g.,
a D/A converter is required.
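The role of the musical tone signal output means in paragraph [0021] can be sketched as a simple latch (names here are illustrative): the sound source processing program writes each finished sample whenever its variable-length computation completes, while the D/A side reads the latch at a fixed output rate, so the timing jitter of the write side never reaches the output.

```python
# A latch decouples irregular sample production from fixed-rate output.
latch = {"sample": 0}

def sound_source_write(sample):
    latch["sample"] = sample   # may occur at irregular times

def da_converter_read():
    return latch["sample"]     # occurs at a constant sampling rate

sound_source_write(100)        # slow method finishes here...
outputs = [da_converter_read(), da_converter_read()]  # fixed-rate reads
sound_source_write(200)        # ...fast method finishes here
outputs.append(da_converter_read())
```

The reader always obtains the most recently completed sample, so a constant output rate is maintained regardless of which sound source method produced it.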
[0022] As described above, the data architecture for attaining a data link between the performance
data processing program and the sound source processing program via musical tone generation
data on the data storage means, and the program architecture for executing the sound
source processing program at predetermined time intervals while interrupting the performance
data processing program are realized, and the musical tone signal output means is
arranged. Therefore, sound source processing under the efficient program control can
be realized by substantially the same arrangement as a versatile processor.
[0023] Furthermore, the data storage means stores musical tone generation data necessary
for generating musical tone signals in an arbitrary one of a plurality of sound source
methods in units of tone generation channels, and the program execution means executes
the performance data processing program and the sound source processing program by
time-divisional processing in correspondence with the tone generation channels. Therefore,
the program execution means accesses the corresponding musical tone generation data
on the data storage means at each time-divisional timing, and executes a sound source
processing program of the assigned sound source method while simply switching the
two programs. In this manner, musical tone signals can be generated by different sound
source methods in units of tone generation channels.
[0024] In this manner, according to the second aspect of the present invention, musical
tone signals can be generated by different sound source methods in units of tone generation
channels under simple control, i.e., by simply switching between time-divisional
processing for musical tone generation data in units of tone generation channels on
the data storage means, and a sound source processing program based on the musical
tone generation data.
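The per-channel dispatch described in paragraphs [0023] and [0024] amounts to a table lookup at each time-divisional timing. The following sketch uses toy stand-in routines (the real PCM and FM processing are far more involved; see the operation flow charts of Figs. 13 and 17): each channel record names its assigned sound source method, and the sound source processing program selects the matching routine per channel.

```python
# Toy stand-ins for per-method sound source processing routines.
def pcm_process(data):
    return data["wave"][data["addr"] % len(data["wave"])]

def fm_process(data):
    return data["carrier"] * data["mod"]

METHODS = {"PCM": pcm_process, "FM": fm_process}

# Musical tone generation data in units of tone generation channels;
# each record carries the sound source method assigned to its channel.
channels = [
    {"method": "PCM", "wave": [1, 2, 3], "addr": 4},
    {"method": "FM",  "carrier": 5, "mod": 2},
]

# One time-divisional pass: dispatch each channel to its own method.
samples = [METHODS[ch["method"]](ch) for ch in channels]
```

Switching a channel's sound source method is then just rewriting one field of its record, with no change to the dispatch loop itself.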
[0025] According to the third aspect of the present invention, there is provided a musical
tone waveform generation apparatus comprising: storage means for storing a sound source
processing program; musical tone signal generation means for executing the sound source
processing program stored in the storage means to generate a musical tone signal;
pitch designation means for designating a pitch of the musical tone signal generated
by the musical tone signal generation means; tone color determination means for determining
a tone color of the musical tone signal generated by the musical tone signal generation
means in accordance with the pitch designated by the pitch designation means; control
means for controlling the musical tone signal generation means to generate the musical
tone signal having the pitch designated by the pitch designation means and the tone
color determined by the tone color determination means; and musical tone signal output
means for outputting the musical tone signal generated by the musical tone signal
generation means at predetermined time intervals.
[0026] According to the fourth aspect of the present invention, there is provided a musical
tone waveform generation apparatus comprising: storage means for storing a sound source
processing program; musical tone signal generation means for executing the sound source
processing program stored in the storage means to generate a musical tone signal;
a performance operation member for instructing the musical tone signal generation
means to generate the musical tone signal; tone color determination means for determining
a tone color of the musical tone signal to be generated by the musical tone signal
generation means in accordance with an operation velocity of the performance operation
member; control means for controlling the musical tone signal generation means to
generate the musical tone signal having the tone color determined by the tone color
determination means; and musical tone signal output means for outputting the musical
tone signal generated by the musical tone signal generation means at predetermined
time intervals.
[0027] According to the fifth aspect of the present invention, there is provided a musical
tone waveform generation apparatus comprising: storage means for storing a sound source
processing program; musical tone signal generation means for executing the sound source
processing program stored in the storage means to generate a musical tone signal;
output means for outputting performance data of a plurality of parts constituting
a music piece; tone color determination means for determining a tone color of the
musical tone signal to be generated by the musical tone signal generation means in
accordance with one of the plurality of parts to which the performance data output
from the output means belongs; control means for controlling the musical tone signal generation
means to generate the musical tone signal having the tone color determined by the
tone color determination means; and musical tone signal output means for outputting
the musical tone signal generated by the musical tone signal generation means at predetermined
time intervals.
[0028] According to the musical tone waveform generation apparatuses of the third, fourth,
and fifth aspects of the present invention, musical tone signals can be generated
in different tone colors in units of regions, operation velocities, or music
parts having a split point as a boundary without using a special-purpose sound source
circuit. Since a constant output rate of musical tone signals can be maintained upon
operation of the musical tone signal output means, a musical tone waveform will not
be distorted.
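The fourth aspect's velocity-dependent tone color selection reduces to a single comparison against the split point. In this sketch the split value and tone color names are assumed for illustration only.

```python
VELOCITY_SPLIT = 64  # hypothetical velocity split point

def tone_color_for_velocity(velocity):
    # Operation velocities below the split point select one tone color;
    # velocities at or above it select the other.
    return "piano" if velocity < VELOCITY_SPLIT else "harpsichord"
```

A soft key stroke thus produces one instrument tone color and a hard stroke another, from the same key.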
[0029] According to the sixth aspect of the present invention, there is provided a musical
tone waveform generation apparatus comprising: program storage means for storing a
performance data processing program for processing performance data, and a sound source
processing program for obtaining a musical tone signal; address control means for
controlling an address of the program storage means; split point designation means
for causing a player to designate a split point to divide a range of a performance
data value into a plurality of ranges; tone color designation means for designating
tone colors of the plurality of ranges having the split point designated by the split
point designation means as a boundary; data storage means for storing musical tone
generation data necessary for generating the musical tone signal in correspondence
with a plurality of tone colors; arithmetic processing means for processing data;
program execution means for executing the performance data processing program and
the sound source processing program stored in the program storage means while controlling
the address control means, the data storage means, and the arithmetic processing means,
for normally executing the performance data processing program to control musical
tone generation data stored in the data storage means, for executing the sound source
processing program at predetermined time intervals, for executing the performance
data processing program again upon completion of the sound source processing program,
and for generating, upon execution of the sound source processing program, the musical
tone signal on the basis of the musical tone generation data on the data storage means
corresponding to the tone color designated by the tone color designation means in
correspondence with the range which has the split point designated by the split point
designation means as a boundary, and to which the performance data value belongs;
and musical tone signal output means for holding the musical tone signals in units
of tone generation operations obtained upon execution of the sound source processing
program by the program execution means, and outputting the held musical tone signals
at predetermined output time intervals.
[0030] According to the seventh aspect of the present invention, there is provided a musical
tone waveform generation apparatus comprising: program storage means for storing a
performance data processing program for processing performance data, and a plurality
of sound source processing programs corresponding to a plurality of sound source methods
for obtaining a musical tone signal; address control means for controlling an address
of the program storage means; split point designation means for causing a player to
designate a split point to divide a range of a performance data value into a plurality
of ranges; sound source method designation means for causing the player to designate
the sound source methods for the divided ranges having the split point designated
by the split point designation means as a boundary; data storage means for storing
musical tone generation data necessary for generating the musical tone signal in correspondence
with the plurality of sound source methods; arithmetic processing means for processing
data; program execution means for executing the performance data processing program
and the sound source processing program stored in the program storage means while controlling
the address control means, the data storage means, and the arithmetic processing means,
for normally executing the performance data processing program to control musical
tone generation data on the data storage means, for executing the sound source processing
program at predetermined time intervals, for executing the performance data processing
program again upon completion of the sound source processing program, and for generating,
upon execution of the sound source processing program, the musical tone signal on
the basis of the musical tone generation data corresponding to the sound source method
corresponding to the range to which the performance data value belongs, and by the
sound source processing program corresponding to the sound source method; and musical
tone signal output means for holding the musical tone signals obtained upon execution
of the sound source processing programs by the program execution means, and outputting
the held musical tone signals at predetermined output time intervals.
[0031] According to the eighth aspect of the present invention, there is provided a musical
tone waveform generation apparatus comprising: program storage means for storing a
performance data processing program for processing performance data, and a sound source
processing program for obtaining a musical tone signal; address control means for
controlling an address of the program storage means; tone color designation means
for causing a player to designate tone colors in units of music parts of musical tone
signals to be played; data storage means for storing musical tone generation data
necessary for generating a musical tone signal in an arbitrary one of the plurality
of tone colors; arithmetic processing means for processing data; program execution
means for executing the performance data processing program and the sound source processing
program stored in the program storage means while controlling the address control
means, the data storage means, and the arithmetic processing means, for normally executing
the performance data processing program to control musical tone generation data on
the data storage means, for executing the sound source processing program at predetermined
time intervals, for executing the performance data processing program again upon completion
of the sound source processing program, and for generating, upon execution of the
sound source processing program, the musical tone signal on the basis of the musical
tone generation data on the data storage means corresponding to the tone color designated
by the tone color designation means in correspondence with the music part of the musical
tone signal generated by the sound source processing program; and musical tone signal
output means for holding the musical tone signals in units of tone generation operations
obtained upon execution of the sound source processing program by the program execution
means, and outputting the held musical tone signals at predetermined output time intervals.
[0032] According to the ninth aspect of the present invention, there is provided a musical
tone waveform generation apparatus comprising: program storage means for storing a
performance data processing program for processing performance data, and a plurality
of sound source processing programs corresponding to a plurality of sound source methods
for obtaining a musical tone signal; address control means for controlling an address
of the program storage means; sound source method designation means for causing a
player to designate sound source methods in units of music parts of musical tone signals
to be played; data storage means for storing musical tone generation data necessary
for generating a musical tone signal by an arbitrary one of the plurality of sound
source methods; arithmetic processing means for processing data; program execution
means for executing the performance data processing program and the sound source processing
program stored in the program storage means while controlling the address control
means, the data storage means, and the arithmetic processing means, for normally executing
the performance data processing program to control musical tone generation data on
the data storage means, for executing the sound source processing program at predetermined
time intervals, for executing the performance data processing program again upon completion
of the sound source processing program, and for generating, upon execution of the
sound source processing program, the musical tone signal on the basis of the musical
tone generation data corresponding to the sound source method corresponding to the
music part of the musical tone signal generated by the sound source processing program,
and by the sound source processing program corresponding to the sound source method;
and musical tone signal output means for holding the musical tone signals obtained
upon execution of the sound source processing programs by the program execution means,
and outputting the held musical tone signals at predetermined output time intervals.
[0033] According to the musical tone waveform generation apparatuses of the sixth
and seventh aspects of the present invention, a player can designate a split point,
and can also designate tone colors or sound source methods in units of ranges having
the designated split point as a boundary, so that musical tone signals can be generated
by switching the corresponding tone colors or sound source methods in accordance with
the above-described ranges of predetermined performance data.
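A player-designated split point, as in the sixth and seventh aspects, divides the range of a performance data value (here, a key number) so that each resulting range maps to its own tone color or sound source method. The split points and method names below are assumed values for illustration; the standard-library `bisect` lookup also generalizes to more than one split point.

```python
import bisect

# Hypothetical player-designated split points and per-range assignments.
split_points = [24, 48]
assignments = ["PCM bass", "FM piano", "TM strings"]  # one per range

def method_for_key(key_number):
    # bisect_right returns the index of the range, with each split point
    # acting as the boundary: keys below 24 fall in range 0, keys 24-47
    # in range 1, and keys 48 and above in range 2.
    return assignments[bisect.bisect_right(split_points, key_number)]
```

Pressing a key then selects the sound source method of whichever range the key number falls into, with no per-key table required.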
[0034] According to the musical tone waveform generation apparatuses of the eighth
and ninth aspects of the present invention, tone colors or sound source methods can
also be switched in accordance not with a split point but with music parts.
[0035] This invention can be more fully understood from the following detailed description
when taken in conjunction with the accompanying drawings, in which:
Fig. 1 is a block diagram showing the overall arrangement according to the first embodiment
of the present invention;
Fig. 2 is a block diagram showing the internal arrangement of a microcomputer;
Fig. 3 is a block diagram of a conventional D/A converter unit;
Fig. 4 is a block diagram of a D/A converter unit according to the first embodiment;
Fig. 5 is a timing chart in D/A conversion;
Figs. 6 to 8 are flow charts showing the overall operations of the first embodiment;
Fig. 9 is a schematic chart showing the relationship between the main operation flow
chart and interrupt processing;
Fig. 10 is a view showing storage areas in units of tone generation channels on a
RAM;
Fig. 11 is a schematic chart when a sound source processing method of each tone generation
channel is selected;
Fig. 12 shows a data format in units of sound source methods on the RAM;
Fig. 13 is an operation flow chart of sound source processing based on a PCM method;
Fig. 14 is an operation flow chart of sound source processing based on a DPCM method;
Figs. 15 and 16 are charts for explaining the principle when an interpolation value
XQ is calculated using a difference D and a present address AF in the PCM and DPCM methods, respectively;
Fig. 17 is an operation flow chart of sound source processing based on an FM method;
Fig. 18 is a chart showing an algorithm of the sound source processing based on the
FM method;
Fig. 19 is an operation flow chart of sound source processing based on a TM method;
Fig. 20 is a chart showing an algorithm of the sound source processing based on the
TM method;
Fig. 21 is a view showing an arrangement of some function keys (Part 1);
Fig. 22 is a view showing a data architecture of tone color parameters;
Fig. 23 is a view showing an arrangement of a buffer B and registers X and Y on a
RAM 2061;
Fig. 24 is an explanatory view of keyboard keys (64 keys);
Fig. 25 is an operation flow chart of an embodiment A of keyboard key processing;
Fig. 26 is an operation flow chart of an embodiment B of keyboard key processing;
Fig. 27 is a view showing an arrangement of some function keys (Part 2);
Fig. 28 is an operation flow chart of an embodiment C of keyboard key processing;
Fig. 29 is an operation flow chart of an embodiment D of keyboard key processing;
Fig. 30 is an operation flow chart of an embodiment A of demonstration performance
processing;
Fig. 31 is an operation flow chart of an embodiment B of demonstration performance
processing;
Figs. 32 and 33 are views showing assignment methods of X and Y tone colors to tone
generation channels;
Fig. 34 is a block diagram showing the overall arrangement according to an embodiment
of the present invention;
Fig. 35 is a block diagram showing an internal arrangement of a master CPU;
Fig. 36 is a block diagram showing an internal arrangement of a slave CPU;
Figs. 37 to 40 are flow charts showing operations of the overall arrangement of this
embodiment;
Fig. 41 is a schematic view showing the relationship among the main operation flow
charts and interrupt processing;
Fig. 42 is a diagram of a conventional D/A converter unit;
Fig. 43 is a diagram of a D/A converter unit according to this embodiment;
Fig. 44 is a timing chart in D/A conversion;
Fig. 45 illustrates an arrangement of a function key and a keyboard key;
Fig. 46 is an explanatory view of keyboard keys;
Fig. 47 shows storage areas in units of tone generation channels on a RAM;
Fig. 48 is a schematic diagram upon selection of a sound source processing method
of each tone generation channel;
Fig. 49 shows an architecture of data formats in units of sound source methods on
the RAM;
Fig. 50 shows buffer areas on the RAM;
Figs. 51 to 54 are charts showing algorithms in a modulation method;
Fig. 55 is an operation flow chart of sound source processing based on an FM method
(Part 2);
Fig. 56 is an operation flow chart of sound source processing based on a TM method
(Part 2);
Fig. 57 is an operation flow chart of a first modification of the modulation method;
Fig. 58 is an operation flow chart of operator 1 processing based on the FM method
according to the first modification;
Fig. 59 is a chart showing an arithmetic algorithm per operator in the operator 1
processing based on the FM method according to the first modification;
Fig. 60 is an operation flow chart of operator 1 processing based on the TM method
according to the first modification;
Fig. 61 is a chart showing an arithmetic algorithm per operator in the operator 1
processing based on the TM method according to the first modification;
Fig. 62 is an operation flow chart of algorithm processing according to the first
modification;
Fig. 63 is an operation flow chart of a second modification of the modulation method;
Fig. 64 is an operation flow chart of algorithm processing according to the second
modification;
Fig. 65 shows an arrangement of some function keys;
Figs. 66 and 67 show examples of assignments of sound source methods to tone generation
channels;
Fig. 68 is an operation flow chart of function key processing;
Fig. 69 is an operation flow chart of an embodiment A of ON event keyboard key processing;
Fig. 70 is an operation flow chart of an embodiment B of ON event keyboard
key processing; and
Fig. 71 is an operation flow chart of an embodiment of OFF event keyboard key processing.
[First Embodiment]
[0036] The first embodiment of the present invention will be described below with reference
to the accompanying drawings.
Arrangement of the First Embodiment
[0037] Fig. 1 is a block diagram showing the overall arrangement according to the first
embodiment of the present invention.
[0038] In Fig. 1, the entire apparatus is controlled by a microcomputer 1011. In particular,
not only control input processing for an instrument but also processing for generating
musical tones are executed by the microcomputer 1011, and no sound source circuit
for generating musical tones is required.
[0039] A switch unit 1041 comprising a keyboard 1021 and function keys 1031 serves as an
operation/input section of a musical instrument, and performance data input from the
switch unit 1041 are processed by the microcomputer 1011. Note that the function keys
1031 will be described in detail later.
[0040] A display unit 1091 includes red and green LEDs indicating which tone color on the
function keys 1031 is designated when a player determines a split point and sets different
tone colors to keys as will be described later. The display unit 1091 will be described
in detail later in a description of Fig. 21 or 26.
[0041] An analog musical tone signal generated by the microcomputer 1011 is smoothed by
a low-pass filter 1051, and the smoothed signal is amplified by an amplifier 1061.
Thereafter, the amplified signal is produced as a tone via a loudspeaker 1071. A power
supply circuit 1081 supplies a necessary power supply voltage to the low-pass filter
1051 and the amplifier 1061.
[0042] Fig. 2 is a block diagram showing the internal arrangement of the microcomputer 1011.
[0043] A control data/waveform data ROM 2121 stores musical tone control parameters such
as target values of envelope values (to be described later), musical tone waveform
data in respective sound source methods, musical tone difference data, modulated waveform
data, and the like. A command analyzer 2071 accesses the data on the control data/waveform
data ROM 2121 while sequentially analyzing the content of a program stored in a control
ROM 2011, thereby executing software sound source processing.
[0044] The control ROM 2011 stores a musical tone control program (to be described later),
and sequentially outputs program words (commands) stored at addresses designated by
a ROM address controller 2051 via a ROM address decoder 2021. More specifically, the
word length of each program word is 28 bits, and a next address method is employed.
In this method, a portion of each program word is input to the ROM address controller
2051 as lower bits (intra-page address) of an address to be read out next. Note that
the control ROM 2011 may comprise a CPU of a conventional program counter type.
[0045] The command analyzer 2071 analyzes operation codes of commands output from the control
ROM 2011, and supplies control signals to the respective units of the circuit so as
to execute the designated operations.
[0046] When an operand of a command from the control ROM 2011 designates a register, a RAM
address controller 2041 designates an address of a corresponding register in a RAM
2061. The RAM 2061 stores various musical tone control data (to be described later
with reference to Figs. 9 and 10) for eight tone generation channels, and various
buffers (to be described later), and is used in sound source processing (to be described
later).
[0047] When a command from the control ROM 2011 is an arithmetic command, an ALU unit 2081
and a multiplier 2091 respectively execute a subtraction/addition and logic arithmetic
operation, and a multiplication on the basis of an instruction from the command analyzer
2071.
[0048] An interrupt controller 2031 supplies an interrupt signal to the ROM address controller
2051 and a D/A converter unit 2131 at predetermined time intervals on the basis of
an internal hardware timer (not shown).
[0049] An input port 2101 and an output port 2111 are connected to the switch unit 1041
and the display unit 1091 (Fig. 1).
[0050] Various data read out from the control ROM 2011 or the RAM 2061 are supplied to the
ROM address controller 2051, the ALU unit 2081, the multiplier 2091, the control data/waveform
data ROM 2121, the D/A converter unit 2131, the input port 2101, and the output port
2111 via a bus. The outputs from the ALU unit 2081, the multiplier 2091, and the control
data/waveform data ROM 2121 are supplied to the RAM 2061 via the bus.
[0051] Fig. 4 shows the internal arrangement of the D/A converter unit 2131 shown in Fig.
2. Data of musical tones for one sampling period generated by sound source processing
are input to a latch 3011 via a data bus. When the clock input of the latch 3011 receives
a sound processing end signal from the command analyzer 2071 (Fig. 2), the musical
tone data for one sampling period on the data bus are latched by the latch 3011, as
shown in Fig. 5.
[0052] Since the time required for the sound source processing changes depending on execution
conditions of the sound source processing software, the timing at which the sound source
processing ends and the musical tone data are latched by the latch 3011 is not
fixed. For this reason, as shown in Fig. 3, the output from the latch 3011 cannot be
directly input to a D/A converter 3031.
[0053] In the first embodiment, as shown in Fig. 4, the musical tone signals output from
the latch 3011 are latched by a latch 3021 in response to interrupt signals generated
at the sampling clock interval by the interrupt controller 2031 (Fig. 2), and are
output to the D/A converter 3031 at predetermined time intervals.
[0054] Since a change in processing time in the respective sound source methods can be absorbed
by using the two latches, a complicated timing control program for outputting musical
tone data to the D/A converter can be omitted.
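The two-latch scheme can be illustrated with a small timing model. This is only a sketch under assumed time units, not the patent's hardware: the first latch (3011) captures a sample at a jittered moment inside each sampling period, and the second latch (3021) re-latches it on the fixed interrupt, so the D/A converter always sees updates at uniform intervals.

```python
# Sketch (assumed time units): modeling the two-latch scheme of Fig. 4.
# Latch 3011 captures a sample whenever processing ends (jittered time);
# latch 3021 re-latches on the fixed sampling interrupt, so the D/A
# converter sees a new value at exact multiples of the sampling period.

SAMPLING_PERIOD = 1000  # arbitrary time units per sampling period

def dac_times(processing_times):
    """For each sample, processing ends somewhere inside its period;
    the second latch presents it at the start of the next fixed period."""
    out = []
    for n, t_proc in enumerate(processing_times):
        t_ready = n * SAMPLING_PERIOD + t_proc   # jittered first-latch time
        t_dac = (n + 1) * SAMPLING_PERIOD        # fixed second-latch time
        assert t_ready < t_dac                   # must finish within the period
        out.append(t_dac)
    return out

# Processing time varies per sample, but the D/A update times stay uniform.
times = dac_times([300, 950, 120, 640])
assert times == [1000, 2000, 3000, 4000]
```

The design point is that timing jitter never propagates to the converter, which is why no timing control program is needed.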
Overall Operation of the First Embodiment
[0055] The overall operation of the first embodiment will be described below.
[0056] In the first embodiment, the microcomputer 1011 repetitively executes a series of
processing operations in steps S₅₀₂ to S₅₁₀, as shown in the main flow chart of Fig.
6. Sound source processing is executed as interrupt processing in practice. More specifically,
the program executed as the main flow chart shown in Fig. 6 is interrupted at predetermined
time intervals, and a sound source processing program for generating musical tone
signals for eight channels is executed based on the interrupt. Upon completion of
this processing, the musical tone signals for eight channels are added to each other,
and the sum signal is output from the D/A converter unit 2131 shown in Fig. 2. Thereafter,
the control returns from the interrupt state to the main flow. Note that the above-described
interrupt operation is periodically performed on the basis of the internal hardware
timer in the interrupt controller 2031 (Fig. 2). This period is equal to the sampling
period when musical tones are output.
[0057] The schematic operation of the first embodiment has been described. The overall operation
of the first embodiment will be described in detail below with reference to Figs.
6 to 8.
[0058] The main flow chart of Fig. 6 shows a flow of processing operations other than the
sound source processing, which are executed by the microcomputer 1011 in a non-interrupt
state from the interrupt controller 2031.
[0059] The power switch is turned on, and the contents of the RAM 2061 (Fig. 2) in the microcomputer
1011 are initialized (S₅₀₁).
[0060] Switches of the function keys 1031 (Fig. 1) externally connected to the microcomputer
1011 are scanned (S₅₀₂), and states of the respective switches are fetched from the
input port 2101 to a key buffer area in the RAM 2061. As a result of scanning, a function
key whose state is changed is discriminated, and processing of a corresponding function
is executed (S₅₀₃). For example, a musical tone number and an envelope number are
set, and if a rhythm performance function is provided as an optional function, a
rhythm number is set.
[0061] Thereafter, ON keyboard key data on the keyboard 1021 (Fig. 1) are fetched in the
same manner as the function keys described above (S₅₀₄), and keys whose states are
changed are discriminated, thereby executing key assignment processing (S₅₀₅). The
keyboard key processing is particularly associated with the present invention, and
will be described later.
[0062] When a demonstration performance key (not shown) of the function keys 1031 (Fig.
1) is depressed, demonstration performance data (sequencer data) are sequentially
read out from the control data/waveform data ROM 2121 to execute, e.g., key assignment
processing (S₅₀₆). When a rhythm start key is depressed, rhythm data are sequentially
read out from the control data/waveform data ROM 2121 to execute, e.g., key assignment
processing (S₅₀₇). The demonstration performance processing (S₅₀₆) and the rhythm
processing (S₅₀₇) are also particularly associated with the present invention, and
will be described in detail later.
[0063] Thereafter, timer processing to be described below is executed (S₅₀₈). More specifically,
a value of time data which is incremented by interrupt timer processing (S₅₁₂) (to
be described later) is discriminated. The time data value is compared with time control
sequencer data sequentially read out for demonstration performance control or time
control rhythm data read out for rhythm performance control, thereby executing time
control when a demonstration performance in step S₅₀₆ or a rhythm performance in step
S₅₀₇ is performed.
[0064] In tone generation processing in step S₅₀₉, pitch envelope processing, and the like
are executed. In this processing, an envelope is added to a pitch of a musical tone
to be subjected to tone generation processing, and pitch data is set in a corresponding
tone generation channel.
[0065] Furthermore, one flow cycle preparation processing is executed (S₅₁₀). In this processing,
processing for changing a state of a tone generation channel of a note number corresponding
to an ON event detected in the keyboard key processing in step S₅₀₅ to an ON event
state, and processing for changing a state of a tone generation channel of a note
number corresponding to an OFF event to a muting state, and the like are executed.
[0066] Interrupt processing will be described below with reference to Fig. 7.
[0067] When the program corresponding to the main flow shown in Fig. 6 is interrupted by
the interrupt controller 2031 shown in Fig. 2, processing of the program is interrupted,
and execution of the interrupt processing program shown in Fig. 7 is started. In this
case, the interrupt processing program is controlled so as not to rewrite the contents
of registers that are write-accessed by the main flow program of Fig. 6. Therefore,
the register save/restoration processing normally executed at the beginning and end
of interrupt processing can be omitted, and the transition between the processing of
the main flow chart shown in Fig. 6 and the interrupt processing can be performed
quickly.
[0068] Subsequently, in the interrupt processing, sound source processing is started (S₅₁₁).
The sound source processing is shown in Fig. 8. As a result, musical tone waveform
data obtained by accumulating tones for eight tone generation channels is obtained
in a buffer B (to be described later) of the RAM 2061 (Fig. 2).
[0069] In step S₅₁₂, interrupt timer processing is executed. In this processing, the value
of time data (not shown) on the RAM 2061 (Fig. 2) is incremented by utilizing the
fact that the interrupt processing shown in Fig. 7 is executed for every predetermined
sampling period. More specifically, a time elapsed from power-on can be detected based
on the value of the time data. The time data obtained in this manner is used in time
control in the timer processing in step S₅₀₈ in the main flow chart shown in Fig.
6, as described above.
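The interrupt timer processing amounts to a tick counter incremented once per sampling-period interrupt, so elapsed time is just the tick count divided by the sampling rate. A minimal sketch, with the sampling rate as an assumed value (the text does not specify one):

```python
# Sketch of the time-data bookkeeping of step S512: one increment per
# sampling-period interrupt; elapsed time since power-on follows directly.
SAMPLE_RATE_HZ = 8000  # assumed sampling rate, not given in the text

def elapsed_seconds(tick_count):
    """Convert the interrupt tick count into seconds since power-on."""
    return tick_count / SAMPLE_RATE_HZ

assert elapsed_seconds(8000) == 1.0
assert elapsed_seconds(4000) == 0.5
```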
[0070] In step S₅₁₃, the content of the buffer area is latched by the latch 3011 (Fig. 4)
of the D/A converter unit 2131.
[0071] Operations of the sound source processing executed in step S₅₁₁ in the interrupt
processing will be described below with reference to the flow chart shown in Fig.
8.
[0072] A waveform addition area on the RAM 2061 is cleared (S₅₁₃). Then, sound source processing
is executed in units of tone generation channels (S₅₁₄ to S₅₂₁). After the sound source
processing for the eighth channel is completed, waveform data obtained by adding those
for eight channels is obtained in a predetermined buffer area B. These processing
operations will be described in detail later.
[0073] Fig. 9 is a schematic flow chart showing the relationship among the processing operations
of the flow charts shown in Figs. 6 and 7. Given processing A (the same applies to
B, C,..., F) is executed (S₆₀₁). This "processing" corresponds to, e.g., "function
key processing", or "keyboard key processing" in the main flow chart of Fig. 6. Thereafter,
the control enters the interrupt processing, and sound source processing is started
(S₆₀₂). Thus, a musical tone signal for one sampling period obtained by accumulating
waveform data for eight tone generation channels can be obtained, and is output to
the D/A converter unit 2131. Thereafter, the control returns to some processing B
in the main flow chart.
[0074] The above-mentioned operations are repeated while executing sound source processing
for each of eight tone generation channels (S₆₀₄ to S₆₁₁). The repetition processing
continues as long as musical tones are being produced.
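One interrupt's worth of the Fig. 8 flow can be sketched as clearing the accumulation buffer B, letting each of the eight tone generation channels add its sample, and handing the sum to the D/A stage. The channel sample values below are placeholders; this is an illustration of the accumulation structure, not the actual per-channel processing.

```python
# Sketch of one sampling-period interrupt: clear buffer B, accumulate the
# eight tone generation channels, return the sum for the D/A latch.
NUM_CHANNELS = 8

def sound_source_interrupt(channel_samples):
    """Accumulate one sample from each tone generation channel into buffer B."""
    assert len(channel_samples) == NUM_CHANNELS
    buffer_b = 0                     # waveform addition area cleared
    for s in channel_samples:        # per-channel sound source processing
        buffer_b += s
    return buffer_b                  # latched into the D/A unit afterwards

assert sound_source_interrupt([10, -3, 0, 5, 0, 0, 2, 1]) == 15
```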
Data Architecture in Sound Source Processing
[0075] The sound source processing executed in step S₅₁₁ in Fig. 7 will be described in
detail below.
[0076] In the first embodiment, the microcomputer 1011 executes sound source processing
for eight tone generation channels. The sound source processing data for eight channels
are set in areas in units of tone generation channels of the RAM 2061 (Fig. 2), as
shown in Fig. 10.
[0077] The waveform data accumulation buffer B and tone color No. registers X and Y are
allocated on the RAM 2061, as shown in Fig. 23.
[0078] In this case, a sound source method is set in (assigned to) each tone generation
channel area shown in Fig. 10 by operations to be described in detail later, and thereafter,
control data from the control data/waveform data ROM 2121 are set in the area in data
formats in units of sound source methods, as shown in Fig. 12. The data formats in
the control data/waveform data ROM 2121 will be described in detail later with reference
to Fig. 22. In the first embodiment, different sound source methods can be assigned
to tone generation channels, as will be described later.
[0079] In Table 1 showing the data formats of the respective sound source methods shown
in Fig. 12, S indicates a sound source method No. as a number for identifying the
sound source methods. A represents an address designated when waveform data is read
out in the sound source processing, and A_I, A₁, and A₂ represent integral parts of
current addresses, and directly correspond to addresses of the control data/waveform
data ROM 2121 (Fig. 2) where waveform data are stored. A_F represents a decimal part
of the current address, and is used for interpolating waveform data read out from
the control data/waveform data ROM 2121. A_E and A_L respectively represent end and
loop addresses. P_I, P₁, and P₂ represent integral parts of pitch data, and P_F
represents a decimal part of pitch data. For example, P_I = 1 and P_F = 0 express
the pitch of an original tone, P_I = 2 and P_F = 0 express a pitch higher than the
original pitch by one octave, and P_I = 0 and P_F = 0.5 express a pitch lower by one
octave. X_P represents storage of previous sample data, and X_N represents storage
of the next sample data. D represents a difference between magnitudes of two adjacent
sample data, and E represents an envelope value. Furthermore, O represents an output
value. Various other control data will be described later in descriptions of sound
source methods.
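The (A_I, A_F) address pair advanced by pitch data (P_I, P_F) behaves like a fixed-point phase accumulator: a pitch of 2.0 reads the waveform twice as fast (one octave up), 0.5 half as fast (one octave down). A minimal sketch; the 16-bit fraction split is an assumption, since the text only fixes the integer/fraction semantics:

```python
# Sketch of the (A_I, A_F) / (P_I, P_F) bookkeeping as a single fixed-point
# accumulator with an assumed 16 fractional bits.
FRAC_BITS = 16
ONE = 1 << FRAC_BITS

def advance(addr, p_int, p_frac):
    """Add pitch (P_I, P_F) to the current address; p_frac is in [0, 1)."""
    return addr + (p_int << FRAC_BITS) + int(p_frac * ONE)

addr = 0
for _ in range(4):                     # P_I = 2, P_F = 0: one octave up
    addr = advance(addr, 2, 0.0)
assert addr >> FRAC_BITS == 8          # integer address advances 2 per sample

addr = 0
for _ in range(4):                     # P_I = 0, P_F = 0.5: one octave down
    addr = advance(addr, 0, 0.5)
assert addr >> FRAC_BITS == 2          # integer address advances 0.5 per sample
```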
[0080] In the first embodiment, when the main flow chart shown in Fig. 6 is executed, sound
source method No. data, and control data necessary for sound source processing of
the sound source method, e.g., pitch data, envelope data, and the like are set in
a corresponding tone generation channel area. In the sound source processing shown
in Fig. 8 executed as sound source processing in the interrupt processing shown in
Fig. 7, musical tone generation processing is executed while using the control data
set in the tone generation channel area. In this manner, a data communication between
the main flow program and the sound source processing program is performed via control
data (musical tone generation data) in the tone generation channel areas on the RAM
2061. For this reason, since access of one program to the tone generation channel
area can be performed regardless of an execution state of the other program, the two
programs can have substantially independent module arrangements, and hence, a simple
and efficient program architecture can be attained.
[0081] The sound source processing operations of the respective sound source methods executed
using the above-mentioned data architecture will be described below in turn. These
sound source processing operations are realized by analyzing and executing a sound
source processing program stored in the control ROM 2011 by the command analyzer 2071
of the microcomputer 1011. Assume that the processing is executed under this condition
unless otherwise specified.
[0082] In the flow chart shown in Fig. 8, when the sound source processing (one of steps
S₅₁₇ to S₅₂₄) for each channel is started, the sound source method No. data S of the
data in the data format (Table 1) shown in Fig. 12 stored in the corresponding tone
generation channel area of the RAM 2061 is discriminated to determine sound source
processing of a sound source method to be described below.
Sound Source Processing Based on PCM Method
[0083] When the sound source method No. data S indicates the PCM method, sound source processing
based on the PCM method shown in the operation flow chart of Fig. 13 is executed.
Variables in the flow chart are PCM data of Table 1 shown in Fig. 12, which data are
stored in the corresponding tone generation channel area (Fig. 10) on the RAM 2061
(Fig. 2).
[0084] Of an address group on the control data/waveform data ROM 2121 (Fig. 2) where PCM
waveform data are stored, the address where the waveform data as an object to be
currently processed is stored is assumed to be (A_I, A_F) shown in Fig. 15.
[0085] Pitch data (P_I, P_F) is added to the present address (S₁₀₀₁). The pitch data
corresponds to the type of a key determined as an ON key of the keyboard 1021 shown
in Fig. 1.
[0086] It is then checked if the integral part A_I of the sum address is changed (S₁₀₀₂).
If NO in step S₁₀₀₂, an interpolation data value O corresponding to the decimal part
A_F of the address is calculated by arithmetic processing D × A_F using the difference
D between sample data X_N and X_P at addresses (A_I+1) and A_I shown in Fig. 15
(S₁₀₀₇). Note that the difference D has already been obtained by the sound source
processing at the previous interrupt timing (see step S₁₀₀₆ to be described later).
[0087] The sample data X_P corresponding to the integral part A_I of the address is added
to the interpolation data value O to obtain a new sample data value O (corresponding
to X_Q in Fig. 15) corresponding to the current address (A_I, A_F) (S₁₀₀₈).
[0088] Thereafter, the sample data is multiplied with the envelope value E (S₁₀₀₉), and
the obtained value O is added to the content of the waveform data buffer B (Fig. 23)
in the RAM 2061 (Fig. 2) (S₁₀₁₀).
[0089] Thereafter, the control returns to the main flow chart shown in Fig. 6. The control
is interrupted in the next sampling period, and the operation flow chart of the sound
source processing shown in Fig. 13 is executed again. Thus, pitch data (P_I, P_F)
is added to the current address (A_I, A_F) (S₁₀₀₁).
[0090] The above-mentioned operations are repeated until the integral part A_I of the
address is changed (S₁₀₀₂).
[0091] Before the integral part is changed, the sample data X_P and the difference D are
left unchanged, and only the interpolation data value O is updated in accordance with
the address A_F. Thus, every time the address A_F is updated, new sample data X_Q is
obtained.
[0092] If the integral part A_I of the current address is changed (S₁₀₀₂) as a result of
addition of the current address (A_I, A_F) and the pitch data (P_I, P_F) in step
S₁₀₀₁, it is checked if the address A_I has reached or exceeded the end address A_E
(S₁₀₀₃).
[0093] If YES in step S₁₀₀₃, the next loop processing is executed. More specifically, a
value (A_I - A_E) as the difference between the updated current address and the end
address A_E is added to the loop address A_L to obtain a new current address (A_I,
A_F). A loop reproduction is started from the integral part A_I of the obtained new
current address (S₁₀₀₄). The end address A_E is the end address of the area of the
control data/waveform data ROM 2121 (Fig. 2) where the PCM waveform data are stored.
The loop address A_L is the address of the position from which a player wants to
repeat an output of a waveform. With the above-mentioned operations, known loop
processing is realized by the PCM method.
[0094] If NO in step S₁₀₀₃, the processing in step S₁₀₀₄ is not executed.
[0095] Sample data is then updated. In this case, sample data corresponding to the newly
updated current address A_I and the immediately preceding address (A_I-1) are read
out as X_N and X_P from the control data/waveform data ROM 2121 (Fig. 2) (S₁₀₀₅).
[0096] Furthermore, the difference so far is updated with the difference D between the
updated data X_N and X_P (S₁₀₀₆).
[0097] The following operation is as described above.
[0098] In this manner, waveform data by the PCM method for one tone generation channel is
generated.
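The per-sample PCM step of Fig. 13 can be sketched as follows. This is a simplified model under assumed data layout: `wave` stands in for the waveform ROM, and the difference D is recomputed every sample rather than only when the integer address changes, which yields the same values as the flow chart's caching of X_P and D.

```python
# Sketch of one PCM sample step: advance the fractional address by the pitch,
# wrap past the end address to the loop address, linearly interpolate between
# adjacent samples via the difference D, and scale by the envelope.
def pcm_step(state, wave, a_end, a_loop, envelope):
    a = state["addr"] + state["pitch"]          # S1001: add pitch (P_I, P_F)
    a_int, a_frac = int(a), a - int(a)
    if a_int >= a_end:                          # S1003/S1004: loop processing
        a_int = a_loop + (a_int - a_end)
    x_p = wave[a_int]                           # S1005: X_P at A_I
    d = wave[a_int + 1] - x_p                   # S1006: D = X_N - X_P
    o = x_p + d * a_frac                        # S1007/S1008: interpolation
    state["addr"] = a_int + a_frac
    return o * envelope                         # S1009: apply envelope E

wave = [0.0, 1.0, 0.0, -1.0, 0.0, 1.0]          # illustrative waveform table
state = {"addr": 0.0, "pitch": 0.5}             # pitch 0.5 = one octave down
out = [pcm_step(state, wave, a_end=4, a_loop=0, envelope=1.0) for _ in range(4)]
assert out == [0.5, 1.0, 0.5, 0.0]              # interpolated half-way points
```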
Sound Source Processing Based on DPCM Method
[0099] The sound source processing based on the DPCM method will be described below.
[0100] The operation principle of the DPCM method will be briefly described below with reference
to Fig. 16.
[0101] In Fig. 16, sample data X_P corresponding to an address A_I of the control
data/waveform data ROM 2121 (Fig. 2) is obtained by adding the sample data
corresponding to an address (A_I-1) (not shown) to the difference between the sample
data corresponding to the address (A_I-1) and the sample data corresponding to the
address A_I.
[0102] A difference D with the sample data at the next address (A_I+1) is written at the
address A_I of the control data/waveform data ROM 2121. The sample data at the next
address (A_I+1) is obtained by X_N = X_P + D.
[0103] In this case, if the current address is represented by (A_I, A_F) as shown in Fig.
16, sample data X_Q corresponding to the current address is obtained by
X_Q = X_P + D × A_F.
[0104] In this manner, in the DPCM method, a difference D between sample data corresponding
to the current address and the next address is read out from the control data/waveform
data ROM 2121, and is added to the current sample data to obtain the next sample data,
thereby sequentially forming waveform data.
[0105] Since a waveform such as a voice or a musical tone generally has only small
differences between adjacent samples, adopting the DPCM method allows quantization
with a smaller number of bits than the normal PCM method.
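The storage idea of paragraphs [0104] and [0105] can be shown with a small round trip: the ROM holds differences between adjacent samples rather than the samples themselves, and the decoder reconstructs each sample by accumulation. The sample values are illustrative.

```python
# Sketch of DPCM storage: encode keeps the first sample plus differences;
# decode reconstructs by the repeated accumulation X_N = X_P + D.
def dpcm_encode(samples):
    return [samples[0]] + [b - a for a, b in zip(samples, samples[1:])]

def dpcm_decode(first_and_diffs):
    x = first_and_diffs[0]
    out = [x]
    for d in first_and_diffs[1:]:
        x += d
        out.append(x)
    return out

samples = [0, 3, 5, 4, 1]
assert dpcm_decode(dpcm_encode(samples)) == samples
# The differences stay small even when sample values grow, which is why
# they can be quantized with fewer bits than full PCM samples.
assert max(abs(d) for d in dpcm_encode(samples)[1:]) == 3
```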
[0106] The operation of the above-mentioned DPCM method will be described below with
reference to the operation flow chart shown in Fig. 14. Variables in the flow chart
are DPCM data in Table 1 shown in Fig. 12, which data are stored in the corresponding
tone generation channel area (Fig. 10) on the RAM 2061 (Fig. 2).
[0107] Of the addresses on the control data/waveform data ROM 2121 where DPCM differential
waveform data are stored, the address where the data as an object to be currently
processed is stored is assumed to be (A_I, A_F) shown in Fig. 16.
[0108] Pitch data (P_I, P_F) is added to the present address (A_I, A_F) (S₁₁₀₁).
[0109] It is then checked if the integral part A_I of the sum address is changed (S₁₁₀₂).
If NO in step S₁₁₀₂, an interpolation data value O corresponding to the decimal part
A_F of the address is calculated by arithmetic processing D × A_F using the difference
D at the address A_I in Fig. 16 (S₁₁₁₄). Note that the difference D has already been
obtained by the sound source processing at the previous interrupt timing (see steps
S₁₁₀₆ and S₁₁₁₀ to be described later).
[0110] The interpolation data value O is added to sample data X_P corresponding to the
integral part A_I of the address to obtain a new sample data value O (corresponding
to X_Q in Fig. 16) corresponding to the current address (A_I, A_F) (S₁₁₁₅).
[0111] Thereafter, the sample data value O is multiplied with an envelope value E (S₁₁₁₆),
and the obtained value is added to a value stored in the waveform data buffer B (Fig.
23) in the RAM 2061 (Fig. 2) (S₁₁₁₇).
[0112] Thereafter, the control returns to the main flow chart shown in Fig. 6. The control
is interrupted in the next sampling period, and the operation flow chart of the sound
source processing shown in Fig. 14 is executed again. Thus, pitch data (P_I, P_F)
is added to the current address (A_I, A_F) (S₁₁₀₁).
[0113] The above-mentioned operations are repeated until the integral part A_I of the
address is changed.
[0114] Before the integral part is changed, the sample data X_P and the difference D are
left unchanged, and only the interpolation data value O is updated in accordance with
the address A_F. Thus, every time the address A_F is updated, new sample data X_Q is
obtained.
[0115] If the integral part A_I of the present address is changed (S₁₁₀₂) as a result of
addition of the current address (A_I, A_F) and the pitch data (P_I, P_F) in step
S₁₁₀₁, it is checked if the address A_I has reached or exceeded the end address A_E
(S₁₁₀₃).
[0116] If NO in step S₁₁₀₃, sample data corresponding to the integral part A_I of the
updated present address is calculated by the following loop processing in steps S₁₁₀₄
to S₁₁₀₇. More specifically, the value before the integral part A_I of the present
address is changed is stored in a variable "old A_I" (see the column of DPCM in Table
1 shown in Fig. 12). This can be realized by repeating the processing in step S₁₁₀₆
or S₁₁₁₃ (to be described later). The old A_I value is sequentially incremented in
step S₁₁₀₆, and the differential waveform data on the control data/waveform data ROM
2121 (Fig. 2) addressed by the incremented old A_I values are read out as D in step
S₁₁₀₇. The readout data D are sequentially accumulated on sample data X_P in step
S₁₁₀₅. When the old A_I value becomes equal to the integral part A_I of the changed
current address, the sample data X_P has a value corresponding to the integral part
A_I of the changed current address.
[0117] When the sample data X_P corresponding to the integral part A_I of the current
address is obtained in this manner, YES is determined in step S₁₁₀₄, and the control
starts the arithmetic processing of the interpolation value (S₁₁₁₄) described above.
[0118] The above-mentioned sound source processing is repeated at the respective interrupt
timings, and when the judgment in step S₁₁₀₃ is changed to YES, the control enters
the next loop processing.
[0119] An address value (A_I - A_E) exceeding the end address A_E is added to the loop
address A_L, and the obtained address is defined as the integral part A_I of a new
current address (S₁₁₀₈).
[0120] An operation for accumulating the difference D several times depending on the
advance in address from the loop address A_L is repeated to calculate sample data
X_P corresponding to the integral part A_I of the new current address. More
specifically, the sample data X_P is initially set as the value of sample data X_PL
(see the column of DPCM in Table 1 shown in Fig. 12) at the current loop address
A_L, and the old A_I is set as the value of the loop address A_L (S₁₁₀₉). The
following processing operations in steps S₁₁₁₀ to S₁₁₁₃ are then repeated. More
specifically, the old A_I value is sequentially incremented in step S₁₁₁₃, and the
differential waveform data on the control data/waveform data ROM 2121 designated by
the incremented old A_I values are read out as data D. The data D are sequentially
accumulated on the sample data X_P in step S₁₁₁₂. When the old A_I value becomes
equal to the integral part A_I of the new current address, the sample data X_P has
a value corresponding to the integral part A_I of the new current address after the
loop processing.
[0121] When the sample data X_P corresponding to the integral part A_I of the new current
address is obtained in this manner, YES is determined in step S₁₁₁₁, and the control
enters the above-mentioned arithmetic processing of the interpolation value (S₁₁₁₄).
[0122] As described above, waveform data by the DPCM method for one tone generation channel
is generated.
Sound Source Processing Based on FM Method
[0123] The sound source processing based on the FM method will be described below.
[0124] In the FM method, hardware or software elements having the same contents, called
"operators", are normally used, and are connected based on connection rules, called
algorithms, thereby generating musical tones. In the first embodiment, the FM method
is realized by a software program.
[0125] The operation of one embodiment executed when the sound source processing is performed
using two operators will be described below with reference to the operation flow chart
shown in Fig. 17. The algorithm of the processing is shown in Fig. 18. Variables in
the flow chart are FM data in Table 1 shown in Fig. 12, which data are stored in the
corresponding tone generation channel area (Fig. 10) on the RAM 2061 (Fig. 2).
[0126] First, processing of an operator 2 (OP2) as a modulator is performed. In pitch processing
(processing for accumulating pitch data for determining an incremental width of an
address for reading out waveform data stored in the ROM 2121), since no interpolation
is performed unlike in the PCM method, an address consists of only an integral address
A₂. More specifically, modulation waveform data are stored in the control data/waveform
data ROM 2121 (Fig. 2) at sufficiently fine incremental widths.
[0127] Pitch data P₂ is added to the current address A₂ (S₁₃₀₁).
[0128] A feedback output F_O2 is added to the address A₂ as a modulation input to obtain
a new address A_M2 (S₁₃₀₂). The feedback output F_O2 has already been obtained upon
execution of processing in step S₁₃₀₅ (to be described later) at the immediately
preceding interrupt timing.
[0129] The value of a sine wave corresponding to the address A_M2 (phase) is calculated.
In practice, sine wave data are stored in the control data/waveform data ROM 2121,
and are obtained by addressing the ROM 2121 by the address A_M2 to read out the
corresponding data (S₁₃₀₃).
[0130] Subsequently, the sine wave data is multiplied with an envelope value E₂ to obtain
an output O₂ (S₁₃₀₄).
[0131] Thereafter, the output O₂ is multiplied with a feedback level F_L2 to obtain a
feedback output F_O2 (S₁₃₀₅). In the first embodiment, this output F_O2 serves as an
input to the operator 2 (OP2) at the next interrupt timing.
[0132] The output O₂ is multiplied with a modulation level M_L2 to obtain a modulation
output M_O2 (S₁₃₀₆). The modulation output M_O2 serves as a modulation input to an
operator 1 (OP1).
[0133] The control then enters processing of the operator 1 (OP1). This processing is substantially
the same as that of the operator 2 (OP2) described above, except that there is no
modulation input based on the feedback output.
[0134] The present address A₁ of the operator 1 (OP1) is added to pitch data P₁ (S₁₃₀₇),
and the sum is added to the above-mentioned modulation output M_O2 to obtain a new
address A_M1 (S₁₃₀₈).
[0135] The value of sine wave data corresponding to this address A_M1 (phase) is read
out from the control data/waveform data ROM 2121 (S₁₃₀₉), and is multiplied with an
envelope value E₁ to obtain a musical tone waveform output O₁ (S₁₃₁₀).
[0136] This output O₁ is added to a value held in the buffer B (Fig. 23) in the RAM 2061
(S₁₃₁₁), thus completing the FM processing for one tone generation channel.
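One pass of the two-operator algorithm above (steps S₁₃₀₁ to S₁₃₁₁) might be sketched in Python as follows. The sine ROM look-up is replaced here by a direct call to math.sin, and all names and numeric scalings are illustrative assumptions, not values from the embodiment.

```python
import math

# One interrupt timing of the two-operator FM flow: operator 2 (modulator)
# feeds its output, scaled by the modulation level ML2, into operator 1
# (carrier); its feedback output FO2 is kept for the next call.

def fm_channel_step(state, P1, P2, E1, E2, FL2, ML2):
    # --- operator 2 (modulator) ---
    state["A2"] += P2                      # pitch accumulation (S1301)
    AM2 = state["A2"] + state["FO2"]       # add feedback input (S1302)
    O2 = math.sin(AM2)                     # sine table look-up (S1303)
    O2 *= E2                               # apply envelope (S1304)
    state["FO2"] = O2 * FL2                # feedback for next timing (S1305)
    MO2 = O2 * ML2                         # modulation output (S1306)
    # --- operator 1 (carrier) ---
    state["A1"] += P1                      # pitch accumulation (S1307)
    AM1 = state["A1"] + MO2                # add modulation input (S1308)
    O1 = math.sin(AM1) * E1                # look-up and envelope (S1309-S1310)
    state["B"] += O1                       # accumulate into buffer B (S1311)
    return state

state = {"A1": 0.0, "A2": 0.0, "FO2": 0.0, "B": 0.0}
for _ in range(4):                         # four consecutive interrupt timings
    fm_channel_step(state, P1=0.10, P2=0.25, E1=1.0, E2=1.0, FL2=0.3, ML2=2.0)
```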
Sound Source Processing Based on TM (Triangular Wave Modulation) Method (Part 1)
[0137] The sound source processing based on the TM method will be described below.
The principle of the TM method will be described below.
[0138] The FM method described above is based on the following formula:

e(t) = sin(ω_ct + I(t)·sinω_mt)

where ω_ct is the carrier wave phase angle (carrier signal), sinω_mt is the modulation
wave phase angle (modulation signal), and I(t) is the modulation index.
[0140] In the TM method, the above-mentioned triangular wave function is modulated by a
sum signal obtained by adding a carrier signal generated by the above-mentioned
function f_c(t) to the modulation signal sinω_mt at a ratio indicated by the modulation
index I(t). In this manner, when the value of the modulation index I(t) is 0, a sine
wave can be generated, and as the value of I(t) is increased, a very deeply modulated
waveform can always be generated. Various other signals may be used in place of the
modulation signal sinω_mt, and, as will be described later, the same operator's output
in the previous arithmetic processing may be fed back at a predetermined feedback
level, or an output from another operator may be input.
[0141] The sound source processing based on the TM method according to the above-mentioned
principle will be described below with reference to the operation flow chart shown
in Fig. 19. The sound source processing is also performed using two operators like
in the FM method shown in Figs. 17 and 18, and the algorithm of the processing is
shown in Fig. 20. Variables in the flow chart are TM format data in Table 1 shown
in Fig. 12, which data are stored in the corresponding tone generation channel area
(Fig. 10) on the RAM 2061 (Fig. 2).
[0142] First, processing of an operator 2 (OP2) as a modulator is performed. In pitch processing,
since no interpolation is performed unlike in the PCM method, an address consists
of only an integral address A₂.
[0143] The present address A₂ is added to pitch data P₂ (S₁₄₀₁).
[0144] Modified sine wave data corresponding to the address A₂ (phase) is read out from
the control data/waveform data ROM 2121 (Fig. 2) by the modified sine conversion f_c,
and is output as a carrier signal O₂ (S₁₄₀₂).
[0145] Subsequently, the carrier signal O₂ is added to the feedback output F_O2 as a
modulation signal, and the sum signal is output as a new address O₂ (S₁₄₀₃). The
feedback output F_O2 has already been obtained upon execution of processing in step
S₁₄₀₆ (to be described later) at the immediately preceding interrupt timing.
[0146] The value of a triangular wave corresponding to the carrier signal O₂ is calculated.
In practice, the above-mentioned triangular wave data are stored in the control data/waveform
data ROM 2121 (Fig. 2), and are obtained by addressing the ROM 2121 by the address
O₂ to read out the corresponding triangular wave data (S₁₄₀₄).
[0147] Subsequently, the triangular wave data is multiplied with an envelope value E₂ to
obtain an output O₂ (S₁₄₀₅).
[0148] Thereafter, the output O₂ is multiplied with a feedback level F_L2 to obtain a
feedback output F_O2 (S₁₄₀₆). In the first embodiment, the output F_O2 serves as an
input to the operator 2 (OP2) at the next interrupt timing.
[0149] The output O₂ is multiplied with a modulation level M_L2 to obtain a modulation
output M_O2 (S₁₄₀₇). The modulation output M_O2 serves as a modulation input to an
operator 1 (OP1).
[0150] The control then enters processing of the operator 1 (OP1). This processing is substantially
the same as that of the operator 2 (OP2) described above, except that there is no
modulation input based on the feedback output.
[0151] The present address A₁ of the operator 1 is added to pitch data P₁ (S₁₄₀₈), and
the sum is subjected to the above-mentioned modified sine conversion to obtain a
carrier signal O₁ (S₁₄₀₉).
[0152] The carrier signal O₁ is added to the above-mentioned modulation output M_O2 to
obtain a new value O₁ (S₁₄₁₀), and the value O₁ is subjected to triangular wave
conversion (S₁₄₁₁). The converted data is multiplied with an envelope value E₁ to
obtain a musical tone waveform output O₁ (S₁₄₁₂).
[0153] The output O₁ is added to a value held in the buffer B (Fig. 23) in the RAM 2061
(Fig. 2) (S₁₄₁₃), thus completing the TM processing for one tone generation channel.
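A sketch of the TM operator pair following steps S₁₄₀₁ to S₁₄₁₃ above. The concrete shapes of the triangular wave function and the modified sine conversion f_c are assumptions chosen so that tri(fc(x)) equals sin(x), giving a pure sine wave when the modulation input is zero, as paragraph [0140] requires.

```python
import math

def tri(y):
    """Triangular wave of period 2*pi and amplitude 1 (the modulated function)."""
    return (2.0 / math.pi) * math.asin(math.sin(y))

def fc(x):
    """Modified sine conversion: chosen here so that tri(fc(x)) == sin(x)."""
    return (math.pi / 2.0) * math.sin(x)

def tm_channel_step(state, P1, P2, E1, E2, FL2, ML2):
    # --- operator 2 (modulator) ---
    state["A2"] += P2                  # pitch accumulation (S1401)
    O2 = fc(state["A2"])               # modified sine conversion (S1402)
    O2 = O2 + state["FO2"]             # add feedback as modulation signal (S1403)
    O2 = tri(O2)                       # triangular wave conversion (S1404)
    O2 *= E2                           # envelope (S1405)
    state["FO2"] = O2 * FL2            # feedback output for next timing (S1406)
    MO2 = O2 * ML2                     # modulation output (S1407)
    # --- operator 1 (carrier) ---
    state["A1"] += P1                  # pitch accumulation (S1408)
    O1 = fc(state["A1"])               # modified sine conversion (S1409)
    O1 = tri(O1 + MO2)                 # add modulation, tri-convert (S1410-S1411)
    O1 *= E1                           # envelope (S1412)
    state["B"] += O1                   # accumulate into buffer B (S1413)
    return state
```

With FL2 and ML2 both zero the carrier output reduces to tri(fc(A₁)) = sin(A₁); raising ML2 deepens the modulation, mirroring the behaviour of the modulation index I(t).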
[0154] The sound source processing operations based on four methods, i.e., the PCM, DPCM,
FM, and TM methods have been described. Of these methods, the FM and TM methods are
modulation methods, and, in the above examples, two-operator processing operations
are executed based on the algorithms shown in Figs. 18 and 20. However, in sound source
processing in an actual performance, more operators may be used, and the algorithms
may be more complicated.
Summary of Keyboard Key Processing
[0155] The operations of keyboard key processing (S₅₀₅) in the main flow chart shown in
Fig. 6 when an actual electronic musical instrument is played will be described in
detail below.
[0156] In the above-described sound source processing, data in units of sound source methods
(Fig. 12) are set in the corresponding tone generation channel areas (Fig. 10) on
the RAM 2061 (Fig. 2) by the function keys 1031 (Fig. 1). The function keys 1031 are
connected to, e.g., an operation panel of the electronic musical instrument via the
input port 2101 (Fig. 2).
[0157] In the first embodiment, split points based on key codes and velocities, and two
tone colors are designated in advance, thus allowing characteristic assignment of
tone colors to the tone generation channels.
[0158] The split points and the tone colors are designated, as shown in Fig. 21 or 27.
[0159] Fig. 21 shows an arrangement of some function keys 1031 (Fig. 1). A keyboard split
point designation switch 15011 comprises a slide switch which has a click feeling,
and can designate a split point based on key codes of ON keys in units of keyboard
keys. When two tone colors, e.g., "piano" and "guitar", are designated as X and Y tone
colors by tone color switches 15021, the X tone color is designated for a bass tone
range, and the Y tone color is designated for a high tone range to have the above-mentioned
split point as a boundary. In this case, a tone color designated first is set as the
X tone color, and for example, a red LED is turned on. A tone color designated next
is set as the Y tone color, and a green LED is turned on. The LEDs correspond to the
display unit 1091 (Fig. 1).
[0160] A split point based on velocities is designated by a velocity split point designation
switch 20011 shown in Fig. 27. For example, when the switch 20011 is set at velocity
= 60, an X tone color is designated for ON events having a velocity of 60 or less,
and a Y tone color is designated for ON events having a velocity higher than 60. In
this case, the X and Y tone colors are designated by tone color switches 20021 (Fig.
27) in the same manner as in Fig. 21 (the case of a split point based on key codes).
[0161] The arrangement shown in Fig. 21 or 27 can constitute an independent embodiment.
However, an embodiment having both these functions may be realized. In order to allow
the above-mentioned tone color setting operations, the control data/waveform data
ROM 2121 (Fig. 2) stores various tone color parameters in data formats shown in Fig.
22. More specifically, tone color parameters for the four sound source methods, i.e.,
the PCM, DPCM, FM, and TM methods are stored in units of instruments corresponding
to the tone color switches 15021 of "piano" as the tone color No. 1, "guitar" as the
tone color No. 2, and the like shown in Fig. 21. The tone color parameters for the
respective sound source methods are stored in the data formats in units of sound source
methods shown in Fig. 12. On the other hand, the buffer B for accumulating waveform
data for eight tone generation channels, and the tone color No. registers for holding
the tone color Nos. of the X and Y tone colors are allocated on the RAM 2061 (Fig.
2).
[0162] Tone color parameters in units of sound source methods, which have the data formats
shown in Fig. 22, are set in the tone generation channel areas (Fig. 10) for the eight
channels of the RAM 2061, and sound source processing is executed based on these parameters.
Processing operations for assigning tone color parameters to the tone generation channels
in accordance with ON events on the basis of the split point and the two, i.e., X
and Y tone colors designated by the function keys shown in Fig. 21 or 27 will be described
below in turn.
Embodiment A of Keyboard Key Processing
[0163] The embodiment A of keyboard key processing will be described below.
[0164] The embodiment A is for an embodiment having the arrangement shown in Fig. 21 as
some function keys 1031 shown in Fig. 1. Based on an operation of the keyboard split
point designation switch 15011 shown in Fig. 21 by a player, key codes of ON keys
are split into two groups at the split point. Then, musical tone signals in two, i.e.,
X and Y tone colors designated upon operation of the tone color switches 15021 (Fig.
21) by the player are generated. Furthermore, one of the four sound source methods
is selected in accordance with the magnitude of a velocity (corresponding to an ON
key speed) obtained upon an ON event of a key on the keyboard 1021 (Fig. 1). Tone
color generation is performed on the basis of the tone colors and the sound source
method determined in this manner.
[0165] In the embodiment A, as shown in Fig. 32, musical tone signals in the X tone color
are generated using the first to fourth tone generation channels (ch1 to ch4), and
musical tone signals in the Y tone color are generated using the fifth to eighth tone
generation channels (ch5 to ch8).
[0166] Note that operations of the keyboard split point designation switch 15011 and the
tone color switches 15021 shown in Fig. 21 by the player are detected in the function
key scanning processing in step S₅₀₂ in the main flow chart of Fig. 6, and in the
function key processing in step S₅₀₃ in Fig. 6, key codes corresponding to the operation
states are held in registers (not shown) on the RAM 2061. In addition, the X and Y
tone colors are held in the X and Y tone color No. registers (Fig. 23) in the RAM
2061.
[0167] Fig. 25 is an operation flow chart of the embodiment A of the keyboard key processing
in step S₅₀₅ in the main flow chart shown in Fig. 6.
[0168] It is checked if a key code of a key determined as an "ON key" in step S₅₀₄ in the
main flow chart shown in Fig. 6 is equal to or smaller than that at the split point
designated in advance (S₁₈₀₁).
[0169] If YES in step S₁₈₀₁, tone color parameters of the X tone color designated beforehand
by the player are set in one of the first to fourth tone generation channels (Fig.
32) by the following processing operations in steps S₁₈₀₂ to S₁₈₀₅ and S₁₈₁₀ to S₁₈₁₃.
It is checked if the first to fourth tone generation channels include an empty channel
(S₁₈₀₂).
[0170] If it is determined that there is no empty channel, and NO is determined in step
S₁₈₀₂, no assignment is performed.
[0171] If it is determined that there is an empty channel, and YES in step S₁₈₀₂, tone color
parameters for the X tone color, and corresponding to one of the PCM, DPCM, TM, and
FM methods are set in the empty channel in accordance with the velocity value as follows.
[0172] It is checked if the velocity value of a key determined as an "ON key" in step S₅₀₄
in the main flow chart in Fig. 6 is equal to or smaller than 63 (almost corresponding
to mezzo piano mp) (S₁₈₀₃).
[0173] If YES in step S₁₈₀₃, i.e., if it is determined that the velocity value is equal
to or smaller than 63, it is then checked if the value is equal to or smaller than
31 (almost corresponding to piano p) (S₁₈₀₅).
[0174] If YES in step S₁₈₀₅, i.e., if it is determined that the velocity value V falls within
a range of 0 ≦ V ≦ 31, the tone color parameters for the X tone color are set in the
FM format shown in Fig. 12 in one tone generation channel area (empty channel area)
of the first to fourth channels (Fig. 2) to which the ON key is assigned on the RAM
2061. More specifically, sound source method No. data S representing the FM method
is set in the first area of the corresponding tone generation channel area (see the
column of FM in Fig. 12). Then, the tone color parameters corresponding to the tone
color of the tone color No. presently stored in the X tone color No. register (Fig.
23) on the RAM 2061 are read out from a data architecture portion shown in Fig. 22
of the control data/waveform data ROM 2121, and are set in the second and subsequent
areas of the tone generation channel area (S₁₈₁₃).
[0175] If NO in step S₁₈₀₅, i.e., if it is determined that the velocity value falls within
a range of 31 < V ≦ 63, tone color parameters for the X tone color are set in the TM
format shown in Fig. 12 in the tone generation channel area on the RAM 2061 to which
the ON key is assigned (S₁₈₁₂). In this case, the parameters are set in the same manner
as in step S₁₈₁₃.
[0176] If NO in step S₁₈₀₃, it is then checked if the velocity value is equal to or smaller
than 95 (almost corresponding to mezzo forte mf) (S₁₈₀₄).
[0177] If YES in step S₁₈₀₄, i.e., if it is determined that the velocity value V falls within
a range of 63 < V ≦ 95, tone color parameters for the X tone color are set in the
DPCM format shown in Fig. 12 in the tone generation channel area on the RAM 2061 to
which the ON key is assigned (S₁₈₁₁). In this case, the parameters are set in the same
manner as in step S₁₈₁₃.
[0178] If NO in step S₁₈₀₄, i.e., if it is determined that the velocity value V falls within
a range of 95 < V ≦ 127, tone color parameters for the X tone color are set in the
PCM format shown in Fig. 12 in the tone generation channel area on the RAM 2061 to
which the ON key is assigned (S₁₈₁₀). In this case, the parameters are set in the
same manner as in step S₁₈₁₃.
[0179] On the other hand, if NO in first step S₁₈₀₁, tone color parameters for the Y tone
color designated in advance by the player are set in one of the fifth to eighth tone
generation channels (Fig. 32) by the following processing in steps S₁₈₀₆ to S₁₈₀₉
and S₁₈₁₄ to S₁₈₁₇.
[0180] It is checked if the fifth to eighth tone generation channels include an empty channel
(S₁₈₀₆).
[0181] If it is determined that there is no empty channel, and NO is determined in step
S₁₈₀₆, no assignment is performed.
[0182] If it is determined that there is an empty channel, and YES is determined in step
S₁₈₀₆, tone color parameters for the Y tone color, and corresponding to one of the
PCM, DPCM, TM, and FM methods are set in the empty channel in accordance with the
velocity value as follows.
[0183] First, it is checked if the velocity value of an ON key is equal to or smaller than
63 (S₁₈₀₇).
[0184] If YES in step S₁₈₀₇, i.e., if it is determined that the velocity value is equal
to or smaller than 63, it is then checked if the value is equal to or smaller than
31 (S₁₈₀₈).
[0185] If YES in step S₁₈₀₈, i.e., if it is determined that the velocity value V falls within
a range of 0 ≦ V ≦ 31, tone color parameters for the Y tone color are set in the FM
format in Fig. 12 in one of the fifth to eighth channels to which the ON key is assigned.
More specifically, sound source method No. data S representing the FM method is set
in the first area of the corresponding tone generation channel area (see the column
of FM in Fig. 12). Then, the tone color parameters corresponding to the tone color
of the tone color No. presently stored in the Y tone color No. register (Fig. 23)
on the RAM 2061 are read out from a data architecture portion shown in Fig. 22 of
the control data/waveform data ROM 2121, and are set in the second and subsequent
areas of the tone generation channel area (S₁₈₁₄).
[0186] If NO in step S₁₈₀₈, i.e., if it is determined that the velocity value falls within
a range of 31 < V ≦ 63, tone color parameters for the Y tone color are set in the
TM format shown in Fig. 12 in the tone generation channel area on the RAM 2061 to
which the ON key is assigned (S₁₈₁₅). In this case, the parameters are set in the
same manner as in step S₁₈₁₄.
[0187] If NO in step S₁₈₀₇, it is checked if the velocity value is equal to or smaller than
95 (S₁₈₀₉).
[0188] If YES in step S₁₈₀₉, i.e., if it is determined that the velocity value V falls within
a range of 63 < V ≦ 95, tone color parameters for the Y tone color are set in the
DPCM format shown in Fig. 12 in the tone generation channel area on the RAM 2061 to
which the ON key is assigned (S₁₈₁₆). In this case, the parameters are set in the
same manner as in step S₁₈₁₄.
[0189] If NO in step S₁₈₀₉, i.e., if it is determined that the velocity value V falls within
a range of 95 < V ≦ 127, tone color parameters for the Y tone color are set in the
PCM format shown in Fig. 12 in the tone generation channel area on the RAM 2061 to
which the ON key is assigned (S₁₈₁₇). In this case, the parameters are set in the
same manner as in step S₁₈₁₄.
[0190] As described above, one of the X and Y tone colors is selected in accordance with
whether the key code is lower or higher than the split point, and one of the four
sound source methods is selected in accordance with the magnitude of an ON key velocity,
thus generating musical tones.
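The branch structure of Fig. 25 described above can be condensed as follows. This is an illustrative sketch: the function names and the reduction of channel bookkeeping to lists of free channel numbers are assumptions, while the split-point test and the velocity thresholds follow steps S₁₈₀₁ to S₁₈₁₇.

```python
# Embodiment-A assignment logic: the key code picks the X or Y tone color,
# and the velocity of the ON event picks the sound source method.

def select_method(velocity):
    """Map an ON-event velocity (0-127) to a sound source method."""
    if velocity <= 31:
        return "FM"      # softest strokes, up to about piano (p)
    if velocity <= 63:
        return "TM"      # up to about mezzo piano (mp)
    if velocity <= 95:
        return "DPCM"
    return "PCM"         # loudest strokes

def assign(key_code, velocity, split_point, free_x, free_y):
    """Return (tone color, method, channel) or None if no channel is empty."""
    if key_code <= split_point:      # S1801: bass side -> X tone color
        color, pool = "X", free_x    # channels ch1 to ch4 (Fig. 32)
    else:
        color, pool = "Y", free_y    # channels ch5 to ch8
    if not pool:                     # S1802/S1806: no empty channel
        return None
    return color, select_method(velocity), pool.pop(0)

print(assign(40, 20, 60, [1, 2, 3, 4], [5, 6, 7, 8]))   # → ('X', 'FM', 1)
```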
Embodiment B of Keyboard Key Processing
[0191] The embodiment B of the keyboard key processing will be described below.
[0192] In the embodiment A described above, as shown in Fig. 32, the tone generation channels
to which the X and Y tone colors are assigned are fixed as the first to fourth tone
generation channels and the fifth to eighth tone generation channels, respectively.
In the embodiment B, channels to which each tone color is assigned are not fixed,
and the X and Y tone colors are sequentially assigned to empty channels, as shown
in Fig. 33.
[0193] Fig. 26 is an operation flow chart of the embodiment B of the keyboard key processing
in step S₅₀₅ in the main flow chart shown in Fig. 6. As shown in Fig. 26, it is checked
if the first to eighth channels include an empty channel (S₁₉₀₁). If there is an empty
channel, tone color assignment is performed. The processing operations in steps S₁₉₀₂
to S₁₉₁₆ are the same as those in steps S₁₈₀₁, S₁₈₀₃ to S₁₈₀₅, and S₁₈₀₆ to S₁₈₁₇ in
the embodiment A.
[0194] According to the embodiment B, flexible tone color assignment to the tone generation
channels can be performed.
Embodiment C of Keyboard Key Processing
[0195] The embodiment C of the keyboard key processing will be described below.
[0196] The embodiment C corresponds to a case wherein processing for a key code and processing
for a velocity in the embodiment A are replaced.
[0197] More specifically, the embodiment C is for an embodiment having an arrangement shown
in Fig. 27 as some function keys 1031 shown in Fig. 1, and velocities of ON keys are
split into two groups at the split point upon operation of the velocity split point
designation switch 20011 (Fig. 27) by the player. Then, musical tone signals are generated
in the two, i.e., X and Y tone colors designated upon operation of the tone color
switches 20021 (Fig. 27) by the player. In this case, one of the four sound source
methods is selected in accordance with a key code value of an ON key on the keyboard
1021 (Fig. 1) by the player. Tone color generation is performed in accordance with
the tone colors and the sound source method determined in this manner. The X and Y
tone colors are assigned to the tone generation channels, as shown in Fig. 32, in
the same manner as in the embodiment A.
[0198] Fig. 28 is an operation flow chart of the embodiment C of the keyboard key processing
in step S₅₀₅ in the main flow chart of Fig. 6.
[0199] It is checked if the velocity of a key determined as an "ON key" in step S₅₀₄ in
the main flow chart in Fig. 6 is equal to or smaller than the velocity at the split
point designated in advance by the player (S₂₁₀₁).
[0200] If YES in step S₂₁₀₁, tone color parameters for the X tone color designated in advance
by the player are set in one of the first to fourth tone generation channels (Fig.
32) by the following processing in steps S₂₁₀₂ to S₂₁₀₅ and S₂₁₁₀ to S₂₁₁₃.
[0201] It is checked if the first to fourth tone generation channels include an empty channel
(S₂₁₀₂).
[0202] If it is determined that there is no empty channel, and NO is determined in step
S₂₁₀₂, no assignment is performed.
[0203] If it is determined that there is an empty channel, and YES is determined in step
S₂₁₀₂, tone color parameters for the X tone color, and corresponding to one of the
PCM, DPCM, TM, and FM methods are set in the empty channel in accordance with the
key code value as follows.
[0204] It is checked if the key code value of a key determined as an "ON key" in step S₅₀₄
in the main flow chart in Fig. 6 is equal to or larger than 32 (S₂₁₀₃).
[0205] If YES in step S₂₁₀₃, i.e., if it is determined that the key code value is equal
to or larger than 32, it is then checked if the value is equal to or larger than 48
(S₂₁₀₅).
[0206] If YES in step S₂₁₀₅, i.e., if it is determined that the key code value K falls within
a range of 48 ≦ K ≦ 63 (63 = maximum value), tone color parameters for the X tone
color are set in the FM format shown in Fig. 12 in one tone generation channel area
of the first to fourth channels on the RAM 2061 (Fig. 2) to which the ON key is
assigned (S₂₁₁₃). In this case, the parameters are set in the same manner as in step
S₁₈₁₃ in the embodiment A.
[0207] If NO in step S₂₁₀₅, i.e., if the key code value K falls within a range of 32 ≦ K
< 48, tone color parameters for the X tone color are set in the TM format shown in
Fig. 12 in the tone generation channel area on the RAM 2061 to which the ON key is
assigned (S₂₁₁₂). In this case, the parameters are set in the same manner as in step
S₁₈₁₃ in the embodiment A.
[0208] If NO in step S₂₁₀₃, it is checked if the key code value is equal to or larger than
16 (S₂₁₀₄).
[0209] If YES in step S₂₁₀₄, i.e., if it is determined that the key code value K falls within
a range of 16 ≦ K < 32, tone color parameters for the X tone color are set in the
DPCM format shown in Fig. 12 in the tone generation channel area on the RAM 2061 to
which the ON key is assigned (S₂₁₁₁). In this case, the parameters are set in the
same manner as in step S₁₈₁₃ in the embodiment A.
[0210] Furthermore, if NO in step S₂₁₀₄, i.e., if it is determined that the key code value
K falls within a range of 0 ≦ K < 16, tone color parameters for the X tone color are
set in the PCM format shown in Fig. 12 in the tone generation channel area on the
RAM 2061 to which the ON key is assigned (S₂₁₁₀). In this case, the parameters are
set in the same manner as in step S₁₈₁₃ in the embodiment A.
[0211] If NO in first step S₂₁₀₁, tone color parameters for the Y tone color designated
in advance by the player are set in one of the fifth to eighth tone generation channels
(Fig. 32) by the following processing in steps S₂₁₀₆ to S₂₁₀₉ and S₂₁₁₄ to S₂₁₁₇.
[0212] It is checked if the fifth to eighth tone generation channels include an empty channel
(S₂₁₀₆).
[0213] If it is determined that there is no empty channel, and NO is determined in step
S₂₁₀₆, no assignment is performed.
[0214] If there is an empty channel, and YES is determined in step S₂₁₀₆, it is checked
in the processing in steps S₂₁₀₇ to S₂₁₀₉ having the same judgment conditions as those
in steps S₂₁₀₃ to S₂₁₀₅ if the key code value falls within a range of 48 ≦ K ≦ 63,
32 ≦ K < 48, 16 ≦ K < 32, or 0 ≦ K < 16. Thus, in steps S₂₁₁₄ to S₂₁₁₇, tone color
parameters for the Y tone color and corresponding to one of the FM, TM, DPCM, and PCM
methods are set in an empty channel.
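The embodiment C reverses the roles of the two performance data: the velocity picks the X or Y tone color, and the key code picks the sound source method. The key-code mapping of steps S₂₁₀₃ to S₂₁₀₅ can be sketched as follows; the function name is an illustrative assumption, while the thresholds follow the ranges quoted above.

```python
# Embodiment-C method selection: a key code in the range 0-63 is mapped to
# one of the four sound source methods (Fig. 28).

def method_from_key_code(key_code):
    """Map a key code (0-63) to a sound source method."""
    if key_code >= 48:
        return "FM"    # 48 <= K <= 63 (highest tone range)
    if key_code >= 32:
        return "TM"    # 32 <= K < 48
    if key_code >= 16:
        return "DPCM"  # 16 <= K < 32
    return "PCM"       # 0 <= K < 16 (lowest tone range)
```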
Embodiment D of Keyboard Key Processing
[0215] Furthermore, the embodiment D of the keyboard key processing will be described below.
[0216] In the embodiment C, as shown in Fig. 32, the tone generation channels to which the
X and Y tone colors are assigned are fixed as the first to fourth tone generation
channels and the fifth to eighth tone generation channels, respectively. In the embodiment
D, channels to which each tone color is assigned are not fixed, and the X and Y tone
colors are sequentially assigned to empty channels, as shown in Fig. 33 like in the
embodiment B.
[0217] Fig. 29 is an operation flow chart of the embodiment D of the keyboard key processing
in step S₅₀₅ in the main flow chart shown in Fig. 6. As shown in Fig. 29, it is checked
if the first to eighth channels include an empty channel (S₂₂₀₁). If there is an empty
channel, tone color assignment is performed. The processing operations in steps S₂₂₀₂
to S₂₂₁₆ are the same as those in steps S₂₁₀₁, S₂₁₀₃ to S₂₁₀₅, and S₂₁₀₆ to S₂₁₁₇
in the embodiment C shown in Fig. 28.
Demonstration Performance Processing
[0218] The operations of the demonstration performance processing (S₅₀₆) in the main flow
chart shown in Fig. 6 when a demonstration performance (automatic performance) is
executed in some electronic musical instruments in addition to the keyboard key processing
described above, will be described in detail below.
[0219] In the first embodiment, different tone colors and sound source methods can be assigned
to the tone generation channels in accordance with whether the ON key plays a melody
or accompaniment part.
[0220] Fig. 30 is an operation flow chart of an embodiment A of the demonstration performance
processing in step S₅₀₆ in the main flow chart shown in Fig. 6. In the embodiment
A, X and Y tone colors are assigned to the tone generation channels, as shown in Fig.
32, in the same manner as the embodiment A or C of the keyboard key processing.
[0221] It is checked whether or not an ON key designated by automatic performance data read
out from the control data/waveform data ROM 2121 (Fig. 2) plays a melody (or accompaniment
part) (S₂₃₀₁).
[0222] If YES in step S₂₃₀₁, i.e., if it is determined that the key plays the melody part,
it is checked if the first to fourth tone generation channels include an empty channel
(S₂₃₀₂).
[0223] If there is no empty channel, and NO is determined in step S₂₃₀₂, no assignment is
performed.
[0224] If there is an empty channel, and YES is determined in step S₂₃₀₂, tone color parameters
for the X tone color are set in the FM format shown in Fig. 12 in one tone generation
channel area of the first to fourth channels on the RAM 2061 (Fig. 2) to which the
ON key is assigned. More specifically, sound source method No. data S representing
the FM method is set in the first area of the corresponding tone generation channel
area (see the column of FM in Fig. 12). Then, the tone color parameters corresponding
to the tone color of the tone color No. presently stored in the X tone color No. register
(Fig. 23) on the RAM 2061 are read out from a data architecture portion shown in Fig.
22 of the control data/waveform data ROM 2121, and are set in the second and subsequent
areas of the tone generation channel area (S₂₃₀₃).
[0225] If NO in step S₂₃₀₁, it is checked if the fifth to eighth tone generation channels
include an empty channel (S₂₃₀₄).
[0226] If there is no empty channel, and NO is determined in step S₂₃₀₄, no assignment is
performed.
[0227] If there is an empty channel, and YES is determined in step S₂₃₀₄, tone color parameters
for the Y tone color are set in the DPCM format shown in Fig. 12 in one tone generation
channel area of the fifth to eighth channels on the RAM 2061 (Fig. 2) to which the
ON key is assigned. More specifically, sound source method No. data S representing
the DPCM method is set in the first area of the corresponding tone generation channel
area (see the column of DPCM in Fig. 12). Then, the tone color parameters corresponding
to the tone color of the tone color No. presently stored in the Y tone color No. register
(Fig. 23) on the RAM 2061 are read out from a data architecture portion shown in Fig.
22 of the control data/waveform data ROM 2121, and are set in the second and subsequent
areas of the tone generation channel area (S₂₃₀₅).
[0228] Fig. 31 is an operation flow chart of an embodiment B of demonstration performance
processing in step S₅₀₆ in the main flow chart of Fig. 6. In the embodiment B, channels
to which each tone color is assigned are not fixed, and the X and Y tone colors are
sequentially assigned to empty channels, as shown in Fig. 33 like in the embodiment
B or D of the keyboard key processing.
[0229] In Fig. 31, it is checked if the first to eighth channels include an empty channel
(S₂₄₀₁). If there is an empty channel, tone color assignment is performed. The processing
operations in steps S₂₄₀₂ to S₂₄₀₄ are the same as those in steps S₂₃₀₂ to S₂₃₀₄ in
the embodiment A of the demonstration performance processing shown in Fig. 30.
Other Embodiments
[0230] In the embodiments A to D of the keyboard key processing described above, two tone
colors are switched to have a split point for key code or velocity values as a boundary,
and sound source methods are switched in units of tone colors in accordance with the
velocity or key code values. Contrary to this, the sound source methods may be switched
to have a split point as a boundary, and tone colors may be switched in units of sound
source methods in accordance with, e.g., velocity values.
[0231] The number of split points is not limited to one, and a plurality of tone colors
or sound source methods may be switched in regions having two or more split points
as boundaries.
[0232] Furthermore, performance data associated with the split point is not limited to a
key code or a velocity.
[0233] On the other hand, in the embodiments A and B of the demonstration performance processing,
different tone colors and sound source methods can be assigned to tone generation
channels in accordance with a melody or accompaniment part in a demonstration performance
(automatic performance) mode. However, the present invention is not limited to this.
For example, tone colors and sound source methods may be switched in accordance with
whether a player plays a melody or accompaniment part.
[0234] In the embodiments A and B of the demonstration performance processing, an assignment
state of tone generation is changed in a permanent combination of tone colors and
sound source methods in accordance with a melody or accompaniment part. However, like
in the keyboard key processing, only tone colors or only sound source methods may be
changed, and the kinds of parameters to be changed may be selected as desired.
Summary of the Second Embodiment
[0235] The summary of this embodiment will be described below.
[0236] Fig. 34 is a block diagram showing the overall arrangement of this embodiment. In
Fig. 34, components other than an external memory 1162 are constituted in one chip.
Of these components, two, i.e., master and slave CPUs (central processing units) exchange
data to share sound source processing for generating musical tones.
[0237] In, e.g., a 16-channel polyphonic system, 8 channels are processed by a master CPU
1012, and the remaining 8 channels are processed by a slave CPU 1022.
[0238] The sound source processing is executed in a software manner, and sound source methods
such as PCM (Pulse Code Modulation) and DPCM (Differential PCM) methods, and sound
source methods based on modulation methods such as FM and phase modulation methods
are assigned in units of tone generation channels.
[0239] A sound source method is automatically designated for tone colors of specific instruments,
e.g., a trumpet, a tuba, and the like. For tone colors of other instruments, a sound
source method can be selected by a selection switch, and/or can be automatically selected
in accordance with a performance tone range, a performance strength such as a key
touch, and the like.
[0240] In addition, different sound source methods can be assigned to two channels for one
ON event of a key. That is, for example, the PCM method can be assigned to an attack
portion, and the FM method can be assigned to a sustain portion.
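As a simple illustration, a per-key assignment of this kind can be sketched as follows (the channel table, dictionary fields, and voice-handling behavior are hypothetical, not the actual channel-area format of this embodiment):

```python
# Hypothetical sketch of assigning two sound source methods to one key-ON
# event: the PCM method for the attack portion and the FM method for the
# sustain portion. The data layout here is illustrative only.

def assign_key_on(channels, note):
    """Assign the attack and sustain of one note to two empty channels."""
    empty = [i for i, ch in enumerate(channels) if ch is None]
    if len(empty) < 2:
        return None  # no free pair; a real system might steal a voice
    attack_ch, sustain_ch = empty[0], empty[1]
    channels[attack_ch] = {"note": note, "method": "PCM", "portion": "attack"}
    channels[sustain_ch] = {"note": note, "method": "FM", "portion": "sustain"}
    return attack_ch, sustain_ch

channels = [None] * 8          # eight tone generation channels per CPU
pair = assign_key_on(channels, note=60)
```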
[0241] Furthermore, in, e.g., the FM method, when software processing is executed by a versatile
CPU according to a sound source processing algorithm, it requires too much time. However,
this embodiment can also solve this problem.
Arrangement of The Second Embodiment
[0242] The second embodiment will be described below with reference to the accompanying
drawings.
[0243] In Fig. 34, the external memory 1162 stores musical tone control parameters such
as target values of envelope values, a musical tone waveform in the PCM (pulse code
modulation) method, a musical tone differential waveform in the DPCM (differential
PCM) method, and the like.
[0244] The master CPU (to be abbreviated to as an MCPU hereinafter) 1012 and the slave CPU
(to be abbreviated to as an SCPU hereinafter) 1022 access the data on the external
memory 1162 to execute sound source processing while sharing processing operations.
Since these CPUs 1012 and 1022 commonly use waveform data of the external memory 1162,
a contention may occur when data is loaded from the external memory 1162. In order
to prevent this contention, the MCPU 1012 and the SCPU 1022 output an address signal
for accessing the external memory, and external memory control data from output terminals
1112 and 1122 of an access address contention prevention circuit 1052 via an external
memory access address latch unit 1032 for the MCPU, and an external memory access
address latch unit 1042 for the SCPU. Thus, a contention between addresses from the
MCPU 1012 and the SCPU 1022 can be prevented.
[0245] Data read out from the external memory 1162 on the basis of the designated address
is input from an external memory data input terminal 1152 to an external memory selector
1062. The external memory selector 1062 separates the readout data into data to be
input to the MCPU 1012 via a data bus MD and data to be input to the SCPU 1022 via
a data bus SD on the basis of a control signal from the address contention prevention
circuit 1052, and inputs the separated data to the MCPU 1012 and the SCPU 1022. Thus,
a contention between readout data can also be prevented.
[0246] After the MCPU 1012 and the SCPU 1022 perform corresponding sound source processing
operations of the input data by software, musical tone data of all the tone generation
channels are accumulated, and a left-channel analog output and a right-channel analog
output are then output from a left output terminal 1132 of a left D/A converter unit
1072 and a right output terminal 1142 of a right D/A converter unit 1082, respectively.
[0247] Fig. 35 is a block diagram showing an internal arrangement of the MCPU 1012.
[0248] In Fig. 35, a control ROM 2012 stores a musical tone control program (to be described
later), and sequentially outputs program words (commands) addressed by a ROM address
controller 2052 via a ROM address decoder 2022. This embodiment employs a next address
method. More specifically, the word length of each program word is, e.g., 28 bits,
and a portion of a program word is input to the ROM address controller 2052 as a lower
bit portion (intra-page address) of an address to be read out next. Note that the
MCPU 1012 may comprise a conventional program-counter type CPU instead of the control
ROM 2012.
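The next-address fetch can be sketched as follows (the embodiment only states a 28-bit word length; the 8-bit intra-page field width and the addresses below are assumptions for illustration):

```python
# Illustrative sketch of the "next address" method: part of each program word
# carries the lower bits (intra-page address) of the word to fetch next, so
# no program counter is needed. The field width is an assumption.

INTRA_PAGE_BITS = 8                      # assumed intra-page address width
LOW_MASK = (1 << INTRA_PAGE_BITS) - 1
PAGE_MASK = ~LOW_MASK

def next_address(current_addr, program_word):
    """Replace the low bits of the current address with the word's field."""
    return (current_addr & PAGE_MASK) | (program_word & LOW_MASK)

# A word whose low 8 bits are 0x2A sends control to offset 0x2A of the page.
addr = next_address(0x0300, 0xABCD2A)    # -> 0x032A
```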
[0249] A command analyzer 2072 analyzes operation codes of commands output from the control
ROM 2012, and sends control signals to the respective units of the circuit so as to
execute designated operations.
[0250] When an operand of a command from the control ROM 2012 designates a register, the
RAM address controller 2042 designates an address of a corresponding internal register
of a RAM 2062. The RAM 2062 stores various musical tone control data (to be described
later with reference to Figs. 49 and 50) for eight tone generation channels, and includes
various buffers (to be described later) or the like. The RAM 2062 is used in sound
source processing (to be described later).
[0251] When a command from the control ROM 2012 is an arithmetic command, an ALU unit 2082
and a multiplier 2092 respectively execute an addition/subtraction, and a multiplication
on the basis of an instruction from the command analyzer 2072.
[0252] On the basis of an internal hardware timer (not shown), an interrupt controller 2032
supplies a reset cancel signal A to the SCPU 1022 (Fig. 34) and an interrupt signal
to the D/A converter units 1072 and 1082 (Fig. 34) at predetermined time intervals.
[0253] In addition to the above-mentioned arrangement, the MCPU 1012 shown in Fig. 35 comprises
the following interfaces associated with various buses: an interface 2152 for an address
bus MA for addressing the external memory 1162 to access it; an interface 2162 for
the data bus MD for exchanging the accessed data with the MCPU 1012 via the external
memory selector 1062; an interface 2122 for a bus Ma for addressing the internal RAM
of the SCPU 1022 so as to execute data exchange with the SCPU 1022; an interface 2132
for a data bus DOUT used by the MCPU 1012 to write data in the SCPU 1022; an interface
2142 for a data bus DIN used by the MCPU 1012 to read data from the SCPU 1022; an
interface 2172 for a D/A
data transfer bus for transferring final output waveforms to the left and right D/A
converter units 1072 and 1082; and input and output ports 2102 and 2112 for exchanging
data with an external switch unit or a keyboard unit (Figs. 45, and 46).
[0254] Fig. 36 shows the internal arrangement of the SCPU 1022.
[0255] Since the SCPU 1022 executes sound source processing upon reception of a processing
start signal from the MCPU 1012, it does not comprise an interrupt controller corresponding
to the controller 2032 (Fig. 35), I/O ports, corresponding to the ports 2102 and 2112
(Fig. 35) for exchanging data with an external circuit, and an interface, corresponding
to the interface 2172 (Fig. 35) for outputting musical tone signals to the left and
right D/A converter units 1072 and 1082. Other circuits 3012, 3022, and 3042 to 3092
have the same functions as those of the circuits 2012, 2022, and 2042 to 2092 shown
in Fig. 35. Interfaces 3032 and 3102 to 3132 are arranged in correspondence with
the interfaces 2122 to 2162 shown in Fig. 35. Note that the internal RAM address of
the SCPU 1022 designated by the MCPU 1012 is input to the RAM address controller 3042.
The RAM address controller 3042 designates an address of the RAM 3062. Thus, accumulated
waveform data for eight tone generation channels generated by the SCPU 1022 and held
in the RAM 3062 are output to the MCPU 1012 via the data bus DIN. This will be described
later.
[0256] In addition to the above-mentioned arrangement, in this embodiment, function keys
8012, keyboard keys 8022, and the like shown in Figs. 45 and 46 are connected to the
input port 2102 of the MCPU 1012. These portions substantially constitute an instrument
operation unit.
[0257] The D/A converter unit as one characteristic feature of the present invention will
be described below.
[0258] Fig. 43 shows the internal arrangement of the left or right D/A converter unit 1072
or 1082 (the two converter units have the same contents) shown in Fig. 34. One sample
data of a musical tone generated by sound source processing is input to a latch 6012
via a data bus. When the clock input terminal of the latch 6012 receives a sound source
processing end signal from the command analyzer 2072 (Fig. 35) of the MCPU 1012, musical
tone data for one sample on the data bus is latched by the latch 6012, as shown in
Fig. 44.
[0259] A time required for the sound source processing changes depending on the sound source
processing software program. For this reason, the timing at which each sound source
processing ends and the musical tone data is latched by the latch 6012 is not fixed.
As shown in Fig. 42, therefore, an output from the latch 6012 cannot be directly
input to a D/A converter 6032.
[0260] In this embodiment, as shown in Fig. 43, the output from the latch 6012 is latched
by a latch 6022 in response to an interrupt signal of a period equal to the sampling
clock interval output from the interrupt controller 2032, and is output to the D/A
converter 6032 at predetermined time intervals.
[0261] Since a change in processing time can be absorbed using the two latches 6012 and
6022, no complicated control program for outputting musical tone data to a D/A converter
6032 is required.
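The two-latch retiming can be sketched as a small simulation (the event times and sample values below are invented for illustration):

```python
# Minimal sketch of the two-latch scheme of Fig. 43: latch 1 captures a sample
# whenever sound source processing happens to finish (jittered times), while
# latch 2 re-times it on the fixed sampling clock, so the D/A converter always
# receives data at regular intervals.

finish_events = [(0.3, 100), (1.7, 101), (2.1, 102)]  # (time, sample) produced
sampling_clock = [1.0, 2.0, 3.0]                      # fixed-interval interrupts

latch1 = None
dac_inputs = []
events = sorted([(t, "work", v) for t, v in finish_events] +
                [(t, "clock", None) for t in sampling_clock])
for t, kind, value in events:
    if kind == "work":
        latch1 = value                  # latch 6012: end-of-processing signal
    else:
        dac_inputs.append((t, latch1))  # latch 6022: sampling-clock interrupt
```

Even though the samples finish at irregular times (0.3, 1.7, 2.1), the D/A converter sees one sample exactly per clock tick.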
Overall Operation of The Second Embodiment
[0262] The overall operation of this embodiment will be described below.
[0263] In this embodiment, basically, the MCPU 1012 is mainly operated, and repetitively
executes a series of processing operations in steps S402 to S410, as shown in the
main flow chart of Fig. 37. The sound source processing is performed by interrupt
processing. More specifically, the MCPU 1012 and the SCPU 1022 are interrupted at
predetermined time intervals, and each CPU executes sound source processing for generating
musical tones for eight channels. Upon completion of this processing, musical tone
waveforms for 16 channels are added, and are output from the left and right D/A converter
units 1072 and 1082. Thereafter, the control returns from the interrupt state to the
main flow. Note that the above-mentioned interrupt processing is periodically executed
on the basis of the internal hardware timer in the interrupt controller 2032 (Fig.
35). This period is equal to a sampling period when a musical tone is output.
[0264] The schematic operation of this embodiment has been described. The operation of this
embodiment will be described in detail below with reference to Figs. 37 to 40.
[0265] When the interrupt controller 2032 interrupts repetitively executed processing operations
in steps S402 to S410 in the main flow chart of Fig. 37, MCPU interrupt processing
shown in Fig. 38 and SCPU interrupt processing shown in Fig. 39 are simultaneously
started. "Sound source processing" in Figs. 38 and 39 is shown in Fig. 40.
[0266] The main flow chart of Fig. 37 shows a processing flow executed by the MCPU 1012
in a state wherein no interrupt signal is supplied from the interrupt controller 2032.
[0267] When the power switch is turned on, the system, e.g., the contents of the RAM 2062
in the MCPU 1012, is initialized (S401).
[0268] The function keys externally connected to the MCPU 1012, e.g., tone color switches,
and the like (Fig. 65), are scanned (S402) to fetch respective switch states from
the input port 2102 to a key buffer area in the RAM 2062. As a result of scanning,
a function key whose state is changed is discriminated, and processing of a corresponding
function is executed (S403). For example, a musical tone number or an envelope number
is set, or if optional functions include a rhythm performance function, a rhythm number
is set.
[0269] Thereafter, states of ON keyboard keys are fetched in the same manner as the function
keys (S404), and keys whose states are changed are discriminated, thus executing key
assignment processing (S405).
[0270] When a demonstration performance key of the function keys 8012 (Figs. 45 and 46)
is depressed, demonstration performance data (sequencer data) are sequentially read
out from the external memory 1162 to execute, e.g., key assignment processing (S406).
When a rhythm start key is depressed, rhythm data are sequentially read out from the
external memory 1162 to execute, e.g., key assignment processing (S407).
[0271] Thereafter, timer processing is executed (S408). More specifically, time data which
is incremented by interrupt timer processing (S412) (to be described later) is compared
with time control sequencer data sequentially read out for demonstration performance
control or time control rhythm data read out for rhythm performance control, thereby
executing time control when a demonstration performance in step S406 or a rhythm performance
in step S407 is performed.
[0272] In tone generation processing in step S409, pitch envelope processing, and the like
are executed. In this processing, an envelope is added to a pitch of a musical tone
to be generated, and pitch data is set in a corresponding tone generation channel.
[0273] Furthermore, one flow cycle preparation processing is executed (S410). In this processing,
processing for changing a state of a tone generation channel assigned with a note
number corresponding to an ON event detected in the keyboard key processing in step
S405 to an "ON event" state, and processing for changing a state of a tone generation
channel assigned with a note number corresponding to an OFF event to a "muting" state,
and the like are executed.
[0274] The MCPU interrupt processing shown in Fig. 38 will be described below.
[0275] When the interrupt controller 2032 of the MCPU 1012 interrupts the MCPU 1012, the
processing in the main flow chart shown in Fig. 37 is interrupted, and the MCPU interrupt
processing in Fig. 38 is started. In this case, control is performed so that the contents
of registers written in the main flow program of Fig. 37 are not rewritten by the MCPU
interrupt processing program. For this reason, the MCPU interrupt processing uses
registers different from those used in the main flow program.
As a result, register save/restoration processing normally executed at the beginning
and end of interrupt processing can be omitted. Thus, transition between the processing
of the main flow chart shown in Fig. 37 and the MCPU interrupt processing can be quickly
performed.
[0276] Subsequently, in the MCPU interrupt processing, sound source processing is started
(S411). The sound source processing is shown in Fig. 40.
[0277] Simultaneously with the above-mentioned operations, the interrupt controller 2032
of the MCPU 1012 outputs the SCPU reset cancel signal A (Fig. 34) to the ROM address
controller 3052 of the SCPU 1022, and the SCPU 1022 starts execution of the SCPU
interrupt processing (Fig. 39).
[0278] Sound source processing (S415) is started in the SCPU interrupt processing almost
simultaneously with the sound source processing (S411) in the MCPU interrupt processing.
In this manner, since each of the MCPU 1012 and the SCPU 1022 simultaneously executes
sound source processing of eight tone generation channels, the sound source processing
for 16 tone generation channels can be executed in a processing time for eight tone
generation channels, and a processing speed can be almost doubled (the interrupt processing
will be described later with reference to Fig. 41).
[0279] In the interrupt timer processing in step S412, the value of time data (not shown)
on the RAM 2062 (Fig. 35) is incremented by utilizing the fact that the interrupt
processing shown in Fig. 38 is executed for every predetermined sampling period. More
specifically, a time elapsed from power-on can be detected based on the value of the
time data. The time data obtained in this manner is used in time control in the timer
processing in step S408 in the main flow chart shown in Fig. 37.
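This tick-counting timebase can be sketched as follows (the sampling rate is an assumed example value; the embodiment does not state one):

```python
# Sketch of the interrupt timer processing (S412): the interrupt fires once
# per sampling period, so incrementing a counter on each interrupt yields a
# timebase, and elapsed time since power-on is ticks / sampling rate.

SAMPLING_RATE_HZ = 40_000          # assumed example value
time_data = 0

def interrupt_timer_processing():
    global time_data
    time_data += 1                 # one tick per sampling period

for _ in range(20_000):            # simulate 20 000 sampling interrupts
    interrupt_timer_processing()

elapsed_seconds = time_data / SAMPLING_RATE_HZ
```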
[0280] The MCPU 1012 then waits for an SCPU interrupt processing end signal B from the
SCPU 1022 after the interrupt timer processing in step S412 (S413).
[0281] Upon completion of the sound source processing in step S415 in Fig. 39, the command
analyzer 3072 of the SCPU 1022 supplies an SCPU processing end signal B (Fig. 34) to
the ROM address controller 2052 of the MCPU 1012. In this manner, YES is determined
in step S413 in the MCPU interrupt processing in Fig. 38.
[0282] As a result, waveform data generated by the SCPU 1022 are written in the RAM 2062
of the MCPU 1012 via the data bus DIN shown in Fig. 34 (S414). The waveform data are
stored in a predetermined buffer area (a buffer B to be described later) on the RAM
3062 of the SCPU 1022. The command analyzer 2072 of the MCPU 1012 designates addresses
of the buffer area to the RAM address controller 3042, thus reading the waveform data.
[0283] In step S414', the contents of the buffer area B are latched by the latches 6012
(Fig. 43) of the left and right D/A converter units 1072 and 1082.
[0284] The operation of the sound source processing executed in step S411 in the MCPU interrupt
processing or in step S415 in the SCPU interrupt processing will be described below
with reference to the flow chart of Fig. 40.
[0285] A waveform addition area on the RAM 2062 or 3062 is cleared (S416). Then, sound source
processing is executed in units of tone generation channels (S417 to S424). After
the sound source processing for the eighth channel is completed, waveform data obtained
by adding those for eight channels is obtained in the buffer area B. These processing
operations will be described in detail later.
[0286] Fig. 41 is a schematic flow chart showing the relationship among the processing operations
of the flow charts shown in Figs. 37, 38, and 39. As can be seen from Fig. 41, the
MCPU 1012 and the SCPU 1022 share the sound source processing.
[0287] Given processing A (the same applies to B, C,..., F) is executed (S501). This "processing"
corresponds to, for example, "function key processing" or "keyboard key processing"
in the main flow chart shown in Fig. 37. Thereafter, the MCPU interrupt processing
and the SCPU interrupt processing are executed, so that the MCPU 1012 and the SCPU
1022 simultaneously start sound source processing (S502 and S503). Upon completion
of the SCPU interrupt processing of the SCPU 1022, the SCPU processing end signal B
is input to the MCPU 1012. In the MCPU interrupt processing, the sound source processing
is ended earlier than the SCPU interrupt processing, and the MCPU waits for the end
of the SCPU interrupt processing. When the SCPU processing end signal B is discriminated
in the MCPU interrupt processing, waveform data generated by the SCPU 1022 is supplied
to the MCPU 1012, and is added to the waveform data generated by the MCPU 1012. The
waveform data is then output to the left and right D/A converter units 1072 and 1082.
Thereafter, the control returns to some processing B in the main flow chart.
[0288] The above-mentioned operations are repeated (S504 to S516) while executing the sound
source processing for all the tone generation channels (16 channels as a total of
those of the MCPU 1012 and the SCPU 1022). The repetition processing continues as
long as musical tones are being produced.
Data Architecture in Sound Source Processing
[0289] The sound source processing executed in step S411 (Fig. 38) and step S415 (Fig. 39)
will be described in detail below.
[0290] In this embodiment, as described above, the two CPUs, i.e., the MCPU 1012 and the
SCPU 1022 share the sound source processing in units of eight channels. Data for the
sound source processing for eight channels are set in areas corresponding to the respective
tone generation channels in the RAMs 2062 and 3062 of the MCPU 1012 and the SCPU 1022,
as shown in Fig. 47.
[0291] Buffers BF, BT, B, and M are allocated on the RAM, as shown in Fig. 50.
[0292] In each tone generation channel area shown in Fig. 47, an arbitrary sound source
method can be set by an operation (to be described in detail later), as schematically
shown in Fig. 48. When the sound source method is set, data are set in each tone generation
channel area in Fig. 47 in a data format of the corresponding sound source method,
as shown in Fig. 49. In this embodiment, as will be described later, different sound
source methods can be assigned to the tone generation channels.
[0293] In Table 1 showing the data formats of the respective sound source methods shown
in Fig. 49, G indicates a sound source method number for identifying the sound source
methods. A represents an address designated when waveform data is read out in the
sound source processing, and AI, A₁, and A₂ represent integral parts of current addresses,
and directly correspond to addresses of the external memory 1162 (Fig. 34) where waveform
data are stored. AF represents a decimal part of the current address, and is used for
interpolating waveform data read out from the external memory 1162.
[0294] AE and AL respectively represent end and loop addresses. PI, P₁, and P₂ represent
integral parts of pitch data, and PF represents a decimal part of pitch data. For
example, PI = 1 and PF = 0 express a pitch of an original tone, PI = 2 and PF = 0
express a pitch higher than the original pitch by one octave, and PI = 0 and PF = 0.5
express a pitch lower by one octave.
[0295] XP represents previous sample data, and XN represents the next sample data. D represents
a difference between two adjacent sample data, and E represents an envelope value.
Furthermore, O represents an output value, and C represents a flag which is used when
a sound source method to be assigned to a tone generation channel is changed in accordance
with performance data, as will be described later.
[0296] Various other control data will be described in descriptions of the respective sound
source methods.
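The pitch addition described in paragraph [0294] can be sketched as a fractional phase accumulator (the concrete numbers are illustrative):

```python
# Sketch of pitch-controlled address stepping: the current address has an
# integral part AI and a decimal part AF, and pitch data (PI, PF) is added
# every sample. PI = 1, PF = 0 reads at the original pitch; PI = 2 doubles
# the step (one octave up); PI = 0, PF = 0.5 halves it (one octave down).

def step_address(ai, af, pi, pf):
    """Add pitch data (PI, PF) to the (integer, fraction) current address."""
    af += pf
    ai += pi + int(af)      # carry the integer overflow of the fraction
    af -= int(af)
    return ai, af

ai, af = 10, 0.75
ai, af = step_address(ai, af, pi=0, pf=0.5)   # one octave-down step
```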
[0297] When data shown in Fig. 49 are stored in the RAMs 2062 and 3062 of the MCPU 1012
and the SCPU 1022, and the sound source methods (to be described later) are determined,
data are set in units of channels shown in Fig. 47 in the format shown in Fig. 49.
[0298] The sound source processing operations of the respective sound source methods executed
using the above-mentioned data architecture will be described below in turn. These
sound source processing operations are realized by analyzing and executing a sound
source processing program stored in the control ROM 2012 or 3012 by the command analyzer
2072 or 3072 of the MCPU 1012 or the SCPU 1022. Assume that the processing is executed
under this condition unless otherwise specified.
[0299] In the flow chart shown in Fig. 40, in the sound source processing (one of steps
S417 to S424) for each channel, the sound source method No. data G of the data in
the data format (Table 1) shown in Fig. 49 stored in the corresponding tone generation
channel of the RAM 2062 or 3062 is discriminated to determine sound source processing
of a sound source method to be described below.
Sound Source Processing Based on PCM Method
[0300] When the sound source method No. data G indicates the PCM method, sound source processing
based on the PCM method shown in the operation flow chart of Fig. 13 is executed.
Variables in the flow chart are data in a PCM format of Table 1 shown in Fig. 49,
which data are stored in the corresponding tone generation channel area (Fig. 47)
of the RAM 2062 or 3062 of the MCPU 1012 or the SCPU 1022.
[0301] Of an address group of the external memory 1162 (Fig. 34) where PCM waveform data
are stored, an address where waveform data as an object to be currently processed
is stored is assumed to be (AI, AF) shown in Fig. 15.
[0302] Pitch data (PI, PF) is added to the current address (S1001). The pitch data corresponds
to the type of an ON key of the keyboard keys 8012 shown in Figs. 45 and 46.
[0303] It is then checked if the integral part AI of the sum address is changed (S1002).
If NO in step S1002, an interpolation data value O corresponding to the decimal part
AF of the address (Fig. 15) is calculated by the arithmetic processing O = D × AF,
using a difference D as a difference between sample data XN and XP at addresses (AI + 1)
and AI (S1007). Note that the difference D has already been obtained by the sound
source processing at the previous interrupt timing (see step S1006 to be described
later).
[0304] The sample data XP corresponding to the integral part AI of the address is added to
the interpolation data value O to obtain a new sample data value O (corresponding
to XQ in Fig. 15) corresponding to the current address (AI, AF) (S1008).
[0305] Thereafter, the sample data is multiplied by the envelope value E (S1009), and the
obtained data O is added to a value held in the waveform data buffer B (Fig. 50) in
the RAM 2062 or 3062 of the MCPU 1012 or the SCPU 1022 (S1010).
[0306] Thereafter, the control returns to the main flow chart shown in Fig. 37. The control
is interrupted in the next sampling period, and the operation flow chart of the sound
source processing shown in Fig. 13 is executed again. Thus, pitch data (PI, PF) is
added to the current address (AI, AF) (S1001).
[0307] The above-mentioned operations are repeated until the integral part AI of the address
is changed (S1002).
[0308] Before the integral part is changed, the sample data XP and the difference D are left
unchanged, and only the interpolation data O is updated in accordance with the address
AF. Thus, every time the address AF is updated, new sample data XQ is obtained.
[0309] If the integral part AI of the current address is changed (S1002) as a result of the
addition of the current address (AI, AF) and the pitch data (PI, PF) in step S1001,
it is checked if the address AI has reached or exceeded the end address AE (S1003).
[0310] If YES in step S1003, the next loop processing is executed. More specifically, a
value (AI - AE) as a difference between the updated current address AI and the end
address AE is added to the loop address AL to obtain a new current address (AI, AF).
A loop reproduction is started from the obtained new current address AI (S1004). The
end address AE is an end address of an area of the external memory 1162 (Fig. 34)
where PCM waveform data are stored. The loop address AL is an address of a position
where a player wants to repeat an output of a waveform, and known loop processing
is realized by the PCM method.
[0311] If NO in step S1003, the processing in step S1004 is not executed.
[0312] Sample data is then updated. In this case, sample data corresponding to the newly
updated current address AI and the immediately preceding address (AI - 1) are read
out as XN and XP from the external memory 1162 (Fig. 34) (S1005).
[0313] Furthermore, the difference so far is updated with a difference D between the updated
data XN and XP (S1006).
[0314] The following operation is as described above.
[0315] In this manner, waveform data by the PCM method for one channel is generated.
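One per-sample PCM step of Fig. 13 can be sketched in Python as follows (the waveform contents, end address, and loop address are invented for illustration; the real data live in the external memory 1162 and the channel areas of Fig. 47):

```python
# Hedged sketch of the per-sample PCM step: advance the current address
# (AI, AF) by the pitch (PI, PF); on an integer-part change, handle the loop
# (S1004) and reload XP and the difference D (S1005, S1006); then output the
# interpolated, enveloped sample (S1007-S1009).

wave = [0, 4, 8, 12, 16, 12, 8, 4]   # pretend PCM waveform memory (invented)
AE, AL = 7, 4                        # assumed end and loop addresses

def pcm_step(state, pi, pf, env):
    ai, af = state["AI"] + pi, state["AF"] + pf
    ai += int(af)
    af -= int(af)
    if ai != state["AI"]:                         # integer part changed (S1002)
        if ai >= AE:
            ai = AL + (ai - AE)                   # loop reproduction (S1004)
        state["XP"] = wave[ai]                    # reload sample data (S1005)
        state["D"] = wave[ai + 1] - wave[ai]      # update difference (S1006)
    state["AI"], state["AF"] = ai, af
    return (state["XP"] + state["D"] * af) * env  # interpolate, envelope

state = {"AI": 0, "AF": 0.0, "XP": wave[0], "D": wave[1] - wave[0]}
out = pcm_step(state, pi=1, pf=0.5, env=1.0)      # read at 1.5x original pitch
```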
Sound Source Processing Based on DPCM Method
[0316] The sound source processing based on the DPCM method will be described below.
[0317] The operation principle of the DPCM method will be briefly described below with reference
to Fig. 16.
[0318] In Fig. 16, sample data XP corresponding to an address AI of the external memory
1162 (Fig. 34) is obtained by adding sample data corresponding to an address (AI - 1)
(not shown) to a difference between the sample data corresponding to the address
(AI - 1) and sample data corresponding to the address AI.
[0319] A difference D with the next sample data is written at the address AI of the external
memory 1162 (Fig. 34). Sample data at the next address (AI + 1) is obtained by
XN = XP + D.
[0320] In this case, if the decimal part of the current address is represented by AF, as
shown in Fig. 16, sample data XQ corresponding to the current address (AI, AF) is
obtained by XQ = XP + D × AF.
[0321] In this manner, in the DPCM method, a difference D between sample data corresponding
to the current address and the next address is read out from the external memory 1162
(Fig. 34), and is added to the current sample data to obtain the next sample data,
thereby sequentially forming waveform data.
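The accumulation just described can be sketched as follows (the differential data values are invented for illustration):

```python
# Hedged sketch of the DPCM principle of Fig. 16: the external memory holds,
# at each address AI, the difference D to the NEXT sample, so samples are
# recovered by accumulation, and a fractional address AF interpolates as
# XQ = XP + D * AF.

diffs = [3, 2, -1, 4]      # D values stored at addresses 0..3 (invented)

def sample_at(ai, af, x0=0):
    """Accumulate differences up to AI, then interpolate with AF."""
    xp = x0 + sum(diffs[:ai])     # XP: sample at the integral address AI
    return xp + diffs[ai] * af    # XQ = XP + D * AF

x = sample_at(ai=2, af=0.5)
```

Here the sample at address 2 is 0 + 3 + 2 = 5, and half a step toward the next sample gives 5 + (-1) × 0.5.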
[0322] The operation of the above-mentioned DPCM method will be described below with reference
to the operation flow chart shown in Fig. 14. Variables in the flow chart are DPCM
data in Table 1 shown in Fig. 49, which data are stored in the corresponding tone
generation channel area (Fig. 47) on the RAM 2062 or 3062 of the MCPU 1012 or the
SCPU 1022.
[0323] Of addresses on the external memory 1162 (Fig. 34) where DPCM differential waveform
data are stored, an address where waveform data as an object to be currently processed
is stored is assumed to be (AI, AF) shown in Fig. 16.
[0324] Pitch data (PI, PF) is added to the current address (AI, AF) (S1101).
[0325] It is then checked if the integral part AI of the sum address is changed (S1102).
If NO in step S1102, an interpolation data value O corresponding to the decimal part
AF of the address is calculated by the arithmetic processing O = D × AF, using a difference
D at the address AI in Fig. 16 (S1114). Note that the difference D has already been
obtained by the sound source processing at the previous interrupt timing (see steps
S1106 and S1110 to be described later).
[0326] The interpolation data value O is added to sample data XP corresponding to the integral
part AI of the address to obtain a new sample data value O (corresponding to XQ in
Fig. 16) corresponding to the current address (AI, AF) (S1115).
[0327] Thereafter, the sample data value O is multiplied by an envelope value E (S1116),
and the obtained value is added to a value stored in the waveform data buffer B (Fig.
50) in the RAM 2062 or 3062 of the MCPU 1012 or the SCPU 1022 (S1117).
[0328] Thereafter, the control returns to the flow chart shown in Fig. 37. The control is
interrupted in the next sampling period, and the operation flow chart of the sound
source processing shown in Fig. 14 is executed again. Thus, pitch data (PI, PF) is
added to the current address (AI, AF) (S1101).
[0329] The above-mentioned operations are repeated until the integral part AI of the address
is changed.
[0330] Before the integral part is changed, the sample data XP and the difference D are left
unchanged, and only the interpolation data O is updated in accordance with the address
AF. Thus, every time the address AF is updated, new sample data XQ is obtained.
[0331] If the integral part AI of the current address is changed (S1102) as a result of the
addition of the current address (AI, AF) and the pitch data (PI, PF) in step S1101,
it is checked if the address AI has reached or exceeded the end address AE (S1103).
[0332] If NO in step S1103, sample data corresponding to the integral part AI of the updated
current address is calculated by the loop processing in steps S1104 to S1107. More
specifically, a value before the integral part AI of the current address is changed
is stored in a variable "old AI" (see the column of DPCM in Table 1 shown in Fig. 49).
This can be realized by repeating the processing in step S1106 or S1113 (to be described
later). The old AI value is sequentially incremented in step S1106, and differential
waveform data in the external memory 1162 (Fig. 34) addressed by the old AI values
are read out as D in step S1107. The readout data D are sequentially accumulated on
sample data XP in step S1105. When the old AI value becomes equal to the integral
part AI of the changed current address, the sample data XP has a value corresponding
to the integral part AI of the changed current address.
[0333] When the sample data XP corresponding to the integral part AI of the current address
is obtained in this manner, YES is determined in step S1104, and the control starts
the arithmetic processing of the interpolation value (S1114) described above.
[0334] The above-mentioned sound source processing is repeated at the respective interrupt
timings, and when the judgment in step S1103 is changed to YES, the control enters
the next loop processing.
[0335] An address value (AI - AE) exceeding the end address AE is added to the loop address
AL, and the obtained address is defined as the integral part AI of a new current address
(S1108).
[0336] An operation for accumulating the difference D several times, depending on the advance
in address from the loop address AL, is repeated to calculate sample data XP corresponding
to the integral part AI of the new current address. More specifically, sample data
XP is initially set to the value of sample data XPL (see the column of DPCM in Table 1
shown in Fig. 49) at the preset loop address AL, and the old AI is set to the value
of the loop address AL (S1110). The following processing operations in steps S1110
to S1113 are repeated. More specifically, the old AI value is sequentially incremented
in step S1113, and differential waveform data on
the external memory 1162 (Fig. 34) designated by the incremented old A
I values read out as data D. The data D are accumulated on the sample data X
P in step S1112. When old A
I value becomes equal to the integral part A
I of the new current address, the sample data X
P has a value corresponding to the integral part A
I of the new current address after loop processing.
[0337] When the sample data Xp corresponding to the integral part A
I of the new current address is obtained in this manner, YES is determined in step
S1111, and the control enters the above-mentioned arithmetic processing of the interpolation
value (S1114).
[0338] As described above, waveform data by the DPCM method for one tone generation channel
is generated.
Sound Source Processing Based on FM Method (Part 1)
[0339] The sound source processing based on the FM method will be described below.
[0340] In the FM method, hardware or software elements having the same contents, called
"operators", as indicated by OP1 to OP4 in Figs. 51 to 54, are normally used, and
are connected based on connection rules indicated by algorithms 1 to 4 in Figs. 51
to 54, thereby generating musical tones. In this embodiment, the FM method is realized
by a software program.
[0341] The operation of this embodiment executed when the sound source processing is performed
using two operators will be described below with reference to the operation flow chart
shown in Fig. 17. The algorithm of the processing is shown in Fig. 18. Variables in
the flow chart are FM format data in Table 1 shown in Fig. 49, which data are stored
in the corresponding tone generation channel area (Fig. 47) on the RAM 2062 or 3062
of the MCPU 1012 or the SCPU 1022.
[0342] First, processing of an operator 2 (OP2) as a modulator is performed. In pitch processing
(processing for accumulating pitch data for determining an incremental width of an
address for reading out waveform data stored in the waveform memory 1162), since no
waveform data interpolation is performed unlike in the PCM method, an address consists
of an integral address A₂, and has no decimal address. Further, modulation waveform
data are stored in the external memory 1162 (Fig. 34) at sufficiently fine incremental
widths.
[0343] Pitch data P₂ is added to the present address A₂ (S1301).
[0344] A feedback output F
O2 is added to the address A₂ as a modulation input to obtain a new address A
M2 which corresponds to phase of a sine wave (S1302). The feedback output F
O2 has already been obtained upon execution of processing in step S1305 (to be described
later) at the immediately preceding interrupt timing.
[0345] The value of a sine wave corresponding to the address A
M2 is calculated. In practice, sine wave data are stored in the external memory 1162
(Fig. 34), and are obtained by addressing the external memory 1162 by the address
A
M2 to read out the corresponding data (S1303).
[0346] Subsequently, the sine wave data is multiplied with an envelope value E₂ to obtain
an output O₂ (S1304).
[0347] Thereafter, the output O₂ is multiplied with a feedback level F
L2 to obtain a feedback output F
O2 (S1305). This output F
O2 serves as an input to the operator 2 (OP2) at the next interrupt timing.
[0348] The output O₂ is multiplied with a modulation level M
L2 to obtain a modulation output M
O2 (S1306). The modulation output M
O2 serves as a modulation input to an operator 1 (OP1).
[0349] The control then enters processing of the operator 1 (OP1). This processing is substantially
the same as that of the operator 2 (OP2) described above, except that there is no
modulation input based on the feedback output.
[0350] The current address A₁ of the operator 1 is added to pitch data P₁ (S1307), and the
sum is added to the above-mentioned modulation output M
O2 to obtain a new address A
M1 (S1308).
[0351] The value of sine wave data corresponding to this address A
M1 (phase) is read out from the external memory 1162 (Fig. 34) (S1309), and is multiplied
with an envelope value E₁ to obtain a musical tone waveform output O₁ (S1310).
[0352] The output O₁ is added to a value held in the buffer B (Fig. 50) in the RAM 2062
(Fig. 35) or the RAM 3062 (Fig. 36) (S1311), thus completing the FM processing for
one tone generation channel.
Sound Source Processing Based on TM (Triangular Wave Modulation) Method (Part 1)
[0353] The sound source processing based on the TM method will be described below.
[0354] The principle of the TM method is already described in the first embodiment. Therefore,
the description of the TM method itself is omitted.
[0355] The sound source processing based on the TM method will be described below with reference
to the operation flow chart shown in Fig. 19. In this case, the sound source processing
is also performed using two operators like in the FM method shown in Figs. 17 and
18, and the algorithm of the processing is shown in Fig. 20. Variables in the flow
chart are TM format data in Table 1 shown in Fig. 49, which data are stored in the
corre-sponding tone generation channel area (Fig. 47) on the RAM 2062 or 3062 of the
MCPU 1012 or the SCPU 1022.
[0356] First, processing of an operator 2 (OP2) as a modulator is performed. In pitch processing,
since no waveform data interpolation is performed unlike in the PCM method, an address
for addressing the external memory 1162 consists of only an integral address A₂.
[0357] The current address A₂ is added to pitch data P₂ (S1401).
[0358] A modified sine wave corresponding to the address A₂ (phase) is read out from the
external memory 1162 (Fig. 34) by the modified sine conversion f
c, and is output as a carrier signal O₂ (S1402).
[0359] Subsequently, a feedback output F
O2 (S1460) as a modulation signal, is added to the carrier signal O₂, and the sum signal
is output as a new address O₂ (S1403). The feedback output F
O2 has already been obtained upon execution of processing in step S1406 (to be described
later) at the immediately preceding interrupt timing.
[0360] The value of a triangular wave corresponding to the address O₂ is calculated. In
practice, triangular wave data are stored in the external memory 1162 (Fig. 34), and
are obtained by addressing the external memory 1162 by the address O₂ to read out
the corresponding data (S1404).
[0361] Subsequently, the triangular wave data is multiplied with an envelope value E₂ to
obtain an output O₂ (S1405).
[0362] Thereafter, the output O₂ is multiplied with a feedback level F
L2 to obtain a feedback output F
O2 (S1407). In this embodiment, the output F
O2 serves as an input to the operator 2 (OP2) at the next interrupt timing.
[0363] The output O₂ is multiplied with a modulation level M
L2 to obtain a modulation output M
O2 (S1407). The modulation output M
O2 serves as a modulation input to an operator 1 (OP1).
[0364] The control then enters processing of the operator 1 (OP1). This processing is substantially
the same as that of the operator 2 (OP2) described above, except that there is no
modulation input based on the feedback output.
[0365] The current address A₁ of the operator 1 is added to pitch data P₁ (S1408), and the
sum is subjected to the above-mentioned modified sine conversion to obtain a carrier
signal O₁ (S1409).
[0366] The carrier signal O₁ is added to the modulation output M
O2 to obtain a new value O₁ (S1410), and the value O₁ is subjected to triangular wave
conversion (S1411). The converted value is multiplied with an value E₁ to obtain a
musical tone waveform output O₁ (S1412).
[0367] The output O₁ is added to a value held in the buffer B (Fig. 50) in the RAM 2062
(Fig. 36) or the RAM 3062 (Fig. 36), thus completing the TM processing for one tone
generation channel.
[0368] The sound source processing operations based on four methods, i.e., the PCM, DPCM,
FM, and TM methods have been described. The FM and TM methods are modulation methods,
and, in the above examples, two-operator processing operations are executed based
on the algorithms shown in Figs. 18 and 20. However, in sound source processing in
an actual performance, more operators are used, and the algorithms are more complicated.
Figs. 51 to 54 show examples. In an algorithm 1 shown in Fig. 51, four modulation
operations including a feedback input are performed, and a complicated waveform can
be obtained. In each of algorithms 2 and 3 shown in Figs. 52 and 53, two sets of algorithms
each having a feedback input are arranged parallel to each other, and these algorithms
are suitable for expressing a change in tone color during, e.g., transition from an
attack portion to a sustain portion. An algorithm 4 shown in Fig. 59 has a feature
close to a sine wave synthesis method.
[0369] The sound source processing operations based on the FM and TM methods using four
operators shown in Figs. 51 to 54 will be described below in turn with reference to
Figs. 55 and 56.
Sound Source Processing Based on FM Method (Part 2)
[0370] Fig. 55 is an operation flow chart of normal sound source processing based on the
FM method corresponding to the algorithm 1 shown in Figs. 55 to 54. Variables in the
flow chart are stored in the corresponding tone generation channel area (Fig. 47)
on the RAM 2062 or 3062 of the MCPU 1012 or the SCPU 1022. Although the variables
used in Fig. 55 are not the same as data in the FM format of Table 1 in Fig. 49, they
are obtained by expanding the concept of the data format shown in Fig. 49, and only
have different suffixes.
[0371] First, the present address A₄ of an operator 4 (OP4) is added to pitch data P₄ (S1901).
The address A₄ is added to a feedback output F
O4 (S1905) as a modulation input to obtain a new address A
M4 (S1902). Furthermore, the value of a sine wave corresponding to the address M4 (phase)
is read out from the external memory 1162 (Fig. 34) (S1903), and is multiplied with
an envelope value E₄ to obtain an output O₄ (S1904). Thereafter, the output O₄ is
multiplied with a feedback level F
L4 to obtain a feedback output F
O4 (S1905). The output O₄ is multiplied with a modulation level M
L4 to obtain a modulation output M
O4 (S1906). The modulation output M
O4 serves as a modulation input to the next operator 3 (OP3).
[0372] The control then enters processing of the operator 3 (OP3). This processing is substantially
the same as that of the operator 4 (OP4) described above, except that there is no
modulation input based on the feedback output. The current address A₃ of the operator
3 (OP3) is added to pitch data P₃ to obtain a new current address A₃ (S1907). The
address A₃ is added to a modulation output M
O4 as a modulation input, thus obtaining a new address A
M3 (S1908). Furthermore, the value of a sine wave corresponding to the address A
M3 (phase) is read out from the external memory 1162 (Fig. 34) (S1909), and is multiplied
with an envelope value E₃ to obtain an output O₃ (S1910). Thereafter, the output O₃
is multiplied with a modulation level M
L3 to obtain a modulation output M
O3 (S1911). The modulation output M
O3 serves as a modulation input to the next operator 2 (OP2).
[0373] Processing of the operator 2 (OP2) is then executed. However, this processing is
substantially the same as that of the operator 3, except that a modulation input is
different, and a detailed description thereof will be omitted.
[0374] Finally, the control enters processing of an operator 1 (OP1). In this case, the
same processing operations as described above are performed up to step S1920. A musical
tone waveform output O₁ obtained in step S1920 is added to data stored in the buffer
B as a carrier (S1921).
Sound Source Processing Based on TM Method (Part 2)
[0375] Fig. 50 is an operation flow chart of normal sound source processing based on the
TM method corresponding to the algorithm 1 shown in Fig. 51. Variables in the flow
chart are stored in the corresponding tone generation channel area (Fig. 47) on the
RAM 2062 or 3062 of the MCPU 1012 or the SCPU 1022. Although the variables used in
Fig. 55 are not the same as data in the TM format of Table 1 in Fig. 49, they are
obtained by expanding the concept of the data format shown in Fig. 49, and only have
different suffixes.
[0376] The current address A₄ of the operator 4 (OP4) is added to pitch data P₄ (S2061).
A modified sine wave to the above-mentioned address A₄ (phase) is read out from the
external memory 1162 (Fig. 34) by the modified sine conversion f
c, and is output as a carrier signal O₄ (S2002). A feedback output F
O4 (see S2007) as a modulation signal is added to the carrier signal O₄, and the sum
signal is output as a new address O₄ (S2003). The value of a triangular wave corresponding
to the address O₄ (phase) is read out from the external memory 1162 (Fig. 34) (to
be referred to as a triangular wave conversion hereinafter) (S2004), and is multiplied
with an envelope value E₄, thus obtaining an output O₄ (S2005). Thereafter, the output
O₄ is multiplied with a modulation level M
L4 to obtain a modulation output M
O4 (S2006). The output O₄ is multiplied with a feedback level F
L4 to obtain a feedback output F
O4 (S2007). The modulation output M
O4 serves as a modulation input to the next operator 3 (OP3).
[0377] The control then enters processing of the operator 3 (OP3). This processing is substantially
the same as that of the operator 4 (OP4) described above, except that there is no
modulation input based on the feedback output. The current address A₃ of the operator
3 (OP3) is added to pitch data P₃ (S2008) and the sum is subject to modified sine
conversion to obtain a carrier signal O₃ (S2009). The carrier signal O₃ is added to
the above-mentioned modulation output M
O4 to obtain a new value O₃ (S2010), and the value O₃ is subject to triangular wave
conversion (S2011). The converted value is multiplied with an envelope value E₃ to
obtain an aoutput O₃ (S2012). The output O3 is multiplied with a modulation level
M
L3 to obtain a modulation output M
O3 (S2013). The modulation output M
O3 serves as a modulation input to the next operator 2 (OP2).
[0378] Processing of the operator 2 (OP2) is then executed. However, this processing is
substantially the same as that of the operator 3, except that a modulation input is
different, and a detailed description thereof will be omitted.
[0379] Finally, the control enters processing of an operator 1 (OP1). In this case, the
same processing operations as described above are performed up to step S2024. A musical
tone waveform output O₁ obtained in step S2024 is accumulated in the buffer B (Fig.
50) as a carrier (S2025).
[0380] The embodiment of the normal sound processing operations based on the modulation
methods has been described. However, the above-mentioned processing is for one tone
generation channel, and in practice, the MCPU 1012 and the SCPU 1022 each execute
processing for eight channels (Fig. 40). If a modulation method is designated in a
given tone generation channel, the above-mentioned sound source processing based on
the modulation method is executed.
Modification of Modulation Method (Part 1)
[0381] The first modulation of the sound source processing based on the modulation method
will be described below.
[0382] The basic concept of this processing is shown in the flow chart of Fig. 57.
[0383] In Fig. 57, operator 1, 2, 3, and 4 processing operations have the same program architecture
although they have different variable names to be used.
[0384] Each operator processing cannot be executed unless a modulation input is determined.
This is because a modulation input to each operator processing varies depending on
the algorithm, as shown in Figs. 51 to 54. Which operator processing output is used
as a modulation input or whether or not an output from its own operator processing
is fed back, and is used as its own modulation input in place of another operator
processing must be determined. In the operation flow chart shown in Fig. 57, such
determinations are simultaneously performed in algorithm processing (S2105), and the
connection relationship obtained by this processing determine modulation inputs to
the respective operator processing operations (S2102 to S2104). Note that a given
initial value is set as an input to each operator processing at the beginning of tone
generation.
[0385] When the operator processing and the algorithm processing are separated in this manner,
the program of the operator processing can remain the same, and only the algorithm
processing can be modified in correspondence with algorithms. Therefore, the program
size of the overall sound source processing based on the modulation method can be
greatly reduced.
[0386] A modification of the FM method based on the above-mentioned basic concept will be
described below. The operator 1 processing in the operation flow chart showing operator
processing based on the FM method in Fig. 57 is shown in Fig. 58, and an arithmetic
algorithm per operator is shown in Fig. 59. The remaining operator 2 to 4 processing
operations are the same except for different suffix numbers of variables. Variables
in the flow chart are stored in the corresponding tone generation channel (Fig. 47)
on the RAM 2062 or 3062 of the MCPU 1012 or the SCPU 1022.
[0387] An address A₁ corresponding to a phase angle is added to pitch data P₁ to obtain
a new address A₁ (S2201). The address A₁ is added to a modulation input M
I1, thus obtaining an address A
M1 (S2202). The modulation input M
I1 is determined by the algorithm processing in step S2105 (Fig. 57) at the immediately
preceding interrupt timing, and may be a feed back output F
O1 of its own operator, or an output M
O2 from another operator, e.g., an operator 2 depending on the algorithm. The value
of a sine wave corresponding to this address (phase) A
M1 is read out from the external memory 1162 (Fig. 34), thus obtaining an output O₁
(S2203). Thereafter, a value obtained by multiplying the output O₁ with envelope data
E₁ serves as an output O₁ of the operator 1 (S2204). The output O₁ is multiplied with
a feedback level F
L1 to obtain a feedback output F
O1 (S2205). The output 01 is multipled with a modulation level M
L1, thus obtaining a modulation output M
O1 (S2206).
[0388] A modification of the FM method based on the above-mentioned basic concept will be
described below. The operator 1 processing in the operation flow chart showing operator
processing based on the FM method in Fig. 57 is shown in Fig. 58, and an arithmetic
algorithm per operator is shown in Fig. 59. The remaining operator 2 to 4 processing
operations are the same except for different suffix numbers of variables. Variables
in the flow chart are stored in the corresponding tone generation channel (Fig. 47)
on the RAM 2062 or 3062 of the MCPU 1012 or the SCPU 1022.
[0389] The current address A₁ is added to pitch data P₁ (S2301). A modified sine wave corresponding
to the above-mentioned address A₁ (phase) is read out from the external memory 1162
(Fig. 34) by the modified sine conversion f
c, and is generated as a carrier signal O₁ (S2302). The output O₁ is added to a modulation
input M
I1 as a modulation signal, and the sum is defined as a new address O₁ (S2303). The value
of a triangular wave corresponding to the address O₁ (phase) is read out from the
external memory 1162 (S2304), and is multiplied with an envelope value E₁ to obtain
an output O₁ (S2306). Thereafter, the output O₁ is multiplied with a feedback level
F
L1 to obtain a feedback output F
O1 (S2306). The output O₁ is multiplied with a modulation level M
L1 to obtain a modulation output M
O1 (S2307).
[0390] The algorithm processing in step S2105 in Fig. 57 for determining a modulation input
in the operator processing in both the above-mentioned modulation methods, i.e., the
FM and TM methods will be described in detail below with reference to the operation
flow chart of Fig. 62. The flow chart shown in Fig. 62 is common to both the FM and
TM methods, and the algorithms 1 to 4 shown in Figs. 51 to 54 are selectively processed.
In this case, choices of the algorithms 1 to 4 are made based on an instruction (not
shown) from a player (S2400).
[0391] The algorithm 1 is of a series four-operator (to be abbreviated to as an OP hereinafter)
type, and only the OP4 has a feedback input. More specifically, in the algorithm 1,
a feedback output F
O4 of the OP4 serves as the modulation input M
I4 of the OP4 (S2401),
a modulation output M
O4 of the OP4 serves as a modulation input M
I3 of the OP3 (S2402),
a modulation output OP3 of the OP3 serves as a modulation input M
I2 of the OP2 (S2403),
a modulation output M
O2 of the OP2 serves as a modulation input M
I1 of the OP1 (S2404), and
an output O₁ from the OP1 is added to the value held in the buffer B (Fig. 50)
as a carrier output (S2405).
[0392] In the algorithm 2, as shown in Fig. 52, the OP2 and the OP4 have feedback inputs.
More specifically, in the algorithm 2,
a feedback output F
O4 of the OP4 serves as a modulation input M
I4 of the OP4 (S2406),
a modulation output M
O4 of the OP4 serves as a modulation input M
I3 of the OP3 (S2407),
a feedback output F
O2 of the OP2 serves as a modulation input M
I2 of the OP2 (S2408),
modulation outputs M
O2 and M
O3 of the OP2 and serve as a modulation input M
I1 of the OP1 (S2409), and
an output O₁ from the OP1 is added to the value held in the buffer B as a carrier
output (S2410).
[0393] In the algorithm 3, the OP2 and OP4 have feedback inputs, and two modules in which
two operators are connected in series with each other are connected in parallel with
each other. More specifically, in the algorithm 3,
a feedback output F
O4 of the OP4 serves as a modulation input M
I4 of the OP4 (S2411),
a modulation output M
O4 of the OP4 serves as a modulation input M
I3 of the OP3 (S2412),
a feedback output F
O2 of the OP2 serves as a modulation input M
I2 of the OP2 (S2413),
a modulation output M
O2 of the OP2 serves as a modulation input M
I1 of the OP1 (S2414), and
outputs O₁ and O₃ from the OP1 and OP3 are added to the value held in the buffer
B as carrier outputs (S2415).
[0394] The algorithm 4 is of a parallel four-OP type, and all the OPs have feedback inputs.
More specifically, in the algorithm 4,
a feedback output F
O4 of the OP4 serves as a modulation input M
I4 of the OP4 (S2416),
a feedback output F
O3 of the OP3 serves as a modulation input M
I3 of the OP3 (S2417),
a feedback output F
O2 of the OP2 serves as a modulation input M
I2 of the OP2 (S2418),
a feedback output F
O1 of the OP1 serves as a input M
I1 of the OP1 (S2419), and
outputs O₁, O₂, O₃, and O₄ from all the OPs are added to the value held in the
buffer B (S2420).
[0395] The sound source processing for one channel is completed by the above-mentioned operator
processing and algorithm processing, and tone generation (sound source processing)
continues in this state unless the algorithm is changed.
Modification of Modulation Method (Part 2)
[0396] The second modification of the sound source processing based on the modulation method
will be described below.
[0397] In the various modulation methods described above, processing time is increased as
the complicated algorithms are programmed, and as the number of tone generation channels
(the number of polyphonic channels) is increased.
[0398] In the second modification to be described below, the first modification shown in
Fig. 57 is further developed, so that only operator processing is performed at a given
interrupt timing, and only algorithm processing is performed at the next interrupt
timing. Thus, the operator processing and the algorithm processing are alternately
executed. In this manner, a processing load per interrupt timing can be greatly reduced.
As a result, one sample data per two interrupts is output.
[0399] This operation will be described below with reference to the operation flow chart
shown in Fig. 63.
[0400] In order to alternately execute the operator processing and the algorithm processing,
whether or not a variable S is zero is checked (S2501). The variable is provided for
each tone generation channel, and is stored in the corresponding tone generation channel
area (Fig. 47) on the RAM 2062 or 3062 of the MCPU 1012 or the SCPU 1022.
[0401] If S = 0 at a given interrupt timing, the process enters an operator processing route,
and sets the variable S to a value "1" (S2502). Subsequently, operator 1 to 4 processing
operations are executed (S2503 to S2506). This processing is the Same as that in Figs.
58 and 59, or 60 and 61.
[0402] The process exits from the operator processing route, and executes output processing
for setting a value of the buffer BF (for the FM method) or the buffer BT (for the
TM method) (S2510). The buffer BF or BT is provided for each tone generation channel,
and is stored in the corresponding tone generation channel area (Fig. 47) on the RAM
2062 or 3062 of the MCPU 1012 or the SCPU 1022. The buffer BF or BT stores a waveform
output value after the algorithm processing. At the current interrupt timing, however,
no algorithm processing been executed, and the content of the buffer BF or BT is not
updated. For this reason, the same waveform output value as that at the immediately
preceding interrupt timing is output.
[0403] With the above processing, sound source processing for one tone generation channel
at the current interrupt timing is completed. In this case, data obtained by the current
operator 1 to 4 processing operations are stored in the corresponding tone generation
channel area (Fig. 47) on the RAM 2062 or 3062 of the MCPU 1012 or the SCPU 1022.
[0404] At the next interrupt timing, since the variable S is set to be 1 at the immediately
preceding interrupt timing, the flow advances to step S2507. The process then enters
an algorithm processing route, and sets the variable S to be a value "0". Subsequently,
the algorithm processing is executed (S2508).
[0405] In this processing, the data processed in the operator 1 to 4 processing operations
at the immediately preceding interrupt timing and stored in the corresponding tone
generation channel area (Fig. 47) are used, and processing for determining a modulation
input for the next operator processing is executed. In this processing, the content
of the buffer BF or BT is rewritten, and a waveform output value at that interrupt
timing can be obtained. The algorithm processing is shown in detail in the operation
flow chart of Fig. 64. In this flow chart, the same processing operations as in Fig.
62 are executed in steps denoted by the same reference numerals as in Fig. 62. A difference
between Figs. 62 and 64 is an output portion in steps S2601 to S2604. In the case
of algorithms 1 and 2, the content of the output O₁ of the operator 1 processing is
directly stored in the buffer BF or BT (S2601 and S2602). In the case of the algorithm
3, a value as a sum of the outputs O₁ and O₃ is stored in the buffer BF or BT (S2603).
Furthermore, in the case of the algorithm 4, a value as a sum of the output O₁ and
the outputs O₂, O₃, and O₄ is stored in the buffer BF or BT (S2604).
[0406] As described above, since the operator processing and the algorithm processing are
alternately executed at every other interrupt timing, a processing load per interrupt
timing of the sound source processing program can be remarkably decreased. In this
case, since an interrupt period need not be prolonged, the processing load can be
reduced without increasing an interrupt time of the main operation flow chart shown
in Fig. 37, i.e., without influencing the program operation. Therefore, a keyboard
key sampling interval executed in Fig. 37 will not be prolonged, and the response
performance of an electronic musical instrument will not be impaired.
[0407] The operations for generating musical tone data in units of tone generation channels
by the software sound source processing operations based on various sound source methods
have been described.
Function Key Processing
[0408] The operation of the function key processing (S403) in the main operation flow chart
shown in Fig. 37 when an actual electronic musical instrument is played will be described
in detail below.
[0409] In the above-mentioned sound source processing executed for each tone generation
channel, parameters corresponding to sound source methods are set in the formats shown
in Fig. 49 in the corresponding tone generation channel area (Fig. 47) on the RAM
2062 or 3062 (Figs. 35 and 36) by one of the function keys 8012 (Fig. 45) connected
to the operation panel of the electronic musical instrument via the input port 2102
(Fig. 35) of the MCPU 1012.
[0410] Fig. 65 shows an arrangement of some function keys 8012 shown in Fig. 45. In Fig.
65, some function keys 8012 are realized as tone color switches. When one of switches
"piano", "guitar",..., "koto" in a group
A is depressed, a tone color of the corresponding instrument tone is selected, and
a guide lamp is turned on. Whether the tone color of the selected instrument tone
is generated in the DPCM method or the TM method is selected by a DPCM/TM switch 27012.
[0411] On the other hand, when a switch "tuba" in a group B is depressed, a tone color based
on the FM method is designated; when a switch "bass" is depressed, a tone color on
both the PCM and TM methods is designated; and when a switch "trumpet" is depressed,
a tone color based on the PCM method is designated. Then, a musical tone based on
the designated sound source method is generated.
[0412] Figs. 66 and 67 show of sound source methods to the respective tone generation channel
region (Fig. 47) on the RAM 2062 or 3062 when the switches "piano" and "bass" are
depressed. When the switch "piano" is depressed, the DPCM method is assigned to all
the 8-tone polyphonic tone generation channels of the MCPU 1012 and the SCPU 1022,
as shown in Fig. 66. When the switch "bass" is depressed, the PCM method is assigned
to the odd-numbered tone generation channels, and the TM method is assigned to the
even-numbered tone generation channels, as shown in Fig. 67. Thus, a musical tone
waveform for one musical tone can be obtained by mixing tone waveforms generated in
the two tone generation channels based on the PCM and TM methods. In this case, a
4-tone polyphonic system per CPU is attained, and an 8-tone polyphonic system as a
total of two CPUs is attained.
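The channel assignment just described can be sketched as follows; the function name, channel indexing, and string labels are illustrative assumptions for this sketch, not the patent's actual data format.

```python
# Illustrative sketch (assumed names): map a tone color switch to the
# sound source methods assigned to the 8 tone generation channels.

def assign_methods(tone_color):
    """Return a sound source method for each of 8 tone generation channels."""
    if tone_color == "piano":
        # Fig. 66: the DPCM method in every channel -> 8-tone polyphony
        return ["DPCM"] * 8
    if tone_color == "bass":
        # Fig. 67: PCM in odd-numbered channels, TM in even-numbered ones;
        # each note mixes two channels, so polyphony halves to 4 per CPU
        return ["PCM" if ch % 2 == 1 else "TM" for ch in range(1, 9)]
    raise ValueError("tone color not handled in this sketch")
```

With this sketch, `assign_methods("bass")` yields alternating PCM/TM channels, reflecting the halved polyphony when two channels are mixed per musical tone.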
[0413] Fig. 68 is a partial operation flow chart of the function key processing in step
S403 in the main operation flow chart shown in Fig. 37, and shows processing corresponding
to the tone color designation switch group shown in Fig. 65.
[0414] It is checked if a player operates the DPCM/TM switch 27012 (S2901). If YES in step
S2901, it is checked if a variable M is zero (S2902). The variable M is stored on the
RAM 2062 (Fig. 35) of the MCPU 1012, and has a value "0" for the DPCM method and a
value "1" for the TM method. If YES in step S2902, i.e., if it is determined that the value
of the variable M is 0, the variable M is set to be a value "1" (S2903). This means
that the DPCM/TM switch 27012 is depressed in the DPCM method selection state, and
the selection state is changed to the TM method selection state. However, if NO in
step S2902, i.e., if it is determined that the value of the variable M is "1", the
variable M is set to be a value "0" (S2904). This means that the DPCM/TM switch 27012
is depressed in the TM method selection state, and the selection state is changed
to the DPCM method selection state.
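The toggle performed in steps S2902 to S2904 can be sketched as follows; the function name is an assumption for illustration.

```python
# Illustrative sketch (assumed name): toggle the DPCM/TM selection held
# in the variable M, as in steps S2902-S2904 of Fig. 68.

def toggle_dpcm_tm(m):
    """M == 0 selects the DPCM method; M == 1 selects the TM method."""
    return 1 if m == 0 else 0
```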
[0415] It is checked if a tone color in the group
A shown in Fig. 65 is currently designated (S2905). Since the DPCM/TM switch 27012
is valid for tone colors of only the group
A, only when a tone color in the group
A is designated, and YES is determined in step S2905, operations corresponding to the
DPCM/TM switch 27012 in steps S2906 to S2908 are executed.
[0416] It is checked if the variable M is "0" (S2906).
[0417] If YES in step S2906, since the DPCM method is selected by the DPCM/TM switch 27012,
DPCM data are set in the DPCM format shown in Fig. 49 in the corresponding tone generation
channel areas on the RAMs 2062 and 3062 (Figs. 35 and 36). More specifically, sound
source method No. data G indicating the DPCM method is set in the start area of the
corresponding tone generation channel area (see the column of DPCM in Fig. 49). Subsequently,
various parameters corresponding to currently designated tone colors are respectively
set in the second and subsequent areas of the corresponding tone generation channel
area (S2907).
[0418] If NO in step S2906, since the TM method is selected by the DPCM/TM switch 27012,
TM data are set in the TM format shown in Fig. 49 in the corresponding tone generation
channel areas. More specifically, sound source method No. data G indicating the TM
method is set in the start area of the corresponding tone generation channel area.
Subsequently, various parameters corresponding to currently designated tone colors
are respectively set in the second and subsequent areas of the corresponding tone
generation channel area (S2908).
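The write pattern shared by steps S2907 and S2908 can be sketched as below; the channel area is modeled as a plain list, and the function name and layout details are assumptions for illustration.

```python
# Illustrative sketch (assumed layout): write the sound source method
# No. data G into the start area of a tone generation channel area, then
# the tone color parameters into the second and subsequent areas.

def set_channel_area(channel_area, method_no_g, params):
    channel_area[0] = method_no_g              # sound source method No. data G
    channel_area[1:1 + len(params)] = params   # tone color parameters
    return channel_area
```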
[0419] A case has been exemplified wherein the DPCM/TM switch 27012 shown in Fig. 65 is
operated. If the switch 27012 is not operated and NO is determined in step S2901,
or if a tone color of the group
A is not designated and NO is determined in step S2905, processing from step S2909
is executed.
[0420] It is checked in step S2909 if a change in tone color switch shown in Fig. 65 is
detected.
[0421] If NO in step S2909, since processing for the tone color switches need not be executed,
the function key processing (S403 in Fig. 37) is ended.
[0422] If it is determined that a change in tone color switch is detected, and YES is determined
in step S2909, it is checked if a tone color in the group B is designated (S2910).
[0423] If a tone color in the group B is designated, and YES is determined in step S2910,
data for the sound source method corresponding to the designated tone color are set
in the predetermined format in the corresponding tone generation channel areas on
the RAMs 2062 and 3062 (Figs. 35 and 36). More specifically, sound source method No.
data G indicating the sound source method is set in the start area of the corresponding
tone generation channel area (Fig. 49). Subsequently, various parameters corresponding
to the currently designated tone color are respectively set in the second and subsequent
areas of the corresponding tone generation channel area (S2911). For example, when
the switch "bass" in Fig. 65 is selected, data corresponding to the PCM method are
set in the odd-numbered tone generation channel areas, and data corresponding to the
TM method are set in the even-numbered tone generation channel areas.
[0424] If it is determined that the tone color switch in the group
A is designated, and NO is determined in step S2910, it is checked if the variable
M is "1" (S2912). If the TM method is currently selected, and YES is determined in
step S2912, data are set in the TM format (Fig. 49) in the corresponding tone generation
channel area (S2913) like in step S2908 described above.
[0425] If the DPCM method is selected, and NO is determined in step S2912, data are set
in the DPCM format (Fig. 49) in the corresponding tone generation channel area (S2914)
like in step S2907 described above.
Embodiment A of ON Event Keyboard Key Processing
[0426] The operation of the keyboard key processing (S405) in the main operation flow chart
shown in Fig. 37 executed when an actual electronic musical instrument is played will
be described below.
[0427] The first embodiment of ON event keyboard key processing will be described below.
[0428] In this embodiment, when a tone color in the group
A shown in Fig. 65 is designated, the sound source method to be set in the corresponding
tone generation channel area of the RAM 2062 or 3062 (Figs. 35 and 36) is automatically
switched in accordance with an ON key position, i.e., a tone range of a musical tone.
This embodiment has a boundary between key code numbers 31 and 32 on the keyboard
shown in Fig. 46. That is, when a key code of an ON key falls within a bass tone range
equal to or lower than the 31st key code, the DPCM method is assigned to the corresponding
tone generation channel. On the other hand, when a key code of an ON key falls within
a high tone range equal to or higher than the 32nd key code, the TM method is assigned
to the corresponding tone generation channel. When a tone color in the group B in
Fig. 65 is designated, no special keyboard key processing is executed.
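The key-range rule described above can be sketched as follows; the function name is an assumption, but the boundary between key codes 31 and 32 is as stated in the text.

```python
# Illustrative sketch (assumed name): choose the sound source method
# from the ON-key position, with the boundary between key codes 31 and 32.

def method_for_key(key_code):
    return "DPCM" if key_code <= 31 else "TM"  # bass range vs. high range
```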
[0429] Fig. 69 is a partial operation flow chart of the keyboard key processing in step
S405 in the main operation flow chart of Fig. 37.
[0430] It is checked if a tone color in the group
A is currently designated (S3001).
[0431] If NO in step S3001, and a tone color in the group B is currently designated, the
special processing in Fig. 69 is not performed.
[0432] If YES in step S3001, and a tone color in the group
A is currently designated, it is checked if a key code of a key which is detected as
an "ON key" in the keyboard key scanning processing in step S404 in the main operation
flow chart shown in Fig. 37 is equal to or lower than the 31st key code (S3002).
[0433] If a key in the bass tone range equal to or lower than the 31st key code is depressed,
and YES is determined in step S3002, it is checked if the variable M is "1" (S3003).
The variable M is set in the operation flow chart shown in Fig. 68 as a part of the
function key processing in step S403 in the main operation flow chart shown in Fig.
37, and is "0" for the DPCM method; "1" for the TM method, as described above.
[0434] If YES (M = "1") in step S3003, i.e., if it is determined that the TM method is currently
designated as the sound source method, DPCM data in Fig. 49 are set in a tone generation
channel area of the RAM 2062 or 3062 (Figs. 35 and 36) where the ON key is assigned
so as to change the TM method to the DPCM method as a sound source method for the
bass tone range (see the column of DPCM in Fig. 49). More specifically, sound source
method No. data G indicating the DPCM method is set in the start area of the corresponding
tone generation channel area. Subsequently, various parameters corresponding to the
currently designated tone color are respectively set in the second and subsequent
areas of the corresponding tone generation channel area (S3004). Thereafter, a value
"1" is set in a flag C (S3005). The flag C is a variable (Fig. 49) stored in each
tone generation channel area on the RAM 2062 (Fig. 35) of the MCPU 1012, and is used
in OFF event processing to be described later with reference to Fig. 71.
[0435] If it is determined that a key in the high tone range equal to or higher than the
32nd key code is depressed, and NO is determined in step S3002, it is checked if the
variable M is "1" (S3006).
[0436] If NO (M = "0") in step S3006, i.e., if it is determined that the DPCM method is
currently designated as the sound source method, TM data in Fig. 49 are set in a tone
generation channel area of the RAM 2062 or 3062 (Figs. 35 and 36) where the ON key
is assigned so as to change the DPCM method to the TM method as a sound source method
for the high tone range (see the column of TM in Fig. 49). More specifically, sound
source method No. data G indicating the TM method is set in the start area of the
corresponding tone generation channel area. Subsequently, various parameters corresponding
to the currently designated tone color are respectively set in the second and subsequent
areas of the corresponding tone generation channel area (S3007). Thereafter, a value
"2" is set in a flag C (S3008).
[0437] In the above-mentioned processing, if NO in step S3003 or if YES in step S3006,
since the desired sound source method is already selected, no special processing is
executed.
Embodiment B of ON Event Keyboard Key Processing
[0438] The second embodiment of the ON event keyboard key processing will be described below.
[0439] In the embodiment B of the ON event keyboard key processing, when a tone color in
the group
A in Fig. 65 is designated, a sound source method to be set in the corresponding tone
generation channel area (Fig. 47) on the RAM 2062 or 3062 (Figs. 35 and 36) of the
MCPU 1012 or the SCPU 1022 is automatically switched in accordance with an ON key
speed, i.e., a velocity. In this case, a switching boundary is set at a velocity value
"64" half the maximum value "127" of the MIDI (Musical Instrument Digital Interface)
standards. That is, when the velocity value of an ON key is equal to or larger than
64, the DPCM method is assigned; when the velocity value of an ON key is smaller than
64, the TM method is assigned. When a tone color in the group B in Fig. 65 is
designated, no special keyboard key processing is executed.
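The velocity rule described above can be sketched as follows; the function name is an assumption, but the boundary at velocity 64, half the MIDI maximum of 127, is as stated in the text.

```python
# Illustrative sketch (assumed name): choose the sound source method
# from the ON-key velocity, with the boundary at half the MIDI maximum.

def method_for_velocity(velocity):
    return "DPCM" if velocity >= 64 else "TM"  # fast vs. slow key operation
```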
[0440] Fig. 70 is a partial operation flow chart of the keyboard key processing in step
S405 in the main operation flow chart shown in Fig. 37.
[0441] It is checked if a tone color in the group
A in Fig. 65 is currently designated (S3101).
[0442] If NO in step S3101, and a tone color in the group B is presently selected, the special
processing in Fig. 70 is not executed.
[0443] If YES in step S3101, and a tone color in the group
A is presently selected, it is checked if the velocity of a key which is detected as
an "ON key" in the keyboard key scanning processing in step S404 in the main operation
flow chart shown in Fig. 37 is equal to or larger than 64 (S3102). Note that the velocity
value "64" corresponds to "mp (mezzo piano)" of the MIDI standards.
[0444] If it is determined that the velocity value is equal to or larger than 64, and YES
is determined in step S3102, it is checked if the variable M is "1" (S3103). The variable
M is set in the operation flow chart shown in Fig. 68 as a part of the function key
processing in step S403 in the main operation flow chart shown in Fig. 37, and is
"0" for the DPCM method; "1" for the TM method, as described above.
[0445] If YES (M = "1") in step S3103, and the TM method is currently designated as the
sound source method, DPCM data in Fig. 49 are set in a tone generation channel area
of the RAM 2062 or 3062 (Figs. 35 and 36) where the ON key is assigned so as to change
the TM method to the DPCM method as a sound source method for a fast ON key operation
(S3104), and a value "1" is set in the flag C (S3105).
[0446] If it is determined that the velocity value is smaller than 64 and NO is determined
in step S3102, it is further checked if the variable M is "1" (S3106).
[0447] If NO (M = "0") in step S3106, i.e., if the DPCM method is currently designated as the sound
source method, TM data in Fig. 49 are set in a tone generation channel area of the
RAM 2062 or 3062 where the ON key is assigned so as to change the DPCM method to the
TM method as a sound source method for a slow ON key operation (S3107). Thereafter,
a value "2" is set in the flag C (S3108).
[0448] In the above-mentioned processing, if NO in step S3103 or if YES in step S3106,
since the desired sound source method is already selected, no special processing
is executed.
Embodiment of OFF Event Keyboard Processing
[0449] The embodiment of the OFF event keyboard key processing will be described below.
[0450] According to the above-mentioned ON event keyboard key processing, the sound source
method is automatically set in accordance with a key range (tone range) or a velocity.
Upon an OFF event, the set sound source method must be restored. The embodiment of
the OFF event keyboard key processing to be described below can realize this processing.
[0451] Fig. 71 is a partial operation flow chart of the keyboard key processing in step
S405 in the main operation flow chart shown in Fig. 37.
[0452] The value of the flag C set in the tone generation channel area on the RAM 2062 or
3062 (Figs. 35 and 36), where the key determined as an "OFF key" in the keyboard key
scanning processing in step S404 in the main operation flow chart of Fig. 37 is assigned,
is checked. The flag C is set in step S3005 or S3008 in Fig. 69, or in step S3105
or S3108 in Fig. 70. It has an initial value "0", is set to "1" when the sound source
method is changed from the TM method to the DPCM method upon an ON event, and is set
to "2" when the sound source method is changed from the DPCM method to the TM method.
When the sound source method is left unchanged upon an ON event, the flag C is left
at the initial value "0".
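The restore decision driven by the flag C can be sketched as follows; the function name and string labels are assumptions for illustration.

```python
# Illustrative sketch (assumed names): decide which sound source method
# to restore on an OFF event from the value of the flag C.

def method_to_restore(flag_c):
    if flag_c == 1:
        return "TM"    # ON event changed TM -> DPCM; restore TM (S3202)
    if flag_c == 2:
        return "DPCM"  # ON event changed DPCM -> TM; restore DPCM (S3203)
    return None        # flag C == 0: method was left unchanged (S3201)
```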
[0453] If it is determined in step S3201 in the OFF event processing in Fig. 71 that the
value of the flag C is "0", since the sound source method is left unchanged in accordance
with a key range or a velocity, no special processing is executed, and normal OFF
event processing is performed.
[0454] If it is determined in step S3201 that the value of the flag C is "1", the sound
source method was changed from the TM method to the DPCM method upon an ON event. Thus,
TM data in Fig. 49 is set in the tone generation channel area on the RAM 2062 or 3062
(Fig. 35 or 36) where the ON key is assigned to restore the sound source method to
the TM method. More specifically, sound source No. data G indicating the TM method
is set in the start area of the corresponding tone generation channel area. Subsequently,
various parameters corresponding to the presently designated tone color are respectively
set in the second and subsequent areas of the corresponding tone generation channel
area (S3202).
[0455] If it is determined in step S3201 that the value of the flag C is "2", the sound
source method was changed from the DPCM method to the TM method upon an ON event. Thus, DPCM data in
Fig. 49 is set in the tone generation channel area on the RAM 2062 or 3062 where the
ON key is assigned to restore the sound source method from the TM method to the DPCM
method. More specifically, sound source method No. data G indicating the DPCM method
is set in the start area of the corresponding tone generation channel area. Subsequently,
various parameters corresponding to the presently designated tone color are respectively
set in the second and subsequent areas of the corresponding tone generation channel
area (S3203).
[0456] After the above-mentioned operation, the value of the flag C is reset to "0", and
the processing in Fig. 71 is completed. Subsequently, normal OFF event processing
(not shown) is executed.
Other Embodiments
[0457] In the embodiments of the present invention described above, as shown in Fig.
34, the two CPUs, i.e., the MCPU 1012 and the SCPU 1022 share processing of different
tone generation channels. However, the number of CPUs may be one or three or more.
[0458] If the control ROMs 2012 and 3012 shown in Figs. 35 and 36, and the external memory
1162 are constituted by, e.g., ROM cards, various sound source methods can be presented
to a user by means of the ROM cards.
[0459] Furthermore, the input port 2102 of the MCPU 1012 shown in Fig. 35 can be connected
to various other operation units in addition to the instrument operation unit shown
in Fig. 45. Thus, various other electronic musical instruments can be realized. In
addition, the present invention may be realized as a sound source module for executing
only the sound source processing while receiving performance data from another electronic
musical instrument.
[0460] Various methods of assigning sound source methods to tone generation channels by
the function keys 8012 or the keyboard keys 8022 in Fig. 45 including those based
on tone colors, tone ranges, and velocities, may be proposed.
[0461] In addition to the FM and TM methods, the present invention may be applied to various
other modulation methods.
[0462] In the modulation method, the above embodiment exemplifies a 4-operator system. However,
the number of operators is not limited to this.
[0463] In this manner, according to the present invention, a musical tone waveform generation
apparatus can be constituted by versatile processors without requiring a special-purpose
sound source circuit at all. For this reason, the circuit scale of the overall musical
tone waveform generation apparatus can be reduced, and the apparatus can be manufactured
in the same manufacturing technique as a conventional microprocessor when the apparatus
is constituted by an LSI, thus improving the yield of chips. Therefore, manufacturing
cost can be greatly reduced. Note that a musical tone signal output unit can be constituted
by a simple latch circuit, resulting in almost no increase in manufacturing cost after
the output unit is added.
[0464] When the modulation method is required to be changed between a phase modulation method
and a frequency modulation method, or when the number of polyphonic channels is required
to be changed, a sound source processing program to be stored in a program storage
means need only be changed to meet the above requirements. Therefore, development
cost of a new musical tone waveform generation apparatus can be greatly decreased,
and a new sound source method can be presented to a user by means of, e.g., a ROM
card.
[0465] In this case, since a data architecture for attaining a data link between a performance
data processing program and a sound source processing program via musical tone generation
data on a data storage means, and a program architecture for executing the sound source
processing program at predetermined time intervals while interrupting the performance
data processing program are realized, two processors need not be synchronized, and
the programs can be greatly simplified. Thus, complicated sound source processing
such as the modulation method can be executed with a sufficient margin.
[0466] Furthermore, since a change in processing time depending on the type of modulation
method or a selected musical tone generation algorithm in the modulation method can
be absorbed by a musical tone signal output means, a complicated timing control program
for outputting a musical tone signal to, e.g., a D/A converter can be omitted.
[0467] Furthermore, the present invention has, as an architecture of the sound source processing
program, a processing architecture for executing algorithm processing operations, as
I/O processing among operator processing operations, before or after the execution of
at least one operator processing operation as a modulation processing
unit. For this reason, when one of a plurality of algorithms is selected to execute
sound source processing, a plurality of types of algorithm processing portions are
prepared, and need only be switched as needed. Therefore, the sound source processing
program can be rendered very compact. The small program size can greatly contribute
to a compact, low-cost musical tone waveform generation apparatus.