(19)
(11) EP 0 396 141 A2

(12) EUROPEAN PATENT APPLICATION

(43) Date of publication:
07.11.1990 Bulletin 1990/45

(21) Application number: 90108393.1

(22) Date of filing: 03.05.1990
(51) International Patent Classification (IPC⁵): G10H 5/00, G10L 5/04
(84) Designated Contracting States:
DE FR GB

(30) Priority: 04.05.1989 US 347367

(71) Applicant: Schneider, Florian
D-4000 Düsseldorf (DE)

(72) Inventors:
  • Schneider, Florian
    D-4000 Düsseldorf (DE)
  • Ott, Gert Joachim
    D-7530 Pforzheim (DE)
  • Jalass, Gert
    D-8000 München 70 (DE)

(74) Representative: Lehn, Werner, Dipl.-Ing. et al
Hoffmann, Eitle & Partner, Patentanwälte, Postfach 81 04 20
81904 München (DE)

   


(54) System for and method of synthesizing singing in real time


(57) A system for synthesizing singing in real time, comprises a plurality of manually actuatable keys (1) for producing different first electrical signals corresponding to each of the plurality of keys actuated and varying in real time with duration and frequency of actuation, a musical instrument digital interface (MIDI) (15) receptive of the electrical signals from the plurality of manually actuatable keys (1) for generating standard data output signals in real time for musical notes corresponding to the first electrical signals, a phoneme speech synthesizer (17) receptive of phoneme codes for generating real time analog signals corresponding to a singing voice and a translator (10, 12, 13) receptive of the MIDI output signals for converting same in real time to phoneme codes and for applying same to the speech synthesizer (17) in real time.




Description

BACKGROUND OF THE INVENTION



[0001] The present invention relates to voice synthesizing and in particular to a system for the synthesizing of singing in real time.

[0002] Speech synthesizing systems are known wherein the singing of a song is simulated. For example, two systems of this type are shown in US-A-4,731,847 and US-A-4,527,274.

[0003] These known systems, while being capable of synthesizing the singing of a song, are not capable of doing so in real time wherein the frequency, duration, pitch, etc. of the synthesized voice can be varied in response to a real time manually actuatable input from, for example, a keyboard.

SUMMARY OF THE INVENTION



[0004] The main object of the present invention is to provide a system for and method of synthesizing the singing of a song in real time.

[0005] The solution to this problem is apparent from claims 1 and 6, respectively, whereas further developments of the invention can be taken from claims 2 to 5 and 7 to 10.

[0006] By making use of musical instrument digital interface hardware (MIDI) and a speech processor integrated circuit, it is possible to create a singing voice directly from a keyboard input in real time, that is, with the duration and frequency at which the keyboard keys are actuated controlling the time, loudness and pitch of the singing voice.

[0007] By using the MIDI device to generate the codes from a keyboard input, any speech synthesizing device can be used in the system according to the present invention.

[0008] The way in which the sounds of the speech synthesizer are activated by the MIDI commands is totally open in terms of input procedure and independent of the type of hardware or manufacturer as long as the MIDI protocol is used.

[0009] As opposed to the conventional systems, the sounds can be played in real time. Thus the duration of the sounds is dependent upon how long a key on the keyboard is depressed and is not determined by a preprogrammed duration as described in US-A-4,527,274. Whereas prior art systems did not make it possible to play along with music, the present invention enables one to do so. In fact, the quality of the sounds can be manipulated in real time by changing the vocal tract length; the dynamic values can be altered in real time; the interpolation timing or transfer function between one sound and the following sound can be altered in real time; the pitch of the sounds can be changed in real time independently of the above-mentioned parameters; and the envelope of the sounds, which includes the attack and decay, is programmable.

[0010] The sound and pitch range of the synthesizer can be further enhanced and improved by inserting an external sound source into the chip and replacing the internal excitation function. The external sound can easily be taken from any sound source, allowing one to feed in different wave forms or noise to simulate whispering. Also chords or any composed sound could be fed into the speech synthesizer filters. Thus both the sound characteristics of the speech synthesizer and the external sound sources can be drastically changed. The expressive subtleties of singing, such as vibrato and portamento, are also obtainable, which is not possible with the prior art devices.

[0011] The present invention also enables one to store the MIDI generated data in any MIDI memory device, from which it can be loaded and transferred onto storage media because of the international MIDI standard. Thus no specific computer software or hardware is needed beyond the ability to understand MIDI codes.

[0012] The speech synthesizer and MIDI combination can be used to generate a large number of electronic sounds and organ-like timbres. The combination can also be used to generate spectra with vocal qualities resembling a voice choir, and additionally one can program sequences in such a specific way that speech-like sounds or even intelligible speech can be generated.

[0013] The attributes of the MIDI commands allow high-resolution synchronization with the MIDI time code, which means that one can perfectly synchronize one or more MIDI speech synthesizers with one another and with existing or prerecorded music. In addition, with multitrack recording, a cappella music or a synthetic vocal orchestra can be realized.

[0014] With any sequencer, synthesized lyrics can be programmed and edited so that a synthetic voice sings along with the music, synchronized by the MIDI time code. Because the synchronization relates proportionally to the speed of the music, the strings of speech sounds can be speeded up or slowed down in relation to the rhythm, and the result is 100% accurate timing which can hardly be achieved by a human singer.

[0015] Moreover, a phoneme editor compiler can be utilized to easily and accurately synchronize the system.

[0016] In accordance with the invention, the system comprises the combination of the MIDI interface and speech synthesis in a musical context.

[0017] The sounds are configured on a MIDI keyboard according to the sounding properties of the speech sounds, from dark to bright vowels, followed by the voiced consonants and voiceless consonants and finally plosives, so that a great variety of musically useful timbres like formants, noise bands, stationary sounds and percussive sounds can be generated to create electronic sounds covering a broad spectrum.

[0018] Different vocal qualities can be obtained and the speech synthesizer's array of narrow band pass filters allow a wide range from subtle coloring to extreme distortion of sound sources.

[0019] By using more than one speech synthesizer at a time, choir or organ-like effects can be played.

[0020] According to the invention, five modes of operation are possible:

1. Sequencer mode wherein the speech sounds can be played either by a keyboard or up to N-identical or N-different sounds can be called up by any MIDI sequencer.

2. Polyphonic mode wherein a keyboard generates N-identical speech sounds which can be played in different pitches and selected by the MIDI program change sequences.

3. Monophonic mode wherein N-different speech sounds can be combined like an organ register and played by a single key at a time.

4. Filter mode wherein the filters of the speech synthesizer are in stationary mode for the applied external sound sources and the filter parameters are controlled by the MIDI program change.

5. Split mode wherein a combination of sequencer mode and N-voice polyphonic mode is achieved.
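The five modes above can be summarized as a small configuration table; a minimal sketch follows, in which the table entries merely restate the text and the `select_mode` lookup is an illustrative stand-in for the mode selection switch (21).

```python
# Illustrative summary of the five operating modes; the short behaviour
# strings paraphrase the text and are not taken verbatim from the patent.
MODES = {
    1: ("sequencer",  "keyboard play, or up to N sounds called up by any MIDI sequencer"),
    2: ("polyphonic", "N-identical sounds played at different pitches"),
    3: ("monophonic", "N-different sounds combined like an organ register, one key at a time"),
    4: ("filter",     "stationary filters applied to external sound sources"),
    5: ("split",      "sequencer mode combined with N-voice polyphonic mode"),
}

def select_mode(switch_position):
    """Look up the (name, behaviour) pair for a mode selection switch position."""
    return MODES[switch_position]
```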



[0021] In addition to this, the speech synthesizer's vocal tract length or filter frequency and interpolation speed can be controlled by the MIDI pitch bend and modulation wheels or any other MIDI controller. Additionally, envelope parameters, in this case attack time and decay time, can be determined for all sounds except the plosives by specific MIDI control change sequences. By using an external sound source instead of the internal excitation signal, the quality of the speech sounds can be altered, enhanced and improved.

[0022] In accordance with the invention, the MIDI implementation is such that keys 36-93 control speech sounds on channels 1, 3, 5 and 7, whereas pitch is controlled on channels 2, 4, 6 and 8. The pitch bend wheel is used to control vocal tract length, the modulation wheel controls the speed of filter interpolation and velocity provides dynamic loudness control.
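The channel and key routing just described can be sketched as follows; the channel numbering follows the text, while the event format and function name are illustrative assumptions.

```python
# Routing sketch for the MIDI implementation described above.
PHONEME_CHANNELS = {1, 3, 5, 7}   # keys 36-93 select speech sounds here
PITCH_CHANNELS = {2, 4, 6, 8}     # note numbers select singing pitch here
KEY_RANGE = range(36, 94)         # keys 36-93, covering the phoneme map of Fig. 2

def route(channel, note, velocity):
    """Classify an incoming Note On event per the implementation above."""
    if note not in KEY_RANGE:
        return ("ignored", None)
    if channel in PHONEME_CHANNELS:
        # velocity provides the dynamic loudness of the selected speech sound
        return ("phoneme", (note, velocity))
    if channel in PITCH_CHANNELS:
        return ("pitch", note)
    return ("ignored", None)
```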

[0023] These and other features and advantages of the present invention will be seen from the following detailed description in conjunction with the attached referenced drawings, wherein:

BRIEF DESCRIPTION OF THE DRAWINGS



[0024] 

Fig. 1 is a block diagram of a system according to the invention;

Fig. 2 is an illustration of a MIDI keyboard and the voice sounds associated with each key;

Fig. 3 is a circuit diagram of an analog circuit of a speech synthesizer; and

Fig. 4 is a circuit diagram of a digital circuit of a speech synthesizer.


DETAILED DESCRIPTION OF THE INVENTION



[0025] Referring to Fig. 1, it is to be noted that the system employs a phoneme speech synthesizer 17, now produced by Artic, formerly by the Votrax Division of the Federal Screw Works, Troy, Michigan, USA, specifically the Votrax SC-02 speech synthesizer, the data sheet for which is incorporated herein by reference. Also incorporated by reference is the MIDI specification 1.0 by the International MIDI Association, Sun Valley, CA, USA, which is implemented in the MIDI interface 15.

[0026] The system also includes a processing unit consisting of CPU 10, which may be a 6502 CPU, a ROM 12 for data storage purposes, which may be a 6116, and Address Decoding Logic 13. These parts are connected via a common computer bus 14 containing address, data and command lines. Timing is controlled by a clock generator 18.

[0027] Also connected to the computer bus are the MIDI interface 15, mainly an ACIA 6850, and a buffer register 16, consisting of two 74LS245s.

[0028] One or more speech processor chips 17 are connected to the buffer 16. These chips are provided with additional audio circuitry: the audio output 19, which buffers the audio output of the SC-02 and makes it suitable for audio amplifiers, and the audio input 20 at pins 3 and 5, which allows an external audio signal, e.g. from a synthesizer, to be fed into the SC-02, thus replacing the internal tone generator. The tone source can be selected using an appropriate switch 22.

[0029] The main taks of the processing unit is to receive MIDI data from interface 15, to translate it in several ways into SC-02 specific data and to output these data via the buffer 16 to the SC-02 17. The SC-02 provides 8 registers, of which 5 are used in the singing process Phoneme, Inflection, Articulation, Amplitude and Filter Frequency. The kind of interpreting and translating of the received MIDI data into data for these registers can depend on the switch position of the mode selection 21 or can be controlled by specific MIDI data, e.g. Program Change events.

[0030] In one embodiment, the MIDI Note On events received on a MIDI channel N from keyboard 1, shown in Fig. 2, turn on a specific phoneme by writing the number of the phoneme into the Phoneme register of the SC-02. The phoneme number is generated by a translation table which translates the Note Number into a phoneme number. The Velocity of the Note On event is used to affect the Amplitude register of the SC-02. Note Off events received on channel N are used to turn off the phoneme by writing the code for the phoneme 'Pause' into the Phoneme register. Note On events received on channel N + 1 are used to select the appropriate singing frequency. The Note Numbers of these events are translated into values for the Inflection register of the SC-02, which enables the SC-02 to produce singing sounds over a wide range based on a tuning frequency of 440 Hz for a′. Pitch Wheel Change events are translated into values for the Filter Frequency register and Continuous Controller events for the Modulation Wheel are translated into values for the Articulation register. This way of interpretation allows the user to make the device sing with full control of the relevant parameters. For convenience, the user can prerecord the events using a MIDI compatible sequencer or special event editing software on a MIDI compatible computer.
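This first embodiment's event translation can be sketched as below. The event tuple format, the `write()` callback and the translation-table contents are illustrative assumptions; only the mapping of event kinds to registers comes from the text.

```python
# Sketch of the first embodiment: MIDI events -> SC-02 register writes.
N = 1                              # MIDI channel carrying the phoneme keys (example)
PAUSE = 0                          # assumed code for the 'Pause' phoneme
NOTE_TO_PHONEME = {60: 42}         # Note Number -> phoneme number (contents illustrative)
NOTE_TO_INFLECTION = {69: 64}      # Note Number -> Inflection value (contents illustrative)

def handle_event(event, write):
    """Translate one MIDI event into SC-02 register writes via write(register, value)."""
    kind, channel, data1, data2 = event
    if kind == "note_on" and channel == N:
        write("Phoneme", NOTE_TO_PHONEME[data1])        # key selects the phoneme
        write("Amplitude", data2)                       # velocity -> Amplitude register
    elif kind == "note_off" and channel == N:
        write("Phoneme", PAUSE)                         # silence via the 'Pause' phoneme
    elif kind == "note_on" and channel == N + 1:
        write("Inflection", NOTE_TO_INFLECTION[data1])  # note -> singing frequency
    elif kind == "pitch_wheel":
        write("Filter Frequency", data1)
    elif kind == "mod_wheel":
        write("Articulation", data1)
```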

[0031] In a second embodiment, the MIDI Note On events received on a MIDI channel N are used to select the appropriate singing frequency. The Note Numbers of these events are translated into values for the Inflection register like above. The velocity of the Note On event is used to affect the Amplitude register of the SC-02. Note Off events received on channel N are used to turn off the phoneme by writing the value 0 into the Amplitude register. Program Change events turn on a specific phoneme by writing the number of the phoneme into the Phoneme register of the SC-02. The phoneme number is generated by a translation table which translates the Program Number into a phoneme number. Pitch Wheel Change events are translated into values for the Filter Frequency and Continuous Controller events for Modulation Wheel are translated into values for the Articulation register. This way of interpretation allows the user to play the SC-02 using MIDI compatible keyboard 1 in a way similar to a common expander device with special voice-like sounds.
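The second embodiment differs mainly in where the phoneme and pitch selections come from; a sketch under the same illustrative assumptions (placeholder tables, event tuples, `write()` callback) follows.

```python
# Sketch of the second embodiment: pitch from Note On, phoneme from Program Change.
N = 1                              # MIDI channel used for playing (example)
NOTE_TO_INFLECTION = {69: 64}      # Note Number -> Inflection value (contents illustrative)
PROGRAM_TO_PHONEME = {5: 42}       # Program Number -> phoneme number (contents illustrative)

def handle_event(event, write):
    """Translate one MIDI event into SC-02 register writes via write(register, value)."""
    kind, channel, data1, data2 = event
    if kind == "note_on" and channel == N:
        write("Inflection", NOTE_TO_INFLECTION[data1])  # note selects singing frequency
        write("Amplitude", data2)                       # velocity -> Amplitude register
    elif kind == "note_off" and channel == N:
        write("Amplitude", 0)                           # silence via zero amplitude
    elif kind == "program_change":
        write("Phoneme", PROGRAM_TO_PHONEME[data1])     # program selects the phoneme
```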

[0032] Other ways of interpreting MIDI data are possible and useful, especially if an implementation of the invention employs more than one SC-02 sound processor, which is easily possible and enables the user to play as on a polyphonic keyboard, or to produce choir-like singing.

[0033] Two preferred implementations of the invention include using a single SC-02 with 2 modes of interpreting the MIDI events, and using four SC-02 chips with 6 modes of interpreting the MIDI events.

[0034] A MIDI compatible sequencer program is designed for the system, using a phoneme editor-compiler which synchronizes syllables of speech to the MIDI time code.

[0035] Any text editor can be used to write a "phonetic score" which determines all parameters for the artificial singer, such as pitch, dynamics, timing, etc.

[0036] The digitally controlled analogue chip uses the concept of phoneme synthesis, which makes it relatively easy to program for real-time applications.

[0037] The sounding elements of speech or singing are defined as vowels, voiced and unvoiced consonants and plosives; from these, a variety of timbres like formants, noise bands and percussive sounds is generated.

[0038] The 54 phonemes of the chip are configured to match 54 MIDI notes on any MIDI keyboard from 36 to 93, in groups of vowels, consonants etc. as shown in Fig. 2. The speech synthesizer functions like a MIDI expander, to be "played" with a keyboard or driven by the "speech-sequencer" program. In addition to this, the chip is suitable as a multi-bandpass filter to process external sound sources.

[0039] Any text to be uttered or sung is a potential "phonetic score". First a text is analyzed for its phonemic structure, through a text-to-speech conversion either automatically by rules or by manual input.

EXAMPLE



[0040] 



[0041] The words, syllables, or phoneme strings have to be divided or chopped into more or less equal frames based on the 24 pulses of the MIDI clock.

Example:



[0042] 



[0043] The "frames" are marked by bars or slashes //// and function as sync marks.

[0044] The program calculates a timebase of 24 divided by the number of phonemes, so that one syllable consisting of 7 phonemes and another consisting of 2 phonemes have the same duration. If the automatic conversion does not sound satisfactory, subtle variations are possible through editing.
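The timebase rule above amounts to dividing the 24 MIDI clock pulses of a frame evenly among its phonemes; a minimal sketch (the function name is illustrative):

```python
# Each frame spans the 24 pulses of the MIDI clock, divided evenly among
# its phonemes, so syllables with different phoneme counts last equally long.
from fractions import Fraction

def phoneme_durations(num_phonemes, pulses_per_frame=24):
    """Return the per-phoneme durations, in MIDI clock pulses, for one frame."""
    return [Fraction(pulses_per_frame, num_phonemes)] * num_phonemes
```

Using exact fractions avoids rounding drift, so a 7-phoneme syllable and a 2-phoneme syllable sum to exactly the same 24 pulses.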

[0045] The next parameter is the dynamic value :L: (loudness).



[0046] Values range from 0 to 9, configured as either a linear or logarithmic equivalent of the 1-127 velocity scale. The last parameter is pitch or tone "T".
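The velocity-to-loudness configuration above can be sketched as below; the text fixes only the 1-127 input range, the 0-9 output range and the linear/logarithmic choice, so the exact formulas here are assumptions.

```python
# Sketch mapping the 1-127 MIDI velocity scale to loudness values 0-9.
import math

def loudness(velocity, curve="linear"):
    """Map a MIDI velocity (1-127) to a loudness value (0-9)."""
    if curve == "linear":
        return round((velocity - 1) * 9 / 126)
    # logarithmic variant: velocity 1 maps to 0, velocity 127 maps to 9
    return round(9 * math.log(velocity) / math.log(127))
```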



[0047] The complete score could be:



[0048] Automatic compression is another option, as well as looping and reversing sequences.



[0049] Additionally, the filter frequency and interpolation rate are controlled in real time by the MIDI pitch bend and modulation wheels.

[0050] It will be appreciated that the instant specification, example and claims are set forth by way of illustration and not limitation, and that various modifications and changes may be made without departing from the spirit and scope of the present invention.


Claims

1. A system for synthesizing singing in real time, comprising:
a plurality of manually actuatable means for producing different first electrical signals corresponding to each of the plurality of means actuated and varying in real time with duration and frequency of actuation;
a musical instrument digital interface receptive of the electrical signals from the plurality of manually actuatable means for generating standard data output signals in real time for musical notes corresponding to the first electrical signals;
at least one phoneme speech synthesizer receptive of phoneme codes for generating real time analog signals corresponding to a singing voice; and
means receptive of the interface output signals for converting same in real time to phoneme codes and for applying same to the speech synthesizer in real time.
 
2. The system according to claim 1, wherein the plurality of actuatable means includes means for varying the resulting vocal tract length in real time.
 
3. The system according to claim 1, wherein the plurality of actuatable means includes means for varying the resulting interpolation timing in real time.
 
4. The system according to claim 1, wherein the plurality of actuatable means includes means for varying the resulting pitch in real time.
 
5. The system according to claim 1, comprising a plurality of speech synthesizers, each corresponding to a different singing voice.
 
6. A method of synthesizing singing in real time, comprising:
producing different first electrical signals corresponding to each of a plurality of manually actuatable means actuated and varying the signals in real time with duration and frequency of actuation;
generating standard musical instrument digital interface (MIDI) data output signals in real time for musical notes corresponding to the first electrical signals;
converting the MIDI output signals in real time to phoneme codes; and
generating real time analog audio signals corresponding to a singing voice in a phoneme speech synthesizer from the phoneme codes.
 
7. The method according to claim 6, wherein the generating of first electrical signals includes varying the electrical signals in real time to vary the resulting vocal tract length in real time.
 
8. The method according to claim 6, wherein the generating of first electrical signals includes varying the electrical signals in real time to vary the resulting interpolation timing in real time.
 
9. The method according to claim 6, wherein the generating of first electrical signals includes varying the electrical signals in real time to vary the resulting pitch in real time.
 
10. The method according to claim 6, wherein the step of generating audio signals comprises providing a plurality of speech synthesizers, each corresponding to a different singing voice.
 




Drawing