(11) EP 0 079 578 A1

(12) EUROPEAN PATENT APPLICATION

(43) Date of publication:
25.05.1983 Bulletin 1983/21

(21) Application number: 82110372.8

(22) Date of filing: 10.11.1982
(51) International Patent Classification (IPC)³: G10L 5/06
(84) Designated Contracting States:
DE FR GB

(30) Priority: 18.11.1981 JP 183635/81

(71) Applicant: NIPPONDENSO CO., LTD.
Kariya-shi Aichi-ken (JP)

(72) Inventors:
  • Nojiri, Tadao
    Oobu-shi Aichi-ken (JP)
  • Asada, Hiroshige
    Mizuho-ku Nagoya-shi (JP)
  • Teraura, Nobuyuki
    Kagiya-machi Tokai-shi Aichi-ken (JP)

(74) Representative: KUHNEN, WACKER & PARTNER 
Alois-Steinecker-Strasse 22
85354 Freising (DE)


(56) References cited: DE-A-2 610 439

    (54) Continuous speech recognition method and device


(57) In a system for recognizing a continuous speech pattern, a feature vector αi is extracted by a feature extraction unit (102) from a speech pattern entered through a microphone (101), and a feature vector βjⁿ is read out from a reference pattern memory (140). A first recursive operation unit (DPM1) (105) computes a set of similarity measures sn(i, j) between the feature vectors. A maximum similarity measure at a time point i is determined and produced by a first decision unit (DCS1) (106) and is stored in a maximum similarity memory (MAX) (107). A second recursive operation unit (DPM2) (109) computes a reversed similarity measure. Based on a computed result g(v, 1) and the output from said maximum similarity memory (107), a second decision unit (110) determines a boundary vmax. The word Wu based on the data u = vmax − 1 obtained from the boundary vmax is stored as nx (x = 1,..., Y) in an order reversing unit (REV) (111). The order reversing unit (111) finally reverses the order of the data and produces an output ny (y = 1,..., Y).




    Description

    Background of the Invention



[0001] The present invention relates to a continuous speech recognition method and device for automatically recognizing a plurality of concatenated words, such as numerals, and for producing an output in accordance with the recognized content.

[0002] Speech recognition devices have been considered to be effective means for performing man/machine communication. However, most of the devices developed so far have the disadvantage that only isolated or discrete words can be recognized, so that the data input speed is very low. In order to solve this problem, a continuous speech recognition device which uses a two-level dynamic programming (to be referred to as a two-level DP) algorithm is described in Patent Disclosure DE-A 26 10 439. In principle, this algorithm defines the pattern strings obtained by concatenating several reference patterns in all possible orders as the reference pattern strings of a continuous speech. An input pattern as a whole is mapped onto the reference pattern strings. The number of reference patterns and their arrangement are determined so as to maximize the overall similarity measure between the input pattern and the reference pattern strings. Thus, speech recognition is performed. In practice, maximization is achieved in two stages: maximization over individual words and maximization over word strings. Both maximizations can be performed utilizing the DP algorithm.

    [0003] The two-level DP algorithm will be described in detail below.

[0004] Let a feature vector αi be

αi = (a1i, a2i,..., aQi)    (1)

then a speech pattern A is defined as a time series of the αi:

A = α1, α2,..., αi,..., αI    (2)

where I is the duration of the speech pattern A, and Q is the number of components of the feature vectors. Thus, the speech pattern A is regarded as the input pattern.

[0005] Assume that N reference patterns Bn (n = 1, 2,..., N) are defined as the set of words to be recognized. Each reference pattern Bn has Jn feature vectors as follows:

Bn = β1ⁿ, β2ⁿ,..., βjⁿ,..., βJnⁿ    (3)

where the feature vector βjⁿ is a vector of the same kind as the feature vector αi, as follows:

βjⁿ = (b1jⁿ, b2jⁿ,..., bQjⁿ)    (4)

[0006] The partial pattern of the input pattern A which has a starting point ℓ and an endpoint m on the time base i can be expressed as follows:

A(ℓ, m) = αℓ, αℓ+1,..., αm    (5)

for 1 ≤ ℓ < m ≤ I.

[0007] Between the partial pattern A(ℓ, m) and the reference pattern Bn, a function j(i) which establishes a correspondence between the time base i of the input pattern and the time base j of the reference pattern is optimally determined, and partial matching is performed wherein a maximum value S(A(ℓ, m), Bn) of the sum of the similarity measures s(αi, βjⁿ) (to be referred to as sn(i, j)) between the vectors paired by i and j(i) is computed by the DP algorithm. In the first stage, the partial similarity measure S<ℓ, m>, the maximum of S(A(ℓ, m), Bn) over n, is computed by sequentially changing the starting point ℓ and the endpoint m, and the partial determination result n<ℓ, m> which provides that maximum is also determined. Overall matching is performed at the second stage, wherein the number Y of words included in the input pattern and the (Y − 1) boundaries ℓ(1), ℓ(2),..., ℓ(Y−1) are optimally determined; that is, the number Ŷ of words and the boundaries ℓ̂(1), ℓ̂(2),..., ℓ̂(Ŷ−1) are obtained so as to maximize the sum of the partial similarity measures over continuous and nonoverlapping durations. The sum is given by the following relation:

S<1, ℓ̂(1)> + S<ℓ̂(1)+1, ℓ̂(2)> +...+ S<ℓ̂(Ŷ−1)+1, I>

The boundaries ℓ̂(1), ℓ̂(2),..., ℓ̂(Ŷ−1) and the partial determination results n<ℓ, m> determine the recognition result n<1, ℓ̂(1)>, n<ℓ̂(1)+1, ℓ̂(2)>,..., n<ℓ̂(Ŷ−1)+1, I>.



[0008] In order to correct the deviation between the time bases of the input pattern A given by relation (2) and the reference pattern B given by relation (3), a function j = j(i) which maps the time base j of the reference pattern B onto the time base i of the input pattern A is introduced. Assume that the similarity measure s(i, j) between the vectors αi and βj is suitably defined (for example, as a correlation between the two vectors). The similarity measure between the input pattern A and the reference pattern B is then given as follows:

S(A, B) = max over the mappings j(i) of {s(1, j(1)) + s(2, j(2)) +...+ s(I, j(I))}    (9)

It is impossible to obtain the maximum value of relation (9) by computing all the possibilities for j = j(i). Instead, the DP algorithm is utilized as follows. Let the initial condition be:

g(1, 1) = 2·s(1, 1)    (10)

g(i, j) is then computed in the range of i = 2 to I and j = 1 to J by the following recursive relation:

g(i, j) = max {g(i, j−1) + s(i, j); g(i−1, j−1) + 2·s(i, j); g(i−1, j) + s(i, j)}    (11)

Therefore, S(A, B) of relation (9) is given by:

S(A, B) = g(I, J)/(I + J)    (12)

In practice the deviation of the time bases may not exceed 50%, so that only the hatched region bounded by the lines 11 and 12 about the line 15 indicated by "i = j" in Fig. 1 need be considered. Therefore, recursive relation (11) need only be applied in the range:

|i − j| ≤ r    (13)

The above hatched region is called an adjustment window.
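By way of illustration, a minimal sketch of this conventional matching procedure follows (Python, 0-based indices; the inner-product similarity and the omission of the adjustment window of relation (13) are assumptions made only for brevity):

NEG_INF = float("-inf")

def dp_similarity(A, B):
    """Similarity S(A, B) between an input pattern A and a reference
    pattern B (lists of Q-dimensional feature vectors), computed with
    the symmetric recursion of relation (11) and normalized as in
    relation (12)."""
    I, J = len(A), len(B)
    # assumed similarity: inner product of the feature vectors
    s = [[sum(a * b for a, b in zip(A[i], B[j])) for j in range(J)]
         for i in range(I)]
    g = [[NEG_INF] * J for _ in range(I)]
    g[0][0] = 2 * s[0][0]                       # initial condition (10)
    for i in range(I):
        for j in range(J):
            if i == 0 and j == 0:
                continue
            cands = []
            if i > 0:
                cands.append(g[i - 1][j] + s[i][j])
            if i > 0 and j > 0:
                cands.append(g[i - 1][j - 1] + 2 * s[i][j])
            if j > 0:
                cands.append(g[i][j - 1] + s[i][j])
            g[i][j] = max(cands)                # recursive relation (11)
    return g[I - 1][J - 1] / (I + J)            # normalization (12)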

[0009] For one starting point ℓ, the partial similarity measures are obtained for the endpoints m in the range indicated by reference numeral 14 in Fig. 1. The hatched region in Fig. 1 thus represents all the "(2 * r + 1) * Jn" computations for one starting point.

[0010] When relation (13) is used as a condition for the alignment range of the time bases i and j, the total computation C1 of the similarity measures s(i, j) by the DP algorithm is approximated as follows, even if only the partial similarity measures "S<ℓ, m>" are to be obtained in the first stage:

where I is the duration of the input pattern, N is the number of reference patterns, and J is the average duration of the reference patterns. For the second stage, the data of the partial similarity measures "S<ℓ, m>" and the partial determination results "n<ℓ, m>" must be stored. The storage capacity M1 is obtained by the following approximation:

    If the following conditions are given:

    then

In order to manufacture a real time speech recognition device which provides a recognition result within 0.5 seconds after an utterance is completed, a total of 5,250,000 computations must be completed within 2.3 seconds (= 0.5 + 120 × 0.015), provided that the frame period of the durations I and J is 15 msec and that the full duration from the utterance to the response is used for computation. Thus, a high computation speed of about 0.4 µsec per computation is required. Even if parallel processing is performed, a large scale device is needed, resulting in high cost.

    Summary of the Invention



[0011] It is, therefore, an object of the present invention to provide a continuous speech recognition device which performs real time speech recognition and pattern matching even with a low-speed processor and which is small in size and low in cost.

[0012] It is another object of the present invention to provide a continuous speech recognition device which requires about half of the conventional storage capacity when the total number of computations corresponds to "J * N * I".

[0013] Assume that a reference pattern string B is formed by concatenating Y reference patterns Bn1, Bn2,..., BnX,..., BnY. The reference pattern string B is given as follows:

B = Bn1 ⊕ Bn2 ⊕...⊕ BnX ⊕...⊕ BnY    (18)

The symbol ⊕ denotes that the feature vectors of the concatenated reference patterns are arranged in accordance with the time sequence; the string B therefore consists of Jn1 + Jn2 +...+ JnY feature vectors. According to the principle of the present invention, the reference pattern string B given by relation (18) is optimally matched with the input pattern A given by relation (2), in the same manner as in the conventional two-level DP algorithm, to determine the words "n1, n2,..., nX,..., nY" which give the optimal matching. Therefore, the input pattern A is determined to comprise the words "n1, n2,..., nX,..., nY". In this case, the number Y of words is also optimally determined.

    [0014] The concatenated words of the input pattern A are recognized by determining the number of reference patterns with maximum similarities and the types of words.

    Brief Description of the Drawings



    [0015] 

    Fig. 1 graphically illustrates a computation range of a two-level DP algorithm as a conventional continuous speech recognition means;

    Fig. 2 is a graph for explaining a first step of continuous speech recognition according to the present invention;

    Fig. 3A graphically illustrates a computation range of a recursive relation with a slope constraint when a starting point and an endpoint are fixed;

    Fig. 3B is a view of an example of the computation range of the recursive relation with a slope constraint;

    Fig. 4 is a graph for explaining a decrease in total computation of continuous speech recognition according to the present invention;

    Fig. 5A is a graph for explaining the details of the first step;

    Fig. 5B is an enlarged view of the cross-hatched region in Fig. 5A;

Fig. 6 is a graph for explaining the detailed computation of the recursive relation;

    Fig. 7 is a graph for explaining a second step of continuous speech recognition according to the present invention;

    Fig. 8 is a block diagram of a continuous speech recognition device according to a first embodiment of the present invention;

    Fig. 9A is a block diagram of a first recursive computation section (DPM1) of the device shown in Fig. 8;

Fig. 9B is a timing chart of the control signals of the DPM1 of the device shown in Fig. 8;

    Fig. 10 is a block diagram of another example of a DPM1;

    Fig. 11 is a block diagram of a continuous speech recognition device according to a second embodiment of the present invention;

    Figs. 12A and 12B are flowcharts for explaining the mode of operation of the continuous speech recognition device of the second embodiment;

    Fig. 13 is a flowchart for explaining Process 1 in Fig. 12A;

    Fig. 14 is a flowchart for explaining Process 2 in Fig. 12A; and

    Fig. 15 is a flowchart for explaining Process 3 in Fig. 12A.


    Detailed Description of the Preferred Embodiments


    I. General Description



    [0016] In order to fully understand the present invention, the speech recognition algorithm of the present invention is compared with a two-level DP algorithm.

[0017] The two-level DP algorithm comprises first-stage matching, in which the partial similarity measures are computed for all possible combinations of starting points and endpoints to determine the partial determination results, and second-stage matching, in which the boundaries providing the maximum overall similarity measure are determined by dynamic programming over all possible combinations of the partial determination results. According to the present invention, however, the overall similarity measure is not maximized at the second stage. The maximum overall similarity measure is obtained at the first-stage matching in the following manner. Assume that the time point "i = p" (1 < p ≤ I) of the input pattern A is defined as a boundary between two words, and that a maximum similarity measure "Dp = S<1, p>" is obtained from an optimum combination of the reference words and the partial pattern "A(1, p)" of the input pattern A. The maximum similarity measure Dq between a partial pattern "A(1, q)" of the input pattern A whose endpoint is a time point "i = q" (1 < p < q ≤ I) and the optimum combination of the reference words is given as follows:

Dq = max over p and n of {Dp + S(A(p+1, q), Bn)}    (20)

In this case, the maximizing n is stored as Wq:

Wq = the word n which gives the maximum in relation (20)    (21)

"S(A(p+1, q), Bn)" indicates the similarity measure between the partial pattern "A(p+1, q)" with the endpoint q and the reference pattern Bn of the word n. This computation is the same as that of the partial similarity measure of the two-level DP algorithm. According to the present invention, however, the partial similarity measure is not independently obtained, but is obtained within the braces of the right-hand side of relation (20). In relations (20) and (21), when the condition "D0 = 0" is given, Dq and Wq are obtained for 1 ≤ q ≤ I, since the maximum similarity measures "Dp" (p < q) are given. The maximum overall similarity measure "S<1, I>" is obtained as DI. Thus, the first step of continuous speech recognition is completed. Thereafter, the second step is performed, in which the number Y of words constituting the permutation/combination "B = Bn1 ⊕ Bn2 ⊕...⊕ BnY" and the words "n1, n2,..., nY" themselves are determined. In this procedure, the final word nY is obtained as WI. However, only Di and Wi are stored in order to greatly decrease the required memory capacity and the total computation of the first step. For this reason, the boundaries of the words "nY, nY−1,..., n2, n1" are backtracked beginning from the time point "i = I". The recognized word Wu is output with the time point "i = I" defined as a starting point u. DP matching is performed in the reverse direction only for the recognized word Wu: an endpoint v is obtained which maximizes the sum of Dv−1 and the similarity measure "S(A(u, v), BWu)" between the backtracked partial pattern "A(u, v)" running from the starting point u to the endpoint v and the reversely ordered reference pattern BWu, as follows:

vmax = ARGMAX over v of {Dv−1 + S(A(u, v), BWu)}    (22)

where ARGMAX denotes the endpoint v which gives the maximum value of the expression in the braces of relation (22).

[0018] The vmax is the starting point (the endpoint in the reverse DP matching) of the word Wu. The starting point u of the immediately preceding word (its endpoint in the reverse DP matching) is then defined as:

u = vmax − 1    (23)

When this tracking is repeated from "u = vmax − 1" down to "u = 0", all the recognized words are obtained in the reverse order. The obtained reverse-ordered word string is reversed again, so that the input word string is recognized.

[0019] The fundamental principle of the algorithm according to the present invention is described above. However, relation (20) cannot be computed for all p, q and n in the first step, because of the amount of total computation involved. If the maximization over the boundary p is performed first, relation (20) can be rewritten as follows:

Dq = max over n of [max over p of {Dp + S(A(p+1, q), Bn)}]    (24)

The terms in the brackets of relation (24) can be computed by a conventional dynamic programming algorithm in which the starting point "(p + 1)" is free, with the initial value Dp, and the endpoint q is fixed.

[0020] The above case is described with reference to Fig. 2. If the maximum similarity measure Dp of a duration having the time point "i = p" as its endpoint is given as the initial value, the maximum of the sum of the similarity measures s(i, j) of the feature vectors αi and βj at the grid points (i, j) of a path 26 leading from a starting point 28 at (p + 1, 1) to an endpoint 29 at (q, Jn) is obtained by the DP algorithm as "Dp + S(A(p+1, q), Bn)".

[0021] In the two-level DP algorithm, the adjustment window bounded by the lines 11 and 12, given by relation (13) and shown in Fig. 1, is arranged as the range of the (i, j) plane over which the similarity measure sn(i, j) is computed, in order to eliminate wasteful computation and abrupt adjustment of the time bases. According to the present invention, however, no adjustment window is arranged; instead, two-sided slope constraints are built into the recursive relation of the DP algorithm. There are various examples of slope constraints; the following is a typical example. Initial values:

g(1, 1) = 2·s(1, 1); g(i, 0) = g(i, −1) = g(0, j) = g(−1, j) = −∞    (30)

where −∞ denotes the value computable by a given processor which has the negative sign and the maximum absolute value; this value is always smaller than any other value compared with it. The following recursive relation is solved for "i = 1 to I" and "j = 1 to J":

g(i, j) = max {g(i−1, j−2) + 2·s(i, j−1) + s(i, j); g(i−1, j−1) + 2·s(i, j); g(i−2, j−1) + 2·s(i−1, j) + s(i, j)}    (31)

As shown in Fig. 3B, there are three paths from different starting points to a point 31 (i, j): a path 37 from a point 32 (i − 2, j − 1) to the point 31 (i, j) via a point 33 (i − 1, j); a path 38 from a point 34 (i − 1, j − 1) to the point 31 (i, j); and a path from a point 35 (i − 1, j − 2) to the point 31 (i, j) via a point 36 (i, j − 1). Among these three paths, the path giving the maximum value is selected. In the path 37, an increment of 2 along the time base i of the input pattern corresponds to an increment of 1 along the time base j of the reference pattern, so that the slope of the segment connecting the point 32 and the point 31 is 1/2, while the slope of the segment connecting the point 34 and the point 31 is 1, and the slope of the segment connecting the point 35 and the point 31 is 2.

[0022] As shown in Fig. 3A, when recursive relation (31) is utilized for obtaining an optimal path 40 from a starting point (1, 1) to an endpoint 46 (I, J), the search range in the (i, j) plane is a triangular region which is bounded by a line 42 with the minimum slope of 1/2 and a line 41 with the maximum slope of 2 and which is defined by three points 45, 47 and 48. Since the endpoint 46 (I, J) is known, the search region is further restricted by a line 43 with a slope of 1/2 and a line 44 with a slope of 2. As a result, the search region corresponds to the hatched region of a parallelogram bounded by the lines 41, 42, 43 and 44. Thus, the recursive relation itself has a slope constraint, so that abrupt adjustment of the time bases can be prevented without arranging an adjustment window.

[0023] Dynamic programming in which the endpoint is fixed while the starting point is free will now be described. As shown in Fig. 2, since the endpoint 29 is fixed at (q, Jn), the similarity measure sn(i, j) is computed in the hatched region bounded by a line 24 with a slope of 1/2 and a line 25 with a slope of 2, by using recursive relation (31), to find the optimal path reaching the endpoint 29.

[0024] All the points 28 (p + 1, 1) from a point 21 (q − 2·Jn, 1) to a point 22 (q − Jn/2, 1) are candidates for the starting point. If the starting point is written as

p + 1 = k    (32)

recursive relation (31) is solved from "i = k" with the initial value

g(k − 1, 0) = Dk−1    (33)

Thus, recursive relation (31) is solved for j = 1 to Jn while increasing i in unitary increments up to q. The final result g(q, Jn) of the recursive relation indicates:

g(q, Jn) = max over p of {Dp + S(A(p+1, q), Bn)}    (34)
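As a minimal sketch (Python; the 1-based padding of the arrays and the handling of the borders by −∞ are assumptions consistent with relations (30) to (34)), this free-starting-point computation may be written as:

NEG_INF = float("-inf")

def free_start_dp(s, D, I, J):
    """Slope-constrained recursion (31) with free starting points.
    s[i][j] is the vector similarity for one word (1-based, i = 1..I,
    j = 1..J) and D[i] is the maximum similarity with endpoint i
    (D[0] = 0). Returns g such that g[q][J] equals
    max over p of {D[p] + S(A(p+1, q), Bn)}, as in relation (34)."""
    g = [[NEG_INF] * (J + 1) for _ in range(I + 1)]
    for i in range(I):
        g[i][0] = D[i]                # initial values per relation (33)
    for i in range(1, I + 1):
        for j in range(1, J + 1):
            c1 = g[i-1][j-2] + 2*s[i][j-1] + s[i][j] if j >= 2 else NEG_INF
            c2 = g[i-1][j-1] + 2*s[i][j]
            c3 = g[i-2][j-1] + 2*s[i-1][j] + s[i][j] if i >= 2 else NEG_INF
            g[i][j] = max(c1, c2, c3)  # recursive relation (31)
    return g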



[0025] When the continuous speech recognition processing is performed by relation (20) together with relations (30) to (34), and when relation (31) is computed within the hatched region in Fig. 2 each time an endpoint q is taken, the total computation C2 of the similarity measures between the vectors and of the recursive relation is given as follows:

C2 ≈ (3/4)·J²·N·I

where J is the mean value of Jn. This is of substantially the same form as relation (14), but larger.

[0026] If a point 50 (q, Jn) in Fig. 4 is defined as the endpoint, the total computation for the region bounded by lines 52 and 53 corresponds to 3/4·Jn². Even if a point 51 (q + 1, Jn) next to the point 50 is defined as the endpoint, the total computation for the region bounded by lines 54 and 55 again corresponds to 3/4·Jn². However, as is apparent from the graph, the hatched region bounded by the lines 54 and 53 provides the same vector similarity measures for both of the endpoints 50 and 51. The computation of "(3/4·Jn²) × 2" therefore substantially reduces to "(3/4·Jn² + Jn/2)". This applies to every endpoint, so that each overlapped portion need be computed only once. Therefore, the total computation C3 is given as follows:

C3 ≈ N·J·(I − J/2) ≈ I·J·N

The computation covers the inside of a parallelogram having as vertexes the points 60 (1, 1), 61 (Jn/2, Jn), 62 (I, Jn) and 63 (I − Jn/2, 1). In this dynamic programming, the candidates for the starting point are all the points p' (1 ≤ p' ≤ I − Jn/2) from the point 60 (1, 1) to the point 63 (I − Jn/2, 1), while the candidates for the endpoint are all the points q' (Jn/2 ≤ q' ≤ I) from the point 61 (Jn/2, Jn) to the point 62 (I, Jn).

[0027] Each combination of starting point and endpoint is not computed independently; all the combinations are computed simultaneously, thus greatly decreasing the total computation. Further, even though the computation is shared in this way, abrupt alignment of the time bases cannot occur, since the recursive relation includes a two-sided slope constraint.

[0028] According to the dynamic programming described above, recursive relation (31) is computed along the time base j (1 to Jn) of each reference pattern for all the words n. The computation then proceeds along the time base i of the input pattern.

[0029] Referring to Fig. 5A, assume that the computation of recursive relation (31) is completed up to "i = p". In other words, the computation of the recursive relation for the inside region of the parallelogram having as vertexes the points 60 (1, 1), 61 (Jn/2, Jn), 68 (p, Jn) and 65 (p, 1) is completed. Further, the intermediate results of the recursive relation, such as gn(i, j) and sn(i, j), are stored. Furthermore, the following relation:

Di = max over n of {gn(i, Jn)}

has been calculated for "i = 1 to p". For computing the relation for "i = p + 1", the initial value for every word is defined as Dp according to relation (33), so that:

gn(p, 0) = Dp (n = 1 to N)    (37)

Therefore, the following recursive relation is computed for "j = 1 to Jn":

gn(p+1, j) = max {gn(p, j−2) + 2·sn(p+1, j−1) + sn(p+1, j); gn(p, j−1) + 2·sn(p+1, j); gn(p−1, j−1) + 2·sn(p, j) + sn(p+1, j)}    (38)

Therefore, gn(p + 1, Jn) is obtained for each word n. Now, let the maximum value over n be Dp+1.

The above processing is shown in Fig. 5B, which is an enlarged view of the cross-hatched region in Fig. 5A. The intermediate results Di appear along both the upper and lower sides, so that two rows are drawn; this is merely for descriptive convenience, and the two rows are in practice the same.

[0030] Take the word n as an example. Since the computation is completed up to "i = p" along the time base i, "gn(i, j)" and "sn(i, j)" at each grid point in Fig. 5B have been computed and stored for "1 ≤ i ≤ p" and "1 ≤ j ≤ Jn".

[0031] According to relation (38) and the initial values, gn(p, 0) = Dp, gn(p−1, 0) = Dp−1, and gn(p, −1) = −∞.

[0032] Therefore, if "j = 1" is given, the recursive relation at the point (p + 1, 1) is given as follows:

gn(p+1, 1) = max {Dp + 2·sn(p+1, 1); Dp−1 + 2·sn(p, 1) + sn(p+1, 1)}

[0033] Similarly, if "j = 2" is given, the recursive relation at the point (p + 1, 2) is given as follows:

gn(p+1, 2) = max {Dp + 2·sn(p+1, 1) + sn(p+1, 2); gn(p, 1) + 2·sn(p+1, 2); gn(p−1, 1) + 2·sn(p, 2) + sn(p+1, 2)}

[0034] Furthermore, if "j = Jn" is given, the recursive relation at the point (p + 1, Jn) is given as follows:

gn(p+1, Jn) = max {gn(p, Jn−2) + 2·sn(p+1, Jn−1) + sn(p+1, Jn); gn(p, Jn−1) + 2·sn(p+1, Jn); gn(p−1, Jn−1) + 2·sn(p, Jn) + sn(p+1, Jn)}

[0035] The above computation is performed for all the reference patterns, which number N. Among the obtained results "g1(p+1, J1), g2(p+1, J2),..., gn(p+1, Jn),..., gN(p+1, JN)", the maximum is defined as Dp+1:

Dp+1 = max over n of {gn(p+1, Jn)}

[0036] In the above description, for each i the total number of computations is "J * N", so that the total computation C3 substantially numbers "I * J * N". Further, if a memory area M3 had to store all the "gn(i, j)" and "sn(i, j)", it would be defined as follows:

M3 ≈ 2·I·J·N    (47)

[0037] The memory area M3 is very large, as indicated by relation (47). However, only "g(i − 1, j)", "g(i − 2, j)", "s(i − 1, j)", "s(i, j)" and "g(i, j)" are required to compute the ith-step recursive relation, so that the memory area M'3 actually required is given as follows:

M'3 ≈ 5·J·N    (48)

[0038] Relation (48) can be reduced further for convenience. Let "h(i, j)" be defined as follows:

h(i, j) = g(i−1, j−1) + 2·s(i, j)    (49)

[0039] Thus, recursive relation (31) can be rewritten as follows:

g(i, j) = max {g(i−1, j−2) + 2·s(i, j−1) + s(i, j); h(i, j); g(i−2, j−1) + 2·s(i−1, j) + s(i, j)}    (50)

or

g(i, j) = max {h(i, j−1) + s(i, j); h(i, j); h(i−1, j) + s(i, j)}    (51)

[0040] Referring to Fig. 6, "h(i, j)" at the point (i, j) is defined by relation (49) as the sum of "g(i−1, j−1)" and "2·s(i, j)", as indicated by an arrow 85.

[0041] The first element of the maximum value of relation (31) contains "{g(i−1, j−2) + 2·s(i, j−1)}", as indicated by an arrow 86; according to the definition given by relation (49), this term becomes "h(i, j−1)". The third element contains "{g(i−2, j−1) + 2·s(i−1, j)}", as indicated by an arrow 81, which becomes "h(i−1, j)". Thus, relation (51) selects the maximum value among "{h(i, j−1) + s(i, j)}" indicated by an arrow 84, "h(i, j)" indicated by an arrow 83, and "{h(i−1, j) + s(i, j)}" indicated by an arrow 82.

[0042] According to relations (49) and (51), the memory areas used are of three types: "h(i−1, j)", "g(i, j)" and "h(i, j)" for j = 1 to Jn. If temporary memory registers TEMP1, TEMP2 and TEMP3 are used, relations (49) and (51) can be interpreted as follows:

(a) Using "TEMP1 = g(0)" and "TEMP2 = h(0)" as the initial values, repeat steps (b) to (f) for "j = 1 to Jn".

(b) TEMP3 = h(j)

(c) h(j) = TEMP1 + 2 * s(i, j)

(d) TEMP1 = g(j)

(e) g(j) = max {TEMP2 + s(i, j); h(j); TEMP3 + s(i, j)}

(f) TEMP2 = h(j)

where h(j) serves for both h(i−1, j) and h(i, j), and g(j) is the same as g(i, j).
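A minimal sketch of one column i of this register-based computation (assuming Python, with 1-based arrays of length Jn + 1) is:

NEG_INF = float("-inf")

def advance_column(g, h, s_col, D_prev):
    """One column i of the recursion in the two-array form (49)/(51).
    g[j] and h[j] (1-based, length Jn + 1) hold g(i-1, j) and h(i-1, j)
    on entry and g(i, j) and h(i, j) on exit. s_col[j] = s(i, j), and
    D_prev = D(i-1) is the initial value g(i-1, 0)."""
    Jn = len(g) - 1
    temp1 = D_prev                        # (a) g(i-1, 0)
    temp2 = NEG_INF                       #     h(i, 0)
    for j in range(1, Jn + 1):
        temp3 = h[j]                      # (b) save h(i-1, j)
        h[j] = temp1 + 2 * s_col[j]       # (c) relation (49)
        temp1 = g[j]                      # (d) save g(i-1, j)
        g[j] = max(temp2 + s_col[j],      # (e) relation (51)
                   h[j],
                   temp3 + s_col[j])
        temp2 = h[j]                      # (f) h(i, j)
    return g[Jn]                          # gn(i, Jn)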

[0043] As is apparent from the above description, the memory areas M″3 actually used are of only two types, h(j) and g(j). Therefore,

M″3 ≈ 2·J·N

The memory area M″3 of the algorithm according to the present invention is smaller than the memory area (relation (15)) of the two-level DP algorithm.

[0044] The detailed procedure of the first step (step 1), up to the point where the Di and Wi tables are prepared, is as follows:

    (Step 1-1)



[0045] Clear the Di table with "−∞" for "i = 1 to I", and let "D0 = 0". The complete working area for each word is also set to "−∞":

[0046] let gn(j) and hn(j) be "−∞" for n = 1 to N and j = 1 to Jn. Further, let i be 1.

    (Step 1-2)



    [0047] Let the word n be 1.

    (Step 1-3)



[0048] Let TEMP1 be Di−1 (= gn(0)) and TEMP2 be −∞ (= hn(0)).

    (Step 1-4)



[0049] Repeat step 1-5 for j = 1 to Jn.

    (Step 1-5)



[0050] Let TEMP3 be hn(j). Perform the following operations:

hn(j) = TEMP1 + 2 * sn(i, j)

TEMP1 = gn(j)

gn(j) = max {TEMP2 + sn(i, j); hn(j); TEMP3 + sn(i, j)}

and

TEMP2 = hn(j)


    (Step 1-6)



[0051] If gn(Jn) is smaller than Di, go to step 1-7. If not, let Di be gn(Jn) and Wi be n.

    (Step 1-7)



    [0052] Let n be n + 1. If n is equal to or smaller than N, go to step (1-3).

    (Step 1-8)



    [0053] Let i be i + 1. If i is equal to or smaller than I, go to step (1-2).

[0054] Thus, all the intermediate results Di and Wi are obtained by steps 1-1 to 1-8. Note that TEMP1, TEMP2 and TEMP3 are temporary memory registers and that sn(i, j) is the similarity measure between the input vector αi and the reference vector βjⁿ of the nth word. It should also be noted that gn(j) and hn(j) are memory sections, each of length Jn, for storing the intermediate results of the recursive relation for each word.
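A compact sketch of steps 1-1 to 1-8 follows (Python; refs maps each word number to its reference pattern, and sim is an assumed vector similarity function, since the measure itself is left open):

NEG_INF = float("-inf")

def step1(A, refs, sim):
    """First step: build the D and W tables (steps 1-1 to 1-8).
    A: input pattern (list of feature vectors, 0-based).
    refs: dict word number -> reference pattern (list of vectors).
    sim(a, b): similarity measure between two feature vectors."""
    I = len(A)
    D = [NEG_INF] * (I + 1)
    D[0] = 0.0                                   # step 1-1
    W = [None] * (I + 1)
    g = {n: [NEG_INF] * (len(B) + 1) for n, B in refs.items()}
    h = {n: [NEG_INF] * (len(B) + 1) for n, B in refs.items()}
    for i in range(1, I + 1):                    # step 1-8 loop
        for n, B in refs.items():                # steps 1-2 and 1-7
            temp1, temp2 = D[i - 1], NEG_INF     # step 1-3
            for j in range(1, len(B) + 1):       # steps 1-4 and 1-5
                s = sim(A[i - 1], B[j - 1])
                temp3 = h[n][j]
                h[n][j] = temp1 + 2 * s
                temp1 = g[n][j]
                g[n][j] = max(temp2 + s, h[n][j], temp3 + s)
                temp2 = h[n][j]
            if g[n][len(B)] > D[i]:              # step 1-6
                D[i], W[i] = g[n][len(B)], n
    return D, W

Only the per-word arrays gn(j) and hn(j) and the Di and Wi tables are retained, which reflects the memory economy described in paragraph [0043].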

[0055] If the similarity computation and the recursive computation of step 1-5 are performed without being limited by the lines 68 and 69 shown in Fig. 5A, the total number of computations is given by the following relation:

C4 ≈ I·J·N    (54)

Therefore, the memory area M4 required for the total computation is given as follows:

M4 ≈ 2·J·N + 2·I

If the result obtained by relation (16) is substituted into relation (54), then

C4 = 210,000    (55)

The total computation of the algorithm according to the present invention is 1/25 that of the two-level DP algorithm, and the memory area is about 1/2 thereof.

[0056] The second step of the algorithm according to the present invention will now be described. The intermediate results Di and Wi have been obtained in the range 1 ≤ i ≤ I in the first step. The overall maximum similarity measure DI is given by the reference pattern string B whose permutation/combination is as follows:

B = Bn1 ⊕ Bn2 ⊕...⊕ BnY    (60)

The last word nY of relation (60) is WI. If the boundary between the word nY and the word nY−1 immediately before it is determined, the word immediately preceding the word nY−1 is readily determined from Wi. If this process is repeated back to the starting point "i = 1" of the input pattern, the concatenated input word pattern "n1, n2,..., nY−1, nY" is obtained in the reverse order.

[0057] The above operation is described with reference to Fig. 7. Backtracking is performed from the endpoint "i = I" of the input pattern; the last word is WI, indicated by reference numeral 96. Assume that the boundary between the xth word of the word string and the (x − 1)th word is known, and that the endpoint (the starting point in the reverse dynamic programming matching) of the (x − 1)th word is "i = u". The (x − 1)th word is then Wu, indicated by reference numeral 95. The partial similarity measure S(A(u, v), BWu) is computed between the reference pattern BWu of the word Wu, taken in the opposite direction "j = JWu to 1", and the reversed partial input pattern A(u, v), which is backtracked from the starting point u to the endpoint v. The partial similarity measure is calculated by dynamic programming as with relation (31) above. In practice, backtracking is performed from a point 91 (u, JWu) to a point 92 (v, 1) within a region 99 to search for the path with the maximum value, and the endpoint v is chosen so that the sum of the similarity measure S(A(u, v), BWu) obtained along this path and the Dv−1 indicated by reference numeral 93 is maximized. That is:

max over v of {Dv−1 + S(A(u, v), BWu)}    (61)

The endpoint v which maximizes the value computed in relation (61) is searched within the region 99 over all possible endpoints (v, 1), and the obtained v is defined as vmax. The vmax is regarded as the boundary between the (x − 1)th word and the (x − 2)th word. Let u now be

u = vmax − 1    (62)

Relations (61) and (62) are repeatedly computed until "u = 0". The recognized words Wu are thus sequentially obtained in the reverse order. The "p", "q" and "n" in the brackets of relation (24) are substituted by "v − 1", "u" and "Wu" in relation (61). That is:

Du = max over v of {Dv−1 + S(A(v, u), BWu)}    (63)

Further, since the word n maximizing relation (24) has been determined as Wu, the result obtained by relation (61) is the same as the Du indicated by reference numeral 90.

[0058] In practice, when the type (e.g., the symmetry) of the recursive relation and the computation errors of the speech recognition device are considered, the value obtained by relation (61) may not be exactly the same as Du. Therefore, the maximum value is first computed, and the v giving the maximum value is then taken as the boundary.

    [0059] The detailed procedures of the second step or step 2 will now be described below:

    (Step 2-1)



    [0060] Let u and x be I and 1, respectively.

    (Step 2-2)



    [0061] Produce Wu as the recognized word nx.

    (Step 2-3)



[0062] Initialize the working area of the dynamic programming as follows:

g(j) = h(j) = −∞ for j = 1 to JWu; g(JWu + 1) = 0; h(JWu + 1) = −∞

Furthermore, let TEMP1 be g(JWu + 1), DMAX be 0, and TEMP2 be h(JWu + 1) (= −∞), respectively.

    (Step 2-4)



    [0063] Let i be u.

    (Step 2-5)



    [0064] Repeat step 2-6 for "j = JWu to 1".

    (Step 2-6)



[0065] TEMP3 = h(j)

h(j) = TEMP1 + 2 * s(i, j)

TEMP1 = g(j)

g(j) = max {TEMP2 + s(i, j); h(j); TEMP3 + s(i, j)}

TEMP2 = h(j)

    (Step 2-7)



[0066] If g(1) + Di−1 is smaller than DMAX, go to step 2-8. If not, let DMAX be g(1) + Di−1 and let vmax be i.

(Step 2-8)

[0067] Let i be i − 1. If i is equal to or greater than u − 2·JWu, go to step 2-5.

    (Step 2-9)



    [0068] Since the boundary vmax of the words is obtained, let x and u be x + 1 and vmax - 1, respectively. If u is greater than zero, go to step 2-2.

    (Step 2-10)



    [0069] If "Y = x - 1" is given, nx (x = 1 to Y) as the recognized word string comprising Y words of the input pattern is concatenated in a reverse manner: "nY, nY-1,..., n2, n1".

[0070] Note that TEMP1, TEMP2 and TEMP3 are the same registers as those used in step 1, that g(j) and h(j) are the same memory sections as those used in step 1, and that DMAX is the memory section for the maximum value of relation (61).
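Under the same assumptions as the step-1 sketch above, step 2 may be sketched as follows; the clamping of the search range at i = 1 and the treatment of the single legal starting point (u, JWu) are assumptions where the text leaves details open:

NEG_INF = float("-inf")

def step2(A, refs, sim, D, W):
    """Second step (steps 2-1 to 2-10): backtrack the word boundaries
    by reverse DP matching of the terminal word W[u] and return the
    recognized words in spoken order."""
    words = []
    u = len(A)                                 # step 2-1
    while u > 0:
        n = W[u]
        B = refs[n]
        words.append(n)                        # step 2-2
        J = len(B)
        g = [NEG_INF] * (J + 2)                # step 2-3
        h = [NEG_INF] * (J + 2)
        g[J + 1] = 0.0                         # the path starts at (u, J)
        dmax, vmax = NEG_INF, None
        i = u                                  # step 2-4
        while i >= max(1, u - 2 * J):          # steps 2-5 and 2-8
            temp1, temp2 = g[J + 1], h[J + 1]
            for j in range(J, 0, -1):          # step 2-6
                s = sim(A[i - 1], B[j - 1])
                temp3 = h[j]
                h[j] = temp1 + 2 * s
                temp1 = g[j]
                g[j] = max(temp2 + s, h[j], temp3 + s)
                temp2 = h[j]
            g[J + 1] = NEG_INF                 # only (u, J) may start a path
            if g[1] + D[i - 1] > dmax:         # step 2-7
                dmax, vmax = g[1] + D[i - 1], i
            i -= 1
        # with genuine similarity values a maximum is always found
        u = vmax - 1                           # step 2-9
    words.reverse()                            # step 2-10 (REV)
    return words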

[0071] The total computation C5 in step 2 is given as follows, since the word whose boundary is searched is known:

C5 ≈ Y·(3/4)·J²    (64)

where Y is the number of words included in the input pattern A. Substituting "Y = 4" as the mean number of words and "J = 35" into relation (64):

C5 = 3,675    (65)

The total computation C5 in step 2 is thus smaller than 2% of the total computation C4 in step 1 given by relation (55). The total computation of the whole algorithm according to the present invention is therefore approximately given by C4.
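As a quick numerical cross-check of these counts (assuming C1 = 5,250,000 from paragraph [0010] and C4 = C1/25 from paragraph [0055]):

C1 = 5_250_000             # two-level DP, first stage ([0010])
C4 = C1 // 25              # first step of the present algorithm: 210,000
C5 = 4 * 3 * 35 ** 2 // 4  # relation (64): Y * (3/4) * J^2 = 3,675
print(C5 / C4)             # 0.0175, i.e. 1.75%, below the 2% bound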

[0072] As described above, when the feature vectors α and β are similar to each other, a large value is obtained as the similarity measure. If a distance |α − β| is used instead, the value decreases as the vectors become more similar. Therefore, when a distance is used, the maximization is replaced by a minimization, and "−∞" is replaced by "+∞".

[0073] In the first step, the maximum similarity measure is obtained from the input pattern A and an optimal combination of the reference patterns. In the second step, utilizing the intermediate results Di and Wi obtained in the first step, backtracking from the endpoint of the input pattern A is performed through the matching path which gave the maximum similarity measure. Therefore, the continuous speech recognition device according to the present invention can determine the boundaries, the order, and the number of the words in a concatenated word string.

    II. Preferred Embodiments



[0074] Fig. 8 shows the overall arrangement of the continuous speech recognition device according to a first embodiment of the present invention. An utterance is entered through a microphone 101. The input speech signal is supplied to a feature extracting unit 102, in which the frequency of the speech signal is analyzed by a Q-channel analyzing filter and the output level of each channel is time sampled. Thus, a feature vector αi = (a1i, a2i,..., aQi) is produced. The feature vector is supplied to an input pattern buffer 103, which stores the input pattern A for "i = 1 to I". The number I of vectors included in the input pattern A is determined in the feature extracting unit 102. Reference numeral 104 denotes a reference pattern buffer for storing the N reference patterns "Bn" (n = 1, 2,..., N). Each reference pattern Bn = (β1ⁿ, β2ⁿ,..., βJnⁿ) has the reference pattern length Jn and consists of Q-degree vectors βjⁿ = (b1jⁿ, b2jⁿ,..., bQjⁿ). The feature vector αi produced from the input pattern buffer 103 in response to a signal i and the feature vector βjⁿ produced from the reference pattern buffer 104 in response to signals j and n are supplied to a recursive computation section 105 (to be referred to as a DPM1 105 hereinafter), in which the similarity measure sn(i, j) between the vectors is computed. With an initial value signal Di−1, relation (51) is computed for "j = 1 to Jn", and the similarity measure of each word for the partial input pattern A(1, i) is obtained as gn(i, Jn). The similarity measure gn(i, Jn) from the DPM1 105 is supplied to a first decision section 106 (to be referred to as a DCS1 106 hereinafter), which executes step 1-6. The similarity measure gn(i, Jn) is compared with the maximum similarity measure Di at the time point i. If the similarity measure gn(i, Jn) is greater than the maximum similarity measure Di, the maximum similarity measure Di is updated to the similarity measure gn(i, Jn), and n is stored as Wi. Reference numeral 107 denotes a maximum similarity storage section (to be referred to as a D 107 hereinafter) which stores the maximum similarity measure Di with the endpoint at the time point i as defined by relation (24). The maximum value produced by the DCS1 106 is stored in the D 107. The word number n which gives the maximum similarity measure Di produced from the DCS1 106 is written and stored in a terminal word memory 108 (to be referred to as a W 108 hereinafter).

[0075] Reference numeral 109 denotes a DPM2 for computing the backtracked similarity measure S(A(u, v), BWu) = g(v, 1). The output g(v, 1) from the DPM2 109 and the Dv−1 from the D 107 are supplied to a DCS2 110, wherein the boundary point vmax maximizing the result of relation (61) is determined and produced. The word number Wu based on the data u = vmax − 1 obtained from the boundary point vmax is stored as nx (x = 1, 2,..., Y) in an order reversing section 111 (to be referred to as an REV 111 hereinafter). The REV 111 produces the words ny (y = 1, 2,..., Y) by reversing the time sequence. Reference numeral 112 denotes a control unit for controlling the overall operation. The control unit 112 produces the various signals which control the units from the feature extracting unit 102 to the REV 111 described above.

[0076] In the continuous speech recognition device with the above arrangement, the speech signal entered through the microphone 101 is analyzed, the output of the Q-channel analyzing filter is sampled by a sampling signal t from the control unit 112, and the Q-degree vector α = (a1, a2,..., aQ) is produced. The feature extracting unit 102 supplies to the control unit 112 a detection signal which represents the starting point and the endpoint of the utterance and the number I of vectors α from the beginning to the end. The input pattern buffer 103 stores the feature vectors αi from the feature extracting unit 102 in accordance with the signals i (= 1, 2,..., I) from the control unit 112. For descriptive convenience, assume that the whole input pattern is stored in the input pattern buffer 103. In the control unit 112, in accordance with step 1-1, the intermediate result memory registers gn(j) and hn(j) in the DPM1 105 and the D 107 are initialized. The control unit 112 sequentially generates the signals i (i = 1 to I). In response to each signal i, the signals n are produced from 1 to N. In response to each signal n, the signals j (j = 1 to Jn) are produced, wherein Jn is the pattern length of each word n.

[0077] The input pattern buffer 103 produces the feature vector αi specified by the signal i from the control unit 112. The reference pattern buffer 104 produces the feature vector βjⁿ specified by the word selection signals n and j from the control unit 112. The DPM1 105, which receives the output signals from these pattern buffers 103 and 104, updates gn(j) and hn(j) by the recursive computation of relation (51). The previous value of each word, held in the intermediate result memory registers gn(j) and hn(j), and the similarity measure sn(i, j) between the feature vectors αi and βjⁿ are used for the updating, the initial value of the recursive relation being the maximum value Di−1 at the previous unit time, that is, at (i − 1), produced by the D 107. When "j = Jn", the DCS1 106 compares the similarity measure gn(i, Jn) with the maximum similarity measure Di, having its endpoint at the time point i, obtained up to the word (n − 1). If the similarity measure gn(i, Jn) is greater than the maximum similarity measure Di, the similarity measure gn(i, Jn) is regarded as the new maximum similarity measure. At this time, the word number n is stored as Wi in the W 108. When the above processing is completed for n = 1, 2,..., N, the signal i is incremented by one. The incrementation is repeated I times, that is, for i = 1, 2,..., I, thereby obtaining all the intermediate results Di and Wi for i = 1, 2,..., I.

[0078] When the above operation is completed, the control unit 112 produces u = I as the initial value, and the recognized word signal Wu is read out from the W 108. The control unit 112 initializes the intermediate result memories g(j) and h(j) in the DPM2 109 and a maximum value detecting register DMAX arranged in the DPM2 to detect the maximum value of relation (61), in accordance with step 2-3. The signal v is decreased one by one from u to (u − 2·JWu) by the control unit 112. Further, in response to each signal v, the signal j is decreased one by one from JWu to 1.

[0079] A vector αv is read out from the input pattern buffer 103 in response to the signal v. A vector βj of the reference pattern BWu is read out from the reference pattern buffer 104 in accordance with the signal j and the word signal Wu.

[0080] The DPM2 109 performs step 2-6 for "j = JWu to 1", using the intermediate result memory registers g(j) and h(j) and the similarity measure s(v, j) between the vectors. When "j = 1", the DCS2 110 compares the previous maximum value DMAX over "v = u to (v + 1)" with the sum of the output g(v, 1) from the DPM2 109 and the maximum similarity measure Dv−1 with the endpoint (v − 1). If the sum is greater than the maximum value DMAX, the sum {Dv−1 + g(v, 1)} is stored as the new maximum value, and the signal v is stored as vmax. The above processing is repeated until v = u − 2·JWu. With the vmax thus obtained, the output u = vmax − 1 is supplied to the control unit 112. The control unit 112 repeats the above operation until "u = 0". The word signals Wu sequentially obtained are stored in the REV 111 as nx (x = 1, 2,..., Y). When "u = 0" is reached, the REV 111 produces the reversed output ny (y = 1, 2,..., Y) as "n1 = nY, n2 = nY−1,..., nY = n1".

[0081] In the above embodiment, the speech recognition operation is started after the input pattern A is completely stored in the input pattern buffer 103. However, as shown in steps 1-1 to 1-8, each time one input vector α is entered, steps 1-2 to 1-7 can be performed for it immediately. The entire duration from the utterance input to the recognition result response can thus be used for the speech recognition processing, thereby shortening the response time. Further, the DPM1 105 may process the data of the words n in parallel at high speed. The DPM1 105 and the DPM2 109 perform identical operations, as shown in steps 1-5 and 2-6, and since the second step cannot be executed until the first step is completely finished, the second step may be performed in the DPM1. Thus, the DPM2 may be omitted.

[0082] The microphone 101 may be arbitrarily replaced by a telephone receiver or the like. Furthermore, in the above embodiment, all the reference numerals 101 to 112 denote hardware. However, part or all of the processing performed by the units 101 to 112 may be performed under program control. Further, the feature extracting unit 102 comprises a frequency analyzing filter; however, any unit can be used which extracts a parameter representing the speech features, such as a linear prediction coefficient or a partial correlation coefficient. The similarity measure between the vectors may be represented by a correlation, a distance, or the like.

[0083] The configuration of the DPM1 105, which is the principal part of the continuous speech recognition device of the first embodiment, is shown in Fig. 9A. The configuration shown in Fig. 9A serves to calculate relation (51). Reference numeral 120 denotes a similarity measure operation unit for computing the similarity measure sn(i, j) between the vectors αi and βjⁿ. Reference numeral 121 denotes a temporary memory register (to be referred to as a TEMP1 121 hereinafter) which receives gn(i−1, j) and produces gn(i−1, j−1). When computation is started with "j = 1", Di−1 is preset in the TEMP1 121 as the initial value. Reference numeral 122 denotes a temporary memory register (to be referred to as a TEMP2 122 hereinafter) which receives hn(i, j) and produces hn(i, j−1). When computation is started with "j = 1", "−∞" is preset as the initial value. Reference numeral 123 denotes a temporary memory register (to be referred to as a TEMP3 123 hereinafter) which temporarily stores hn(i−1, j).

[0084] The output sn(i, j) from the similarity measure operation unit 120 is supplied to a double multiplier circuit 124 which produces 2·sn(i, j). The output gn(i−1, j−1) from the TEMP1 121 is added to the output 2·sn(i, j) from the double multiplier 124 by an adder 125. The adder 125 produces an output {gn(i−1, j−1) + 2·sn(i, j)}, that is, hn(i, j). The output hn(i, j−1) from the TEMP2 122 and the output sn(i, j) from the similarity measure operation unit 120 are added by an adder 126, which produces an output {hn(i, j−1) + sn(i, j)}. Furthermore, the output hn(i−1, j) from the TEMP3 123 is added to the output sn(i, j) from the similarity measure operation unit 120 by an adder 127, which produces an output {hn(i−1, j) + sn(i, j)}. Reference numeral 128 denotes a maximum value detector (to be referred to as a MAX 128 hereinafter) which selects the maximum value among the output hn(i, j) from a memory 130, to be described later, and the outputs from the adders 126 and 127. The output from the MAX 128 is supplied to a memory 129. The memory 129 stores the data gn(j), that is:

gn(j) = gn(i, j) for the values of j already processed in the current column i,

and

gn(j) = gn(i−1, j) for the remaining values of j.

[0085] The data read out from the memory 129 is supplied to the TEMP1 121. The output hn(i, j) from the adder 125 is stored in the memory 130. The storage contents are likewise:

hn(j) = hn(i, j) for the values of j already processed in the current column i,

and

hn(j) = hn(i−1, j) for the remaining values of j.



[0086] The data read out from the memory 130 is supplied to the TEMP2 122 and to the MAX 128. Reference numeral 131 denotes a recursive control unit (to be referred to as a DPM 131 hereinafter) which controls the DPM1 105 and the DPM2 109. The DPM 131 supplies timing signals T1, T2, T3, T4 and T5 to the TEMP1 121, the TEMP2 122 and the TEMP3 123. Further, the timing signals are supplied to the memories 129 and 130 as write signals. The timing signals T1 to T5 are produced for each signal j in the order shown in Fig. 9B. A timing signal T0 is a preset signal and is used to preset Di−1 as gn(i−1, 0) in the TEMP1 121 and "−∞" as hn(i, 0) in the TEMP2 122 immediately before the computation with "j = 1" is started.

[0087] The operation of the DPM1 105 with the above arrangement for the hatched region in Fig. 5B will now be described. All the operations for every word number n are completed up to the time point "i = p" of the input pattern A. Therefore, Di has been obtained for "0 ≤ i ≤ p". Further, the memory 129 stores the data gn(p, j) (n = 1, 2,..., N; j = 1, 2,..., Jn), and the memory 130 stores the data hn(p, j) (n = 1, 2,..., N; j = 1, 2,..., Jn).

[0088] The word number n and the index j for taking out the vector βjⁿ of the reference pattern Bn are specified by the control unit 112 shown in Fig. 8. For performing the processing for the cross-hatched region in Fig. 5B, the TEMP1 121 and the TEMP2 122 are initially set to Dp and −∞, respectively, at time t0 in response to the timing signal T0. For "j = 1", the output hn(1), that is, hn(p, 1), from the memory 130 is written in the TEMP3 123 at time t1 in synchronism with the timing signal T3. The similarity measure sn(p+1, 1) between the vectors αp+1 and β1ⁿ, which are specified by "i = p + 1" and "j = 1", is computed by the similarity measure operation unit 120. The computed result is doubled by the double multiplier circuit 124, which produces the output 2·sn(p+1, 1). This output is added to the output gn(p, 0) = Dp from the TEMP1 121 by the adder 125. The adder 125 produces the output hn(p+1, 1), which is written in the register hn(1) of the memory 130 in synchronism with the timing signal T5 at time t2. At time t3, the output gn(1), that is, gn(p, 1), from the memory 129 is written in the TEMP1 121 in synchronism with the timing signal T1.

[0089] The output hn(p+1, 0) = −∞ from the TEMP2 122 is added to the output sn(p+1, 1) from the similarity measure operation unit 120 by the adder 126. The output hn(p, 1) from the TEMP3 123 and the output sn(p+1, 1) are added in the adder 127. The MAX 128 selects the maximum value among the outputs from the adders 126 and 127 and the memory 130:

gn(p+1, 1) = max {hn(p+1, 0) + sn(p+1, 1); hn(p+1, 1); hn(p, 1) + sn(p+1, 1)}

[0090] This maximum value is written as gn(1) = gn(p+1, 1) in the memory 129 in synchronism with the timing signal T4 at time t4. At time t5, the output hn(1) = hn(p+1, 1) from the memory 130 is written in the TEMP2 122 in synchronism with the timing signal T2. Thus, the cycle for "j = 1" is completed.

[0091] The next cycle is started by the data "j = 2" from the control unit 112. At time t6, the output hn(2), that is, hn(p, 2), from the memory 130 is written in the TEMP3 123. At time t7, the sum of the output gn(p, 1) from the TEMP1 121 and the output 2·sn(p+1, 2) from the double multiplier circuit 124 is written as hn(2) in the memory 130. At time t8, gn(2) = gn(p, 2) from the memory 129 is written in the TEMP1 121. At time t9, the maximum value among the output hn(2) = hn(p+1, 2) from the memory 130, the output {hn(p+1, 1) + sn(p+1, 2)} from the adder 126, and the output {hn(p, 2) + sn(p+1, 2)} from the adder 127 is written as gn(2) = gn(p+1, 2) in the memory 129. At time t10, the output hn(2) = hn(p+1, 2) from the memory 130 is written in the TEMP2 122. Thus, the cycle for "j = 2" is completed. When the above cycle is repeated up to j = Jn, the output gn(Jn) from the memory 129 becomes gn(p+1, Jn). This output is supplied to the DCS1 106 in Fig. 8.

[0092] The above processing realizes relation (51), which is the modified relation (31). Besides relation (31), there exist other recursive relations with slope constraints, for example:



[0093] The slopes of relation (70) are 2/3, 1 and 2, which indicates that the input pattern length may deviate within about ±50% of the reference pattern length. The allowable pattern length range of relation (70) is thus narrower than that of relation (31), which has a range of −50% to +100% for the slopes 1/2, 1 and 2. In the same manner in which relation (31) was modified to obtain relation (51), let fn(i, j) be defined as follows:

Relation (70) can then be modified as follows:

or

Step 1 is thus modified as follows:

(Step 1'-1)

    [0094] Let's define:

    D0 = 0

    Di = - ∞ for i = 1 to I



    [0095] Further, let i be 1.

(Step 1'-2)



    [0096] Let n be 1.

(Step 1'-3)



    [0097] Let's define:

TEMP1 = Di−1 (= gn(0)) and

    TEMP2 = - ∞ (= hn(0))


    (Step 1'-4)



[0098] Repeat step 1'-5 for j = 1 to Jn.

(Step 1'-5)

    TEMP3 = hn(j)

    TEMP1 = gn(j)

    TEMP2 = hn(j)


(Step 1'-6)



[0099] If gn(Jn) is smaller than Di, go to step 1'-7. If not, let Di and Wi be gn(Jn) and n, respectively.

(Step 1'-7)

[0100] Let n be n + 1. If n is equal to or smaller than N, go to step 1'-3.

(Step 1'-8)



[0101] Let i be i + 1. If i is equal to or smaller than I, go to step 1'-2.

The above program sequence is performed by the circuit shown in Fig. 10, which is operated at the same timings as in Fig. 9B. The reference numerals used in Fig. 9A denote the same parts in Fig. 10, and a detailed description thereof is omitted.

[0102] Examples of recursive relations without slope constraints are as follows:

g(i, j) = max {g(i−1, j); g(i−1, j−1); g(i, j−1)} + s(i, j)

or

g(i, j) = max {g(i−1, j) + s(i, j); g(i−1, j−1) + 2·s(i, j); g(i, j−1) + s(i, j)}

The second example is described in Patent Disclosure DE-A 26 10 439. Since neither recursive relation has a slope constraint, an adjustment window, as bounded by the lines 11 and 12 shown in Fig. 1, is required to avoid abrupt alignment of the time bases.

[0103] In both of the above recursive relations, abrupt alignment can occur locally. According to speech recognition experiments, unsatisfactory results have been reported for them.

[0104] Fig. 11 shows a continuous speech recognition device according to a second embodiment of the present invention. An utterance entered as an analog speech signal through a microphone 101 is converted to a digital signal by an A/D converter 141. Reference numeral 142 denotes a data memory for storing the data of the input pattern A, the reference patterns B, the intermediate results gn(j), hn(j), g(j) and h(j), the maximum similarity measures Di, and the terminal words Wi. Reference numeral 143 denotes a program memory. The speech signal converted to a digital signal is coupled to a CPU 144, in which the program from the program memory 143 is executed.

[0105] To describe the above processing in detail, the speech signal entered from the microphone 101 is converted to a digital signal by the A/D converter 141. The digital signal is fetched into the CPU 144 at a predetermined time interval, for example, 100 µsec, and is then stored in the data memory 142. When 150 digital signals have been written in the data memory 142, the CPU 144 performs a fast Fourier transform (FFT) to obtain the power spectrum, which is multiplied by 16 triangular windows. Thus, the same result as with a 16-channel frequency analyzing bandpass filter is obtained. The result is defined as the input vector α. One hundred and fifty pieces of data are obtained every 15 msec; this time interval of 15 msec is defined as one frame.
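A rough sketch of this frame analysis follows (Python with NumPy; the linear, equally spaced placement of the triangular windows is an assumption, since it is not specified here):

import numpy as np

def frame_to_vector(samples, q=16):
    """Convert one 15 msec frame (150 samples at a 100 usec sampling
    period) into a q-channel feature vector: FFT power spectrum
    collapsed through q triangular windows, emulating a q-channel
    analyzing filter bank."""
    power = np.abs(np.fft.rfft(samples)) ** 2     # power spectrum
    nbins = len(power)
    centers = np.linspace(0, nbins - 1, q + 2)    # assumed window layout
    k = np.arange(nbins)
    vec = np.empty(q)
    for c in range(q):
        lo, mid, hi = centers[c], centers[c + 1], centers[c + 2]
        # triangular weight rising from lo to mid, falling from mid to hi
        w = np.clip(np.minimum((k - lo) / (mid - lo + 1e-9),
                               (hi - k) / (hi - mid + 1e-9)), 0.0, None)
        vec[c] = np.sum(w * power)
    return vec                                    # the input vector alpha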

    [0106] The mode of operation of the CPU 144 by the program stored in the program memory 143 will be described with reference to flowcharts in Figs. 12 to 15.

[0107] A variable i1 used in the flowcharts is an index representing the address for storing the vector α computed in the interrupt processing. A variable ℓ is a counter for counting the low-power frames for detecting the terminal end in the interrupt loop. A variable I1 indicates the number of vectors α from the starting point to the endpoint. A variable i2 is an index for reading out the input vectors α in the speech recognition processing; for a low-power frame within the continuous word pattern, process 2 (corresponding to step 1) is not performed and the flow simply advances. A variable i3 is an index for reading out the input vectors α for executing process 2. Di, gn(j), hn(j), Wi, g(j) and h(j) are data stored in the data memory 142: Di is the maximum similarity measure with the ith frame as an endpoint; gn(j) and hn(j) are the registers for the intermediate results of the recursive relation for the word having the word number n in process 2; Wi is the terminal word of the word string giving the maximum similarity measure Di; and g(j) and h(j) are the registers for storing the intermediate results of the recursive relation in process 3 (corresponding to step 2). A variable j is an index for reading out the vector βj of a reference pattern. A variable n indicates a word number. A constant Jn indicates the time duration (the number of frames) of the word having the word number n. A constant N is the number of reference patterns. Variables TEMP1, TEMP2 and TEMP3 are the temporary memory registers of the DPM1 105. A variable u is an index providing the starting point of a partial pattern in the reverse pattern matching of process 3. A variable v is an index providing the endpoint of the partial pattern in the reverse pattern matching. DMAX is a register for storing the detected maximum value given by relation (61). vmax is a register for storing the index v which gives DMAX. A variable x is an index for storing a recognized word number nx. sn(i, j) is the similarity measure between the vectors αi and βjⁿ. The symbol −∞ indicates the negative value of maximum absolute value obtainable in the CPU 144.

[0108] The main program starts with start step 200, as shown in Fig. 12A. In step 201, flags are initially set low so as to indicate that neither the starting point nor the endpoint of the utterance has yet been detected, and the interrupt from the A/D converter 141 is enabled every 100 µsec. In the following steps, the interrupt processing for data fetching, the computation of the feature vectors α, and the detection of the starting point and the endpoint are performed in parallel with the speech recognition processing.

[0109] The interrupt processing steps 220 to 233 are first described with reference to Fig. 12B. When the interrupt occurs, interrupt process step 220 is initiated. In step 221, the digital data from the A/D converter 141 is stored in the data memory 142. It is determined in step 222 whether or not the number of data has reached 150. If NO in step 222, the interrupt process is terminated in return step 233. When 150 pieces of data have been written, the computation of the vector α is performed in step 223. It is then checked in step 224 whether the starting point detection flag is at logical level "0". If YES in step 224, it is determined in step 227 whether or not the sum of the powers (e.g., the sum Σ aq of the 16 elements of the vector α) is higher than a threshold value. If NO in step 227, the interrupt process is terminated in return step 233. However, if YES in step 227, it is determined that the starting point is detected: in step 228, the starting point flag is set to logical level "1", the index i1 is set to 1, and the vector α is stored as αi1 in the input pattern buffer A. In step 229, the counter ℓ is set to 0, and in return step 233 the interrupt process is terminated.

    [0110] Meanwhile, when the starting point detecting flag is already set to logical level "1" in step 224, the index i1 is increased by one in step 225, and the vector a is stored as the input vector αi1 in the input pattern buffer A. In decision step 226, if the power of the input vector is higher than the threshold value, the flow advances to step 229. Otherwise, the input vector is regarded as a low-power frame, and the counter ℓ is increased by one.

    [0111] In decision step 231, it is checked whether or not the count of the counter ℓ has reached 20, that is, whether 20 low-power frames have been counted in succession. If NO in step 231, the flow advances to return step 233. Otherwise, it is determined that the input utterance is completed: in step 232, the number of effective vectors a from the starting point to the endpoint is defined as I1, and the endpoint detecting flag is set to logical level "1". Further interrupts from the A/D converter 141 are prohibited, and the interrupt process is terminated in return step 233.

    [0112] By the above interrupt process, the vectors a are fetched into the input pattern buffer A every 15 msec.
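    The interrupt-driven detection described in paragraphs [0109] to [0112] can be illustrated by the following sketch in Python. It is an illustration only: the names POWER_THRESHOLD, END_FRAMES and detect_utterance are assumptions introduced here, and only the 15 msec frame period, the power-threshold test and the 20-frame endpoint rule are taken from the description.

        POWER_THRESHOLD = 1.0   # assumed value; the description leaves the threshold open
        END_FRAMES = 20         # endpoint rule: 20 consecutive low-power frames

        def detect_utterance(frames):
            """frames: iterable of feature vectors a, one per 15 msec frame.
            Returns the effective vectors between starting point and endpoint."""
            buffer_a = []       # input pattern buffer A
            started = False     # starting point detecting flag
            low_count = 0       # counter l of consecutive low-power frames
            for a in frames:
                power = sum(a)  # sum of the elements of the vector a
                if not started:
                    if power > POWER_THRESHOLD:
                        started = True          # starting point detected (step 228)
                        buffer_a.append(a)
                        low_count = 0           # step 229
                else:
                    buffer_a.append(a)
                    if power > POWER_THRESHOLD:
                        low_count = 0           # effective frame resets the counter
                    else:
                        low_count += 1          # low-power frame
                        if low_count == END_FRAMES:
                            # endpoint detected: drop the trailing silence,
                            # leaving the I1 effective vectors (step 232)
                            return buffer_a[:-END_FRAMES]
            return buffer_a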

    [0113] The steps from step 202 onward in the main program will now be described. Process 1 (steps 240 to 245 in Fig. 13), performed in step 202, carries out the initialization corresponding to step 1-1.

    [0114] In decision step 203, the flow waits until the starting point detecting flag is set to logical level "1". When the flag is set to logical level "1", a speech input is regarded as having started. In step 204, the indexes i2 and i3 are initialized to 1. In decision step 205, the index i1 used by the interrupt process is compared with the index i2. If the index i2 is equal to or smaller than i1, the flow advances to decision step 206. If the power of the vector αi2 is smaller than the threshold value, frame i2 is regarded as a low-power frame during the speech input, and the index i2 is increased by one in step 207. Subsequently, in decision step 208, the logical status of the endpoint detecting flag is checked. If the flag is set to logical level "0", the endpoint is regarded as not yet detected, and the flow returns to step 205.

    [0115] If, in step 206, the power of the vector is higher than the threshold value, the flow advances to step 212, in which the index i2 is increased by one. Process 2, which corresponds to steps 1-2 to 1-7, is performed in step 213. In step 214, the index i3 is increased by one. In step 215, the indexes i3 and i2 are compared. If the index i3 is smaller than the index i2, the flow returns to step 213 and process 2 is continued. However, if the index i3 is equal to or greater than the index i2, the flow returns to step 205.

    [0116] If the endpoint detecting flag is found set to logical level "1" in step 208, the index i3 is compared with the number I1 of input vectors in step 209. If process 2 in step 213 is not completed within 15 msec, the fetching of vectors a may run ahead of their evaluation; therefore, when the endpoint is detected, unevaluated input vectors a may remain in the CPU 144. If the index i3 is equal to or smaller than I1, the same operations as in steps 213 and 214 are performed in steps 210 and 211. However, if it is determined in step 209 that the index i3 is greater than I1, all the input vectors are regarded as evaluated, and process 3 (corresponding to step 2) is then performed in step 216 to obtain the recognized word string nx. In step 217, the order of the word string nx is reversed to obtain the reversed word string ny. Thus, the continuous speech recognition process is completed.
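    As an illustration of how the indexes i1, i2 and i3 cooperate, the main loop of steps 205 to 216 may be sketched in Python as follows. The callables fetched, endpoint_detected, total_frames, power, process_2 and process_3 are hypothetical stand-ins for the interrupt-side state and for the processes described above; the step correspondence is noted in the comments.

        def main_loop(fetched, endpoint_detected, total_frames, power, threshold,
                      process_2, process_3):
            """fetched(): current value of i1; total_frames(): I1 once known;
            power(i): power of input vector at frame i; process_2(i3) evaluates
            frame i3; process_3() performs the second-step backtracking."""
            i2 = 1
            i3 = 1
            while True:
                if i2 <= fetched():                    # step 205
                    if power(i2) < threshold:          # step 206: low-power frame
                        i2 += 1                        # step 207
                        if endpoint_detected():        # step 208
                            I1 = total_frames()
                            while i3 <= I1:            # steps 209 to 211: catch up
                                process_2(i3)
                                i3 += 1
                            return process_3()         # step 216
                    else:
                        i2 += 1                        # step 212
                        while True:
                            process_2(i3)              # step 213
                            i3 += 1                    # step 214
                            if i3 >= i2:               # step 215
                                break
                # otherwise wait until the interrupt routine fetches more vectors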

    [0117] The details of process 2 are illustrated in Fig. 14. Step 251 corresponds to step 1-2; step 252 corresponds to step 1-3; steps 253, 256 and 257 correspond to step 1-4; steps 254 and 255 correspond to step 1-5; steps 258 and 259 correspond to step 1-6; and step 261 corresponds to step 1-7.
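    For illustration, one invocation of process 2 may be sketched in Python as below. This is a simplification made for clarity: the recursive relation of the description uses a two-side slope constraint with the registers gn(j) and hn(j) of [0107], whereas the sketch substitutes a plain unconstrained local maximization; refs, s, g, D and W are hypothetical stand-ins for the contents of the data memory 142, with g initialized to -∞ by process 1 and D[0] = 0.

        import math

        NEG_INF = -math.inf   # plays the role of the symbol -∞ of [0107]

        def process_2(i, a_i, refs, s, g, D, W):
            """One update of the first-step recursion at input frame i (i >= 1).

            refs[n] -- reference pattern Bn as a list of vectors
            s(x, y) -- similarity measure between two vectors
            g[n]    -- register column for word n from frame i-1, length Jn + 1
            D, W    -- tables of maximum similarity Di and terminal word Wi
            """
            best, best_n = NEG_INF, None
            for n, B in enumerate(refs):
                Jn = len(B)
                new_g = [NEG_INF] * (Jn + 1)
                new_g[0] = D[i - 1]          # a word may start right after frame i-1
                for j in range(1, Jn + 1):
                    # simplified predecessor set (no slope constraint):
                    prev = max(g[n][j], g[n][j - 1], new_g[j - 1])
                    new_g[j] = prev + s(a_i, B[j - 1])
                g[n] = new_g                 # registers gn(j) overwritten in place
                if new_g[Jn] > best:         # word n could end at frame i
                    best, best_n = new_g[Jn], n
            D[i], W[i] = best, best_n        # corresponds to steps 1-6 and 1-7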

    [0118] Fig. 15 shows the details of process 3. Step 271 corresponds to step 2-1; step 272 corresponds to step 2-2; steps 273 and 274 correspond to step 2-3; and step 275 corresponds to step 2-4. The symbol i used in the second step is replaced with v in the flowchart of Fig. 15. Steps 276, 279 and 280 correspond to step 2-5; steps 277 and 278 correspond to the recursive computation in step 2-6; steps 281 and 282 correspond to the maximum value detection in step 2-7; steps 283 and 284 correspond to step 2-8; step 285 corresponds to step 2-9; and step 286 corresponds to step 2-10.
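    The boundary backtracking performed by process 3 may likewise be sketched in Python. Here S_rev(u, v, n) abstracts the reverse pattern matching of the word with word number n over the reversed partial pattern A(u, v), a recursion of the same kind as in process 2 run backwards from frame u; this helper and the exhaustive loop over v are illustrative simplifications, since the device obtains all the sums for one u in a single dynamic-programming pass.

        def process_3(I, D, W, S_rev):
            """Backtrack word boundaries from the endpoint i = I.

            D, W -- tables produced by the first step, with D[0] = 0.
            Returns the recognized word numbers in utterance order, as produced
            by the order reversing unit (REV) in step 217.
            """
            words = []
            u = I
            while u >= 1:
                n = W[u]                    # terminal word at current endpoint u
                words.append(n)
                # choose the boundary v maximizing D[v-1] + S(A(u, v), reversed Bn)
                best, v_max = float("-inf"), 1
                for v in range(u, 0, -1):
                    total = D[v - 1] + S_rev(u, v, n)
                    if total > best:
                        best, v_max = total, v
                if v_max <= 1:              # the word reaches the starting point
                    break
                u = v_max - 1               # endpoint of the immediately previous word
            words.reverse()
            return words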

    [0119] In the second embodiment, process 3 in step 216, which corresponds to step 2, is performed only after the endpoint has been detected, that is, after 20 low-power frames have been counted in succession. However, process 3 may instead be started tentatively as soon as the first low-power frame is detected. If an effective power frame is entered before the endpoint is detected, the results of process 3 simply become invalid. With this arrangement, the result nx (or ny) is already known when the endpoint is detected, thus shortening the response time.

    [0120] In experiments with the continuous speech recognition device of the present invention, 96.3% of a total of 160 numbers (40 numbers for each of 2 to 5 digits) were correctly recognized. Counted word by word (each digit of a number being defined as one word), 99.2% of a total of 560 words were recognized. These figures compare with a recognition rate of 99.5% for 1,000 discretely uttered words. Thus, the continuous speech recognition means according to the present invention is very effective.

    [0121] It is understood from the above description that the following advantages are provided by the continuous speech recognition device of the present invention:

    (1) In order to obtain the maximum similarity measure Dq between the partial pattern A(1, q) of the input pattern A, which has as its starting point i = 1 and as its endpoint i = q, and a suitable combination of the reference patterns, the partial pattern A(1, q) is divided into sub-partial patterns A(1, p) and A(p+1, q). The maximum similarity measure Dp of A(1, p) is added to the similarity measure S(A(p+1, q), Bn) obtained by matching A(p+1, q) against the reference pattern Bn, and the maximum of this sum with respect to p is obtained by the dynamic programming algorithm. Further, in determining the maximum value Dq with respect to n, the similarity measure s(αi, βjn) between vectors is computed only once for each combination of i, j and n (this maximization is restated in the equations following this list). Therefore, the total number of computations in the algorithm of the present invention is about 1/25 of that in the conventional algorithm.

    (2) Each time one input vector αq (1 ≤ q ≤ I) is entered, all the words with word numbers n (1 to N) and time bases j (1 to Jn) of each word can be processed to obtain Dq and Wq, so that 98% of the total computation of the first step is performed in parallel with the entry of the speech input. Therefore, the time from utterance to recognition response can be used effectively.

    (3) According to the table of the maximum similarity measures Di and the terminal words Wi (1 ≤ i ≤ I) obtained in the first step, the endpoint (i = I) of the input pattern is taken as the first boundary, and backtracking is performed only for the words determined by the successive boundaries. Therefore, the boundary of the immediately previous word can be readily obtained by the dynamic programming algorithm.

    (4) In the dynamic programming algorithm by which the boundary is obtained in the second step, since one of the two words at the boundary is already determined (initially, the word ending at the endpoint of the input pattern), the total amount of computation is very small.

    (5) Since the computation of the first step can be completed at the same time as the utterance is completed, and since the total number of computations in the second step is as small as 2% of that in the first step, the response as a whole can be made substantially at the same time as the utterance is completed.

    (6) In the pattern matching device having the intermediate result memory registers gn(j), hn(j) and so on, together with TEMP1, TEMP2 and TEMP3, the total memory area is small, as indicated by relation (52) or (54), compared with the case in which part or all of gn(i, j) and sn(i, j) are stored, as indicated by relation (47) or (48). Pattern matching can thus be readily performed with hardware.

    (7) Since the amount of data processing of the algorithm of the present invention is 1/25 of that of the conventional algorithm, low-speed elements can be used, resulting in low cost.

    (8) If the same elements as used in the conventional device are used for the device of the present invention, the number of reference patterns can be increased 25 times. Therefore, the vocabulary of recognizable words can be enlarged.

    (9) Since the memory area of the device of the present invention is half of that of the conventional device, the device as a whole is low in cost and small in size.

    (10) Although parallel processing can be considered for high-speed processing, the device of the present invention can perform the total computation within the time taken for one vector input in the conventional device, the total number of computations being {(2r + 1) × N} and the total memory area being {(2r + 1) × N × 2 + M1}. Further, if this device is used for parallel processing, the total number of computations is 1/25 of that of the conventional device, and the total memory area is about 1/17 of that of the conventional device.

    (11) The number Q of components of the feature vector and the number of vectors of the input and reference patterns per unit time are generally increased to improve speech recognition. If the algorithm of the present invention is applied under the conditions of the conventional device, with the same number of recognized words and the same response time, the product of the number of vectors per unit time and the number Q of components of the vector can be increased 25 times. Thus, higher recognition accuracy can be expected.

    (12) Since a recursive relation with a two-side slope constraint is used, no abrupt adjustment of the time bases occurs even if the dynamic programming algorithm is performed simultaneously for a plurality of starting points and endpoints. Thus, the total amount of computation is greatly decreased.

    (13) The maximum similarity measure Dp, between the partial pattern A(1, p) of the input pattern having as endpoint p the frame immediately preceding each starting point (p + 1) and a combination of proper reference patterns, is used as the initial value for each of the plurality of starting points. A combination of reference patterns which optimally approximates the partial pattern A(1, q) having a given endpoint q is obtained by summation of the initial value Dp with the similarity measure S(A(p+1, q), Bn) between the partial pattern A(p+1, q) and each reference pattern Bn. Thus, continuous speech recognition can be performed with substantially the same amount of processing as in the case of discrete word recognition.
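    The decomposition underlying advantages (1) and (13) can be restated compactly in the notation of the description, with the convention D0 = 0; these are the maximizations referred to in advantage (1):

        $$D_q = \max_{1 \le n \le N} \; \max_{0 \le p < q} \left[ D_p + S\bigl(A(p+1,\, q),\; B^n\bigr) \right], \qquad q = 1, \dots, I,$$

        $$W_q = \operatorname*{arg\,max}_{1 \le n \le N} \; \max_{0 \le p < q} \left[ D_p + S\bigl(A(p+1,\, q),\; B^n\bigr) \right].$$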




    Claims

    1. A method for recognizing a continuous speech pattern, comprising a combination of a first step and a second step,

    said first step comprising maximizing, by dynamic programming with respect to a boundary p, a sum of: maximum similarity measures Dp, where Dp = 0 (for p = 0) and Dp = S(A(1, p), B̂p) (for p = 1 to I), between a partial pattern A(1, p) = α1, α2, ..., αp (for 1 ≤ p ≤ I), having as endpoint a time point i = p of an input pattern A = (α1, α2, ..., αi, ..., αI) expressed as a time sequence of feature vectors α, and an optimal combination B̂p of reference patterns Bn = β1n, β2n, ..., βjn, ..., βJnn, which are preset with respect to word numbers n (n = 1, ..., N); and a maximum value S(A(p+1, q), Bn) of the sum of the similarity measures s(αi, βjn) between the vectors αi and βjn, obtained through a function j(i) which establishes a correspondence between a time base i of a partial pattern A(p+1, q) = αp+1, αp+2, ..., αi, ..., αq (for 0 ≤ p < q ≤ I) and a time base j of said reference patterns Bn; so that the maximum value with respect to said word numbers n among the results of the maximization is defined as the maximum similarity measure Dq, and said word numbers n, as data Wq providing said maximum similarity measures Dq, are sequentially obtained for q = 1 to I, whereby all of said similarity measures and word data Di and Wi (for i = 1 to I) are obtained, and

    said second step comprising producing, by dynamic programming, a similarity measure S(Ā(u, v), B̄Wu) between a reversed partial pattern Ā(u, v), having as starting point u said time point i = I of said input pattern A and an endpoint v (for I ≥ u > v ≥ 1), the word Wu being used as a recognized word, and a reference pattern B̄Wu which is a time-sequentially reversed pattern of the reference pattern BWu; determining a boundary v as vmax which maximizes a sum of said similarity measure S(Ā(u, v), B̄Wu) and said maximum similarity measure Dv-1; and sequentially repeating the determination of said boundary v by setting u = vmax - 1 to obtain said recognized words Wu, the order of said recognized words Wu being finally reversed.


     
    2. A method according to claim 1, wherein said maximum similarity measure Dq is computed by maximization with respect to said boundary p by:

    Dq = MAX[n = 1 to N] MAX[p = 0 to q-1] {Dp + S(A(p+1, q), Bn)}, where D0 = 0.
     
    3. A continuous speech recognition device comprising a combination of:

    an input pattern buffer for storing an input pattern A = (α1, α2, ..., αi, ..., αI) which is represented by a time sequence of feature vectors α;

    a reference pattern memory for storing reference patterns Bn = β1n, β2n, ..., βjn, ..., βJnn, which are preset with respect to each word n (n = 1, ..., N);

    a maximum similarity measure memory for storing a maximum similarity measure Dq between a partial pattern A(1, q) (for 1 ≤ q ≤ I), having as starting point a time point i = 1 and as endpoint a time point i = q of said input pattern A, and an optimal combination of reference patterns;

    a terminal word memory for storing a word number Wq of the terminal word of the combination of reference patterns which provides said maximum similarity measure Dq;

    a pattern matching unit which maximizes, by dynamic programming, a sum of a maximum value S(A(p+1, q), Bn) and said maximum similarity measure Dp, said maximum value S(A(p+1, q), Bn) being a sum of similarity measures s(αi, βjn) defined by i and j(i), by properly determining a function j(i) which establishes a correspondence between a time base i of a partial pattern A(p+1, q) = αp+1, αp+2, ..., αi, ..., αq (for 1 ≤ p+1 ≤ q ≤ I), having a time point i = p+1 as a starting point and a time point i = q as an endpoint, and a time base j of said reference patterns Bn with respect to the word numbers n;

    a first decision unit for determining said maximum similarity measure Dq as the maximum value of the resultant MAX{Dp + S(A(p+1, q), Bn)} with respect to all of said word numbers n, and for determining the word number n which provides said maximum value as said terminal word Wq;

    a second decision unit for determining a boundary v as vmax which provides a maximum value of a sum of a similarity measure S(Ā(u, v), B̄Wu) and a similarity measure Dv-1, and for backtracking the boundaries from u = vmax - 1, as a starting point of the immediately previous word, to the starting point of said input pattern A, said similarity measure S(Ā(u, v), B̄Wu) being obtained, by dynamic programming, between a reversed partial pattern Ā(u, v) = (αu, αu-1, ..., αv+1, αv), having said time point i = I of said input pattern A as a starting point u and said boundary v as an endpoint, and a reference pattern B̄Wu which is a reversed pattern of the reference pattern BWu of said terminal word Wu at said starting point u; and

    an order reversing unit for rearranging said terminal words Wu, which are sequentially obtained at said starting points u, and for producing an output in the same order as the input speech pattern.


     
    4. A device according to claim 3, wherein said pattern matching unit comprises: a similarity measure operation unit for computing said similarity measures s(αi, βjn) between said feature vectors αi and βjn specified by said time points i and j and said word number n; and a predetermined number of intermediate result memories (e.g., gn(j), hn(j) and fn(j)) for storing, for each j and n, the intermediate results of each path computed by a recursive relation with respect to the time points i processed so far.
     
    5. A device according to claim 4, wherein said similarity measure operation unit comprises an input pattern buffer for storing said input pattern represented by a time sequence of said feature vectors αi, and a reference pattern memory for storing said reference pattern Bn represented by a time sequence of said feature vectors βjn having the same word number n.
     
    6. A device according to claim 4, wherein said pattern matching unit comprises means for computing, for each of said time points i, a plurality of sums among the initial value Di-1, externally given for j = 1 when the recursive relation having a two-side slope constraint is computed for the word having said word number n, said similarity measure s(αi, βjn), and the path data respectively read out from said intermediate result memories gn(j), hn(j) and fn(j), and for writing the maximum value into said intermediate result memory gn(j) and the other results into said intermediate result memories hn(j) and fn(j), so that the result gn(Jn) of said recursive relation, computed for each of said time points i and words n for j up to the number Jn of feature vectors of said reference pattern, corresponds to a similarity measure between the partial pattern having said time point i as endpoint and a combination of reference patterns having the word of said word number n as terminal word.
     




    Drawing

    Search report