BACKGROUND OF THE INVENTION
Field of the Invention:
[0001] This invention relates to an approximate reasoning apparatus.
Description of the Related Art:
[0002] An approximate reasoning method is known in which the results of reasoning are revised
or altered depending upon the information quantities of the factors used in order to derive
those results. (For example, see "AN EXPERT SYSTEM WITH THINKING IN IMAGES", by Zhang
Hongmin, Preprints of Second IFSA Congress, Tokyo, July 20 - 25, 1987, p. 765.)
[0003] This approximate reasoning method uses a membership function, given for every conclusion
relative to a factor, to calculate the information quantity of every factor (i.e., the
information identifying capability of the factor). The results of reasoning (namely the
possibility that a conclusion will hold) are then revised or altered depending upon the
information quantities of the factors used in order to derive the conclusion (wherein
the revision or alteration involves taking the product of possibility and information
quantity), thereby improving the capability to identify the results of reasoning.
[0004] With this conventional method of approximate reasoning, however, experts are required
when constructing or revising the knowledge base, and this is very troublesome. In
addition, performing maintenance on the knowledge base is difficult.
SUMMARY OF THE INVENTION
[0005] Accordingly, an object of the present invention is to make it possible to improve
and newly construct a knowledge base using case-history data.
[0006] According to the present invention, the foregoing objects are attained by providing
an approximate reasoning apparatus comprising knowledge memory means for storing knowledge
which represents relationships between factors and conclusions already established,
synthesized knowledge memory means for storing knowledge synthesized with regard to
the same factors and conclusions, case-history data memory means for storing data
representing relationships between factors and conclusions which actually have occurred,
and knowledge synthesizing/revising means which synthesizes knowledge that concerns
the same factors and conclusions stored in the knowledge memory means, and which,
by using the data stored in the case-history data memory means, re-synthesizes the
synthesized knowledge that concerns this data and the same factors and conclusions,
wherein the knowledge re-synthesized by the knowledge synthesizing/revising means
is used to update the synthesized knowledge in the synthesized knowledge memory means
that concerns this knowledge and the same factors and conclusions.
[0007] In accordance with the present invention, data representing the relationships between
factors and conclusions which have occurred is accumulated in memory, thereby making
it possible to revise a knowledge base, which has already been established (e.g.,
at the design stage), using the accumulated data. Since the knowledge base is revised
using data representing the relationships between factors and conclusions which actually
have occurred, more accurate approximate reasoning becomes possible. In addition,
since revision of the knowledge base is performed automatically, maintenance of the
knowledge base is possible without the aid of experts.
[0008] In a case where the approximate reasoning apparatus of the present invention is used
in fault diagnosis, the data representing the relationships between the factors and
conclusions which have occurred is data representing the relationships between the
types of faults and the mechanical symptoms (symptoms that can be perceived by the
five senses, measured values sensed by a sensor, etc.) prevailing at the time of the
fault, and this data is recorded in a fault report, maintenance report or the like.
Alternatively, the data can be collected automatically and stored in a memory.
[0009] The approximate reasoning apparatus described above further comprises approximate
reasoning means for computing the possibility of a conclusion by applying factor-input
data to the knowledge stored in the synthesized knowledge memory means.
[0010] The approximate reasoning means includes degree-of-membership computing means for
converting inputted data into degree of membership using a membership function represented
by the above-mentioned knowledge, dynamic information quantity computing means for
obtaining a dynamic information quantity for every factor using this degree of membership,
and possibility computing means for obtaining the possibility of a conclusion using
the degree of membership and the dynamic information quantity.
[0011] When the knowledge stored in the synthesized knowledge memory means is updated, the
approximate reasoning means performs re-calculation based upon the knowledge after
updating. This makes possible more correct reasoning.
[0012] The approximate reasoning apparatus further includes static information quantity
computing means for computing a static information quantity of each factor based upon
the synthesized knowledge. The static information quantity of a factor indicates the
capability of a membership function of a factor to identify a conclusion.
[0013] The static information quantity computing means re-computes the static information
quantity of each factor in relation to the re-synthesized knowledge when the synthesized
knowledge is re-synthesized.
[0014] The approximate reasoning apparatus further includes clarity computing means for
computing the clarity of each factor for every conclusion using the static information
quantities computed by the static information quantity computing means.
[0015] The clarity computing means re-computes clarity using the static information quantities
obtained by re-computation when the synthesized knowledge is re-synthesized.
[0016] The approximate reasoning apparatus further comprises adding means for computing
the clarity of every conclusion by adding the clarity of factors, for which data has
actually been inputted, using the clarity obtained from the clarity computing means.
[0017] The reliability of a conclusion can be determined from the added clarities.
[0018] According to another aspect of the invention, the approximate reasoning apparatus
comprises case-history data memory means for storing the relationship between factors
and conclusions which have occurred, data synthesizing means for synthesizing, as
one item of data, a plurality of items of data regarding the same factors and conclusions
stored in the case-history data memory means, synthesized data memory means for storing
data synthesized by the data synthesizing means, and approximate reasoning means for
computing the possibility of a conclusion by applying factor-input data to the synthesized
data stored in the synthesized data memory means.
[0019] In accordance with the present invention, data representing the relationship between
factors and conclusions which have occurred in the past is accumulated in memory,
and a knowledge base is created based upon the accumulated data. This makes possible
the construction of a knowledge base automatically without the aid of experts.
[0020] The approximate reasoning means includes degree-of-membership computing means for
converting inputted data into degree of membership using a membership function represented
by the above-mentioned synthesized data, dynamic information quantity computing means
for obtaining a dynamic information quantity for every factor using this degree of
membership, and possibility computing means for obtaining the possibility of a conclusion
using the degree of membership and the dynamic information quantity.
[0021] The approximate reasoning apparatus further includes static information quantity
computing means for computing a static information quantity of each factor based upon
the synthesized data, clarity computing means for computing the clarity of each factor
for every conclusion using the static information quantities computed by the static
information quantity computing means, and adding means for computing the clarity of
every conclusion by adding the clarity of factors, for which data has actually been
inputted, using the clarity obtained from the clarity computing means.
[0022] Other features and advantages of the present invention will be apparent from the
following description taken in conjunction with the accompanying drawings, in which
like reference characters designate the same or similar parts throughout the figures
thereof.
BRIEF DESCRIPTION OF THE DRAWINGS
[0023]
Fig. 1 is a block diagram illustrating an example of the overall construction of an
approximate reasoning apparatus according to a first embodiment of the present invention;
Fig. 2 is a graph depicting a Gaussian distribution;
Figs. 3a through 3c are graphs showing the manner in which a membership function is
formed;
Figs. 4a and 4b are graphs illustrating membership functions obtained for each factor;
Figs. 5a and 5b are graphs illustrating the manner in which degree of membership is
obtained;
Fig. 6 is a diagram showing the contents of a memory unit storing case-history data;
Fig. 7 is a block diagram illustrating an example of the overall construction of an
approximate reasoning apparatus according to a second embodiment of the present invention;
and
Figs. 8a through 8d are graphs showing distributions of case-history data.
DESCRIPTION OF THE PREFERRED EMBODIMENTS
(1) Overall construction of the approximate reasoning apparatus
[0024] Figs. 1 and 7 illustrate examples of the overall construction of an approximate reasoning
apparatus. Fig. 1 illustrates an approximate reasoning apparatus according to a first
embodiment, and Fig. 7 illustrates an approximate reasoning apparatus according to
a second embodiment.
[0025] In Fig. 1, the approximate reasoning apparatus of the first embodiment comprises
a knowledge memory unit 11, a knowledge synthesizing/revising unit 12, a synthesized
knowledge memory unit 13, a factor-value input unit 14, a degree-of-membership computing
unit 15, a dynamic information quantity computing unit 16, a possibility computing
unit 17, a possibility display unit 18, a static information quantity computing unit
19, a clarity computing unit 20, a clarity memory unit 21, a clarity adding unit 22,
a clarity display unit 23, and a case-history data memory unit 31.
[0026] In Fig. 7, the approximate reasoning apparatus of the second embodiment is not provided
with the knowledge memory unit 11 shown in Fig. 1. Further, the knowledge synthesizing/revising
unit 12 and synthesized knowledge memory unit 13 of Fig. 1 are replaced by a data synthesizing
unit 12A and a synthesized data memory unit 13A, respectively. Other components in
Fig. 7 are the same as shown in Fig. 1.
[0027] Thus, the approximate reasoning apparatuses of the first and second embodiments possess
many common elements, and these common elements constitute the principal portions
of the approximate reasoning apparatus. Accordingly, these common elements will be
described first with reference to Fig. 1, then the structural components and operations
peculiar to the first and second embodiments will be described individually.
(2) Knowledge memory unit
[0028] The knowledge memory unit 11 stores knowledge, which has been inputted by an expert
or the like (at a design stage, for example), in a form which indicates the relationships
between factors and conclusions. This unit is capable of storing the knowledge of
a plurality of experts.
[0029] Examples of the knowledge of two experts ex₁, ex₂ stored in the knowledge memory
unit 11 are illustrated below in the form of rules.
[0030] Expert ex₁:
[0031] Expert ex₂:
[0032] Here f₁ and f₂ are factors, which shall be referred to as factor 1 and factor 2,
respectively, hereinafter. Further, c₁ and c₂ are conclusions, which shall be referred
to as conclusion 1 and conclusion 2, respectively.
[0033] Further, the values a and b expressed such that a ≦ f₁ ≦ b holds shall be referred to
as the minimum and maximum values, respectively, hereinafter.
[0034] The foregoing rules become as follows for each expert when expressed in the form
of a table:

(3) Knowledge synthesizing/revising unit
[0035] The knowledge synthesizing/revising unit 12 combines the knowledge of the plurality
of experts, which has been stored in the knowledge memory unit 11, into a single body
of knowledge.
[0036] The knowledge synthesizing/revising unit 12 functions also to revise the combined
(synthesized) knowledge using case-history data stored in the case-history data memory
unit 31. This will be described in detail later in the section dealing with the first
embodiment.
[0037] Though there are various methods of synthesizing knowledge, here the mean value and
standard deviation of a plurality of experts are calculated with regard to the maximum
and minimum values of each factor participating in each conclusion.
[0038] Knowledge synthesizing processing will now be described taking as an example knowledge
which derives the conclusion c₁ from the factor f₁ of the two experts mentioned above.
[0039] When rules for obtaining conclusion 1 (c₁) from factor 1 (f₁) are extracted from
the above-mentioned rules [Eq. (1) and Eq. (3)], they are expressed as follows:
[0040] The mean value m_min of the minimum values and the mean value m_max of the maximum
values are calculated.

[0041] The standard deviation σ_min of the minimum values and the standard deviation σ_max
of the maximum values are calculated.

[0042] When such processing for combining the knowledge of the experts is carried out for
all minimum and maximum values of each factor participating in each conclusion with
regard to the above-mentioned rules [Eqs. (1) through (4)], the following table is
obtained:

[0043] Generally, in approximate reasoning, a membership function is given for a factor.
As one example, a method will be described in which a membership function is obtained
by a Gaussian distribution using the knowledge of experts combined as set forth above.
[0044] A membership function is expressed by the following equation using the mean value
m_min of minimum values, the mean value m_max of maximum values, the standard deviation
σ_min of minimum values and the standard deviation σ_max of maximum values:

where
- x :
- value of data inputted to factor
- Φ(x) :
- degree to which input data belongs to factor (degree of membership)
- Gauss (x) :
- value of the Gaussian distribution at input x
[0045] Fig. 2 illustrates an example of a Gaussian distribution. In this Gaussian distribution,
only the left half is used in order to form the membership function. The position of x
at which Φ(x) = 0.5 is decided by m_min or m_max, and the slope is decided by σ_min or
σ_max.
[0046] As one example, a membership function for obtaining the conclusion c₁ from factor
f₁ is formed in the manner of Figs. 3a through 3c using the values calculated from
Eqs. (7) through (10). In this case, Eq. (11) becomes as follows:

[0047] Fig. 3a represents the first term on the right side of Eq. (11) or (12), Fig. 3b
represents the second term on the right side of Eq. (11) or (12), and Fig. 3c represents
the result of subtracting the second term from the first term, namely a membership
function expressed by Eq. (11) or (12).
[0048] Figs. 4a and 4b illustrate examples of membership functions for obtaining the conclusions
c₁, c₂ with regard to the factors f₁, f₂ formed based upon the combined knowledge
shown in Table 3.
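By way of illustration only, the synthesizing step of Eqs. (7) through (10) and the
membership-function construction of Eqs. (11) and (12) might be sketched as follows in
Python. The expert values are hypothetical, the population form of the standard deviation
is assumed, and Gauss(x) is read here as a cumulative Gaussian curve (value 0.5 at the
mean, steepness set by the standard deviation), which is one plausible reading of Fig. 2
and paragraph [0045] rather than the equation actually recited.

import math

def synthesize(mins, maxs):
    """Combine several experts' minimum/maximum values (cf. Eqs. (7)-(10));
    population standard deviation assumed."""
    m_min = sum(mins) / len(mins)
    m_max = sum(maxs) / len(maxs)
    s_min = math.sqrt(sum((v - m_min) ** 2 for v in mins) / len(mins))
    s_max = math.sqrt(sum((v - m_max) ** 2 for v in maxs) / len(maxs))
    return m_min, s_min, m_max, s_max

def gauss(x, m, s):
    """Assumed cumulative-Gaussian reading of Gauss(x): value 0.5 at x = m, slope set by s."""
    if s == 0.0:
        return 1.0 if x >= m else 0.0
    return 0.5 * (1.0 + math.erf((x - m) / (s * math.sqrt(2.0))))

def membership(x, m_min, s_min, m_max, s_max):
    """Assumed form of Eq. (11): first term minus second term (cf. Figs. 3a-3c)."""
    return gauss(x, m_min, s_min) - gauss(x, m_max, s_max)

# Hypothetical minimum/maximum values of two experts for factor f1 and conclusion c1.
m_min, s_min, m_max, s_max = synthesize(mins=[20, 30], maxs=[60, 80])
print(membership(40, m_min, s_min, m_max, s_max))   # degree of membership of x = 40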
(4) Synthesized knowledge memory unit
[0049] The synthesized knowledge memory unit 13 stores the mean values and standard deviation
values, which have been calculated by the knowledge synthesizing/revising unit 12,
in the form shown in Table 3. Since the combining of knowledge is not required to
be carried out whenever reasoning is performed, the results of calculation are thus
stored in advance. Then, by reading out the values from the memory unit 13 and using
them whenever reasoning is performed, reasoning processing can be executed at high
speed.
(5) Factor-value input unit
[0050] The factor-value input unit 14 is for reading in input data, which is entered for
every factor, from a keyboard, a communication interface device, a memory, a file,
etc. The inputted data is applied to the degree-of-membership computing unit 15. In
addition, the factor-value input unit 14 provides the clarity adding unit 22 with
information indicating whether data relating to each factor has been entered. The
factor data can be given not only as definite values but also as linguistic values
or membership functions.
(6) Degree-of-membership computing unit
[0051] The degree-of-membership computing unit 15 calculates the degree to which the data
inputted from the factor-value input unit 14 belongs to each membership function (or
conclusion). More specifically, the degree of membership is obtained as Φ(x) by substituting
the input data as the variable x on the right side of Eq. (11) in a case where the
input data is a definite value. Of course, it is not absolutely necessary to use an
arithmetic expression of this kind. In a case where the input data is a linguistic
value or membership function, the degree of membership would be calculated using a
MIN-MAX operation, by way of example.
(7) Dynamic information quantity computing unit and static information quantity computing unit
[0052] Let x₁ represent the factor value (input data) of factor f₁, and let x₂ represent
the factor value of factor f₂. These items of data are inputted from the factor-value
input unit 14.
[0053] Degrees of membership m₁₁, m₁₂, m₂₁, m₂₂ are decided as follows, as shown in Figs.
5a and 5b:
- m₁₁:
- degree of membership of input data x₁ in conclusion c₁
- m₁₂:
- degree of membership of input data x₁ in conclusion c₂
- m₂₁:
- degree of membership of input data x₂ in conclusion c₁
- m₂₂:
- degree of membership of input data x₂ in conclusion c₂
[0054] These degrees of membership are calculated by the degree-of-membership computing
unit 15 when the items of input data x₁, x₂ are applied thereto.
[0055] The concept of fuzzy entropy will now be considered.
[0056] Fuzzy entropy Ef1 when the input x₁ is applied is defined as follows:

[0057] Fuzzy entropy is a type of index of information identification capability. The greater
the clarity with which a conclusion can be identified when the input data x₁ is applied,
the smaller the value of fuzzy entropy. Conversely, the greater the degree of ambiguity
involved in identifying a conclusion, the larger the value of fuzzy entropy becomes.
In other words, the greater the difference between the degree of membership m₁₁ of
the input data x₁ in the conclusion c₁ and the degree of membership m₁₂ of the input
data x₁ in the conclusion c₂, the smaller the value of fuzzy entropy; the smaller
the difference, the greater the value of fuzzy entropy.
[0058] Similarly, fuzzy entropy Ef2 when the input x₂ is applied is given by the following
equation:

[0059] The range of possible values of fuzzy entropy Ef is 0 ≦ Ef ≦ log (n), where
- n:
- number of conclusions in terms of the factor
[0060] In this example, the number of conclusions in terms of factor 1 (f₁) is two (c₁,
c₂), and therefore the maximum value of fuzzy entropy Ef is log (2).
[0061] Next, a dynamic information quantity If₁D(x₁) which prevails when the input data x₁
is applied is obtained using the fuzzy entropy Ef1. Here the dynamic information quantity
If₁D(x₁) is the identification capability of a factor for deciding a conclusion when reasoning
is performed. The greater the difference between the degree of membership m₁₁ of the
input data x₁ in the conclusion c₁ and the degree of membership m₁₂ of the input data
x₁ in the conclusion c₂, the larger the value of the dynamic information quantity;
the smaller the difference, the smaller the value of the dynamic information quantity.
[0062] The dynamic information quantity If₁D(x₁) regarding the factor f₁ is defined as the
result obtained by subtracting the fuzzy entropy Ef1, which prevails when the input data
x₁ is applied, from the maximum fuzzy entropy.

[0063] Similarly, the dynamic information quantity which prevails when the input data x₂
is applied is as follows, with regard to the factor f₂:

[0064] The dynamic information quantity computing unit 16 calculates the dynamic information
quantity for every factor, in accordance with Eqs. (15) and (16), using the degrees
of membership obtained by the degree-of-membership computing unit 15.
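By way of illustration only, the fuzzy entropy and dynamic information quantity of Eqs.
(13) through (16) might be sketched as follows. Since those equations are not reproduced
above, the entropy used here is the ordinary entropy of the normalized degrees of
membership, chosen only because it reproduces the stated behaviour (zero when one
conclusion clearly dominates, log n when the degrees are equal); the exact form recited
in the specification may differ.

import math

def fuzzy_entropy(memberships):
    """Assumed form of Ef: entropy of the normalized degrees of membership of one factor."""
    total = sum(memberships)
    if total == 0.0:
        return math.log(len(memberships))   # no information at all: maximum ambiguity
    probs = [m / total for m in memberships]
    return -sum(p * math.log(p) for p in probs if p > 0.0)

def dynamic_information(memberships):
    """Dynamic information quantity: maximum fuzzy entropy minus Ef (cf. Eqs. (15), (16))."""
    return math.log(len(memberships)) - fuzzy_entropy(memberships)

# Hypothetical degrees of membership m11, m12 of input x1 in conclusions c1, c2.
print(dynamic_information([0.8, 0.2]))   # conclusion clearly identified -> larger value
print(dynamic_information([0.5, 0.5]))   # completely ambiguous -> 0.0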
[0065] The dynamic information quantity depends upon the input data x₁, x₂, as mentioned
above. On the other hand, a static information quantity is independent of the input
data. The result obtained by subtracting the average of fuzzy entropies within the
range of a factor from the maximum fuzzy entropy shall be the static information quantity
of the entire factor. For example, the static information quantity with regard to
factor 1 is given by the following equation:

[0066] Similarly, the static information quantity with regard to factor 2 is given by the
following equation:

where
- m₁₁(x):
- degree of membership of input data x in conclusion c₁ with regard to factor f₁
- m₁₂(x):
- degree of membership of input data x in conclusion c₂ with regard to factor f₁
- m₂₁(x):
- degree of membership of input data x in conclusion c₁ with regard to factor f₂
- m₂₂(x):
- degree of membership of input data x in conclusion c₂ with regard to factor f₂

calculation performed by varying x at an interval δ over the range 0 - 100 of the factor,
computing the fuzzy entropy for each x, and obtaining the average of these entropies
(where 0 < δ ≦ 100)
[0067] As will be understood from Eqs. (17) and (18), the greater the overlapping between
membership functions of factors, the smaller the static information quantities of
factors. Conversely, the smaller the overlapping between membership functions of factors,
the greater the static information quantities of factors. In other words, the static
information quantity indicates the capability of a membership function of a factor
to identify a conclusion.
[0068] The static information quantity computing unit 19 computes and stores a static information
quantity for every factor, in accordance with Eqs. (17) and (18), from membership
functions obtained by combined knowledge. The static information quantity is independent
of input data and therefore need be computed only once.
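Similarly, and by way of illustration only, the static information quantity of Eqs. (17)
and (18) might be sketched as follows under the same assumed entropy form: the membership
functions of one factor are sampled at an interval δ over the factor range 0 - 100, the
fuzzy entropy is computed at each sample point, and the average is subtracted from the
maximum entropy. The triangular membership functions are hypothetical.

import math

def fuzzy_entropy(memberships):
    total = sum(memberships)
    if total == 0.0:
        return math.log(len(memberships))
    probs = [m / total for m in memberships]
    return -sum(p * math.log(p) for p in probs if p > 0.0)

def static_information(membership_funcs, lo=0.0, hi=100.0, delta=1.0):
    """Maximum fuzzy entropy minus the average fuzzy entropy over the factor range
    (cf. Eqs. (17), (18)); membership_funcs holds one function per conclusion."""
    xs = []
    x = lo
    while x <= hi:
        xs.append(x)
        x += delta
    avg = sum(fuzzy_entropy([f(x) for f in membership_funcs]) for x in xs) / len(xs)
    return math.log(len(membership_funcs)) - avg

# Hypothetical triangular membership functions of one factor for conclusions c1 and c2.
def c1(x): return max(0.0, 1.0 - abs(x - 30.0) / 30.0)
def c2(x): return max(0.0, 1.0 - abs(x - 70.0) / 30.0)

print(static_information([c1, c2]))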
(8) Possibility computing unit
[0069] For each and every conclusion, an information quantity of a factor is calculated
such that the sum total of information quantities of factors participating in the
conclusion becomes 1 and the relative strengths of the information quantities of these
factors do not change. This calculated information quantity is referred to as weight.
[0070] For example, when the above-described dynamic information quantities are used, each
weighting is as follows:
Weight of factor 1 with respect to conclusion 1:
Weight of factor 2 with respect to conclusion 1:
Weight of factor 1 with respect to conclusion 2:
Weight of factor 2 with respect to conclusion 2:
[0071] Next, the products of these weights and degrees of membership are computed, these
are totaled for every conclusion, and the result is outputted as the possibility of
a conclusion.
[0072] For instance, in the above example, we have the following:
[0073] The possibility computing unit 17 performs the foregoing computations and calculates
the possibility of every conclusion.
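By way of illustration only, the weighting and possibility computation of paragraphs
[0069] through [0071] might be sketched as follows; the dynamic information quantities
and degrees of membership are hypothetical figures.

# Hypothetical dynamic information quantities If1D(x1), If2D(x2) and degrees of
# membership m11, m12, m21, m22 for two factors and two conclusions.
info = {"f1": 0.4, "f2": 0.6}
membership = {("f1", "c1"): 0.8, ("f1", "c2"): 0.2,
              ("f2", "c1"): 0.6, ("f2", "c2"): 0.4}

def possibility(conclusion, factors, info, membership):
    """Weight = dynamic information quantity normalized to sum to 1 over the
    participating factors; possibility = weighted sum of degrees of membership."""
    total_info = sum(info[f] for f in factors)
    return sum((info[f] / total_info) * membership[(f, conclusion)] for f in factors)

for c in ("c1", "c2"):
    print(c, possibility(c, ["f1", "f2"], info, membership))   # c1 -> 0.68, c2 -> 0.32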
(9) Possibility display unit
[0074] The possibility display unit 18 displays, for every conclusion, the possibility computed
by the possibility computing unit 17. The display of possibility can be presented
for all conclusions, or one or a plurality of possibilities can be displayed for a
conclusion or conclusions for which the possibility is high. In addition, possibilities
can be transmitted to another apparatus by communication or stored in a memory or
file.
(10) Clarity computing unit
[0075] The clarity computing unit 20 computes the clarity of each factor for each and every
conclusion. Here the clarity of each factor for each and every conclusion is taken
to be an indication of the relative identification capability of each factor when
the possibility of a certain conclusion is decided. Accordingly, the identification
capabilities of a plurality of factors for deciding a certain conclusion can be compared
on the basis of clarity, so that it can be understood which factor possesses a high
identification capability (i.e., which factor possesses a large quantity of information).
A method of calculating clarity will now be described.
[0076] First, the relationship among conclusions, factors and static information quantities
is shown in Table 4.

[0077] As will be understood from Table 4, the identification capabilities of a plurality
of factors for deciding each conclusion can be compared depending upon static information
quantities as well. However, since relative identification capability is difficult
to grasp intuitively in this form, the static information quantity is normalized for
each and every conclusion, as shown in the following table, and the normalized value
is adopted as the clarity Cℓ of each factor for each and every conclusion.

In Table 5, we have
[0078] Thus, in the clarity computing unit 20, the clarity of each factor is calculated
for each and every conclusion.
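By way of illustration only, the normalization of Table 5 might be sketched as follows;
the static information quantities are hypothetical figures.

# Hypothetical static information quantities per factor for each conclusion (cf. Table 4).
static_info = {"c1": {"f1": 0.5, "f2": 0.3},
               "c2": {"f1": 0.5, "f2": 0.3}}

def clarities(static_info):
    """Normalize the static information quantities per conclusion (cf. Table 5)."""
    result = {}
    for conclusion, per_factor in static_info.items():
        total = sum(per_factor.values())
        result[conclusion] = {f: q / total for f, q in per_factor.items()}
    return result

print(clarities(static_info))   # e.g. clarity of f1 for c1 = 0.5 / 0.8 = 0.625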
(11) Clarity memory unit
[0079] The clarity memory unit 21 stores the clarity of each factor for every conclusion
calculated by the clarity computing unit 20. The computation of clarity need not be
performed each time reasoning is carried out. Accordingly, the clarity calculated
is stored in the clarity memory unit 21 in advance when knowledge is combined, and
a value that has been stored in the clarity memory unit 21 is read out whenever reasoning
is executed. This makes it possible to achieve high-speed reasoning processing.
(12) Clarity adding unit
[0081] The clarity adding unit 22 calculates the clarity of a factor for which data has
actually been inputted. Here, for the reasoning actually carried out, the sum total
of the clarities of the factors for which data has been inputted is calculated.
The sum total of clarities indicates the clarity of the result of reasoning. It can
be said that the greater the clarity, the greater the information quantity for deriving
the result of reasoning. Accordingly, clarity can be used as an index for judging
the reliability of the result of reasoning itself.
[0082] Clarity regarding the result of reasoning is calculated as follows:
a) In a case where data is inputted with regard to only factor 1 (f₁)
♢ clarity regarding results of reasoning of conclusion 1 (c₁):

♢ clarity regarding results of reasoning of conclusion 2 (c₂):

b) In a case where data is inputted with regard to only factor 2 (f₂)
♢ clarity regarding results of reasoning of conclusion 1 (c₁):

♢ clarity regarding results of reasoning of conclusion 2 (c₂):

c) In a case where data is inputted with regard to both factor 1 (f₁) and factor 2
(f₂)
♢ clarity regarding results of reasoning of conclusion 1 (c₁):

♢ clarity regarding results of reasoning of conclusion 2 (c₂):

[0083] Thus the range of possible values of the clarity Cℓ of the results of reasoning is
0.0 ≦ Cℓ ≦ 1.0. In other words, in a case where reasoning is performed upon entering
data regarding all factors capable of being used to deduce a certain conclusion in
the body of knowledge given for reasoning, the clarity of the conclusion will be 1.0. In
a case where data is inputted with regard to only some factors among the factors capable
of being used to deduce a certain conclusion, clarity takes on a value between 0.0
and 1.0. If many factors having a high degree of clarity among the usable factors
are employed in such case, the clarity of the conclusion also will be high and the
results of reasoning will have a high reliability.
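The addition of clarities in cases a) through c) above amounts to summing, for a given
conclusion, the clarities of only those factors for which data was actually entered, as
in the following sketch (hypothetical values):

clarity = {("f1", "c1"): 0.625, ("f2", "c1"): 0.375}   # hypothetical Cl values for conclusion c1

def result_clarity(conclusion, entered_factors, clarity):
    """Sum the clarities of the factors for which data has actually been inputted."""
    return sum(clarity[(f, conclusion)] for f in entered_factors)

print(result_clarity("c1", ["f1"], clarity))         # only f1 entered -> 0.625
print(result_clarity("c1", ["f1", "f2"], clarity))   # both factors entered -> 1.0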
(13) Clarity display unit
[0084] The clarity display unit 23 displays the clarity of the results of reasoning (one
example of which is possibility, described above) calculated by the clarity adding
unit 22. Clarity can be displayed along with the results of reasoning. Alternatively,
clarity can be transmitted to another apparatus or stored in a memory or file.
[0085] The display of clarity is presented with regard to all conclusions of the results
of reasoning. Accordingly, in a case where a plurality of conclusions exists, the
clarity corresponding to each conclusion is displayed.
[0086] Thus, whenever data is inputted, the information quantity of a factor to which the
inputted data belongs is calculated and the clarity regarding the results of reasoning
is displayed, thereby making it possible for the user to judge the reliability of
the results of reasoning.
(14) First Embodiment
[0087] The approximate reasoning apparatus according to the first embodiment shown in Fig.
1 has the case-history data memory unit 31, in which the relationships between factors
and conclusions which have occurred are stored. The relationships between factors
and conclusions can be stored at every occurrence, or they can be stored collectively
after several occurrences.
[0088] An example of the case-history data is illustrated in Fig. 6. Here it is shown that
conclusion c₁ occurs at the first occurrence, and that the values of factors f₁ and
f₂ at this time are 30 and 60, respectively. Further, it is shown that conclusion
c₂ occurs at the second occurrence, and that the values of factors f₂ and f₃ at this
time are 10 and 30, respectively.
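By way of illustration only, the contents of the case-history data memory unit 31 shown
in Fig. 6 might be represented as a list of occurrences, each recording the conclusion
that occurred and the factor values observed at that time; the structure below is merely
one possible representation, using the first- and second-occurrence values quoted above.

# Each entry: (occurrence number, conclusion, {factor: observed value}).
case_history = [
    (1, "c1", {"f1": 30, "f2": 60}),
    (2, "c2", {"f2": 10, "f3": 30}),
]

def values_for(conclusion, factor, case_history):
    """Collect all recorded values relating a given factor to a given conclusion."""
    return [vals[factor] for _, c, vals in case_history if c == conclusion and factor in vals]

print(values_for("c1", "f1", case_history))   # -> [30]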
[0089] If this approximate reasoning apparatus is applied to a fault diagnosing system,
relationships between factors and conclusions occur when a fault develops. Conclusions
correspond to the types of faults, and factors correspond to the symptoms (states
perceived by the five senses, output signals from various sensors, etc.) at such time.
A record (fault data) representing these relationships between faults and their symptoms
can be obtained from a repair report, a maintenance report, a fault report, etc. The
case-history data memory unit 31 would be referred to as a fault data memory unit
in such case.
[0090] The storing of this fault data in the memory unit 31 can be accomplished by a human
being entering the data using an input unit, or by an on-line method in which the
data is entered from the item of equipment undergoing diagnosis.
[0091] The knowledge synthesizing/revising unit 12 revises the synthesized knowledge, which
has been stored in the synthesized knowledge memory unit 13, using the case-history data
stored in the case-history data memory unit 31. It will suffice to revise only the
synthesized knowledge that relates to the case-history data.
[0092] The method of revision is the same as the synthesizing method described above, and
the case-history data may be thought of as the knowledge of one or a plurality of
experts.
[0093] By way of example, if only case-history data at a first occurrence exists, this case-history
data is related to conclusion c₁ and factors f₁, f₂, and therefore the synthesized
knowledge regarding this conclusion and these factors is revised. At the first use
of the case-history data, it cannot be determined whether this is a minimum or maximum
value, and therefore this case-history data is employed as both values.
[0094] First, with regard solely to conclusion c₁ and factor f₁, Eqs. (7) and (8) respectively
expressing the mean value m_min of the minimum values and the mean value m_max of the
maximum values are revised as follows:

[0095] The equations (9) and (10) respectively expressing the standard deviation σ_min of
the minimum values and the standard deviation σ_max of the maximum values are revised
as follows:

[0096] The synthesized knowledge relating to conclusion c₁ and factor f₂ is revised in the same
manner.
[0097] Since the case-history data at the second occurrence is not related to the conclusion
c₁, the synthesized knowledge relating to the conclusion c₁ is not revised by this
case-history data. The synthesized knowledge relating to conclusion c₂ is revised by the
case-history data at the second occurrence.
[0098] When case-history data from first through third occurrences exists, the synthesized
knowledge expressing the relationship between c₁ and f₁ is revised as follows: With
regard to factor f₁, there are a first factor value 30 and a third factor value 60.
Accordingly, the smaller, namely 30, is considered to be the minimum value of the
third expert, and the larger, namely 60, is considered to be the maximum value.
[0099] Eqs. (7) and (8) respectively expressing the mean value m_min of the minimum values
and the mean value m_max of the maximum values are revised as follows:

[0100] The equations (9) and (10) respectively expressing the standard deviation σ_min of
the minimum values and the standard deviation σ_max of the maximum values are revised
as follows:

[0101] Thus, in a case where there are plural items of case-history data regarding specific
factors relating to a certain conclusion, these items of data are arranged in order
of increasing size, the items of data constituting the smaller half are treated as
minimum values, and the items of data constituting the larger half are treated as
maximum values. Further, in a case where the items of case-history data are odd in
number, the item of data whose size is exactly in the middle is adopted as both a
minimum value and a maximum value. This item of data whose size is exactly in the
middle may be included in either the group of minimum values or the group of maximum
values.
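By way of illustration only, the revision rule of paragraphs [0093] through [0101] might
be sketched as follows: the case-history values recorded for a given factor and conclusion
are sorted, the smaller half is added to the experts' minimum values and the larger half
to their maximum values (a single value, or the middle value of an odd-numbered set,
serving on both sides), after which the means and standard deviations are recomputed as
in the synthesizing step. The expert values are hypothetical; 30 and 60 are the first-
and third-occurrence values of factor f₁ cited above, and the population form of the
standard deviation is assumed.

import math

def mean_std(values):
    m = sum(values) / len(values)
    return m, math.sqrt(sum((v - m) ** 2 for v in values) / len(values))

def revise(expert_mins, expert_maxs, case_values):
    """Treat case-history data as additional 'expert' values and re-synthesize."""
    values = sorted(case_values)
    half = len(values) // 2
    if len(values) == 1:
        new_mins, new_maxs = values, values            # one value serves as both min and max
    elif len(values) % 2 == 0:
        new_mins, new_maxs = values[:half], values[half:]
    else:
        new_mins, new_maxs = values[:half + 1], values[half:]   # middle value used on both sides
    mins = expert_mins + new_mins
    maxs = expert_maxs + new_maxs
    return mean_std(mins), mean_std(maxs)

print(revise(expert_mins=[20, 30], expert_maxs=[60, 80], case_values=[30, 60]))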
[0102] When the synthesized knowledge is thus revised by the case-history data to which
it corresponds, the knowledge after revision is stored in the synthesized knowledge
memory unit 13 in place of the earlier synthesized knowledge, and it is applied to
the static information quantity computing unit 19. Accordingly, from this point onward,
approximate reasoning is performed using the revised knowledge. In other words, the
possibility of a conclusion is computed using the revised knowledge. In addition,
re-computation of static information quantities, clarity, etc., is performed solely
in portions relating to the revised knowledge by making use of the revised knowledge.
[0103] Since knowledge is thus revised automatically by case-history data, more accurate
reasoning becomes possible and the system will come to possess an automatic learning
function.
(15) Second Embodiment
[0104] In the second embodiment illustrated in Fig. 7, the knowledge memory unit 11 shown
in Fig. 1 is not provided. The case-history data memory unit 31 is the same as that
shown in Fig. 1. Though the data synthesizing unit 12A and synthesized data memory
unit 13A basically are the same as the knowledge synthesizing/revising unit 12 and
synthesized knowledge memory unit 13 described above, the units 12A, 13A of this embodiment
differ in that they deal with case-history data and not with the knowledge of experts.
[0105] The approximate reasoning apparatus of the second embodiment does not require expert
knowledge, and all rules and membership functions for approximate reasoning are created
from case-history data.
[0106] It is assumed that case-history data of the kind shown in Fig. 6 has been stored
in the case-history data memory unit 31. Since the first through fifth items of case-history
data, with the exception of the second item, are related to the conclusion c₁ and
factors f₁, f₂, a rule concerning the conclusion c₁ and factors f₁, f₂ can be created
by these items of case-history data.
[0107] Figs. 8a through 8d illustrate distributions of case-history data related to the
conclusion c₁ and factor f₁. Fig. 8a represents only the first item of data, Fig.
8b the first and third items of data, and Fig. 8c the first, third and fourth items
of data. Fig. 8d represents all data with the exception of the second item. If the
number of items of case-history data is increased in this fashion, the distribution
thereof becomes correspondingly clearer. By adopting this method, therefore, a membership
function of the factor f₁ for deducing the conclusion c₁ is created.
[0108] The method of synthesizing the case-history data is the same as that described above.
The items of data are arranged in one row in order of size and divided in half, with
one group having the smaller values and the other group the larger values. Data (mean
value, standard deviation) relating to the minimum value is created from the group
of smaller data values, and data relating to the maximum value is created from the
group of larger data values. Such data synthesis requires three or more items of case-history
data. If there are just two items of case-history data, the smaller is adopted as
the minimum value and the larger as the maximum value.
[0109] The items of case-history data relating to conclusion c₁ and factor f₁ shown in Fig.
6 are 30, 60, 20 and 50. When these are arranged in order of size, we have 20, 30,
50, 60. The smaller of these values, namely 20, 30, are adopted as minimum values
(which correspond to the minimum values in the knowledge of the experts), and the
larger, namely 50, 60, are adopted as maximum values (which correspond to the maximum
values in the knowledge of the experts).
[0110] The mean value m_min of the minimum values and the mean value m_max of the maximum
values are calculated as follows using these items of case-history data:

[0111] The standard deviation σ_min of the minimum values and the standard deviation σ_max
of the maximum values are calculated as follows:

[0112] Data expressing the relationship between conclusion c₁ and factor f₂ and between
other conclusions and factors is synthesized in the same way.
[0113] The data obtained by synthesizing the case-history data is as shown in Table 6 below.
Here the items of data for factor f₂ relating to conclusion c₁ are only three in number,
and therefore the two smaller items are treated as minimum values and the largest
item is treated as the maximum value. With regard to conclusion c₂, there is only
one item of case-history data concerning each of factors f₂, f₃, and therefore these
items of data are employed as the minimum and maximum values.

[0114] The above-described data synthesizing method is expressed generally as follows:
[0115] Let n represent the number of items of case-history data relating to the relationship
between a certain conclusion and a certain factor. These items of data are arranged in
ascending order as d₁, d₂, ..., dₙ, where d₁ is the item of data having the smallest
value and dₙ is the item of data having the largest value.
[0116] The mean value m_min and standard deviation σ_min of the minimum values are given
by the following equations:

where n₁ is a value obtained by rounding off n/2 to the nearest whole number.
[0117] The mean value m_max and standard deviation σ_max of the maximum values are given
by the following equations:

[0118] The synthesized data thus created is stored in the synthesized data memory unit 13A.
The above-described approximate reasoning is then performed based upon this synthesized
data. That is, the possibility of a conclusion, the static information quantities and
the clarities are computed based upon this synthesized data.
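By way of illustration only, the general synthesis of paragraphs [0115] through [0117]
might be sketched as follows, with n₁ taken as n/2 rounded to the nearest whole number
(halves rounded up) and the population form of the standard deviation assumed. The worked
values at the end use the case-history data 20, 30, 50, 60 for conclusion c₁ and factor
f₁ quoted above.

import math

def mean_std(values):
    m = sum(values) / len(values)
    return m, math.sqrt(sum((v - m) ** 2 for v in values) / len(values))

def synthesize_case_data(values):
    """Split sorted case-history data into a smaller (minimum) and larger (maximum) group
    and return (m_min, sigma_min), (m_max, sigma_max)."""
    d = sorted(values)
    n = len(d)
    if n == 1:
        return mean_std(d), mean_std(d)          # a single value serves as both min and max
    if n == 2:
        return mean_std(d[:1]), mean_std(d[1:])  # smaller -> minimum, larger -> maximum
    n1 = (n + 1) // 2                            # n/2 rounded to the nearest whole number
    return mean_std(d[:n1]), mean_std(d[n1:])

print(synthesize_case_data([30, 60, 20, 50]))
# -> ((25.0, 5.0), (55.0, 5.0)): m_min = 25, sigma_min = 5, m_max = 55, sigma_max = 5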
[0119] It goes without saying that the units 11 - 23, 12A, 13A and 31 can be realized by
a computer which includes a memory and a display device. For example, the knowledge
synthesizing/revising unit 12, the data synthesizing unit 12A and the various arithmetic
units 15, 16, 17, 19, 20, 22 can be ideally implemented by a CPU which operates in accordance
with a program.
[0120] As many apparently widely different embodiments of the present invention can be made
without departing from the spirit and scope thereof, it is to be understood that the
invention is not limited to the specific embodiments thereof except as defined in
the appended claims.