Fuzzy Logic And Uncertainty In AI
Written by Mike James   
Friday, 20 August 2021


Kleene's three-valued logic

For example, there is Kleene's three-valued logic, which uses True, False and Undecided.

True and false have their usual interpretation but now a fact can be undecided. The basic idea behind Kleene's logic is that any statement that can be proved from its components is assigned that truth value, even if it involves components that are undecided.

For example, in Kleene's logic the usual operations AND, OR and NOT become:

A AND B | T  F  U
   T    | T  F  U
   F    | F  F  F
   U    | U  F  U

A OR B  | T  F  U
   T    | T  T  T
   F    | T  F  U
   U    | T  U  U

  A  | NOT A
  T  |   F
  F  |   T
  U  |   U
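The truth tables above are easy to turn into code. Here is a minimal Python sketch of Kleene's three operators; the function names and the choice of the strings "T", "F" and "U" to stand for True, False and Undecided are mine, not part of any standard library.

```python
def k_and(a, b):
    # False dominates AND; Undecided survives only if nothing forces False
    if a == "F" or b == "F":
        return "F"
    if a == "U" or b == "U":
        return "U"
    return "T"

def k_or(a, b):
    # True dominates OR; Undecided survives only if nothing forces True
    if a == "T" or b == "T":
        return "T"
    if a == "U" or b == "U":
        return "U"
    return "F"

def k_not(a):
    # NOT swaps T and F and leaves U alone
    return {"T": "F", "F": "T", "U": "U"}[a]

# Reproduce the truth tables above
for a in "TFU":
    for b in "TFU":
        print(a, "AND", b, "=", k_and(a, b), "  ", a, "OR", b, "=", k_or(a, b))
```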

There are no real philosophical difficulties with Kleene's logic as long as we interpret U as unproven.

In other words, it's not that the fact is genuinely neither true nor false it's just that we don't know which it is at the moment.

There are versions of three-valued logic which postulate that some facts may truly be intermediate between true and false but these seem to be of little use in AI.

Once you have got used to the idea of a three-valued logic it doesn't seem so shocking if someone suggests a four- or five-valued logic. It all comes down to a reasonable interpretation of the new truth values.

However, you still may not be prepared for the idea of 'fuzzy logic', where there are infinitely many truth values!



Getting fuzzy

Fuzzy logic comes into being by way of the observation that we use terms such as 'small', 'large', 'hot', 'cold' etc. without being at all precise.

Traditional logic maintains that you should be able to say whether 'X is hot' is true or false. However, you might be prepared to say that '100C is hot' is true, but what about '40C is hot' or '39C is hot'? There is a sense in which the statement '39C is hot' is true, but not as true as '100C is hot'.

What we are working towards is the idea that the truth value of such statements really is somewhere between true and false. If you represent true by 1 and false by 0 then fuzzy logic differs from conventional logic by allowing truth values between 0 and 1.

For example, you might say that '35C is hot' is only 0.5 true.
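A statement like this is usually implemented as a membership function mapping a measurement to a truth value in [0,1]. The ramp endpoints below, 20C fully false and 50C fully true, are my own illustrative choice, picked so that 35C comes out 0.5 true as in the text:

```python
def hot(temp_c):
    # Truth value of 'temp_c is hot' as a linear ramp.
    # Endpoints 20C (false) and 50C (true) are illustrative assumptions.
    if temp_c <= 20.0:
        return 0.0
    if temp_c >= 50.0:
        return 1.0
    return (temp_c - 20.0) / 30.0

print(hot(35.0))   # 0.5
print(hot(100.0))  # 1.0
```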

This only leaves the question of what propositions such as A AND B mean in fuzzy logic.

This is easier to answer than you might think. If we use the definitions:

A AND B = MIN(A,B)

A OR B  = MAX(A,B)

NOT A = 1-A

then, not only do they seem to produce a reasonable theory, you return to traditional logic if you restrict the truth values to 0 and 1.

For example,

NOT 1 = 1-1 = 0

NOT 0 = 1-0 = 1

and so on.

In other words, fuzzy logic includes standard logic within it as a subset if you simply restrict the allowable values to 0 and 1. It is an interesting curiosity that AND and OR correspond to MIN and MAX.

Fuzzy logic can be used more or less as it stands within expert systems to deal with uncertainty.

For example, if you think that the statement 'The gas tank isn't empty' is 0.7 true and 'The battery is charged' is 0.5 true, then using the rule:

IF 'gas tank isn't empty' AND
    'battery is charged' THEN
      'fault lies in wiring/fuel lines'

you could conclude that 'fault lies in wiring/fuel lines' is MIN(0.7, 0.5) = 0.5 true.
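The fuzzy definitions above and the gas-tank rule fit in a few lines of Python; this is a minimal sketch, with names of my own choosing:

```python
# Fuzzy connectives as defined in the text
def f_and(a, b):
    return min(a, b)

def f_or(a, b):
    return max(a, b)

def f_not(a):
    return 1.0 - a

# Truth values from the example
gas_tank_not_empty = 0.7
battery_charged = 0.5

# The rule's conclusion is only as true as its weakest condition
fault_in_wiring = f_and(gas_tank_not_empty, battery_charged)
print(fault_in_wiring)  # 0.5
```

Note that with the values restricted to 0 and 1 these three functions reproduce the ordinary truth tables, which is the sense in which standard logic is a subset of fuzzy logic.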

In fact the theory of fuzzy logic goes a bit further than this and suggests a particular way in which the truth values should be set up so as to correspond to common terms such as true, very true, slightly true, not very true and so on.

In most of the practical applications of fuzzy logic, however, this part of the theory is more or less ignored. Indeed, you will often find any use of the MAX or MIN functions to combine estimates of uncertainty called, without much justification, fuzzy logic.

Overall, fuzzy logic is a heuristic that sometimes seems to work.


Probably so

The main trouble with fuzzy logic and similar theories is that they don't give you any sort of absolute way of interpreting levels of truth or belief.

For example, is my 0.7 certainty the same as your 0.7 certainty?

In situations such as this most people turn to the subject of probability for an answer.

The most favoured theory of probability (yes there is more than one!) relates all probabilities to physical events. If you say that the probability of a coin landing heads is 0.5 this means that roughly half of all the landings come up heads. You can check that the probability is 0.5 by doing an experiment and counting the proportion of heads that you get. In this sense probabilities are objective and measurable.

Probability in applied subjects always relates to repeatable events that could in principle be counted to provide an estimate of the probability in question.

Saying that the probability of a coin landing heads is 0.5 seems very close to saying that your belief or certainty that the coin will land heads is 0.5 but in fact there is a world of difference.

If you equate probabilities with beliefs then you run into difficulties very quickly.

For example, what does my estimate of 0.8 of the probability of there being life on other planets mean?

It certainly doesn't mean that I expect 80% of all planets to have life on them. If it means that the probability of finding life anywhere in the universe is 0.8, then in what sense can I repeat this event so that 80% of the time there is life and 20% of the time there isn't? You can go on to imagine parallel universes, in 80% of which life develops on other planets, but this is hardly a measurement that you could make in the same way as tossing a coin.

In short, the exact theory of probability doesn't really apply to estimates of belief or certainty, and once you realise this you might as well admit that fuzzy logic has just as much claim to be correct as probability theory.

This is the key problem with using Bayesian inference when the probabilities aren't physical but related to belief. There is nothing at all wrong with Bayes' theorem - it is pure and classical probability theory based on counting the relative frequencies of events. The problem is when it is applied to situations where the probabilities aren't physical probabilities but belief estimates. There is no particular reason for using probability to quantify your belief in something. 
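The point that Bayes' theorem itself is just classical frequency counting can be made concrete. In this sketch the counts are invented purely for illustration; computing P(fault | symptom) by direct counting and via Bayes' theorem gives the same answer:

```python
# Imagine 1000 machines: 100 have a fault, 900 don't.
# The symptom shows in 80 of the faulty and 90 of the healthy machines.
faulty, healthy = 100, 900
symptom_and_fault = 80
symptom_and_ok = 90

# P(fault | symptom) by direct counting among machines with the symptom
p_direct = symptom_and_fault / (symptom_and_fault + symptom_and_ok)

# The same quantity via Bayes' theorem: P(F|S) = P(S|F) * P(F) / P(S)
p_s_given_f = symptom_and_fault / faulty                      # 0.8
p_f = faulty / (faulty + healthy)                             # 0.1
p_s = (symptom_and_fault + symptom_and_ok) / (faulty + healthy)
p_bayes = p_s_given_f * p_f / p_s

print(p_direct, p_bayes)  # both roughly 0.47
```

As long as every probability is a relative frequency like this, nothing can go wrong; the trouble starts when the same formula is fed belief estimates that no counting experiment could ever produce.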

Most Bayesian statisticians tend to ignore this problem and just "shut up and compute". 

This doesn't mean to say that there aren't applications within AI where probability theory applies. Many an expert system contains rules where the conclusion doesn't always follow from the conditions.

For example, I may have noticed that a particular set of symptoms goes with a particular fault with a probability of 0.8, i.e. a real physical probability in that 80% of the time the symptoms and fault go together:

IF symptoms THEN 
      fault=X with a probability of .8

This is a reasonable use of probability because I can measure the number of times the symptoms are associated with the fault - this isn't a question of belief or even opinion. In cases such as these you should use the laws of probability to work out final probabilities.

In practice this usually turns out to be far too difficult.

For example, if you have a rule

IF A THEN B with a probability of .9

and you have concluded A, as the result of another rule, with a probability of 0.8, what probability do you assign to B?

It turns out that using strict probability theory it is very difficult to say what the probability of B is. The difficulty is practical rather than theoretical: you need to know lots of conditional or joint probabilities, and you usually can't gather enough data to work out the probability of B if A is also uncertain.

What most expert systems do in this case is to multiply the factors together, giving 0.72, thus abandoning any interpretation of the figures as probabilities. In the same way, probabilities in rules such as

IF A AND B THEN C with probability P

are combined by taking the minimum of the probabilities of A and B.
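These two ad hoc combination rules, multiply along a chain and take the minimum across AND-ed conditions, can be sketched in a few lines. The function names are my own, not from any particular expert-system shell:

```python
def chain(cond_certainty, rule_strength):
    # IF A THEN B with strength p, A held with certainty c -> B gets c * p
    return cond_certainty * rule_strength

def and_conditions(*certainties):
    # IF A AND B THEN C: the joint certainty of the conditions is the minimum
    return min(certainties)

# A concluded with certainty 0.8 by some other rule,
# fed through IF A THEN B with strength 0.9 -> the 0.72 from the text
b = chain(0.8, 0.9)

# IF A AND B THEN C with strength 0.9, conditions at 0.7 and 0.5
c = chain(and_conditions(0.7, 0.5), 0.9)

print(b, c)
```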

You might see the connection with fuzzy logic, but this has very little to do with probability theory.

The bottom line is that in principle you can use probabilities but in practice the amount of data needed to work out the exact correct results is far too great. As a result we often take ad-hoc approximations ranging from just multiplying probabilities to using fuzzy logic or something in between. There is usually little justification for any of this. 
