Different Logic and Prolog
Written by Mike James
Thursday, 13 March 2025
Probably so

The main trouble with fuzzy logic and similar theories is that they don't give you any sort of absolute way of interpreting levels of truth or belief. For example, is my 0.7 certainty the same as your 0.7 certainty? In situations such as this most people turn to the subject of probability for an answer. The most favoured theory of probability (yes, there is more than one!) relates all probabilities to physical events. If you say that the probability of a coin landing heads is 0.5, this means that roughly half of all the landings come up heads. You can check that the probability is 0.5 by doing an experiment and counting the proportion of heads that you get. In this sense probabilities are objective and measurable. Probability in applied subjects always relates to repeatable events that could, in principle, be counted to provide an estimate of the probability in question.

Saying that the probability of a coin landing heads is 0.5 seems very close to saying that your belief or certainty that the coin will land heads is 0.5, but in fact there is a world of difference. If you equate probabilities with beliefs then you run into difficulties very quickly. For example, what does my estimate of 0.8 for the probability of there being life on other planets mean? It certainly doesn't mean that I expect 80% of all planets to have life on them. If it means that the probability of finding life anywhere in the universe is 0.8, then in what sense can I repeat this event so that 80% of the time there is life and 20% of the time there isn't? You can go on to imagine parallel universes, in 80% of which life develops on other planets, but this is hardly a measurement that you could make in the same way as tossing a coin. In short, the exact theory of probability doesn't really apply to estimates of beliefs or certainties, and once you realise this fact you might as well admit that fuzzy logic has just as much claim to be correct as probability theory.
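The "check it by counting" idea - probability as a measurable relative frequency - can be sketched as a quick simulation. This is a minimal illustration; the function name and seed are our own choices, not anything from the article:

```python
import random

def estimate_heads_probability(n_tosses: int, seed: int = 0) -> float:
    """Estimate P(heads) as the relative frequency over n_tosses fair tosses."""
    rng = random.Random(seed)
    heads = sum(rng.random() < 0.5 for _ in range(n_tosses))
    return heads / n_tosses

# With enough tosses the estimate settles close to the physical 0.5,
# which is exactly the sense in which this probability is measurable.
print(estimate_heads_probability(100_000))
```

No such experiment exists for "the probability of life on other planets", which is the article's point: there is no repeatable event to count.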
This is the key problem with using Bayesian inference when the probabilities aren't physical but relate to belief. There is nothing at all wrong with Bayes' theorem - it is pure, classical probability theory based on counting the relative frequencies of events. The problem comes when it is applied to situations where the probabilities aren't physical probabilities but belief estimates. There is no particular reason for using probability to quantify your belief in something. Most Bayesian statisticians tend to ignore this problem and just "shut up and compute".

This doesn't mean that there aren't applications within AI where probability theory applies. Many an expert system contains rules where the conclusion doesn't always follow from the conditions. For example, I may have noticed that a particular set of symptoms goes with a particular fault with a probability of 0.8, i.e. a real physical probability in that 80% of the time the symptoms and the fault go together:

IF symptoms THEN fault (with probability 0.8)

This is a reasonable use of probability because I can measure the number of times the symptoms are associated with the fault - this isn't a question of belief or even opinion. In cases such as these you should use the laws of probability to work out final probabilities. In practice this usually turns out to be far too difficult. For example, if you have a rule IF A THEN B with a probability of 0.9, and you have concluded A, as the result of another rule, with a probability of 0.8, what probability do you assign to B? It turns out that using strict probability theory it is very difficult to say what the probability of B is. The difficulty is caused by needing to know lots of conditional or joint probabilities, rather than anything theoretical. It is simply that you usually can't gather enough data to work out the probability of B if A is also uncertain.
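The missing-data problem can be made concrete with the law of total probability. Reading the rule's 0.9 as P(B|A), the exact value of P(B) also needs P(B|not A), which the rule base typically doesn't supply - so strict probability theory leaves P(B) pinned down only to a range. A minimal sketch (the function name is ours):

```python
def prob_b(p_b_given_a: float, p_a: float, p_b_given_not_a: float) -> float:
    """Law of total probability: P(B) = P(B|A)P(A) + P(B|~A)P(~A)."""
    return p_b_given_a * p_a + p_b_given_not_a * (1 - p_a)

# With P(B|A) = 0.9 and P(A) = 0.8, the answer depends entirely on the
# unknown P(B|~A): P(B) can lie anywhere from 0.72 up to 0.92.
low = prob_b(0.9, 0.8, 0.0)
high = prob_b(0.9, 0.8, 1.0)
print(low, high)
```

Without the data to estimate P(B|not A), no single probability for B is justified - which is why expert systems resort to the approximations described next.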
What most expert systems do in this case is to multiply the two factors together, giving 0.72, thereby abandoning any interpretation of the figures as probabilities. In the same way, probabilities in rules such as IF A AND B THEN C with probability P are combined by taking the minimum of the probabilities of A and B. You might see the connection with fuzzy logic, but this has very little to do with probability theory. The bottom line is that in principle you can use probabilities, but in practice the amount of data needed to work out the exact correct results is far too great. As a result we often fall back on ad-hoc approximations, ranging from just multiplying probabilities to using fuzzy logic or something in between. There is usually little justification for any of this.
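The ad-hoc scheme just described - multiply when chaining rules, take the minimum for AND - can be sketched in a few lines. The function names are ours, and this is the approximation the article criticises, not proper probability theory:

```python
def chain(rule_cf: float, antecedent_cf: float) -> float:
    """Ad-hoc chaining: scale the rule's certainty by the antecedent's certainty."""
    return rule_cf * antecedent_cf

def conjunction(*cfs: float) -> float:
    """Ad-hoc AND: take the minimum of the conjuncts, as in fuzzy logic."""
    return min(cfs)

# IF A THEN B (0.9), with A concluded at 0.8: certainty of B is 0.9 * 0.8 = 0.72
b = chain(0.9, 0.8)

# IF A AND B THEN C (0.7), with A at 0.8 and B at 0.6: 0.7 * min(0.8, 0.6) = 0.42
c = chain(0.7, conjunction(0.8, 0.6))
print(b, c)
```

The 0.72 here matches the figure in the text, but it is only the lower end of the range that strict probability theory would allow - the combination rules are convenient, not correct.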
Last Updated ( Tuesday, 18 March 2025 )