If your computer were intelligent then it would be easy for it to join in a sensible argument.

The sort of thing I have in mind is:

Mother: If it is sunny we will have a picnic.

Child: (Looks out of window) It is sunny today.

Mother: Get the picnic basket.

Of course this sort of argument is so slight that it almost doesn't deserve the dignity of the title.

Even so imagine a world where the logical connection between such simple statements isn't obvious - well that's how it is with computers.

If you place the statements 'if it is sunny we will have a picnic' and 'it is sunny today' in a database then they will just sit there and nothing will happen. No conclusions will be drawn from the facts stored in a database because the machine doesn't have the slightest idea how to manipulate facts to produce new facts.

Trying to implement human reasoning on a computer would seem to be a good place to start the task of creating an intelligent computer because reasoning is so easy - isn't it?

From the earliest times people, philosophers mainly, have been trying to work out the rules that make an argument valid.

The reason for this interest was that people really didn't know what constituted a proof that something was true.

For example, Pythagoras suggested that the sum of the squares on the two short sides of a right-angled triangle always equalled the square on the longer side.

He had tried it out on a few triangles that happened to be lying around and it was true, as far as the accuracy of his measuring instruments would go - but is it true for all triangles?

What more has to be done to establish that this fact is true for all triangles?

In this case we are looking at the problem of mathematical proof but the same difficulties exist in everyday argument. If two people have access to the same facts and yet come to different conclusions how do you decide which one is right?

In an attempt to solve all of these difficulties the Greeks created the subject of logic.

Logic is the study of how the known truth of one set of statements can be used to prove the truth of other statements.

Notice that this view of the problem was something of a breakthrough in its own right.

Previously people had been worried about establishing absolute truth but logic is about deriving the truth of one set of statements - the theorems - from another set of statements - the premises - that are assumed to be true.

For example, if you know that two statements A and B are true then you can claim that the combined statement A AND B is true.

If statement A is "The weather is good" and B is "We are at the seaside" then A AND B is "The weather is good AND we are at the seaside". Clearly in this case if A is true and B is true then "The weather is good AND we are at the seaside" is also true.

You may think that this is trivial but in this simple idea lies all of traditional computing and most of the future too. One of the problems in trying to understand what logic is all about is that it tends to be littered with old fashioned jargon based on the original Greek and Latin terms - like premises, theorems etc. Don't be put off by the jargon, it is no more difficult than any other jargon.

The Greeks wrote down many different standard forms of acceptable argument.

For example, the little dialogue about picnics is a case of the modus ponens law which is in general

IF A implies B

AND A is true

THEN DEDUCE B is true.

The only other basic law of logical argument that they proposed was the chain law which is

IF A implies B

AND B implies C

THEN DEDUCE A implies C

From these two forms of argument the Greeks could deduce everything else.

Not much of importance happened in the study of logic until George Boole noticed that it was really all about a sort of arithmetic of truth.

If you take the relationship between the truth of A and the truth of B and the resulting truth of A AND B then you can draw up a table of the possibilities where T and F stand for True and False respectively.

A | B | A AND B
F | F | F
F | T | F
T | F | F
T | T | T

This looks very much like a sort of multiplication table showing how two truth values are combined. Boole extended this idea to include all of the other possible ways of combining truth values.

The best known of these are AND, OR and NOT.

A compound statement A OR B is true if either or both A and B are true. NOT A simply 'flips' the truth value of A, if it is true then NOT A is false and vice versa.
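Since these connectives are completely defined by their truth tables, a computer can tabulate them directly. Here is a minimal Python sketch (my own illustration, not part of the article) that prints the three tables side by side:

```python
from itertools import product

# Tabulate AND, OR and NOT for every combination of truth values.
print("A     B     A AND B  A OR B   NOT A")
for A, B in product([False, True], repeat=2):
    print(f"{str(A):<5} {str(B):<5} {str(A and B):<8} {str(A or B):<8} {not A}")
```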

Boole very quickly arrived at an algebra of truth values that we now call Propositional Calculus or Boolean logic in honour of the man himself.

The rules of Boolean logic (some obvious, some less so) are:

A AND B = B AND A

A OR B = B OR A

A AND (B AND C) = (A AND B) AND C

A OR (B OR C) = (A OR B) OR C

A AND (B OR C) = (A AND B) OR (A AND C)

A OR (B AND C) = (A OR B) AND (A OR C)

NOT(A AND B) = (NOT A) OR (NOT B)

NOT(A OR B) = (NOT A) AND (NOT B)
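Because each variable can only take two values, every one of these laws can be verified by brute force. A short Python check, written for illustration rather than taken from the article:

```python
from itertools import product

# Check each law for all eight assignments of truth values to A, B and C.
for A, B, C in product([False, True], repeat=3):
    assert (A and B) == (B and A)                        # commutativity of AND
    assert (A or B) == (B or A)                          # commutativity of OR
    assert (A and (B or C)) == ((A and B) or (A and C))  # distributivity
    assert (A or (B and C)) == ((A or B) and (A or C))   # distributivity
    assert (not (A and B)) == ((not A) or (not B))       # De Morgan
    assert (not (A or B)) == ((not A) and (not B))       # De Morgan
print("all laws verified")
```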

You might have come across Boolean logic before because it is the basis of a lot of computer science. The circuits that go to make up a computer work in terms of two voltages but these are combined in exactly the same way as truth values in Boolean logic. If you are a programmer then you will certainly have used AND, OR and NOT within IF statements to make complex selections and choices.

To give you an example of the sort of use that Boolean logic is conventionally put to, what is the truth value of

(NOT A AND B) OR (A OR NOT B)

The answer is easy to find using the laws of Boolean logic -

(NOT A AND B) OR (A OR NOT B)

Using De Morgan's law on the second bracket gives:

(NOT A AND B) OR NOT(NOT A AND B)

which is always TRUE because the brackets are identical and the whole expression is like C OR NOT C which is always true (think about how it could be false!).
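You can confirm the algebra by exhaustive evaluation; a quick Python check (an illustration, not from the article):

```python
from itertools import product

# Evaluate (NOT A AND B) OR (A OR NOT B) for every assignment;
# a tautology comes out True in every row of its truth table.
values = [((not A) and B) or (A or (not B))
          for A, B in product([False, True], repeat=2)]
print(values)  # → [True, True, True, True]
```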


The trouble with meeting Boolean logic in connection with computers is that there is a tendency to overlook its role in proof or logical reasoning.

Boolean logic provides the basic framework for examining how truth values combine in compound statements but it doesn't provide a strategy for proving one thing from another.

For example, the modus ponens law includes the word 'implies' and before we can consider proof in Boolean algebra we have to find a translation of implies in terms of AND, OR and NOT. It is customary to use the notation A->B for A implies B and we can draw up a truth table for the compound statement A->B

A | B | A -> B
F | F | T
F | T | T
T | F | F
T | T | T

Notice that this table might not be quite what you are expecting of 'implies'.

The key to understanding it is to notice that A->B only says that B is true if A is true. If A is false then A->B says nothing about the truth of B. By studying the truth table you might discover that

A -> B = NOT A OR B

This simple relationship is the key to most of the recent advances in AI and the basis of the language Prolog. What it allows us to do is automate modus ponens and the chain rule and so add logical reasoning to the dimmest of computers.
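The equivalence is easy to confirm mechanically. In this Python sketch (my own, not the article's) the truth table for -> is written out explicitly and compared with NOT A OR B:

```python
# The truth table for A -> B, row by row.
implies_table = {
    (False, False): True,
    (False, True):  True,
    (True,  False): False,
    (True,  True):  True,
}

# A -> B agrees with NOT A OR B in every row.
for (A, B), value in implies_table.items():
    assert value == ((not A) or B)
print("A -> B  =  NOT A OR B")
```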

Before looking at how we can automate logical reasoning it is worth proving something using traditional methods.

For example, if you have the premises

1. (A AND B) -> ((C AND D) -> (E AND F))

2. (E AND F) -> (NOT F OR G)

3. A AND B

then to prove

(C AND D) -> NOT F OR G

you would have to use modus ponens on 1 and 3 to give

4. (C AND D) -> (E AND F)

using the chain rule on 2 and 4 gives

5. (C AND D) -> (NOT F OR G)

which completes the proof.

This seems easy enough but how can a computer be made to produce the same proof?

How was it determined that the modus ponens rule should be used and then the chain rule?

Like nearly all proofs this one covers up the trial and error and sometimes slightly inspired route that the human took to arrive at the answer.

This is not a method that we can program on a computer.

The first observation that helps to automate logical reasoning is that both modus ponens and the chain rule can be written in a single and very regular form.

If each implication in the chain rule:

(A->B) AND (B->C) DEDUCE (A->C)

is written in the form NOT A OR B we get:

FROM NOT A OR B

AND NOT B OR C

DEDUCE NOT A OR C

and this can be thought of as 'cancelling' the terms B and NOT B in the first two expressions to produce the third.

This includes modus ponens because if we set A to the trivial proposition TRUE we get:

FROM NOT TRUE OR B = B

AND NOT B OR C = B->C

DEDUCE NOT TRUE OR C = C

What all this tells us is that as long as all the statements in the premises are similar to:

A OR NOT B OR C OR NOT D

i.e. statements that use only ORs and NOTs, then there is a very simple and mechanical method of looking at a set of premises and deriving new truths.

All you have to do is:

- Scan the premises for two compound statements, one containing X and the other containing NOT X
- Join the two statements together, leaving out any mention of X or NOT X

This is called inference by resolution. For example, if the premises are:

1. A OR NOT B

2. C OR NOT A

3. A OR NOT C OR D

then we can easily make new true statements by combining or resolving 1 with 2, so eliminating A and giving:

4. C OR NOT B

Then 4 can be resolved with 3, eliminating C and giving:

5. A OR NOT B OR D

and so on.
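The scan-and-combine step is simple enough to code directly. In this sketch (my own illustration) a clause is a frozenset of literal strings, with negation marked by a leading '~':

```python
def resolve(c1, c2):
    """Resolve two clauses on the first complementary pair of literals
    found; returns the new clause, or None if no pair exists."""
    for lit in c1:
        negated = lit[1:] if lit.startswith("~") else "~" + lit
        if negated in c2:
            return frozenset((c1 - {lit}) | (c2 - {negated}))
    return None

clause1 = frozenset({"A", "~B"})   # A OR NOT B
clause2 = frozenset({"C", "~A"})   # C OR NOT A
print(sorted(resolve(clause1, clause2)))  # → ['C', '~B'], i.e. C OR NOT B
```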

You could easily program this search and combine method and as long as the original premises were true you would carry on generating new truths to add to the set of known truths. Eventually it would add to the original premises every single statement that could be proved using them as the starting point. If you want to prove something simply set the computer running until the statement that you want to prove turns up - well not quite because there are two problems.

The first is that not all Boolean statements are made up of nothing but ORs and NOTs, and the second is that you could be searching a long time to find the statement that you are looking for.

Both of these problems have very easy solutions.


The first difficulty of using resolution is solved almost immediately by a theorem of logic which says that any statement in Boolean logic can be reduced to a form like:

(A OR B) AND (NOT C OR D)

AND (E OR NOT F OR G)

This is called conjunctive normal form and there is a simple algorithm that will convert any statement to it. If you have a premise like:

(A OR B) AND (NOT C OR D) AND

(E OR NOT F OR G)

then for this to be true as a whole each of the sub-statements in brackets also has to be true on its own (because they are connected by ANDs).

As this is true you can split the premise up and write:

1. (A OR B)

2. (NOT C OR D)

3. (E OR NOT F OR G)

and these are clearly in the form that we need to be able to apply resolution.

A list of premises that consist of individual statements each only involving OR and NOT is often referred to as being in clause form.

We can speed up the entire process of searching for a solution simply by adopting a slightly different goal for our proof.

The method that results is called reductio ad absurdum.

If the statement that you are trying to prove, P say, is true then adding its negation, that is NOT P, to the premises will eventually result in the resolution method producing a null statement.

The reason is simply that if P is true then it will eventually be produced by resolving statements together. This results in both P and NOT P being available for resolution so producing the null statement.

Now at last we have a workable and simple method that allows a computer to prove things using logic. All the machine has to do is:

- Convert all of the statements that are known to be true, i.e. the premises, into clause form.
- Negate the statement P that you want to prove (i.e. write NOT in front of it), convert it into clause form and add the result to the premises.
- Use resolution to derive new truths until the null clause is produced so proving that P was indeed true.

Of course it may be that P was false in which case the program as described would loop forever.

Fortunately there are ways of modifying the program so that it knows that it has tried all possibilities and stops.

Notice that in either case you can only prove that P is true or false relative to the premises that you started with.

More importantly if the program says that P is false all this really tells you is that it cannot be proved using the information given. It may still be true but you need more starting information to prove that it is so.
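The three-step recipe can be sketched as a tiny refutation prover. This example (my own, reusing the picnic argument from the start of the article) represents clauses as frozensets of literal strings with '~' for NOT:

```python
def resolve(c1, c2):
    # Resolve on the first complementary pair of literals, if any.
    for lit in c1:
        negated = lit[1:] if lit.startswith("~") else "~" + lit
        if negated in c2:
            return frozenset((c1 - {lit}) | (c2 - {negated}))
    return None

def refute(clauses):
    """Resolve until the null clause appears (proof found) or no new
    clauses can be generated (not provable from these premises)."""
    clauses = set(clauses)
    while True:
        new = set()
        for c1 in clauses:
            for c2 in clauses:
                resolvent = resolve(c1, c2)
                if resolvent is not None:
                    if not resolvent:      # the null clause: contradiction
                        return True
                    new.add(resolvent)
        if new <= clauses:                 # nothing new: stop searching
            return False
        clauses |= new

# Premises: 'sunny -> picnic' in clause form, plus the fact 'sunny'.
premises = [frozenset({"~sunny", "picnic"}), frozenset({"sunny"})]
# To prove 'picnic', add its negation and look for the null clause.
print(refute(premises + [frozenset({"~picnic"})]))  # → True
```

Because the set of possible clauses over a finite set of literals is finite, the loop always terminates, which is one way of "knowing that it has tried all possibilities".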

Armed with a program that proves theorems by resolution, an easy one to write, I can at last put the entire mathematical profession out of a job!

All I have to do is translate the basic theorems of mathematics into clause form, enter a likely candidate for a new theorem and wait for my program to win me the Fields medal (the mathematician's equivalent of the Nobel prize).

The flaw in this plan is that you cannot translate mathematics into Boolean logic - it is far too restrictive. It does capture some of the elements of human reasoning but it misses more than it hits. For example, you can make a statement such as:

John is a father -> John is male

but how can you write something that captures the fact that anyone who is a father is also male?

The answer is to be found in first order predicate calculus which does have enough power to deal with mathematics.

This is an extension of simple propositional calculus to include variables and logical functions i.e. functions that are either true or false.

In the jargon predicates are logical functions hence the term 'predicate calculus'. To represent the idea 'fathers are male' as a fact in predicate calculus you would write something like:

X is a father -> X is male

In this case the statement is true for all possible values of X. In the jargon X is 'universally quantified'. This is often indicated by writing an upside-down A in front of the statement (due to typographical difficulties the A will be used right side up in this article).

For example,

AX X is a father -> X is male

would be read "For all X, X is a father implies X is male".

The only alternative to the universal quantifier A is E, the existential quantifier (I warned you earlier that the jargon was fun!). This is used in the same way as A but it is read "There exists". (Once again the usual convention is to write E mirror-reversed but again due to typography we will use a standard E.)

For example:

AX EY X is a person -> Y is father of X

would be read "For all X there exists a Y such that X is a person implies that Y is the father of X".

Once you get the hang of A and E it is easy to write all sorts of complicated things.

However in practice the quantifiers prove to be a problem so these days a different method based on functions is used to indicate that something applies only to a specific X, Y or Z.

For example, you can just as well write:

AX X is a person -> F(X) is father of X

where F(X) is a function that returns the name of the father of X.

That is F(X) is the Y that is mentioned in the first version of the statement. It turns out that it is possible to re-write any statement so that an E is unnecessary simply by introducing functions that provide the particular object needed.

This process is referred to as Skolemisation and the functions are called Skolem functions but it seems almost commonsense in spite of the fancy titles.

As this is possible we might as well adopt the convention that all variables are universally quantified as a default because any that aren't would occur as functions.


Given predicate calculus you really can oust the mathematicians and install a computer instead. Apart that is from the small problem that our theorem proving by resolution has no way of dealing with variables and functions in statements.

Fortunately there is a simple generalisation of the resolution method that does apply to predicate calculus, which was invented by Alan Robinson in the 1960s and which ultimately gave rise to the language Prolog.

The changes are:

- all statements have to be converted to 'Horn clauses', which are clauses with at most one un-negated term.

- variables in clauses have to be assigned values in an attempt to find pairs that can be resolved together. This assigning of values is called the Unification algorithm.

Although unification sounds complicated it is just a matter of trying to make two clauses look similar by setting variables to the same values.

For example, the following statements:

1. NOT(X is a person) OR F(X) is father of X

(i.e. X is a person -> F(X) is father of X)

2. NOT(Y is father of X) OR X is child of Y

(i.e. Y is father of X -> X is child of Y)

could be unified by setting Y to F(X) and then resolving the two clauses to give:

3. NOT(X is a person) OR X is child of F(X)

(i.e. X is a person -> X is child of F(X))

which is of course a reasonable deduction.
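A minimal unification routine is not hard to sketch. In this illustration (my own, not the article's) variables are capitalised strings, as in Prolog, and compound terms like F(X) are tuples:

```python
def is_var(term):
    # By convention (as in Prolog) variables start with a capital letter.
    return isinstance(term, str) and term[:1].isupper()

def walk(term, subst):
    # Follow any chain of variable bindings to its end.
    while is_var(term) and term in subst:
        term = subst[term]
    return term

def unify(t1, t2, subst=None):
    """Try to make two terms identical by binding variables;
    returns a substitution dict, or None if they cannot match."""
    subst = dict(subst or {})
    t1, t2 = walk(t1, subst), walk(t2, subst)
    if t1 == t2:
        return subst
    if is_var(t1):
        return {**subst, t1: t2}
    if is_var(t2):
        return {**subst, t2: t1}
    if isinstance(t1, tuple) and isinstance(t2, tuple) and len(t1) == len(t2):
        for a, b in zip(t1, t2):
            subst = unify(a, b, subst)
            if subst is None:
                return None
        return subst
    return None

# Clause 2's 'Y is father of X' unifies with clause 1's 'F(X) is father of X'
# by binding Y to the term F(X).
print(unify(("father_of", "Y", "X"), ("father_of", ("F", "X"), "X")))
```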

We can still use the trick of adding the negation of the clause that we are trying to prove to the premises and trying to derive the null clause and so proof in predicate calculus can also be automated.

The link between all of this and Prolog is that the language is based on predicate calculus and it uses unification and resolution to prove whatever goals you set it.

The only real difference between what we have been looking at so far and Prolog is a matter of notation. Prolog writes B :- A for A -> B, uses a comma for AND and a semi-colon for OR. The clauses also have to be written in the form of predicates or logical functions.

For example, the clause:

Y is father of X -> X is child of Y

would be written in Prolog as:

child_of(Y,X) :- father_of(X,Y)

where father_of(X,Y) is a function that is either true or false depending on the values of X and Y. Using predicates also avoids having to use Skolem functions. For example, the idea contained in:

X is a person -> F(X) is father of X

would be translated into Prolog as a listing of values of X and Y for which the predicate father_of(X,Y) was true:

father_of(tom,dick)

means Dick is Tom's father

father_of(dick,harry)

means Harry is Dick's father

and so on. You could then add a clause like

grand_father(X,Y) :- father_of(X,Z), father_of(Z,Y)

which says that the grandfather of X is Y if the father of X is Z and the father of Z is Y.

Prolog will automatically scan the list of father_of clauses to find a suitable value of Z for any values that are given to X and Y and if none can be found then the predicate grand_father(X,Y) isn't true for the current values of X and Y.
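Prolog's search for a suitable Z amounts to scanning the stored facts. A rough Python equivalent of that search (using the article's example names, with the rest invented for illustration):

```python
# father_of facts stored as (child, father) pairs, as in the listing above.
father_of = [("tom", "dick"), ("dick", "harry")]

def grand_father(x, y):
    # True if some Z is the father of X and Y is the father of Z.
    return any((x, z) in father_of and (z, y) in father_of
               for _, z in father_of)

print(grand_father("tom", "harry"))  # → True: Harry is Tom's grandfather
```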

If by this point you are feeling a little boggled by predicates, unification, resolution and so on, then it is worth saying that many users of Prolog never manage to understand its relationship to logic.

This is a shame because after all Prolog does stand for PROgramming in LOGic. Prolog is just one example of a theorem prover. There are others and some of them are better at the task. However Prolog is a good compromise between power and efficiency, but don't be surprised if the language develops to include more powerful features of logic.

Currently theorem provers have successfully proved theorems in mathematics that were previously unsolved problems. None of these has been earth-shattering and so far no computer has been awarded the Fields medal!

The range of problems that can be tackled using predicate calculus is amazing but it still doesn't really capture the nature of human thought. You can look at this either as a reflection of the fact that human thought is sloppier than logic or that it is more sophisticated than logic. I suppose it all depends on whether you think computers are better than humans.
