McCulloch-Pitts Neuron

What can the brain compute?

You can see that it would be possible to continue in this way to build more and more complicated neural circuits using cells. Shift registers are easy, and so are half and full adders – give them a try! But at this point you might well be wondering why we are bothering at all.
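If you do want to try the half adder, here is a minimal sketch in Python, assuming the classic McCulloch-Pitts firing rule – a cell fires when enough excitatory inputs are active and no inhibitory input is. The names mp_unit and half_adder are mine, invented for illustration:

def mp_unit(excitatory, inhibitory, threshold):
    # Classic McCulloch-Pitts cell: fires (returns 1) if the number of
    # active excitatory inputs reaches the threshold and no inhibitory
    # input is active (absolute inhibition); otherwise returns 0.
    if any(inhibitory):
        return 0
    return 1 if sum(excitatory) >= threshold else 0

def half_adder(a, b):
    # Evaluated feed-forward here, ignoring the one-tick delay that a
    # real synchronous McCulloch-Pitts net would introduce.
    carry = mp_unit([a, b], [], threshold=2)       # AND cell
    total = mp_unit([a, b], [carry], threshold=1)  # OR cell inhibited by the AND cell = XOR
    return total, carry

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", half_adder(a, b))

Notice how the sum output is built as an OR cell gated by an inhibitory connection from the carry (AND) cell – inhibition is what lets two layers of MP cells compute XOR.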

The answer is that back in the early days of AI the McCulloch-Pitts neuron, and its associated mathematics, gave us clear proof that you could do computations with elements that looked like biological neurons.

To be more precise, it is relatively easy to show how to construct a network that will recognise or “accept” any sequence described by a regular expression. A regular expression is something that can be built up using simple rules. In terms of production rules, any regular expression can be described by a grammar having rules only of the type:

<non-terminal1> -> symbol <non-terminal2>

or

<non-terminal1> -> symbol

That is, a rule is only “triggered” at the right-hand end of the sequence, where the single non-terminal sits, and each step adds one symbol to its left. For example, the rules:

1. <R1> -> A<R2>
2. <R2> -> B<R1>
3. <> -> <R1>

are regular. Production rules are used by picking a rule with a left-hand side that matches part of the sequence and then using the right-hand side to re-write that part of the sequence. (The symbol <> means the null sequence, i.e. the sequence with no symbols).

The last rule starts things off by allowing us to convert the null sequence <> into <R1>. Rule 1 lets us change <R1> into A<R2> and then rule 2 lets us change this into AB<R1> and so on. You can see that all sequences of the type ABABAB and so on are regular according to this grammar. (To let a derivation actually finish you would also need a terminating rule of the second type, such as <R2> -> B, which drops the non-terminal.)
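To make the rewriting process concrete, here is a small Python sketch of the derivation. The dictionary encoding of the rules and the name derive are mine; the terminating rule is left out, so it simply generates the symbol prefix at each step:

# Each rule <R> -> symbol <R'> is stored as rules[R] = (symbol, R')
rules = {"R1": ("A", "R2"), "R2": ("B", "R1")}

def derive(steps):
    # Start from the null sequence and apply the start rule <> -> <R1>,
    # then rewrite the trailing non-terminal 'steps' times.
    sequence, nonterminal = "", "R1"
    for _ in range(steps):
        symbol, nonterminal = rules[nonterminal]
        sequence += symbol  # the new symbol lands just left of the non-terminal
    return sequence

print([derive(n) for n in range(6)])
# ['', 'A', 'AB', 'ABA', 'ABAB', 'ABABA']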

Regular sequences are important because they are the most basic patterns that a simple computer – a finite state machine – can recognise. What is important about McCulloch-Pitts networks is that it is easy to show that you can build a network that will generate or recognise any regular sequence, and so such networks are equal in power to a finite state machine.
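For example, here is a minimal finite state machine, sketched in Python, that accepts exactly the alternating sequences A, AB, ABA… generated by the grammar above. The two state names are my own invention – a McCulloch-Pitts network doing the same job would use the firing pattern of its cells to hold the state:

def accepts(sequence):
    # Two states: waiting for an A, or waiting for a B.
    state = "expect_A"
    for symbol in sequence:
        if state == "expect_A" and symbol == "A":
            state = "expect_B"
        elif state == "expect_B" and symbol == "B":
            state = "expect_A"
        else:
            return False  # any out-of-place symbol rejects the sequence
    return True

print(accepts("ABAB"), accepts("ABBA"))  # True False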

Why is this important?

Well, if you agree that McCulloch-Pitts neurons capture the essence of the way biological neurons work, then you also have to conclude that biological networks are just finite state machines and, as such, can only recognise or generate regular sequences.

In their original work McCulloch and Pitts extended this observation to deduce a great deal about human brain function. Most of this seems a bit far-fetched from today’s standpoint, but the basic conclusion – that the brain is probably nothing more than a simple computer, i.e. a finite state machine – still seems reasonable.

If you know a little about the theory of computation you might well not be happy with this “bottom line”, because a finite state machine isn’t even as powerful as a Turing machine. That is, there are lots of things that a Turing machine can compute which, in theory, we, as finite state machines, can’t. In fact the Chomsky hierarchy has three types of grammar above the regular grammars – context-free, context-sensitive and unrestricted – and hence three types of sequence that finite state machines, and so presumably us, cannot recognise. This sort of argument is often used to demonstrate that there has to be more to a human brain than mere logic – that it has a non-physical “mind” component, or that some strange quantum phenomena are required to explain how we think. All nonsense of course!

You shouldn’t get too worried about these conclusions because, when you look at them in more detail, some interesting facts emerge. For example, all finite sequences are regular, so we are really only worrying about philosophical difficulties that arise because we are willing to allow infinite sequences of symbols. While this seems reasonable when the infinite sequence is just ABAB… it is less reasonable when there is no finite repeating pattern that generates the chain. If you want to be philosophical about such things, perhaps it would be better to distinguish between sequences that have no fixed length limit – i.e. unbounded but finite sequences – and truly infinite sequences.

Surprisingly, even in this case things work out in more or less the same way, with finite state machines, and hence human brains, lagging behind other types of computer. The reason for this is simply that as soon as you consider a sequence longer than the number of elements in the brain it might as well be infinite! As long as we restrict our attention to finite sequences with some upper limit on length, and assume that the number of computing elements available is much greater than this, then all computers are equal and the human brain is as good as anything!

McCulloch-Pitts neural networks are not well known or widely studied these days because they grew into another sort of neural net – one that can be trained to generate any logic function you care to name.
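As a rough sketch of what that later sort of neuron looks like, here is the classic perceptron learning rule teaching a single threshold unit the AND function in Python. The function name and parameter values are mine, and bear in mind that a single unit can only learn linearly separable functions – XOR needs more than one layer:

def train_perceptron(samples, epochs=20, rate=0.1):
    # Start with zero weights and bias; nudge them towards each target
    # whenever the unit gets an example wrong (the perceptron rule).
    w0, w1, bias = 0.0, 0.0, 0.0
    for _ in range(epochs):
        for (x0, x1), target in samples:
            out = 1 if w0 * x0 + w1 * x1 + bias > 0 else 0
            err = target - out
            w0 += rate * err * x0
            w1 += rate * err * x1
            bias += rate * err
    return w0, w1, bias

AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w0, w1, b = train_perceptron(AND)
print([(x, 1 if w0 * x[0] + w1 * x[1] + b > 0 else 0) for x, _ in AND])

The crucial difference from a McCulloch-Pitts cell is that the weights are no longer fixed by hand – they are adjusted from examples.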
