The Hopfield Network - A Simple Python Example
Written by Mike James   
Monday, 14 October 2024

The recent awarding of the Nobel Prize in Physics to John Hopfield has generated some interest in Hopfield networks. The good news is that it is remarkably easy to understand and implement such a network.

Back in the late 1970s, AI and neural networks in particular were out of fashion. Some people, notably David Rumelhart and Geoffrey Hinton at the University of Toronto and John Hopfield at Princeton University, kept things going, but the big problem was that they simply didn't have the computational power to implement big, deep networks, and small, shallow networks just don't perform well enough to be taken seriously.

Surprisingly, the biggest breakthrough, and cause of the network revival, came in 1982 when Hopfield suggested an alternative, simple type of network. In Hopfield's network all the neurons are connected together and there are no direct inputs or outputs to the outside world.

If there are no external inputs or outputs what good is such a network?

The answer is that the input is taken to be the initial state of the network and the output is taken to be the final state of the network after it has settled down. In other words, you use a Hopfield network by setting it to an initial state, the input, and then letting it evolve towards a final state, which is the output.
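
To make this concrete, here is a minimal sketch in Python of how that evolution might be coded, assuming a symmetric weight matrix W with a zero diagonal and neuron states of +1 and -1. The names and the fixed number of update steps are illustrative, not part of the program developed later in the article - in practice you would stop once nothing changes:

import numpy as np

def evolve(W, state, steps=500, rng=None):
    """Asynchronously update randomly chosen neurons for a fixed number of steps."""
    rng = np.random.default_rng() if rng is None else rng
    state = state.copy()
    for _ in range(steps):
        i = rng.integers(len(state))        # pick one neuron at random
        activation = W[i] @ state           # weighted input from the other neurons
        state[i] = 1 if activation >= 0 else -1
    return state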

A Hopfield network can be made to store a set of patterns by a learning process. If an incomplete or distorted version of one of the patterns is used as an input to the network, i.e. used as the initial state, then the network will evolve towards the original complete and undistorted pattern. You should be able to see that a Hopfield network is an associative memory in that it can learn and recall patterns.
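
The usual learning rule is a one-shot Hebbian rule: the weight between two neurons is increased when they agree across the stored patterns and decreased when they disagree. The store() function below is a sketch of this, assuming each pattern is a vector of +1/-1 values; the names are illustrative:

import numpy as np

def store(patterns):
    """Build a Hopfield weight matrix from a list of +1/-1 pattern vectors."""
    n = len(patterns[0])
    W = np.zeros((n, n))
    for p in patterns:
        p = np.asarray(p, dtype=float)
        W += np.outer(p, p)                 # Hebbian outer-product rule
    np.fill_diagonal(W, 0)                  # no neuron connects to itself
    return W / len(patterns)

A distorted pattern fed to evolve() with this weight matrix will, within the network's storage capacity, settle back towards the nearest stored pattern.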

Hopfield?

At this point you might still be wondering what the advantage of the Hopfield network was and why it caused so much fuss. The answer is that it lets us make a start on analysing and understanding network behaviour, and it can be used as an associative memory. There is a curious connection between the Hopfield network and something studied in statistical physics - an Ising spin glass. Statistical physics studies the behaviour of large groups of particles in terms of their statistical distributions. In other words, while it cannot tell you where each particle is, it can tell you the bulk properties of temperature, energy distribution and so on.

An Ising spin glass is just such a collection of particles, but with the added complication that they are magnetic (because they are spinning) and hence interact with one another. This sounds a little like the collection of neurons all interacting with one another that we find in the Hopfield network, and indeed the mathematics of the Ising spin glass does apply to the Hopfield network. This is very useful because it allows us to predict many properties of a Hopfield network, but it can be disconcerting because there is a tendency to use the same terminology as the physics of heat - hence network temperature, energy, entropy and so on. If you don't see the connection then you might find the use of such terms bizarre in the extreme!
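
The link is easiest to see in the network's 'energy'. Each neuron behaves like a spin and each weight like a magnetic interaction, and the usual definition is the Ising-style quantity sketched below, assuming state and W are NumPy arrays following the conventions of the earlier sketches:

def energy(W, state):
    """Ising-style energy E = -1/2 * sum over i,j of W[i,j] * s[i] * s[j]."""
    return -0.5 * state @ W @ state

Because an asynchronous update with symmetric weights never increases this energy, the network must eventually settle into a minimum, which is why the stored patterns behave like stable memories.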

A Statistical Physics Perspective

The really interesting thing is that viewing neural networks from the point of view of statistical physics gives you many insights into what is going on. It turns out that when the network learns it undergoes a 'phase change', just as a gas changes into a liquid. The more difficult a pattern is to learn, the higher the transition temperature, i.e. the temperature at which the phase change occurs. You can even go on to alter the way the network works to include some random noise that depends on the temperature. (If you know a little physics then you will recall that the higher the temperature, the more random motion the particles have.) This gives rise to a whole new set of methods and algorithms - the so-called Boltzmann machines and simulated annealing methods. Boltzmann machines were invented and investigated by Hinton, building on the start given by Hopfield. It is this connection with physics that resulted in the pair being awarded the 2024 Nobel Prize in Physics.
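
One common way of adding temperature-dependent noise, sketched here under the same +1/-1 conventions as before, is to make each update probabilistic: the stronger the input to a neuron and the lower the temperature, the more likely it is to take the value the deterministic rule would give. The sigmoid used below is the standard Boltzmann (Glauber) form; the function name and defaults are illustrative:

import numpy as np

def noisy_step(W, state, T, rng=None):
    """Update one randomly chosen neuron stochastically at temperature T."""
    rng = np.random.default_rng() if rng is None else rng
    state = state.copy()
    i = rng.integers(len(state))
    activation = W[i] @ state
    p_plus = 1.0 / (1.0 + np.exp(-2.0 * activation / T))   # P(neuron -> +1)
    state[i] = 1 if rng.random() < p_plus else -1
    return state

At a high temperature the updates are almost random; as T falls towards zero the rule becomes the deterministic one used earlier.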

It may seem a strange thing to do, adding random noise to a neural network, but it improves the power of the system rather than decreasing it. In general, these techniques are very powerful and enable us to tackle diverse problems such as the travelling salesman problem, scheduling, routing, code-breaking and so on. Many of these problems are known in the jargon as 'NP-complete', which roughly means that the time taken to compute an exact solution grows so fast with the size of the problem that they are effectively insoluble.

The simulated annealing method cannot guarantee to give you the correct answer to these problems, but it can give an answer that is correct with a given probability. For example, in the travelling salesman case it might give you a route between a group of towns which it claims to be the shortest with a probability of 0.9 or 0.99 or 0.999 and so on. The longer you can wait for an answer, the better the probability of it being correct.
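
As a flavour of how this works in practice, here is a compact simulated annealing sketch for the travelling salesman problem. The random cities, the segment-reversal move and the geometric cooling schedule are all common illustrative choices, not a prescription:

import numpy as np

def tour_length(cities, tour):
    """Total length of the closed tour visiting the cities in the given order."""
    ordered = cities[tour]
    return np.sum(np.linalg.norm(ordered - np.roll(ordered, -1, axis=0), axis=1))

def anneal(cities, T=1.0, cooling=0.999, steps=20000, rng=None):
    """Anneal a random tour, occasionally accepting worse tours early on."""
    rng = np.random.default_rng() if rng is None else rng
    tour = rng.permutation(len(cities))
    length = tour_length(cities, tour)
    for _ in range(steps):
        i, j = sorted(rng.integers(len(cities), size=2))
        candidate = tour.copy()
        candidate[i:j + 1] = candidate[i:j + 1][::-1]       # reverse a segment
        delta = tour_length(cities, candidate) - length
        if delta < 0 or rng.random() < np.exp(-delta / T):  # Boltzmann acceptance
            tour, length = candidate, length + delta
        T *= cooling                                        # cool down gradually
    return tour, length

cities = np.random.default_rng(0).random((20, 2))
tour, length = anneal(cities)
print(f"approximate shortest tour length: {length:.3f}")

The longer the run and the slower the cooling, the more likely the final tour is to be the true optimum, which is exactly the probabilistic guarantee described above.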

Playing With Hopfield

The program that we are going to develop simulates a memory consisting of 64 neurons. The neurons are arranged in an 8 by 8 grid and you can enter a number of patterns using a mouse and ask for them to be remembered. You can then enter another pattern and ask for recall. If the pattern matches any of the stored patterns exactly then it will be recalled. The interesting thing is what happens when you enter a pattern that isn't quite like one that you stored.

For example, if you train the neurons to remember a figure 1 and a figure 4, you can have great fun discovering what is recalled when you enter a figure that looks a bit like a 1 and a bit like a 4. If the input is borderline between the two cases then you can sometimes see the neurons 'thinking' about which pattern should be recalled, as fragments of each pattern appear and then disappear in the output. For such a small collection of artificial neurons, their behaviour is remarkably complex and interesting to watch.
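
If you want to try something along these lines straight away, here is a rough sketch of the same kind of experiment using the store() and evolve() functions given earlier. The crude 8x8 pictures of a 1 and a 4, and the amount of noise, are purely illustrative and this is not the program developed later in the article:

import numpy as np

def to_vector(rows):
    """Turn an 8x8 picture of '#' and '.' into a +1/-1 vector of 64 neurons."""
    return np.array([1 if ch == '#' else -1 for row in rows for ch in row])

one  = to_vector(["...##...", "..###...", "...##...", "...##...",
                  "...##...", "...##...", "...##...", "..####.."])
four = to_vector(["..#..#..", ".##..#..", "#....#..", "########",
                  ".....#..", ".....#..", ".....#..", ".....#.."])

W = store([one, four])                      # remember both patterns

noisy = one.copy()                          # start from a damaged copy of the 1
flips = np.random.default_rng(1).choice(64, size=8, replace=False)
noisy[flips] *= -1                          # flip eight of the 64 pixels
recalled = evolve(W, noisy, steps=2000, rng=np.random.default_rng(2))
print("recalled the 1:", np.array_equal(recalled, one))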


