Marvin Minsky
Written by Historian   
Thursday, 17 August 2017

Marvin Minsky was, and remains, one of the best known of the revolutionary thinkers of the early days of AI, robotics and computer science.

In the early 1950s computers were comparatively crude devices with far less computing power than a desktop PC. All the more surprising, then, that some of their users were already thinking about ways of making the machines emulate human intelligence.

For some, there was no problem at all. Of course the computer was a gigantic electronic brain and of course it could think; it might even be superior to us. This really was the naive view current at the invention of the machine, and you still encounter it today, although thankfully less often.

While you can forgive the uninitiated for believing that the huge machines were capable of intelligent thought, what about the people who were closer to them?

Surely Alan Turing can’t have been serious when he predicted that, within about fifty years, a machine would pass the Turing test and successfully mimic a human?

Remember, at the time Turing was thinking about the problem, computers filled a large room and ran slower than a first generation PC. How is it possible that the early computer creators could overrate their machines to this extent?

It was a very common mistake then and it is almost as common now.




The effect on the best workers in the AI field seems to be a series of highs, when the goal is in sight, and very deep lows, when they are convinced that no progress has been made. One of the most important of the early AI researchers, Marvin Minsky, created both lows and highs in the history of Artificial Intelligence.



Marvin Lee Minsky 
(August 9, 1927 - January 24, 2016)


Marvin Minsky’s father was an ophthalmologist and the family home was full of lenses and prisms. Marvin took delight in taking the instruments apart and finding out how they worked. His father was often left with the task of putting them back together again.

Marvin went to Harvard, where he took courses in mathematics, neurophysiology and psychology. He graduated in 1950 with a BA in mathematics, but he was clearly no ordinary mathematician.

Then he moved to Princeton to work on a PhD in mathematics, but not the sort of math that was current at the time. Minsky decided to work on connectionist theories of the brain. He studied everything he could on the physiology and anatomy of the nervous system. He wanted to know how the brain learns, but there were too many gaps. He thought that if he could work out how the neurons were connected, then he could reverse engineer what they actually do.

He was surprised to discover that the connection schemes had never been properly mapped, even for small parts of the brain. Many people, computer experts in particular, are under the impression that we somehow have a “wiring” diagram for the brain, or at least some useful portion of it. The truth is that even today the human brain is not something that you can order a circuit diagram of from your local neurobiology shop. We have managed to work out the wiring of some very simple organisms, such as the nematode worm C. elegans, and of some small subsystems, but as yet nothing major.

At the time Minsky was looking at the problem, almost nothing was known about how neurons function in groups. It was also difficult to see how it could be discovered using the techniques of the time. Neural circuits are inherently 3D, and the optical equipment of the day could only look at 2D slices. This was a problem that was to occupy Minsky for some time.

Neural networks

Despite the lack of a wiring diagram, it was possible to guess at various principles behind the functioning of collections of neurons. Given that the very idea that the brain is made up of discrete neurons was only established around the start of the twentieth century by Santiago Ramón y Cajal, it is remarkable that by 1943 people were already speculating on how neural networks might function.

Some of the earliest work was done by McCulloch and Pitts, who showed how idealised neurons could be put together to form circuits that performed simple logic functions. This was such an influential idea that von Neumann used McCulloch and Pitts-style neuron notation in his 1945 report on the EDVAC, and many later pioneering computers made use of neuron-like circuit elements.
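The McCulloch and Pitts idea is easy to sketch in modern code. The weights and thresholds below are illustrative choices of my own, not values from the 1943 paper; the point is simply that idealised threshold neurons can implement Boolean logic:

```python
# An idealised McCulloch-Pitts unit fires (outputs 1) when the weighted
# sum of its binary inputs reaches a threshold. Weights and thresholds
# here are hand-picked for illustration.

def mp_neuron(inputs, weights, threshold):
    """McCulloch-Pitts neuron: 1 if the weighted sum >= threshold, else 0."""
    return 1 if sum(w * x for w, x in zip(weights, inputs)) >= threshold else 0

def AND(a, b):
    return mp_neuron([a, b], [1, 1], 2)   # fires only when both inputs fire

def OR(a, b):
    return mp_neuron([a, b], [1, 1], 1)   # fires when at least one input fires

def NOT(a):
    return mp_neuron([a], [-1], 0)        # an inhibitory input suppresses firing

# Check the units against the ordinary Boolean operators.
for a in (0, 1):
    for b in (0, 1):
        assert AND(a, b) == (a and b)
        assert OR(a, b) == (a or b)
```

Since AND, OR and NOT together are sufficient for any Boolean function, circuits of such neurons can in principle compute any logic a conventional machine can.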

At this time it really did seem that the structure of the brain had much to tell us about ordinary programmable computers, let alone intelligent learning machines. Normally we think of computers as the product of hard engineering: electronics, Boolean logic, flow diagrams. Yet in the earliest days the pioneers actually thought there was a direct connection between what they were doing and the structure of the brain.

Minsky must have been strongly influenced by this feeling that computers and brains were the same sort of thing because his thesis was on what we now call “neural networks”. In those days you didn’t simulate such machines using general purpose computers – you built them using whatever electronics came to hand.

In 1951 Minsky built a large machine called SNARC, the Stochastic Neural-Analog Reinforcement Computer, the first randomly wired neural network learning machine, based on the reinforcement of simulated synaptic transmission coefficients.

After getting his PhD in 1954 he was lucky enough to be offered a Harvard Fellowship. He had started to think about alternative approaches to AI but he was still troubled by the inability to see the neural structures that would tell him so much about how the brain is organised. So he invented a new type of microscope – the confocal scanning microscope.

Because the basic operation of the microscope was electronic, he also attempted some of the first computer image processing, using the SEAC at the Bureau of Standards. He didn't have much success, however, because the memory wasn't large enough to hold a detailed image and process it.


In 1959 Minsky and John McCarthy founded what became the MIT Artificial Intelligence Laboratory which, in time, became one of the main centres of AI research in the world. The lab attracted some of the most talented people in computer science and AI. Minsky continued to work on neural network schemes, but increasingly his ideas shifted to the symbolic approach to AI, and to robotics in particular.

The difference between the two approaches is subtle but essentially the neural network approach assumes that the problem really is to build something that can learn and then train it to do what you want, whereas the symbolic approach attempts to program the solution from the word go.

In the early days of AI the neural network approach seemed to be having more success. Indeed, there was almost hysteria surrounding the development of one particular type of neural network, the perceptron.


Rosenblatt invented the single-neuron perceptron, and a learning algorithm for it, in 1958, and went on to prove some very powerful theorems about what it could learn. These theorems were a sort of guarantee that if something was learnable by a perceptron, then the learning algorithm would learn it. The AI community at the time oversold the idea, with demonstrations and outlandish claims for what could be done with one single perceptron.
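Rosenblatt's learning rule itself fits in a few lines. The sketch below is a minimal modern reconstruction, not his original formulation; the learning rate and epoch count are arbitrary choices. It trains a single threshold unit on the linearly separable AND function, the sort of problem the convergence theorem guarantees the loop will solve:

```python
# Rosenblatt-style perceptron learning: nudge the weights whenever an
# example is misclassified. Hyper-parameters are illustrative.

def train_perceptron(data, epochs=20, lr=1.0):
    w = [0.0, 0.0]   # one weight per input
    b = 0.0          # bias, playing the role of a threshold
    for _ in range(epochs):
        for x, target in data:
            y = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
            err = target - y
            # The learning rule: move the weights toward misclassified examples.
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

AND_DATA = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(AND_DATA)

def predict(x):
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

print([predict(x) for x, _ in AND_DATA])   # [0, 0, 0, 1], matching AND
```

For any data set that a single threshold unit can separate, this loop provably terminates with a correct set of weights; the trouble, as it turned out, lay with what a single unit can separate at all.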

Then the bubble burst. Minsky had met Seymour Papert, and they were both thinking about the problem of working out exactly what a perceptron could do. The shocking truth revealed in Perceptrons, the book they wrote together in 1969, was that there really are some very simple things that a perceptron cannot learn. In particular, concepts such as “odd” and “even”, that is, the parity of its inputs, are beyond a single perceptron, no matter how big it is or how long you give it to learn.
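You can see the problem concretely by searching for a single threshold unit that computes two-bit parity, i.e. XOR. The brute-force search below is only a sketch over a coarse, hand-chosen weight grid, but since XOR is provably not linearly separable, no grid, however fine, would ever succeed:

```python
# Search for a single linear threshold unit computing a given truth table.
# AND is found easily; two-bit parity (XOR) never is, because no line can
# separate its positive and negative cases. The grid is illustrative.

from itertools import product

def separable(truth_table):
    """Is there any (w1, w2, b) threshold unit computing this table?"""
    grid = [x / 2 for x in range(-8, 9)]   # weights and bias in -4.0 .. 4.0
    for w1, w2, b in product(grid, repeat=3):
        if all((w1 * a + w2 * c + b > 0) == bool(t) for (a, c), t in truth_table):
            return True
    return False

AND_TABLE = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
XOR_TABLE = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

print(separable(AND_TABLE))   # True  -- a single unit can compute AND
print(separable(XOR_TABLE))   # False -- parity defeats any single unit
```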




The perceptron book effectively discouraged further work in the field, simply because no funding organisation would give grants to what now looked like crackpot AI research. For some ten years, until the start of the 1980s, the neural network approach to AI was effectively dead. A few places, mainly psychology and neurology labs, still worked on the problem, but progress was very slow. What started the revival was the discovery that multi-layer networks could be trained, and that they could solve the problems Minsky and Papert had proved impossible for a single layer of perceptrons.
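The multi-layer point can be illustrated without any training at all. Using the same kind of threshold unit, a hidden layer of two units, here hand-wired (weights picked by hand for illustration) to compute OR and NAND, feeding one output unit, computes XOR, which no single unit can do:

```python
# A hand-wired two-layer threshold network computing XOR, the function
# that is impossible for any single threshold unit. All weights are
# hand-picked for illustration, not learned.

def unit(inputs, weights, bias):
    """A single linear threshold unit."""
    return 1 if sum(w * x for w, x in zip(weights, inputs)) + bias > 0 else 0

def xor(a, b):
    # Hidden layer: OR and NAND of the two inputs.
    h1 = unit([a, b], [1, 1], -0.5)    # OR
    h2 = unit([a, b], [-1, -1], 1.5)   # NAND
    # Output layer: AND of the hidden units gives XOR.
    return unit([h1, h2], [1, 1], -1.5)

print([xor(a, b) for a in (0, 1) for b in (0, 1)])   # [0, 1, 1, 0]
```

The hidden layer re-represents the inputs so that the output unit sees a linearly separable problem; the hard part, historically, was learning such hidden representations rather than wiring them by hand.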

Was the doubt that Minsky and Papert cast on neural networks justified?

At the time, the doubts were partly justified. Many people didn't really understand the result and interpreted it as proving that neural networks were a dead end. In fact the results only showed that single-layer networks were a dead end. Of course, at the time it was generally believed that there was no way to train a multi-layer network, but in fact Paul Werbos published the algorithm that we now call back-propagation in 1974. Even so, it took a very long time for the subject to recover from the misunderstanding of the book.
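Back-propagation itself is short to sketch. The code below is a deliberately minimal modern reconstruction, not Werbos's original formulation: a 2-3-1 sigmoid network trained on XOR by gradient descent, with the seed, learning rate and epoch count all chosen arbitrarily. The point is that the gradients needed to train the hidden layer, the thing widely believed impossible in 1969, are straightforward to compute:

```python
# Minimal back-propagation on XOR: propagate the output error backwards
# through the sigmoid layers to get a gradient for every weight.

import math
import random

random.seed(1)

def sigmoid(z):
    return 1 / (1 + math.exp(-z))

# 2 inputs -> 3 hidden sigmoid units -> 1 sigmoid output.
W1 = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(2)]
b1 = [0.0] * 3
W2 = [random.uniform(-1, 1) for _ in range(3)]
b2 = 0.0

DATA = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

def forward(x):
    h = [sigmoid(x[0] * W1[0][j] + x[1] * W1[1][j] + b1[j]) for j in range(3)]
    y = sigmoid(sum(W2[j] * h[j] for j in range(3)) + b2)
    return h, y

def mse():
    return sum((forward(x)[1] - t) ** 2 for x, t in DATA) / len(DATA)

before = mse()
lr = 0.5
for _ in range(5000):
    for x, t in DATA:
        h, y = forward(x)
        # Output-layer error signal, then its share for each hidden unit.
        dy = (y - t) * y * (1 - y)
        dh = [dy * W2[j] * h[j] * (1 - h[j]) for j in range(3)]
        for j in range(3):
            W2[j] -= lr * dy * h[j]
            W1[0][j] -= lr * dh[j] * x[0]
            W1[1][j] -= lr * dh[j] * x[1]
            b1[j] -= lr * dh[j]
        b2 -= lr * dy

print(before, mse())   # the squared error falls as training proceeds
```

With gradients available for the hidden weights, the network discovers for itself the kind of intermediate representation that Minsky and Papert showed a single layer could never form.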




