Google Has A Network More Like The Brain
Written by Mike James   
Sunday, 22 September 2019

Our current neural networks work wonders, but they are very far from mimicking the biological brain that they are based on. Now Google Research has some breakthrough results with a more biologically plausible artificial neural network - project Ihmehimmeli.

If you are under the impression that the neural networks we currently use are like biological networks, then think again. The best that can be said is that they are biologically inspired.

A neuron is excited by signals it receives from other neurons and eventually it "fires", passing the signal on. Our current artificial neural networks simply sum their inputs and output an activation signal proportional to that sum. A spiking network copies the way real neurons behave more closely: when an artificial neuron receives pulses, or "spikes", from other neurons that provide a sufficient stimulus, it starts to fire spikes to the neurons connected to it.

Implementations of spiking neural networks differ in the way they generate and interpret this activity, but time is of the essence. It has long been supposed that the timing of pulses from neurons is important, but it has been hard to prove. In this case the network uses neurons that spike earlier for stronger inputs, and the output is taken to be the neuron that spikes first. Notice that this isn't the same as making the most strongly stimulated neurons the important ones.
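As a sketch of what time-to-first-spike decoding means in practice - the spike times below are made up for illustration, not produced by the actual network:

```python
import numpy as np

# Hypothetical first-spike times (in ms) of three output neurons for
# one input; under temporal coding, earlier spike = stronger response.
spike_times = np.array([4.2, 1.7, 3.9])

# The network's prediction is simply whichever neuron fires first.
predicted_class = int(np.argmin(spike_times))
print(predicted_class)  # neuron 1 fires first
```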

The model also uses weighting constants that determine how quickly a neuron will spike. The neuron's transfer function models the cell membrane potential, which slowly rises as input spikes are received. When the potential gets high enough, the neuron fires. The weights control how quickly the potential builds up and hence how soon the neuron fires.
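A minimal numerical sketch of such a neuron, assuming an alpha-shaped synaptic kernel as in the paper's title; the decay constant, threshold, and weights below are made-up values, and the grid-based threshold search is just the simplest way to find the spike time:

```python
import numpy as np

TAU = 1.0        # decay constant of the alpha kernel (assumed value)
THRESHOLD = 0.5  # firing threshold (assumed value)

def alpha_kernel(dt):
    """Alpha-shaped post-synaptic potential: rises, peaks, then decays."""
    return np.where(dt > 0, dt * np.exp(-TAU * dt), 0.0)

def first_spike_time(input_times, weights, t_grid):
    """First time the summed membrane potential crosses the threshold,
    or None if the neuron never fires. Purely a numerical sketch."""
    potential = sum(w * alpha_kernel(t_grid - t_in)
                    for t_in, w in zip(input_times, weights))
    crossed = np.nonzero(potential >= THRESHOLD)[0]
    return t_grid[crossed[0]] if crossed.size else None

t_grid = np.linspace(0.0, 10.0, 10001)
# A larger weight makes the potential build up faster, so the
# neuron fires sooner - the mechanism the article describes.
early = first_spike_time([0.0], [3.0], t_grid)
late = first_spike_time([0.0], [1.5], t_grid)
```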

This formulation allows a derivative to be found, and in modern AI anything with a derivative can be optimized - this is the learning algorithm for the spiking neural network. It is almost identical to standard backpropagation, but now the optimization changes when neurons fire, so as to make a particular output neuron fire first.
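To see why a spike time can have a derivative at all, consider a deliberately simplified neuron whose potential rises linearly after an input spike - this is not the paper's alpha-function model, just the easiest case in which the gradient is visible in closed form:

```python
THRESHOLD = 1.0  # firing threshold (assumed value)

def spike_time(w):
    """If the potential rises as V(t) = w * t after an input spike at
    t = 0, the neuron fires when V reaches THRESHOLD."""
    return THRESHOLD / w

def d_spike_time_dw(w):
    # d/dw (THRESHOLD / w) = -THRESHOLD / w**2: a larger weight always
    # means an earlier spike, and the dependence is smooth, so gradient
    # descent on spike times is well defined.
    return -THRESHOLD / w ** 2

# One gradient-descent step on the loss L = t_spike, i.e. "make the
# target neuron fire as early as possible".
w = 0.8
learning_rate = 0.1
t_before = spike_time(w)
w = w - learning_rate * d_spike_time_dw(w)
t_after = spike_time(w)
# t_after < t_before: the step moved the target neuron's spike earlier.
```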

So does it work?

It seems to, as it achieved 98% on the standard MNIST dataset - close to a traditional neural network. It is claimed that the slight loss of accuracy is acceptable because the network can be implemented in a more energy-efficient way.

The team also comments on the strange behavior of the network while learning. It seems to undergo a phase transition. At first it adopts a slow but accurate mode and then, after more training, it shifts to fast, but not quite as accurate. This is a phenomenon specific to the time coding of this particular type of neural network and would seem worthy of further investigation.

It is also claimed that the network learns more understandable features, more like those a human would recognize. To demonstrate this, the input image was adjusted to stimulate the "correct" output neuron:


There is plenty of scope for further work:

This work is one example of an initial step that project Ihmehimmeli is taking in exploring the potential of time-based biology-inspired computing. In other on-going experiments, we are training spiking networks with temporal coding to control the walking of an artificial insect in a virtual environment, or taking inspiration from the development of the neural system to train a 2D spiking grid to predict words using axonal growth. Our goal is to increase our familiarity with the mechanisms that nature has evolved for natural intelligence, enabling the exploration of time-based artificial neural networks with varying internal states and state transitions.

The final mystery is why the project is called Ihmehimmeli:

“Ihmehimmeli” is a Finnish tongue-in-cheek word for a complex tool or a machine element whose purpose is not immediately easy to grasp. The essence of this word captures our aim to build complex recurrent neural network architectures with temporal encoding of information.


The team from Google Research, Zurich, holding a himmeli

You can view the success of this type of neural network, with its use of time coding, as an indication that this might be how biological networks function. Networks that are closer to biology seem to have advantages and unique properties. However, you could also take it as another demonstration of modern AI's central dogma - if you can differentiate it, you can make it learn.


More Information

Temporal Coding in Spiking Neural Networks with Alpha Synaptic Function (pdf)

Iulia M. Comsa, Krzysztof Potempa, Luca Versari, Thomas Fischbacher, Andrea Gesmundo and Jyrki Alakuijala

Related Articles

China's Tianjic Chip Rides A Bike

Neuromorphic Supercomputer Up and Running

IBM's TrueNorth Rat Brain 

IBM's TrueNorth Simulates 530 Billion Neurons  

Flying Neural Net Avoids Obstacles





