Google DeepMind's Demis Hassabis Gives The Strachey Lecture
Written by Mike James   
Wednesday, 30 March 2016

You can't have missed the amazing advances in AI that have been happening at Google's DeepMind. First a neural network learns to play arcade games and then convincingly beats the world's top Go expert. The founder and CEO of DeepMind, Demis Hassabis, gave this year's Strachey Lecture in Oxford and had a lot to say. 

 


 

The Strachey Lectures are a series of lectures organized by Oxford University in honor of Christopher Strachey who, among other computing achievements, was the first Professor of Computation at Oxford. 

This year's Strachey Lecture was streamed live on February 24th, but you can now watch it as a video. 

Abstract:

Dr. Demis Hassabis is the Co-Founder and CEO of DeepMind, the world’s leading General Artificial Intelligence (AI) company, which was acquired by Google in 2014 in their largest ever European acquisition. Demis will draw on his eclectic experiences as an AI researcher, neuroscientist and videogames designer to discuss what is happening at the cutting edge of AI research, its future impact on fields such as science and healthcare, and how developing AI may help us better understand the human mind. 

The video covers some philosophical points but there is a lot of general information about what is going on at DeepMind. 

The key idea is the combination of deep neural networks with an older technique - reinforcement learning. The idea is that the neural network isn't told how close it is to a desired result, which is what happens in supervised learning. Instead the network is given a reward that tells it how well it is doing, and it modifies its behavior to increase that reward. Reinforcement learning is an old idea that has been tried many times with mixed results. What has changed is mainly that we can now make use of very deep networks that previously would have been impossible to train, simply because we didn't have the computing power. 

Now that we do have the computing power, deep reinforcement learning seems to be powerful enough to learn computer games just by playing them, to navigate mazes, and to play Go at grandmaster level. 
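
To get a feel for the reinforcement learning half of this idea, here is a minimal sketch in Python of tabular Q-learning on a toy corridor task. This is not DeepMind's deep Q-network - the environment, rewards and hyperparameters are made up purely for illustration - but it shows the essential point: the agent is never shown the correct action, only a reward, and it adjusts its value estimates to make that reward more likely.

# A minimal sketch of reinforcement learning (tabular Q-learning), not
# DeepMind's actual DQN. The corridor environment, reward scheme and
# hyperparameters below are illustrative assumptions.
import random

N_STATES = 6          # corridor cells 0..5; reaching cell 5 ends the episode
ACTIONS = [-1, +1]    # move left or right
EPISODES = 200
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1

# Q[state][action_index] is the agent's current estimate of future reward
Q = [[0.0, 0.0] for _ in range(N_STATES)]

for _ in range(EPISODES):
    state = 0
    while state != N_STATES - 1:
        # Epsilon-greedy: mostly exploit the best known action, sometimes explore
        if random.random() < EPSILON:
            a = random.randrange(2)
        else:
            a = 0 if Q[state][0] >= Q[state][1] else 1

        next_state = max(0, min(N_STATES - 1, state + ACTIONS[a]))
        reward = 1.0 if next_state == N_STATES - 1 else 0.0

        # The only learning signal is the reward plus a bootstrapped estimate
        # of future reward - no "correct answer" is ever supplied.
        target = reward + GAMMA * max(Q[next_state])
        Q[state][a] += ALPHA * (target - Q[state][a])
        state = next_state

print("Learned Q-values (state, [left, right]):")
for s, q in enumerate(Q):
    print(s, [round(v, 2) for v in q])

DeepMind's step beyond this kind of toy example was to replace the lookup table with a deep convolutional network that estimates the values directly from raw pixels, which is what allows the same approach to scale up to Atari games and, combined with tree search, to Go.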

Watch the video to find out more:

 

I'm not sure that the question of the future has been answered, but you can tell that there will be a deep reinforcement learning network in it. Perhaps this is the start of realizing the goal of strong AI.

 


More Information

Google DeepMind

Related Articles

AlphaGo Changed Everything

AlphaGo Beats Lee Sedol Final Score 4-1

Google's AI Beats Human Professional Player At Go 

Google's DeepMind Learns To Play Arcade Games

Neural Networks Beat Humans 

Google DeepMind Expands 

Google Buys Unproven AI Company

Deep Learning Researchers To Work For Google

Google's Neural Networks See Even Better

 

To be informed about new articles on I Programmer, sign up for our weekly newsletter, subscribe to the RSS feed and follow us on Twitter, Facebook, Google+ or Linkedin.

 


 


 


Last Updated ( Wednesday, 30 March 2016 )