Google's DeepMind Demis Hassabis Gives The Strachey Lecture
Written by Mike James   
Wednesday, 30 March 2016

You can't have missed the amazing advances in AI that have been happening at Google's DeepMind. First a neural network learned to play arcade games, and then AlphaGo convincingly beat one of the world's top Go players. The founder and CEO of DeepMind, Demis Hassabis, gave this year's Strachey Lecture in Oxford and had a lot to say. 

 


 

The Strachey Lectures are a series of lectures organized by Oxford University in honor of Christopher Strachey who, among other computing achievements, was the first Professor of Computation at Oxford. 

This year's Strachey Lecture was streamed live on February 24th, but you can still watch it as a video. 

Abstract:

Dr. Demis Hassabis is the Co-Founder and CEO of DeepMind, the world’s leading General Artificial Intelligence (AI) company, which was acquired by Google in 2014 in their largest ever European acquisition. Demis will draw on his eclectic experiences as an AI researcher, neuroscientist and videogames designer to discuss what is happening at the cutting edge of AI research, its future impact on fields such as science and healthcare, and how developing AI may help us better understand the human mind. 

The video covers some philosophical points but there is a lot of general information about what is going on at DeepMind. 

The key idea is bringing together deep neural networks with an older technique - reinforcement learning. In supervised learning the network is told how close its output is to a desired result; in reinforcement learning it is instead given a reward signal that tells it how well it is doing, and it modifies its behavior to increase the reward. This is an old idea that has been tried many times in the past with mixed results. What has changed is mainly that we can now make use of very deep networks that in the past would have been impossible to train, simply because we didn't have the computing power. 

Now that we do have the computing power, deep reinforcement learning seems to be powerful enough to learn computer games just by playing them, to navigate mazes, and to play Go at grandmaster level. 
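To make the reward-driven learning loop described above concrete, here is a minimal tabular Q-learning sketch - illustrative only, not DeepMind's method. DeepMind's DQN replaces the lookup table with a deep neural network (plus experience replay and a target network), but the core update driven purely by a reward signal is the same. The corridor environment here is a made-up toy problem:

```python
import random

# Tabular Q-learning on a hypothetical 5-cell corridor: the agent starts
# at cell 0 and receives a reward of 1 only on reaching cell 4.
# No "correct answer" is ever shown to the agent - only the reward.

N_STATES = 5
ACTIONS = [-1, +1]                     # move left or right
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1  # learning rate, discount, exploration

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    """Apply an action; return (next_state, reward, done)."""
    nxt = max(0, min(N_STATES - 1, state + action))
    done = nxt == N_STATES - 1
    return nxt, (1.0 if done else 0.0), done

for episode in range(200):
    state = 0
    while True:
        # Epsilon-greedy: mostly exploit the best-known action, sometimes explore.
        if random.random() < EPSILON:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q[(state, a)])
        nxt, reward, done = step(state, action)
        # The reward - not a supervised target - drives the value update.
        best_next = max(Q[(nxt, a)] for a in ACTIONS)
        Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
        state = nxt
        if done:
            break

# The learned greedy policy: for this corridor it should always move right.
policy = {s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES - 1)}
print(policy)
```

The crucial difference from supervised learning is visible in the update line: the target `reward + GAMMA * best_next` is built from the environment's reward and the agent's own estimates, not from a labeled answer.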

Watch the video to find out more:

 

I'm not sure that the question of the future has been answered, but you can tell that there will be a deep reinforcement network in it. Perhaps this is the start of realizing the goal of strong AI.

 


More Information

Google DeepMind

Related Articles

AlphaGo Changed Everything

AlphaGo Beats Lee Sedol Final Score 4-1

Google's AI Beats Human Professional Player At Go 

Google's DeepMind Learns To Play Arcade Games

Neural Networks Beat Humans 

Google DeepMind Expands 

Google Buys Unproven AI Company

Deep Learning Researchers To Work For Google

Google's Neural Networks See Even Better

 


 




 


 



 


Last Updated ( Wednesday, 30 March 2016 )
 
 

   