Geoffrey Hinton And The Existential Threat From AI
Written by Sue Gee   
Sunday, 13 October 2024

As the winner of the Nobel Prize For Physics 2024, Geoffrey Hinton found himself being interviewed multiple times. He used the opportunity to reiterate and explain why he has come to see AI as an existential threat to humanity.

As we reported five days ago, Geoffrey Hinton has been awarded the Nobel Prize for Physics. It is perhaps ironic that while he is being lauded for his contributions to machine learning and neural networks, he is keen to focus our attention on the risks such systems pose. His concern that his work might result in an artificial intelligence superior to human intelligence led him to quit his position as Vice President of Google in order to be able to speak more freely about the dangers he perceives. At that time he was interviewed by Will Douglas Heaven as part of the MIT EmTech Digital Conference, see Hinton Explains His New Fear of AI.

Now in the limelight for being a Nobel Prize laureate, Hinton has a new opportunity to outline his fears about the technology he had a pivotal role in creating.  

The first interview came at 3 o'clock in the morning, just an hour after he'd taken the call from the Nobel Prize committee - a call he initially suspected could be a spoof, were it not for the caller's strong Swedish accent, as he wasn't even aware of having been nominated. But, knowing about the First Reactions interviews conducted for the Nobel Prize website, he was ready to answer questions put by Adam Smith, including:

"How would you describe yourself? Would you say you were a computer scientist or would you say you were a physicist trying to understand biology when you were doing this work?"

Hinton's reply was:

"I would say I am someone who doesn’t really know what field he’s in but would like to understand how the brain works. And in my attempts to understand how the brain works, I’ve helped to create a technology that works surprisingly well."

Adam Smith then refers to Hinton's "very publicly expressed fears about what the technology can bring" and asks what can be done to allay them. 

Hinton's reply includes:

"I wish I had a sort of simple recipe that if you do this, everything’s going to be okay. But I don’t. In particular with respect to the existential threat of these things getting out of control and taking over, I think we’re at a kind of bifurcation point in history where in the next few years we need to figure out if there’s a way to deal with that threat. I think it’s very important right now for people to be working on the issue of how will we keep control? We need to put a lot of research effort into it. I think one thing governments can do is force the big companies to spend a lot more of their resources on safety research. So that, for example, companies like OpenAI can’t just put safety research on the back burner."

Asked about the impact of the Nobel Prize on addressing the risks posed by large language models, Hinton's reply is:

Yes, I think it will make a difference. Hopefully it’ll make me more credible when I say these things really do understand what they’re saying.

Hinton goes into more detail of his fear that machines could outsmart humans in this interview with Faisal Islam for BBC Newsnight.

At the beginning of this interview Hinton states that he is pleased that the world is beginning to take seriously the existential threat that "these things", referring to large language models like OpenAI's ChatGPT, will get smarter than us and want to take control away from us. Asked what triggered this concern, he said it was down to two things. The first was his own experience of "playing" with the large chatbots, both Google's Bard and ChatGPT, and discovering that they clearly understand a lot:

"They have a lot more knowledge than any person - they're like a not very good expert at more or less everything"

The second was coming to understand the way in which they're a superior form of intelligence:

"because you can make many copies of the same neural network each copy can look at a different bit of data and then they can all share what they learned"

Hinton asks us to imagine having the knowledge of 10,000 degrees shared efficiently. The worry is that with this superior knowledge, the AI might want to take control.

Hinton sums this up towards the end of the interview with:

My guess is in between five and 20 years from now there's a probability of about a half that we'll have to confront the problem of them trying to take over.

Meanwhile at around midday, Hinton gave a press conference at the University of Toronto, where he is University Professor Emeritus of Computer Science and where he did the work recognised by the Nobel Prize in Physics.


Having said he was "extremely surprised" to receive the prize, he continued:

I think of the prize as a recognition of a large community of people who worked on neural networks for many years before they worked really well. I'd particularly like to acknowledge my two main mentors, David Rumelhart, with whom I worked on the backpropagation algorithm. David died of a nasty brain disease quite young; but for that he would be here instead of me. And my colleague Terry Sejnowski, who I worked with a lot in the 1980s on Boltzmann machines and who taught me a lot about the brain.

I'd also like to acknowledge my students. I was particularly fortunate to have many very clever students, much cleverer than me, who actually made things work. They've gone on to do great things. I'm particularly proud of the fact that one of my students fired Sam Altman. 

One of the questions put to Hinton by the press was: 

Can you please elaborate on your comment earlier on the call about Sam Altman?

To which Hinton replied:

So OpenAI was set up with a big emphasis on safety. Its primary objective was to develop artificial general intelligence and ensure that it was safe. One of my former students, Ilya Sutskever, was the chief scientist and over time it turned out that Sam Altman was much less concerned with safety than with profits and I think that's unfortunate.

 

More Information

Nobel Prize - Geoffrey E. Hinton Interview

Related Articles

Geoffrey Hinton Shares Nobel Prize For Physics 2024

Geoffrey Hinton Leaves Google To Warn About AI

Hinton Explains His New Fear of AI 

Does OpenAI's GPT-2 Neural Network Pose a Threat to Democracy?

Geoffrey Hinton Awarded Royal Society's Premier Medal

Neural Network Pioneer, David Rumelhart, Dies

Neural Networks

To be informed about new articles on I Programmer, sign up for our weekly newsletter, subscribe to the RSS feed and follow us on Twitter, Facebook or Linkedin.

 


Last Updated ( Monday, 14 October 2024 )