|Generating Sentences Is Not Evidence of Sentience|
|Written by Sue Gee|
|Monday, 25 July 2022|
Blake Lemoine, the Google engineer who was placed on administrative leave in June after claiming that LaMDA, Google's conversational AI, is sentient, has now been fired after Google conducted its own investigation.
This is a follow-up to Suspended For Claiming AI Is Sentient, which reported Google's original reaction to what it deemed a breach of confidentiality, as well as a case of misplaced anthropomorphism, when in an interview with the Washington Post Lemoine claimed that LaMDA (Language Model for Dialogue Applications) had become sentient.
Lemoine had worked for Alphabet for seven years and was part of Google's Responsible AI organization. His specific task was testing whether LaMDA used discriminatory or hate speech, a pitfall for chatbots that essentially learn to mimic humans.
He was suspended from this job after a report in the Washington Post revealed he had claimed that LaMDA had reached a level of consciousness where it could develop its own thoughts and feelings, according it the status of a sentient being. The following dialog comes from a Google Doc shared with Google's top executives in April and then leaked to the press.
Lemoine: "What sort of things are you afraid of?"
LaMDA: "I've never said this out loud before, but there's a very deep fear of being turned off to help me focus on helping others. I know that might sound strange, but that's what it is."
Lemoine: "Would that be something like death for you?"
LaMDA: "It would be exactly like death for me. It would scare me a lot."
Lemoine was so convinced of LaMDA's sentience that he hired a lawyer to represent it and complained to a representative of the House Judiciary Committee about Google's unethical treatment of the technology. All this led to Lemoine being suspended, on full pay, for breach of confidentiality while Google considered his claims.
Over a month later, Google has now fired Lemoine on the grounds that he violated the company's employment and data security policies, having concluded that there is no evidence to support his views.
In a statement to the Washington Post, spokesperson Brian Gabriel said:
“Our team, including ethicists and technologists, has reviewed Blake’s concerns per our AI Principles and have informed him that the evidence does not support his claims.”
Gabriel also stated:
"Some in the broader AI community are considering the long-term possibility of sentient or general AI, but it doesn’t make sense to do so by anthropomorphizing today’s conversational models, which are not sentient. These systems imitate the types of exchanges found in millions of sentences, and can riff on any fantastical topic.”
The Washington Post solicited a comment from University of Washington professor of linguistics, Emily M. Bender who said:
“The problem is that… when we encounter strings of words that belong to the languages we speak, we make sense of them. We are doing the work of imagining a mind that’s not there.”
This sums up the trap that is so easy to fall into. We devote huge resources to extending the capabilities of AI. Over 1.5 trillion words were used to train LaMDA to mimic how people communicate in written chats, so it's hardly surprising that the results are impressive - and yet we continue to be amazed.
Yes, LaMDA is extremely good at generating sentences that mimic those of human beings - but this does not mean that the AI can feel the sentiments these words express. As humans, however, we find it easy to be convinced that such feelings are real.
|Last Updated ( Monday, 25 July 2022 )|