The Turing Test Is Past
Written by Mike James   
Wednesday, 22 March 2023

... and dead and gone. This long-standing talking point really isn't relevant since the advent of LLMs like GPT. We may be worried about AI taking our coding jobs, but let's not forget that this is the time when the Turing Test was passed.

Of course, how important this milestone is depends on how you regard the Turing Test. The test was introduced by Turing in his 1950 paper "Computing Machinery and Intelligence". The basic idea was "If it looks like a duck, swims like a duck, and quacks like a duck, then it probably is a duck." The duck test applied to human intelligence is exactly the idea behind the Turing Test. Can you tell a human intelligence and an AI apart simply by talking to them? If you can't, then the AI is a duck - sorry - a human intelligence. Generally the modes of communication are limited so that silly things like the AI not having a body or a face don't influence the outcome. The idea is to arrange things so that only the "intelligence" of the human and the AI are being compared.


Turing thought that the test would be passed, and passed quite soon, and there were many early attempts to create something that passed it. Once again, we have an example of Goodhart's Law - when a measure becomes a target, it ceases to be a good measure. Once the target was to pass the Turing Test, the test became something of a joke.

The first of the "chatbots", as programs that targeted the Turing Test became known, was Eliza, created in 1966 by Joseph Weizenbaum. Eliza was styled on a Rogerian psychotherapist. It worked by turning whatever the subject typed into a question, a technique that many claimed passed the Turing Test. If the subject typed:

"I don't understand computers."

Eliza would respond with:

"What don't you understand about computers?"

You can see that such a transformation can be performed using a simple template:

I don't understand x -> What don't you understand about x?

Eliza was a list of such templated rules and, after writing my own version of Eliza back in the eighties, I can still remember enough of the rules to take a nap while still appearing to be engaging in a conversation. Of course, once you know how it works you can easily spot the patterns and even work out traps for the unintelligent Eliza. For example:

I don't understand you -> What don't you understand about you?

which is more convincing if you add the template

you -> me

You can carry on inventing and adding rules to Eliza in this way to fix problems, but as you do the rules start to interact in ways you didn't anticipate and the result is increasing amounts of nonsense.
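To make the template idea concrete, here is a minimal Eliza-style rule engine in Python. It is an illustrative sketch, not Weizenbaum's original script: the rules, the reflection table and the default reply are all invented for the example, but they implement the same template-plus-reflection mechanism described above, including the "you -> me" rule.

```python
import re

# Word reflections applied to captured text, e.g. "you" -> "me".
REFLECTIONS = {"you": "me", "me": "you", "i": "you", "my": "your", "your": "my"}

# Each rule pairs a pattern template with a response template.
RULES = [
    (re.compile(r"i don't understand (.+)", re.I),
     "What don't you understand about {0}?"),
    (re.compile(r"i am (.+)", re.I),
     "Why do you say you are {0}?"),
]

def reflect(text):
    """Swap pronouns word by word so the reply reads naturally."""
    return " ".join(REFLECTIONS.get(w.lower(), w) for w in text.split())

def respond(line):
    """Return the response for the first rule that matches, else a stock reply."""
    for pattern, template in RULES:
        m = pattern.match(line.strip().rstrip("."))
        if m:
            return template.format(reflect(m.group(1)))
    return "Please go on."  # default when no template matches

print(respond("I don't understand computers."))  # What don't you understand about computers?
print(respond("I don't understand you."))        # What don't you understand about me?
```

The second call shows the "you -> me" reflection closing the trap described above - and also hints at how quickly rules can interact in unintended ways once there are more than a handful of them.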

Eliza was simple, but it was new and hence deeply impressive. Users had never encountered anything like it and quickly ascribed intelligence, and even personality, to it. This willingness to believe that things that are even slightly duck-like are ducks is something that still happens today. The case of the Google engineer who was convinced that an LLM, LaMDA, was alive is a recent case in point.

However, we now have a big problem, because the Eliza effect has just broken out of its box and become a media sensation. Microsoft has let loose its GPT-based chatbot as part of just about every application it can think of - the Bing search engine being the most talked about. Now Google has announced Bard to compete with it and Amazon is no doubt working on something.

Talking of Amazon, it is worth noting that LLMs like GPT and Bard have the real potential to make many of our current attempts at AI disposable. Yes, GPT and Bard are coming for you, Alexa, and you, Siri. Wider still, they are coming for Boston Dynamics robots and perhaps even self-driving cars. The point is that the model is increasingly going multi-modal and embodied - i.e. it can see and feel and react to its inputs in a reasonable way. A lot of our current AI is about to be relegated to the past.

But to return to the Turing Test. What is interesting is that the general user isn't asking "is this thing intelligent?" Instead they are rating the quality of its jokes, its poems, and its ability to do specific things like arithmetic and writing articles. It has human-like intelligence, with the one flaw of not always being rooted in reality - it lies or, if you're being kind, it "hallucinates". But then so do humans. The point is that this is so human-like that there is no point in subjecting it to a formal Turing Test unless it is for the spectacle.

The Turing Test has been passed and without much fuss.

The big problem with the Turing Test is that it is a duck test, and duck tests have never sat well with philosophers. If I give you a mechanical duck that quacks, swims and eats seed, is this a duck? No, it's still a mechanical duck. This argument is used to say that even if a machine passes the Turing Test it's still a machine.

The Turing Test has long been something relevant only to AI's past, and the fact that it has been passed just makes this clear. The duck test proves nothing deep, only that something that has some of the properties of another thing can sometimes be used in its place. Yes, it's the Liskov substitution principle.

The Turing Test doesn't prove that anything, not even a human, can think, and as such it is a relic of times past.

As my great hero Edsger Dijkstra said:

"The question of whether computers can think is like the question of whether submarines can swim." 

Does it all matter?

Think of this the next time some over-enthusiastic user claims that Bard or ChatGPT, or an even more powerful successor, has begged not to be switched off...

More Information

An important next step on our AI journey

Related Articles

Chat GPT 4 - Still Not Telling The Whole Truth

Google's Large Language Model Takes Control

Runaway Success Of ChatGPT

The Year of AI Breakthroughs 2022

The Unreasonable Effectiveness Of GPT-3

Would You Turn Off A Robot That Was Afraid Of The Dark?

Trouble At The Heart Of AI?

The Paradox of Artificial Intelligence

Artificial Intelligence - Strong and Weak

Artificial Intelligence, Machine Learning and Society

Suspended For Claiming AI Is Sentient


 
