This week's selected xkcd cartoon lampoons, ever so gently, the Turing Test. So what is the Turing Test and why has it been reduced to a laughing matter?
Alan Turing was a deep thinker when it came to computing. He wasn't the father of the computer and, to be honest, his hardware ideas were often a little strange. What he really excelled at was reasoning about the limits of computation and about how the world we see around us is both enabled and limited by computation.
In the case of the Turing Test we have an argument that basically illustrates the case for AI being possible and a way of telling how well we are doing. The Turing Test isn't deep, in that it is simply a black box approach to the problem, but at the time it was thought up it was challenging in the same way that evolution was, and is. It cuts to the core of the question, "what is a human?"
We recognise a human, and hence human intelligence, very easily and without the need for any sort of formal test. For AI it isn't so easy. A computer can fool you into thinking it is intelligent simply by doing something you regard as intelligent far better than any human can. A chess-playing computer, for example, is an instance of AI, but it isn't an instance of intelligence. On the other hand, we have the problem of familiarity breeding contempt. A computer is programmed, and so it is easy to conclude that anything programmed cannot be intelligent.
To get a fair test that is unencumbered by prejudice, we need to move away from the ability to "just recognize" human intelligence. We need to move away from the way intelligence is embodied in humans. We need to put the intelligent agent in a black box so we can concentrate on the pure intelligence and not the ancillary functions such as speech, facial expressions and so on.
So the Turing Test is simply to place a human intelligence and a proposed machine intelligence inside black boxes and only allow the same form of interaction with each. If you can't tell the difference after interacting with them then they are essentially the same.
This is where the philosophers start to offer counter-arguments. They worry that what is in the box is as important as what comes out of the box. The famous Chinese Room argument attempts to make this clear. It does so by proposing a simple mechanism inside the box that passes part of the Turing Test. Yet, being so simple, the mechanism is so obviously not intelligent that it seems to make nonsense of the Turing Test. The mechanism in question is a lookup table that provides a "correct" response for every input to the box. The argument attempts to prove that we really do care what mechanism is in the box.
There are many objections to the Chinese Room example, including the computability and complexity of using a lookup table in such a situation. The most important point is that arguments about what is in the box are irrelevant. If a lookup table passes the Turing test then, in that limited sense, it is intelligent. The problem is that we tend to confuse intelligence with other aspects of being human, like having feelings and being self aware.
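The lookup-table mechanism at the heart of the Chinese Room is easy to sketch. The following is a minimal illustration, not anything Searle specified; the table entries and the fallback reply are hypothetical:

```python
# Sketch of the Chinese Room as a lookup table: every anticipated
# input maps to a canned "correct" reply. The mechanism matches
# symbols to symbols and understands nothing.

responses = {
    "hello": "Hello! How are you today?",
    "what is 2 + 2?": "4, of course.",
    "are you a computer?": "Why do you ask?",
}

def room(message: str) -> str:
    """Return the canned reply for a message, deflecting if none exists."""
    key = message.strip().lower()
    # A deflection fallback is exactly the kind of trick a real
    # contender would need for inputs the table doesn't cover.
    return responses.get(key, "That's an interesting question.")

print(room("Hello"))
print(room("Explain quantum gravity"))
```

The complexity objection is visible right in the sketch: covering every possible conversation would need a table whose size explodes combinatorially with the length of the dialogue, which is why the fallback deflection is doing most of the work.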
More cartoon fun at xkcd, a webcomic of romance, sarcasm, math, and language.
The big problem with the Turing Test is not what is inside the box but that whatever is inside the box knows it is taking part in the Turing Test. This converts what was a perfectly reasonable scientific test into an adversarial contest more like a trial. The contents of the box can use whatever trickery seems appropriate, simply responding with prepared humour, sarcasm or whimsy. By being idiosyncratic and random it can seem to answer deep questions while avoiding specific ones - and apparently for this a lookup table is more than enough.
The Turing Test needs an extra condition: that neither party in the boxes has been prepared for the test.
So yes, the cartoon is right. Today's Turing Test contenders are just as likely to convince the judge that they are a computer as vice versa.