IBM takes on Jeopardy - has AI really got this far?
Friday, 17 December 2010
Natural language understanding is hard, but IBM thinks its machine Watson can do it well enough to take on champion Jeopardy players. How does it work?
It was reported some time back that IBM was planning to take on the "Jeopardy" challenge - the long-running game show - with a machine called Watson. The task requires natural language abilities and is a good test of how far AI has come.
The machine, named after the founder of IBM, has been testing its natural language abilities against former human players. Now it is ready for the real thing: a match against the two most successful players in the show's history. The game will be played for real next February.
After taking on chess world champion Garry Kasparov and winning with the machine Deep Blue, IBM clearly now thinks that natural language understanding, coupled with decision-making algorithms, is where the next challenge in AI lies. Jeopardy is a game whose clues involve puns, jokes and many other subtleties that, for now, only humans find easy to understand. There is also a strategic element: a player can choose not to buzz in if they are not sure of the correct response.
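The strategic element can be pictured as a simple expected-value calculation. The following sketch is purely illustrative - the function name, the break-even threshold and the whole framing are assumptions for exposition, not IBM's actual strategy code:

```python
# Hypothetical sketch: deciding whether to attempt a Jeopardy clue.
# A wrong answer loses the clue's dollar value, a correct one gains it,
# and passing changes nothing - so a rational player (human or machine)
# only buzzes in when its confidence in the answer is high enough.

def should_buzz(confidence: float, threshold: float = 0.5) -> bool:
    """Attempt the clue only if the expected value is positive.

    Expected gain = p*value - (1 - p)*value, which is positive exactly
    when p > 0.5.  A real system would tune the threshold to account
    for the game state and the opponents' behaviour.
    """
    return confidence > threshold

print(should_buzz(0.7))   # True  - worth attempting
print(should_buzz(0.3))   # False - better to stay silent
```

A machine has one advantage here: unlike a human, it can attach an explicit probability to each candidate answer and apply the threshold consistently.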
One of the conditions of the challenge is that Watson has to be self-contained and can't consult outside sources of information such as the web. There are also far too many possible questions simply to compile a database of answers. Instead, the DeepQA software that Watson uses attempts to build a more general knowledge base from general knowledge texts drawn from the web and other sources, which it can then try to correlate with the questions. This is made much harder by the complex and idiomatic form of a typical clue. The problem is extracting the general subject area and the specifics needed to home in on a relevant answer in the knowledge base - miss one colloquial inversion of meaning and the answer is wrong in a way only a machine could manage.
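The correlation idea can be sketched in miniature: extract the content words from a clue, find passages in a pre-built text collection that share those words, and rank the entities those passages suggest. Everything here - the toy corpus, the overlap scoring, the answer extraction - is an assumed simplification for illustration; Watson's actual pipeline is far more sophisticated:

```python
# Toy sketch of the DeepQA idea: match a clue's content words against
# passages in a pre-built "general knowledge" collection and return the
# entity whose passage overlaps the clue the most.

import re
from collections import Counter

STOPWORDS = {"the", "a", "an", "of", "in", "this", "is", "was", "to", "and"}

def content_words(text):
    """Lower-case words from the text, with common stopwords removed."""
    return {w for w in re.findall(r"[a-z]+", text.lower())
            if w not in STOPWORDS}

# Tiny stand-in for the knowledge base built from harvested texts.
corpus = [
    ("Isaac Newton", "Isaac Newton formulated the laws of motion and gravity"),
    ("Albert Einstein", "Albert Einstein developed the theory of relativity"),
    ("Charles Darwin", "Charles Darwin proposed the theory of evolution"),
]

def best_candidate(clue):
    """Score each passage by word overlap with the clue; return the top entity."""
    clue_words = content_words(clue)
    scores = Counter()
    for entity, passage in corpus:
        scores[entity] = len(clue_words & content_words(passage))
    return scores.most_common(1)[0][0]

print(best_candidate("He proposed that species arise through evolution"))
# -> Charles Darwin
```

Naive word overlap illustrates exactly the weakness the article describes: a clue whose pun or inversion shares no surface words with the relevant passage scores zero, which is why real systems fall back on statistical models trained over huge text collections.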
IBM has given little away about how Watson actually performs the task - after all, it still hasn't released the details of how Deep Blue implemented its chess-playing algorithms. It seems likely that it's all done with statistical machine learning, made effective by the huge amount of data available for training. If IBM succeeds in winning, it will have a public relations triumph and some technology it might be able to turn into a commercial question-answering system. Full language understanding, however, is still a long way off.