We hear a lot about the great MOOC experiment in which 160,000 students signed up for an advanced AI course fronted by Sebastian Thrun, but not so much from Peter Norvig, the "other" lecturer involved. Now we have a TED Talk in which he gives his point of view.
Going from 200 students in a traditional lecture hall to 100,000 students online is something of a step change. In this TED Talk Professor Norvig explains how they attempted to make it all feel like a personal tutorial.
He also makes a strong argument for "due dates" as a way to motivate students, and for the value of forums in providing answers. The stats are impressive: 80,000 students watched a video each week and 20,000 completed all the homework. At the end of the course 23,000 certificates of accomplishment were issued. That's a lot of AI education, especially given the level of difficulty. The words "Introduction to" may suggest "easy", but anyone with that expectation was quickly disabused.
Norvig makes some good jokes in the TED Talk, but the fact remains that lecturing techniques haven't changed much since the dark ages, and the new MOOC didn't change them much either. Apart from the presentation medium, the techniques are just updated 16th-century lecturing. Having a lecturer write on a display board, complete with voice-over, seems to ignore the available technology. Some interesting arguments are put forward for the value of the approach, but at the end of the video the point is made that running online courses is a data-gathering opportunity, and that the real advances will come when data-mining techniques are used to prove what works.
I guess we ain't seen nothing yet...
If you missed the first presentation of Norvig and Thrun's Introduction to AI, it is promised to be "online again soon" as part of Udacity's growing portfolio. And if you want to be better prepared for it, sign up for Udacity's ST101, Introduction to Statistics, which can be considered a "prerequisite".