AI Beats Theory And Racing Drone Pilots
Written by Mike James
Sunday, 17 September 2023

Which is better - traditional optimization theory, human pilots or AI reinforcement learning? The answer is, of course, AI, but perhaps not for the reasons you might suppose.

If you have studied any of a range of physics, engineering and math courses you will have encountered control theory, optimization or dynamical systems - the field goes under a number of different names, but the idea is the same. You build a mathematical model of the system you are trying to control or stabilize and then use various methods of solving the model to give you the inputs needed.

For example, if you want to fly a drone through a complex obstacle course you can implement a model of how the drone flies and then solve the equations so that you know how to alter the motor thrusts to fly the given path. This is how robotics in particular has attempted to solve problems and when you see Atlas, or any bipedal robot, walking, it's a miracle of mathematical modeling and control theory.

Of course today there is an alternative - you can train an AI to do the job. In particular, you can use reinforcement learning (RL), which doesn't attempt to model or solve the optimization problem. It simply applies actions to produce behaviors and then modifies the actions so that the behaviors increasingly receive rewards. The RL approach has much in common with the classical optimal control (OC) approach, but recent breakthroughs in AI make it seem the obvious choice - for one thing, we don't have any difficult equations to solve.
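The try-an-action, collect-a-reward, adjust loop is easier to see in code than in prose. Here is a minimal, hypothetical sketch - tabular Q-learning on a toy one-dimensional "course", nothing like the scale of the drone work, but the same idea: no model of the dynamics is ever written down or solved, the agent just updates value estimates from rewards.

```python
import random

random.seed(0)

# Toy environment: positions 0..5 along a 1-D "course"; reaching 5 pays a reward.
N_STATES = 6
ACTIONS = [-1, +1]  # move left or move right

def step(state, action):
    """Apply an action; return (next_state, reward, done)."""
    nxt = min(max(state + action, 0), N_STATES - 1)
    done = nxt == N_STATES - 1
    return nxt, (1.0 if done else 0.0), done

# Q[s][a] estimates the long-term reward of action a in state s.
Q = [[0.0, 0.0] for _ in range(N_STATES)]
alpha, gamma, eps = 0.5, 0.9, 0.1  # learning rate, discount, exploration rate

for episode in range(1000):
    s = 0
    for _ in range(100):  # cap episode length
        # Epsilon-greedy action choice; break ties randomly.
        if random.random() < eps or Q[s][0] == Q[s][1]:
            a = random.randrange(2)
        else:
            a = 0 if Q[s][0] > Q[s][1] else 1
        s2, r, done = step(s, ACTIONS[a])
        # Nudge the estimate toward reward plus discounted best future value.
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
        s = s2
        if done:
            break

# Greedy policy per non-terminal state: 1 means "move right", toward the goal.
policy = [0 if Q[s][0] > Q[s][1] else 1 for s in range(N_STATES - 1)]
print(policy)
```

After training, the greedy policy moves right in every state - the agent has found the optimal behavior purely from rewards, without ever solving the system's equations.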

So which is really best?

A team from the University of Zurich set out to put the RL method to the test by training it to fly a racing drone through an obstacle course. They tried two forms of OC - one worked well and the other failed to complete the course. The RL method was trained on a desktop machine using a simulation in a very short time and was then transferred to the real world without any tweaking. Now watch the video:

So now you know - and you also know that humans are easily beaten by AI. However, the comparison is not quite as fair as it first appears.

The first thing to notice is that the humans only had the input from the camera on the drone itself. They were flying with far less information than the automatic systems, which had access to multiple cameras and location sensors providing the exact position of the drone at all times. Then there is the issue of RL versus OC. The OC system has to fly a path set by a human through the course, whereas the RL system is free to find its own path. This means the RL system explores a much larger space of solutions than the OC system, so naturally it finds something better.

This isn't to say that the RL system isn't better - it is. Allowing the OC system to work out its own path would be a big problem involving functional differentiation and variational methods that would make it much more difficult to set up. The joy of the RL method is that you more-or-less just have to give it the problem and set a reward function and you are done.
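"Set a reward function" sounds abstract, but for a racing task it can be surprisingly simple. A hypothetical sketch - a progress-style reward that pays the drone for closing the distance to the next gate and penalizes crashes; this is an illustration of the idea, not the reward actually used in the Zurich paper:

```python
import math

def progress_reward(prev_pos, pos, gate, crashed=False):
    """Illustrative racing reward.

    prev_pos, pos, gate are (x, y, z) tuples in meters. The reward is how
    much closer the drone got to the gate this timestep, minus a large
    penalty if it crashed. The RL algorithm needs nothing more than this
    scalar signal - no model of the drone's dynamics.
    """
    def dist(a, b):
        return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))

    reward = dist(prev_pos, gate) - dist(pos, gate)  # positive if the gap shrank
    if crashed:
        reward -= 10.0  # hypothetical crash penalty
    return reward

# Moving from (0,0,0) to (1,0,0) toward a gate at (5,0,0) closes 1 m of gap.
print(progress_reward((0, 0, 0), (1, 0, 0), (5, 0, 0)))  # → 1.0
```

Notice that the function never says *how* to fly - the shape of the trajectory, including which path to take through the course, is left entirely to the learner, which is exactly why RL explores a larger solution space than OC tracking a prescribed path.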

There goes another human activity that AI is better at than us.

I just hope they remember who programmed them.

Reaching the Limit in Autonomous Racing: Optimal Control Versus Reinforcement Learning - Yunlong Song, Angel Romero, Matthias Müller, Vladlen Koltun and Davide Scaramuzza, University of Zurich


