AI Beats Human Pilot In DARPA Dogfight
Written by Nikos Vaggalis   
Wednesday, 02 September 2020

This momentous event happened in a competition organized by the U.S. Defense Advanced Research Projects Agency (DARPA) to test autonomous air-to-air combat systems. Should we be surprised by yet another indication of AI superiority?


In the qualifying stages of the AlphaDogfight Trials, staged in August 2020, AIs competed against other AIs. The winner of those rounds would go on to face a human ace, an operational fighter pilot with many hours of flight under his belt. Eight contractors' AIs competed against each other, with Heron Systems taking the spoils. Its AI then went on to beat the human pilot 5-0.

To me it's no wonder that a machine has beaten a human in aerial warfare. Computers already play an all-important role in modern aircraft. An F-117, for instance, is so aerodynamically unstable because of its strange shape that it cannot be flown unaided; it needs a farm of dedicated processors constantly working in parallel, making the adjustments that keep it in the air. A combat aircraft can thus be regarded as the recipient of a mass of data collected from sensors, radar and communications signals, which it has to present to the pilot, on the HUD or on the cockpit instruments, so that he can process it and decide what to do next. And who is better at collecting and processing data, the human or the computer?

Although a computer used as a mere calculator can't make decisions, advances in AI have made it capable of reaching intelligent decisions too. So if a computer is already better than humans at collecting and processing data, and is now also capable of making decisions by itself, why not get rid of the human element, and with it the errors that humans are so prone to?

While Heron's AI was not allowed to "cheat", that is, to go beyond the capabilities of a real F-16, for instance by pulling unreasonable G-forces, it was not limited by the training that human pilots receive and could therefore outsmart them. The fight involved only nose guns, not missiles, a hardcore close-quarters battle, WWII-style.


As far as the algorithm itself goes, it's a child of deep reinforcement learning, having been put through billions of simulated dogfights.
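To give a flavour of what "deep reinforcement learning over billions of simulated dogfights" means in practice, here is a minimal, purely illustrative sketch in Python. The ToyDogfightEnv environment, its actions and its reward scheme are invented for this article; Heron's actual system is far more elaborate and replaces the simple Q-table below with deep neural networks, but the learn-by-trial-and-error loop is the same basic idea.

import random

# A toy, hypothetical stand-in for a dogfight simulator. The real AlphaDogfight
# agents trained in far richer simulations; this only illustrates the loop.
class ToyDogfightEnv:
    ACTIONS = ["turn_left", "turn_right", "climb", "dive", "fire"]

    def reset(self):
        # State: (positional advantage bucket, angle-off-target bucket), both 0..4
        self.state = (random.randint(0, 4), random.randint(0, 4))
        return self.state

    def step(self, action):
        advantage, aspect = self.state
        if action == "fire":
            # Guns only pay off when well positioned and pointing at the target
            reward = 1.0 if advantage >= 3 and aspect <= 1 else -0.2
        else:
            advantage = min(4, max(0, advantage + random.choice([-1, 0, 1])))
            aspect = min(4, max(0, aspect + random.choice([-1, 0, 1])))
            reward = 0.0
        self.state = (advantage, aspect)
        return self.state, reward, reward > 0  # episode ends when a shot connects

# Tabular Q-learning: deep RL swaps this lookup table for a neural network,
# but the update rule (reward plus discounted best future value) is the same.
q = {}
alpha, gamma, epsilon = 0.1, 0.9, 0.1
env = ToyDogfightEnv()

def value(state, action):
    return q.get((state, action), 0.0)

for episode in range(10_000):             # the real systems ran billions of fights
    state = env.reset()
    for _ in range(50):
        if random.random() < epsilon:      # explore occasionally
            action = random.choice(env.ACTIONS)
        else:                              # otherwise exploit what has been learned
            action = max(env.ACTIONS, key=lambda a: value(state, a))
        next_state, reward, done = env.step(action)
        best_next = max(value(next_state, a) for a in env.ACTIONS)
        q[(state, action)] += alpha * (reward + gamma * best_next - value(state, action)) if (state, action) in q else 0
        q[(state, action)] = value(state, action) + alpha * (reward + gamma * best_next - value(state, action))
        state = next_state
        if done:
            break

After enough episodes the table favours manoeuvring into position before firing, which is all "learning to dogfight" amounts to in this stripped-down setting.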

It's not the first time that I Programmer has reported on such algorithmic-over-human superiority in aerial warfare. Back in July 2016, in "Achieving Autonomous AI Is Closer Than We Think", we introduced ALPHA, a comparatively rudimentary AI agent that was also capable of outperforming human pilots on flight simulators:

What's remarkable is that ALPHA, despite being a complex piece of engineering, does not require the clustered power of a super computer that most AI powered solutions of today do, but can successfully operate on a Raspberry Pi.

The same article noted:

The subject pilot, retired United States Air Force Colonel Gene Lee who possesses extensive aerial combat experience as an instructor and Air Battle Manager, described his experience as:

“the most aggressive, responsive, dynamic and credible AI I’ve seen to date.”

For the record, the algorithm employed went by the name of "Genetic Fuzzy Tree". Its specialty was splitting a complex decision, such as "is the time right to fire a missile?", into smaller, manageable sub-decisions which, coupled together, would produce optimal solutions to the if/then scenarios resulting from each combination of input data.
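The same sub-decision idea can be shown in a few lines of Python. This is a toy illustration, not ALPHA's actual code: the membership functions, thresholds and inputs below are invented for the example, and in a real Genetic Fuzzy Tree their shapes would be tuned by a genetic algorithm rather than written by hand.

# Break "is the time right to fire a missile?" into fuzzy sub-decisions.
# Each sub-decision returns a degree of truth between 0.0 and 1.0.

def fuzzy_in_range(distance_km):
    # 1.0 when well inside the engagement envelope, falling off to 0.0
    if distance_km <= 10:
        return 1.0
    if distance_km >= 40:
        return 0.0
    return (40 - distance_km) / 30

def fuzzy_good_aspect(aspect_deg):
    # Best shot when pointing straight at the target (0 degrees off boresight)
    return max(0.0, 1.0 - aspect_deg / 60)

def fuzzy_low_threat(missiles_inbound):
    return 1.0 if missiles_inbound == 0 else 0.3

def should_fire(distance_km, aspect_deg, missiles_inbound, threshold=0.6):
    # Combining the sub-decisions (here with the minimum) gives the overall answer
    confidence = min(
        fuzzy_in_range(distance_km),
        fuzzy_good_aspect(aspect_deg),
        fuzzy_low_threat(missiles_inbound),
    )
    return confidence >= threshold, confidence

# Example: close, nearly head-on, no inbound missiles -> fire
print(should_fire(distance_km=12, aspect_deg=10, missiles_inbound=0))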

The difference between ALPHA and the current achievement is that back then the fight involved squadrons, which meant that multiple AIs had to cooperate with each other. It was also a tactical mission in which firing missiles was allowed. The human-controlled squadron was beaten in a battle of strategy, whereas this time it was a battle of close-quarters combat.

In the intervening period the algorithms have become more powerful, but they have still not reached a production-ready state. The US Department of Defense is, for the time being, not looking to eliminate pilots from flying the planes, but is investigating the idea of humans teaming up with AI, switching the AI on, like an autopilot, when necessary (when things get tough?).

That said, the battle took place in a simulated environment, albeit one as close to the real thing as modern combat simulators come. We don't know for sure what would have happened in the real world, and there has been some speculation that the AI gamed the system. This was also an issue we detailed in "Autonomous Robot Weaponry - The Debate":

Lastly the talk got to the current state of robot AI, with Alan Winfield noting that the robots now in existence are not as smart as we think they are. Placed outside the lab's controlled environment, they are like fish out of water, behaving chaotically and making mistakes.

This view is strongly opposed by Stuart Russell, who noted that the ingredients for autonomy already exist. Maneuverability is clearly demonstrated by self-driving cars, which can detect walls, houses and humans, while tactical thinking and perception are also demonstrated by computers beating humans at the game of chess.

After all, they do not need to be 100% accurate, since their purpose is to wreak havoc, something achievable with much lower levels of accuracy.

Still, even if those AIs were production-ready, the decision to go full-on AI is not an easy one to make. Again in "Autonomous Robot Weaponry - The Debate" we considered the implications of such a decision:

...  we should not demonize everything called a robot or weapon but use classifications that separate them according to their level of automation.

The very first level embodies the human controlled robots, which already play a significant role in the battlefields of today like those cleaning mines, disabling bombs or offering medical aid.

The next layer embodies the semi-autonomous weapons which, despite lifting some of the risk off the human operator, can't assume full responsibility since the human operator still has the final say. Examples of this kind of weapon are fire-and-forget air-to-air missiles or aircraft which automatically lock onto their targets. Pressing the button to launch such missiles is still the pilot's responsibility...

Clearly the danger lurks in the third level, that of the fully autonomous robots where no human intervention is required. These robots are able to act on their own and accomplish their mission without the emotional burden or ethical reservations there to stop them.

The question now shifts to whether you can inject ethics into a machine, whether you can ensure that it obeys the rules of war and whether you can guard against it turning on its maker in a moment of self-awareness.


More Information

AlphaDogfight Trials Go Virtual for Final Event

 

Related Articles

Achieving Autonomous AI Is Closer Than We Think

Autonomous Robot Weaponry - The Debate

Artificial Intelligence For Better Or Worse?

 

 






 


Last Updated ( Wednesday, 02 September 2020 )