Autonomous Robot Weaponry - The Debate
Written by Nikos Vaggalis   
Monday, 01 February 2016

Let's face it, robots in one form or another seem to be everywhere. Having infiltrated factories, healthcare, marine and space exploration, robots will soon conquer the household too, taking care of all those pesky chores. But what about their military applications? Can they be used to form an army? Further, what about that army being autonomous, able to locate and attack targets without any human intervention?

Do not confuse this use of robots with drones and Unmanned Aerial Vehicles (UAVs), which are (still) controlled by humans.

 


 

The idea of armed robots might sound far-fetched, but the discussion that took place at the World Economic Forum in Davos proves the exact opposite. That the topic needs to be debated in such a high-profile forum can only mean that the issue is real, or will become so in the very near future.

The panel comprised eminent scientists, policy makers and industry representatives: Stuart Russell, professor of computer science at Berkeley; Angela Kane, former UN High Representative for Disarmament Affairs; Sir Roger Carr, chairman of BAE Systems; and roboticist Alan Winfield.

 

The debate commenced by asking whether autonomous machines can follow the conventions of war, given that even in war there are rules to be followed. Examples of such rules are that civilians should not be killed, that an attack must be justified as necessary, and that pilots bailing out of a stricken airplane should not be shot.

Can a robot be constructed to sense when a rule is about to be violated and abide by it?
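
To make the question concrete, here is a minimal sketch in Python of what such a rule check might look like. It is purely illustrative; every name in it is invented for this article, and no real weapon system is known to work this way:

# Hypothetical sketch of a rules-of-engagement check.
# All names here are invented for illustration only.
from dataclasses import dataclass

@dataclass
class Target:
    is_combatant: bool         # distinction: civilians may not be targeted
    is_surrendering: bool      # e.g. a pilot bailing out of a stricken aircraft
    attack_is_necessary: bool  # the attack must be justified as necessary

def engagement_is_lawful(target: Target) -> bool:
    """Permit engagement only if every encoded rule of war is satisfied."""
    return (target.is_combatant
            and not target.is_surrendering
            and target.attack_is_necessary)

The check itself is trivial; reliably deriving those three booleans from noisy sensors on a chaotic battlefield is anything but, and that is where the real question lies.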

The dilemmas and ponderings continued, mixing fact and fiction, in weighing a robotic army against the currently deployed human one. Operating and maintaining an army of robots is much easier and much cheaper than one made up of humans, as robots do not need to eat, drink, sleep or satisfy any other such requirements. Controlling them is also much easier; you just have to write the right piece of code.

This consequently grants an enormous amount of power to those who can develop such an army, enabling them to rewrite the rules and shift the balance of power on a global scale.

The danger is that these power holders do not have to be democratically elected governments, but could be anyone with the right amount of resources and technology. This includes corporations, organizations or even individuals with unknown intentions.

These are the reasons that prompted many scientists in the AI community, Stuart Russell being the main protagonist, to compile an open letter addressed to the world's leaders requesting a ban on such weapons.

The counter opinion to such a ban, expressed by Sir Roger Carr, was that we should not demonize everything called a robot or a weapon, but should instead classify them according to their level of automation.

The first level comprises the human-controlled robots, which already play a significant role on the battlefields of today, clearing mines, disabling bombs or offering medical aid.

The next level comprises the semi-autonomous weapons which, despite lifting some of the risk off the human operator, cannot assume full responsibility, since the operator still has the final say. Examples of this kind of weapon are fire-and-forget air-to-air missiles, or aircraft that automatically lock onto their targets; pressing the button that launches such missiles is still the pilot's responsibility.

Clearly the danger lurks in the third level, that of the fully autonomous robots, where no human intervention is required. These robots are able to act on their own and accomplish their mission without any emotional burden or ethical reservations to stop them.
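
Expressed as a data structure, Carr's classification might look like the following Python sketch; the names are, again, invented purely for illustration:

# Hypothetical sketch of the three levels of automation discussed above.
from enum import Enum

class AutonomyLevel(Enum):
    HUMAN_CONTROLLED = 1  # mine clearing, bomb disposal, medical aid
    SEMI_AUTONOMOUS = 2   # fire-and-forget missiles; a human still fires them
    FULLY_AUTONOMOUS = 3  # locates and attacks targets on its own

def human_has_final_say(level: AutonomyLevel) -> bool:
    """Only at the third level is the human removed from the decision."""
    return level is not AutonomyLevel.FULLY_AUTONOMOUS

It is only the transition from the second value to the third that the debate is really about.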

The good news is that, despite 40 countries working towards their creation and a multi-billion dollar market waiting to be satisfied, there's still a lot to be done before we reach the level of technology required to produce such a fully autonomous machine. Of course, this doesn't preclude it happening at some point in time.

 

 

According to Angela Kane, the foundations for such a development have already been laid, and the international community has to step up either to regulate or to ban it.

She also brings up the important, but sidelined, issue of desensitization, arising from the fact that today's warfare can be carried out from an office miles away from the battlefield, making it look like a video game and thus removing any sense of the human cost.

She further explains that to regulate such an affair and reach a mutual understanding, all stakeholders (states, industry and research institutions) must participate, regardless of how far they have advanced in this technology, as it's an issue that affects the whole of mankind and not just individual nations.

It goes without saying that such a potentially dangerous technology has to be tightly regulated and controlled so that it doesn't end up in the wrong hands. But that looks like an increasingly difficult task because of its availability; the resources needed to build a robotic weapon are much easier to find and obtain than those needed, say, for the construction of a nuclear weapon, hence it's much more difficult to restrict its use. That aside, another obstacle to crafting any sort of legislation is that legislators must be capable of comprehending the technology and its consequences, as well as being able to sense when the time is right to intervene to prevent or minimize any damage done.

But sadly, as we've already pointed out in the article OpenFace – Face Recognition For All, the law can't keep up with technological advancement, leaving a huge and exploitable gap behind.

Lastly the talk got to the current state of robot AI, with Alan Winfield noting that the robots now in existence are not as smart as we think they are. Placed outside the lab's controlled environment, they are like fish out of water, behaving chaotically and making mistakes. This view was strongly opposed by Stuart Russell, who noted that the ingredients for autonomy already exist. Manoeuvrability is clearly demonstrated by self-driving cars, which can detect walls, houses and humans, while tactical thinking and perception are demonstrated by computers beating humans at the game of chess. After all, such machines do not need to be 100% accurate, as their purpose is to wreak havoc, something achievable with much lower levels of accuracy.

 

 

In the end, does it matter if the robots of today are not capable of full autonomy?

I think that not only has the scene been set, we are also moving in a direction that will soon pose a challenge that humanity as a whole will have to address.

Judging by how things fare when we collectively confront issues of global scale, such as global warming, the extinction of species and pollution, things start to look very pessimistic indeed.

Ultimately the deeper problem lies in humanity's infatuation with power, a quest that drives people to reach for any weapon at their disposal. Looked at from this angle, the building of such weapons, and their consequences, seems inevitable, unless the UN steps forward and acts without hesitation.

 


More Information

What If: Robots Go to War?

Intelligent robots don't need to be conscious to turn against us

South Korea's autonomous robot gun turrets: deadly from kilometers away

Robotics: Ethics of artificial intelligence

Robots that fly ... and cooperate

A Swarm of a Thousand Cooperative, Self-Organising Robots

The most dangerous WiFi device: $22,000 sniper rifle that aims itself

Related Articles

AI Researchers Call For Ban On Autonomous Weapons 

Halting Problem Used To Prove A Robot Cannot Computably Kill A Human    

Robot Killer To Defend Great Barrier Reef 

 


 

Last Updated ( Sunday, 31 January 2016 )