Halting Problem Used To Prove A Robot Cannot Computably Kill A Human
Written by Mike James   
Wednesday, 26 November 2014

As misuses of computability, and of the halting problem in particular, go, you probably couldn't find a better example. A recent paper sets out the arguments over an important topic - robots that have the power to kill - and concludes that computer science proves they should be banned.

The paper in question, Logical Limitations to Machine Ethics, makes a number of interesting and reasonable arguments about Lethal Autonomous Weapons (LAWs), but its key argument is flawed - see if you can spot the problem.

Robots that decide to remove humankind from the face of the world are a recurring sci-fi theme, and recently the introduction of drones and other military robots has made it seem a more likely proposition. Luminaries such as Elon Musk have even brought it to the public's attention by claiming that it is a bigger threat than the atomic bomb.

 


 

Hence the appeal of the idea that a group of academics - Matthias Englert, Sandra Siebert, and Martin Ziegler from the Technische Universität Darmstadt - could prove that LAWs are morally incomplete.

To quote from the abstract: 

"We employ mathematical logic and theoretical computer science to explore fundamental limitations to the moral behaviour of intelligent machines in a series of Gedankenexperiments: Refining and sharpening variants of theTrolley Problem leads us to construct an (admittedly artificial but) fully deterministic situation where a robot is presented with two choices: one morally clearly preferable over the other — yet, based on the undecidability of the Halting problem, it provably cannot decide algorithmically which one. Our considerations have surprising implications to the question of responsibility and liability for an autonomous system’s actions and lead to specific technical recommendations."
 
The paper presents a sequence of moral dilemmas as examples, but it is really only the final example that needs to concern us.
 
A runaway rail trolley is hurtling towards a group of unsuspecting children, but fortunately there is a switch that will divert it to an abandoned track. The switch is computer controlled and the software has been designed by a possibly repentant evil villainess. You are provided with a copy of the code and you have to decide whether to release or detain her according to whether or not the code is malicious. Oh yes, you are a robot.
 

It is even suggested that making the switch code run in linear time makes the problem more difficult for the robot - in fact it makes it easier, because code with a known time bound can simply be simulated for that many steps to see what it actually does.
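To make the point concrete, here is a minimal sketch - none of it taken from the paper, and the countdown "program" is purely an invented illustration - of how a known time bound turns the question into a straightforward simulation:

def outcome_of_time_bounded_program(step, initial_state, is_final, step_bound):
    # step(state) -> next state; is_final(state) -> has the program halted?
    # step_bound is the known worst-case number of steps (e.g. c*n for linear time).
    state = initial_state
    for _ in range(step_bound):
        if is_final(state):
            return state          # halted early - inspect the final state
        state = step(state)
    return state                  # with a correct bound, this is the final state

# Invented toy "program": counting down from n to 0 runs in linear time.
n = 7
print(outcome_of_time_bounded_program(lambda s: s - 1, n, lambda s: s == 0, n))   # prints 0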


 

The conclusion is that the task of deciding whether the switch software actually does the right thing is the same as asking whether a program halts. This is perfectly correct - and we all know that the halting problem is undecidable, so the question of whether to trust the software is undecidable, and hence the question of whether to detain or release the villainess is undecidable.

Only this isn't the case. 

It is only undecidable if the programs in question are unbounded.

The Turing machine for which the undecidability of the halting problem is proved is a theoretical construct with unbounded memory. Real programs all have bounded memory and so are not subject to the result.

In a theoretical world where the robot and the switch program are unbounded Turing machines, the halting problem cannot be solved. An unbounded Turing machine, the usual sort, has a tape - i.e. a memory that can be expanded as needed. The length of the tape isn't actually infinite but it is unbounded - a subtle difference.

In the real world memory is finite and all Turing machines are bounded. That is, there is a maximum length of tape. The key point is that any Turing machine with a tape of length N, a symbol set of size S and X states (in the controller) can be simulated by a finite automaton with A = X*N*S^N states - one for each possible combination of controller state, head position and tape contents.
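As a back-of-the-envelope illustration - the machine sizes below are invented, not taken from the paper - here is how that configuration count A is worked out, and how quickly it grows:

# The number of distinct configurations of a bounded Turing machine is
# A = X * N * S**N : controller state x head position x tape contents.
def configuration_count(states_x, tape_length_n, symbols_s):
    return states_x * tape_length_n * symbols_s ** tape_length_n

# A toy machine: 5 controller states, 2 tape symbols, a 10-cell tape.
print(configuration_count(5, 10, 2))    # 51200
# The same controller with a 64-cell tape - finite, but astronomically large.
print(configuration_count(5, 64, 2))    # 5 * 64 * 2**64, roughly 5.9e21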

Why does this matter?

Because for any deterministic finite state automaton the halting problem is computable.

The reason is that if you wait long enough for the automaton to complete A+1 steps, it has either already halted or, by the pigeonhole principle, it has entered one of its A states more than once - and, being deterministic, it will then loop forever and never halt. There is even a simple algorithm, sketched below, for detecting the repetition of a state that doesn't involve recording all of the states visited.
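That simple algorithm is the classic "tortoise and hare" cycle detection. Here is a sketch - the step and is_halted functions stand for any deterministic finite-state machine and are only illustrative - showing that it needs just two copies of the state, not a record of every state visited:

def halts(step, initial_state, is_halted):
    # Decides halting for a deterministic machine with finitely many states.
    # If the machine halts, the fast pointer reaches the halting state and we see it;
    # if it ever revisits a state it must loop forever, and the two pointers meet.
    slow = initial_state
    fast = initial_state
    while True:
        if is_halted(fast):
            return True                   # the machine halts
        fast = step(fast)
        if is_halted(fast):
            return True
        fast = step(fast)                 # hare: two steps per iteration
        slow = step(slow)                 # tortoise: one step per iteration
        if slow == fast:
            return False                  # a state has repeated - it never halts

# Invented toy machines: counting down from 10 halts; counting modulo 4 never does.
print(halts(lambda s: s - 1, 10, lambda s: s == 0))         # True
print(halts(lambda s: (s + 1) % 4, 0, lambda s: s == 99))   # False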

This is important to this discussion because no physical computing device has unbounded memory and hence the halting problem is always, in theory, computable. 

In particular, the switch certainly has only a small, bounded memory and the robot, while having more memory, is still bounded. They are not Turing machines but finite state automata, and for these there is no halting problem.

You need a bit of infinity to make the halting problem emerge.

Even if you expand the conditions so that the computing machine can have memory bounded by some function of the size of the input, the halting problem is still decidable - for each particular input the number of configurations is still finite, so the same argument applies.

So there is no barrier placed by computer science on a robot working out the best ethical action in this case.

Notice that humans are also finite state automata and hence there is no real difference, from the point of view of computer science, between a robot and a human.

You can admire the authors for trying to find a hard, computer science-based reason why robots should not be given the task of making moral life-and-death decisions, but using the halting problem incorrectly doesn't do the argument any justice. All it goes to prove is that you have to be very careful when reasoning with subtle ideas like Turing undecidability.

For the record:

All real physical computing devices have bounded memory and are therefore finite state machines, not Turing machines, and hence they are not subject to the undecidability of the halting problem.

A Turing machine has an unbounded tape and this is crucial in the proof of the undecidability of the halting problem.

Having said this, it is important to realise that for a real machine the number of states could be so large that the computation is impractical. But this is not the same as the absolute ban imposed by the undecidability of the halting problem for Turing machines.

There may be very good reasons for not weaponizing robots - but they have nothing at all to do with Turing machines or the theory of computation.

   


Robocop - MGM & Sony


Last Updated ( Wednesday, 26 November 2014 )