Friday, October 01, 2010

Edwards' Three Laws for Roboticists




A while back I blogged about how delighted I was to have been invited by the EPSRC to a retreat to discuss robot ethics, along with a dozen and a half or so other experts drawn not just from robotics and AI itself but also from industry, the arts, media and cultural studies, performance, journalism, ethics, philosophy, psychology - and, er, law (i.e., moi).

The retreat was this week, and two and a half days later, Pangloss is reeling with exhaustion, information overload, cognitive frenzy and sugar rush :-) It is clear this is an immensely fertile field of endeavour, with huge amounts to offer society. But it is also clear that society (not everywhere - cf Japan - and not everyone - some kids adore robots - but in the UK and US at least) has an inherited cultural fear of the runaway killer robot (Skynet, Terminator, Frankenstein, yadda yadda), and needs a lot of reassurance about the worth and safety of robots in real life, if we are to avoid the kind of moral panics and backlashes we have seen around everything from GM crops to MMR vaccinations to stem cell research. (Note I have NOT here used a picture of either Arnie or Maria from Metropolis, those twin peaks of fear and deception.)

Why do we need robots at all if we're that scared of them, then? Well, robots are already being used to perform difficult, dirty and dangerous tasks that humans do not want to do, do not do well, or could not do without coming to harm, e.g. in polluted or lethal environments such as space or the deep sea. (What if the Chilean miners had been robots? They wouldn't now be asking for cigarettes and alcohol down a tube...)

Robots are also being developed to give basic care in home and care environments, such as providing limited companionship and doing menial tasks for the sick, the housebound or the mentally fragile. We may say (as Pangloss did initially) that we would rather these tasks were performed by human beings as part of a decent welfare society: but with most of the developed world facing lower birth rates and a predominantly ageing population, combined with a crippling economic recession, robots may be the best way to assure our vulnerable a bearable quality of life. They may also give the vulnerable more autonomy than having to depend on another human being.

And of course the final extension of the care-giving robot is the famous sexbot, which might provide a training experience for the scared, or blessed contact for the disabled or unsightly - or might introduce a worrying objectification/commodification of sex, and sex partners, and an acceptance of the unacceptable, like rape and torture, into our society.

Finally, and most controversially, robots at the cutting edge are to a very large extent being funded by military money. This is good, because robots in the frontline don't come back in body bags - one reason the US is investing so extensively. But it is also bad, because if humans on one side of the frontline don't die, we may not stop and think twice before launching wars, which in the end will cause collateral damage for our own people as well as risk imposing devastating casualties on human opposition from less developed countries. We have to be careful to avoid robots making war too "easy" (for the developed-world side, not the developing, of course - robots so far at least are damn expensive).

Three key messages came over:

- Robots are not science fiction. They already exist in their millions and are ubiquitous in the developed world: robot hoovers, industrial robots in car factories, and care robots now being rolled out even in UK hospitals, e.g. in Birmingham. However, we are at a tipping point, because until now robots of any sophistication have mostly been segregated from humans, e.g. in industrial zones. The movement of robots into home, domestic and care environments, sometimes interacting with the vulnerable - children and the elderly especially - brings with it a whole new layer of ethical issues.

- Robots are mostly not humanoid. Again, science fiction brings with it a baggage of human-like robots like Terminators, or, even more controversially, sex robots or fembots as celebrated in Japanese popular culture and Buffy. In fact there is little reason why robots should be entirely humanoid, as it is damn difficult to do - although it may be very useful for them to mimic, say, a human arm or eye, or to have mobility. One development we talked a lot about was the military application of "swarm" robots. These resemble a large number of insects far more than they do a human being. Other robots may not resemble anything organic at all.

- But robots are still something different from ordinary "machines" or tools or software. First, they have a degree of mobility and/or autonomy. This implies a degree of sometimes threatening out-of-control-ness. Second, they mostly have the capacity to learn and adapt. This has really interesting consequences for legal liability: is a manufacturer liable in negligence if it could not "reasonably foresee" what its robots might eventually do after a few months in the wild?

Third, and perhaps most interestingly, robots increasingly have the capacity to deceive the unwary (e.g. dementia patients) into believing they are truly alive, which may be unfortunate (would you give an infertile woman a robot baby which will never grow up? would you give a pedophile a sex robot that looked like a child to divert his antisocial urges?). Connectedly, they may manipulate the emotions and alter behaviour in new ways: we are used to kids insisting on buying an entire new wardrobe for Barbie, but what about when they pay more attention to their robot dog (which needs nothing except to be plugged in occasionally) than their real one, so that it starves to death?

All this brought us to a familiar place: wondering if it might be a good start to consider rewriting Asimov's famous Three Laws of Robotics. But of course Asimov's laws are - surprise!! - science fiction. Robots cannot, and in the foreseeable future will not, be able to understand, act on, be forced to obey, and most importantly reason with, commands phrased in natural language. But - and this came to me lit up like a conceptual lightbulb dipped in Aristotle's imaginary bathtub - those who design robots - and indeed buy them and use them and operate them and modify them - DO understand law and natural language, and social ethics. Robots are not subjects of the law, nor are they responsible agents in ethics; but the people who make them and use them are. So it is laws for roboticists we need - not laws for robots. (My thanks to the wonderful Alan Winfield of UWE for that last bit.)

So here are my Three Laws for Roboticists, as scribbled frantically on the back of an envelope. To give context, we then worked on these rules as a group, particularly a small sub-group including Alan Winfield, as mentioned above, and Joanna Bryson of the University of Bath, who added two further rules relating to transparency and attribution (I could write about them, but this is already too long!).

It seems possible that the EPSRC may promote a version of these rules, both in my more precise "legalese" form and in a simpler, more public-communicative style, with commentary: not, obviously, as "laws" but simply as a vehicle to start discussion about robot ethics, both in the science community and with the general public. It is an exciting thing for a technology lawyer to be involved in, to put it mildly :)

But all that is to come: for now I merely want to stress this is my preliminary version and all faults, solecisms and complete misunderstandings of the cultural discourse are mine, and not to be blamed on the EPSRC or any of the other fabulously erudite attendees. Comments welcome though :)

Edwards' Three Laws for Roboticists

1. Robots are multi-use tools. Robots should not be designed solely or primarily to kill, except in the interests of national security.

2. Humans are responsible for the actions of robots. Robots should be designed & operated as far as is practicable to comply with existing laws & fundamental rights and freedoms, including privacy.

3. Robots are products. As such they should be designed using processes which assure their safety and security (which does not exclude their having a reasonable capacity to safeguard their integrity).


My thanks again to all the participants for their knowledge and insight (and putting up with me talking so much), and in particular to Stephen Kemp of the EPSRC for organising and Vivienne Parry for facilitating the event.


Phew. Time for t'weekend, Pangloss signing off!

5 comments:

Ren Reynolds said...

1. Robots are multi-use tools. Robots should not be designed solely or primarily to kill, except in the interests of national security.

Definition – may need something that defines the difference between a machine and a robot, e.g. is a watch a robot? Is a remote-control drone?

This rules out:
- Automated abattoirs
- Assisted suicide machines
- Execution machines
Depending on the definition, each of the above already exists.


2. Humans are responsible for the actions of robots. Robots should be designed & operated as far as is practicable to comply with existing laws & fundamental rights and freedoms, including privacy.

Not sure how this unpacks in terms of emergent behaviour and/or unintended consequences.

The problem that comes up is that as we advance we get systems where it's not reasonable to place the burden of moral (and thus, I presume, legal) responsibility on any given individual (possibly even a corporate entity). This can be for a number of reasons, one being that the robot is used in a scenario that could not reasonably be predicted as a use case, and, on the other hand, could not reasonably be predicted (by the user) as a non-use case. In such circumstances either there is no moral agent, or we start to think of the robot as the agent. This might sound out there; however, there are literatures both on moral agency in a determined universe and on moral agency in a zombie universe, i.e. beings with no self-consciousness; thus for moral philosophers robots are not much of a step.


3. Robots are products. As such they should be designed using processes which assure their safety and security (which does not exclude their having a reasonable capacity to safeguard their integrity).

Sure – and people have been developing methodologies for the ethical creation of IT systems. However, as noted above, the interesting bits occur when things fall outside what's reasonable. E.g. one might argue that a robot has to have an operating system, but given the complexity of such systems it is, as a matter of current practice, impossible to create one without error – so do we say we can't create a robot until such time as we can prove that the operating system will be error-free, so there can never be an incident? If so, why are we applying a higher burden of proof than we do for, say, planes or cars – where the consequences are greater? On the one hand there are the benefits we might get from such an unequal rule applied to robots; on the other, the negative consequences if we applied the higher bar to cars.

phbradley said...

Can existing law really not cater for what you are proposing to cover with those flawed 'laws'? One need merely make designers or owners vicariously liable for the deeds of their bots, and tort, contract and human rights flow into every person's interactions with the bot from then on, no?

pangloss said...

@phbradley Yes, they don't say anything new that existing law doesn't already (although the two we added, which I don't list, do, a little). But the point is these "laws" aren't to advise lawyers (or to be real legislation) - but to be a blueprint for designers and users - and, above all, to reassure the public. Call them a charter or a code of conduct if you like, rather than laws. I agree that existing law already has adequate models for robot liability in negligence, product liability, animal liability etc. - though the actual results as things stand may need tweaking - and I want to do some research into that next.