I can so use this for the EPSRC Robotics Retreat I am going to in September!! (via io9 with thanks to Simon Bradshaw)
Another, slightly more legal, bit of robotics that's been doing the rounds is this robots.txt file from the last.fm site. Robots.txt files, for the non-techies, are small text files which give instructions to software agents, or bots, as to what they are allowed to do on a site. Most typically, they tell Google and other search engines whether or not they are allowed to make copies of the site ("spider" it). No prize at all for the first person to realise what famous laws the last three lines are implementing :-)
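For the curious, here is a quick sketch of how a well-behaved bot actually consumes a robots.txt, using Python's standard-library parser. The file below is my own illustrative mock-up in the spirit of the last.fm joke, not their actual file:

```python
from urllib.robotparser import RobotFileParser

# A hypothetical robots.txt: one ordinary crawl rule, plus three
# suspiciously familiar "laws" tacked on at the end.
robots_txt = """\
User-agent: *
Disallow: /private/
Disallow: /harming/humans
Disallow: /ignoring/human/orders
Disallow: /harm/to/self
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

# A compliant crawler checks each URL against the rules before fetching it.
print(parser.can_fetch("GoogleBot", "http://example.com/music"))           # True
print(parser.can_fetch("GoogleBot", "http://example.com/harming/humans"))  # False
```

Note that robots.txt is purely advisory - nothing actually stops a badly behaved bot from ignoring it, which is rather part of the joke.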
This all raises a serious underlying question (sorry) which is, how should the law regulate robots? We already have a surprising number of them. Of course it depends what you call a robot: Wikipedia defines them as "an automatically guided machine which is able to do tasks on its own, almost always due to electronically-programmed instructions".
That's pretty wide. It could mean the software agents or bots that, as discussed above, spider the web, make orders on auction sites like eBay, collect data for marketing and malware purposes, learn about the market, etc - in which case we are already awash with them.
Do we mean humanoid robots? We are, of course, getting there too - see eg the world's leading current example, Honda's ASIMO, which might one day really become the faithful, affordable, un-needy helpmate of our 1950s Campbellian dreams. (Although what happens to the unemployment figures then?? I guess not that much of the market is blue-collar labour any more?) But we also already live in a world of ubiquitous non-humanoid robots - such as, in the domestic sector, the fabulous Roomba vacuum cleaner, beloved of geeks (and cats); in industry, automated car assembly robots (as in the Picasso ads); and, of course, the emergent military robots.
Only a few days ago, the news broke of the world's alleged first robot to feel emotions (although I am sure I heard of research prototypes of this kind at Edinburgh University's robotics group years back). Named Nao, the machine is programmed to mimic the emotional skills of a one-year-old child.
"When the robot is sad it hunches its shoulders forward and looks down. When it is happy, it raises its arms, looking for a hug. When frightened, it cowers and stays like that until it is soothed with gentle strokes on the head. The relationships Nao forms depend on how people react and treat it, and on the amount of attention they show."
Robots of this kind could be care-giving companions not only for children, but perhaps also in the home, or in care homes for lonely geriatrics and long-term invalids, whose isolation is often crippling. (Though again I Ludditely wonder if it wouldn't be cheaper just to buy them a cat.)

Where does the law come into this? There is of course the longstanding fear of the Killer Robot: a fear which Asimov's famous First Law of robotics was designed to repel. (Smart bombs are another possibility which already, to some extent, exist - oddly they don't seem to create the same degree of public distrust and terror, only philosophical musings in obscure B-movies.) But given that general-purpose, ambulant, humanoid, artificially intelligent robots are still very much in the lab, only Japan seems so far to have even begun to think about creating rules securing the safety of "friendly AIs" in real life, and even there a Google search shows no further progress since draft guidelines were issued in 2007.
But the real legal issues are likely to be more prosaic, at least in the short term. If robots do cause physical harm to humans (or, indeed, property), at the moment the problem seems more akin to one for the law of torts, or maybe product liability, than murder or manslaughter. We are a long way away yet from giving rights of legal personality to robots. So there may be questions like: how "novel" does a robot have to be before there is no product liability, because of the state of the art defense? How much of a capacity to learn and determine its own behaviours does a robot have to have before what it does is no longer reasonably foreseeable by its programmer? Do we need rules of strict liability for robot behaviour by its "owners" - as Roman law did, and Scots law still does, for animals, depending on whether they are categorised as tame or wild? And should that liability fall on the designer of the software, the hardware, or the "keeper", ie the person who uses the robot for some useful task? Or all three?? Is there a better analogy to be drawn from the liability of master for slave in the Roman law of slavery, as Andrew Katz brilliantly suggested at a GikII a while back?
In the short(er) term, though, the key problems may centre on the more intangible but important issue of privacy. Robots like Nao are likely to be extensively used as aids to patients in hospitals, homes and care homes; this is already happening in Japan and South Korea and, judging from some conference papers I have heard, even in the US. Such robots are useful not just because they give care but because they can monitor, collect and pass on data. Is the patient staying in bed? Are they eating their food, taking their drugs and doing their exercises? Remote sensing by telemedicine is already here; robot aides take it one step further. All very useful - but what happens to the right to refuse to do what you are told, with patients who are already of limited autonomy? Do we want children to be able to remotely surveil their aged parents 24/7 in nursing homes, rather than trek back to visit them, as we are anecdotally told already occurs in the likes of Japan?
There are real issues here about consent, and welfare vs autonomy which will need thrashed out. More worryingly still, information collected about patients could be directly channeled to drug or other companies - perhaps in return for a free robot. We already sell our own personal data to pay for web 2.0 services without thinking very hard about it - should we sell the data of our sick and vulnerable too??
Finally, robots will pose a hard problem for data protection. If robots collect and process personal data, eg of patients, are they data controllers or processors? Presumably the latter; in which case very few obligations pertain to them, except concerning security. This framework may need adjusting, as the ability of the human "owner" to supervise what a robot does may be fragile, given learning algorithms, bugs in software and changing environments.
What else do we need to be thinking about?? Comments please :-)