Comments on panGloss: Edwards' Three Laws for Roboticists

pangloss (3 October 2010, 14:17):

@phbradley Yes, they don't say anything existing law doesn't already say (although the two we added, and which I don't list here, do, a little). But the point is that these "laws" aren't meant to advise lawyers (or to be real legislation) but to be a blueprint for designers and users and, above all, to reassure the public. Call them a charter or a code of conduct rather than laws if you like. I agree that existing law already has adequate models for robot liability in negligence, product liability, animal liability etc., though the actual results as things stand may need tweaking, and I want to do some research into that next.

Anonymous (3 October 2010, 02:03):

Can existing law really not cater for what you are proposing to cover with those flawed "laws"? One need merely make designers or owners vicariously liable for the deeds of their bots, and tort, contract and human rights flow into every person's interactions with the bot from then on, no?
<br /><br />The problem that comes up is that as we advance we get systems where it’s not reasonable to place the burden of moral (and thus I presume legal) responsibility on any given individual (possibly even corporate entity). This can be of a number of reasons, on being that the robot is used in a scenario that could not reasonably be predicted as a use case and on the other hand could not reasonable be predicted (by the user) as non-use case. In such circumstances either their is no moral agent or we start to think of the robot as the agent. This might sound out there, however there’s literatures both on moral agency in a determined universe and moral agency in a zombie universe i.e. being with no self consciousness; thus for moral philosophers robots are not much of a step.Ren Reynoldshttps://www.blogger.com/profile/07814598313993440502noreply@blogger.comtag:blogger.com,1999:blog-16688455.post-89234945125364168192010-10-02T12:29:48.657+01:002010-10-02T12:29:48.657+01:001.Robots are multi-use tools. Robots should not be...1.Robots are multi-use tools. Robots should not be designed solely or primarily to kill, except in the interests of national security.<br /><br />Definition – may need to have something that has a condition to define the difference between a machine and a robot e.g. is a watch a robot, is a remote control drone?<br /><br />This rules out:<br />- Automated abattoirs<br />- Assisted suicide machines <br />- Execution machines <br />Depending on the definition each of the above already exists.<br /> 2 Humans are responsible for the actions of robots. Robots should be designed & operated as far as is practicable to comply with existing laws & fundamental rights and freedoms, including privacy. <br />Not sure how this unpacks in terms of emergent behaviour and / or unintended consequences. <br /><br />The problem that comes up is that as we advance we get systems where it’s not reasonable to place the burden of moral (and thus I presume legal) responsibility on any given individual (possibly even corporate entity). This can be of a number of reasons, on being that the robot is used in a scenario that could not reasonably be predicted as a use case and on the other hand could not reasonable be predicted (by the user) as non-use case. In such circumstances either their is no moral agent or we start to think of the robot as the agent. This might sound out there, however there’s literatures both on moral agency in a determined universe and moral agency in a zombie universe i.e. being with no self consciousness; thus for moral philosophers robots are not much of a step.<br /><br /> 3) Robots are products. As such they should be designed using processes which assure their safety and security (which does not exclude their having a reasonable capacity to safeguard their integrity).<br /><br />Sure – and people have been developing methodologies for ethical creation of IT systems. However, as noted above, the interesting bits occur when things fall outside what’s reasonable e.g. one might argue that there has to be an operating system, but given the complexity of such systems it is as a matter of current practice impossible to create one without error – so do we say we can’t create a robot until such time as we can prove that the operating system will be error free so there can never be an incident? If so, why are we applying a higher burden of proof than we do for, say plans or cars – where the consequences are greater i.e. 
That is, we should weigh, on the one hand, the benefits we might get from such an unequal rule applied to robots against, on the other, the negative consequences of applying the same higher bar to cars.