Thursday, August 19, 2010
Fascinating story of the week is that Top Gear's famous pet racing driver, The Stig ("some say he is the bastard child of Sherlock Holmes and Thierry Henry"), wants to break the contractual obligations of confidentiality as to his true identity imposed by his (presumably lucrative) BBC contract, so that he can reveal his name and make big $$$ out of his autobiography, in the style of his fellow presenters Clarkson et al, all of whom have reportedly made millions out of parlaying their popularity from the show.
It's a cracker this, in a week when the headlines are already full of half-hints that the Con-Dem government is thinking of having a bash at Eady J's judge-made law on privacy, breach of confidence and press freedom. The general tone of the hints in the press has been that the balance has shifted too far in favour of protecting celebrity privacy, and too far from allowing the press to make lots of money out of kiss-and-tell tittle tattle, sorry, "fulfil their public investigative duties".
So we already have an extensive debate about how far celebrities should be able to preserve their privacy even where they live their lives to some extent in public; but till now we've rarely had a debate about whether the "right to respect for private life" (Art 8 of the ECHR, which founds the recent line of English cases on privacy and confidence) also covers the right to disclose as well as hide your secrets.
From one perspective, the right to assert your "nymic" identity seems clearly like something that should be an intrinsic part of private life. In more modern instruments than the ECHR, such as the UN Convention on the Rights of the Child, a right to a name and an identity is explicit. In the ECHR, case law has extended the right to family life to something very similar, with numerous cases on the right to a name, to a state affiliation and to an immigration or domicile status. These cases are complex and go both ways, but the underlying notion that private life includes identity is one which most scholars would, I think, acknowledge.
But another way to look at it - and one I am sure the BBC lawyers are quite keen on - is that this was a simple commercial transaction where The Stig was paid for silence. Non-disclosure agreements (contracts), or NDAs, are of course ubiquitous. As with the general domain of privacy and personal data online, the question then becomes the more controversial one of how far you should be able to sign away your basic rights by contract. Adopting the language of restrictive covenants, it would surely be unreasonable if The Stig was not allowed to use his own name in any walk of life, or with any employer. But is it reasonable that he be bound indefinitely by his consent, even by the BBC? The question also arises of what remedy would be reasonable here if the BBC were, say, to seek an injunction to prevent any name-attached autobiography of The Stig being published. In libel law, the common aphorism is that common law courts prefer not to allow prior restraint of speech on allegations of defamation, but to impose damages subsequent to publication if damage to reputation then ensues: "publish and be damned". In pure contract or confidence actions, no such bright line pertains. Should The Stig have the right to assert his name and pay the BBC if they suffer loss as a result? Or should he be stoppable by injunction, as is possible in the ordinary law of breach of contract?
I'd love to see this go to court but I strongly suspect it'll settle.
Wednesday, August 18, 2010
IGF Workshop on "The Role of Internet Intermediaries in Advancing Public Policy Objectives"
The goal of the Workshop is to discuss and identify lessons learned from experience to date of Internet intermediaries in helping to advance public policy objectives. The workshop will introduce the concept of “Internet intermediaries”, the categories of actors considered, their role, and the three ways in which intermediaries can take on a policy role: through responses to legal requirements; through their business practices; and through industry self-regulation. It will discuss the roles and responsibilities of Internet intermediaries for actions by users of their platforms, their nature and extent and the implications. The workshop is part of a stream of work being conducted by the OECD.
The workshop will take place on September 16 from 14.30 to 16.30, in Room 1.
Wednesday, August 11, 2010
As someone whose last book used the original, wonderful xkcd cartoon as its cover, it seems only right to bring you the updated version! (NB NOT by Randall Munroe, though glad to see they acknowledge him.)
ps but shouldn't that be the "sunken island of Google Buzz"?
I can so use this for the EPSRC Robotics Retreat I am going to in September!! (via io9 with thanks to Simon Bradshaw)
Another slightly more legal bit of robotics that's been doing the rounds is this robots.txt file from the last.fm site. Robots.txt files, for the non-techies, are small text files which give instructions to software agents or bots as to what they are allowed to do on a site. Most typically, they tell Google and other search engines whether or not they are allowed to make copies of the site ("spider" it). No prize at all for the first person to realise what famous laws the last three lines are implementing :-)
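For the curious, here is a minimal sketch of how a well-behaved bot actually consults such a file, using Python's standard urllib.robotparser module. The example rules are hypothetical ones of my own invention, not last.fm's actual file:

```python
from urllib.robotparser import RobotFileParser

# A hypothetical robots.txt in the usual format: rules for all
# user agents ("*"), forbidding one directory and allowing the rest.
ROBOTS_TXT = """\
User-agent: *
Disallow: /private/
Allow: /
"""

rp = RobotFileParser()
rp.parse(ROBOTS_TXT.splitlines())

# A polite bot checks the rules before fetching each page.
print(rp.can_fetch("MyBot", "https://example.com/private/data"))  # False
print(rp.can_fetch("MyBot", "https://example.com/music"))         # True
```

Note that, like Asimov's laws, robots.txt is purely advisory: nothing technically stops a badly-behaved bot from ignoring it.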
This all raises a serious underlying question (sorry) which is, how should the law regulate robots? We already have a surprising number of them. Of course it depends what you call a robot: Wikipedia defines them as "an automatically guided machine which is able to do tasks on its own, almost always due to electronically-programmed instructions".
That's pretty wide. It could mean the software agents or bots that, as discussed above, spider the web, make orders on auction sites like eBay, collect data for marketing and malware purposes, learn stuff about the market, etc - in which case we are already awash with them.
Do we mean humanoid robots? We are, of course, getting there too - see e.g. the world's leading current example, Honda's ASIMO, which might one day really become the faithful, affordable, un-needy helpmate of our 1950s Campbellian dreams. (Although what happens to the unemployment figures then?? I guess not that much of the market is blue collar labour anymore?) But we also already live in a world of ubiquitous non-humanoid robots - such as, in the domestic sector, the fabulous Roomba vacuum cleaner, beloved of geeks (and cats); in industry, automated car assembly robots (as in the Picasso ads); and, of course, the emergent military robots.
Only a few days ago, the news broke of the world's alleged first robot to feel emotions (although I am sure I heard of research prototypes of this kind at Edinburgh University's robotics group years back). Named Nao, the machine is programmed to mimic the emotional skills of a one-year-old child.
"When the robot is sad it hunches its shoulders forward and looks down. When it is happy, it raises its arms, looking for a hug. When frightened, it cowers and stays like that until it is soothed with gentle strokes on the head. The relationships Nao forms depend on how people react and treat it, and on the amount of attention they show."
Robots of this kind could be care-giving companions not only for children, but perhaps also in the home or in care homes for lonely geriatrics and long-term invalids, whose isolation is often crippling. (Though again I Ludditely wonder if it wouldn't be cheaper just to buy them a cat.)

Where does the law come into this? There is of course the longstanding fear of the Killer Robot: a fear which Asimov's famous first law of robotics was designed to repel. (Smart bombs are of course another possibility which already, to some extent, exist - oddly they don't seem to create the same degree of public distrust and terror, only philosophical musings in obscure B-movies.) But given the fact that general-purpose, ambulant, humanoid, artificially intelligent robots are still very much in the lab, only Japan seems so far to have even begun to think about creating rules securing the safety of "friendly AIs" in real life, and even there a quick Google shows no further progress since draft guidelines were issued in 2007.
But the real legal issues are likely to be more prosaic, at least in the short term. If robots do cause physical harm to humans (or, indeed, property), at the moment the problem seems more akin to one for the law of torts, or maybe product liability, than murder or manslaughter. We are a long way away yet from giving rights of legal personality to robots. So there may be questions like: how "novel" does a robot have to be before there will be no product liability, because of the state of the art defence? How much does a robot have to have a capacity to learn and determine its own behaviours before what it does is not reasonably foreseeable by its programmer?? Do we need rules of strict liability for robot behaviour by its "owners" - as Roman law did, and Scots law still does, for animals, depending on whether they are categorised as tame or wild? And should that liability fall on the designer of the software, the hardware, or the "keeper", ie the person who uses the robot for some useful task? Or all three?? Is there a better analogy to be drawn from the liability of master for slave in the Roman law of slavery, as Andrew Katz brilliantly suggested at a GikII a while back??
In the short(er) term, though, the key problems may be around the more intangible but important issue of privacy. Robots like Nao above are likely to be extensively used as aids to patients in hospitals, homes and care homes; this is already happening in Japan and South Korea and, judging by some conference papers I have heard, even in the US. Such robots are useful not just because they give care but because they can monitor, collect and pass on data. Is the patient staying in bed? Are they eating their food, taking their drugs and doing their exercises? Remote sensing by telemedicine is already here; robot aides take it one step further. All very useful, but what happens to the right to refuse to do what you are told, with patients who are already of limited autonomy? Do we want children to be able to remotely surveil their aged parents 24/7 in nursing homes, rather than trek back to visit them, as we are anecdotally told already occurs in the likes of Japan?
There are real issues here about consent, and welfare vs autonomy which will need thrashed out. More worryingly still, information collected about patients could be directly channeled to drug or other companies - perhaps in return for a free robot. We already sell our own personal data to pay for web 2.0 services without thinking very hard about it - should we sell the data of our sick and vulnerable too??
Finally, robots will be a hard problem for data protection. If robots collect and process personal data, eg of patients, are they data controllers or processors? Presumably the latter, in which case very few obligations pertain to them except concerning security. This framework may need adjusting, as the ability of the human "owner" to supervise what they do may be fragile, given learning algorithms, bugs in software and changing environments.
What else do we need to be thinking about?? Comments please :-)
Sunday, August 08, 2010
Note that since this event in July the Commission has announced the draft reform proposal for the DPD will be delayed till probably the second half of 2011 (sigh!). For those interested, the recent Wall Street Journal spread on privacy threats from a US perspective is also well worth perusing (follow the links at the end for related stories - there are 6 or so).
The Sunday Times is supposed to be publishing a UK follow-up today (August 8) in which Pangloss should be quoted - but since it's behind a paywall I haven't been able to check :-)
Wednesday, August 04, 2010
We defended our position in a series of court cases that eventually made their way up to the European Court of Justice, which earlier this year largely upheld our position. The ECJ ruled that Google has not infringed trade mark law by allowing advertisers to bid for keywords corresponding to third party trade marks. Additionally, the court ruled that advertisers can legitimately use a third party trademark as a keyword to trigger their ads.
Today, we are announcing an important change to our advertising trademark policy. A company advertising on Google in Europe will now be able to select trademarked terms as keywords. If, for example, a user types in a trademark of a television manufacturer, he could now find relevant and helpful advertisements from resellers, review sites and second hand dealers as well as ads from other manufacturers.
This new policy goes into effect on September 14. It brings our policy in Europe into line with our policies in most countries across the world. Advertisers have already been able to use third party trademarked terms in the U.S. and Canada since 2004, in the UK and Ireland since 2008, and in many other countries since May 2009.
The most interesting bit for Pangloss is that what accompanies this is a new type of notice and takedown procedure.
In the affected European countries after September 14, 2010, trademark owners or their authorized agents will be able to complain about the selection of their trademark by a third party if they feel that it leads to a specific ad text which confuses users about the origin of the advertised goods and services. Google will then conduct a limited investigation and if we find that the ad text does confuse users as to the origin of the advertised goods and services, we will remove the ad. However, we will not prevent use of trademarks as keywords in the affected regions.
This is an interesting way of implementing the caveats in the ECJ decision. Google have generally sought to automate all their processes as far as possible, whereas this will create a lot of manual work in processing what will no doubt be a storm of cease-and-desist notices - compare the Content ID approach on YouTube, where take down exists and is faithfully followed, but there is also a push towards persuading IP holders to submit their own works for pre-emptive filtering. However, in this case they clearly think the work involved in implementing this new scheme will make more money for them in advertising revenue than it will lose in costs of manual take down. And take down should fend off most future litigation, though not, I suspect, all. For businesses, a harmonised policy across the whole EU is always a boon.
It would be interesting to see some empirical data emerging on how this affects the choice of keywords, click-through and text of AdWords ads in future, and how this does or does not benefit the public interest in access to information in advertising. Google's usual approach to open data should be helpful here. (Will takedown notices under this scheme go to the Chilling Effects website, as linking-to-content takedown requests do? I hope so.)