Friday, March 18, 2011

The right to forget or the right to spin?

Viviane Reding has been publicising one of the more poetic planks of the upcoming Data Protection Directive reforms, the so-called "right to forget" or, from the French (who dreamt it up), the droit à l'oubli.

The right to forget is intriguing and seems to have caught the public attention of more than geeks and DP nerds. In boring Anglo-Saxon, it sounds much less exciting. The right to delete your personal data, wherever it is held - eg on Facebook - is what it's about. Put that way it doesn't sound that new. After all, the DPD already gives you the right in art 14 to
" object at any time on compelling legitimate grounds relating to his particular situation to the processing of data relating to him, save where otherwise provided by national legislation. Where there is a justified objection, the processing instigated by the controller may no longer involve those data;"
In the UK DPA 98, s 10, that gets translated as the right to stop processing where it is "causing or is likely to cause substantial damage or substantial distress to him or to another" and this is "unwarranted". As is often the case, there is an argument that this is a rather limited expression of the DPD, especially when case law is considered. There's also a connected right to demand your personal data is not processed for the purposes of direct marketing.

But this doesn't add up to an unqualified right to have data deleted, nor to have this done for no reason at all except that it's your data. This is what the "right to forget" or "delete" movement is about.

Pangloss initially found the right to forget very appealing, but has got more conflicted as time has gone on. The trouble most often cited is that your personal data is very often also someone else's personal data. If I post a picture of both of us at a party on FB, do you have the right to delete it? What about my freedom of expression, my right to tell my own story? With pictures, you can imagine solutions - pixellate out the person objecting or crop it. Perhaps the compromise is that I have the right to post the photo but you have the right to untag yourself from it. (Though this will not suit some.)

But what about where I say "I was at Jack's last night and he was steaming drunk"? Does Jack have the right to delete this data, even if it's on my profile? This is where the Americans really do start to get steamed up - since their culture and legal system have repeatedly preferred free speech to privacy rights.

Unsurprisingly this is one of the scenarios Peter Fleischer, chief privacy officer of Google, had in mind when he described the right to forget last week as "foggy thinking", claimed that "this raises difficult issues of conflict between freedom of expression and privacy" and more or less implied that this could be dealt with perfectly well by traditional laws of libel. In an ideal world this might be so: but we don't live in that world, but in one where ordinary citizens, as opposed to celebrities, almost never get to use laws like libel because they're simply far too costly and scary.

Would Jack sue for libel in the above example? No, almost never. But he might ask FB to take it down (if he was aware it existed). This is another of Fleischer's worries - that intermediaries like ISPs and hosts would get inextricably and expensively involved in the "right to forget". Here his real agenda becomes fairly apparent - Google's success is entirely based on their right to remember as much as possible about us. We are back here in another version of the cookie and data retention wars, passim.

I am a fan of the Google chocolate factory, as anyone reading this blog will surely have gathered - but it is a mite disingenuous to read Fleischer's (beautifully written) post without bearing in mind what seems to be Google's real worry, cited at the bottom of his list: that search engines will find themselves called on to implement what people often want far more than a right to delete, namely a "right for their data not to be found" - ie, for it to be expunged from Google's web results.

Fleischer says correctly (and commendably under-statedly) that "This will surely generate legal challenges and counter-challenges before this debate is resolved." Imagine the reaction of Trip Advisor, for example, when 1000s of people who run hotels and restaurants try to have the site removed from Google rankings because it has personal data about them that they're not overly fond of..? More sympathetically, many readers of this blog will know decent people who have tried for years to get results removed from Google - unfair and illegitimate reviews, catty remarks from ex partners, professionals whose working life is blighted by abusive remarks from disgruntled ex clients. There should, I think, be clear remedies for them not dependent on the ad hoc discretion of the site in question, depending on what mood it's in that day. On the other hand, I don't want a world where politicians or demagogues can get their dodgy past involvements with fascism or the BNP or whatever quietly deleted or rendered unfindable on Google (this is a turf war which already goes on day in, day out in the edits on Wikipedia).

A big problem (as with all DP issues) is the cross-border, applicable law or jurisdiction aspect. Fleischer's column cites a rather sensationalist example - when a German court ordered references to a murder by a German citizen removed from a US-based Wikipedia page because those convictions were "spent" under German law. In fact rules about rehabilitation of offenders and spent convictions are common - certainly the UK has similar - and all that is unusual about this case is the attempt of the German courts to extend jurisdiction to publications hosted abroad. Indeed, as some US states have "rights of publicity" protecting celebrity image and some don't, one imagines they must already have evolved a degree of expertise in the international private law of privacy/publicity rights. (What if Elvis's image on tee shirts is protected in Tennessee but not in Virginia? Can the Tennessee estate sue the Virginia t-shirt factory that uses his image without paying?)

But certainly an EU right to forget will almost invariably engage us in the same kind of angst and threats of "data wars" over extraterritoriality that the Eighth DP Principle on export of personal data already has - not something to look forward to. It is noticeable that Reding fires off an early salvo on this when her spokesperson says, not for the first time, that companies "can't think they're exempt just because they have their servers in California or do their data processing in Bangalore. If they're targeting EU citizens, they will have to comply with the rules."

In reality, Pangloss suspects any right to forget that makes it through the next few years of horse trading will look much more limited and less existential than most of the ideas in the blogoverse - more like the right FB has already conceded, to delete rather than simply deactivate your profile, for example. Reding's speech itself seems to be in practice more about how FB sets its defaults than anything else: a default opt-out from letting third parties tag your photos, rather than opt-in, would seem a pretty limited and sensible demand.

Being more aspirational, Pangloss still has a soft spot for one interpretation of the "right to forget" which Fleischer rather derides as technically impossible - self expiring data. I'd love to hear from any techies who know more about this topic.
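For the non-techie reader, the crudest version of the idea is simply data stamped with an expiry time which the holder purges on schedule. A minimal, purely illustrative Python sketch follows - the class and names are invented for this post, and a real "self-expiring" scheme would of course need the expiry enforced outside the data holder's control (eg by encrypting the data and letting the decryption key decay), which is presumably where the "technically impossible" objection bites.

```python
import time

class ExpiringStore:
    """Toy key-value store whose entries are purged after a time-to-live.
    Illustrative only: here the data holder must be trusted to run the purge."""

    def __init__(self):
        self._items = {}  # key -> (value, expiry timestamp)

    def put(self, key, value, ttl_seconds):
        self._items[key] = (value, time.time() + ttl_seconds)

    def get(self, key):
        value, expires_at = self._items.get(key, (None, 0.0))
        if time.time() >= expires_at:
            self._items.pop(key, None)  # delete once expired
            return None                 # the data has been 'forgotten'
        return value

store = ExpiringStore()
store.put("party_gossip", "Jack was steaming drunk", ttl_seconds=2)
print(store.get("party_gossip"))   # within the TTL: the data is still there
time.sleep(3)
print(store.get("party_gossip"))   # after the TTL: None - it has expired
```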

But the debate that has caught the public imagination goes wider than just DP law, and it is about whether we want to live in an online spin society.

There has been a certain amount of information coming out lately about how the Internet is not what it once was. Once we thought the Web was a conduit to unmediated news and opinions from real people, that it would enable direct democracy and change the world. But recent evidence has been that when it really matters - in matters of politics and revolutions and celebrities and ideology - a lot of what seem to be "honest bloggers" or commenters or posters are actually paid spinners, employed and trained in the blogging and astroturfing schools of China and Russia and Iran and now, we hear this week, the US.

The right to forget can in some ways be seen as the individual, non-corporate, non-state version of this. Rewriting history has been described by many people as Orwellian: we are at war with Eastasia, we have always been at war with Eastasia. That is chilling (in all senses of the word, including speech :-). The reality, as I already said, is likely to be considerably less overwhelming (or effective). But this is still a debate we need to start having.

Tuesday, March 15, 2011

Online behavioural advertising: threat or menace?

Pangloss has recently been engaged in high level summit talks with her usual sparring partner Cybermatron on this rather current topic (which Pangloss is teaching, and about which Cybermatron is organising a workshop): as usual CyberM takes the privacy moral high ground that it is simply wrong for businesses and marketers to "follow you around the Web" without clear informed consent, while Pangloss is reduced to her usual confused, "er, um, yes it's a bit squicky but does it really need regulation? is it that significant in the nature of things compared to tsunamis, revolution in Africa and control orders? isn't it a matter that could better be dealt with by code and co-regulation, rather than regulation which would be territorially limited and probably merely favour US over EU digital industry?"

The latter approach certainly seems to be taking centre stage. Today I hear on Twitter that Microsoft, still maker of the most popular browser in the world, have agreed to build a Do Not Track opt-out mechanism into IE v9; this follows Firefox doing something roughly similar, leaving only Chrome (Google) and Safari (Apple) of the major desktop browsers as outliers.

Will this self-regulatory, "code" solution, which has been heavily advocated by the FTC in the US, be successful? It is very relevant to us in Europe right now, where a similar system is being promoted by the ad industry, especially the IAB and EASA. They suggest an "awareness raising icon" or "big red button", which would be put on the sites of participating websites, and would then lead users who clicked on it to an opt-out registry by which means they could indicate "do not track me" to the ad networks. These are the networks which collect data via third party cookies and other techniques such as Flash cookies, and then distribute the ads to participating websites. (Slightly worryingly, Pangloss has heard of this development anecdotally via attendee accounts of meetings held with the European Commission in December and March, but cannot seem to trace an official document on the Web about it. These accounts seem to indicate that the Commission is already heavily behind these initiatives, which is all the more reason for a proper public debate.)

In an ideal universe, such a user-choice driven system could be good. It might allow users (like Cybermatron) who want to protect themselves from online data collection and profiling to do that; and let those who are either quite happy about it all (the majority "don't cares"), or feel that web 2.0 businesses need the revenue stream that targeted ads supply to survive and that the genie is already out of the bottle re their personal data (moi, on a bad day); or who actually like targeted ads (these people must exist somewhere, though Pangloss has never met them); or who feel they can protect themselves from ads using filter products like AdAware or Firefox anti-ad plugins (the techy fringe, and distinctly not including my mum), to go on doing their thing.

But as usual it's a little more complicated than that (c Ben Goldacre, 2011). The WSJ note firstly:

It still isn't clear how effective the privacy protection tools in Microsoft's browser will be. The do-not-track feature automatically sends out a message to websites and others requesting that the user's data not be tracked.

But the system will only work if tracking companies agree to respect visitors' requests. So far, no companies have publicly agreed to participate in the system.

The piece goes on to quote the IAB moaning that their members have no systems set up to respond to "Do Not Track" requests. This strikes me as getting into "protesteth too much" territory: if the advertising industry wants to avoid mandatory regulation with, perhaps, stiff fines, they will get their act together on this pronto or face the worse alternative. One imagines similar fears are driving Microsoft and Firefox. It is interesting that Google, who make Chrome and who benefit by far the most from the online advertising market, appear to be dragging their feet.
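For what it's worth, the browser end of Do Not Track is trivially simple - as far as Pangloss understands the current proposal, the browser just adds a "DNT: 1" header to every web request - and all the real work falls on the tracker's side, which is presumably exactly what the IAB is complaining about. A hedged sketch of what "respecting the request" might look like server-side (everything here except the DNT header itself is invented for illustration):

```python
# Hypothetical ad-network request handler: only the "DNT: 1" header is real,
# the rest is made up to illustrate what honouring it would involve.

def record_behavioural_profile(user_id, headers):
    print(f"logging interests for {user_id}")   # stand-in for real data collection

def serve_targeted_ad(user_id):
    return f"ad picked from {user_id}'s behavioural profile"

def serve_untargeted_ad():
    return "generic, untargeted ad"

def handle_ad_request(headers, user_id):
    """Honour a Do Not Track request by neither profiling nor targeting."""
    if headers.get("DNT") == "1":
        return serve_untargeted_ad()
    record_behavioural_profile(user_id, headers)
    return serve_targeted_ad(user_id)

print(handle_ad_request({"DNT": "1"}, user_id="alice"))  # no profiling, generic ad
print(handle_ad_request({}, user_id="bob"))              # profiled, then targeted
```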

So what are the problems? Pangloss has been trying to get her head around this, with a bit of help from Ms Matron and Alex Hanff's blog on PI.

First, that good old chestnut, consumer ignorance, inertia and techno-inability. Most consumers don't click on buttons to opt out from behavioural tracking, just like they don't go looking for privacy settings on Facebook. They have better things to do: like go looking for the goods and services they went online for in the first place, or, on FB, looking to see what friends are having cool parties. There also seems to be some debate about just how big the "big red button" will be, but that's really the least of the problems.

(Interestingly, Pangloss has spent some time lately helping her much maligned mother with computing matters and observed that she (my mum, that is) just does not have the habit, which most readers here of younger generations will have acquired without noticing, of searching all around a webpage for cues. She would never even notice the big red button unless it was as big as a Comic Relief red nose. But I digress.)

And in fact US research bears this out already re the behavioural ads opt out button. Hanff states:
"TrustE carried out an experiment to measure the effectiveness of the (US Do-Not_track) icon. Over 20 million people visited an experimental web page of which 0.6% of unique visitors interacted with the icon. TrustE shouted that this was a wonderful success, but I think the sane among us would argue the opposite is true."

If this is true, I'd certainly agree.

A second, connected, problem is what the effect of an opt-out indication actually is, even if someone gets around to making one, by Do Not Track button or otherwise. You might well think it means that you have chosen for data collected about you not to be profiled and mined, ie not to be tracked: but in fact the US experience so far may be just that the data collection and mining still goes on, but you don't get the targeted ads. This rather misses the point and I'm pretty sure everyone, including the NAI and IAB, knows this :-)
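To make the distinction concrete, here is a hedged little sketch (names entirely invented) of the two readings of an opt-out; from the user's side they are indistinguishable, since both just produce a generic ad, which is exactly why the weaker reading is so easy to get away with:

```python
profiles = {}  # the ad network's behavioural profiles, keyed by user

def handle_opt_out_strict(user_id, page_visited):
    """Strong reading: opting out stops both the collection and the targeting."""
    return "generic ad"

def handle_opt_out_weak(user_id, page_visited):
    """Weaker reading (reportedly the US practice so far): the profile is still
    built and mined, the user simply never sees the targeted ad."""
    profiles.setdefault(user_id, []).append(page_visited)  # tracking carries on
    return "generic ad"

print(handle_opt_out_strict("alice", "diets.example.com"))  # generic ad, no data kept
print(handle_opt_out_weak("alice", "diets.example.com"))    # generic ad...
print(profiles)  # ...but the profile has quietly grown behind the scenes
```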

And a third problem is that given inertia, the problem is not really solved by the button, charming as it is, but by the underlying default set up of consumer browsers like IE, Firefox and Chrome. If the default is no tracking without saying "yes, please." (ie opt-in) then those who really want targeted ads can indeed opt-in, argues Cybermatron, and leave the rest of us alone. Less determined people like me say, well if no one ever clicks buttons if they don't have to, then no one will opt in to targeted ads bar a few maniacs, and web 2.0 will go bankrupt. I don't want that. Hmm. (It is also worth noting at this point that browsers are mostly written by companies whose fortunes are fairly heavily dependent on online advertising. Also hmm.)

Matron's solution is that web 2.0 can survive on serving ads without using ad networks, behavioural tracking and data mining - good old fashioned second party cookie tracking, where one site uses what it learns about you to serve you more relevant ads. The likes of Amazon used to do quite nicely on this alone, using algorithms like "people like you who bought X also liked Y". Users can also fairly successfully block second party cookies themselves using most browsers, without having to trust that ad networks will implement do-not-track opt-out registers rather than just saving the data for later and hiding the ads.
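For what it's worth, that "also liked" style of suggestion needs nothing more exotic than the site's own sales records; a toy sketch (invented data, naive counting) of the sort of co-occurrence counting involved:

```python
from collections import Counter

# Each set is one customer's purchase history, held by the site itself -
# no third party cookies or ad network needed for this kind of suggestion.
purchases = [
    {"kettle", "toaster", "teapot"},
    {"kettle", "teapot"},
    {"kettle", "novel"},
]

def also_bought(item, histories):
    """Rank the items that co-occur with 'item' across purchase histories."""
    co_counts = Counter()
    for basket in histories:
        if item in basket:
            co_counts.update(basket - {item})
    return co_counts.most_common()

print(also_bought("kettle", purchases))
# teapot co-occurs twice, toaster and novel once each - so "people who
# bought a kettle also liked a teapot"
```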

But such evidence as has been available to the public in recent years seems to point, unfortunately, to second party cookie tracking not being good enough for economic success. Google has by far the giant's share of the online ad delivery market because, via its AdWords programmes, its near monopoly of search terms in many countries and its affiliates like YouTube and Android, it can collect far more targeting info about users than any other single site. The empirical evidence seems to be: more targeted info means more click-throughs means more money for the online industries in question.

One of the notable phenomena is that for companies like Amazon, advertising was a second-string activity, really mainly marketing their own services. By contrast, the web 2.0 players, like Google, Facebook, last.fm etc etc, charge nothing and so have to make money out of selling something, ie ads for other services and companies. This can only be achieved in any realistic way via third party cookies, ad networks and the like, goes a fairly obvious argument. Is it coincidence that third party advertising networks began to take over the market at almost the same time web 2.0 unpaid activity became the great success story of the Web? Seems unlikely, but who knows?

In short, we need more data. Economic data on who makes money from which forms of targeted marketing, and who doesn't. Technical data on how effective an opt-out cookie can be anyway (what, for example, would its effect be on Flash and zombie cookies? what happens if you delete your opt-out cookie?). Technical and social data on how valid the underlying data profiles are which are used by ad networks to deliver targeted ads: are their predictions reasonable out of context (eg some in-game data collection has reportedly tagged people as "risk taking" or "aggressive"); are they verifiable and transparent; can they be misused (eg used to target addicts or the young with inherently risky offerings); can they be de-anonymised?

Since the latter seems increasingly likely (see Paul Ohm's seminal work passim), I have suggested before that such anonymised data profiles should benefit from some if not all of the same protection as "personal data" under some rubric like "potentially personal data". Notably this might make data profiles even where not tagged by name subject to subject access requests, and deletion requests where damage or distress was shown (or even not at all if we get the much ballyhooed right to forget).

Finally, for us lawyers, I think the biggest challenge is to dig ourselves out of the regulatory hell we are in, where the DPD and PECD (and the media, exceptionally unhelpfully) present us with a mishmash of consent, "explicit consent", prior consent, informed consent, opt-in and opt-out consent. To a very large extent these distinctions are now pretty meaningless in their purpose, ie to protect users against the processing of their personal data without their knowledge and consent. Eg, "sensitive personal data" is supposed to be specially protected by a requirement of "explicit consent" in the DPD scheme, but a common lawyer would argue a site like Facebook gets exactly that - via the registration, login or "I accept the terms and conditions" box - without any real sense of any added protection.

Hanff (above) argues forcefully that the amended PECD, which is due to be implemented across the EU shortly, now requires prior opt-in, and thus that an opt-out system of the "big red button" type will be illegal. But sympathetic as I am to his outrage, this is not what the new law says.

Art 5(3) of the PECD now says that placing cookies is only allowed where "the user has given his or her consent, having been provided with clear and comprehensive information." In some EU countries, such as notably the UK, consent can be given by implication. If the article said "explicit" consent then this would not be possible - but, contrary to some very bad BBC reporting, and according to BIS's version of the amended PECD, there is no use in amended art 5(3) of the word "explicit". (Nor, by the way, is there in art 14 on locational data, which remains unamended by the new changes. This seems exceptionally odd.)

Furthermore, under EU law generally, it seems that the settings of a browser which has not been altered to opt out can, very unfortunately, probably be seen as giving that consent by implication, as this is what has been expressly put into the recitals of the amended PECD. Most browsers do by default accept second, and sometimes third, party cookies. In some browsers, such as the version Pangloss has of Firefox, this distinction is not made - cookies are accepted and users can choose to go in and delete them individually. On such an analysis, most browsers will be set to "give consent" and the "big red button" is merely providing users with an opportunity to withdraw the consent they have already given, and is perfectly legal.

This is not a good analysis for privacy or consumers. It is not what those who fought for the changes in art 5(3) probably thought they were getting. But it is a plausible interpretation. Of course, existing national laws and national implementations may alter its meaning "on the ground" ; and I suspect we will see substantial cross EU disharmony emerging as a result. None of which will in fact help the digital industries.

What do we need out of regulation, rather than this fumbling about opt-in and opt-out? Neelie Kroes has some ideas:

"First and foremost, we need effective transparency. This means that users should be provided with clear notice about any targeting activity that is taking place.

Secondly, we need consent, i.e. an appropriate form of affirmation on the part of the user that he or she accepts to be subject to targeting.

Third, we need a user-friendly solution, possibly based on browser (or another application) settings. Obviously we want to avoid solutions which would have a negative impact on the user experience. On that basis it would be prudent to avoid options such as recurring pop-up windows. On the other hand, it will not be sufficient to bury the necessary information deep in a website’s privacy policies. We need to find a middle way.[italics added]

On a related note, I would expect from you a clear condemnation of illegal practices which are unfortunately still taking place, such as ‘re-spawning’ of standard HTTP cookies against the explicit wishes of users.

Fourth and finally: effective enforcement. It is essential that any self-regulation system includes clear and simple complaint handling, reliable third-party compliance auditing and effective sanctioning mechanisms. If there is no way to detect breaches and enforce sanctions against those who break the rules, then self-regulation will not only be a fiction, it will be a failure. Besides, a system of reliable third party compliance auditing should be in place."

That "middle way" solution, that involves real opt in consent but not endless pop up windows requesting consent, sounds a lot to me like mandating that browsers and manufacturers set browsers by default to reject cookies so users can demnonstarte real consent by changing that setting : the same strategy that I rejected above as impractical as the death of revenue to web 2.0. Maybe there is some more suble version of Reding's "middle way" I don't know about - I sincerely hope so. (Techy answers again very welcome!!)

But if Ed Vaizey can, for example, suggest, as he did this week, that all computers sold in the UK should be shipped with software set by default to filter out all "porn" (however he plans to define that, and good luck with that), then why can't a similar command be sent out re the relatively simple privacy settings of browsers? Pangloss suspects that in reality neither will happen, especially given that computers and handsets alike are mostly assembled outside the EU. It looks like the cookie and OBA wars, both in and outside of Europe, still have a fair way to go...





Friday, March 04, 2011

A few more dates for diaries

The Strathclyde LLM in Internet Law and Policy is happy to present a public lecture by Daithi MacSithigh of the University of East Anglia on March 25th 2011, in Room 7.42, 7th floor, Graham Hills Building, 40 George Street, Glasgow, commencing at 5.00pm. The event is free but please email Linda at linda.nicolson@strath.ac.uk to let us know if you are planning to attend.

The title is "
"The medium is still the message:Angry Birds,the Met Opera & broadband bills"

Pangloss is really looking forward to that :)

Also for central-belt Scots - put the evening of April 14th 2011 in your diary, when Strathclyde Law School and the Franco-Scots Alliance will be co-hosting an event on the current state of anti-filesharing legislation in the UK and France, with myself and Nicolas Jondet (currently teaching IP law at Strathclyde, and local expert on HADOPI) representing these jurisdictions respectively. Venue TBD but Old College in Edinburgh likely. Given the current events around the Digital Economy Act - judicial review, Hargreaves Review - as well as in France, this could be lively :)

GikII goes Gothenburg!

From Mathias Klang, who is bravely taking the helm...

GikII VI, Göteborg, Sweden 2011

Freedom, openness & piracy?
26-28 June 2011
IT University
Göteborg, Sweden

Call for Papers

Is GikII a discussion of popular culture through the lens of law – or is it about technology law, spiced with popular culture? For five years and counting, GikII has been a vessel for the leading edge of debate about law, technology and culture, charting a course through the murky waters of our societal uses and abuses of technology.

For 2011, this ship full of seriously playful lawyers will enter for the first time the cold waters of the north (well, further north than Scotland) and enter that land of paradoxes: Sweden. Seen by outsiders as well-organised suicidal Bergman-watching conformists, but also the country that brought you Freedom of Information, ABBA, the Swedish chef, The Pirate Bay and (sort of…) Julian Assange. We offer fine weather, the summer solstice and a fair reception at the friendly harbour of Göteborg.

So come one, come all… Clean your screens, look into the harder discs of your virtual and real lives, and present your peers with your ideas on the meaning of our augmented lives. Confuse us with questions, dazzle us with legal arguments, and impress us with your GikIIness. If you have a paper on (for example) regulation of Technology & Futurama, soft law in World of Warcraft, censoring social media & Confucius, the creative role of piracy on latter day punk or plagiarism among the ancient Egyptians – We are the audience for you (for a taste of past presentations see the Programme section).

Application process

Please send an abstract not exceeding 500 words to Professor Lilian Edwards (Lilian.Edwards@strath.ac.uk) or Dr Mathias Klang (klang@ituniv.se). The deadline for submissions is 15 April 2011. We will try to have them approved and confirmed as soon as possible so that you can organise the necessary travel and accommodation.

Registration

As with previous years, GikII is free of charge, and therefore there are limited spaces available, so please make sure you submit your paper early. Priority is always given to speakers, but there are some limited spaces available for students and non-speakers. Registration is open through Eventbrite.