Tuesday, March 15, 2011

Online behavioural advertising: threat or menace?

Pangloss has recently been engaged in high level summit talks with her usual sparring partner Cybermatron on this rather current topic (which Pangloss is teaching, and about which Cybermatron is organising a workshop): as usual CyberM takes the privacy moral high ground that it is simply wrong for businesses and marketers to "follow you around the Web" without clear informed consent, while Pangloss is reduced to her usual confused, "er, um, yes it's a bit squicky but does it really need regulation? is it that significant in the nature of things compared to tsunamis, revolution in Africa and control orders? isn't it a matter that could better be dealt with by code and co-regulation, rather than regulation which would be territorially limited and probably merely favour US over EU digital industry?"

The latter approach certainly seems to be taking centre stage. Today I hear on Twitter that Microsoft, still maker of the most popular browser in the world, have agreed to install a Do Not Track opt-out cookie into IE v 9; this follows Firefox doing something roughly similar, leaving only Chrome (Google) and Safari (Apple) of the major desktop browsers as outliers.

Will this self-regulatory, "code" solution, which has been heavily advocated by the FTC in the US, be successful? It is very relevant to us in Europe right now, where a similar system is being promoted by the ad industry, especially the IAB and EASA. They suggest an "awareness raising icon" or "big red button", which would be placed on participating websites and would lead users who clicked on it to an opt-out registry, by which means they could indicate "do not track me" to the ad networks. These are the networks which collect data via third party cookies and other techniques such as Flash cookies, and then distribute ads to participant websites. (Slightly worryingly, Pangloss has heard of this development only anecdotally, via attendee accounts of meetings held with the European Commission in December and March, but cannot trace an official document on the Web about it. These accounts seem to indicate that the Commission is already heavily behind these initiatives, which is all the more reason for a proper public debate.)

In an ideal universe, such a user-choice driven system could be good. It might allow users (like Cybermatron) who want to protect themselves from online data collection and profiling to do that; and let everyone else go on doing their thing: those who are quite happy about it all (the majority "don't cares"); those who feel that web 2.0 businesses need the revenue stream that targeted ads supply, and that the genie is already out of the bottle re their personal data (moi, on a bad day); those who actually like targeted ads (these people must exist somewhere, though Pangloss has never met them); and those who feel they can protect themselves from ads using filter products like AdAware or Firefox anti-ad plugins (the techy fringe, and distinctly not including my mum).

But as usual it's a little more complicated than that (© Ben Goldacre, 2011). The WSJ notes firstly:

It still isn't clear how effective the privacy protection tools in Microsoft's browser will be. The do-not-track feature automatically sends out a message to websites and others requesting that the user's data not be tracked.

But the system will only work if tracking companies agree to respect visitors' requests. So far, no companies have publicly agreed to participate in the system.
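For the technically minded: in Firefox's version at least, the "message" is simply an extra HTTP request header (`DNT: 1`) attached to every request; whether anything happens next is entirely up to the receiving server. A minimal sketch (function name hypothetical) of what voluntary compliance would look like, and of why the WSJ's caveat matters - nothing forces anyone to run code like this:

```python
# Sketch of a server voluntarily honouring the Do Not Track header.
# The header itself ("DNT: 1") is real; the handler is hypothetical.

def should_track(request_headers):
    """Return False if the visitor has asked not to be tracked."""
    # HTTP header names are case-insensitive, so normalise first.
    headers = {k.lower(): v for k, v in request_headers.items()}
    return headers.get("dnt") != "1"

# A browser with Do Not Track switched on:
print(should_track({"DNT": "1", "User-Agent": "Firefox/4.0"}))  # False
# A browser sending no preference at all:
print(should_track({"User-Agent": "IE/9.0"}))                   # True
```

The entire scheme rests on tracking companies choosing to call something like `should_track` before setting their cookies - which is precisely the point the WSJ is making.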

The piece goes on to quote the IAB moaning that their members have no systems set up to respond to "Do Not Track" requests. This strikes me as getting into "protesteth too much" territory: if the advertising industry wants to avoid mandatory regulation with, perhaps, stiff fines, they will get their act together on this pronto or face the worse alternative. One imagines similar fears are driving Microsoft and Firefox. It is interesting that Google, who make Chrome and who benefit by far the most from the online advertising market, appear to be dragging their feet.

So what are the problems? Pangloss has been trying to get her head around this, with a bit of help from Ms Matron and Alex Hanff's blog on PI.

First, that good old chestnut: consumer ignorance, inertia and techno-inability. Most consumers don't click on buttons to opt out from behavioural tracking, just as they don't go looking for privacy settings on Facebook. They have better things to do: like go looking for the goods and services they went online for in the first place, or, on FB, looking to see which friends are having cool parties. There also seems to be some debate about just how big the "big red button" will be, but that's really the least of the problems.

(Interestingly, Pangloss has spent some time lately helping her much maligned mother with computing matters and observed that she (my mum, that is) just does not have the habit, which most younger readers here will have acquired without noticing, of searching all around a webpage for cues. She would never even notice the big red button unless it was as big as a Comic Relief red nose. But I digress.)

And in fact US research already bears this out re the behavioural ads opt-out button. Hanff states:
"TRUSTe carried out an experiment to measure the effectiveness of the (US Do-Not-Track) icon. Over 20 million people visited an experimental web page, of which 0.6% of unique visitors interacted with the icon. TRUSTe shouted that this was a wonderful success, but I think the sane among us would argue the opposite is true."

If this is true, I'd certainly agree.

A second, connected, problem is: what is the effect of an opt-out indication, even if someone gets around to making one, by Do Not Track button or otherwise? You might well think it means that you have chosen for data collected about you not to be profiled and mined, ie not to be tracked; but in fact the US experience so far may be just that the data collection and mining still go on, but you don't get the targeted ads. This rather misses the point, and I'm pretty sure everyone, including the NAI and IAB, knows this :-)
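The gap between the two readings of an opt-out is easiest to see side by side. A toy sketch (all names and structure invented for illustration; no real ad network works exactly like this):

```python
# Two hypothetical ad-network handlers, contrasting what users *expect*
# an opt-out cookie to do with what the US experience suggests it does.

def handle_visit_expected(cookies, profile, page):
    """What users assume opt-out means: no collection at all."""
    if cookies.get("optout") == "1":
        return "generic ad"              # nothing recorded
    profile.append(page)
    return "ad targeted on %d page views" % len(profile)

def handle_visit_actual(cookies, profile, page):
    """What may actually happen: collection continues regardless."""
    profile.append(page)                 # tracked either way!
    if cookies.get("optout") == "1":
        return "generic ad"              # only the *ad* changes
    return "ad targeted on %d page views" % len(profile)

opted_out = {"optout": "1"}
p1, p2 = [], []
handle_visit_expected(opted_out, p1, "/shoes")
handle_visit_actual(opted_out, p2, "/shoes")
print(len(p1), len(p2))   # 0 1 -- same "opt-out", very different profiles
```

Both handlers show the visitor a generic ad, so from the user's side the two regimes are indistinguishable - which is exactly why the distinction is so easy to fudge.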

And a third problem is that, given inertia, the problem is not really solved by the button, charming as it is, but by the underlying default set-up of consumer browsers like IE, Firefox and Chrome. If the default is no tracking without saying "yes, please" (ie opt-in), then those who really want targeted ads can indeed opt in, argues Cybermatron, and leave the rest of us alone. Less determined people like me say: well, if no one ever clicks buttons they don't have to, then no one will opt in to targeted ads bar a few maniacs, and web 2.0 will go bankrupt. I don't want that. Hmm. (It is also worth noting at this point that browsers are mostly written by companies whose fortunes are fairly heavily dependent on online advertising. Also hmm.)

Matron's solution is that web 2.0 can survive on serving ads without using ad networks, behavioural tracking and data mining - good old-fashioned second party cookie tracking, where one site uses what it learns about you to serve you more relevant ads. The likes of Amazon used to do quite nicely on this alone, using algorithms like "people like you who bought X also liked Y". Users can also fairly successfully block second party cookies themselves using most browsers, without having to rely on trusting that ad networks will implement do-not-track opt-out registers rather than just save the data for later and hide the ads.
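For the curious, the "people who bought X also liked Y" approach needs nothing beyond the site's own order histories - no cross-site tracking at all. A toy co-occurrence sketch (data and function names invented; real recommenders are far more sophisticated):

```python
# Toy "also bought" recommender using only first-site purchase data.
from collections import Counter

purchases = [            # hypothetical order histories, one set per customer
    {"X", "Y"},
    {"X", "Y", "Z"},
    {"X", "Y"},
    {"X", "Z"},
    {"Z"},
]

def also_bought(item):
    """Items most often bought alongside `item`, most frequent first."""
    counts = Counter()
    for basket in purchases:
        if item in basket:
            counts.update(basket - {item})
    return [i for i, _ in counts.most_common()]

print(also_bought("X"))   # ['Y', 'Z'] -- Y co-occurs 3 times, Z twice
```

The point of the sketch is the input: everything here came from one retailer's own till rolls, which is exactly Matron's argument.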

But such evidence as has been available to the public in recent years seems to point, unfortunately, to second party cookie tracking not being good enough for economic success. Google has by far the lion's share of the online ad delivery market because, via its AdWords programmes, its near monopoly of search terms in many countries, and its affiliates like YouTube and Android, it can collect far more targeting info about users than any other single site. The empirical evidence seems to be: more targeted info means more click-throughs means more money for the online industries in question.

One of the notable phenomena is that for companies like Amazon, advertising was a second-string activity, really mainly marketing their own services. By contrast, web 2.0 companies like Google, Facebook, last.fm etc charge users nothing, so have to make money out of selling something, ie ads for other services and companies. This can only be achieved in any realistic way via third party cookies, ad networks and the like, goes a fairly obvious argument. Is it coincidence that third party advertising networks began to take over the market at almost the same time web 2.0 unpaid activity became the great success story of the Web? It seems unlikely, but who knows?

In short, we need more data. Economic data on who makes money from which forms of targeted marketing, and who doesn't. Technical data on how effective an opt-out cookie can be anyway (what, for example, would its effect be on Flash and zombie cookies? what happens if you delete your opt-out cookie?). Technical and social data on how valid the underlying data profiles are which ad networks use to deliver targeted ads: are their predictions reasonable out of context (some in-game data collection has reportedly tagged people as "risk taking" or "aggressive")? are they verifiable and transparent? can they be misused (eg used to target addicts or the young with inherently risky offerings)? can they be de-anonymised?
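The "what happens if you delete your opt-out cookie?" question hides a neat irony: the opt-out preference is itself stored as a cookie, so the standard privacy reflex of clearing all cookies silently destroys it. A sketch (cookie names hypothetical):

```python
# The opt-out lives *inside* the cookie jar it is supposed to guard
# against, so wiping the jar wipes the opt-out too.

cookie_jar = {"optout": "1", "sessionid": "abc123"}

def is_opted_out(jar):
    return jar.get("optout") == "1"

print(is_opted_out(cookie_jar))   # True: the user has opted out
cookie_jar.clear()                # the privacy-conscious reflex...
print(is_opted_out(cookie_jar))   # False: tracking resumes by default
```

So the most privacy-conscious users - the ones who habitually clear their cookies - are precisely the ones most likely to lose their opt-out without noticing.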

Since the latter seems increasingly likely (see Paul Ohm's seminal work, passim), I have suggested before that such anonymised data profiles should benefit from some, if not all, of the same protection as "personal data", under some rubric like "potentially personal data". Notably, this might make data profiles, even where not tagged by name, subject to subject access requests, and to deletion requests where damage or distress was shown (or even where not, if we get the much-ballyhooed right to forget).

Finally, for us lawyers, I think the biggest challenge is to dig ourselves out of the regulatory hell we are in, where the DPD and PECD (and the media, exceptionally unhelpfully) present us with a mish-mash of consent, "explicit consent", prior consent, informed consent, opt-in and opt-out consent. To a very large extent these distinctions are now pretty meaningless in terms of their purpose, ie to protect users against the processing of their personal data without their knowledge and consent. For example, "sensitive personal data" is supposed to be specially protected by a requirement of "explicit consent" in the DPD scheme, but a common lawyer would argue a site like Facebook gets exactly that - via the registration, login or "I accept the terms and conditions" box - without any real sense of any added protection.

Hanff (above) argues forcefully that the amended PECD, which is due to be implemented across the EU shortly, now requires prior opt-in consent, and that an opt-out system of the "big red button" type will thus be illegal. But sympathetic as I am to his outrage, this is not what the new law says.

Art 5(3) of the PECD now says that placing cookies is only allowed where "the user has given his or her consent, having been provided with clear and comprehensive information". In some EU countries, notably the UK, consent can be given by implication. If the article said "explicit" consent then this would not be possible - but, contrary to some very bad BBC reporting, and according to BIS's version of the amended PECD, the word "explicit" appears nowhere in amended art 5(3). (Nor, by the way, does it in art 14 on locational data, which remains unamended by the new changes. This seems exceptionally odd.)

Furthermore, under EU law generally, it seems that the settings of a browser which has not been altered to opt out can, very unfortunately, probably be seen as giving that consent by implication, as this is what has been expressly put into the recitals of the amended PECD. Most browsers do by default accept second, and sometimes third, party cookies. In some browsers, such as the version of Firefox Pangloss has, this distinction is not made - cookies are accepted, and users can choose to go in and delete them individually. On such an analysis, most browsers will be set to "give consent", and the "big red button" merely provides users with an opportunity to withdraw the consent they have already given - and is perfectly legal.
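Reduced to its bare logic, the implied-consent reading is startlingly thin. A sketch of the reasoning (field names hypothetical; this is my reading of the recitals, not anyone's official test):

```python
# The implied-consent analysis, as decision logic: consent is inferred
# from whatever the browser shipped with, and the "big red button" is
# merely a withdrawal mechanism for consent already "given".

def consent_under_amended_pecd(browser_settings, clicked_big_red_button):
    if clicked_big_red_button:
        return False                  # consent withdrawn via the button
    # Factory default: cookies accepted, so consent is *implied*.
    return browser_settings.get("accept_cookies", True)

# A user who never touched their settings "consents":
print(consent_under_amended_pecd({}, False))   # True
# Only an affirmative act removes it:
print(consent_under_amended_pecd({}, True))    # False
```

Note what never appears in that function: any check that the user was informed, or even knew the setting existed. That absence is the whole problem.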

This is not a good analysis for privacy or consumers. It is not what those who fought for the changes in art 5(3) probably thought they were getting. But it is a plausible interpretation. Of course, existing national laws and implementations may alter its meaning "on the ground"; and I suspect we will see substantial cross-EU disharmony emerging as a result. None of which will in fact help the digital industries.

What do we need out of regulation rather than this fumbling about opt-in and opt-out? Neelie Kroes has some ideas:

First and foremost, we need effective transparency. This means that users should be provided with clear notice about any targeting activity that is taking place.

Secondly, we need consent, i.e. an appropriate form of affirmation on the part of the user that he or she accepts to be subject to targeting.

Third, we need a user-friendly solution, possibly based on browser (or another application) settings. Obviously we want to avoid solutions which would have a negative impact on the user experience. On that basis it would be prudent to avoid options such as recurring pop-up windows. On the other hand, it will not be sufficient to bury the necessary information deep in a website’s privacy policies. We need to find a middle way.[italics added]

On a related note, I would expect from you a clear condemnation of illegal practices which are unfortunately still taking place, such as ‘re-spawning’ of standard HTTP cookies against the explicit wishes of users.

Fourth and finally: effective enforcement. It is essential that any self-regulation system includes clear and simple complaint handling, reliable third-party compliance auditing and effective sanctioning mechanisms. If there is no way to detect breaches and enforce sanctions against those who break the rules, then self-regulation will not only be a fiction, it will be a failure. Besides, a system of reliable third party compliance auditing should be in place.

That "middle way" solution, involving real opt-in consent but not endless pop-up windows requesting consent, sounds a lot to me like mandating that manufacturers set browsers by default to reject cookies, so users can demonstrate real consent by changing that setting: the same strategy that I rejected above as impractical, as the death of revenue for web 2.0. Maybe there is some more subtle version of Kroes's "middle way" I don't know about - I sincerely hope so. (Techy answers again very welcome!!)

But if Ed Vaizey can, for example, suggest, as he did this week, that all computers sold in the UK should be shipped with software set by default to filter out all "porn" (however he plans to define that, and good luck with that), then why can't a similar command be sent out re the relatively simple privacy settings of browsers? Pangloss suspects that in reality neither will happen, especially given that computers and handsets alike are mostly assembled outside the EU. It looks like the cookie and OBA wars, both in and outside Europe, still have a fair way to go...
