Thursday, February 25, 2010

Annoyed now: Google & Italy

Much of the blogosphere exploded in indignation yesterday at the revelation that an Italian court had found Google execs, including privacy chief Peter Fleischer, criminally liable for publishing an amateur vid on YouTube which invaded the privacy of the special needs child depicted being bullied therein. Charges of criminal libel were however dismissed. Lawyers amongst us wondered if someone had forgotten to tell Italy about the safe harbours for hosting intermediaries in the E-Commerce Directive (ECD), arts 12-15, which apply throughout Europe. Richard Thomas, the UK's former Information Commissioner, despaired that this verdict was giving privacy a bad name. Americans, used to the total (and one might say, over-wide) immunity given to online intermediaries in relation to publication torts by the Communications Decency Act, were even more flabbergasted. Google, understandably over-egging it a tad, called it a serious threat to the very freedom of the Internet; well, at least in Italy. Fleischer, handed a six-month suspended sentence, sounded about as genuinely outraged as a top corporate exec can sound on his blog, and threatened appeals, hellfire and a boycott of pasta.

Pangloss was surprised but also a little smug, as she'd covered this story as far back as May last year, and in detail here. While we're waiting for an opinion to come from the Italian court (apparently required within 90 days, and is there an Italian translator out there please?) it is maybe worth refreshing the reader's memory of the only four ways I saw this case could go against Google, assuming Google did plead the ECD (bit of a no-brainer, that).

1. Italy may not have implemented the ECD at all, or not properly. In which case Google has a claim for damages against Italy, and the case may eventually end up in the ECJ, to hilarious embarrassment.

2. Italy may not think the ECD applied to Google/YouTube as a host, because of doubts about the "independence" of YT as an intermediary from its users. This argument has prevailed in some high-profile French cases, but has largely been rubbished in most of the rest of the EU. In particular the "YouTube complicit with users" argument may have some legs when we are talking about YT making money from ads next to popular copyright videos, e.g. MTV clips, and thus, conceivably, being seen to profit from copyright infringement (cf the current Viacom US litigation); but it has absolutely none in the case of a video of this kind. Basically, YT provided a platform and got nothing out of the deal except trouble.

3. Italy may not think the ECD applied to Google/YouTube as a host, because the ECD may only apply to commercial operators. This theory has been almost entirely exploded, and will be completely when the Google AdWords case gets its full judgment from the ECJ next month. The Advocate-General's preliminary Opinion, as I noted in November, already plainly agrees that a search engine like Google, which makes money indirectly from adverts while free to users, can fall within the ECD. The UK courts have also so agreed.

4. The ECD safe harbour for hosts says basically that they are immune from liability for what they publish until they receive "notice" of illegal content. It says neither that they have to pre-vet videos, nor that they have to read all the comments below a video. Pangloss suspects this, if anything, is the legal ambiguity in the case. Google says they took down as soon as the police gave them notice; Google's opponents say "but the video was up for two months and people complained in comments". Should those "comments" have been regarded as notice, then? In which case, did Google have a duty to proactively read them?

This is the bit that gets me annoyed. Google's success, as the Guardian's Charles Arthur explained cogently the other day, is built on automating everything. This doesn't mean that Google should be free of all responsibility for what goes on on its watch, but it does mean that exercising that responsibility should be practicable, or we lose Google and all its free chocolate factory offerings. Reviewing every comment under the millions of videos on YouTube - in a multiplicity of languages, and in real or near-real time - is impossible. It is a human task. It is not automatable. You can design algorithms to compare copyright works to "watermark" versions of the same - an approach Google is working on to cut down on YT piracy - but you cannot design a computer program which can work out which videos - or text or images - are libellous or privacy-invasive. You just can't; well, maybe not until artificial intelligence has finally gone Singularity, and possibly not even then - human judges find it a hard enough task.
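To make the contrast concrete, here is a minimal sketch (emphatically not Google's actual system; every name and number in it is my own invention) of why copyright matching is automatable: two copies of the same work yield similar fingerprints, and "similar" reduces to counting bits. There is no equivalent reduction for "is this privacy-invasive?".

```python
# A toy illustration of fingerprint matching, assuming an upstream step
# (not shown) that hashes video frames or audio into 64-bit perceptual
# fingerprints. The comparison itself is purely mechanical.

def hamming_distance(fp_a: int, fp_b: int) -> int:
    """Count the bits on which two 64-bit fingerprints differ."""
    return bin(fp_a ^ fp_b).count("1")

def looks_like_copy(upload_fp: int, reference_fp: int,
                    threshold: int = 10) -> bool:
    """Flag an upload whose fingerprint sits within `threshold` bits of
    a registered reference work. No meaning or context involved."""
    return hamming_distance(upload_fp, reference_fp) <= threshold

# The function we cannot write is the telling one:
# def is_privacy_invasive(video) -> bool:
#     ...  # identifiability, consent, context: a human (judge's) call
```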

The ECD was actively designed to set up that kind of practical responsibility for hosts. Receive notice of illegality; take down, or else become liable for it. It raises other issues about kneejerk censorship (we'll come back to that), but it is at least a good start. So when a freak case like this undermines the notice and take down system, it really is time to get our facts straight.

One way out here is to provide an easy way for the worried to flag a video as "inappropriate". That definitely would be notice, to which a takedown response could be automated. Malcolm Coles accuses Google's alert systems of not working here, so I went and had a look. YT puts a "Flag" button below every video, fairly obviously, but it seems you can only use it if logged in. This means setting up a YT account, a process convoluted enough to put off a casual viewer, especially a one-time viewer alerted by someone saying "look, have you seen this, isn't it terrible?" This might explain why people left comments rather than gave "notice" on the YT site in the Italian case. (A sketch of what such a flag would actually need to capture follows below.)
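For the technically minded, the automatable part really is this thin. Here is a minimal sketch, with invented names throughout, of what a login-free flag button would need to record; the legally interesting field is the timestamp, since under the ECD immunity turns on acting expeditiously once notice is received.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Notice:
    video_id: str
    reason: str              # e.g. "privacy", "bullying", "copyright"
    received_at: datetime    # starts the "expeditious takedown" clock

@dataclass
class NoticeLog:
    notices: list = field(default_factory=list)

    def flag(self, video_id: str, reason: str) -> Notice:
        """Record a public flag with no account required; what happens
        next (takedown or human review) is a separate question."""
        n = Notice(video_id, reason, datetime.now(timezone.utc))
        self.notices.append(n)
        return n
```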

In which case, should Google be liable for failure to design robust systems of notice? If so, we're setting a very, very high bar for ECD immunity. Every host - which includes nearly every ISP and business in Europe with a website - would have to design obvious and accessible notice and take down buttons for the public, or fear legal liability. I can tell you from informal survey research I did myself a while back that most sites have far, far less information (if any) on how to give notice than YT. And in the UK, there is nothing in our law that requires this degree of specificity.

But there is another, more profound reason why automating takedown is not only impossible but undesirable. Google's complaints policy on privacy (for the UK) says:

"We don't act on all privacy complaints. The complaints we do act on usually involve videos, comments, or other text that contain your image or private information (such as social security number, government I.D., or credit card information). These days there's a good likelihood that you might get caught on camera if you're in a public place - whether it be a security camera or a tourist who inadvertently captures your image in their video. If you're complaining about a video that shows you in passing while you're in a public place, chances are we won't take action on your complaint unless you're clearly identified or identifiable in the video."

As a semi-expert in the field, that reads to me like a true outline of the law. It may not be true of Italy. However it shows the dangers of accepting any claim of privacy invasion lightly, from anyone, without checking. Human checking, that is - possibly even by a human lawyer, if that isn't a contradiction in terms. Do we want to live in a world where anyone can censor any online content simply by claiming some kind of abuse of rights - privacy, libel, copyright - and demanding automatic take down? It would be an easier world for Google, to be sure - and an appealing world for those who want, understandably, videos of their children being abused or bullied online removed as fast as possible - but bad news overall for the public interest in free speech and the public domain.
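Out of curiosity, here is what a triage layer respecting a policy like the one quoted above might look like, as a rough sketch with invented names: the mechanical checks (obviously private data such as card numbers) can be automated, while the identifiability judgment cannot, and so everything else falls through to a human queue.

```python
import re

# Crude pattern for 13-16 digit card-like numbers, optionally separated
# by spaces or dashes; a real system would validate far more carefully.
CARD_RE = re.compile(r"\d(?:[ -]?\d){12,15}")

def triage(complaint_text: str) -> str:
    """Route a privacy complaint: automate only what is mechanical."""
    if CARD_RE.search(complaint_text):
        return "expedite: likely private financial data"
    # "Is the complainant clearly identifiable in the video?" cannot
    # be answered by a regex, so it goes to a person.
    return "queue for human review"
```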

So how do we square this circle? If Google - and its competitors - can't primarily automate what they do, they cease to be able to function. Yet notice and take down is a process which, if automated, is either impossible or undesirable. Is there a solution? I'm only a lawyer, not a computer scientist. I'm not sure. But if the Google Italy fracas is to do any good, it should inspire a debate between science, business, law and the public about what that solution might be.

EDIT: ta to Charles Arthur at the Guardian for the nice link.

Wednesday, February 17, 2010

Filtering round up: French filtering, Ireland backs off, UK sidesteps?

Bit of a round up here on some interesting stories of the last few weeks on aspects of filtering that I've been accumulating.

Increasingly, stories about filtering out illegal content such as child pornography; blocking infringing downloads of copyright material by deep packet inspection and disconnection; and filtering to fight the "war on terror" are converging. For all of these, the same issues come up again and again: privacy; proof, transparency and other aspects of due process; and scope creep. These three stories illustrate this well. For my own recent take on the issue of Net filtering, as I said before, see my Internet pornography chapter on SSRN, which suggests the need for a Free Speech Impact Assessment before non-transparent state Net filtering schemes are introduced, for whatever purpose.

Filtering of illegal content in France

Thanks to @clarinette on Twitter (whose real name I am not absolutely sure of!!) for pointing me to another important European move towards non-transparent Internet filtering - this time in France. From La Quadrature du Net:

Paris, February 11th, 2010 - During the debate over the French security bill (LOPPSI), the government opposed all the amendments seeking to minimize the risks attached to filtering Internet sites. The refusal to make this measure experimental and temporary shows that the executive could not care less about its effectivity to tackle online child pornography or about its disastrous consequences. This measure will allow the French government to take control of the Internet, as the door is now open to the extension of Net filtering.

The refusal to enact Net filtering as an experimental measure is a proof of the ill-intended objective of the government. Making Net filtering a temporary measure would have shown that it is uneffective to fight child pornography.

As the recent move of the German government shows, only measures tackling the problem at its roots (by deleting the incriminated content from the servers; by attacking financial flows) and the reinforcement of the means of police investigators can combat child pornography.

Moreover, whereas the effectivity of the Net filtering provision cannot be proven, the French government refuses to take into account the fact that over-blocking - i.e. the "collateral censorship" of perfectly lawful websites - is inevitable. Net filtering can now be extended to other areas, as President Sarkozy promised to the pro-HADOPI ("Three-Strikes" law) industries.

LQN are never exactly ones to mince their words :-) so the strong nature of this statement should perhaps be taken with some care - but Pangloss intends to go investigate this story further.

Ireland, Eircom, disconnection and DP

Meanwhile in a surprising twist, Eircom have apparently pulled out of the negotiated settlement they reached in January 2009 to disconnect subscribers "repeatedly" using P2P for (alleged) illicit downloading. This was the result of the Irish court case brought against them by various parts of the music industry for hosting illegal downloads, and appeared to open up a route to "voluntary" notice and disconnection schemes on the part of the ISP industry; a worrying trend for advocates of free speech, privacy, due process, ISP immunity and net neutrality.

Now, however, according to the Times:

As part of the agreement, Irma said it would use piracy-tracking software to trace IP addresses, which can identify the location of an internet user, and pass this information to Eircom. The company would then use the details to identify its customer, and take action.

But the office of the Data Protection Commissioner (DPC) has indicated that using customers’ IP addresses to cut off their internet connection as a punishment for illegal downloading does not constitute “fair use” of personal information. Irma and Eircom have asked the High Court to rule on whether these data-protection concerns mean the 2009 settlement cannot be enforced.
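A minimal sketch, with invented table and field names, of the matching step the settlement contemplated shows why the DPC is uneasy: joining an IP address and a timestamp against the ISP's session logs yields a named subscriber, so the IP address ends up being processed as personal data, and the "fairness" of that processing becomes the live question.

```python
from datetime import datetime
from typing import Optional

def identify_subscriber(ip: str, seen_at: datetime,
                        session_log: list) -> Optional[str]:
    """session_log rows: (ip, lease_start, lease_end, subscriber_id).
    Dynamic IPs mean the timestamp is essential to the match."""
    for row_ip, start, end, subscriber_id in session_log:
        if row_ip == ip and start <= seen_at <= end:
            return subscriber_id  # the point at which an address becomes a person
    return None
```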

This is very, very interesting. A court case on this might settle a number of outstanding DP legal issues: whether IP addresses are "always" personal data (on which see also a recent EU study demonstrating the disharmony across Europe on this) and if not, when; what the scope of the exemptions for preventing and investigating crime is; and what "fair" means in the whole context of the DP principles, purpose limitation and notice for processing.

Not only that, but as the Times indicates, the human rights issues which have been repeatedly aired in debate around "three strikes" generally would also come into play, as well as the straight DP law. Is use of a customer's personal data to cut them off from the Internet a proportionate response to a minor civil infringement? Does it breach a fundamental right to freedom of expression or association? Does it breach due process? This could be the DP case of the decade. Pangloss is geekily excited. If anyone out there is involved in this case, do let me know.

UK cops don't terrorise the IWF?

Finally, as widely reported, the UK Home Office has introduced a website hotline for the public to report suspected terrorist or hate speech sites. Reports are then vetted by ACPO, the Association of Chief Police Officers, who it appears can then take action, not only by investigating in the normal way, but also by asking the relevant host site to take down. The official press release notes: "If a website meets the threshold for illegal content, officers can exercise powers under section 3 of the Terrorism Act 2006 to take it down." Indeed, on being served such a notice, the host only has two days to take down or lose immunity under the UK ECD Regs.
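For concreteness, here is a sketch of that two-day clock, on the assumption (which you should check against the actual Regulations) that only working days count:

```python
from datetime import date, timedelta

def takedown_deadline(served: date, working_days: int = 2) -> date:
    """Last day for compliance, counting working days only."""
    d = served
    while working_days > 0:
        d += timedelta(days=1)
        if d.weekday() < 5:      # Monday=0 .. Friday=4
            working_days -= 1
    return d

# A notice served on a Friday must thus be acted on by the following Tuesday.
```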

As TJ McIntyre also notes, this is a rather significant development, not just in itself but for sidestepping use of the Internet Watch Foundation (IWF). There have been persistent rumours, both before and since then-Home Sec Jacqui Smith's famous speech in Jan 2008, that the UK government was attempting to pressurise the IWF into adding reports of hate speech/terror to its block- or black-list; and that the IWF was strongly resisting this, hate speech being a somewhat more ambiguous and controversial matter than adjudicating on child sexual imagery.

It seems then that the IWF has held fast and the Home Office have backed off and created their own scheme, which embraces only take down in the UK, not access blocking to sites abroad (?). Whether this is ideal remains to be seen. The IWF, at least until recently, had the services of esteemed law prof Ian Walden as well as a lot of accumulated experience, and may have been a better informal legal tribunal than a bunch of chief constables to decide on the illegality of sites under terror legislation. Who knows. On the other hand, adding alleged terror URLs to an invisible, encrypted, non-public blocklist defeats every concept of transparency and public debate regarding restrictions on freedom of political speech, and Pangloss is glad to see it avoided.

Pangloss's view remains that such difficult non-objective issues are best decided by the body long set up to deal with questions of hazy legal interpretation: namely, the courts. The definition of "terrorist" material for the purposes of s 3 of the 2006 Act is as follows (s 3(7)):

"(a) something that is likely to be understood, by any one or more of the persons to whom it has or may become available, as a direct or indirect encouragement or other inducement to the commission, preparation or instigation of acts of terrorism or Convention offences; or

(b) information which—

(i) is likely to be useful to any one or more of those persons in the commission or preparation of such acts; and

(ii) is in a form or context in which it is likely to be understood by any one or more of those persons as being wholly or mainly for the purpose of being so useful."

Well, I hope that clears everything up :-) Still confused? Try s 3(8):
"(8) The reference in subsection (7) to something that is likely to be understood as an indirect encouragement to the commission or preparation of acts of terrorism or Convention offences includes anything which is likely to be understood as—

(a) the glorification of the commission or preparation (whether in the past, in the future or generally) of such acts or such offences; and

(b) a suggestion that what is being glorified is being glorified as conduct that should be emulated in existing circumstances."

Er, give me that last line again?

As with previous contested IWF rulings, the same questions come up again: what is the appeal from a take down notice under s 3 to the regular courts? What notice, if any, is given to the site owner and the public of the fact of, and reasons for, take down? What safeguards are there for freedom of speech? None of these are mentioned in ss 1-4 of the 2006 Act. Nor does there seem to be any general provision for appeals or review covering Part 1 or the whole of the 2006 Act. Since the police are a public body, however, one imagines that judicial review might be competent. EDIT: However, I am helpfully informed that ACPO is a company limited by guarantee and regards itself as not a public body, at least for the purpose of FOI requests. Clarity on this would be very desirable. And as noted below, record keeping of take down for terror reasons seems to be poor due to voluntary compliance by ISPs.

Finally, why introduce these powers if they are to be circumvented anyway? The Register reported on 12 November 2009 that so far no notices had been issued under s 3, because the UK ISPs involved had agreed to take down voluntarily, and no record has been kept of how many sites this involved. Furthermore, if a site is taken down in the UK it won't be hard to resurrect it in a foreign country, where most extremist sites will be based anyway: El Reg reports that one site the police allegedly have their eye on, al-Fateh, a Hamas anti-Jewish kids' site, is in fact hosted in Russia. One imagines this will continue to increase pressure on the IWF to expand the block list despite the latest moves.


Sunday, February 07, 2010

HL Committee on the Digital Economy Bill

Yes, that again:-)

As Twitter and ORG readers may know, I'm meaning to write some kind of interim summary of what the Committee stage in the House of Lords has "fixed" in the Digital Economy Bill with respect to the file-sharing and copyright provisions (A: not a lot) and what still urgently needs to be brought up at Report Stage and, if necessary, all the way to and through the Commons (A: an awful lot). This despite the best efforts of some exceptionally knowledgeable and persistent Lords, including though not limited to Lord Lucas, Lord Howard of Rising, Lord Clement-Jones and the Earl of Erroll.

However it seems my job has possibly been done for me - by Parliament's own Joint Committee on Human Rights. Their executive summary makes very, very interesting reading and is worth quoting in full:

"
The Digital Economy Bill has been introduced to update the regulation of the communications sector. Due to time-constraints we focus on a single issue in the Bill: illegal file-sharing.

Copyright infringement reports

The Bill establishes a mechanism whereby holders of copyright will be able to issue a 'copyright infringement report' to an ISP where it appears that the ISP's service has been used by an account holder to infringe copyright. ISPs will be required to notify account holders when a copyright infringement report is received in connection with their account. The ISPs will also be required to maintain a list of account holders who have been the subject of such reports.

We consider that it is unlikely that these proposals alone will lead to a significant risk of a breach of individual internet users' right to respect for privacy, their right to freedom of expression or their right to respect for their property rights (Articles 8, 10, Article 1, Protocol 1 ECHR). However, we call on the Government to provide a further explanation of why they consider their proposals are proportionate.

Technical measures

The Bill provides for the Secretary of State to have the power to require ISPs to take "technical measures" in respect of account holders who have been the subject of copyright infringement reports. The scope of the measures will be defined in secondary legislation and could be wide-ranging.

We do not believe that such a skeletal approach to powers which engage human rights is appropriate. There is potential for these powers to be applied in a disproportionate manner which could lead to a breach of internet users' rights to respect for correspondence and freedom of expression. We set out a list of points that the Government should clarify in order to reduce the risk that these proposals could operate in a manner which may be incompatible with the Convention.

Right to a fair hearing

The Bill provides for provisions for appeals in codes. There is little detail about the right to appeal in the case of copyright infringement reports or decisions about the inclusion of certain individuals' information on copyright infringement lists. We consider that statutory provision for a right to appeal to an independent body against inclusion on any infringement list would be a human rights enhancing measure.

Without a clear picture of the criteria for the imposition of technical measures it is difficult to reach a final conclusion on the fairness of the process for the imposition of technical measures. This is a further argument against the skeletal nature of the technical measures clauses. We ask for further information about the quality of evidence and the standard of proof to be applied to be provided on the face of the Bill.

Reserve powers

Clause 17 of the Bill provides the Secretary of State with the power to amend the Copyright, Designs and Patents Act 1988 by secondary legislation. The broad nature of this power has been the subject of much criticism. In correspondence with us, the Secretary of State explained that the Government intended to introduce amendments to limit the power in Clause 17 and to introduce a 'super-affirmative' procedure. The Government amendments would limit the circumstances in which the Government could use their powers to amend the Act by secondary legislation and would provide a system for enhanced parliamentary scrutiny.

Despite the proposed amendments we are concerned that Clause 17 remains overly broad and that parliamentary scrutiny may remain inadequate. We call for a series of clarifications to address these concerns."

Delightful to see such plain and clear and unadulterated good sense. I particularly applaud the second section: "We do not believe that such a skeletal approach to powers which engage human rights is appropriate." Put that on your tee shirt and smoke it.
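To see why the appeal point matters, it helps to reduce the CIR mechanism the summary describes to a sketch (all names here are invented, not the Bill's drafting): rightsholders send reports to the ISP, and the ISP notifies the account holder and quietly accumulates a register of who has been reported. The data structure makes the problem obvious: it is a register of unadjudicated allegations, which is exactly why a statutory right of appeal against inclusion matters.

```python
from dataclasses import dataclass
from datetime import date
from collections import defaultdict

@dataclass
class CIR:
    rightsholder: str
    ip_address: str     # the alleged infringer's address, as detected
    alleged_on: date

class ISPRegister:
    def __init__(self):
        # account_id -> list of reports; nothing here is ever *proved*
        self.reports = defaultdict(list)

    def receive(self, account_id: str, cir: CIR) -> None:
        self.reports[account_id].append(cir)
        self.notify(account_id, cir)    # the Bill's notification duty

    def notify(self, account_id: str, cir: CIR) -> None:
        print(f"Dear {account_id}: {cir.rightsholder} alleges your "
              f"account infringed copyright on {cir.alleged_on}.")
```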

In the meantime, all kinds of odd and eddying currents are flowing around the whole filesharing mess, here and abroad. In Blighty, we're seeing more and more sectors of industry, like the hoteliers, coming to the realisation of how bad the DEB will be for them as providers of wi-fi to the public; in Europe, the Belgian SABAM case, which imposed an impossible-to-fulfil filtering obligation on a Belgian ISP in the interests of rightsholders, is going on appeal to the European Court of Justice, with strong backing in evidence from trusted computer industry experts; and the first Ozzie case on intermediaries and file sharing since KaZaa has been heard, and, as with Oink in the UK, the music industry have done themselves no favours by bringing it (though this case, being civil, includes no room for accusations of perverse juries).

More on all of these to come, I suspect, but on the last, I direct you meanwhile to my colleague Technollama's very helpful comments on the Australian case. From Pangloss, it is bonne nuit.


Google and China: the fallout continues

Since I wrote my last post suggesting (rather speculatively) that Google's apparent willingness to pull out of China might be linked to US state fears of (and pressure concerning?) cyber espionage against data held by Google about US citizens instead of/as well as Chinese dissidents, the world has become very interested in the succeeding revelation by Google that they are now working with the NSA to improve their cyber defenses.

This raises all kinds of further questions: doesn't Google have as much expertise in computer security itself as the Spooks? Or as someone put it even more conspiratorially on Twitter: hadn't we always assumed Google was working with the spooks? In which case what drove a public admission of it now?

All fun stuff and clearly far beyond the ken of a mere academic lawyer. But today's Grauniad has an interesting quote:

"Google is unlikely to be turning to the NSA for technical advice. Why then is it calling in the spooks? One reason could be that the world's dominant internet company is now in the crossfire of early skirmishes of the next cold war.

This thought was reinforced by Financial Times columnist Gideon Rachman. He'd been to the International Institute for Strategic Studies for a briefing on its annual survey, Military Balance. "The thing I found most interesting," he said, "was the confirmation that cyber-security is the hot issue … John Chipman, the head of the IISS, says the institute is about to launch a study of cyber-security which raises all sorts of issues. What if a country's infrastructure could be destroyed as effectively by a cyber-attack as by an invasion of tanks? How do you defend against that? How do you identify the culprits? What does international law have to say – might we have to revise our definitions of what constitutes an act of war?"

"Chipman argues, plausibly, that we are now at an equivalent period to the early 1950s. Just as strategists had to devise whole new doctrines to cope with the nuclear age, so they will have to come up with new ideas to cope with the information age."

I've noted before that I find it difficult to see how current international law can define cyber attacks and especially cyber espionage as armed attacks justifying, e.g., the doctrine of self defense. But I've also now been to several events where military lawyers seemed to be, if not saying, then at least moving towards exactly that. It is clear we are entering the era of what is sometimes called "justificatory discourse" regarding cyber war, or PR in less elevated circles. (The irony of the fact this is playing out as the Iraq inquiry goes on is not lost on Pangloss. Nor that MI5 appears to be trying to get in on the action by revealing what bad stuff Chinese cyber spies have been doing in the UK too.) The same thing is, of course, happening in China too: one report from there notes that the average Chinese citizen is mostly apathetic to the loss of Google, but Chinese news coverage has "focused not on Google but on what is perceived as US 'information imperialism'."

And meanwhile, the ever excellent Ray Corrigan points out (I think - lots of interesting stuff packed in here) that cyberwar may be becoming the latest bogeyman, following hard on paedophiles and al-Qaeda, to justify incursions into our civil liberties. And that we are hardly ones to condemn China's Great Firewall when we do an awful lot of net censorship ourselves. (See further, dare I say, my own chapter here, which is the basis of the paper on cyber filtering and free speech I'm giving in a few days.)

OT: Looking at B2fxx reminds me I have been derelict in my duty in not mentioning that my colleague Chris Marsden's much-awaited book 'Net neutrality: towards a co-regulatory solution' is not only just published by Bloomsbury but also available for free download under a Creative Commons licence at http://www.bloomsburyacademic.com/pdf%20files/NetNeutrality.pdf . Lordy lordy, such wondrous times we live in!!