Pangloss was surprised but also a little smug, as she'd covered this story as far back as May last year, and in detail here. While we're waiting for an opinion to come from the Italian court (apparently required within 90 days; and is there an Italian translator out there, please?) it is perhaps worth refreshing the reader's memory of the only four ways I could see this case going against Google, assuming Google did plead the ECD (bit of a no-brainer, that).
1. Italy may not have implemented the ECD at all, or may not have implemented it properly. In which case Google has a claim for damages against Italy, and the case may eventually end up in the ECJ, to hilarious embarrassment.
2. Italy may not think the ECD applied to Google/YouTube as a host, because of doubts about the "independence" of YT as an intermediary from its users. This argument has prevailed in some high-profile French cases, but has largely been rubbished in most of the rest of the EU. In particular, the "YouTube complicit with users" argument may have some legs when we are talking about YT making money from ads next to popular copyright videos, e.g. MTV clips, and thus, conceivably, being seen to profit from copyright infringement (cf the current Viacom US litigation); but it has absolutely none in the case of a video of this kind. Basically, YT provided a platform and got nothing out of the deal except trouble.
3. Italy may not think the ECD applied to Google/YouTube as a host, because the ECD may only apply to commercial operators. This theory has been almost entirely exploded, and will be conclusively so when the Google AdWords case gets its full judgment from the ECJ next month. The Advocate-General's preliminary Opinion, as I noted in November, already plainly agrees that a search engine like Google, which makes money indirectly from adverts while free to users, can fall within the ECD. The UK courts have also so agreed.
4. The ECD safe harbour for hosts says, basically, that they are immune from liability for what they publish until they receive "notice" of illegal content. It says neither that they have to pre-vet videos, nor that they have to read all the comments below a video. Pangloss suspects this, if anything, is the legal ambiguity in the case. Google says they took down as soon as the police gave them notice; Google's opponents say "but the video was up for two months and people complained in comments". Should those "comments" have been regarded as notice, then? In which case, did Google have a duty to pro-actively read them?
This is the bit that gets me annoyed. Google's success, as the Guardian's Charles Arthur explained cogently the other day, is built on automating everything. This doesn't mean that Google should be free of all responsibility for what goes on on its watch, but it does mean that exercising that responsibility should be practicable, or we lose Google and all its free chocolate factory offerings. Reviewing every comment under the millions of videos on YouTube - and in a multiplicity of languages - and in real or near real time - is impossible. It is a human task. It is not automatable. You can design algorithms to compare copyright works to "watermark" versions of the same - an approach Google is working on to cut down on YT piracy - but you cannot design a computer programme which can work out what videos - or text or images - are libellous or privacy-invasive. You just can't; well maybe not until artificial intelligence has finally gone Singularity, and possibly not even then - human judges find it hard enough a task.
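To make that asymmetry concrete: matching an upload against a known reference work is a mechanical comparison, which is roughly why copyright fingerprinting can be automated while libel or privacy judgments cannot. A toy sketch (every name here is hypothetical, and real systems use perceptual audio/video fingerprints rather than byte hashes, but the shape of the automatable task is the same):

```python
import hashlib

def fingerprint(data: bytes, chunk_size: int = 4) -> set[str]:
    # Hash overlapping chunks of the content: a crude stand-in for a
    # real perceptual fingerprint (illustrative only, not how YT does it).
    return {
        hashlib.sha256(data[i:i + chunk_size]).hexdigest()
        for i in range(max(len(data) - chunk_size + 1, 1))
    }

def likely_copy(upload: bytes, reference: bytes, threshold: float = 0.8) -> bool:
    # If most of the upload's chunks appear in the reference work,
    # flag it as a probable copy. No human judgment required.
    overlap = fingerprint(upload) & fingerprint(reference)
    return len(overlap) / len(fingerprint(upload)) >= threshold

reference = b"official MTV clip audio stream bytes"  # the rights-holder's copy
pirate = reference[5:30]                             # a clip lifted from it
print(likely_copy(pirate, reference))                # True: matches mechanically
```

No analogous function exists for "is this video libellous?" or "does this invade someone's privacy?", because the answer depends on context, identity and law, not on comparing bytes against a reference.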
The ECD was actively designed to set up that kind of practical responsibility for hosts. Receive notice of illegality; take down, or else become liable for it. It raises other issues about kneejerk censorship (we'll come back to that), but it is at least a good start. So when a freak case like this undermines the notice and take down system, it really is time to get our facts straight.
One way out here is to provide an easy way for the worried to flag a video as "inappropriate". That would definitely count as notice, to which a takedown response could be automated. Malcolm Coles accuses Google's alert systems of not working here, so I went and had a look. YT puts a "Flag" button below every video, fairly obviously, but it seems you can only use it if logged in. This means setting up a YT account; a process convoluted enough to put off a casual viewer, especially a one-time viewer alerted by someone saying "look, have you seen this, isn't it terrible?" This might explain why people left comments rather than gave "notice" on the YT site in the Italian case.
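Structurally, the point is that once something counts as formal notice, the response can be automated; it is deciding whether a complaint is valid that cannot be. A minimal sketch of a flag-driven takedown queue (all class and method names are hypothetical, purely to illustrate the notice-and-take-down shape):

```python
from dataclasses import dataclass, field

@dataclass
class Video:
    video_id: str
    visible: bool = True
    flags: list[str] = field(default_factory=list)

class TakedownQueue:
    """Once a flag counts as formal notice, removal can be automatic."""

    def __init__(self, flag_threshold: int = 1):
        self.flag_threshold = flag_threshold

    def flag(self, video: Video, reason: str) -> None:
        video.flags.append(reason)
        # The ECD-style rule: notice received -> take down, or accept liability.
        if len(video.flags) >= self.flag_threshold:
            video.visible = False

video = Video("clip-123")
queue = TakedownQueue()
queue.flag(video, "privacy complaint")
print(video.visible)  # False: flagged content comes down automatically
```

The hard question the post goes on to raise is hidden in `flag_threshold`: any purely mechanical trigger will take down lawful content on the say-so of anyone who presses the button.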
In which case, should Google be liable for failure to design robust systems of notice? If so, we're setting a very, very high bar for ECD immunity. Every host - which includes nearly every ISP and business in Europe with a website - would have to design obvious and accessible notice and take down buttons for the public, or fear legal liability. I can tell you from informal survey research I did myself a while back that most sites have far, far less information (if any) on how to give notice than YT. And in the UK, there is nothing in our law that requires this degree of specificity.
But there is another, more profound reason why automating takedown is not only impossible but undesirable. Google's complaints policy on privacy (for the UK) says:
"We don't act on all privacy complaints. The complaints we do act on usually involve videos, comments, or other text that contain your image or private information (such as social security number, government I.D., or credit card information). These days there's a good likelihood that you might get caught on camera if you're in a public place - whether it be a security camera or a tourist who inadvertently captures your image in their video. If you're complaining about a video that shows you in passing while you're in a public place, chances are we won't take action on your complaint unless you're clearly identified or identifiable in the video."
To a semi-expert in the field, that reads like a true outline of the law. It may not be true of Italy. However, it shows the dangers of accepting any claim of privacy invasion lightly, from anyone, without checking. Human checking, that is - possibly even by a human lawyer, if that isn't a contradiction in terms. Do we want to live in a world where anyone can censor any online content simply by claiming some kind of abuse of rights - privacy, libel, copyright - and demanding automatic take down? It would be an easier world for Google, to be sure - and an appealing world for those who, understandably, want videos of their children being abused or bullied online removed as fast as possible - but bad news overall for the public interest in free speech and the public domain.
So how do we square this circle? If Google - and its competitors - can't primarily automate what they do, they cease to be able to function. Yet notice and take down is a process which, if automated, is inherently either impossible or undesirable. Is there a solution? I'm only a lawyer, not a computer scientist. I'm not sure. But if the Google Italy fracas is to do any good, it should inspire a debate between science, business, law and the public about what that solution might be.
EDIT: ta to Charles Arthur at the Guardian for the nice link.