
Wednesday, September 26, 2012

2012/13 : the video!

It's become a Pangloss tradition at the start of the new academic year to find a new video I can use to scare the students. This year's conveniently just arrived courtesy of @niccuzor - thanks Nic!

Tuesday, September 28, 2010

Location, Location, Geolocation..

Pangloss is collecting material on this fascinating topic. In the meantime have a look at this rather wonderful presentation from Kevin Anderson: (via Matthias Klang on Twitter)

Wednesday, August 11, 2010

Do robots need laws? : a summer post:)


I can so use this for the EPSRC Robotics Retreat I am going to in September!! (via io9 with thanks to Simon Bradshaw)



Another slightly more legal bit of robotics that's been doing the rounds is this robots.txt file from the last.fm site. Robots.txt files, for the non-techies, are small text files which give instructions to software agents or bots as to what they are allowed to do on a site. Most typically, they tell Google and other search engines whether or not they are allowed to make copies of the site ("spider" it). No prize at all for the first person to realise what famous laws the last three lines are implementing :-)

User-Agent: *
Disallow: /music?
Disallow: /widgets/radio?
Disallow: /show_ads.php

Disallow: /affiliate/
Disallow: /affiliate_redirect.php
Disallow: /affiliate_sendto.php
Disallow: /affiliatelink.php
Disallow: /campaignlink.php
Disallow: /delivery.php

Disallow: /music/+noredirect/

Disallow: /harming/humans
Disallow: /ignoring/human/orders
Disallow: /harm/to/self

Allow: /
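
For the curious, here is a minimal sketch of how a well-behaved bot consults such a file before crawling, using Python's standard-library robots.txt parser (the last.fm URL is real; the paths tested simply mirror the file quoted above). Note that compliance is entirely voluntary - the file only binds bots polite enough to obey it, last three lines included:

from urllib.robotparser import RobotFileParser

# Fetch and parse the site's robots.txt
rp = RobotFileParser("https://www.last.fm/robots.txt")
rp.read()

# Test a few of the paths quoted above against the catch-all "*" user-agent
for path in ("/", "/affiliate/", "/harming/humans"):
    print(path, rp.can_fetch("*", "https://www.last.fm" + path))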

This all raises a serious underlying question (sorry) which is, how should the law regulate robots? We already have a surprising number of them. Of course it depends what you call a robot: Wikipedia defines one as "an automatically guided machine which is able to do tasks on its own, almost always due to electronically-programmed instructions".

That's pretty wide. It could mean the software agents or bots that, as discussed above, spider the web, place orders on auction sites like eBay, collect data for marketing and malware purposes, learn stuff about the market, etc - in which case we are already awash with them.

Do we mean humanoid robots? We are, of course, getting there too - see eg the world's leading current example, Honda's ASIMO, which might one day really become the faithful, affordable, un-needy helpmate of our 1950s Campbellian dreams. (Although what happens to the unemployment figures then?? I guess not that much of the market is blue collar labour anymore?) But we also already live in a world of ubiquitous non-humanoid robots - such as, in the domestic sector, the fabulous Roomba vacuum cleaner, beloved of geeks (and cats); in industry, automated car assembly robots (as in the Picasso ads); and, of course, the emergent military robots.

Only a few days ago, the news broke of the world's alleged first robot to feel emotions (although I am sure I heard of research prototypes of this kind at Edinburgh University's robotics group years back). Named Nao, the machine is programmed to mimic the emotional skills of a one-year-old child.

"When the robot is sad it hunches its shoulders forward and looks down. When it is happy, it raises its arms, looking for a hug.
Nao the robot
The relationships Nao forms depend on how people react and treat it

When frightened, it cowers and stays like that until it is soothed with gentle strokes on the head.The relationships Nao forms depend on how people react and treat, and on the amount of attention they show."

Robots of this kind could be care-giving companions not only for children, but perhaps also in the home or care homes for lonely geriatrics and long-term invalids, whose isolation is often crippling. (Though again I Ludditely wonder if it wouldn't be cheaper just to buy them a cat.)

Where does the law come into this? There is of course the longstanding fear of the Killer Robot: a fear which Asimov's famous First Law of Robotics was of course designed to allay. (Smart bombs are another possibility which already, to some extent, exist - oddly they don't seem to create the same degree of public distrust and terror, only philosophical musings in obscure B-movies.) But given that general-purpose, ambulant, humanoid, artificially intelligent robots are still very much in the lab, only Japan seems so far to have even begun to think about creating rules securing the safety of "friendly AIs" in real life, and even there a quick Google shows no further progress since draft guidelines were issued in 2007.

But the real legal issues are likely to be more prosaic, at least in the short term. If robots do cause physical harm to humans (or, indeed, property), at the moment the problem seems more akin to one for the law of torts, or maybe product liability, than murder or manslaughter. We are a long way away yet from giving rights of legal personality to robots. So there may be questions like: how "novel" does a robot have to be before there will be no product liability, because of the state of the art defence? How much of a capacity to learn and determine its own behaviours does a robot have to have before what it does is not reasonably foreseeable by its programmer? Do we need rules of strict liability for robot behaviour by their "owners" - as Roman law did, and Scots law still does, for animals, depending on whether they are categorised as tame or wild? And should that liability fall on the designer of the software, the hardware, or the "keeper", ie the person who uses the robot for some useful task? Or all three? Is there a better analogy to be drawn from the liability of master for slave in the Roman law of slavery, as Andrew Katz brilliantly suggested at a GikII a while back?

In the short(er) term, though, the key problems may be around the more intangible but important issue of privacy. Robots like Nao above are likely to be extensively used as aids to patients in hospitals, homes and care homes; this is already happening in Japan and South Korea and, judging from some conference papers I have heard, in the US too. Such robots are useful not just because they give care but because they can monitor, collect and pass on data. Is the patient staying in bed? Are they eating their food and taking their drugs and doing their exercises? Remote sensing by telemedicine is already here; robot aides take it one step further. All very useful, but what happens to the right to refuse to do what you are told, with patients who are already of limited autonomy? Do we want children to be able to remotely surveil their aged parents 24/7 in nursing homes, rather than trek back to visit them, as we are anecdotally told already occurs in the likes of Japan?

There are real issues here about consent, and welfare vs autonomy, which will need to be thrashed out. More worryingly still, information collected about patients could be directly channelled to drug or other companies - perhaps in return for a free robot. We already sell our own personal data to pay for web 2.0 services without thinking very hard about it - should we sell the data of our sick and vulnerable too?

Finally, robots will pose a hard problem for data protection. If robots collect and process personal data, eg of patients, are they data controllers or processors? Presumably the latter; in which case very few obligations pertain to them except concerning security. This framework may need adjusting, as the ability of the human "owner" to supervise what they do may be fragile, given learning algorithms, bugs in software and changing environments.

What else do we need to be thinking about?? Comments please :-)

Sunday, August 08, 2010

Reforming privacy laws

The videos are up from the recent very successful ORGcon on digital rights; and here is the vid of the panel Pangloss appeared on (all speakers v good - I appear about 9 mins in).

[ORGCon] Reforming Privacy Laws from Open Rights Group on Vimeo.

Note that since this event in July the Commission has announced the draft reform proposal for the DPD will be delayed till probably the second half of 2011 (sigh!)

For those interested, the recent Wall Street Journal spread on privacy threats from a US perspective is also well worth perusing (follow links at end for related stories - there are 6 or so)

The Sunday Times are supposed to be publishing a UK follow-up today (August 8) in which Pangloss should be quoted - but since it's behind a paywall I haven't been able to check :-)

Thursday, June 17, 2010

A Day in Paris (Is Like a Year In Any Other Place.)

Pangloss just spent a very intense, very challenging day at the OECD Workshop on the Liability of Online Intermediaries, sadly curtailed by the need to rush off on a plane to Estonia (of which more anon). The idea was to kick off a major programme of work in this area, and the great and good were assembled in force, with pithy comments and insights coming thick and fast.

Danny Weitzner, who was a fresh-faced freedom fighter for the CDT when I first met him, transmogrified into a rising star at the W3C and MIT, and is now an adviser to Obama (ah, why doesn't UK academe provide this kind of career path!), led the forces favouring, by and large, US-style industry self-regulation, but noted that even in 1731 Benjamin Franklin had recognised the need for intermediary immunities by presenting an "Apology for Printers" (of the human, not inkjet, kind) lest they be persuaded by criticism to print only texts they were personally convinced by.

Peter Fleischer, chief privacy counsel for Google, made the political decidedly personal, by commencing his intervention on privacy and intermediaries with anecdotes about being a convicted criminal who could no longer enter Italy (prompting mildly irascible responses from various Italians trying to make it plain they were not exactly the new China). Gary Davis from Ireland, perhaps a tad controversially for a data protection deputy commissioner, noted that there seemed to be emerging agreement on trading personal data for free web 2.0 services, but the question was, how much data was too much data; and Bruce Schneier (no link needed!) created the biggest stir of the day (to Pangloss’s silent cheers) by mentioning almost casually that, at least in relation to security, he had never had much time for user education. An unnamed EU Commission person made the sign of the cross and quoted liberally from the EC’s Safer Social Networking principles. Lightning did not however smite the infidel Schneier.

Jean Bergevin, in charge of the European Commission's much-delayed but upcoming review of the E-Commerce Directive (ECD) (expect a consultation soon), took ferocious notes and reminded those present that although copyright and criminal liability may steal the headlines, the exclusion of gambling from the ECD gives a case study of how these things pan out (clue: not well) when safe harbours for intermediaries are not in place. The response seemed to be for the actual gambling hosting websites to move safely offshore, leading to undue pressure from states against payment intermediaries, so as to starve the unauthorised gambling sites of funds; yet, on the whole, these strategies merely multiplied bureaucracy and were still unsuccessful, since the grey market found ways round them (as it did, I noted, when similar strategies were applied to stymie offshore illegal music sites like AllofMP3.com in Russia). Later Mr Bergevin finally enlightened me as to why the ECD excludes data protection and privacy from its remit, as was famously publicised during the Google Italy case; not some abstract academic justification, but just that "that belonged to another Directive". Time to raise the issue of intermediary liability in the ongoing DPD reform process then, methinks?

My own main contribution came in the first scene-setting session, where Prof. Mark MacCarthy of Georgetown University kicked off discussion on whether the OECD (which is also soon to review its longstanding and much-applauded privacy guidelines) could conceivably come up with similar global guidelines on intermediary liability acceptable to all states, all types of intermediaries (ISPs, search engines, social networking sites, domestic hosts, user-generated content sites?) and all types of content-related liability (copyright, trademark, porn, libel, privacy, security?). Everyone agreed that once upon a time a rough global consensus on limited liability, based around the notice and takedown (NTD) paradigm, had been achieved c 2000, with the standout exception of the US's CDA s 230(c), which provided total immunity to service providers in relation to publication torts, but which was seen in the EU at least as something of a historical accident.

Since then, however, twin pressures from both IP rightsholders seeking solutions to piracy, and states keen to get ISPs to police the incoming vices of online child pornography, pro-terror material and malware, had converged to drive some legislatures, and some courts, towards re-imposing liability on online intermediaries (graduated response laws and ISPs being one of the most obvious case studies) and even moving tentatively from a post factum NTD paradigm to an ex ante filtering duty (SABAM, some Continental eBay counterfeit goods cases, the projected Australian mandatory filtering scheme for adult content). While the “top end” of the market might sort its own house out in the negotiable world of IP without further regulation (see the protracted Viacom v YouTube saga, which could be seen as a very expensive game of blind negotiator’s bluff) other areas were (still) less amenable to self regulation.

Privacy was identified very early on as an outstanding example of this: getting sites like Facebook and Google, which live off the profits of selling their clients' personal data, to take the main responsibility for policing those clients' privacy was, as one speaker said, like getting the wolf to guard the sheep. Ari Schwartz of the CDT interestingly noted the new-ish difficulty of getting businesses like Facebook to take responsibility vis-a-vis their own users for third-party apps using their platform. Apple, however, were piloting a new model of responsibility by careful selection of the apps allowed to use their platforms, while Google Android were doing it differently again (I want to come back to this fascinating discussion in a separate post).

My own points circled around the idea that, increasingly, the "one size fits all" approach enshrined in the ECD does not really work; more in relation to types of liability, though (copyright vs libel, for example, with very different balances of rights and public policy at work), than in relation to types of intermediaries. (Did search engines really need a special regime, of the kind the DMCA has and the ECD doesn't, I was asked? My answer, given that the two most troublesome EC Google cases - Italy and AdWords - have actually related to hosting not linking, was probably no (though that still leaves Copiepresse to sort out).)

However there was also room for thinking about different regimes for different sizes of intermediaries - small ISPs and hosts, eg, will simply crumble under the weight of any potential monitoring obligations, jeopardising both freedom of expression and innovation, while, in a similar bind, Google can afford to build a Content ID system for YouTube which lets filtering become, effectively, a monetising opportunity. All this of course still avoided the main problem of how complicit or "non-neutral" (in the words of the ECJ AdWords case) an intermediary has to be in relation to illegal or infringing behaviour or content (cf eBay, YouTube, Google etc) before it should lose any special immunities. On that point, even the EU, let alone the OECD, is going to have to work very, very hard to find consensus.

Security provided the best example (and the best panel) of the day on how market-driven self regulation cannot always provide an optimum solution in the Internet intermediary world, given the prevalence of what became known by shorthand as “misaligned incentives”. Put simply, this refers to the situation where A causes harm to B (or to everyone) but does not suffer the costs of those harms themselves and so has no or few incentives to correct/avoid them. So one of the most obvious ways to reduce malware spread, botnet threats, etc would be to ask ISPs to monitor users on their networks, isolate them if they became apparently infected by malware, and refuse to allow them to rejoin the Internet until they had submitted to “decontamination” and perhaps mandatory reloading of anti-virus protection plus automatic patching. In fact however ISPs mostly don’t do this; partly because there’s no extra money in it for them, but rather a possibility of years of wearying customer care; partly because many ISPs still think (probably wrongly, the Prodigy years are over) that taking any active steps may lead to them being held legally liable to the customer or for bad content. The bad effects meanwhile are felt by (a) society and (b) sometimes though not always, the customer: so misaligned incentives all round. Notwithstanding this, we heard heartening tales of newly launched voluntary initiatives in Germany and Australia for local ISP industry to take part in isolation and decontamination – so hurrah for that, and let us hope the OECD takes this on board as an important if not “traditional” part of the intermediary liability issue.

(This was where the Bruce Schneier quote on user education came in – and I have to say I absolutely agree. If you want a safer Internet for all – a societal aggregate good of security – you do not leave complex choices to be made by domestic users, who not only don't understand either the risks or the options, but will never be interested enough, or continually educated enough, to do so. But this is not the same as when you talk about privacy, which is primarily an individual not a social good, and where society views the individual making an informed choice as a key element of their autonomy as a subject of human rights. But talking about consent to giving up personal data on SNSs took us into the world of age verification for kids and its impact on privacy, an even nastier can of worms, and no-one's going to convince me you can get kids to use anonymous digital signatures when it's hard enough to persuade lawyers to do this.)

In short, a day with so much to chew on, my jaw ached by the end. Very sorry I had to miss the last two sessions: if anyone reading has notes on any preliminary conclusions reached, I’d be pleased to see them. Thanks to Karine Perset of the OECD especially for organising the day. Meanwhile I hope myself to stay involved both with this OECD work, and the revision of the ECD; as I often say, watch this space.

Wednesday, December 09, 2009

Facebook Privacy: Fact or theory?

Xmas comes early for privacy advocates?!

The Register reports

"Facebook has ordered its 350 million users to sort out their privacy settings right now, before it throws the switch on its revamped security system.

The social network's farmer-in-chief, Mark Zuckerberg, told users last week: "We're adding something that many of you have asked for — the ability to control who sees each individual piece of content you create or upload." He also promised a simplified privacy page.

...In today's warning, coinciding with the actual launch of the tools, Facebook promised its new Publisher Privacy Control would allow users to set a privacy setting for each piece of content they create.

The firm is also removing its "regional networks", in favour of four basic control settings: friends, friends of friends, everyone and customised.

This will be allied with an "easy, intuitive and accessible" privacy settings page."


Well, hmm, let's see - but Blogzilla, it looks like we may finally have to rewrite that FB paper!

Of course in other news today, Sophos, who discovered two years ago that most FB users would reveal their most private details to a cartoon frog, found on replicating the study in Australia two years on that... well, nothing had really changed.

"The survey found that 46% of users in a fictional 21 year old's age group accepted the offered friendship, while 41% of a fictional 56 year old's peers did.

On Facebook once someone has been accepted as your 'friend' they can see more information about you, but you can still choose to hide information from those friends or limit it to specific groups amongst your online friends....

"Both groups were very liberal with their email addresses and with their birthdays," said Sophos head of technology in Asia Pacific Paul Ducklin. "This is worrying because these details make an excellent starting point for scammers and social engineers.""

Ah well, you can't have everything!



Monday, September 21, 2009

Facebook and privacy

Via Andrea Matwyshyn - after the Canadian reforms and this, what next?

"A Look at the Facebook Privacy Class Action (Beacon) Settlement

Facebook announced on Friday that it settled the class action challenging its "Beacon" advertising program. [Inside Facebook; h/t Jim McCullagh on Twitter] You can access the key docs here: [pdf] (Settlement Agreement; Motion for Preliminary Approval).

Net result? Facebook establishes a privacy foundation funded with $9.5 million (or what's left of this amount after attorneys' fees, costs, and class claims are deducted). "

Thursday, August 27, 2009

Canada Forces Facebook to make Privacy Changes

(via Ian Brown)

In a remarkable turn of events, Facebook has agreed to add significant new privacy safeguards and make other changes in response to the Privacy Commissioner of Canada’s recent investigation into the popular social networking site’s privacy policies and practices.

"The following is an overview of key issues raised during the investigation and Facebook’s response:

1. Third-party Application Developers

Issue: The sharing of personal information with third-party developers creating Facebook applications such as games and quizzes raises serious privacy risks. With more than one million developers around the globe, the Commissioner is concerned about a lack of adequate safeguards to effectively restrict those developers from accessing users’ personal information, along with information about their online “friends.”

Response: Facebook has agreed to retrofit its application platform in a way that will prevent any application from accessing information until it obtains express consent for each category of personal information it wishes to access. Under this new permissions model, users adding an application will be advised that the application wants access to specific categories of information. The user will be able to control which categories of information an application is permitted to access. There will also be a link to a statement by the developer to explain how it will use the data.

This change will require significant technological changes. Developers using the platform will also need to adapt their applications and Facebook expects the entire process to take one year to implement.

2. Deactivation of Accounts

Issue: Facebook provides confusing information about the distinction between account deactivation – whereby personal information is held in digital storage – and deletion – whereby personal information is actually erased from Facebook servers. As well, Facebook should implement a retention policy under which the personal information of users who have deactivated their accounts will be deleted from the site’s servers after a reasonable length of time.

Response: Facebook has agreed to make it clear to users that they have the option of either deactivating their account or deleting their account. This distinction will be explained in Facebook’s privacy policy and users will receive a notice about the delete option during the deactivation process.

While we asked for a retention policy, we looked at the issue again and considered what Facebook was proposing. We determined the company’s approach – providing clarity about the options, offering a clear choice, and alleviating the confusion – is acceptable because it will allow users to make informed decisions about how their personal information is to be handled.

....

4. Accounts of Deceased Users

Issue: People should have a better way to provide meaningful consent to have their account “memorialized” after their death. As such, Facebook should be clear in its privacy policy that it will keep a user’s profile online after death so that friends can post comments and pay tribute.

Response: Facebook agreed to change the wording in its privacy policy to explain what will happen in the event of a user’s death."

Pangloss is mildly amused that only two years after she, Ian Brown and Chris Marsden presented a paper highlighting the privacy and security issues around the use of third party apps on Facebook, changes are finally being made.

The interesting issue will be whether these changes are made only for Facebook in Canada or applied worldwide; similar legal pressure has not, it seems, been exerted in other jurisdictions such as the UK and the US - but there has certainly been concern over the repeated use of third-party apps as an easy way to collect personal data for fraudulent or criminal purposes, or to spread malware. One might speculate that if FB is investing in developing new, more privacy-compliant code, it might as well install it system-wide, given the PR advantages and the fact that FB's growth appears to have peaked (the rate of growth has been declining since about January 08). Chris Soghoian on Twitter seems to indicate the changes will be worldwide. If so, the Canadians have certainly done us all a favour.

Pangloss is also intrigued by the Canadian concern over Facebook's treatment of profiles on death. While the matter is certainly a pressing one (with 200 million users, not all young, FB profiles are, sadly, often a major concern to relatives after death), in fact FB has been pretty much in the vanguard in the area of transmission of digital assets, in at least providing a clear and accessible way for relatives to ask for profiles to be "memorialised" after death.

Other sites where digital "assets" remain after death (eg eBay, Flickr, et al) are in general much less clear about what rights they offer relatives after death, have hard-to-penetrate procedures on the matter, or actively refuse to allow relatives control after death (see the famous Yahoo! case where relatives of a US marine were initially refused access to his emails after his death because the privacy policy forbade passing on information to any third party. At least in the US, the privacy policy remains unchanged to date.)

However in my recent talk on this subject, I also suggested that it would be easy for FB, in its various preference settings, to allow users themselves to indicate what they would like done with their profiles after death. Not all want their profiles left open for comments after death; some would like them closed down; others might like a friend or relative to make the decision. One size does not fit all, and a solution should also consider and balance the interests of both the profile owner and the relatives. However if FB takes a lead here under Canadian persuasion, it may well benefit all by becoming a good-practice example in a rather under-considered part of the web 2.0 field.

Tuesday, March 24, 2009

Google Street View - Up Your Street?

Many of my friends and colleagues have been having fun with Google Street View since it went live in the UK last week. My social networking Friends lists are full of people exclaiming "OH MY GOD that's my house!!!" or happily pointing out their car, their garden and, in one case, their boyfriend leaning out of the window. Those who live in cities where it has not yet rolled out bemoan their luck and count how many yards they are from the Googlezone.

Others are not so happy. Privacy International, a well-respected privacy watchdog, have already announced their intention to take Google to court on the grounds that it is breaking data protection law, and have made a formal complaint to the Information Commissioner.

Says the Beeb, "Privacy International wants the ICO to look again at how Street View works.

"The ICO has repeatedly made clear that it believes that in Street View the necessary safeguards are in place to protect people's privacy," said Google.

Privacy International (PI) director Simon Davies said his organisation had filed the complaint given the "clear embarrassment and damage" Street View had caused to many Britons."


So is G. Street View ("manic street features", as another BBC piece gleefully calls it) the greatest free-of-cost and publicly available innovation to hit online mapping ever, or another piece in the jigsaw of ubiquitous commercial and government surveillance in the UK?

Pangloss admits to having been far more excited than worried when she first got the news. Google have invested a pretty large amount of effort into protecting privacy, having learnt from earlier protests and roll-outs in the US, as well as accepting the reality of data protection law in Europe. Faces and number plates have been, with some fairly low margin of error, pixelated out. There are indeed errors: we have already had reports of people asking to have images taken down because they depicted them being sick outside a pub or visiting a well known brothel. But Google have also provided an extremely easy-to-use take-down request system. Have they done enough?

My esteemed colleague Ian Brown of the OII doesn't think so (and repeated these feelings during a brisk debate last night at a post-privacy-conference dinner :) Said Ian to the Beeb:

"They [Google] should have thought more carefully about how they designed the service to avoid exactly this sort of thing."

Dr Brown said Google could have taken images twice, on different days, so offending images could have been easily replaced and protected privacy better.

Google says it has gone to great lengths to ensure privacy, suggesting that the service only shows imagery already visible from public thoroughfares."


There are a number of ways to frame this debate. One is the question of opt-in to privacy versus opt-out. In the same way that Google Library has tried to push copyright discourse from opt-in - consent by authors to copying of their work - to opt-out - asking to be left out of the scheme if you do not want copies made (and failed, given the recent settlement?) - it is arguably trying to do the same with privacy here.

If privacy is indeed a fundamental human right, then it can be argued that in principle no one should have to be exposed to even a low risk of an intrusion of privacy by error (let's leave the debate on what that exactly is, plus the debate on how far your privacy stretches in a public place, aside for the minute) and then have to request take-down; instead they should always be asked to give consent a priori. This is probably, in gist, PI's argument as to why what Google is doing is illegal.

In strict law, Pangloss is not really sure if this is right: the UK DP Act (and the EC DP Directive) do not always demand consent to processing of personal data - there is a well-known exception which allows processing to be undertaken without consent if it is in pursuance of a legitimate aim of the data controller (Google) and does not at the same time unreasonably prejudice the fundamental rights of data subjects (DPD, Art 7(f)).

A "few dozen" requests seem to have been made for take down, according to the BBC. If we knew how many views there are on GSV we could work out what percent have been privacy invading.It is probably a very very low percent. But is this the right way to construct the Art 7(f) balance? or should we be looking only at the degree of privacy invasion suffered by each individual data subject concerned - how much they lost - their wife, their job?

We need a real debate here about whether privacy invasion should be regarded as a purely individual issue or a societal problem; similarly, whether GSV brings advantages to society as a whole (surely?) and whether these outweigh the privacy loss to the few individuals. If GSV sparks this debate it will in itself have been of value.

Ian's compromise solution above - essentially, get it right the first time so as to minimise privacy intrusions requiring post factum take-down - is a pragmatic one, but does not in essence meet the above theoretical problem. It raises another pragmatic problem too - Google has already spent vast amounts providing a fantastic service for free to the UK public. Yes, they will gain from ad revenue - but this is still an enormous free gift to the public as a whole. How much more money would it be reasonable to ask them to spend to meet the needs of the very few?

Taking two pictures of every location would presumably have doubled costs. Would fewer cities then be rolled out? Would there be more social and digital exclusion? Will rural areas ever be included, in fact? And would someone living next to a person who had had "his" street view pulled be justifiably irritated at his own social exclusion? Should the invaded privacy rights of a few be allowed to stifle technological innovation for everyone? If we consider the P2P debate, where the same issue arises - should the economic interests of the few in the entertainment industries be allowed to stifle useful innovation for the rest of us? - then generally the informed answer is no. There are many more societal cost/benefit balances to be thought about here.

In the meantime, Pangloss is going to go off to yet another workshop to talk about privacy and trust in next generation networks. Do we indeed trust Google to know where we live and to respect our privacy? Most do but some don't, it appears. Yet Google cannot automate, and thus provide at reasonable cost, the amazing services it delivers for "free", unless we all agree to this in advance, or at least are presumed to agree, subject to later opt-out. This may be becoming a key problem of the digital era :)

Thursday, February 19, 2009

Facebook U-Turn on New Terms and Conditions

Following Facebook's recent climbdown on their change of terms & conditions to continue claiming a license to use and publish user data even after users delete their profile, here's a few comments from me in New Scientist.

As I said to the interviewer but which failed to get quoted, the real interest in this little storm in a digital tea cup has been in demonstrating what lawyers know but users rarely think of, namely that Facebook can change their terms any time they damn well like, to be more - or usually less - privacy-friendly.

At the moment, FB's privacy policy declares that users only consent to the sharing of their data with advertisers and marketers in anonymised or aggregated form - but there is no reason why that can't change any day to FB selling full details of users' personal data. And given the downturn in the advertising fortunes of web 2.0, and the fact that Facebook anecdotally still makes almost no money despite its huge userbase and is worth far less than was once thought, can that day be far away?

Ownership of personal data and control over users' own generated content are issues that could well be regulated by model clauses in the current boom in Codes of Practice for social networking sites: instead, unsurprisingly, they tend to concentrate on kiddy safety - see eg the latest EC effort in this direction. The proposals do however include the useful provision that the profiles of all users under 18 should be set to "friends only" by default. (This ignores the need for protection of adult privacy though.)

In any case, even sales of aggregate anonymised data now pose a danger to privacy which current DP law wholly fails to notice. At the recent Information Security Best Practices conference 2009, run by the Wharton School, Pennsylvania, several security expert speakers on the Data Mining and Privacy panel emphasised the improvements in deriving personal data from aggregate data. The bottom line appears to be that anonymised data as a concept is heading for extinction. Interesting times.
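
For readers wondering how "anonymised" data gets turned back into personal data, here is a toy sketch of the classic linkage attack of the kind the panellists were describing: a scrubbed dataset is joined to a public one on quasi-identifiers. Every record below is invented:

# Invented data: an "anonymised" health record is re-identified by joining
# it to a public voter roll on postcode/zip, date of birth and sex.
anonymised_health = [
    {"zip": "02138", "dob": "1945-07-31", "sex": "F", "diagnosis": "(withheld)"},
]
public_voter_roll = [
    {"name": "Jane Doe", "zip": "02138", "dob": "1945-07-31", "sex": "F"},
]

for h in anonymised_health:
    for v in public_voter_roll:
        if all(h[k] == v[k] for k in ("zip", "dob", "sex")):
            print(v["name"], "->", h["diagnosis"])  # Jane Doe -> (withheld)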

(And despite all this Pangloss is still on FB, albeit behind a lot of privacy locks. Do as I say not as I do, kiddoes.)

Schedule update:

24 February, PLC seminar: "Social Networking Sites, Privacy and Other Legal Aspects", sold out but contact Claire.Dine@practicallaw.com for cancellations.

4 March, Aberdeen University Law Faculty, "Phishing In A Cyber Credit Crunch World".

18-20 March, WSRI Web Science Conference, Athens, chairing panel on "What can Web Science Do for the Privacy of Data Subjects?: Law, Privacy and Data Retention in a Post 9/11 World".

23 March, London, attending Privacy Value Network Advisory Board.

30-31 March: speaking at SCRIPT-ed Governance of New Technologies Conference, Edinburgh

22-23 April: speaking at BILETA 2009 - The 24th Annual Conference, Winchester

That'll do for now:)

Wednesday, February 18, 2009

When MI5 tell you the state is spying on its citizens too much...

.. they're probably right?

Stella Rimington, our very own real-life M, in the unlikeliest declaration of support for the forces of light of this or any other week :)

" “It would be better that the Government recognised that there are risks, rather than frightening people in order to be able to pass laws which restrict civil liberties, precisely one of the objects of terrorism: that we live in fear and under a police state.”

Wednesday, September 17, 2008

More Scottish info privacy news

While we're making Scotocentric comments on HBOS meltdown day, another snippet, slightly late, from OUT-LAW on 12/9/08:


The Scottish Government has asked a panel of experts to produce rules for public bodies to follow so that personal information and privacy is better protected. The move follows a series of UK-wide data breaches involving public authorities.

The panel will produce guidance for public bodies to ensure that they are treating personal information properly. That guidance will be subject to public consultation before any adoption by the Scottish Government.

The group of experts includes representatives from the public and private sectors and includes Rosemary Jay, a privacy law expert at Pinsent Masons, the law firm behind OUT-LAW.COM.

The group also includes Gus Hosein of Privacy International, Scottish Government director of corporate services Paul Gray, assistant information commissioner for Scotland Ken Macdonald, Edinburgh University honorary fellow Charles Raab and Jerry Fishenden, Microsoft's lead technology advisor for the UK."


Pangloss notes with approval this list of luminaries but feels slightly sad they didn't ask her, just when she's (sort of) moved back to Edinburgh. Ah, hubris!

Friday, July 25, 2008

Just another silly season Friday..

In the immortal words of John McEnroe..


You cannot be se-rious....

Someone do a LOL cat please? I CAN HAS LIVER WITH A NICE CHARDONNAY NAO PLIS?

It'd be good if it had the IT Crowd in it too :) (So hey, Judith, are they infringing personality rights too? Is there an exception in the German law of personality for parody or comment?)

Sez OUT-LAW:

"The Court looked into the degree to which the pursuit of artistic freedom interfered with the personality rights of Meiwes. It found that artistic freedom was not so powerful a right that it allowed for someone's life to be made into a horror film.

Meiwes advertised online for someone to be killed and eaten by him. Bernd Jürgen Brandes responded to his advert and tried to join Meiwes in eating his own severed penis before being killed and eaten."

But his life *IS* A HORROR FILM!!!

More legally: I'm all for autonomy, but do you have a right to assert your personality so as to gain a reward or remedy if it involves doing criminal acts? Does a serial killer have a right to get a movie about him banned in Germany because it's not horrible ENOUGH!? Surely there's some version of the Dworkinian principle of not profiting by your own wrong here?

Wow it's a great time to be a privacy lawyer. Nazi orgies (allegedly). German cannibals. Any guesses on what next?

EDIT: Ok, this next. But hey, haven't all the cool kids given up playing Scrabulous anyway?

Well that took a full ten minutes..

Also this, about which I can say little other than that it's about time they started selling close-target limited tactical nuclear strikes on eBay.

I think I'll go back to bed! :)

Thursday, July 24, 2008

Meanwhile after Mosley.. a privacy and libel round-up

For a change, something privacy related.

So what do we think of the Mosley case? In many ways this is absolutely nothing new, let alone "landmark". We have had a long string of cases which support the idea that press intrusion into the firmly private lives of celebrities will be regarded as a serious breach of privacy. This wasn't even a difficult case: the events took place in private behind closed and locked doors, not in the more contested world of the outdoors (cf Rowling (Murray v Big Picture)); the case wasn't contaminated, as in Douglas, by the existence of a threatened connected revenue stream. It wasn't a contested kiss-and-tell dispute as in Ash, where opposing rights of freedom of expression and privacy of non-press parties clashed. This really was a pure privacy and reputation case, about as intimately private a matter as you can get, an exotic sex life, where the incentive of the newspaper was to sell lots of newspapers. It doesn't seem surprising therefore that the damages award was so high, or that the judge was so critical of the paper involved.

Nor is there really anything very new on the tabloid side. It's clear that if there really had been a "public right to know" here, the case would have gone the other way. But the Nazi allegations were never proven and the NotW botched its defence. Frankly, Pangloss remains bemused how, even if Mr Mosley did spend every Tuesday goosestepping in jackboots and lederhosen singing Tomorrow Belongs To Me, this would have much to do with his "public" role, the handling of Formula 1 racing. But perhaps this is one of these sporty things we females are not privy to. (I don't understand why footballers are expected to have faithful marriages either, or why the public should care either way.)

Still, as my colleague Judith Rauhofer wrote triumphantly to say, this case certainly affirms the aphorism from earlier cases: even if the public is "interested", it won't necessarily be "in the public interest" for the details to be disclosed.

The much bigger issue is how far the flowering emergence of UK post-HRA privacy jurisprudence will go. Almost everyone except the tabloids thinks the UK's tabloid press needs to be restrained, by privacy case law in the absence of legislation.

But what if it were not the press but me or you who had blown the gaffe on Mosley? We live in the web 2.0 world after all. What if I had spilled it in my blog? What if someone had set up a fake Mosley Facebook profile in which his interests were claimed to be the Luftwaffe, iron crosses and Eva Braun, his sexuality was described as Random Play with Whips, and his politics as Neo-Fascist?

This isn't altogether hypothetical. Oddly enough, today someone also got successfully sued for £15K damages for libel, and £2K for privacy, for setting up a fake profile on Facebook in an attempt to embarrass and belittle his former mate from school. (He sounds quite a horrible person, but that's not the point really.)

The fake FB profile actually involved lies about its subject, or it wouldn't have led to a libel award. But the next case, after Mosley and the rest, could easily involve only private and damaging, but not false, details.

One clear example that clarifies where this might lead is one Judith and I debated at the Law and Society conference in Montreal - is there now a human right not to be "outed"? Tonight I watched a documentary in which John Barrowman explained in copious detail how glad he is to be gay. But not everyone feels that way. Indubitably, outing can cause damage - everything from loss of job to loss of friends, and from emotional distress to suicide in some cases. Shouldn't it be actionable?

But - do I, an individual, have an ethical duty not to harm my fellow man, if I do not lie? Maybe I do, but that is still a long way from a legal duty. The judge in the Mosley case stated:

"The law now affords protection to information in respect of which there

is a reasonable expectation of privacy, even in circumstances

where there is no pre-existing relationship giving rise of itself to an

enforceable duty of confidence. That is because the law is concerned to

prevent the violation of a citizen's autonomy, dignity and self-esteem."



But don't I too, as part of my rights of autonomy and personality and self-esteem, have a right to describe the world as I see it, as long as I don't lie, defame or negligently misstate? These are my duties of care, the traditional limits of freedom of speech. I am not required in general to protect and sustain the image my friends and enemies want to project - to be part of their personal PR agency. Nor should I be.

Of course if I out my friends, they are unlikely to stay my friends and I might well be ostracised in my social group. Shouldn't these social norms and sanctions suffice? Yet it is hard to see exactly where to draw the line between the next Facebook case, the one about privacy not defamation, and the outing example. There is also surely a societal interest in truth, and critique, as well as in privacy.

Do we really want the whole world to be a giant self-fulfilment and image-protection arcade? Or do we want the right to say, "but look - the Emperor has no clothes"? Or perhaps even, in today's case, no jackboots.

Tuesday, June 17, 2008

It's amazing..

.. what you see on TV these days.

The local news just had this story about a shopping mall in Portsmouth where mobile tracking technology by Path Engineering has been installed - which I have tracked to this story from the Register.

"By installing receivers around a shopping centre the company can pick up communication between handsets and base stations, enabling them to track shoppers to within a metre or two - enough to spot the order in which shops are visited. Two UK shopping centres are already using the tech, with three more deploying in the next few months."

As far as one can tell, the tracking is completely non-identifying; neither the shopping centre nor Path knows personal mobile phone numbers or corresponding user names. The TV report showed predictable reactions: why weren't we told; I don't like it; I've got nothing to hide; etc.

So what do people think? Despite the obvious knee-jerk reaction, as the info is completely non-attributable to identified individuals, I really can't see a problem. You could get exactly the same results (at greater cost) by posting tellers at each shop or destination in the shopping centre to do counts all day, every day - would anyone object to that on privacy grounds?

(Hmm - I suppose yes, if they could identify the shoppers. Technology actually has the privacy advantage here of being blind. Here we're presupposing CCTV isn't used in some way to identify the mobile shoppers - which, despite what El Reg suggests, would be extremely difficult to arrange in real time.)

I think it's important here to separate technophobic squeamishness from real privacy concerns. (This is also not like Phorm, where anonymity had been artificially imposed and could easily be "broken". Here the mobile tracking system simply doesn't know your personal phone number or your name.)
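
As a purely illustrative sketch of why such a system can stay blind - assuming, as the Register piece implies, that the receivers only ever see an opaque per-handset identifier, never a number or a name:

# Invented data: each receiver logs (opaque_handset_id, shop, time).
# Shop-visit orderings fall out without anyone knowing whose handset it is.
from collections import defaultdict

sightings = [
    ("a91f", "entrance", 1), ("a91f", "bookshop", 2), ("a91f", "cafe", 3),
    ("77c2", "entrance", 1), ("77c2", "cafe", 2),
]

routes = defaultdict(list)
for handset, shop, t in sightings:
    routes[handset].append((t, shop))

for handset, visits in routes.items():
    print(handset, "->", [shop for _, shop in sorted(visits)])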

Of course you need to separate it too from a consent-based tracking system which can be abused, by forced or mistaken consent, to reveal significant personal data, like Sniff. Which I'm sure everyone else has blogged enough about by now.


And completely off-topic: in the Guardian today, I nearly choked on my post-swim coffee at the ostensible discovery that gay men and heterosexual women (and straight men and lesbians) apparently have similar-shaped brains. If true this could destroy several decades of careful academic work on cultural construction :)

And now Newsnight is trying to tell me that Obama will be made or broken by Internet bloggers. Possibly time to turn off the TV and write some more of the third edition of Law and the Internet instead :)

Monday, March 17, 2008

Phorm an orderly queue

It might easily be said that the British just love creating problems with Phorms..

Here is the press release for the FIPR official letter to the ICO on the current Phorm controversy. It has my full support as a lucid and explanatory response to a pressing and potentially worrying incursion into consumer privacy (disclaimer: I am a member of the FIPR advisory board.)

FIPR Press Release

For Immediate Release: Monday 17th March 2008

Open Letter to the IC on the legality of Phorm's advertising system
-------------------------------------------------------------------

The Foundation for Information Policy Research (FIPR) has today released
the text of an open letter to Richard Thomas, the Information
Commissioner (IC) on the legality of Phorm Inc's proposal to provide
targeted advertising by snooping on Internet users' web browsing.

The controversial Phorm system is to be deployed by three of Britain's
largest ISPs, BT, Talk Talk and Virgin Media. However, in FIPR's view
the system will be processing data illegally:

* It will involve the processing of sensitive personal data: political
opinions, sexual proclivities, religious views, and health -- but it
will not be operated by all of the ISPs on an "opt-in" basis, as is
required by European Data Protection Law.

* Despite the attempts at anonymisation within the system, some people
will remain identifiable because of the nature of their searches and
the sites they choose to visit.

* The system will inevitably be looking at the content of some
people's email, into chat rooms and at social networking activity.
Although well-known sites are said to be excluded, there are tens or
hundreds of thousands of other low volume or semi-private systems.

More significantly, the Phorm system will be "intercepting" traffic
within the meaning of s1 of the Regulation of Investigatory Powers Act
2000 (RIPA). In order for this to be lawful then permission is needed
from not only the person making the web request BUT ALSO from the
operator of the web site involved (and if it is a web-mail system, the
sender of the email as well).

FIPR believes that although in some cases this permission can be
assumed, in many other cases, it is explicitly NOT given -- making the
Phorm system illegal to operate in the UK:

* Many websites require registration, and only make their contents
available to specific people.

* Many websites or particular pages within a website are part of the
"unconnected web" -- their existence is only made known to a small
number of trusted people.

The full text of the open letter can be viewed at:

http://www.fipr.org/080317icoletter.html

QUOTES

Said Nicholas Bohm, General Counsel, FIPR:

"The need for both parties to consent to interception in order for
it to be lawful is an extremely basic principle within the
legislation, and it cannot be lightly ignored or treated as a
technicality. Even when the police are investigating as serious a
crime as kidnapping, for example, and need to listen in to
conversations between a family and the criminals, they must first
obtain an authorisation under the relevant Act of Parliament: the
consent of the family is not by itself sufficient to make their
monitoring lawful."

Said Richard Clayton, Treasurer, FIPR:

"The Phorm system is highly intrusive -- it's like the Post Office
opening all my letters to see what I'm interested in, merely so that
I can be sent a better class of junk mail. Not surprisingly, when
you look closely, this activity turns out to be illegal. We hope
that the Information Commissioner will take careful note of our
analysis when he expresses his opinion upon the scheme."

CONTACTS

Nicholas Bohm
General Counsel, FIPR
01279 870285
nbohm@ernest.net

Richard Clayton
Treasurer, FIPR
01223 763570
07887 794090

NOTES FOR EDITORS

1. The Foundation for Information Policy Research (http://www.fipr.org)
is an independent body that studies the interaction between
information technology and society. Its goal is to identify
technical developments with significant social impact, commission
and undertake research into public policy alternatives, and promote
public understanding and dialogue between technologists and policy-
makers in the UK and Europe.

2. Phorm (http://www.phorm.com/) claims that their "proprietary,
patent-pending technology revolutionises both audience segmenting
techniques and online user data privacy" and has recently announced
that it has signed agreements with UK Internet service providers BT,
TalkTalk and Virgin Media to offer its new online advertising
platform Open Internet Exchange (OIX) and free consumer Internet
feature Webwise.

3. In a statement released on 3rd March the Information Commissioner's
Office (ICO) said:

"The Information Commissioner's Office has spoken with the
advertising technology company, Phorm, regarding its agreement
with some UK internet service providers. Phorm has informed us
about the product and how it works to provide targeted online
advertising content.

"At our request, Phorm has provided written information to us
about the way in which the company intends to meet privacy
standards. We are currently reviewing this information. We are
also in contact with the ISPs who are working with Phorm and we
are discussing this issue with them.

"We will be in a position to comment further in due course."


Wednesday, June 27, 2007

FaceBook Brought to Book?

My colleague Ian Brown of Blogzilla reports on an interesting post on why Facebook may be violating European privacy law.

The article reveals that creating an "exploit" in FaceBook - ie hacking the privacy of unsuspecting users - is trivially easy. All you have to do is use Advanced Search and you can search across controversial (and in European DP language, "sensitive") pieces of data such as Religion and Sexuality in apparently unlimited numbers of profiles. This is true even if the user has taken steps to protect the privacy of their data (see below). As Ian comments, this is a security failure on FB's part, which should have been trivially easy to fix in their code.

Having just returned from the SCL Conference, where it was revealed that over 3 million people in the UK are on Facebook (including apparently nearly every corporate lawyer in the UK.. and definitely at Allen and Overy :-) and that it is growing in the UK at 6% per WEEK, this is serious, er, excrement.

Pangloss's own experimentation shows that in fact hacking FaceBook is even easier than this. Suppose you want to stalk person X, who you know lives in London. All you have to do is set up an FB profile and join the London network - which requires NO validation, certainly not a University of London email address or the like - and suddenly you can see all their personal details - some of which (on brief inspection) are highly revealing of social and sexual data that many people would not want public. Of course they may not have joined the London network - but very often it will be very easy to guess what network the stalkee is in.

Of course, FaceBook will say, you, the stalkee, can stop this. You can in fact change all your privacy defaults on FB so no one can see ANYTHING on your profile unless they are people you have accepted as "Friends". (Pangloss has just gone and done this, with a vengeance.) Fair enough, except that the default privacy settings on FB are almost entirely in favour of disclosure, and there is very little direction or instruction on the site to "change these defaults for heaven's sake, 300,000 people can see who you want to sleep with".

As the blogger above, Quiet Paranoia (great name) comments, "Users cannot be expected to know that the contents of their private profiles can be mined via [advanced] searches, and thus, very few do set the search permissions associated with their profile."

I agree. If an, er um, respected professor of privacy law can take some while to realise how exposed her data is on FaceBook, then it is unreasonable to expect children of 16 or 17 (FB is associated with high school students, but the T&C say 13 up) to make these kinds of difficult judgment calls, when what they are really concerned about is popularity and finding out about the good parties.

FB will say that they have provided opt-in to privacy, and anyone who does not avail themselves of the tools available is impliedly giving consent to processing of their data. They will also point to their privacy policy, which does not give the impression of overwhelming concern about the remarkably weak default privacy protection and, indeed, security offered by FaceBook.

"You post User Content (as defined in the Facebook Terms of Use) on the Site at your own risk. Although we allow you to set privacy options that limit access to your pages, please be aware that no security measures are perfect or impenetrable. We cannot control the actions of other Users with whom you may choose to share your pages and information. Therefore, we cannot and do not guarantee that User Content you post on the Site will not be viewed by unauthorized persons. We are not responsible for circumvention of any privacy settings or security measures contained on the Site. You understand and acknowledge that, even after removal, copies of User Content may remain viewable in cached and archived pages or if other Users have copied or stored your User Content."

Even Pangloss, who is no privacy fundamentalist, does not think this is good enough, particularly in relation to "sensitive personal data", where "explicit consent" to processing by third parties is required. (Is searching via key words "processing"? Almost certainly - see Art 2 of the Data Protection Directive, which includes "retrieval" whether or not by automatic means.)

But FB will again say: everyone who signs up to FB assents to the T&C. Does that mean they have given the requisite explicit consent to processing of sensitive data even by "unauthorised third parties"? Even if in pure contract law the T&C can be read this way, at this point both DP law and the Unfair Contract Terms Directive should surely converge to make such a clause either void or unenforceable?

In comparison, another social networking site where Pangloss hangs out, Live Journal, has not only very sophisticated privacy controls, but also a culture of discussion and awareness that privacy and openness can be manipulated by the software. Of course privacy breaches do still occur (via "cut and paste fairies" for example) but they are pretty rare.

Do we need a legal solution? Is there a case for extension of DP law to cover the setting of defaults on social network sites? Should privacy not be the default, by law (perhaps with some exceptions to preserve functionality, such as name and network), and openness the opt-out, rather than the reverse? Maybe. Maybe all that is needed is an industry Code of Practice combined with some upping of awareness of the issue. However with the number of people - especially young pre-employment proto-citizens - involved in web 2.0 sites rising by the minute, this really does seem an issue which is not merely knee-jerk alarmism and should not be swept under the carpet. First-year students may not care now about spilling their sexuality and contacts to the world: they may when they are older, wiser and looking for employment :)

Another suggestion might be the automatic expiry of social networking data after, say, six months, unless the user chooses to opt in to keeping their data out there. Viktor Mayer-Schoenberger has made this kind of suggestion recently. In social networking sites, where the whole business model is based around large databases of personal data, data is routinely retained apparently forever. Data retention is another area where the DP authorities might want to have a look at whether the law needs tweaking.
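
A minimal sketch of what such an expiry rule might look like in code - every name and number below is invented for illustration, not drawn from any real site:

# Hypothetical Mayer-Schoenberger-style expiry: profile data lapses six
# months after consent unless the user has opted in to keep it out there.
from datetime import datetime, timedelta

RETENTION = timedelta(days=182)  # roughly six months

def expired(record, now):
    if record["opt_in_retention"]:  # an explicit opt-in trumps expiry
        return False
    return now - record["consented_at"] > RETENTION

profiles = [
    {"user": "alice", "consented_at": datetime(2006, 12, 1), "opt_in_retention": False},
    {"user": "bob",   "consented_at": datetime(2006, 12, 1), "opt_in_retention": True},
]
print([p["user"] for p in profiles if expired(p, datetime(2007, 7, 1))])
# => ['alice']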

Thursday, May 31, 2007

Google faces EU Regulation?

Finally today (honest), the Art 29 WP has issued a significant letter criticising Google's privacy protection of personal data. Google is now to be the subject of an Art 29 report.

Google's recent olive branch of increasing privacy protection by anonymising server logs older than 18-24 months is dismissed as insufficient data minimisation for EU law. In particular, the 30-year duration of a Google cookie (!) is mentioned as disproportionate.

Interesting to compare our cousins over the pond.. where this blogger suggests that Google can be seen as the Transparent Society in action. Since everyone, including commerce and the state, already collects far more data about us than we know of or can control, isn't one way to fight back to have all that data openly available to everyone, not just the state - as collected by a private and semi-neutral organisation, ie Google?

"On the one side is that massive data integration by the State - and if you think you'll see much data from that, you'll be waiting a long time. On the flip side all the other data, just put out there for people to use. The State's default mode is to hide everything, Google's is to put it out there for everyone to use.

I know which society I'd prefer to live in."

I don't agree, at all, but it's an interesting angle. Especially in the age of the shadow of the ID database..

Back at market regulation, web 2.0 is already beginning to provide us with companies whose business model is to allow you to track down what data people hold about you (a right you have in law under DP, but how the hell do you do it in aggregate in practice?) - try looking at Garlik for example.

PS: More from the Beeb on this, with an emphasis on Google's recent acquisition of DoubleClick.