A UK-based cyberlaw blog by Lilian Edwards. Specialising in online privacy and security law, cybercrime, online intermediary law (including eBay and Google law), e-commerce, digital property, filesharing and whatever captures my eye :-) Based at The Law School of Strathclyde University. From January 2011, I will be Professor of E-Governance at Strathclyde University, and my email address will be lilian.edwards@strath.ac.uk.
Sunday, February 26, 2006
Security, Spam and EBay Law round up
Finally tonight, folks, also worth noting: yet another intensely sensible comment on trust and security from Bruce Schneier, my man of the moment:
and a clip I've meant to blog for some time - Yahoo! and AOL have reinvented the email postage stamp, only a year after Bill Gates did, we all laughed at him, The Great Unwashed Public said "We aren't gonna pay extra for steenking email!" and he said "Gee, that wasn't such a good idea after all, huh?". OK, the new scheme is a bit different: Yahoo! and AOL say it will give stamped email "preference", rather than acting, as Gates first envisaged it, as a spam whitelist. This still won't make the public like it, so it's being sold as a way of prioritising business email. Do you want your email de-prioritised? I sure as hell don't.
Also there's this: "AOL and Yahoo will still accept e-mail from senders who have not paid, but the paid messages will be given special treatment. On AOL, for example, they will go straight to users' main mailboxes, and will not have to pass the gantlet of spam filters that could divert them to a junk-mail folder or strip them of images and Web links."
So if you're a spammer with a bit of start-up cash, all you have to do is pay the stamp and you evade all filters. OK, 99.99% of spammers won't do that, but it still irks me, as the whole point of spam is that it is unsolicited. Spam filters should apply if it's SPAM no matter how much blood money has been paid! OK, the NY Times report adds "The senders must promise to contact only people who have agreed to receive their messages, or risk being blocked entirely" - but like, all spammers have been totally truthful up till now? Riiight!
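To make the objection concrete, here's a minimal sketch in Python of the routing logic the NY Times describes, where a valid paid stamp short-circuits filtering entirely. The header name, stamp registry and scoring threshold are all invented for illustration - this is emphatically not AOL's actual pipeline.

# A minimal sketch, NOT AOL's actual pipeline: routing logic in which paid,
# "stamped" mail skips the spam filters entirely. The header name, stamp
# registry and scoring threshold below are all hypothetical.

PAID_STAMPS = {"stamp-12345"}  # hypothetical registry of purchased stamps

def spam_score(body):
    # toy content filter: count spammy words
    return sum(body.lower().count(w) for w in ("viagra", "winner", "free"))

def route(headers, body):
    if headers.get("X-Paid-Stamp") in PAID_STAMPS:
        return "inbox"  # paid mail bypasses filtering altogether
    return "junk" if spam_score(body) >= 2 else "inbox"

print(route({"X-Paid-Stamp": "stamp-12345"}, "FREE money, you WINNER"))  # inbox
print(route({}, "FREE money, you WINNER"))                               # junk

Exactly the irksome point: identical content, two different mailboxes, and the only variable is whether money changed hands.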
There's also the point that Yahoo! and AOL simply keep the "postage stamp" money. When economic models of stopping spam were first proposed a year or two back, the general foundational idea was that the "spam tax" money would not be kept by ISPs but raked back by the government, or at least some independent body, to be spent on functions of use to the whole Internet - like developing better spam filters. This way it becomes just another revenue stream for Yahoo!/AOL. Back to the NY Times article: "From AOL's perspective, this is an opportunity to earn a significant amount of money from the sale of stamps," he said. "But it's bad for the industry and bad for consumers. A lot of e-mailers won't be able to afford it."
Meanwhile, back at The Register, the old idea of strict liability for data breaches has reared its head again in the wake of the theft of a laptop from a mortgage lender containing 550,000 people's full credit information. In the US, the Gramm-Leach-Bliley Act (GLBA), 15 USC 6801, demands that holders of financial data take reasonable care of it. In the end, however, the mortgage lender was found to have behaved reasonably: "it was not foreseeable that the laptop containing this information, being kept in this home office, might be the subject of a burglary. The court even deemed the location to be a "relatively safe" neighborhood in suburban Washington DC. This is despite the fact that last year alone there were a large number of laptop thefts across the United States."
Finally, just a marker of what might be a significant case in the beginning of the end for EBay's carefully kept position of "intermediary neutrality". Tiffany, the diamond folks, are suing EBay for essentially aiding and abetting the passing off of Tiffany fakes via their site. It's hard to see how EBay, unlike old-fashioned ISPs, can maintain that they can only stay in business if not held liable for third party content, when their entire business model is based on taking a cut from that third party content. The fact that EBay maintains pages of guidance on not selling goods such as counterfeits on its site merely demonstrates that (a) they know the problem exists but (b) they aren't going to spend any (OK, many) resources on solving it, even though they have the benefit of access to far more data than either the businesses whose trademarks are infringed or the police. Watch this one run.
Oyster cards, privacy and security
Not that I'm claiming I started it or anything, but there has been something of a flurry in the press lately about the Transport for London Oyster card and how easily it can be used to track down an individual's movements. No one came back last time to tell me how an Oyster card worked (well, except Ian Brown), but from the Register and Independent on Sunday articles, it seems you need nothing beyond the actual card in your hand to access journey information at a kiosk, but slightly more security operates when you try to get the info on-line from your own PC:
"The IoS claims that Oyster journey data can be extracted at a ticket machine using the card, or online by keying the serial number of the card. As far as The Register is aware, however, internet access is slightly more secure than this, requiring a username and password or the serial number, and mother's maiden name or similar, from the application form. These are not, however, insuperable hurdles for the suspicious spouse or close friend, and access to the individual's email account would probably be enough for a snooper to change passwords and gain access to the account itself."
As the Register points out, the current basic level of security helps no one. Either close down access altogether - why do you need to access details of your OWN journeys? You KNOW where you've been!! - or add some decent security, like a password for ticket machine access.
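For what it's worth, the fix being asked for is tiny. A minimal sketch in Python of gating journey history behind a PIN the card itself doesn't carry - all names and data here are invented, and this is obviously not TfL's actual system:

import hashlib, hmac

# invented example data: PIN hashes set at registration, never stored on the card
PIN_HASHES = {"card-001": hashlib.sha256(b"4921").hexdigest()}
JOURNEYS = {"card-001": ["King's Cross 08:12", "Bank 08:31"]}

def journey_history(card_id, pin):
    expected = PIN_HASHES.get(card_id)
    supplied = hashlib.sha256(pin.encode()).hexdigest()
    if expected is None or not hmac.compare_digest(expected, supplied):
        raise PermissionError("PIN required to view journey history")
    return JOURNEYS[card_id]

print(journey_history("card-001", "4921"))  # correct PIN: journeys returned
# journey_history("card-001", "0000")       # wrong PIN: PermissionError

The point being that possession of the card alone would then tell the suspicious spouse nothing.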
And as they also add, the problem will become more pressing if/when, as has been planned for a year or so, the Oyster card scheme is extended to become a smart card wallet, used in DigiCash-like ways to pay for small purchases like milk and papers.
"The IoS claims that Oyster journey data can be extracted at a ticket machine using the card, or online by keying the serial number of the card. As far as The Register is aware, however, internet access is slightly more secure than this, requiring a username and password or the serial number, and mother's maiden name or similar, from the application form. These are not, however, insuperable hurdles for the suspicious spouse or close friend, and access to the individual's email account would probably be enough for a snooper to change passwords and gain access to the account itself."
As the Register point out, the current basic level of security helps no one. Either close down access altogether - why do you need to access details of your OWN journeys? you KNOW where you've been!! - or add some decent security like a password for ticket machine access.
And as they also add, the problem will more pressing if/when , as planned for a year or so, the Oyster Card scheme is extended to become a smart card wallet, used in DigiCash like ways to pay for small purchases like milk and papers.
Who Do You Trust, Reloaded?
Interesting response from my coder guru pal, Pete Fenelon. I don't agree with every word, but I thought it was worth reproducing in full.
Overview: code signing and secure OSes won't work - but that's not where the effort should be going; it should be going into creating a well-policed interface between private systems and the network - and making the owners of those systems liable.
PF: I admit that I'm something of an oddball in my views here, but I believe that what goes on behind your net connection is your own business; what comes out of it is very much not your business. Same as I can have a rocket-powered car in my garage, but I'm toast if I try to take it on the road. ;)
LE: Bill argues that being asked to trust the people who supply "trusted" software - people like Sony - is akin to owning a car where you can't look under the bonnet.
PF: And what's wrong with this? -- most people who buy cars these days don't know diddly about what goes on under the bonnet, and entrust repairs to qualified professionals (or at least people who they think are qualified professionals). Most home computers are "administered" by "our Kevin who's dead good with computers, he gets high scores on all them games he gets discs of". "Our Kevin" often isn't mindful of the consequences (or even existence) of malware, and would click "OK" like a Pavlovian dog if it meant playing a warez version of Halo 3.
Bill: "I have a very nice car, and I try to take good care of it. It runs on petrol, but I want the freedom to fill it up with diesel and destroy the
engine. It's my engine, after all.
PF: Well, in many cases it's probably the finance or leasing company's engine, but hey...
Bill: The same goes for my computer. I want the freedom to write, compile and run my own code, take risks with dodgy software I've downloaded from the net and even break the law and risk prosecution by playing unlicensed music or running cracked software. "
PF: It might well be "his computer", in the same way that it's "his car", but his car has to be MOTed regularly to ensure that it still complies with the law, and he has to take out insurance against any damage he might cause to others. When people call the Internet the "information superhighway" they seem to forget that the real highway isn't a free-for-all -- there are people out there watching what you do, there are laws by which you and your vehicle must abide if you wish to drive on it, and you must be licensed to even venture onto it. The penalties are (or at least should be) draconian. The analogy is simple; we don't have "car cops" in Britain who stop you fitting an eight litre engine and slick tyres to your Morris Minor, we have "traffic cops" who get peeved if they see it on the road. Similarly, we shouldn't have "computer cops" who stop you installing Frigware Linux R00tK1T 3D1T10N, we should have "network traffic cops" who pull the plug if your machine starts behaving dangerously.
PF: Right now, lives aren't at stake on the Internet (although no doubt some fool will eventually connect up some safety-critical equipment to an unprotected public network and someone will get hurt), but the economic well-being of others is. What we need isn't a technical solution; it's a financial/legal one. We need:
PF: Liability for damage caused by anything coming from a network endpoint for which a particular legal entity (individual, corporation) is responsible.
PF: Regulation of apparatus that can connect - and I don't mean the old BABT red/green triangles, I mean mandating *approved* firewall/gateways between the public network and any equipment connected to it. Found without a firewall or a working, up-to-date AV system? (And your ISP will be probing - otherwise it'll be fined and potentially ostracised at LINX or similar... or at least it would be in my universe.) Exactly the same as having no catalytic converter, no headlights and bald tyres -- your connection "goes dark" and you're fined. Simple as that.
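A minimal sketch in Python of the kind of compliance probe Pete imagines - the ISP checking a subscriber endpoint for risky exposed services before letting it stay connected. The port list, timeout and "darken" policy are invented for illustration:

# A minimal sketch (ports, timeout and policy invented) of an ISP-side
# compliance probe: scan a subscriber endpoint for risky exposed services
# and "darken" the connection if any are found.

import socket

RISKY_PORTS = [23, 135, 139, 445]  # telnet, RPC, NetBIOS, SMB - common worm targets

def port_open(host, port, timeout=1.0):
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def probe_endpoint(host):
    exposed = [p for p in RISKY_PORTS if port_open(host, p)]
    return f"DARKEN connection: ports {exposed} exposed" if exposed else "OK"

print(probe_endpoint("192.0.2.1"))  # TEST-NET address; substitute a host you own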
PF: Unfortunately I don't believe that licensing of individuals as fit to use computers can take place - for a start there's the problem of proving who's in control of a machine at any point.
PF: I also don't believe that licensing of applications can meaningfully be done. True 'trusted computing' costs, and costs in the eight figure sort of region for a typical project. And, frankly, how far does trust go? You can't trust any mainstream commercial or open-source desktop operating systems, not with the level of flaws found in them (and for an amusing aside, google "Reflections On Trusting Trust"). True Trusted Computing platforms are expensive, inflexible, and don't offer the kind of experience that modern end users expect -- it'd be like stepping back around 20 years for most PC owners. A trusted system according to the Orange Book or Common Criteria would not be something most people would buy, and it'd move computers back from being a part of the home to being an expensive office tool. Maybe no bad thing ;)
LE: What this apparently appealing metaphor obscures is two things. One: "trusted computing" in the strict sense is about hardware, not software. I'll come back to this. Trusted computing means that the (metaphorical) box your computer comes in has to be a "black box" unopenable by the user - otherwise the user can do all the stupid things users do right now, like open emails from strangers, accept downloads with payloads of virus executables, and click on URLs that take them to phishing websites.
PF: Exactly. But extending your thoughts even further, it's a systems view and a human view that we need, not a software one. If I do something that trashes my computer, it's my risk and my loss. If I do something that trashes my computer, turns it into a zombie host for running a botnet from, and makes it part of a denial of service attack, it's different. I've messed someone else's system up and that's contributory negligence... or criminal damage ;)
LE: This means you do indeed have to trust the people who supply you with trusted computing hardware, and I agree with Bill that there should be serious legal obligations with decent compliance mechanisms placed on those who do sell "trusted computing" so they do not sell us, as we Glaswegians say, a pig in a poke (or a root kit in a DRM).
LE: But the Internet is not going to be any more secure if we sell people trusted computing hardware and let them, as Bill wants to, tinker and fiddle. It would be like selling my mum a Ferrari and suggesting that if she's bored one Sunday she tunes the engine up a bit. She would destroy a very expensive engine and she would also endanger people on the road if she took the car out and it ran out of control as a result of her unskilled modifications while she was driving.
PF: Agreed.
LE: Security of hardware sold to consumers, and consequently the security of the entire Internet (see posts on bots, zombies etc, passim), is simply no longer compatible with open tinkering.
LE: Once upon a time anyone who bought a car was allowed to simply take delivery and drive it. Then when the density of cars increased, we realised we needed driving tests for public safety. Maybe people like Bill who are well equipped to safely tinker with/fine-tune their computers (unlike my Mum) should have to pass a test too before they're allowed to drive away a non-black-box computer?
PF: Unenforceable. You don't stop people owning computers, you just make it very, very hard, risky, and expensive to connect anything dubious to the public internet.
LE: Radical in the libertarian world of computer culture; but not very odd at all when you look at the rest of the everyday attitude to owning potentially dangerous objects.
PF: "Libertarianism" on the public internet is a consensual illusion
passed down from idealistic old-timers of the 1970s and 1980s who enjoyed unrestricted ARPAnet/Internet access as a perk of their jobs or studies and the network was largely run by and for enthusiasts as a piece of research. It's been a fiction ever since individuals have been paying for their access; you are always "playing with someone else's ball" and that someone else is much bigger than you. AUPs are going to get more and more restrictive, either because ISPs are covering their asses or because governments are leaning on them, and the onsequences for breaching those AUPs must become commensurately more painful.
LE: What about the software that trusted computing hardware is willing to accept and execute? The so-called "signed" software? Here I completely agree with Bill that the defining of what is acceptable software cannot safely be left to the diktat of the software/hardware vendors. Microsoft, eg (just AS an example!), has absolutely no incentive to let me, a consumer, run open source software on the trusted platform they've just sold me. Without needing to imply any malice at all, simple competitive strategy would dictate they should allow access to Microsoft software products and nothing else, if they can get away with it. So as Bill says:
PF: This "ecosystem" doesn't work; Gates tried to build a "trusted
computing" platform with XBox. I forget how many weeks it took to crack it wide open. DVD regioning tried to enforce a controlled system in hardware. Ditto. There are more and cleverer people out there fighting for "freedom" than there are people able to deny them. So move the problem - take it out of the technical domain and into the legal one.
LE: [actually Bill] "The second thing we need is diversity when it comes to code signing. If my computer is set to run only signed software or read only signed
documents, then who can sign what becomes far more than a matter of technology, it becomes a political issue.
LE: [still actually Bill] We must not settle for a closed platform which allows the hardware vendor or the operating system supplier to decide, so it is time for governments to intervene and to ensure that we have an open marketplace for code signing.
PF: A closed platform won't work (see above). And signing authorities? This just permits the development of 800lb monopoly gorillas like Verisign. Far simpler to move the burden - the place to police is the network interface. I don't care what naughty crap people run on their computers; what I do care about is that someone running dangerous software can't swerve across the information superhighway and unintentionally deny my service.
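(Aside: to see what Bill's "open marketplace" might mean mechanically, here's a minimal sketch of a platform that accepts code vouched for by any of several competing signing authorities, not just the vendor's root. The authorities and keys are invented, and it uses the pyca/cryptography library - an illustration of the idea, nothing more:)

# A minimal sketch of an "open marketplace for code signing": the platform
# trusts ANY of several competing signing authorities, not just the hardware
# vendor's root. Authorities and keys here are invented.

from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# pretend marketplace: several independent authorities, each with a keypair
authorities = {name: Ed25519PrivateKey.generate()
               for name in ("VendorCA", "OpenSourceCA", "GovCA")}
trusted_roots = [k.public_key() for k in authorities.values()]

def is_runnable(code, signature):
    # accept code if ANY trusted authority's key verifies the signature
    for root in trusted_roots:
        try:
            root.verify(signature, code)
            return True
        except InvalidSignature:
            pass
    return False

code = b"print('hello')"
sig = authorities["OpenSourceCA"].sign(code)  # signed by a non-vendor CA
print(is_runnable(code, sig))                 # True: marketplace, not monopoly
print(is_runnable(code, b"\x00" * 64))        # False: unsigned code refused

The political question is simply who gets onto the trusted_roots list, and who decides.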
LE: [still Bill!] The simplest way to do this is to give the process a statutory backing and then issue licences, just like we do for many professional and financial services. "
PF: Software licensing on this scale can't and won't happen. Especially not while you can buy hooky software from market stalls and/or China ;)
PF: A regulatory framework needs to be put in place, and that regulatory framework needs to be centred around policing traffic through network endpoints, not what's hanging off them. Does it matter what a non-connected computer runs? Of course not.
LE: It's the last para I can't see happening, for the simple reason that a lot of hardware and software comes from the US, and the US is not prone to extending government regulation of industry. The UK can impose local regulation on hardware, at least in theory, by stopping it at ports: it simply can't impose licensing control on software downloaded from the States. How can you download that "dodgy software" you have your eye on, if the country it originates from hasn't bought into a licensing scheme model? Do you simply accept any software with no license - then bang goes security.
PF: All good points.
LE: A better candidate for a certification authority for signing or licensing software as safe might be the existing international standard-setting authorities. If an ISO standard, available on-line and revised on application by new entrants into the software market, said what programmes my black box should (or could) accept and execute and which it definitely shouldn't, both I and my technophobe mother might feel a lot safer on the Net.
PF: A wise old engineer who used to work in telecoms once said to me "What's the difference between Jurassic Park and the ISO?" I said I didn't know. "One of them's a theme park filled with dinosaurs - and the other's a movie". By the time the ISO has defined a model for software certification and verification the problem will have morphed out of recognition. The ISO is essentially completely reactive when it comes to comms and computers; their one attempt to define networking standards was a complete failure in the face of the open-source TCP/IP protocol stack, and since then they've essentially been regarded as a laughing stock by the Internet community. ISO, ECMA, and similar bodies simply don't have the leverage.
PF: Your technophobe mother doesn't want a true "Trusted Computer"; I doubt she'd be willing to take on the cost of buying one. Your technophobe mother wants a computer that does the right job for her, and that's difficult to unintentionally or maliciously modify.
And LE adds - couldn't agree more! Thanks Pete.
Tuesday, February 21, 2006
Who Do You Trust?
Bill Thompson of the BBC's Going Digital has written a very sensible column on how trusted computing, rather than being a smokescreen for All That Is Evil (or Microsofty), might actually be the way forward to defend computers against spyware, adware and virus-ridden CDs of the infamous Sony "root kit" type.
However the tone changes in the second para:
"Unless we are careful the tools which could make us a lot safer and give us more power over what we do with the hardware we own and the software we license - few programs are actually "sold", not even free software - will instead be used to take control away from us.
At the moment the companies behind trusted computing do not trust their customers at all.
They want to use digital rights management to control what we can do with content we have purchased, they want to make sure we don't install programs or new hardware that they haven't approved, and they want to be able to monitor our use of the expensive computers we own."
Bill argues that being asked to trust the people who supply "trusted" software - people like Sony - is akin to owning a car where you can't look under the bonnet.
"I have a very nice car, and I try to take good care of it. It runs on petrol, but I want the freedom to fill it up with diesel and destroy the engine. It's my engine, after all.
The same goes for my computer. I want the freedom to write, compile and run my own code, take risks with dodgy software I've downloaded from the net and even break the law and risk prosecution by playing unlicensed music or running cracked software. "
What this apparently appealing metaphor obscures is two things. One: "trusted computing" in the strict sense is about hardware, not software. I'll come back to this. Trusted computing means that the (metaphorical) box your computer comes in has to be a "black box" unopenable by the user - otherwise the user can do all the stupid things users do right now, like open emails from strangers, accept downloads with payloads of virus executables, and click on URLs that take them to phishing websites.
This means you do indeed have to trust the people who supply you with trusted computing hardware, and I agree with Bill that there should be serious legal obligations with decent compliance mechanisms placed on those who do sell "trusted computing" so they do not sell us, as we Glaswegians say, a pig in a poke (or a root kit in a DRM).
But the Internet is not going to be any more secure if we sell people trusted computing hardware and let them, as Bill wants to, tinker and fiddle. It would be like selling my mum a Ferrari and suggesting that if she's bored one Sunday she tunes the engine up a bit. She would destroy a very expensive engine and she would also endanger people on the road if she took the car out and it ran out of control as a result of her unskilled modifications while she was driving.
Security of hardware sold to consumers, and consequently the security of the entire Internet (see posts on bots, zombies etc, passim), is simply no longer compatible with open tinkering. Once upon a time anyone who bought a car was allowed to simply take delivery and drive it. Then when the density of cars increased, we realised we needed driving tests for public safety. Maybe people like Bill who are well equipped to safely tinker with/fine-tune their computers (unlike my Mum) should have to pass a test too before they're allowed to drive away a non-black-box computer? Radical in the libertarian world of computer culture; but not very odd at all when you look at the rest of the everyday attitude to owning potentially dangerous objects.
What about the software that trusted computing hardware is willing to accept and execute? The so-called "signed" software? Here I completely agree with Bill that the defining of what is acceptable software cannot safely be left to the diktat of the software/hardware vendors. Microsoft, eg (just AS an example!), has absolutely no incentive to let me, a consumer, run open source software on the trusted platform they've just sold me. Without needing to imply any malice at all, simple competitive strategy would dictate they should allow access to Microsoft software products and nothing else, if they can get away with it. So as Bill says:
"The second thing we need is diversity when it comes to code signing. If my computer is set to run only signed software or read only signed documents, then who can sign what becomes far more than a matter of technology, it becomes a political issue.
We must not settle for a closed platform which allows the hardware vendor or the operating system supplier to decide, so it is time for governments to intervene and to ensure that we have an open marketplace for code signing.
The simplest way to do this is to give the process a statutory backing and then issue licences, just like we do for many professional and financial services. "
It's the last para I can't see happening, for the simple reason that a lot of hardware and software comes from the US, and the US is not prone to extending government regulation of industry. The UK can impose local regulation on hardware, at least in theory, by stopping it at ports: it simply can't impose licensing control on software downloaded from the States. How can you download that "dodgy software" you have your eye on if the country it originates from hasn't bought into a licensing scheme model? Do you simply accept any software with no license - then bang goes security.
Plus the national model of licensing financial and professional services has already proven to be a nightmare of possible restrictive practices which the EU, the most harmonised region of nations in the world, is only slowly getting over. How tempting would it be for a faltering French software industry (say) to refuse to sign off on US or even Chinese software products?
A better candidate for a certification authority for signing or licensing software as safe might be the existing international standard setting authorities. If an ISO standard, available on-line and revised on application by new entrants into the software market, said what programmes my black box should (or could) accept and execute and which it definitely shouldn't, both I and my technophobe mother might feel a lot safer on the Net.
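To make that concrete: the check such a standard would enable is really just an allowlist lookup before execution. A minimal sketch in Python, with the list's contents invented - the hard part would be the governance around the list, not the code:

# A minimal sketch of the standards-based allowlist idea: before executing
# anything, the "black box" checks the program's hash against a published,
# independently maintained list. The list contents here are invented.

import hashlib

# hypothetical published allowlist: hashes of programs the standard approves
ISO_APPROVED_SHA256 = {hashlib.sha256(b"trusted_app_v1_binary").hexdigest()}

def may_execute(program_bytes):
    # the black box refuses anything whose hash isn't on the published list
    return hashlib.sha256(program_bytes).hexdigest() in ISO_APPROVED_SHA256

print(may_execute(b"trusted_app_v1_binary"))  # True: on the list
print(may_execute(b"evil payload"))           # False: refused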
Monday, February 13, 2006
Big Game Season
Still on jetlag content here... From Boing Boing:
Cheney shoots 78-year-old lawyer with shotgun
The best bit is the Reader Comment from someone called Om:
"The important questions raised by this hunting accident are:
1) *Are* lawyers in season right now?
2) Was the lawyer at least a 4-point?
3) Was Cheney within his permit limit?
4) Was the Cheney aide misquoted about the lawyer's hunting suit having a target on the back, or that he'd bought it at Target a while back?
5) Will Disney adapt this into a cartoon about a baby lawyer having to adjust to living in the wild without his parent?
6) Is this what you should expect if you don't contribute enough to a political reelection fund in the future? "
Wednesday, February 01, 2006
The Flickr of Tiny Web beacons?
My concerns about Flickr as a possible exercise in setting web beacons across many sites have been examined more closely by Adam Fields on his blog. Neither Adam nor I can find any conclusive evidence in the Flickr privacy policy that either says Flickr IS doing this, or that they have barred themselves FROM doing this. One way to safeguard yourself, should you be feeling particularly conspiratorial, would be to tweak your browser to refuse to accept any third party cookies (says my occasional tech-guru correspondent, Mike Scott, who notes that even IE v6 only bars third party cookies by default when they come from a source without a compact privacy policy - which is a bit different from barring all third party cookies by default).
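For readers wondering what a web beacon actually is, mechanically: just a tiny image served from the tracker's domain, so every page embedding it hands the tracker a log line and the tracker's own cookie. A minimal sketch in Python of the server side - everything here is invented for illustration, and it is emphatically not anything Flickr is known to run:

# A minimal sketch of how a web beacon works - NOT anything Flickr is known
# to run. Any page embedding <img src="http://tracker.example/pixel.gif">
# hands this server the visitor's cookie and the embedding page's URL.

from http.server import BaseHTTPRequestHandler, HTTPServer
import base64

# a well-known 1x1 transparent GIF, base64-encoded
PIXEL = base64.b64decode("R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7")

class Beacon(BaseHTTPRequestHandler):
    def do_GET(self):
        # the tracker learns which page you were on and who you are
        print("page:", self.headers.get("Referer"),
              "| cookie:", self.headers.get("Cookie"))
        self.send_response(200)
        self.send_header("Content-Type", "image/gif")
        self.send_header("Content-Length", str(len(PIXEL)))
        self.end_headers()
        self.wfile.write(PIXEL)

if __name__ == "__main__":
    HTTPServer(("", 8000), Beacon).serve_forever()

Which is exactly why refusing third party cookies blunts the technique: the image still loads, but the "who you are" half of the log line goes blank.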
Mobile, Ubiquitous and Continuing Paranoia
Rupert White of the Law Society Gazette points out that an even easier way to stalk someone than "borrowing" their mobile phone (see last entry) is to "borrow" their London Oyster card (should they be a Londoner, of course :-) ). This gives a full printout of everywhere the card carrier has been for the last n months. The Oyster card can be replaced in the stalkee's jacket, with them none the wiser.
The interesting question about this is what crime, if any, has been committed? My instinct is that this is (yet again) unauthorised access under s 1 of the CMA 1990.
"1.—(1) A person is guilty of an offence if—
(a) he causes a computer to perform any function with intent to secure access to any program or data held in any computer;
(b) the access he intends to secure is unauthorised; and
(c) he knows at the time when he causes the computer to perform the function that that is the case."
The big issue, of course, is: is an Oyster card a "computer"? The 1990 Act, deliberately, has no definition. Ian Lloyd, an expert on computer crime, suggested in his IT Law textbook a while back that given the ubiquity of smart-chip enabled white goods these days, a dishwasher or a smart fridge might be considered a "computer". I myself think it is not stretching the definition to call a smart-chipped Oyster card a computer.
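In support of that view: a contactless smart card is a little processor that receives commands and computes responses for whoever presents it to a reader. A minimal sketch in Python using the pyscard library - the command shown is the generic PC/SC pseudo-APDU for asking a contactless card for its UID, not anything Oyster-specific, since Oyster's own (proprietary, MIFARE-based) command set is not public:

# A minimal sketch: any contactless smart card on a PC/SC reader will execute
# a command and compute a response - behaviour that looks a lot like "causing
# a computer to perform a function". Uses pyscard (pip install pyscard);
# assumes a reader is attached with a card on it. Nothing here is
# Oyster-specific.

from smartcard.System import readers
from smartcard.util import toHexString

reader = readers()[0]                      # first attached PC/SC reader
conn = reader.createConnection()
conn.connect()                             # card must be on the reader

GET_UID = [0xFF, 0xCA, 0x00, 0x00, 0x00]   # PC/SC GET DATA pseudo-APDU for the UID
data, sw1, sw2 = conn.transmit(GET_UID)
print("card UID:", toHexString(data), "status:", hex(sw1), hex(sw2))

Note what is absent: no password, no PIN, no consent step. The card answers whoever asks.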
If not, though, where are we? The Law Commissions tied themselves in knots a few years back over whether an offence of "theft of information", other than in well recognised categories like trade secrets, existed. (This was, in fact, one of the reasons the CMA was enacted in the first place.) Data protection law forbids the unfair processing of personal data, which in this case would include processing (or viewing) without consent. "Processing" includes "use" and display. Data subjects whose rights are violated have rights to sue the processor. But I am not convinced there is a criminal offence here. And, of course, there are always the murky waters of simple fraud - especially in Scotland, where the offence of fraud can be charged at common law, not under statute. But again, I am not convinced this is actually a case of fraud, as the victim is simply stolen from, not lied to or in any way deluded. The English law of fraud is currently being amended to more comprehensively cover "phishing" - where personal data is stolen by deception. But this does not quite fall under that head either. Interesting problem.
Rupert also points out that the Information Commissioner has expressed worries about the transparency and security of data collection via Oyster cards before - but this is more in relation to what London Transport might do with the information than the accessibility of the card itself as a key to access to personal information by strangers. (But I too have pointed out to my students that the public register entry with the ICO for Transport for London represents no barrier whatsoever to aggressive data mining.)
I am not a Londoner so I am not sure just how easy it is to extract data from an Oyster card. Do you need to give a password or other ID to extract the details of stations passed through, or do you just stick it in a smartcard reader? The Oyster web site merely tells you that details of the last 8 weeks' journeys can be extracted. Help appreciated!