The Ninth Circuit Court of Appeals has just determined that Roommates.com - a networking site for people looking for housemates - did not deserve immunity under Section 230 of the US Communications Decency Act for information that users of the site provide on questionnaires during registration.
The Register reports that "Section 230 of the CDA gives providers of an interactive computer service, such as a website, immunity from lawsuits relating to the publication of information on the site by a person other than the site's provider. Thus, information posted to a blog's comments or on an online forum won't put the site provider on the hook for damages if the publication of the content happens to break the law someplace.
Someone who, in whole or in part, creates or develops the published information, however, qualifies as a "content provider," and falls outside the bounds of the immunity. The Ninth Circuit panel determined that Roommates.com, by filtering the kind of information that visitors to the site would see, had developed the information provided, and could not claim immunity for the publication of the information..."
The key quote from Judge Kozinski is "By categorizing, channeling and limiting the distribution of users' profiles, Roommate provides an additional layer of information that it is "responsible" at least "in part" for creating or developing." [bold added] In other words, Roommates.com was, it seems, held to have "created", in part though not in whole, the information that users themselves supplied via structured drop-down menus (eg "do you want to live with [options] straight men/gay men/straight women/gay women/anyone") - but not information supplied by users themselves in freeform comments. That information was then held to have breached the anti-discrimination provisions of the Fair Housing Act.
This is rather reminiscent of the debate in the UK, before the E-Commerce Directive, about whether sites were "editors" under the Defamation Act 1996 s 1 if they undertook any kind of filtering or editing of content - and the even earlier debate in the US about whether ISPs like Prodigy were putting themselves at risk of liability by undertaking similar editorial work to create "family-safe" content. Basically, if you are a user-generated content site, do you dare to mess with the content at all, even if the result is a better or more searchable/manageable/less offensive product for your users? Section 230(c) was designed to put an end to such worries, as was, in Europe, the ECD. From that perspective this is a very regressive step.
On the other hand, it has become increasingly clear that s 230(c) was too widely drawn in giving absolute immunity to ISPs/hosts in respect of criminal liability and non-copyright-related torts (cf the later DMCA, whose scheme is akin to the EU ECD in allowing only limited immunity, subject to notice and take-down and other requirements) - and a series of cases have attempted to rein in that immunity by, eg, re-introducing distributor liability.
This case is a logical progression, but it is unfortunate. (A better solution would be legislative reform of s 230(c) - but that ain't going to happen.) As the Register points out, what will the implications be for all the sites which "facilitate" or "edit" or "structure" or "filter" or even perhaps "tag" user-generated content - the MySpaces, Facebooks, and even the Googles? MySpace and Facebook both "structure" (some) information via menus and questions. So do many dating sites. What if some of this content is defamatory or obscene? In particular the word "categorized" is worrying. What might this do to the liability of new tagging sites like Digg and Delicio.us, so valuable to the Internet at large?
In most of these cases - especially the Diggs and Delicio.uses - I think the argument can be developed that they do not "thin down" or restrict or impose structure on the information generated by a third-party content provider - which seems to be the nuance of the case - but merely add value to it, separate from the actual text of the third-party content. (What will AACS be thinking reading this, I wonder?) Similarly, Google can argue that it does not itself filter content but merely responds to user (ie third-party) instruction. Nonetheless, Google is usually made available with SafeSearch on by default, ie filtering out obscene content - so the position is not all that clear.
I await the appeal :-)