By Arthur Chu
Last week I wrote a TechCrunch article about Section 230 of the Communications Decency Act, the U.S. law that many argue is responsible for the existence of “Web 2.0” in its current form.
Simply put: I don’t like it.
It’s the law that states that no one who runs a website or online service of any kind counts as a “publisher” in the old-school sense of the term–no one who hosts content online is responsible for content other people create, even if the content is libelous, even if it’s harassment, even if it contains threats.
If you want to sue someone for something they say on the Internet, you have to find the original creator. Facebook, Twitter, Yik Yak, or whoever owns the site and profits from the content bears no responsibility–not even for helping you find the person. Court cases have decided over and over again that websites are free to host anonymously contributed content and bear no responsibility for making the anonymous contributors difficult to find.
That was already a problem in 2007, when the first big modern Internet case about widespread harassment on an anonymous forum broke: the AutoAdmit case, in which women targeted by vile rumors and smears could do nothing against the site that spread them and had to laboriously track down each individual poster hiding behind a pseudonym.
It’s worse today, with the proliferation of imageboard-style services designed to maximize the anonymity and minimize the accountability of their users, hosting communities for the express purpose of spreading personal information, organizing targeted harassment and in some cases trying to get people killed.
Many people besides me have expressed concern about how Section 230 enables crime on a massive scale. In 2013, 47 state attorneys general warned about the rampant spread of child pornography and advertisements for child sex services on sites like Backpage and Craigslist. More recently, we’re seeing the Jane Doe No. 14 v. Internet Brands, Inc. case work through the system, in which current case law seems to say that the website ModelMayhem bears no responsibility at all for hosting a ring of criminals who posed as casting agents and plotted to drug and rape models they found on the site.
On the other side you have all the tech advocates saying that the legal system is a quagmire, that opening up the door to more lawsuits will have a “chilling effect” on speech, and that Facebook and Twitter and many other blue-chip companies simply couldn’t exist if they had to vet all their user-generated content for liability.
Among the people who came out to yell at me on this issue were EFF, Techdirt and the Popehat legal blog. And I admit, putting an article about increasing the scope of legal liability for people who run tech companies on an Internet run by tech companies was, as they say, asking for it. It’s like going into Vatican City and setting fire to an icon of the Virgin Mary, or venturing into rural Ohio and blowing your nose on the Second Amendment.
Let’s clarify a few things. It’s totally possible to be in favor of limiting liability for platforms without shielding them from liability completely. That’s the standard in most countries other than the United States, which is why Section 230 is to Internet speech what the Second Amendment is to gun rights–we treat as an immutable freedom something most other countries see as negotiable, and that makes us a liability haven.
As for whether my article poses an immediate threat to free speech online, well. The financial incentives to keep Section 230 exactly the way it is are enormous, and are incentives held by some of the biggest, best-connected companies with the best track record of getting special treatment from the government. So much like repealing the Second Amendment, I have to accept the depressing truth that repealing Section 230 isn’t going to happen anytime soon, while still doing the best I can to move the Overton Window.
I don’t particularly advocate the immediate and total destruction of all Web 2.0 sites, although I do stand by my irritated response to Twitter critics that most of Web 2.0 is garbage anyway. (A sentiment that in other contexts many of my critics agree with.)
What I mostly ask for is moderation. In both senses of the term.
The problem is that Section 230 is defended primarily by people who truly believe in the dream of Web 2.0 (which by necessity includes everyone who works at a Web 2.0 startup). It’s the dream of a kind of alchemy: that if you just write code that makes it “effortless” and “addictive” for users to churn out content, then sit back and let the market decide which content rises to the top of the heap, you will eventually get the equivalent of a well-edited, well-curated, intelligent and thoughtful Web 1.0 publication without having to actually read or write a damn thing yourself.
It’s an intoxicating image, one that’s deeply attractive to investors who want to make huge amounts of money for very little work. It’s one that, I’ve argued, is mostly false, especially for the companies who’ve been most deeply invested in a pure free-speech model.
You can only push your users to perform the unpaid labor of content creation, curation and propagation so far based on the imaginary rewards of “karma” or “+1”s. When this leads to your content mostly being stupid memes or repetitive magic spells to protect your copyright or low-quality clickbait, that’s annoying and depressing but not a matter for the law.
When it leads to people abusing, harassing, stalking, doxing and swatting each other, though, it’s another matter. Tech companies love to brag about the absurd ratio between the size of their userbase and the size of their staff as though that’s a good thing–but when it means Facebook’s anti-abuse policing for over a billion users falls to a tiny staff of Moroccan contractors paid $1 an hour, it’s not really a plus.
I always tell people frustrated by the futility of reporting abuse on Twitter or Facebook not to take it out on the content moderators themselves–those poor 21-year-old kids were hired to do an essentially impossible job. An anti-abuse initiative that was actually effective would require a well-paid, well-trained abuse team, and that would eat into the rapid exponential growth tech gurus like to sell to VCs.
Instead, we have “anti-abuse theatre”: platforms that treat anti-abuse as, essentially, a marketing and PR expense, since there’s no actual litigation risk to offset. Their job is to do exactly as much as it takes to look like they’re doing something. Worse, the only way to justify how little these companies spend on anti-abuse is to straight-up gaslight victims–repeatedly telling them their reports of abuse don’t qualify as abuse, so the companies can keep telling people that abuse is a minor problem they have well in hand.
I’m not okay with this. I don’t see this improving without a fundamental change to the business model of Web 2.0. The increasing size of the userbase and the increasing effortlessness of publishing and propagating information can only make the problem worse. Twitter is designed to let things go viral much faster than Facebook, which is why it’s so addictive and also why it’s so destructive. The successor to Twitter will be whatever platform they come up with that’s even more frictionless, that reduces the “thought-tweet gap” to an even smaller fraction of a second.
Yes, in the past I’ve defended “outrage culture”; I’ve spoken positively about the changes wrought by Web 2.0, like how Twitter enabled #BlackLivesMatter to emerge as a movement. But even the most positive examples of Internet outrage I can think of were disturbingly casual about collateral damage. I’m still haunted by the time I retweeted a post misidentifying the man who shot Mike Brown, putting a family in fear–though in my defense, that was less egregious than Spike Lee tweeting the full dox (address included) of the wrong George Zimmerman.
Web 2.0 moves incredibly fast, and incredibly recklessly. It does so because it’s allowed to do so, because it’s easy for individual posters to hide behind a mask of anonymity or, even if they’re not anonymous, to get overlooked in a sea of voices.
Twitter and Facebook didn’t create the idea of grassroots protest, as much as some tech VCs like to pretend they did. They enable the kind of “weak-tie activism” that can be, and has been, built into powerful political movements with effort and leadership–but they also enable grotesque missteps, like doxing an old man who shares a killer’s name, that send those movements flying off the rails.
And the same law that enables activism creates ample room for purely destructive applications – using the cloak of anonymity to stalk and bully people physically close to you, publicly “rating” human beings using the same system Yelp uses to enable intimidation and retaliation against businesses, organizing whole forums around clever techniques for being a peeping tom, or just egging on a possible mass killer for the hell of it knowing nothing can happen to you if you do.
This isn’t just “how the Internet works.” This is how we built the Internet. Had we chosen to, we could have passed a Section 230 for print publishers, allowing newspapers and magazines to print as many anonymous articles full of salacious, defamatory content as they wanted. It would likely have been quite profitable, even without the “scalability” of online platforms–but we never allowed it, because no one argued that the added “freedom of speech” would outweigh the societal harm. And yet defamation law has hardly turned print media into an Orwellian dystopia where no one ever says anything controversial or harmful for fear of being sued.
The ultra-rapid, zero-accountability Web we’ve built–the one responsible for pretty much everything Jon Ronson decries in his book–was created by a legal shift masquerading as a technological shift. It’s not the only time this has happened. The whole tech industry is largely built on business models that brazenly ignore existing laws and regulations on the grounds that they simply don’t count anymore if you’re using the Internet.