Myths vs Facts of Section 230
One of the cornerstones of the modern Internet economy is a federal telecommunications law known as Section 230.
Enacted with the Communications Decency Act in 1996, Section 230 provides legal and regulatory certainty for websites and online businesses that they will not be held liable for the online activities of third parties.
By limiting the liability of online services for misconduct by third-party users, Section 230 has created a robust Internet ecosystem where commerce, innovation, and free expression thrive — while enabling providers to take creative and aggressive steps to fight online abuse.
Several myths exist about Section 230, however.
Removing Objectionable Content Must Be Allowed
Newspapers and television stations are publishers, and as such they have different obligations than those who merely host a platform for commentary, messages, and the exchange of ideas. The reason is simple: publishers procure and control the content they put forward themselves, while on a platform a third party creates and makes public the content.
For example, while a person may have many choices of television stations, what plays on those stations is managed by the networks. Live broadcasts even operate on a delay so that the station can edit out content it does not like. Newspapers are not designed for everyone to print whatever they like; even to have something included on the "public opinion" page, the op-ed section, a writer must go through review and editing before the work is accepted and printed.

Online platforms covered by Section 230, by contrast, expressly do not control the content; it is created by third parties. They simply provide a place for the commentary, as if providing the newsprint but not a coherent newspaper. Twitter does not preview tweets. A person does not have to work through an editor to have a Facebook post approved. But these platforms can set their own standards so they can create an environment where people want to engage. Without Section 230 they could not do so without being exposed to liability for every decision they made. The perverse incentive would then be to do nothing to clean up the worst of troll instincts in a morality-lacking morass.

Platforms, whether software, internet service providers, or others, are more similar to a coffee shop. In these places, conversations are not controlled or guided but flow freely, with the participants creating the content. While the coffee shop owner should not be liable for robbery plans hatched in her shop without her knowledge, wouldn't we want the owner to take action if she knows something is amiss? The owner might even go further in other circumstances and ask disruptive patrons to leave so that others may enjoy the environment. Similarly, a newspaper should not be liable if encrypted messages hidden within classified ads facilitate child trafficking.
But how much better would it be if the newspaper took action to eliminate those ads? Society is best served when there is an incentive to do the right thing, and truly disserved when policy encourages inaction in the face of a wrong. Removing objectionable content must be allowed, and Section 230 not only allows it but provides the proper incentives to encourage Good Samaritan behavior.
Section 230 Was Never a "Special Deal" for the Technology Industry
The technology and internet industry's politically motivated opponents have tried to rewrite history, asserting that they know the motivation for Section 230. They claim it was designed to give a "special deal" to the online industry. The assertion that the section was part of an "industrial policy" somehow approved by a free-market Congress in 1996 is balderdash. Rather, the online world has the same protection as the analog world: liability for user-generated content still accrues to the user.
As the online world was growing, "bulletin boards" were a popular means of online communication. A bulletin board was command-line software run on a server that allowed those who "dialed in" to upload or download software and to post, read, and exchange messages. Following a pair of court cases (Cubby v. CompuServe in 1991 and Stratton Oakmont v. Prodigy in 1995), bulletin board platforms that tried to eliminate, restrict, or edit content submitted by users faced an increased risk of liability compared with those that surrendered control and completely declined to review or edit third-party content at all. The common law had produced a perverse incentive: abandon any attempt to clean up an online platform rather than face liability, or seek complete publisher control and thereby restrict the online world to only the publisher's voice.

Section 230 was conceived as a statutory means for owners to properly maximize their investments, property, and abilities while also providing an incentive for good actors. This "safe harbor" was created to encourage owners to provide some level of policing to fight crimes of all sorts, not least child pornography and content piracy. The safe harbor language is simple and straightforward: "No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider." With this language, online providers (websites, bulletin boards, internet service providers, and other platforms) were now free to police their sites as they saw fit, often cleaning them up to attract a broader audience. The language incentivizes and protects Good Samaritans. In essence, all Section 230 does is make the speaker liable for their statements, not the platform on which the statements are made. This makes sense. But where did the idea come from? From existing law.
The notion that a person must have knowledge of and control over a situation to be liable is well established throughout existing law. Imagine if FedEx and UPS were liable for the contents of every package shipped through their systems; packages would likely have to be opened by the companies to verify their contents. Should landlords be liable for contract fraud committed behind closed doors on their property? Should Starbucks have to vet all the services promoted on flyers tacked to its in-store bulletin boards? Section 230 enshrines our offline "Good Samaritan" rule for platforms: in the offline world, rendering aid to someone who is hurt does not make a person liable for unintentional damage caused while rendering assistance. Likewise, if platforms moderate content on their platform to remove objectionable material, they are not liable. At the end of the day, Section 230 is not a "special deal for tech" but simply the transfer of analog laws to an online world.
Section 230 Enshrines Offline Values for an Online Environment
We’ve heard the misinformation time and time again: “Section 230 is a special deal for tech.”
Of course this is totally wrong. Section 230 is not a "special deal"; it is a maintenance of our existing law. Think about a "dog walker" ad on a bulletin board at Starbucks. If the dog walker does a bad job, it would be absurd to hold Starbucks liable. Section 230 applies the same treatment to online platforms, and it extends the offline "Good Samaritan" principle to them as well: platforms that moderate content to remove objectionable material are not liable for having taken it down. At the end of the day, Section 230 is not a "special deal for tech" but an enshrining of offline laws for an online world.