Section 230 of the Communications Decency Act

The most important internet law you may never have heard of

Hon. Chris Cox on the creation of Section 230:

The rule established in the bill, which we called the Internet Freedom and Family Empowerment Act, was crystal clear: the government would impose liability on criminals and tortfeasors for wrongful conduct. It would not shift that liability to third parties, because doing so would directly interfere with the essential functioning of the internet.

Rep. Wyden and I knew that, in light of the volume of content that even in 1995 was crossing most internet platforms, it would be unreasonable for the law to presume that the platform will screen all material. We also well understood the corollary of this principle: if in a specific case a platform actually did review material and edit it, then there would be no basis for assuming otherwise. As a result, the plain language of Section 230 deprives such a platform of immunity.

We then created an exception to this deprivation of immunity, for what we called a “Good Samaritan.” If the purpose of one’s reviewing content or editing it is to restrict obscene or otherwise objectionable content, then a platform will be protected. Obviously, this exception would not be needed if Section 230 provided immunity to those who only “in part” create or develop content.


Section 230 enables online platforms to host user-created content, allowing everyone to exercise free speech and free enterprise online.

Platform Protection helps all parts of America

Protecting online platforms isn’t about protecting tech companies – it’s about enabling American voices and American jobs.

Online platforms are integral to our daily lives. They include Pinterest, NextDoor, Yelp, Instagram, YouTube, GoFundMe, and hundreds more.

Platform liability limits in Section 230 of the Communications Decency Act enable all our user-generated content.

Section 230 helps us connect with our families, our communities, our customers, our small businesses, and turns us all into creators.

Limiting platform liability for content created by others protects an internet by the people, for the people.

World without Section 230

Section 230 enables content moderation. Without Section 230, the user-generated content platforms Americans use daily would cease to exist.


Without Section 230, platforms couldn’t moderate without risking legal liability.

Spam & Bots

Without assurances that they wouldn’t be held liable for content on their sites, platforms likely wouldn’t remove bot accounts and spam.

Inappropriate Content

Without Section 230, most online platforms would not risk removing pornography and other inappropriate content.

Illegal Content

Because platforms probably wouldn’t moderate content, law enforcement would lose a valuable tool in stopping crimes and saving lives.

Hear from the authors of Section 230 about its past, present, and future


Myths vs Facts of Section 230


Section 230 enshrines offline values for an online environment.

We’ve heard the misinformation time and time again: “Section 230 is a special deal for tech.”


Of course this is totally wrong. Section 230 is not a “special deal” — it maintains our existing law. In essence, all Section 230 does is make the speaker liable for their statements, not the platform on which the statements are made. This makes sense. Think about a “dog walker” ad on a bulletin board at Starbucks. If the dog walker does a bad job, it would be absurd to hold Starbucks liable. Section 230 applies the same treatment to online platforms.

Section 230 also enshrines our offline “Good Samaritan” rule for platforms. In the offline world, if someone is hurt and you help them, you are not liable for any damage you cause in your effort to render assistance. Section 230 makes this true for platforms: if they moderate content to remove objectionable material, they are not liable for having taken it down. At the end of the day, Section 230 is not a “special deal for tech” but an enshrining of offline laws for an online world.


Section 230 was never a “special deal” for the technology industry

The technology and internet industry’s politically motivated opponents have tried to rewrite history, asserting that they know the motivation for Section 230. They claim it was designed to give a “special deal” to the online industry. The assertion that the section was part of an “industrial policy” somehow approved by a free-market Congress in 1996 is balderdash. Rather, the online world has the same protection as the analog world: liability for user-generated content still accrues to the user.


As the online world was growing, “bulletin boards” were a popular means of online communication. Bulletin boards were command-line software run on a server that allowed those who “dialed in” to upload or download software and to post, read, and exchange messages. Following a pair of court cases (CompuServe in 1991 and Prodigy in 1995), efforts by bulletin board platforms to eliminate, restrict, or edit content submitted by users faced an increased risk of liability compared to surrendering control and declining to review or edit third-party content at all. The common law had created a perverse incentive: abandon any attempt to clean up an online platform rather than face liability, or seek complete publisher control, thereby restricting the online world to only the publisher’s voice.

Section 230 was conceived as a statutory means for owners to properly maximize their investments, property, and abilities while also providing an incentive for good actors. This “safe harbor” was created to encourage owners to provide some level of policing to fight crimes of all sorts, not least child pornography and content piracy. The safe harbor language is simple and straightforward: “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” With this language, online providers, websites, bulletin boards, internet service providers, and other platforms were free to police their sites as they would like, often cleaning up their sites to attract a broader audience. The language incentivizes and protects Good Samaritans. In essence, all Section 230 does is make the speaker liable for their statements, not the platform on which the statements are made. This makes sense. But where did the idea come from? From existing law.
The notion that a person must have knowledge and control over a situation to be liable is well established throughout existing law. Imagine if FedEx and UPS were liable for the contents of each package shipped through their systems. Packages would likely have to be opened by the companies to verify their contents. Should landlords be liable for contract fraud committed behind closed doors on their property? Should Starbucks have to vet all the services promoted on flyers tacked to their in-store bulletin boards? Section 230 enshrines our offline “Good Samaritan” rule for platforms. In the offline world, rendering aid to someone who is hurt does not make a person liable for any unintentional damage caused while rendering assistance. Likewise, if platforms moderate the content on their sites to remove objectionable material, they are not liable. At the end of the day, Section 230 is not a “special deal for tech” but simply the transfer of analog laws to an online world.

View all Myths & Facts

Protect Online Voices is a collaboration of advocates and businesses working to protect free speech and free enterprise online.

This collaboration includes groups advocating for free speech, equal opportunity, limited government, liberal values, conservative values, and small businesses.

Protect Online Voices recognizes the critical role that Section 230 of the Communications Decency Act plays in making these opportunities possible.

What We Do

Educate lawmakers on what Section 230 says

Misinformation about Section 230 is common. Protect Online Voices works to educate lawmakers and interested parties about what Section 230 actually says and how it’s been interpreted by the courts.


Illustrate the vast benefits of Section 230

Section 230 has vastly benefitted our society. Protect Online Voices articulates these benefits, showing what would be at stake if Section 230 were undermined.


Connect advocates with resources to amplify their voice

There are many supporters of Section 230 out there, but not all have the resources they need to make their voices heard. Protect Online Voices connects advocates of Section 230 with what they need so we can work together to protect our internet.