PROTECT Act FAQ

Disclaimer: The questions and answers provided below are for purposes of policy discussion, do not constitute legal advice, and are no substitute for the advice of counsel.

What is Section 230?

Section 230 is a provision of the Communications Decency Act of 1996 that says websites generally aren’t legally responsible for what users post on them. If someone posts something harmful or illegal, the platform usually can’t be sued for it, even if the platform helped spread the content.

Why is Section 230 harmful?

Section 230 often prevents people from suing social media companies when they’re harmed, which gives platforms little incentive to keep users safe. As a result, harmful content like sexual exploitation, illegal drugs, and self-harm can spread, while victims are blocked from getting their day in court.

Why does Section 230 fail to protect victims in today’s internet?

People can hide behind fake names online, so it’s often impossible to find the person who posted harmful content. And even when victims turn to the platform instead, courts often say it isn’t responsible, even if it helped spread the content or knew it was illegal. The law was written for an earlier version of the internet and now puts kids and the public at risk.

Why is today’s internet different from 1996?

In 1996, the average American spent about 30 minutes a month online. Today, it’s about 18,000 minutes a month, almost half of all our time. That's an increase of nearly 60,000%. With the internet now playing such a huge role in daily life, rules written almost 30 years ago clearly need to be changed.

What is H.R. 7045, the PROTECT Act?

Introduced by Congressman Jimmy Patronis, the PROTECT Act is intended to hold Big Tech accountable. The bill would:

  • Eliminate Section 230;
  • Take away platforms’ special legal protections for the content they host and their moderation decisions; and
  • Permit people to hold Big Tech accountable and liable for its actions.

Would platforms be held accountable if Section 230 were repealed?

Yes, platforms would no longer get automatic legal protection from federal law. People who are harmed would be able to take their cases to court, and platforms couldn’t use Section 230 as a shield anymore.

Would platforms still be able to moderate content after repeal?

Yes, platforms could still set rules in their terms of service and decide what content is allowed. Repealing Section 230 wouldn’t stop moderation; it would just stop platforms from being immune when real harm happens.