Social Media on Trial

Section 230 is frequently referred to as “the 26 words that created the internet.”

Now, for the first time, the Supreme Court will interpret the scope of Section 230, potentially reshaping the internet as we know it.

How did we get here?

In 1996, websites were picking up steam. As thousands of posts, links, and messages were uploaded, traffic volume quickly outpaced moderation capabilities. This led to a wave of expensive and time-consuming lawsuits over misinformation and defamation.

In response, Congress enacted Section 230 as part of the Communications Decency Act, which states:

“No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”

In other words, online platforms could no longer be held responsible for the content their users post. Section 230 was meant to encourage moderation and foster civil conversation across these new social theatres.

But at the time, nobody could have imagined just how influential these platforms would become. Nobody could have imagined that they would be used to influence elections or radicalize extremists. And that brings us to today.

Gonzalez v. Google

On February 21, SCOTUS will hear oral arguments in Gonzalez v. Google, in which the plaintiffs claim that "YouTube’s algorithm helped ISIS post videos and recruit members —making online platforms directly and secondarily liable for the 2015 Paris attacks that killed 130 people, including 23-year-old American college student Nohemi Gonzalez."

Up until this point, Section 230 has consistently shielded tech platforms from liability for user-generated content. But now the algorithm itself is on trial, and the question becomes: is YouTube liable under Section 230 for its algorithm's recommendations?

What's at stake?

If Section 230 is altered, the downstream impacts will be dramatic. “The rulings could open all kinds of new doors for future litigation, prompting an influx of legislation by state and federal legislators and fundamentally changing the future of the internet.”

On one extreme, the Court could adopt a broad view of Section 230, strengthening its protections and creating a more unregulated and/or unmoderated internet. “There would be a greater incentive not to restrict content because this would create non-neutral censorship of speech and potentially violate First Amendment rights.” As a result, hateful or dangerous content could surface more easily, undermining the safeguards companies have designed for their users.

Alternatively, the Court may narrow the protections, incentivizing greater content moderation to avoid legal liability. This could also lead to a more closed internet, where user content is excessively removed and social channels are limited in scope.
