It will soon become illegal in the state of Texas for YouTube to ban videos by white supremacists or ISIS. The same goes for Twitter, Facebook, TikTok and Instagram.
In fact, with a few limited exceptions, any content moderation these sites engage in, including “de-boosting” or demonetizing videos or posts, will expose them to lawsuits in Texas. They are now, in effect, legally required in that state to not manage their sites.
That may sound hyperbolic. It’s not. Last week a three-judge panel of the 5th US Circuit Court of Appeals ruled 2 to 1 that HB20 — a Texas law passed last summer that prohibits major social media platforms from doing anything to discriminate against the vast majority of content posted on their sites — is constitutional.
The social media companies that sued to block HB20 argue that the First Amendment gives them the right to decide what content to host and what content to ban, and that the government cannot force them to publish content they find offensive or objectionable.
But Judge Andy Oldham, who wrote the majority opinion, rejected this idea. He held that social media companies are “common carriers,” closer to telephone companies or railroads than to publishers, and that they have no First Amendment right to decide what they will or will not publish, or what they will or will not promote. If you have a lot of users and rely on user-generated content, according to Oldham, you must run your site as a more or less unmoderated free-for-all.
This was a very strange decision (and one that will likely lead to a Supreme Court showdown). To start with, Supreme Court precedent holds that the First Amendment not only prohibits the government from banning speech, but also prohibits it from compelling anyone to speak or publish.
Yet that’s exactly what HB20 does: it explicitly requires social media companies to publish content they would otherwise ban. (The bill arose out of conservatives’ anger at the belief that social media companies were removing or de-boosting right-wing content, including allegations of voter fraud.)
Oldham circumvents this problem by stating that social media platforms are not analogous to publishers because they exercise “virtually no editorial control or judgment.”
It’s a bizarre claim given how much time and money sites like Facebook and YouTube spend making curatorial decisions about what content to promote, what content to remove, what content to monetize, what content to ban, and so on. Users may generate the content, but the platforms constantly exercise editorial discretion over what happens to that content, to ensure that their sites don’t alienate their users or descend into chaos. HB20 effectively prevents them from doing that.
In addition to the First Amendment, the platforms’ right to control content is also protected by the most well-known legal provision in the history of the internet: Section 230 of the Communications Decency Act of 1996. Section 230 is best known for exempting websites from legal liability for content that their users post. But it also explicitly allows websites to remove content they deem “objectionable.”
In other words, it says that websites can host problematic content without fear of being sued over it, and that sites can remove or refuse to host content if they wish.
The 5th Circuit opinion is quick to dismiss the claim that the law gives platforms the right to moderate content, asserting that the “objectionable” content websites are allowed to ban does not include “political” content. But it provides no evidence for that argument. And Senator Ron Wyden of Oregon, a co-author of the CDA, disagrees. As he put it in 2019: “Section 230 is not about neutrality. Period. Full stop. 230 is all about letting private companies make their own decisions to leave up some content and take other content down.”
Precisely because it gives websites so much leeway, Section 230 has become a favorite target of criticism from both conservatives and liberals in recent years, albeit for completely opposite reasons. Conservatives don’t like it because they think it allows websites to get away with censoring right-wing voices, while liberals think it allows these sites to take a hands-off approach to policing misinformation, hate speech, incitement to violence, and the like.
Donald Trump tried to repeal it when he was in office, and just last week President Joe Biden called on Congress to remove the “special immunity for technology companies.” But right now, Section 230 is still the law. And Friday’s decision shows why, despite all the criticism, it remains essential for a robust internet.
After all, without legal immunity for things users post, social media companies (and websites of all kinds) would be much more cautious and risk-averse, making them more aggressive in censoring anything that could expose them to a lawsuit. And without the legal right to moderate content, social media platforms would be inundated with even more hate speech and misinformation than they already are.
Section 230 is an imperfect provision. But repealing it would destroy the Internet as we know it.
That doesn’t mean that nothing should be done to address the problems social media has created. The fundamental problem when it comes to social media toxicity is the way these sites’ algorithms can promote and amplify things like hate speech and misinformation, sending people down rabbit holes they never would have stumbled into otherwise.
So getting social media companies to do more to control and limit the reach and power of those algorithms is critical. Conservatives and liberals alike want platforms to disclose much more information to consumers about how their algorithms work and how they shape the user experience, and to provide in-depth information to regulators about the role of algorithms in amplifying content. That would be a place to start.
Whatever we do, though, it’s essential to strike the balance at the heart of Section 230: allowing websites to be places for users to share their opinions and other content, while also recognizing their right to moderate that content to prevent their sites from becoming toxic cesspools.
What makes HB20 such a disastrous law is that in pursuing the first goal it completely ignores the second. In the guise of banning censorship, it tramples on the platforms’ First Amendment rights and completely disregards the protections of Section 230.
In upholding the law, the 5th Circuit got it wrong. Now we’ll have to wait and see whether the Supreme Court gets it right.