Would you say social media platforms offer “a forum for a true diversity of political discourse”?
Congress used this language over 20 years ago to describe the internet when it passed Section 230, a federal law that provides liability protection for online service providers when they transmit or take down user-generated content. While the internet generally does offer such a forum, on social media platforms, it is disappearing.
Big Tech companies, including social media platforms, are now under the microscope, and legislators have very different ideas about what, if anything, needs to be done. The recent hearing before the House Energy and Commerce Committee—billed as an investigation of digital misinformation on Facebook, Twitter and Google—showed just how divided members of Congress, both parties and the public are on the future of social media.
Committee members barraged Facebook’s Mark Zuckerberg, Twitter’s Jack Dorsey and Google’s Sundar Pichai with questions. Some threatened to repeal Section 230. Others called for government regulators, such as the Federal Trade Commission, to review their content moderation practices and algorithms.
Many on both the left and the right agree that Section 230 needs to be reformed. But this is generally where the agreement ends.
At the heart of the Section 230 debate is a disagreement over the importance of allowing Americans to speak their minds. Some want to reduce the chilling of speech by social media companies. Others want to use Section 230 reform as a way to chill speech still further, ensuring that speech communicated online is consistent with their worldviews.
For many on the right, Section 230 needs to be reformed because social media companies have so clearly broadened the types of content that they moderate, demonstrating bias and censorship of content associated with conservatives. Many on the left, however, believe Big Tech companies are not moderating enough content, particularly what they view as harmful or extremist speech.
For example, they want these companies to go after First Amendment-protected “hate speech,” which is so vague that it can mean almost anything, including thoughtful and legitimate discourse on such sensitive topics as gender identity.
They also want social media companies to go even further in taking down “misinformation,” as if one side had a monopoly on everything that is true, even in subjective debates. There would be no fact-checking the self-anointed fact-checkers. And this so-called fact-checking is arguably a pretext to remove or discredit views inconsistent with their own. Indeed, if these companies were truly concerned with the facts, they would allow the content to remain subject to public scrutiny.
Conservatives and others concerned with bias and censorship should clearly recognize these differences if they hope to achieve their desired Section 230 reforms. They should be wary of getting on board with “230 reform” without recognizing that many on the left have a completely different view of what reform looks like. Details matter, and Section 230 reform is needed, but pursuing it in the current environment could help the left secure reforms that would be the opposite of what many conservatives want.
To be clear, Section 230 reform shouldn’t be an excuse for the government to trample on the First Amendment, such as by trying to dictate the type of legal speech that private companies must allow or prohibit on their platforms. But Section 230 is a federal government intervention that provides the benefit of liability protection for online service providers, provided they are willing to abide by the parameters set forth in that provision.
Asked to account for the spread of misinformation on their platforms, the CEOs at the hearing explained how difficult it is to moderate the high volume of content uploaded to their sites each day. To help moderate content, the companies have built artificial intelligence algorithms that seek out and remove content they deem to be illegal or in violation of their terms of service or community guidelines.
The CEOs blame the algorithms when the companies go overboard on limiting speech. But algorithms are not self-created by computers. Rather, company employees design and code the algorithms based on direction from their company superiors.
And currently, be it through algorithms or other moderation tools, these social media companies are chilling speech on their platforms. This isn’t merely about them removing user content. It also includes the recent proliferation of labeling, delisting and context commentaries from these social media companies.
There’s a wide range of opinions across the ideological spectrum on whether and how to reform Section 230, or whether to eliminate it entirely. Legislators should reform it, and in so doing, protect the forum for political discourse envisioned when the law was passed 25 years ago.
This piece originally appeared in the Chicago Tribune.