The Supreme Court cast doubt Monday on state laws that could affect how Facebook, TikTok, X, YouTube and other social media platforms regulate content posted by their users. Illinois legal experts say the court’s decision could have far-reaching implications for users in the state, especially in future elections.
In nearly four hours of arguments, several justices questioned aspects of laws adopted by Republican-dominated legislatures and signed by Republican governors in Florida and Texas in 2021. But they seemed wary of a broad ruling, with Justice Amy Coney Barrett warning of “land mines” she and her colleagues need to avoid in resolving the two cases.
While the details vary, both laws aimed to address conservative complaints that the social media companies were liberal-leaning and censored users based on their viewpoints, especially on the political right.
Steven Schwinn, a law professor at the University of Illinois at Chicago, said politicians in Florida and Texas have been transparent in voicing their goal of restricting social media platforms from moderating conservative speech, even if it’s “quite dangerous to public health and the health of our democracy.”
“What these laws would do is require social media websites not to moderate the content of false speech, outright lies and otherwise misleading speech about things like COVID-19, the 2020 election or anything related to election security going forward,” Schwinn said.
Experts have warned about conspiracy theories spreading on social media during the presidential election, something Schwinn said could become more rampant if content guardrails on sites come down. Schwinn said it’s inevitable that misinformation would extend to smaller, local elections as well.
He said it could look like posts with false voting hours or voting locations, as well as “grander efforts” by individuals, organizations and bots to manipulate a candidate’s position or question the integrity of the election process.
“I’d expect that in the 2024 election, we’d see vastly more false information and lies about the election itself, and this could have a serious impact on the election,” he said.
Zizi Papacharissi, a professor of communications at UIC, said she’s most concerned about deepfakes — artificial images, videos and audio clips that have already made their way into campaign ads. It’s the first U.S. presidential election where sophisticated AI tools can produce these convincing fake images quickly. Illinois is ahead of the curve, she said, by giving victims of deepfake pornography the right to sue those who use their likeness without their consent.
Differences on the court emerged over how to think about the platforms — as akin to newspapers that have broad free-speech protections, or telephone companies, known as common carriers, that are susceptible to broader regulation.
Chief Justice John Roberts suggested he was in the former camp, saying early in the session, “And I wonder, since we’re talking about the First Amendment, whether our first concern should be with the state regulating what we have called the modern public square?”
Justices Samuel Alito and Clarence Thomas appeared most ready to embrace arguments made by lawyers for the states. Thomas raised the idea that the companies are seeking constitutional protection for “censoring other speech.”
Alito complained about the term “content moderation” that the sites employ to keep material off their platforms.
“Is it anything more than a euphemism for censorship?” he asked, later musing that the term struck him as Orwellian.
But Justice Brett Kavanaugh, seemingly more favorable to the companies, took issue with calling the actions of private companies censorship, a term he said should be reserved for restrictions imposed by the government.
“When I think of Orwellian, I think of the state, not the private sector, not private individuals,” Kavanaugh said.
Schwinn said he thinks the word “censorship” was used in arguments to make content moderation decisions sound “nefarious.” Typically, content moderation prioritizes or de-prioritizes posts because they are false or misleading, he said. It’s impossible to know for sure what would happen to social media sites if all content moderation were removed, but Schwinn said it would probably be the “Wild West.”
“I wonder whether social media corporations would continue to exist in that kind of environment,” he said. “The reason we appreciate it is because the content is moderated. If we’re all subjected to false speech and lies about all manner of things … I don’t think most of us would find social media a useful thing to participate in at that point.”
Papacharissi said content moderation online is similar to content moderation in real life. She said there are rules of conduct at work, at home and even at a bar that dictate how people act. Advertisers also tend to pull away from platforms that host content that is “toxic,” she said.
“When we go to public spaces, if you swear in a bar and you verbally threaten someone, you’re thrown out of the bar,” she said. “So I’m not really sure why we’re not allowed to moderate content in the same way online.”
She said women and other marginalized groups are often targeted on social media platforms. If content threatens someone’s safety — whether it’s conservative or progressive — she said it should be removed.
The precise contours of rulings in the two cases were not clear after arguments, although it seemed likely the court would not let the laws take effect. The justices posed questions about how the laws might affect businesses that are not their primary targets, including online marketplaces like Uber and Etsy as well as email and messaging services.
The cases are among several the justices have grappled with over the past year involving social media platforms. Next month, the court will hear an appeal from Louisiana, Missouri and other parties accusing administration officials of pressuring social media companies to silence conservative points of view. Two more cases awaiting decision concern whether public officials can block critics from commenting on their social media accounts, an issue that previously came up in a case involving then-President Donald Trump. The court dismissed the Trump case when his presidential term ended in January 2021.
The Florida and Texas laws were passed in the months following decisions by Facebook and Twitter, now X, to cut Trump off over his posts related to the Jan. 6, 2021, attack on the U.S. Capitol by his supporters.
The Associated Press contributed.