Earlier this year, on the floor of the U.S. House of Representatives, Congressman Jim Jordan, Republican of Ohio, accused the FBI of having paid Twitter “not one, not two, but three million dollars to censor American citizens.”
Like much of what comes out of Jordan’s mouth and Tweet-typing fingers, this claim, which erupted like a supernova across the right-wing firmament, was false. As USA Today reported in a fact-checking piece, it conflated two unrelated events. First, that “the FBI flagged Twitter accounts the agency believed violated Twitter’s terms of service”; and second, that “the FBI paid Twitter $3.4 million for Twitter’s processing of information requests . . . made through the Stored Communications Act.”
Jordan offered this dishonest claim while hawking his resolution to create a select subcommittee to investigate “the Weaponization of the Federal Government.” It passed, and Jordan, tapped as its chair, is now busily working to expose the wholesale abuses of power allegedly being committed by the administration of President Joe Biden. (Of course, Jordan never had a problem with the multiple overt actions of former President Donald Trump to wield the power of the state against his perceived enemies, just as Jordan allegedly looked the other way when he was a wrestling coach at Ohio State University while a team doctor sexually molested dozens of student athletes.)
Republicans see any effort made by Democrats to flag misinformation and disinformation as a brazen violation of the First Amendment. It’s not. Democrats have as much right as anyone else to decry what they see as falsehoods and even to urge social media platforms to remove content and punish violators. (In fact, as Rolling Stone reported, Twitter kept an entire database of GOP calls to purge content.)
Take the case of Representative Adam Schiff, Democrat of California, who in November 2020 sent an email, unearthed in January by journalist Matt Taibbi, urging Twitter to remove posts and ban users who targeted a member of his staff for harassment. Twitter declined to do so. You’d think, from the reaction, that Schiff had gotten caught drinking the blood of children.
“[T]he former Democrat chair of the Intelligence Committee pressured Twitter to censor a journalist,” Jordan shrieked in the same speech from the House floor. “You’ve got to be kidding me.”
It’s all part of the deeply partisan clash that is playing out over what role, if any, the government should play in regulating speech on social media. In general, Democrats want tech companies like Twitter and Facebook to rein in extremist content, hate speech, and lies, while Republicans allege that the goal is to censor conservative points of view. But it’s more complicated than that.
For starters, conservative points of view are not actually being censored. Donald Trump and Representative Marjorie Taylor Greene, Republican of Georgia, had their social media accounts suspended and then restored. These decisions were made not by government officials but by the tech companies, as private entities, as they have every right to do. It’s not the least bit difficult to find conservatives using social media platforms to spread misinformation. There may even be a few liberals out there spreading it, too.
Sometimes social media companies try to remove information they believe is dangerous or offensive; sometimes lies and hate get through. But this inconsistency is better than a system that puts the government in charge of what people are allowed to say.
In an especially thoughtful exchange published last year by the news outlet Divided We Fall, Herbert Lin of Stanford University and Marshall Van Alstyne of Boston University have it out over the question, “Should the government regulate social media?” Lin weighs in with a hard no, saying government action on this front “is neither desirable nor feasible.” Efforts to separate what is true from what is not are prohibitively difficult (consider, he says, the statement “Republicans are more patriotic than Democrats”), and will not stop people from believing and spreading untrue things.
“Regulation would have to entirely suppress any support for one point of view or another to have a significant impact on flimsy rationales for outlandish positions,” Lin writes, “and that is a world that should terrify us all.” He does think the government could play a role in promoting transparency, such as by requiring companies to disclose who is paying for digital ads.
That’s not enough, Van Alstyne says in response. The theory that “in a market of many ideas, the best ones win,” he argues, does not hold up when only one message is allowed to be heard. He calls for a rethinking of Section 230 of the 1996 Communications Decency Act, which immunizes social media platforms from liability for content posted by others, saying it affords Internet platforms a protection that print and broadcast media do not enjoy.
The key provision of Section 230 reads: “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” It was passed to allow Internet companies to do what they can to remove objectionable content without holding them accountable for things they overlook—an inevitability today in a social media environment in which millions of messages are posted every hour of every day. It does not absolve the companies of responsibility for their own actions.
Interestingly, both Trump and Biden have called for the repeal of Section 230—Trump in retribution for slights from the social media platforms, Biden because he thinks it lets too much bad content get out. Biden also wants “far more transparency about the algorithms Big Tech is using to stop them from discriminating, keeping opportunities away from equally qualified women and minorities, or pushing content to children that threatens their mental health and safety.”
Republicans are characteristically alarmed. While the First Amendment forbids censorship, said Jeff Landry, Louisiana’s Republican attorney general, “the Biden team has worked to circumvent this fundamental constitutional protection by inducing, threatening, and colluding with private companies to suppress speech. It’s illegal—but when Big Government enters into such a conspiracy with Big Business to violate your rights, it’s also known by another name: fascism.”
I personally find that to be a bit overheated. But Landry has a right to his opinion.
While various measures to regulate social media percolate in Congress, the U.S. Supreme Court is weighing two cases that could limit the reach of Section 230.
The first, Gonzalez v. Google LLC, concerns an allegation that YouTube contributed to the death of a U.S. citizen who was killed in a 2015 terrorist attack, not because it hosted a pro-ISIS video, but because its algorithms recommended it to viewers who watched similar material. A second case, Twitter, Inc. v. Taamneh, seeks to hold Twitter liable for “aiding and abetting” an ISIS attack by failing to block or remove pro-terrorism messaging from its platform.
In oral arguments in late February, the court’s conservative super-majority seemed reluctant to issue rulings that would make these social media providers liable for content posted by others. A decision is expected by the end of June.
The American Civil Liberties Union filed amicus briefs in both cases, arguing that any erosion of the protections afforded by Section 230 will chill speech. In Gonzalez, it says, “There is no way to visually present information to users of apps or visitors to webpages without making editorial choices that constitute, in plaintiffs’ terms, implicit ‘recommendations.’ ” In Twitter, it warned that holding the platforms being sued liable for their users’ speech “would necessarily restrict the speech of its hundreds of millions of users in violation of the First Amendment principles enshrined in this court’s jurisprudence.”
As usual, the ACLU is right. Holding social media outlets accountable for the rancid proclamations of their users is like indicting a megaphone used by someone to shout “fire” in a crowded theater. Government officials are fully within their rights to urge tech companies to crack down on hate speech and misinformation, but they cannot require them to do so or punish them for not making the right judgment calls.
As for who gets to be heard, the best answer is everyone. Here’s what ACLU Executive Director Anthony Romero had to say about Facebook’s recent decision to reinstate Trump’s account:
“This is the right call. Like it or not, President Trump is one of the country’s leading political figures and the public has a strong interest in hearing his speech. Indeed, some of Trump’s most offensive social media posts ended up being critical evidence in lawsuits filed against him and his administration. And we should know—we filed over 400 legal actions against him.” He added that, as “central actors when it comes to our collective ability to speak,” tech companies “should err on the side of allowing a wide range of political speech, even when it offends.”
Here’s an example of the system working more or less as it should. In January, just before she was appointed to a select subcommittee to look into the Biden Administration’s handling of COVID-19, Marjorie Taylor Greene tweeted that, unlike other medications, “we have no idea what is in Covid vaccines.” A Twitter user was able to post a “community note” to her tweet, for all the world to see, pointing out that a complete list of ingredients for the Pfizer, Moderna, and Johnson & Johnson COVID-19 vaccines can be found at portal.ct.gov/vaccine-portal.