Social media platforms increasingly rely on automated content moderation to filter content that violates community guidelines, notably during the COVID-19 pandemic, when misinformation and disinformation ran rampant online.
Social media is a primary news source for countless people, especially first-time voters. A 2021 UNICEF-Gallup study revealed that although many young people rely on social media to stay informed, they don’t necessarily trust what they read and watch.
Young people know that private corporations manipulate political issues and other information to suit their agendas, but may not understand how algorithms select the content that they see. Amid a pivotal election cycle, empowering the public to discern how content filtering affects discourse can be a powerful tool to help safeguard democracy.
Power currently falls to private corporations like Meta (owner of Facebook, Instagram, and WhatsApp) and TikTok to set the standards that determine what content is and is not acceptable. Placing authority over online freedom of speech in the hands of private corporations presents a plethora of concerns, especially when flawed algorithmic or artificial intelligence (AI) technology is used to moderate content.
These content moderation systems are not always precise when identifying or predicting what types of content will violate a given platform’s guidelines—and many social media platforms fall short when conveying to users the exact criteria moderators use to flag unacceptable content and contributors.
In 2023, GLAAD gave TikTok, Facebook, Instagram, YouTube, and X (formerly Twitter) low or failing scores in its Social Media Safety Index, saying the platforms’ content moderation practices don’t do enough to keep users safe from hate speech or harassment, particularly LGBTQ+ users.
A study from 2019 found that AI trained to identify hate speech was 1.5 times more likely to flag tweets as hateful or offensive when they were written by a Black user and “2.2 times more likely to flag tweets written in African American English (which is commonly spoken by Black people in the United States).”
Automated content moderation directly impacts the circulation of political content and social justice issues. Under the guise of protecting users from threats of violence or terrorism, social media companies like Meta have censored pro-Palestine content while employing “far looser prohibitions” on “white anti-government militias,” according to leaked documents published by The Intercept.
However, automated content moderation can also go entirely unannounced or undetected, as happens with “shadowbanning,” a form of algorithmic moderation by social media platforms where a user’s account and content are hidden, typically without the user’s knowledge.
Shadowbanned accounts may not show up on other users’ FYP (For You Page) or feeds, may be hidden from search results, and, overall, may be difficult, if not impossible, for others to find. This allows social media platforms to limit the visibility of purportedly objectionable content without informing the affected users, meaning those directly censored and those searching for hidden information.
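To make the mechanism concrete, the following is a minimal, hypothetical sketch of a platform-side visibility filter. The flag, data shapes, and function names are invented for illustration and do not describe any platform’s actual system.

```python
# Illustrative sketch only: a hypothetical platform-side visibility filter.
# The "shadowbanned" set and data shapes are invented for demonstration.
from __future__ import annotations
from dataclasses import dataclass


@dataclass
class Post:
    author: str
    text: str


# Hypothetical internal moderation state, invisible to the users it targets.
SHADOWBANNED_ACCOUNTS = {"creator_123"}


def build_feed(candidate_posts: list[Post]) -> list[Post]:
    """Posts shown to other users, silently dropping shadowbanned authors."""
    return [p for p in candidate_posts if p.author not in SHADOWBANNED_ACCOUNTS]


def search(query: str, indexed_posts: list[Post]) -> list[Post]:
    """Search results apply the same silent filter, so hidden content is
    also unreachable by people actively looking for it."""
    matches = [p for p in indexed_posts if query.lower() in p.text.lower()]
    return [p for p in matches if p.author not in SHADOWBANNED_ACCOUNTS]


def view_own_profile(author: str, all_posts: list[Post]) -> list[Post]:
    """The shadowbanned author still sees their own posts, which is why
    the suppression can go unnoticed."""
    return [p for p in all_posts if p.author == author]
```

The point of the sketch is that the filter runs only on what other people see; from the affected user’s own vantage point, nothing appears to change.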
As reported by The Conversation, shadowbanning affects marginalized groups disproportionately, as was exemplified in 2020 during the Black Lives Matter movement when TikTok flagged creators with the word “Black” in their bios.
Shadowbanning is a form of undeclared control. Without knowledge of it, content creators remain unaware of how they have been erased and are thus powerless to seek remedies (like creating a new account) or to protest (by alerting others, for example).
Informal methods of circumventing shadowbanning and censorship have emerged on social media platforms in response to content moderation frustrations. Notably, “algospeak” has become popular in various online communities to discuss topics the algorithm may otherwise flag and censor. For example, the phrase “unalive” is used instead of the words “kill,” “suicide,” or related terms, and sex workers are referred to as “accountants.”
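To see why such substitutions work, consider a toy keyword-based filter of the kind automated moderation is often assumed to approximate. The blocklist and example posts below are purely illustrative and far cruder than any real system.

```python
# Toy illustration of why algospeak evades naive keyword-based moderation.
# The blocklist is hypothetical, not any platform's real rules.
BLOCKLIST = {"kill", "suicide", "sex work"}


def is_flagged(post_text: str) -> bool:
    """Flag a post if it contains any blocklisted term."""
    lowered = post_text.lower()
    return any(term in lowered for term in BLOCKLIST)


print(is_flagged("Resources for people struggling with suicide"))    # True: flagged
print(is_flagged("Resources for people struggling with unaliving"))  # False: slips through
```

Real classifiers are more sophisticated than a literal blocklist, but the dynamic is the same: coded substitutions shift the burden of opaque moderation onto the communities trying to discuss sensitive topics.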
The prevalence of tactics like algospeak points to a much broader issue: the lack of transparency, accuracy, and accountability in online content moderation. Content moderation practices aimed at curbing misinformation often have unintended consequences, as evidenced by leaked documents, published by The Intercept, outlining Department of Homeland Security (DHS) strategies for policing disinformation.
Since its establishment in 2018 under the Cybersecurity and Infrastructure Security Agency Act, the Cybersecurity and Infrastructure Security Agency, or CISA, has intensified efforts against disinformation, particularly concerning topics like COVID-19 and vaccinations; social justice movements, including Black Lives Matter; and geopolitical events, such as the U.S. withdrawal from Afghanistan and support for Ukraine. Consequently, concerns have arisen regarding potential government overreach, especially with the revelation that the DHS has granted government partners direct access to monitor disinformation.
Equally troubling is the apparent intertwining of national security interests and private sector operations. According to an investigation by Alan MacLeod of MintPress News, Meta has hired numerous former intelligence officers working within politically sensitive departments, including “trust, security, and content moderation.”
One prominent hire is Aaron Berman, who left his job as senior analytic manager for the CIA to manage misinformation at Meta. According to a Facebook video, Berman identifies himself as the manager of “the team that writes the rules for Facebook,” which is responsible for determining “what is acceptable and what’s not.”
When platforms like Meta place individuals from intelligence backgrounds in content moderation and trust-related roles, it raises questions about where these employees’ loyalties lie. Moreover, such conflicts of interest can lead to political or national security priorities taking precedence over impartial enforcement of community guidelines and users’ safety.
Concerns regarding government overreach become especially pertinent when reviewing Meta’s systemic censorship of Palestine-related content, particularly regarding the application of its “newsworthy allowance” policy. This policy has allowed the tech company to determine what posts are considered newsworthy, resulting in more than 1,050 cases of censorship, as documented by Human Rights Watch. Examples highlighted in the group’s report include censoring the phrase “From the river to the sea,” the Palestinian flag emoji, and any mention of Hamas.
Additionally, prominent Palestinian accounts have been suspended or removed, and criticism of Israel has been categorized as “hate speech” or “dangerous.” Human Rights Watch’s evidence includes instances of shadowbanning where content about Palestine saw decreased views, engagement metrics, and visibility.
Despite these measures, harmful content, including anti-Palestinian and Islamophobic remarks, has remained online, raising concerns about potential bias in Meta’s methods for monitoring political content. Meta’s updated content policy, which aims to limit political content on Instagram, Threads, and eventually Facebook, has also contributed to concerns about the censorship of pro-Palestinian voices and, more generally, freedom of expression on its platforms.
Meta claims that the policy stemmed from users requesting a less politically charged online experience, aiming to create a more enjoyable platform environment. The timing and automatic rollout of this policy raise questions about transparency and user control, especially considering that users have not been provided a clear way to opt out.
According to NPR, Meta’s updated policy targets content related to governments, elections, and societal issues, broadly defining political content as anything loosely connected to laws, elections, or social topics. This broad definition has led to concerns about what types of content may be restricted under the policy, including content about Israel’s genocidal war on Gaza.
The Pew Research Center discovered in 2023 that nearly half of American adults rely on Meta platforms for news. Therefore, any censorship of political content has far-reaching implications for public discourse and the diversity of opinions online. As discussions around online content moderation continue to evolve, the balance between addressing harmful content and preserving free expression remains a key challenge. Other major platforms’ implementation of similar policies underscores the ongoing debates and tensions surrounding content moderation practices in the digital age, particularly within the context of the current, high-stakes election cycle.
A 2018 Center for Information & Research on Civic Learning and Engagement survey revealed that 28 percent of respondents, ages eighteen to twenty-four, “heard or read about the election on social media platforms but were not reached by traditional outreach groups such as political parties and campaigns,” emphasizing the role social media plays in shaping political consciousness, especially among youth.
This growing trend spotlights a critical need to safeguard elections against misinformation, disinformation, and increasing polarization while preserving content vital to voters’ political awareness and engagement. At the same time, current strategies—such as shadowbanning and implementing restrictive policies—marginalize users and political issues, making it difficult to access reliable online content.
Legislation seeking to regulate online content can also be problematic and vague, leading to the censorship of perfectly legal and constitutionally protected user-generated speech. For example, the latest iteration of the Kids Online Safety Act (KOSA) is still facing backlash for imposing an overly broad “duty of care” on platforms, to be enforced by the Federal Trade Commission (FTC).
Critics contend this bill will likely automatically censor content related to sex education, abortion, LGBTQ+ identity, and mental health, among other topics, isolating communities and depriving them of vital support and resources. Despite these concerns, there’s been scant coverage in the establishment press.
Automated content moderation frequently misses crucial nuance. Sarah T. Roberts, associate professor of gender studies, information studies, and labor studies at the University of California, Los Angeles and an expert on content moderation work, told the Harvard Business Review that content moderation “requires linguistic and cultural competencies” that machines cannot achieve. That said, human content moderation is intense and often underpaid work that uses warped, out-of-date guidelines.
In 2019, Netzpolitik released an excerpt from TikTok’s moderation directions, which advised moderators to flag most political content as “not recommended.” TikTok maintained these guidelines were no longer in use and that it does not remove content related to political protest, “including reference to Tibet, Taiwan, Northern Ireland, and Tiananmen Square,” but it did not “address the question of whether it keeps these videos from finding an audience.”
Social media algorithms, unique to each platform, are designed to prioritize user engagement to boost advertising revenue, often at the expense of accuracy and diverse perspectives, with “extreme political content or controversial topics” most likely to be amplified, according to a recent study by Northwestern University.
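As a rough illustration of that dynamic, an engagement-optimized ranker simply orders posts by predicted interactions, with no term for accuracy or viewpoint diversity. The posts and scoring weights below are invented for the example; real ranking systems are far more complex but share the same objective.

```python
# Toy engagement-based ranking: posts are ordered purely by predicted
# interactions. Weights are invented for illustration only.
posts = [
    {"title": "Local budget hearing recap", "likes": 40,  "comments": 5,   "shares": 2},
    {"title": "Inflammatory partisan rant", "likes": 900, "comments": 650, "shares": 400},
    {"title": "Fact-check of viral claim",  "likes": 120, "comments": 30,  "shares": 15},
]


def engagement_score(post: dict) -> float:
    # Comments and shares are weighted more heavily because they tend to
    # drive further engagement; nothing here measures truth or balance.
    return post["likes"] + 3 * post["comments"] + 5 * post["shares"]


for post in sorted(posts, key=engagement_score, reverse=True):
    print(f"{engagement_score(post):>6.0f}  {post['title']}")
# The inflammatory post tops the feed simply because it provokes the
# most reactions.
```

Nothing in the scoring function asks whether a post is accurate or representative; whatever generates the most reactions rises to the top.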
In March, the White House hosted its first-ever “influencer” luncheon ahead of the State of the Union address, briefing young content creators on student debt relief, the economic agenda, and other issues. This came on the heels of the Biden campaign’s January launch on TikTok, where it joined a chorus of voices recruited to speak on the administration’s behalf. Then in April, President Biden signed a bill requiring the Chinese-owned company ByteDance, which owns TikTok, to sell its U.S. subsidiary to an American owner or be shut down. Biden’s re-election campaign plans to continue using TikTok.
Preserving the integrity of elections should not rely on social media platforms and government agencies whose priorities do not align with establishing fair democratic processes. Social media users must familiarize themselves with how platform algorithms work to assess the credibility and diversity of the information they encounter.
Investigating these complicated components of social media use requires critical media literacy to better understand which decisions benefit the public and which serve the interests of powerful entities. This could involve an adapted practice of lateral reading to assess the credibility of a source or creator by “searching for articles on the same topic by other writers . . . and for other articles by the author you’re checking on,” according to the News Literacy Project. Instead of diving deeper into content on a specific platform that may, on the surface, seem extreme or misleading, users should look outside of these digital platforms and read work available from other journalistic sources.
Engagement feeds the algorithms that determine what appears on social media feeds, creating never-ending echo chambers where platforms amplify similar viewpoints and stifle opposing positions. In a 2022 article for Common Dreams, broadcaster and author Thom Hartmann said that algorithms function almost exclusively “to make more money for the billionaires who own the social media platform.” As a result, he said, algorithms “don’t ask themselves, ‘Is this true?’ or ‘Will this information help or hurt this individual or humanity?’”
The goal of social media algorithms often diverges from considerations for truth and societal benefit, prioritizing profit for the platform above all else. In this critical presidential election year, voting decisions cannot be based on opaque algorithmic mechanisms, but instead must be made by an informed and media-literate citizenry.