The May 14 live-streamed mass shooting that left ten people dead in Buffalo, New York, reignited debates about the connection between media and racist extremism. The shooter’s manifesto perpetuated the so-called great replacement theory, which posits that there is a concerted effort to “replace” white people with people of color. Variations of this baseless white supremacist conspiracy theory have circulated for decades.
Immediately following the shooting, pundits argued that the shooter came to internalize white supremacist ideas from social media. Indeed, the shooter’s manifesto and social media activity reveal that he spent a great deal of time online. Following the shooting, social media platforms were chided for not acting swiftly enough to moderate racist content and remove videos of the shooter’s live stream. Lawmakers, pundits, and civil rights groups pressured social media companies to limit or ban white supremacist content from their platforms.
But this, of course, cuts against Big Tech’s profit model, which relies on the very content that antiracists oppose. This is partly because Big Tech is dominated by white people, so much so that a 2018 investigation revealed that ten of the largest companies in Silicon Valley “did not employ a single black woman in 2016” and three “had no black employees at all.” Research has also shown that decades of attempts at creating a more racially inclusive tech world have actually perpetuated racist outcomes. As a result, audiences are right to be skeptical that Big Tech will do anything substantive to mitigate racist attitudes and behaviors on its platforms.
One of the reasons for this inaction, as Silicon Valley investor Roger McNamee explains, is that the culture of Big Tech companies interprets public requests to moderate content as a public relations problem rather than a threat to communities. And this is compounded by a culture rooted in placing profits over people, as former employees and investigations have revealed. A Facebook whistleblower, for example, testified before Congress that the company knew its algorithm was undermining democracy and negatively impacting the mental health of young girls, but refused to make changes because the system was immensely profitable.
There has never been absolute free speech on social media platforms. The platform creator has always been the one to determine the parameters of speech, such as the community standards, the number of characters allowed per post, and which posts appear or do not appear in other users’ news feeds. These determinations fuel the economy of Big Tech, or what’s known as surveillance capitalism.
Surveillance capitalism is a new economic order in which the user is the product and platforms profit by collecting and analyzing users’ data. Analyses of that data are sold to governments and companies interested in modifying human behavior by predicting or nudging users’ actions. The seemingly endless flow of data necessitates keeping users engaged with the company’s platform. To achieve this goal, every user receives a unique search engine result or social media feed designed to serve them the content they find most engaging.
Sensationalist content—such as hate speech by white supremacists—has been a reliable source of engagement not just for social media platforms, but for legacy media outlets that thrived on the mantra “if it bleeds, it leads.” The difference with social media, studies show, is that users are presented almost exclusively with content that reinforces their ideological view, even if that content is incorrect or baseless.
And since people often conflate familiarity with accuracy, misinformation like the great replacement theory is more likely to be believed the more it is spread. This can lead to users becoming radicalized to the point of violence, as was the case with Cesar Sayoc, who in 2018 mailed pipe bombs to Democratic officials and members of the press.
The algorithms that Big Tech companies rely on play a role here, too. An algorithm, having been coded by human programmers, is not objective and often reflects its creators’ biases. As a result, racism is routinely embedded in and expressed through Big Tech’s algorithms.
Search engine algorithms can privilege racist terms and imagery. Safiya Umoja Noble, in her book Algorithms of Oppression: How Search Engines Reinforce Racism, for example, highlights how search engines like Google promote racist tropes, as in the case of users searching for images of Black people and finding pictures of apes and gorillas.
Big Tech algorithms have revolutionized every aspect of the human experience, but mostly for the worse. They are used by employers to determine a candidate’s qualifications for a position, by law enforcement to determine an individual’s potential for criminality, and by governments to determine an individual’s eligibility for social services. In each case, these algorithms reinforce racism by denying people of color access to employment and services while targeting them for more policing.
Ruha Benjamin, a professor in the Department of African American Studies at Princeton University, refers to this as the “New Jim Code.” Benjamin argues that racism has been digitized, and those who benefit from and purvey racism hide behind the notion that an algorithm’s designers are not responsible for its decisions.
As the tragedy in Uvalde, Texas, reveals, we suffer from a much larger problem of cultural, social, and political decay. Allowing society to be guided by the economic whims of a handful of oligarchs in Big Tech has proven corrosive. Rather than treat Big Tech as an antiracist ally, we would be wise to view the industry as a source of racism in urgent need of regulation.