Protesters outside Google’s San Francisco, California, headquarters in mid-December denounce the tech giant’s $1.2 billion deal to provide cloud services for the Israeli military and government.
The protesters outside Google’s San Francisco office on December 14, 2023, draped themselves in white sheets with the word “Genocide” written in the tech giant’s signature multicolored font and lay down to “die” in the street. The “die-in” targeted Google’s work with the Israeli government, which workers at the tech firm and others have been challenging for years. But the protests took on new urgency last autumn with the start of Israel’s relentless bombardment of Gaza.
The “No Tech for Apartheid” campaign began in 2021, when Amazon Web Services and Google Cloud executives signed a joint contract for Project Nimbus, a $1.2 billion deal to provide cloud services for the Israeli military and government.
“This technology allows for further surveillance of and unlawful data collection on Palestinians, and facilitates expansion of Israel’s illegal settlements on Palestinian land,” the anonymous Google and Amazon workers wrote in an op-ed published in The Guardian.
At last December’s protest in San Francisco, Alex Hanna, who left Google’s Ethical Artificial Intelligence (AI) team over concerns about the harms caused by the company’s products, told the San Francisco Chronicle that the “brutal, indiscriminate bombing of Gaza could not have happened without the support of Google.”
In late November, a haunting report by +972 Magazine highlighted the role of artificial intelligence in Israel’s bombing campaign, explaining that the Israeli military is using an AI system called Habsora (Hebrew for The Gospel). According to the report, an Israel Defense Forces (IDF) spokesperson said Habsora “enables the use of automatic tools to produce targets at a fast pace, and works by improving accurate and high-quality intelligence material according to [operational] needs.” While this is not the system that Google workers created, tech workers have noted that they often are not told exactly what they are making; rather, their labor is Taylorized, broken down into small pieces so that each programmer makes only one part of a larger product.
The Google and Amazon workers involved in the campaign to stop Project Nimbus are fighting—as many tech workers have in recent years—for people other than themselves. Simply put, they don’t want their highly paid skilled labor to be used to kill people. But they are also fighting for their own working conditions, demanding to know what they are making, and to control where and how the products of their labor will be used.
In other words, they are part of one of labor’s oldest struggles that the advancement of AI has brought back to prominence: They are fighting to control the production process itself, rather than just how profits are distributed.
Artificial intelligence, which often is neither artificial nor intelligent, embodies Karl Marx’s insight that “capital is dead labor, that, vampire-like, only lives by sucking living labor, and lives the more, the more labor it sucks.” AI, such as a robot in an Amazon warehouse or a machine on a Ford factory floor, replaces human labor with other products of human labor, living, as it were, with the dead.
As artist and organizer Molly Crabapple noted, the thing most commonly referred to as AI these days—generative image and text programs that can create an illustration or an article in the style of, so the story goes, nearly any artist or writer you plug in—was trained with art and writing made by humans. This, she argued, is “massive theft.” (According to an investigation by The Atlantic, one of my books was part of a collection of pirated texts used to train AI systems.)
Such computer programs are unlikely to replace human labor entirely, but they are used as a form of labor discipline. Sometimes merely the threat of being replaced with AI is the weapon of choice: The recent strikes in Hollywood hinged on contract language around the use of AI instead of human screenwriters or actors, with the television and film companies aiming to encode their rights to do things for which the technology doesn’t yet exist, and workers demanding control over the production process, including how and when AI might be used.
Companies benefit from overstating the capabilities of computers. As Craig Gent put it in his forthcoming book Cyberboss: The New Struggle for Control at Work, “AI doesn’t have childhood trauma, as one viral placard put it. But it could ‘gig-ify’ writing work. In this scenario, writers—especially junior writers—might be hired to rework AI-drafted scripts, or conversely, write drafts for AI to redraft. Others might even be hired to front AI-generated work, undermining other writers in the process. It is less about automation and more to do with labor control in what is a highly unionized industry.”
Technology, as Gent writes, and Marx long ago noted, is deployed to control labor, whether by fragmenting and isolating workers from one another by automating parts of their jobs, or by the increasingly common practice of “algorithmic management,” in which a computer measures a worker’s speed and success, and even makes decisions about hiring and firing. Algorithmic management is also the automation of labor; in this case, it’s the labor of the boss. For years, drivers for Uber and other app-based companies have protested being “deactivated” from the apps with little explanation and no way to talk to a human to find out why they were cut off from their work.
Algorithms are not “intelligent,” let alone super-intelligent; they contain the biases of the society and the companies that produced them. As geographer Dalia Gebrial noted, despite attempts by companies to portray their tech as neutral, in practice, racialization is “‘coded’ into the legal, technological, and social dynamics of the platform’s model.” This is bad enough when it comes to Uber drivers being treated as disposable and easily replaced by the next worker. It becomes dystopian when you consider “The Gospel,” where the exceptionally grim expertise of picking targets to bomb is encoded into an algorithm in which the process of killing is automated, civilian casualties are precalculated, and screwups are unimportant, because Israel doesn’t seem to care how many children and other civilians die.
Would it be better if humans were making each decision to kill? The process itself is horrific, which is perhaps why the United Auto Workers (UAW), shortly after calling for a ceasefire in Gaza, also announced the creation of a “political-education panel” called the Divestment and Just Transition Committee. Among other things, according to The Nation, the committee “will examine the size, scope, and impact of the U.S. military-industrial complex that employs thousands of UAW members and dominates the global arms trade.” The UAW is reviving an old tradition by not only weighing in on U.S. foreign policy, but also demanding a say in what they will produce. This is an important shift for the iconic industrial union away from a deal it made in 1950 to stay out of what was euphemistically called “management rights,” or the right to decide what to make and where and how to make it.
This matters in the age of high-tech tools, because, as Crabapple and others highlight, AI tends to automate away the very work that workers want to do—the so-called good jobs that often are also the higher-paid ones. As Helen Hester and Nick Srnicek note in their book After Work: A History of the Home and the Fight for Free Time, this has even affected supposedly un-automatable caring labor, in which “the more enjoyable aspects of child care—engaging, playing, interacting with children—have been automated via screens, while the more routine and burdensome aspects have remained largely untouched.” A fight to control the use and development of technology has ramifications for all aspects of our lives.
The tech we have at our disposal is designed by those who have power. It isn’t designed to make our lives easier or better. “Convenience” is a substitute for pleasure, but it rarely results in actual saved labor. Rather, it is designed to make us easier to control, and in the worst cases, easier to kill.
Such a realization is bleak. But it is not inevitable that our world, or even our tech, will look this way. In the streets during the past year, workers have won important victories in the fight to control the conditions of their labor, and tech and auto workers are both pointing the way to a better future, in which we control the things we produce and refuse to continue making weapons of war.