Imagine a world where artificial intelligence has for years been aggregating every input you've tapped into your computer, tablet, and phone: your Google searches and Amazon purchases, the contents of your text messages, the data from the many apps on your phone, and your Instagram and Facebook likes, posts, and messages. Imagine that, with this data set, you are being nudged and herded toward commercial action based on your own behavior and perceived needs.
It turns out that’s exactly the world we’re living in today.
According to Shoshana Zuboff, a professor emeritus of business administration at Harvard Business School, we are living in an era of “surveillance capitalism,” in which our online behavior is constantly being monitored, recorded, and analyzed. The big pioneers of this approach, including Google and Facebook, have become quite wealthy by selling predictions of our individual behavior to targeted online advertisers. Now the customers for this data range across the entire economy.
“The result is that both the world and our lives are pervasively rendered as information,” Zuboff writes in her recently published 700-page book, The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power. “Whether you are complaining about your acne or engaging in political debate on Facebook, searching for a recipe or sensitive health information on Google, ordering laundry soap or taking photos of your nine-year-old, smiling or thinking angry thoughts, watching TV or doing wheelies in the parking lot, all of it is raw material for this burgeoning text.”
Surveillance capitalism, a term coined by Zuboff, is a digital-born market form governed by novel and even startling economic imperatives that are producing unprecedented asymmetries of knowledge and power. The stark social inequalities that characterize this market project enable new forms of economic and social domination, while challenging human autonomy, elemental and established human rights, including the right to privacy, and the most basic precepts of a democratic society.
“We think we’re searching Google,” Zuboff said in a recent speech at the University of California, Berkeley. “Google is actually searching us. We think that these companies have privacy policies; those policies are actually surveillance policies.”
One troubling example that Zuboff cites in her book was recently in the news. Cambridge Analytica was a private company owned by the billionaire Robert Mercer, who also funded the Donald Trump campaign. It harvested profiles from approximately eighty-seven million Facebook users so that the campaign could target ads that played to their fears. This information was used to direct subliminal messages to individuals in order to manipulate their political attitudes and influence their voting behavior.
“What we’ve read about when it comes to the Cambridge Analytica scandal, that’s really sort of a garden-variety day in the life of a surveillance capitalist,” Zuboff says in an interview. “For many, it was most likely one of the first times that a tech corporation’s wrongdoings had been so clearly laid bare.”
Zuboff believes Cambridge Analytica, which closed up shop in 2018, “had tremendous influence both on the Brexit vote of 2016 and in the 2016 U.S. presidential elections.” But it is only a taste of what she believes is yet to come.
Zuboff has pulled back the veil on a real and present danger to our democracy. She says these companies are mining information “that gives us our individual spirit, our personality, our sense of freedom of will, freedom of action, our sense of our right to our own futures.” This is information that needs to be private, “because that is how it grows and flourishes and turns us into people who assert moral autonomy—an essential element of a flourishing, democratic society.”
Cindy Cohn, executive director of the Electronic Frontier Foundation, a nonprofit group devoted to defending digital privacy and free speech, suggests that Zuboff may be taking her analysis a bit too far.
“The idea that algorithms are going to rob us of free will and cause us to become automatons like Manchurian candidates, and think things that algorithms want us to, is not my central worry,” Cohn says. “But Shoshana Zuboff is not wrong in her assessment. We need real science-based evidence of the impact of this trend. There is reason to be very concerned.”
Cohn outlines the danger, as she sees it: “If tracking people limits the information we receive, it can limit the opportunities we get. We really need to pay attention to our lack of freedom from being surveilled all the time. People need privacy zones. Neither governments nor corporations should be doing constant tracking of us.”
“Help Wanted: An Examination of Hiring Algorithms, Equity, and Bias,” a report by the group Upturn released in December 2018, echoes Cohn’s concerns. It found that online predictive tools affect equity throughout the entire hiring process. Employers and vendors are using sourcing tools, like digital advertising and personalized job boards, to proactively shape their applicant pools. Hiring tools that assess, score, and rank job seekers can overstate marginal distinctions between similarly qualified candidates. After researching popular tools that many employers currently use, the study concluded that “without active measures to mitigate them, bias will arise in predictive hiring tools by default.”
Professor Brett Frischmann, a scholar in intellectual property and Internet law who teaches at Villanova University Law School, believes that surveillance capitalism puts our very humanity at risk.
“My concern is with the techno-social world we’re building, a world governed by supposedly smart tech, built by people infatuated with computational power, and driven by the logic of efficiency,” says Frischmann, the co-author of the 2018 book Re-Engineering Humanity, published by Cambridge University Press. “They erroneously believe that all problems are comprehensible in the language of computation and thus can be solved if they only have enough data. But many of the most fundamental social problems are not comprehensible in the language of computation. Justice is not. Who we are, and aspire to be, as flourishing human beings is not. And so there is [an] existential conflict.”
Frischmann frets that we are building a world governed by engineered determinism, in which “fully predictable and programmable people perform rather than live their lives. Such a world would be tragic. People living there could be described as human and still would qualify as Homo sapiens. But they would have a thin normative status as human beings because much of what matters about being human would be lost.”
Maria Brincker, an associate professor of philosophy of mind at the University of Massachusetts Boston, feels our freedom is restricted by the current surveillance and personalization practices of various digital platforms. While this can have huge consequences for individuals, as when they are denied insurance, parole, or opportunities, “the algorithmic decision processes themselves are generally kept systematically out of sight.”
Indeed, these functions are often considered proprietary so their inaccessibility is legally protected. “This means that our behavior fairly easily can be nudged, directed, or controlled without us noticing,” Brincker says. For instance, Amazon uses information about her buying tendencies to steer her toward specific products, “yet I am making the choice and it therefore feels like it was my preference guiding the choice, even if I was herded to this outcome by an algorithm.”
But the danger goes far deeper than buying things because a website is smart enough to suggest them. We are also being nudged to accept or reject certain kinds of ideas. “It biases my ideas about the world and the kind of possibilities and options and also dangers it has,” Brincker says. “In other words, the sequences of manipulated experiences now seriously affect what I think my world is like and therefore what I will seek to do later.”
Brincker feels the need for a political discussion about what kind of practices and environments we want to support and allow.
Zuboff believes we must apply our democratic institutions to enforce current laws and regulations, such as privacy and antitrust laws, and develop new ones that specifically address the imperatives of surveillance capitalism. Surveys show that once Internet users become aware of surveillance capitalism’s shady practices, they choose to reject them. So there is a great opportunity for companies to build an alternative ecosystem, one that returns us to the earlier promise of the digital age as an era of empowerment and the democratization of knowledge.
As Zuboff sees it, surveillance capitalism challenges principles and practices of self-determination—in social relations, politics, and governance—for which humanity has suffered long and sacrificed much. “For this reason alone,” she writes, “such principles should not be forfeit to the unilateral pursuit of a disfigured capitalism. Worse still would be their forfeit to our own ignorance, learned helplessness, inattention, inconvenience, habituation, or drift. This, I believe, is the ground on which our contests for the future will be fought.”