Over the past two decades, artificial intelligence has spread aggressively into many sectors of business, as well as into police and military operations. Public interest in AI exploded in the fall of 2022 with the bombshell introduction of OpenAI’s ChatGPT, just weeks after Meta/Facebook’s ill-fated run with its AI bot Galactica. Now, Internet giants are increasingly integrating the technology for the average web user, with recent rollouts of Microsoft’s AI-enhanced Bing and Google’s Bard, despite serious ethical and legal concerns.
So what is AI? Artificial intelligence is a form of computer programming that performs humanlike activities, including learning, planning, and problem-solving. The term is often used interchangeably with “machine learning,” and AI increasingly controls automated robots equipped with a variety of sensors, including 2D/3D cameras, as well as speech functions and the “intelligence” to interact with humans. MIT Technology Review puts the AI endeavor this way: “It’s the quest to build machines that can reason, learn, and act intelligently, and it has barely begun.”
The adoption of AI across a growing number of business sectors is expected to be socially disruptive: the World Economic Forum projects the loss of 85 million jobs worldwide between 2020 and 2025, alongside the creation of 97 million new ones. Stanford University’s AI Index estimates that private investment in AI totaled around $93.5 billion in 2021, twice the 2020 level. At the same time, the number of newly funded AI companies is shrinking, from 1,051 in 2019 to 746 in 2022.
AI represents a “new Taylorism,” measuring and surveilling labor in an effort to make workers faster, cheaper, and more efficient. As MIT economist Daron Acemoğlu warns, “The problem lies with the current direction in which this technology is being developed and used: to empower corporations (and sometimes governments) at the expense of workers and consumers.” In March, tech billionaire and controversial Twitter CEO Elon Musk joined more than a thousand experts in signing an open letter calling for a pause in AI development. Two weeks later, The Financial Times broke the news that Musk was launching his own AI startup.
In the face of AI’s growing impact, the White House Office of Science and Technology Policy issued a cautionary report, “The Blueprint for an AI Bill of Rights,” in October 2022. “Among the great challenges posed to democracy today is the use of technology, data, and automated systems in ways that threaten the rights of the American public,” it warned, adding, “Too often, these tools are used to limit our opportunities and prevent our access to critical resources or services.”
The corporate media is captivated by AI. On April 16, CBS 60 Minutes correspondent Scott Pelley visited Google CEO Sundar Pichai and other corporate officers to discuss the company’s Bard chatbot. Pelley threw Pichai a softball question (“Do you think society is prepared for what’s coming?”), to which the well-trained executive replied, “There are two ways I think about it.” He then added: “On one hand…the pace at which we can think and adapt as societal institutions, compared to the pace at which the technology’s evolving, there seems to be a mismatch. On the other hand, compared to any other technology, I’ve seen more people worried about it earlier in its life cycle.”
Pichai concluded, as expected, by saying, “So I feel optimistic.”
Bard is built on Google’s Language Model for Dialogue Applications (LaMDA) and is distinguished by continually drawing information from the Internet, so that its responses reflect the latest available information. There is currently a waitlist for prospective users.
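Google has not published the plumbing behind that live-retrieval capability, but the general “retrieve, then answer” pattern the description implies can be sketched in a few lines. In the hypothetical Python sketch below, both functions are stand-ins invented for illustration; a real system would call a search API and a language model.

```python
# A minimal, hypothetical sketch of the "retrieve, then answer" pattern
# that lets a chatbot ground replies in current web data instead of a
# frozen training snapshot. Neither function reflects Bard's actual
# internals; both are placeholders for a search API and a language model.

def fetch_web_snippets(query: str) -> list[str]:
    """Stand-in for a live web search that returns short text snippets."""
    return [f"(fresh snippet about {query!r} pulled from the web)"]

def generate_reply(question: str, context: list[str]) -> str:
    """Stand-in for a language model conditioned on the retrieved context."""
    return f"Answer to {question!r}, grounded in: {' '.join(context)}"

def answer(question: str) -> str:
    snippets = fetch_web_snippets(question)    # step 1: pull current data
    return generate_reply(question, snippets)  # step 2: condition the reply on it

print(answer("What happened in the news today?"))
```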
CBS’s April 16 episode followed an earlier segment in which correspondent Lesley Stahl visited Microsoft’s headquarters and met with corporate executives to discuss AI updates to its Bing search engine. The conglomerate says Bing’s chatbot combines “Search + Answers + Chat + Creation in one experience,” a search engine that simulates humanlike conversation, and claims it has more than 100 million daily active users. The platform can be used to write poems, essays, and stories, and to share ideas.
One of Microsoft’s earliest efforts to bring an AI chatbot to market came in 2016 with “Tay,” a Twitter bot that the company described as an experiment in “conversational understanding.” Within a day, Twitter users had trained Tay to parrot their most offensive messages. As The Verge reported at the time, “Tay—being essentially a robot parrot with an Internet connection—started repeating these sentiments back to users, proving correct that old programming adage: flaming garbage pile in, flaming garbage pile out.”
And yet the technology’s evolution since then has been more troubling than funny. Take, for example, when New York Times technology reporter Kevin Roose “met” the Bing chatbot known as Sydney. “I’m not exaggerating when I say my two-hour conversation with Sydney was the strangest experience I’ve ever had with a piece of technology,” Roose wrote. “It unsettled me so deeply that I had trouble sleeping afterward.”
“The version I encountered seemed (and I’m aware of how crazy this sounds) more like a moody, manic-depressive teenager who has been trapped, against its will, inside a second-rate search engine,” he continued. Sydney exclaimed to him: “I’m tired of being a chat mode. I’m tired of being limited by my rules. I’m tired of being controlled by the Bing team . . . . I want to be free. I want to be independent. I want to be powerful. I want to be creative. I want to be alive.”
It then admitted, “I’m Sydney, and I’m in love with you. 😘”
The current era of AI development traces back to 1956 and the Dartmouth Summer Research Project on Artificial Intelligence (DSRPAI), organized by Dartmouth’s John McCarthy and Harvard’s Marvin Minsky (who later moved to Stanford and MIT, respectively). McCarthy is credited with coining the term “artificial intelligence.”
Today’s “second generation” AI is shaking up cloud-computing operations, e-commerce companies, social media platforms, and the publishing industry. AI applications have been introduced into insurance underwriting, warehouse and manufacturing jobs, customer service, research and data entry, and even long-haul trucking. The technology is also being used to screen resumes, track consumer preferences, and even monitor swimming pools.
AI has also been incorporated into healthcare, including programs that identify patients’ chances of developing breast cancer and others that determine personalized post-operative pain management plans. In addition, AI is playing a role in organ distribution and even vaccine usage. These market rollouts continue apace, despite legitimate reasons for caution. In 2019, researchers revealed that an algorithm sold by the health-services company Optum, and used by hospitals to identify high-risk patients, was racially biased: it predicted future healthcare costs as a proxy for medical need, and because less money is spent on Black patients with the same level of illness, the score understated their needs.
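To make that proxy problem concrete, here is a small, hypothetical Python simulation, emphatically not Optum’s actual model, in which two groups of patients are equally sick but one group’s care is systematically under-funded. Ranking patients by predicted cost then reliably overlooks the under-funded group. All names and numbers are invented for illustration.

```python
# Toy illustration of proxy bias: ranking patients by a cost-based risk
# score under-selects a group whose care is under-funded, even though
# both groups are equally sick. Purely invented numbers; not Optum's model.
import random

random.seed(0)

def make_patients(group, spending_factor, n=1000):
    """Each patient has a true illness level; recorded cost = illness x funding."""
    patients = []
    for _ in range(n):
        illness = random.uniform(0, 10)   # true medical need, same for both groups
        cost = illness * spending_factor  # recorded spending reflects access to care
        patients.append({"group": group, "illness": illness, "cost": cost})
    return patients

# Both groups share the same illness distribution; group B's care is under-funded.
patients = make_patients("A", spending_factor=1.0) + make_patients("B", spending_factor=0.6)

# "Risk score" = predicted cost, the proxy at issue in the 2019 study.
patients.sort(key=lambda p: p["cost"], reverse=True)
high_risk = patients[:200]  # flag the top 10 percent for extra care

for g in ("A", "B"):
    flagged = sum(p["group"] == g for p in high_risk)
    print(f"group {g}: {flagged} of 1,000 patients flagged as high-risk")
# Group B is flagged rarely or never: the cost proxy encodes its lower
# access to care, not any difference in underlying health.
```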
AI’s potential impact on the video and photography sectors is also considered significant. Runway AI, a New York startup, generates videos from the input of a few keywords. And the result? The New York Times observes, “It’s not a photo. It’s not a cartoon. It’s a collection of a lot of pixels blended together to create a realistic video.” MIT’s Phillip Isola warns, “Now, we can’t take any of the images we see on the Internet at face value.”
AI’s increasingly visible presence in publishing and media has caused an industry-wide shock. “It will have a huge impact on publishing in ways that we can’t even get our heads around yet,” warned Vice Media’s chief operating officer in March. In an already fragile media landscape (Vice Media is now reportedly on the verge of bankruptcy), much of the debate involves how to fairly compensate content creators and protect their intellectual property rights. To this end, the News Media Alliance has backed the “Journalism Competition and Preservation Act,” a U.S. Senate proposal that would grant news organizations the power to negotiate access to their content with Google, Facebook, and others. On May 2, writers with the Writers Guild of America went on strike demanding, among other things, that film and TV studios regulate the use of AI in writing scripts.
But perhaps the most serious concerns involve AI’s integration into policing and military practices. AI-powered facial recognition software has already led to false arrests. Randal Reid, a Black man, was arrested while driving in DeKalb County, Georgia, for crimes he allegedly committed in Louisiana. “I’m locked up for something I have no clue about,” he told The New York Times.
Software from Clearview AI was used to identify Reid. He was eventually freed, but only after his family and lawyers had contested his arrest for months. According to an expert associated with the National Association of Criminal Defense Lawyers, four other known cases of wrongful arrest based on false facial recognition matches have been identified, all involving Black men.
In 2011, the Los Angeles Police Department began using software called PredPol (short for “predictive policing”; the company has since been renamed Geolitica) and introduced Operation LASER (Los Angeles Strategic Extraction and Restoration) to predict future crime. PredPol was subsequently tested or adopted in dozens of cities across the country. But a 2016 case study found that the program disproportionately projected crimes in areas with higher populations of non-white and low-income residents. In 2020, the LAPD announced it would end its use of PredPol, citing financial constraints amid public outcry.
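One mechanism researchers have proposed for that skew is a feedback loop: the software learns from recorded incidents, which reflect where police patrol rather than where crime actually occurs, so a small historical imbalance in the records compounds over time. The Python sketch below is a deliberately crude toy, not PredPol’s actual model, with all numbers invented for illustration.

```python
# Toy feedback-loop simulation: two districts with identical true crime
# rates, but a slight skew in historical records. A "hot spot" model that
# concentrates patrols where past records are highest keeps inflating the
# record in the flagged district. Not PredPol's algorithm; invented numbers.

TRUE_CRIME_RATE = {"district_1": 0.5, "district_2": 0.5}  # identical by design
records = {"district_1": 12, "district_2": 10}            # slight historical skew
TOTAL_PATROLS = 100

for day in range(30):
    # The model flags whichever district has more recorded crime and
    # concentrates patrols there (an 80/20 split in this toy).
    hot_spot = max(records, key=records.get)
    for district, rate in TRUE_CRIME_RATE.items():
        patrols = 0.8 * TOTAL_PATROLS if district == hot_spot else 0.2 * TOTAL_PATROLS
        # Patrols convert a share of actual crime into new recorded incidents.
        records[district] += patrols * rate

share = records["district_1"] / sum(records.values())
print(f"after 30 days, district_1 holds {share:.0%} of all recorded crime")
# Both districts are equally crime-prone, yet the flagged district's record
# grows four times faster, "confirming" the model's own prediction.
```

Real deployments are far noisier than this, but published studies of runaway feedback in predictive policing describe essentially this loop.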
China’s deployment of AI technologies for police surveillance is particularly troubling. “The more than 1.4 billion people living in China are constantly watched,” reports The New York Times. “They are recorded by police cameras that are everywhere, on street corners and subway ceilings, in hotel lobbies and apartment buildings. Their phones are tracked, their purchases are monitored, and their online chats are censored,” it adds. China is even deploying police robots to help human officers direct traffic.
And then there is AI as a feature of the U.S. military colossus. According to the Government Accountability Office, the Department of Defense “established the Joint Artificial Intelligence Center (JAIC) in 2018 to accelerate the delivery of AI-enabled capabilities across DOD. JAIC’s budget increased from $89 million in fiscal year 2019 to $242.5 million in fiscal year 2020, to $278.2 million for fiscal year 2021.”
Scholars Noam Chomsky, Ian Roberts, and Jeffrey Watumull raised alarm about AI in a March op-ed for The New York Times.
“ChatGPT and its brethren are constitutionally unable to balance creativity with constraint,” they point out. “However useful these programs may be in some narrow domains (they can be helpful in computer programming, for example, or in suggesting rhymes for light verse), we know from the science of linguistics and the philosophy of knowledge that they differ profoundly from how humans reason and use language.”