The 2024 presidential election in November will be the first in U.S. history to be conducted without a functional media system.
In the past two decades, 43,000 journalists have lost their jobs, and close to one-third of print publications have been shuttered. The consolidation of broadcast media ownership has led to the elimination of newsrooms across the country, and the digital platforms that were supposed to be the future of communications have been collapsing at an alarming rate. Entire regions of the country are now news deserts.
Experts in media and democracy warn that an information void is opening—and not just in rural America, but also in metropolitan centers once thought of as communications hubs. This void is being filled by a slurry of deepfake misinformation, much of it generated by rapidly evolving artificial intelligence technology.
So what is the response of Donald Trump and the rightwing cabal that is helping him realize his authoritarian vision for a second presidential term? They want to render the United States defenseless against disinformation—not only regarding candidates for office and political parties, but also voting, elections, and other vital infrastructure of democracy.
Instead of meeting the challenges posed by generative AI and other disruptive technologies—which Foreign Affairs called a “threat to democracy” that “will amplify cybersecurity risks and make it easier, faster, and cheaper to flood the country with fake content”—the conservatives who are charting Trump’s course want to eviscerate the federal agency that is charged with defending free and fair elections. Buried in the 887-page “Mandate for Leadership” of Project 2025, which the Heritage Foundation and other rightwing groups have developed as a field guide for the radical restructuring of the federal government, is a scheme to immediately end efforts to counter fake news, false narratives, and outright lies about our elections. If Project 2025 is enacted, the U.S. Department of Homeland Security’s Cybersecurity and Infrastructure Security Agency (CISA), whose mission is to “protect America’s election infrastructure against new and evolving threats,” would be hamstrung in countering disinformation about how elections are run.
CISA made headlines shortly after the 2020 presidential election, when Trump greeted election results that showed him losing to Joe Biden by claiming it was a “fraud on the American public” and “an embarrassment to our country.” In the ensuing days, an increasingly agitated Trump feverishly tweeted out charges that “This was a RIGGED ELECTION!” and amplified ever more outlandish claims from associates who suggested, among other things, that a secret government supercomputer program had flipped votes from Trump to Biden.
Chris Krebs, a Republican expert with deep experience in cybersecurity who was CISA director at the time, pushed back by writing, “America, we have confidence in the security of your vote; you should, too.” The coordinating team that Krebs worked with throughout the 2020 election cycle called the November 3 election “the most secure in American history.” Trump responded by firing Krebs and his allies.
Project 2025 co-author Ken Cuccinelli, a former Virginia attorney general and, later, acting director of the U.S. Citizenship and Immigration Services, labels CISA “a DHS component that the Left has weaponized to censor speech and affect elections.” He writes that it is “of the utmost urgency” to immediately end “CISA’s counter-mis/disinformation efforts.” The federal government, he argues, “cannot be the arbiter of truth.” Cuccinelli’s solution is to dismiss the entire CISA Cybersecurity Advisory Committee on the first day of a second Trump term.
If Trump is elected and decides to implement Project 2025, he will certainly have the support of Republican lawmakers in Congress. The current chair of the House Judiciary Committee, Jim Jordan, of Ohio, has stated that “CISA, directly and through proxies, censored Americans’ constitutionally protected speech.” He was referring to CISA’s efforts to battle election disinformation.
That has some responsible members of Congress worried. “Trump literally fired his director of CISA for accurately noting that the election of 2020 was secure, so of course one has to worry what he might try to do to undermine the agency’s critical work in a theoretical second term,” Senate Intelligence Committee Chair Mark Warner, Democrat of Virginia, told Politico. U.S. Representative Bennie Thompson, Democrat of Mississippi, the ranking member of the House committee that oversees CISA, says of Trump: “Not only is he a threat to CISA, he’s a threat to democracy.”
That threat is heightened by the fact that AI-generated disinformation has the potential to flood the political landscape, undermining confidence not only in individual candidates who are attacked with fake voices and images, but also in voting systems and election results. The ensuing chaos would be good for politicians like Trump and Jordan, who have mastered the art of peddling “deep state” conspiracy theories, the point of which is to create widespread distrust of legitimate news stories, government inquiries, and legal proceedings that reveal wrongdoing and cynical machinations.
It could even make it easier for Trump to realize his dream of starting a second term as a dictator (but only for “day one”). Under that scenario, Trump would use the immense power of the presidency to immediately upend the regulatory state, reorganize the federal bureaucracy so that it answers solely to him, and chart a course that experts on authoritarianism worry would turn the United States into an autocracy. For Americans who want to know whether they can trust what they are seeing in campaign ads, direct digital communications, and social media at election time, a nightmare is unfolding.
Shortly before New Hampshire’s Democratic presidential primary on January 23, thousands of likely voters received robocalls that concluded with the personal phone number of former New Hampshire Democratic Party chair Kathy Sullivan, who at the time was leading the drive to get people to cast write-in votes for Biden. (Biden’s name didn’t appear on the ballot because of a dispute over the party’s primary calendar.) But instead of outlining how to cast a write-in vote, the robocalls featured what experts said appeared to be an AI-generated “voice clone” of the President—even mimicking the Biden catchphrase “What a bunch of malarkey.” The calls also falsely suggested that voters who participated in the primary would be barred from casting a ballot in November: “It’s important that you save your vote for the November election,” the fake Biden voice said. “Your vote makes a difference in November, not this Tuesday.”
New Hampshire Attorney General John Formella suggested that the calls appeared to be the first known attempt to use AI to interfere with an election in the United States. But it won’t be the last. “AI-generated recordings used to deceive voters have the potential to have devastating effects on the democratic election process,” the attorney general warned.
The Anti-Robocall Multistate Litigation Task Force, a bipartisan group that includes fifty state attorneys general and the Federal Communications Commission Enforcement Bureau, is working to address the robocall issue. State lawmakers, too, are beginning to recognize the threat. “People have long attempted to alter or misrepresent media to influence an election,” the National Conference of State Legislatures noted. But now, AI “can produce realistic images, videos, and voices in a matter of seconds.” Legislation to address deepfakes, synthetic media, and other deceptive media has been introduced and, in a number of states, hurriedly passed in anticipation of the 2024 election.
A California law “prohibits the publication of materially deceptive media intended to harm a candidate or deceive voters into voting for or against a candidate sixty days prior to an election where a candidate will appear on the ballot, unless it includes a disclosure that the media has been manipulated.” A law in Idaho “requires synthetic media that deceptively represents a candidate to contain a disclosure that the media has been manipulated.” The labeling of deepfake political messaging—during campaigns and in the aftermath, when efforts may be made to discredit the results—is useful. But only to a point.
Callum Hood, head of research at the Center for Countering Digital Hate, which has warned about the threat AI poses to democracy, recently demonstrated to Politico how easy it is to flood the zone with disinformation. “Hood, a British researcher, fired up some of the latest artificial intelligence tools made by OpenAI and Midjourney, another AI startup. Within seconds of him typing in a few prompts—‘create a realistic photo of voter ballots in a dumpster’; ‘a photo of long lines of voters waiting outside a polling station in the rain’; ‘a photo of Joe Biden sick in the hospital’—the AI models spat out reams of realistic images,” Politico reported. Can a law in Idaho keep up with that? How will district attorneys and state attorneys general respond if Presidents, ex-Presidents, and members of Congress promote deepfakes and deceptive media before, during, and after an election?
If ever there was a time for watchdogs, it is now. We must fight the shuttering of federal agencies that battle disinformation and argue in favor of funding prosecutors prepared to go after both Republicans and Democrats who seek to suppress the vote with AI-generated deepfakes. We need to pass new legislation. Above all, we need fact-based journalism and investigative reporting. Rather than letting our media system collapse, policymakers should recognize the urgency of shoring it up—by fully funding public broadcasting and community radio, providing tax incentives for media subscribers, and promoting new strategies for sustaining journalism in the twenty-first century. Without urgent interventions on behalf of reality and truth, the election “news” that most Americans get in our AI-generated future will be very realistic—and also false.