If last year left me with one question, it was this: How much should I panic about artificial intelligence?
Remember when ChatGPT seemed to appear out of nowhere? To many, the AI text chatbot was a novelty. You could ask it to write a speech for the best man at a wedding or a finely tuned insult aimed at your mother-in-law. It was like one of those Snapchat filters that made your face look like a mouse: You could mess around with it and have a laugh for a while, but you can only make your face look like a mouse so many times before the joke gets old.
But when the novelty of ChatGPT passed and the tourists went back to old-timey Google searches, AI remained. For those who understand it, it’s clear AI can be incredibly useful—for research, coding, calculations, and rote database tasks. Great, sign me up! But AI can be just as useful to anyone with any motive: a student who doesn’t want to write an essay; a hacker who needs extra horsepower; an entity hell-bent on digital warfare—anybody.
When that panic finally emerged, it took some expected forms: “Oh my God, the computers will become self-aware! This is Skynet but for realsies!” But the far-fetched scenarios aren’t what experts fear; in fact, most tech experts like technology. They believe in it and see a world where AI makes things better.
The actual skills AI has right now—not in some imagined future—are what could cause a worrisome sea change, especially for writers, actors, and artists . . . like me. AI can tweet like me, write a book in my style, mirror my opinions, and, with several hundred hours of my voice available online, even clone a voice that sounds just like me.
All right, mild hives are breaking out at that thought. This stage of panic feels real because it has turned into a kitchen-table issue. How will I bring home the bacon if artificial intelligence can mimic my output? Worse, AI doesn’t even need bacon.
Instead of outright panic, I’d like to focus on “reasonable alarm,” brought to me by experts who don’t need to watch a YouTube video to understand the technology. Tom Graham, CEO of the AI tech firm Metaphysic, fits the bill. His company makes extremely believable deepfakes. (Remember those Tom Cruise deepfakes? They were made by the people who later founded Metaphysic.) Graham has a sobering opinion of AI: It can, in fact, take over some jobs that humans do. And his company, of course, hopes to sell exactly those services.
But he also believes it can be done responsibly and ethically. Metaphysic built a tool that lets actors scan themselves, store their AI personas, and copyright them. The process is clunky, but it’s a step in the right direction. We need to start thinking about worker protections, and people like Graham are out there offering solutions.
Others, like Tristan Harris and Aza Raskin, who once sounded the alarm about social media, have been doing the same about the threats of AI. They’re not Luddites; they’re actually big fans of technology’s potential, seeing AI as a possible way to find a cure for cancer, open up entire fields of animal linguistics, and much, much more. But they’re also fans of establishing AI guardrails, regulations, and laws.
In 2023, Harris, a former Google design ethicist, and others helped the White House develop a comprehensive Executive Order on AI, one that, among other things, calls for standards for watermarking AI-generated content so users can tell whether or not they’re looking at a deepfake. It’s fun to see a fake Tom Cruise, but it’s scary to see a fake Joe Biden.
We’ve been duped by social media before: we were so taken with everyone’s vacation photos that we didn’t see the technology’s negative psychological consequences. Harris, Raskin, and other “reasonable alarmists” don’t want us to make that mistake again with AI. They’re working with governments to create a regulatory environment that lets us roll out AI’s capabilities slowly and thoughtfully, without being overwhelmed by them.
Maybe this is a little like Y2K. Everyone freaked out about that, and then nothing happened, precisely because experts took the nation’s freakout seriously and turned it into action.