The warning consisted of a single sentence: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”
The pithy statement, published in May by the nonprofit Center for AI Safety, was signed by a number of influential people, including a sitting member of Congress, a former state supreme court justice, and an array of technology industry executives. Among the signatories were many of the very individuals who develop and deploy artificial intelligence today; hundreds of cosigners — from academia, industry, and civil society — identified themselves as “AI Scientists.”
Should we be concerned that the people who design and deploy AI are now sounding the alarm of existential risk, like a score of modern-day Oppenheimers? Yes — but not for the reasons the signatories imagine.
As a law professor who specializes in AI, I know and respect many of the people who signed the statement. I consider some to be mentors and friends. I think most of them are genuinely concerned that AI poses a risk of extinction on a level with pandemics and nuclear war. But, almost certainly, the warning statement is motivated by more than mere technical concerns — there are deeper social, societal, and (yes) market forces at play. It’s not hard to imagine how a public fixation on the risk of extinction from AI would benefit industry insiders while harming contemporary society.
How do the signers think this extinction would happen? Based on prior public remarks, it’s clear that some imagine a scenario wherein AI gains consciousness and intentionally eradicates humankind. Others envision a slightly more plausible path to catastrophe, wherein we grant AI vast control over human infrastructures, defense, and markets, and then a series of black swan events destroys civilization.
The risk of these developments — be they Skynet or Lemony Snicket — is low. There is no obvious path between today’s machine learning models — which mimic human creativity by predicting the next word, sound, or pixel — and an AI that can form a hostile intent or circumvent our every effort to contain it.
Regardless, it is fair to ask why Dr. Frankenstein is holding the pitchfork. Why is it that the people building, deploying, and profiting from AI are the ones leading the call to focus public attention on its existential risk? Well, I can see at least two possible reasons.
The first is that it requires far less sacrifice on their part to call attention to a hypothetical threat than to address the more immediate harms and costs that AI is already imposing on society. Today’s AI is plagued by error and replete with bias. It makes up facts and reproduces discriminatory heuristics. It empowers both government and consumer surveillance. AI is displacing labor and exacerbating income and wealth inequality. It also poses an escalating threat to the environment, consuming an enormous and growing amount of energy and fueling a race to extract materials from a beleaguered Earth.
These societal costs aren’t easily absorbed. Mitigating them requires a significant commitment of personnel and other resources, which doesn’t make shareholders happy — and which is why the market recently rewarded tech companies for laying off many members of their privacy, security, or ethics teams.
How much easier would life be for AI companies if the public instead fixated on speculative theories about far-off threats that may or may not actually bear out? What would action to “mitigate the risk of extinction” even look like? I submit that it would consist of vague whitepapers, a series of workshops led by speculative philosophers, and donations to computer science labs that are willing to speak the language of longtermism. This would be a pittance compared with the effort required to reverse what AI is already doing to displace labor, exacerbate inequality, and accelerate environmental degradation.
A second reason the AI community might be motivated to cast the technology as posing an existential risk could be, ironically, to reinforce the idea that AI has enormous potential. Convincing the public that AI is so powerful that it could end human existence would be a pretty effective way for AI scientists to make the case that what they are working on is important. Doomsaying is great marketing. The long-term fear may be that AI will threaten humanity, but the near-term fear, for anyone who doesn’t incorporate AI into their business, agency, or classroom, is that they will be left behind. The same goes for national policy: If AI poses existential risks, U.S. policymakers might say, we had better not let China beat us to it through underinvestment or overregulation. (It is telling that Sam Altman — the CEO of OpenAI and a signatory of the Center for AI Safety statement — warned the E.U. that his company would pull out of Europe if regulations became too burdensome.)
Some people might ask: Must it be one or the other? Why can’t we attend to both the immediate and long-term concerns of AI? In theory, we can. In practice, money and attention are finite, and elevating speculative future risks over concrete immediate harms comes with significant opportunity costs. The Center for AI Safety statement itself seemed to acknowledge this reality with its use of the word “priority.”
To be sure, the generative AI behind this latest wave of chatbots and image or voice generators can do amazing things, leveraging a gift for classification and prediction to create original content across a range of domains. But we don’t have to imagine fantastical scenarios to appreciate the near-term threats it poses.
Addressing AI’s harms while harnessing its capacity requires all hands on deck. We must work at every level to build meaningful guardrails, protect the vulnerable, and ensure that the technology’s costs and benefits fall proportionately across society. Prioritizing a speculative, dystopic risk of annihilation distracts from these goals. Pretending that AI poses an existential threat only elevates the status of the people closest to AI and suggests easier rules they’d love to play by. This is not a mistake society can afford to make.
Ryan Calo is the Lane Powell and D. Wayne Gittinger Professor at the University of Washington School of Law and a Professor in the UW Information School and (by courtesy) Paul G. Allen School of Computer Science & Engineering.
Archived comments are below.
Maybe they want you to believe it because they believe it. Because they think it is true. Because they believe the world faces a collective action problem and must coordinate against developing these technologies too far.
Not everything is about advertising a product or service.
Consider that by not developing AI, we could also mitigate or outright avoid all the other harms of AI you mention, the ones you hint they are trying to distract us from.
The strange thing about all these warnings is that none of them are specific about what the AI will do to endanger our lives or our businesses. In other words, some people are so scared that, before they can even decide where the danger lies, they want to make us feel that we need to stop the progress of AI.
The intelligence that AI can access is ONLY the quantitative intelligence that we possess, the intelligence that must center around an object. But the intelligence that we use to wonder, meditative or contemplative intelligence, cannot be accessed by AI because it has no object. The Buddhists call it “objectless thinking.” It is pure awareness. This is the thinking we use to solve very abstract problems, such as Schrödinger’s “Is the cat dead or alive?” or Gödel’s undecidability theorem. This kind of thinking or intelligence cannot be tapped by artificiality, so it is not threatened. But few people understand this distinction between “l’esprit de géométrie et l’esprit de finesse” (Pascal’s spirit of geometry and spirit of finesse), between quantitative thinking and meditative thinking (Heidegger).
Thank you for this, it helps me understand the profound unease I’m feeling. People seem to know so much, but think so little. But am I being elitist here? Hasn’t it always been this way? I guess I’m just disappointed that we have access to so much information and such great tools and in using them we seem to be becoming complicit in the oppression and immiseration of others – and ourselves.