An AI computer chip locked in a birdcage.

Opinion: Abstinence From AI Is Not the Answer

Refusing to use AI won’t protect society. Responsible resistance must include gaining knowledge about it.

During 2025’s New York Comic Con, Jim Lee — the president, publisher, and chief creative officer of DC Comics — spoke out against the use of artificial intelligence in the creation of the imprint’s comic books.

“DC Comics will not support AI-generated storytelling or artwork. Not now. Not ever — as long as Anne DePies and I are in charge,” Lee said, punctuating his remarks with a concise observation: “AI doesn’t dream. It doesn’t feel. It doesn’t make art — it aggregates it.”

To many, Lee’s comments are worthy of praise — a leader of a revered institution taking a firm position against AI. Stands like his arise from AI’s negative impacts — especially those of generative AI — on actors, writers, and others in creative industries. Analogous concerns have arisen where health, justice, and education are concerned.

Many thoughtful voices in politics and culture are calling for controls on AI to protect the rights of those it impacts, be they artists, patients, renters, or defendants. Much of this has been captured in a modern movement labeled “AI Luddite.” Like the original Luddites of the 1800s, these AI Luddites are not so much anti-tech as pro-human. They demand that technology reinforce human creativity and the dignity of work, rather than replacing skilled workers with an unskilled precariat, or replacing them altogether with machines. They want technology to be a choice made by those who use it and those it affects, rather than being forced on them.

We argue in favor of movements like these that embrace complexity in our approach to technology. In that vein, we focus on an underappreciated (and maybe ironic) necessity: Socially responsible resistance to AI must include gaining and democratizing knowledge about it — especially among communities likely to be negatively affected by it. Criticisms of AI must include guidance for responsible engagement with it, not just exhortations to avoid it. Like the Luddites from the past, we should demand ownership and control, not abstinence.

The AI Luddites are emblematic of a growing anti-tech movement driven by the many problems caused by artificial intelligence, by its indiscriminate use, and by the dubious ethics of the techlords who are building it. In the extreme wing of this movement, some suggest that the way to prevent AI’s harms is simply to refuse to use it. Even if this fundamentalist take is driven by good intentions, it puts vulnerable people at risk, and undermines the best weapon against AI’s dysregulated conquests: a shared understanding of its powers, limitations, and risks.

We must not make the same mistakes as abstinence movements of the past. For example, advocates of abstinence-only sex education tell young people to avoid sex completely until marriage. A body of literature has shown that this approach is ineffective, or even counterproductive, at reducing rates of teenage pregnancy, HIV/AIDS, and other sexually transmitted diseases. It is driven by ideology, not by evidence or effectiveness.

For a more recent example, consider the ZeroCovid movement that emerged during the pandemic within the broader public health community. Most famously enacted in China, this movement took a hard-edged approach to eliminating the virus — with strict lockdowns, limits on travel, and behavioral mandates taking precedence over individual liberties. Many ZeroCovid advocates seemed to believe that any argument for mitigation was complicit in harming the vulnerable and in aiding and abetting their anti-vaccine, anti-science enemies.

Time has not been kind to this perspective. Evidence has mounted that lockdowns had negative consequences on education, mental health, and inequality. We remain strong advocates of coordinated, centralized public health policy. But ZeroCovid’s extremism caused a strong backlash in China and heightened polarization in America, arguably undermining the public health goals it claimed to support.

Despite the differences between sex education, a virus, and a slew of software products, these analogies are helpful. For example, ZeroCovid failed to recognize that people’s willingness to accept some risk to preserve normal life doesn’t make them anti-science — it makes them human. It was naive because you cannot prevent people from wanting to live, work, love, and interact with others. In the same way, Zero AI approaches are naive because you cannot prevent billions of people from using a technology that is widely available and that makes their lives easier or more entertaining.

As of today, billions of smartphones are in use, and projections suggest that many of them will be equipped with AI-capable hardware in the coming years. AI tools will be sitting on our shoulders, in our eyes and ears, and on our desktops. For better or worse, AI systems already help run businesses and organizations around the world. And large language models, the technology that powers AI chatbots, don't just help with technical tasks like writing code and analyzing data. They also organize wedding lists, summarize meetings, and whip up apps and web pages for those who are not computer literate. AI cannot yet compete with the best artists or programmers, but it brings wide swaths of technological and artistic creation within the reach of people who would otherwise lack the skills or the time to pursue them.

Like many new technologies, AI can either amplify inequality or ameliorate it, depending on how it is deployed. And fears about the likelihood of it amplifying stratification and segregation are valid. But advocating for abstinence will deny communities access to the tools the privileged are already using to help them write college essays, do their homework problems, and learn a second language. Puritanical stances leave people ill-equipped to use this technology responsibly and unable to benefit from it.

Our perspective can be criticized on multiple grounds. Maybe we are engaging in tedious bothsidesism or offering a vague “harm reduction” approach to a problem that requires more drastic intervention. Or perhaps we are wimps, conceding to the status quo and failing to dream of an AI-free world.

But nothing requires less effort than demanding that the world simply rid itself of the things that trouble us. Ignoring AI will not make it go away; it will simply create a population unable to think critically about the technology’s strengths and weaknesses, making people sitting ducks for bullshit and manipulation.

The good news is that solutions aren’t hard to imagine. Despite the absolutist connotations of its name, the AI Luddite movement incorporates much of the nuance we’ve outlined for responsible engagement. We now have the ability to probe this new technology, test it for accuracy and fairness, and combine social concern with technical expertise. As in public health, we need to build an expert workforce with detailed knowledge of how AIs operate, including experts in the public sector who are not beholden to corporate profits. These public experts can demystify AI and build a broad democratic understanding of it. They can work with communities, advocates, and policymakers to devise strategies and regulations that will let us enjoy AI’s benefits while mitigating its harms — to demand that it be accountable and transparent to every stakeholder, and help us decide when, where, how, and if to use it.

In accordance with this, many institutions and scholars are drafting strategies to preserve the quality of education in the face of AI. These policies deal squarely with the reality that AI is already a significant part of how people produce and consume information. We dread a future where students pretend to write essays and teachers pretend to read them; we agree that writing is thinking, and outsourcing either one is perilous. But in theory, AI can help close gaps in language, vocabulary, and access to knowledge that currently prevent many from enjoying the fruits of education. And part of this education must be empowering humans to look under the hood and behind the curtain, to understand how AI works and what it can and can’t do.

The past has taught us the dangers of fundamentalism. Abstinence-only sex education was moral and religious posturing masquerading as public health policy, and absolutist takes on Covid-19 were driven by magical thinking rather than science. We must not make the same mistake with AI. Choices we make now will determine whether AI will be a tool for the powerful, dazzling the rest of us with its hype and subjecting us to its harms, or whether it will be a tool — imperfect but useful — in everyone’s hands.


Cristopher Moore is a professor at the Santa Fe Institute and the coauthor of “The Nature of Computation” (Oxford University Press, 2011). He studies connections between computer science and physics, and the uses and misuses of AI in the criminal justice system.

C. Brandon Ogbunu is an associate professor in the Department of Ecology and Evolutionary Biology at Yale University, a professor at the Santa Fe Institute, and the author of Undark's Selective Pressure column.