The first test of a hydrogen bomb, Operation Ivy, was conducted at Enewetak Atoll in the Marshall Islands in 1952.

Opinion: Ambivalence Over AI: We Are All Prometheus Now

Society must face its moral obligation to understand the consequences of AI and develop clear guidelines for its use.

Revolts against science are often deeply irrational, as we witnessed during the Covid-19 pandemic, with political polarization around lifesaving vaccines and critical public health measures. But public distrust of science has too often been fueled by its manipulation at the hands of corporate interests, including big tobacco and big oil, as well as by the dangers associated with its use in war.

The closing moments of Christopher Nolan’s Academy Award-nominated film “Oppenheimer” — based on the biography “American Prometheus” — depict a fictional scene in which the physicist is speaking with Albert Einstein by a pond at the Institute for Advanced Study in Princeton, New Jersey. Oppenheimer reminds Einstein of a moment several years earlier, during the Second World War, when he had asked Einstein whether there was any chance that nuclear fission might set off a chain reaction that would destroy the world. Einstein had refused to offer an opinion, and Oppenheimer continued his work to develop the bomb.

Looking wistfully away from Einstein, he says that perhaps it did — that the bomb did set off an uncontrollable chain reaction that would, in the end, destroy the world. These final words give the lie to his hope that such a terrible weapon would stop all wars, an admission of his failure to fully anticipate the political reality of nuclear power.

Oppenheimer’s ambivalence is in a larger sense the story of modern-day science. Quantum physics might have remained an obscure field of research were it not for the recognition that it had powerful applications, in areas ranging from medical technologies to radar. When nuclear fission was discovered in 1939, it promised not just new forms of energy, but new ways to harness this energy for weapons of mass destruction. The U.S. recognized that the next war would be won by the nation that managed to use science and technology most effectively.

The fear of nuclear obliteration, however, cast a dark shadow over the ebullience of postwar optimism and prosperity. There had been a revolt against science in the years after the First World War, and serious criticisms returned with a vengeance during the 1960s, when the postwar generation came of age and reacted viscerally against what Dwight Eisenhower dubbed the “military-industrial complex.”

Both revolts were driven by unease with science and the pervasive sense of a loss of moral values. Today, as we confront escalating crises around climate change and grow concerned about rapid progress in areas such as gene editing and artificial intelligence, humanity is once again on high alert. Oppenheimer’s ambivalence is firmly lodged in our contemporary subconscious.

Now that it is possible for nuclear weapons to be detonated by lethal autonomous systems powered by AI, a danger the computer scientist Stuart Russell has discussed, our anxiety about the dangers of science and technology seems more palpable and pervasive. More and more AI researchers are warning of imminent doom, as pioneers in the field such as Geoffrey Hinton and Yoshua Bengio give voice to their growing concerns about the extinction risk posed by advances in AI.

Other sensible figures in AI, such as the computer scientist Yann LeCun and the entrepreneur Reid Hoffman, are far less concerned. Demis Hassabis led Google’s DeepMind to show how transformational AI will be in areas of biological research — determining 3D protein structures, for example. And yet leaders who espouse some version of techno-optimism have a worrisome tendency to express political views that range from libertarian distrust of government to a new doctrine labeled “Effective Accelerationism.” (The latter calls for a drastic intensification of capitalist growth and technological change, and is a countermovement to “Effective Altruism,” which instead focuses on finding the best ways to help others.) Accelerationism has recently been popularized by the venture capitalist Marc Andreessen, who is unreservedly bullish about (and a funder of) all things technological, but it has also become associated with far-right political views, as in the writings of the philosopher Nick Land.

To be sure, the future is too often held captive by extremists, who are either riddled with fear of — or overly excited by — ideas of apocalypse and radical change. The future also, however, reminds us of our responsibilities to new generations inheriting the world we have made, and of the need to take the kind of long view that Oppenheimer, however flawed his predictive powers, insisted upon.

It is surely possible to raise serious concerns about both short- and longer-term futures and to insist on the moral obligation of scientists, engineers, and others to work deliberately to shape a better future. We need not be caught in a doom loop when we consider the potentially devastating consequences of our technological prowess, even as we use our new powers to provide better health care, advance science, create economic value, and discover new forms of energy, materials, and knowledge. And, yes, this knowledge should also be used to better understand, and then to ameliorate, the negative consequences of technology.

We also need, however, to fashion new collective means and principles to discover better ways to take control of our future and assert the priorities of human values — however contested those may be. This will require neutral bodies that can bring together scientists and social scientists, philosophers, business leaders, government regulators, community leaders, and other civil society actors to forge common goals and clear guidelines for the use of these technologies. As Oppenheimer argued, these deliberations must be genuinely international rather than dictated by geopolitical divisions. Last year, more than 1,000 technology experts called for a pause in the race to develop AI. We face perils, and we will make mistakes, but we know that in matters of science and technology, no call for a pause — let alone a reversal — stands a chance.

The god who suffered for giving fire and its destructive power to humans is no longer relegated to Greek myth. We are all Prometheus now.


Nicholas B. Dirks is president and CEO of The New York Academy of Sciences. He is the author of “City of Intellect: The Uses and Abuses of the University.”