When Artificial Intelligence Gets Too Clever by Half

Some experts say ‘superintelligence’ could be the death of humanity. Others say such fears are a distraction. But this isn’t an ‘either/or’ debate.

Picture a crew of engineers building a dam. There’s an anthill in the way, but the engineers don’t care or even notice; they flood the area anyway, and too bad for the ants.

Now replace the ants with humans, happily going about their own business, and the engineers with a race of superintelligent computers that happen to have other priorities. Just as we now have power to dictate the fate of less intelligent beings, so might such computers someday exert life-and-death power over us.

That’s the analogy the celebrated physicist Stephen Hawking used in 2015 to describe the mounting perils he sees in the current explosion of artificial intelligence. And lately the alarms have been sounding louder than ever. Allan Dafoe of Yale and Stuart Russell of Berkeley wrote an essay in MIT Technology Review titled “Yes, We Are Worried About the Existential Risk of Artificial Intelligence.” Tech leaders including Bill Gates and Elon Musk have issued similar warnings online.

Should we be worried?

Perhaps the most influential case that we should be was made by the Oxford philosopher Nick Bostrom, whose 2014 book, “Superintelligence: Paths, Dangers, Strategies,” was a New York Times best seller. The book catapulted the term “superintelligence” into popular consciousness and bestowed authority on an idea many had viewed as science fiction.

Bostrom defined superintelligence as “any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest,” with the hypothetical power to vastly outmaneuver us, just like Hawking’s engineers.

And it could have very good reasons for doing so. In the title of his eighth chapter, Bostrom asks, “Is the default outcome doom?” and suggests that the unnerving answer might be yes. He points to a number of goals that superintelligent machines might adopt, including resource acquisition, self-preservation, and cognitive improvement, with potentially disastrous consequences for us and the planet.

Bostrom illustrates his point with a colorful thought experiment. Suppose we develop an AI tasked with building as many paper clips as possible. This “paper clip maximizer” might simply convert everything, humanity included, into paper clips. Ousting humans would also facilitate self-preservation, eliminating our unfortunate knack for switching off machines. There’s also the possibility of an “intelligence explosion,” where even a modestly capable general AI might undergo a rapid period of self-improvement in order to better achieve its goals, swiftly bypassing humanity in the process.

Many critics are skeptical of this line of argument, seeing a fundamental disconnect between the kinds of AI that might result in an intelligence explosion and the state of the field today. Contemporary AI, they note, is effective only at specific tasks, like driving and winning at “Jeopardy!”

Oren Etzioni, the CEO of the Allen Institute for Artificial Intelligence, writes that many researchers place superintelligence “beyond the foreseeable horizon,” and the philosopher Luciano Floridi argues in Aeon that we should “not lose sleep over the possible appearance of some ultraintelligence … we have no idea how we might begin to engineer it.” Roboticist Rodney Brooks sums up these critiques well, likening fears over superintelligence today to “seeing more efficient internal combustion engines appearing and jumping to the conclusion that warp drives are just around the corner.”

To these and other critics, superintelligence is not just a waste of time but, in Floridi’s words, “irresponsibly distracting,” diverting attention from more pressing problems. One such problem is inequality: AI software used to assess the risk of recidivism, for example, has been shown to exhibit racial bias, incorrectly flagging black defendants as future criminals at roughly twice the rate of white defendants. Women searching Google are less likely than men to be shown ads for high-paying jobs. Add to this a host of emerging issues, including driverless cars, autonomous weapons, and the automation of jobs, and it is clear there are many areas needing immediate attention.

To the Microsoft researcher Kate Crawford, the “hand-wringing” over superintelligence is symptomatic of AI’s “white guy problem,” an endemic lack of diversity in the field. Writing in The New York Times, she opines that while “the rise of an artificially intelligent apex predator” may be the biggest risk for the affluent white men who dominate public discourse on AI, “for those who already face marginalization or bias, the threats are here.”

But these arguments, however valid, do not go to the heart of what Bostrom and like-minded thinkers are worried about. Critics who emphasize the low probability of an intelligence explosion neglect a core component of Bostrom’s thesis. In the preface of “Superintelligence,” he writes that “it is no part of the argument in this book that we are on the threshold of a big breakthrough in artificial intelligence, or that we can predict with any precision when such a development might occur.” Instead, his argument hinges on the logical possibility of an intelligence explosion — something few deny — and the need to consider the problem in advance, given the consequences.

That superintelligence might distract us from addressing existing problems is a legitimate concern, but aside from an (admittedly successful) appeal to intuition, its proponents offer no actual evidence in support of this claim.

It’s more likely Bostrom and company have had the opposite impact, with the problems of contemporary AI benefiting from increased political, media, and public attention, as well as the accompanying injection of funds into the field. A case in point is the new Leverhulme Center for the Future of Intelligence. Based at the University of Cambridge, the center was founded with $13 million secured largely through the work of its sister organization, the Center for the Study of Existential Risk, known for its work on advanced AI risks.

This is not an “either/or” debate, nor do we need to neglect existing problems in order to pay attention to the risks of superintelligence. It is important not to allow concerns for short-term exigencies to overwhelm concern for the future (and vice versa) — something at which humanity has a very poor track record. There is room to consider both long-term and short-term consequences of AI, and given the enormous opportunities and risks it is imperative we do so.

Robert Hart is a researcher and writer on the politics of science and technology, with special interests in biotechnology, animal behavior, and artificial intelligence. He can be reached on Twitter @Rob_Hart17.

Top visual: Bettmann/Getty Images