Since March or so, a running joke — or perhaps wry observation is the better term — has been that each month seems to last about a decade. In early spring, the Covid-19 pandemic came to utterly dominate the news cycle, only to be pushed aside by unrest in America’s cities as throngs took to the streets to express outrage over police killings of Black citizens. One could be forgiven for thinking that we might not make it to August. And so it was somewhat surreal to be reading Toby Ord’s book, “The Precipice: Existential Risk and the Future of Humanity,” during this tumultuous time — because Ord’s concern is whether our civilization can endure for millions, or perhaps billions, of years.
Ord’s premise is straightforward: He believes that we are living in a unique moment in human history, and that the decisions we make in the coming decades will determine which of two fates awaits humankind. In one scenario, our numbers diminish, our buildings crumble to dust, and eventually the Earth forgets that we were ever here. In the other scenario, we continue to flourish, safeguarding what we have created against both natural and human-made risks so that we last as long as the planet itself; perhaps our distant future will see us populating the galaxy.
While the perils of the present day are not to be dismissed, Ord, a philosopher at the University of Oxford, has his sights clearly set on our long-term prospects, and draws on voluminous research and the latest science to weigh the risks that humanity faces — and urges us to confront these risks head-on. Early in the book, he states that “safeguarding humanity’s future is the defining challenge of our time.” The 20th century gave us the capacity to destroy ourselves — but for Ord, such a catastrophe would mean much more than just the demise of the nearly eight billion humans currently on the planet. The more profound tragedy is that it would destroy “our entire future and everything we could become.”
And what, exactly, might bring about our downfall? There are natural risks, of course: A comet or asteroid might collide with the Earth; the clouds of ash unleashed by such an impact could circle the planet for years, leading to mass extinctions. A “supervolcano” (like the one underneath Yellowstone National Park) could erupt, darkening our skies and triggering a “volcanic winter” that could be as bad as, or worse than, the aftermath of an asteroid impact. A nearby star could explode in a supernova, unleashing a deadly burst of radiation and triggering deadly chemical changes in our atmosphere.
As worrying as these catastrophes are, Ord believes that human-caused, or anthropogenic, risks are the greater concern. There is the risk of nuclear war, of course (metaphorical as the notion of a precipice may be, he says we’ve been sitting on it since the day of the Trinity nuclear test on July 16, 1945). Climate change is, not surprisingly, another paramount concern, as are pandemics. (Here it should be pointed out that although pathogens would exist even in a world without human beings, Ord asserts that human activity has exacerbated the frequency and the severity of pandemics.)
Perhaps the greatest catastrophe to strike humankind thus far was the Black Death of the 14th century; it eventually killed from one-quarter to over one-half of the population of Europe, and possibly as much as 20 percent of the entire population of the world. There is no mention of Covid-19; Ord presumably finalized his manuscript shortly before the severity of the outbreak was recognized. But, interestingly, he doesn’t think that even a severe pandemic would spell the end of civilization; there would be enough survivors to eventually rebuild whatever was lost.
Ord’s sober analysis of such horrifying events raises many questions. For example, how many people need to die before there is no longer any hope of rebuilding? Even if some catastrophe killed off 50 percent of the population, it would likely not spell the end. Here he cites his mentor, the philosopher Derek Parfit, who asked his readers to consider the difference between a disaster that swept away 99 percent of the population, and one that killed off 100 percent. Either scenario implies the deaths of billions of people — but the second scenario is unfathomably worse, for it kills off our future as well. This difference, Ord writes, is what makes existential catastrophes unique, and “makes reducing the risk of existential catastrophe uniquely important.”
And so we come to what Ord sees as the greatest risk of all — that of unrestrained artificial intelligence. In particular, he is concerned with AI systems that may strive toward goals that are very different from our own: what he terms “unaligned AI.” He estimates the risk that unaligned AI will end civilization within a century at one in 10. Adding up the various risks, he gives us a five in six chance (83 percent) of making it through the next hundred years.
Ord is hardly the first to explore this territory. In his 2003 book “Our Final Hour,” astronomer Martin Rees similarly argued that we are at a unique moment: If we get through the next hundred years or so, we may very well succeed in populating other planets, greatly reducing the risk of annihilation. Slightly more pessimistic than Ord, Rees pegged our chances of making it to the end of the current century at 50-50.
And in his 2016 book “Homo Deus,” historian Yuval Noah Harari presented a slightly different argument: His concern is that humans, together with our technology, are evolving at such breakneck speed that it will soon be unclear what we mean by “human.”
Ord’s book appears to be more thoroughly researched than either of these: His appendices and endnotes run for some 170 pages, roughly two-thirds the length of the main text, drawing on scholarly books, journal articles from the physical, environmental, and social sciences, websites, and media analysis. And he is more sharply focused on the issue of existential risk, with an eye on the kinds of disasters that could end the human adventure once and for all.
In spite of the book’s heft, however, there are a few places where important questions are given only brief treatment. Can the nations of the world work together to ward off the various catastrophes that he has cataloged? This issue could be a book in itself; here, it receives a few pages of discussion. And how realistic are our prospects for colonizing the solar system, or beyond? And if we did succeed in such a staggeringly ambitious undertaking, how much security would it bring? This, too, could be a Ph.D. thesis and then some; here, it gets about a page’s worth of analysis.
Since Ord is a philosopher, it is perhaps not surprising that some of the deepest questions he raises are philosophical ones. What do we owe to our descendants? This is a remarkably difficult problem. What did the builders of the pyramids owe to us? That question defies the imagination; those people, dead for 50 centuries, could not have conceived of the world we now live in. Our lives would have been beyond their comprehension, and likely beyond their caring. What do we owe to those who will live 50 centuries from now?
No amount of introspection seems to make the answer any clearer. We watch the evening news and scroll through our social media feeds, and struggle to think of events that lie more than six months in the future, let alone six years. But this book argues, rather persuasively, that we should at least occasionally lift our heads up from our screens and consider our more distant future — and work toward ensuring that it arrives.