Illustration of two people talking face to face.

Opinion: Transhumanism Should Focus on Inequality, Not Living Forever

Instead of trying to extend the lives of a privileged few, we should invest in improving the lives of all people today.

Director Jesse Armstrong’s recent HBO film “Mountainhead” satirizes the sophomoric philosophies of today’s tech billionaire oligarchs. Steve Carell’s character — who has been likened to Palantir cofounder Peter Thiel — exclaims, “I take Kant very seriously!” and spends much of the movie evaluating how he and his ultrawealthy friends can reshape the global order with the digital tools they’ve collectively invented.

While the film is hyperbolic at points, the sentiments it portrays about the characters’ beliefs and expectations are not. Armstrong doesn’t take the time to unpack the pseudo-philosophical thinking underlying some of the tech billionaires’ more outrageous quotes, but he does allude to what kind of future they want.

This was a central discussion at a conference on aging I recently attended at King’s College London, where transhumanism movements across the world came up repeatedly. These movements — which are backed by people including Thiel, Elon Musk, and influential AI researcher Eliezer Yudkowsky — aim to use science and technology to help us overcome the boundaries of our biology: for example, to stop processes like aging (or cure the “disease of aging,” as they see it) or to enhance cognition. Yet the academics, advocates, and tech oligarchs promoting transhumanism miss a central point. Overcoming our biology isn’t about immortality or the digitization of the human, but about putting a stop to the self-interest that has led to the unequal societies, including widespread health inequalities, we see in countries like the United States today.

Understanding our evolutionary heritage helps to explain why. Although humans are broadly a hyper-cooperative species, the most successful people over our evolutionary history have been those who promote their own self-interest while masquerading as altruistic. In game theory scenarios like the prisoner’s dilemma, for example, getting your partner to keep cooperating while betraying them is the best outcome for you. I call this invisible rivalry: taking for yourself when you can, and pretending to act in others’ favor when necessary, is a more effective strategy than cooperating consistently. This is probably one reason psychopathic people tend to disproportionately occupy positions of power — appearing trustworthy is a better strategy than being trustworthy.
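For readers unfamiliar with the prisoner’s dilemma, the logic can be sketched in a few lines of Python. The payoff numbers below are the textbook illustrative values, not figures from this article; the point is only the ordering, under which defection always pays more than cooperation, and the single best outcome for you is a partner who cooperates while you defect:

```python
# Standard prisoner's dilemma payoff matrix (illustrative values).
# Each entry maps (my_move, partner_move) -> (my_payoff, partner_payoff),
# with the usual ordering: temptation (5) > reward (3) > punishment (1) > sucker (0).
PAYOFF = {
    ("cooperate", "cooperate"): (3, 3),  # reward for mutual cooperation
    ("cooperate", "defect"):    (0, 5),  # sucker's payoff vs. temptation
    ("defect",    "cooperate"): (5, 0),  # temptation vs. sucker's payoff
    ("defect",    "defect"):    (1, 1),  # punishment for mutual defection
}

def my_payoff(me: str, partner: str) -> int:
    """Return my payoff for a single round."""
    return PAYOFF[(me, partner)][0]

# Whatever the partner chooses, defecting pays strictly more for me...
for partner in ("cooperate", "defect"):
    assert my_payoff("defect", partner) > my_payoff("cooperate", partner)

# ...and my single best outcome is their cooperation met by my defection.
best = max(my_payoff(a, b)
           for a in ("cooperate", "defect")
           for b in ("cooperate", "defect"))
assert my_payoff("defect", "cooperate") == best
```

Mutual cooperation (3 each) beats mutual defection (1 each), which is exactly why convincing others to cooperate while quietly defecting yourself is such a profitable strategy.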

This Machiavellian nature, which I think is too often ignored in the social sciences today, has a dark implication: To the extent that maliciously exploitative people take power, they will structure society in a way that benefits them and their progeny over time, and that is almost always at the expense of others around them. This is how hierarchies form within cultures, and from these structures comes the belief that some people are just inherently better than others. Together, this manifests in power dynamics in which “established hierarchies organise the distribution of power and privilege by identity,” as the authors of a recent comment in The Lancet on inequality and racism wrote.


Today, we are witnessing that same process, which has surely been repeated countless times over our evolutionary history. But unlike the oligarchs, despots, and plutocrats who took power in the past, the techno-oligarchs of today have their vast digital empires, ridiculed in “Mountainhead,” to support them. “The biggest technology platform companies are dominated to a singular extent by a small group of very powerful and extremely wealthy men who have played uniquely influential roles in structuring technological development in particular ways that align with their personal beliefs and who now wield unprecedented informational, sociotechnical, and political power,” Julie E. Cohen, a professor of law and technology at Georgetown University, recently wrote.

The way tech billionaires discuss and promote modern transhumanism movements is a symptom of the cancerous shape invisible rivalry takes. Those who back transhumanism talk, conveniently, about the disease of aging and the ways in which the research they back can extend lives or digitize the mind, leading to extended, if not limitless, longevity and dramatically boosted IQ.

These movements take different forms, but each ultimately — and unsurprisingly — benefits or aims to benefit the people who espouse them. One example is longtermism, or the philosophical position that future people, and even future digital people, matter as much as anyone does today. This line of thinking, detailed by the philosopher William MacAskill in his 2022 book “What We Owe the Future,” takes its fundamental insight from the late Derek Parfit, who asked whether harm to a child living 100 years in the future matters as much as harm to a child today.

The soundness of this equality is less important, however, than how proponents of longtermism use it to defend amassing wealth in the name of protecting future people. Rather than investing in social goods today and aiming to reduce the huge inequalities in, for example, health outcomes and nutrition worldwide, longtermists believe instead that we should direct efforts toward improving the lives — digital or otherwise — of those who aren’t here yet.

The thinking goes further, though. Philosophers like Nick Bostrom, who founded Oxford University’s now-closed Future of Humanity Institute, claim that mass deaths caused by climate-related catastrophes are not significant in the full view of humanity’s future. Rather than mitigate these largely preventable deaths, it’s better to invest in the future — and specifically, in the people who can make the future a better place for more people than are alive today. Figures like Musk, Thiel, and Facebook co-founder Dustin Moskovitz — all of whom back longtermism, ideologically or financially — are, according to this view, better positioned to improve the future than someone living in a slum in Mumbai. It’s no surprise that most of them, and others including Amazon founder Jeff Bezos, believe in and fund research related to longevity, and it’s further no surprise that people like Musk who back natalist philosophies also seem to spread their own genes as much as possible.

As a social scientist who studies trust and exploitation, I can’t imagine a greater irony than watching the stories of transhumanism and longtermism play out today. Our evolutionary heritage is one of darkness, exploitation, and selfishness — and that is what we need to change about ourselves, not our mortal status or IQs.

Overcoming our innate self-interest and reducing inequalities today is what transhumanism should instead entail. If we want to create a world where we’ve moved beyond our biological programming, it is essential to start instilling the view that we should treat each other well today, invest less in our personal longevity or reproductive success, and invest more in ensuring that others don’t needlessly die from preventable disease or climate catastrophes.

Even Julian Huxley, the 20th-century biologist and eugenicist who introduced the term transhumanism, wrote in 1957 about how it should improve social goods: “We must study the possibilities of creating a more favourable social environment, as we have already done in large measure with our physical environment.”


I believe this must involve widespread education to improve our collective ability to question why it is that people like techno-oligarchs have the power, influence, and philosophical views that they do. For example, school initiatives that teach media and statistical literacy and how to ask questions effectively are essential, as are broader programs to help people recognize falsehoods. As AI progresses in sophistication, each of us will need critical thinking more than ever to ask the questions these people don’t want us to ask — questions that, because of the veils provided by philosophers like Bostrom and MacAskill, can seem impenetrable. They are not.

Parfit, who did take Kant seriously, concludes his seminal book “On What Matters” by writing that “what now matters most is that we rich people give up some of our luxuries, ceasing to overheat the Earth’s atmosphere, and taking care of this planet in other ways, so that it continues to support intelligent life.”

People backing contemporary transhumanist movements and longtermism ignore Huxley’s and Parfit’s inconvenient exhortations. But if we want to overcome our dark heritage and create a future that puts people other than the techno-oligarchs first, the rest of us should listen closely.


Jonathan R. Goodman is a social scientist based at the Wellcome Sanger Institute and the University of Cambridge and author of the new book “Invisible Rivals: How We Evolved to Compete in a Cooperative World.”