Extrapolating to Infinity (and Beyond)
A rave review of Adam Becker's More Everything Forever
Someday I’m going to write a book full of case studies of people unrealistically extrapolating exponential trends to infinity in order to support their political arguments. But I’ve decided I should wait until I’ve established more of a name for myself as a nonfiction author and built up enough goodwill with my readers that they’ll tolerate a whole book of my pedantic complaining about other people’s math.
Instead, everyone should read Adam Becker’s More Everything Forever, a fantastic analysis of the Silicon Valley longtermist ideology driving much of the conversation around AI and space colonization. Becker’s writing mirrors my own weary frustration with the idea of unlimited exponential growth as a driver of policy, but instead of just spending all his time explaining why everyone is Doing Math Wrong (which will be the title of my book-length rant, forthcoming in 2035 or whatever), he explores tech billionaires’ vision for humanity’s future and describes both the logical flaws and the potentially harmful consequences in this vision.
More Everything Forever specifically examines the ideology known as “longtermism,” which overlaps with related ideologies currently popular in Silicon Valley and space settlement advocacy, including transhumanism, cosmism, extropianism, and effective altruism. The general argument of longtermism begins with the idea that we should morally weigh the interests of all future humans in decisions we make today. Taking our descendants’ needs into account is not an uncommon concept in ethics, but longtermists take this to a utilitarian extreme, arguing that since the group of all “future humans” may eventually vastly outnumber the humans alive on Earth today, in the interest of optimizing our ethical decisions, we should do everything we can to ensure that these future humans eventually come into existence, regardless of the cost for living humans now. Goals for longtermists include (a) preventing existential risks, even unlikely ones, that could completely wipe out humanity and prevent future humans from being born, (b) developing technological solutions for aging and death, including simulating human minds in computers, and (c) advancing human space settlement, to reduce our existential risk and provide more room and resources for all those future humans. The longtermist dream is similar to that of many space settlement advocates: that human civilization will eventually spread beyond our planet, making us uncountable, unkillable, and never-ending.
The Problem with Extrapolating Exponential Growth
As we all discussed at length in 2020, humans have trouble intuitively grasping exponential growth. And I don’t deny that underestimating unchecked exponential (or even linear!) growth can get us into bad situations very quickly. But no trend lasts forever in a finite universe: as Becker notes in an interview with Ars Technica, “The one thing we know that’s absolutely always true about exponential growth is that it ends.” This applies to the exponential growth of “good” things (GDP, improvements in technology, human population levels) as well as “bad” things (pandemics, also human population levels).
Longtermist space settlement advocates acknowledge the problem with expecting eternal exponential growth in a system with finite resources (like, say, our planet). The solution they propose is for humanity to escape Earth and access what they consider to be the infinite resources of space. But the reachable parts of space are not infinite, nor are the energy or useful resources we can access there.
As Becker describes, tech billionaires see two alternatives to the never-ending exponential growth of human technology, wealth, and civilization. One of these scenarios is a human society that develops a sustainable relationship with its environment. In other words, what longtermists would call “stagnation.” (I explored the fear of stagnation as an argument for space settlement in my own book.) Becker quotes Jeff Bezos arguing that the “stasis world” we’d be stuck with if we stay on Earth “would be population control combined with energy rationing,” as well as Ben Reinhardt warning that “Sustainability means perpetual scarcity—in our ability to explore, build, and create. It means a fixed pie, and the conflicts that inevitably erupt from it.” This horrified rejection of the very concept of sustainability is often used to sell techno-utopian dreams like space settlement, which in turn attempt to absolve us all of the responsibility of having to do the work of implementing more sustainable practices today. (Which I’ve always seen as particularly ironic for space settlement advocates, as no one will depend on circular, sustainable, artificial ecosystems and economies more than space settlers.)
The other alternative to eternal exponential growth proposed by longtermists is the Singularity, named after one of the few examples of infinity in real-world physics: a black hole. The Singularity describes the hypothetical future point in time when artificial general intelligence learns how to build smarter versions of itself, leading to intelligence that increases exponentially, faster than we can control. Beyond this point, the argument goes, we cannot predict what the superintelligence will do, or what human society will look like. But that doesn’t stop people from trying! As Becker notes, “There’s little scientific basis for the idea of a Singularity and all the attendant miracles it will supposedly perform.” Even more bluntly, he calls belief in the Singularity “a religion predicated on growth,” a perspective that parallels MJ Rubenstein’s description of the corporate space race as a religion in her book Astrotopia.
We can’t count on an unprecedented superintelligence to save us from the harmful consequences of overshooting our environmental resource use. Whether part of our species ends up living permanently in space or not, humans must figure out how to build communities that meet our needs without irrevocably destroying our environments. Sure, that’s hard work— both technologically and in terms of the messy social and ethical side of things (see below)— but that’s where we should be directing our optimism about the power of human intellect, not by trying to develop technology to outrun our problems forever by spreading throughout space.
Humanities Denial in Space Settlement Advocacy
Another trend that Becker identifies in this Silicon Valley ideology is what he calls “humanities denial”: “a systematic and sometimes willful ignorance of the arts and humanities.” My own journey into the field of space ethics began during conversations I had with space industry entrepreneurs in which they dismissed my concerns about things like labor rights or environmental protection. Their argument was that these kinds of questions were less urgent than the technological challenges of space, and could be put off until later. But I recognized that they didn’t have the answers any more than I did, because like me, their experience and training were all in science and engineering, not in relevant fields like history or sociology.
Becker also notes the relationship between humanities denial and the better-known “‘engineer’s disease,’ the belief that expertise in one field (usually in STEM) makes you an expert on everything else too.” As a physicist, I’ve certainly experienced this from the inside: as students, we’re explicitly trained to figure out how to solve math and physics problems “from first principles,” leaving us with the impression that, given enough time, we could derive the solution for any problem from the information we already have. This, combined with the fact that most space scientists can go their whole careers without ever encountering an Institutional Review Board or an Environmental Impact Assessment, means that most people working in space fields are simply not in the habit of prioritizing the concerns of humans or the environment in their work— or they have an unconscious, unexamined confidence that if an ethical issue crops up, they can just figure it out from first principles.
This is how we end up having conversations about humanity’s future in space without spending any time considering the effects of our plans on the lives and happiness of individual humans. But as Becker reminds us while discussing Star Trek’s vision of the future: “The stars are just the setting. They’re incidental. The people are what matter.”
Other News
If you’re reading this the day it comes out, I’ll be participating in a virtual panel today at 5pm EDT with Cambridge Forum. You can register for the webinar here. If you’re reading this later, I expect that the recorded event will be made available on their website at some point.
Thanks for reading along this year! I’ll likely be taking a bit of a break from regular newsletters in early 2026 to work on a book proposal (not future bestseller You’re Doing Math Wrong, sorry to disappoint). In the meantime, go read More Everything Forever, it’s great!