Last month, I kicked off a new topic I’ll be exploring for the next few issues of this newsletter: the various questions of personhood that we’ll encounter as we increase human activities in space. Today, I’m going to dive into one of the most common personhood-related topics that I encounter when I talk about space settlement ethics: artificial intelligence (AI).
I noticed early on, when I started doing a lot of interviews and public talks for my book on space ethics, that I almost always received a question or two about how we’ll incorporate AI into space exploration and settlement. This is partly just a coincidence of timing: I was hitting the podcast circuit as ChatGPT was in the news, and people wanted to discuss AI in every context. But I suspect I would have fielded a few of these questions anyway, given how much the idea of AI and related ethical conundrums come up in science fiction, overlapping with stories about space travel. There’s Data from Star Trek: The Next Generation, HAL in 2001: A Space Odyssey, the lunar computer Mike in Heinlein’s The Moon is a Harsh Mistress, and more.
We’re already using AI to study space in real life. Astronomers use machine learning, a subfield of AI, to process and analyze the huge datasets that are produced by modern observatories. We also use AI in rovers and spacecraft that need to be able to operate semi-autonomously due to the communication lag between Earth and locations like Mars. These technologies aren’t replacing the need for human space researchers— sociologist Janet Vertesi argues that the use of AI in space science is better understood as the building of human-robot teams.
In general, I’m a skeptic regarding how close we actually are to being able to build true “AGI”: artificial general intelligence, a Data-like algorithm that could reason the way humans do, or at least accomplish the same kinds of intellectual tasks that humans can. (I highly recommend Erik J. Larson’s The Myth of Artificial Intelligence: Why Computers Can’t Think the Way We Do for more on AGI.) I also agree with AI ethicists who argue that we should prioritize focusing on the harm that is being caused today by people and companies using AI algorithms trained on biased datasets.
But as we continue to develop both robotics and AI technologies, we’ll certainly keep using both in space. So if AGI is achievable, humans living and working in space will need to think about the rights of these “team members”, and asking whether artificial minds should be considered people is one way to explore these ideas. As I mentioned last month, though, “personhood” can encompass a lot of different definitions in different fields.
Legal personhood, for example, focuses specifically on what rights and responsibilities an entity (like a robot or AI software) has within a legal system. Non-human entities like corporations, intergovernmental organizations, and even certain rivers are considered legal persons in various jurisdictions. Saudi Arabia granted citizenship to— and thus recognized the legal personhood of— a humanoid robot named Sophia in 2017. The move was generally dismissed as a publicity stunt, but I think the legal model of personhood is interesting because it describes both society’s obligations to the “person”— its rights within that society— and the person’s obligations to society. Both sides of this definition come up in science fiction stories about AGI: we need to ensure that we’re not exploiting the new artificial lifeforms we may one day create, but we also may need to protect society from dangerous or anti-social actions by these creations.
The latter concern ties into an aspect of personhood from philosophy: a person as a moral agent who has the ability to act based on some understanding of “right” and “wrong”. The question of whether an artificial being can be morally responsible for its actions— and at what point in development that responsibility moves from its human creators to the being itself— is an ongoing area of discussion. This debate is urgent and relevant today, long before we have human space settlements or AGI, as we increase the use of AI in warfare, policing, and our criminal justice systems.
And what about protecting the rights of an artificial intelligence within a society? A key question to ask, and one which is often asked regarding animal rights and intelligence, is whether the AI can suffer. Does it simply gather data about its environment in order to perform the appropriate actions, like AI-enhanced Mars rovers? Or does it actually experience the sensations of being in that environment? And if some of those sensations are unpleasant, don’t we have an obligation to avoid inflicting them on the AI— to avoid harming it in a way that one can’t harm a non-sentient computer? Adjacent to and overlapping with this concept of sentience is the idea of consciousness, or self-awareness. How can we measure when an AI becomes aware of its own existence, and develops preferences regarding the continuation of that existence? These questions have become especially tough as large language models (LLMs) like ChatGPT have gotten better and better at mimicking human-sounding language, without any conscious thought process behind the algorithm. Various conversational tests have been proposed for identifying sentience and consciousness in AI, but I suspect we’ll fool ourselves with ever-more-convincing LLMs over and over again long before a conscious AGI actually emerges.
As you can probably tell, I don’t consider AI personhood to be one of our more urgent ethical problems, in space or on Earth. If imagining the ethical implications of our technological tools coming awake in our hands leads us to recognize and work towards solving related ethical problems affecting people today, then perhaps it’s a useful exercise. But I differ with many of my fellow futurists here, in that I think there are some much more interesting and time-sensitive questions of personhood related to space than AI, and I’ll be getting into them in the coming months.
Other News
On February 17, I’ll be speaking at the St. Louis Science Center planetarium, asking “Can Our Earthly Ways Thrive in the Cosmos?”. The event is free and organized by Missouri Humanities, and afterward there will be snacks, a book signing, and access to planetarium exhibits.
Late last year, I spoke with David Lütke of About Trust magazine about the ethical challenges of space settlement, and the article was just published last month.
I’m organizing a virtual conference through my nonprofit, the JustSpace Alliance, called the Environmental Justice in Space (EJiS) Workshop. It’ll be held on June 20-21, 2024, and our goal is to bring space experts together with environmental justice activists and researchers to discuss areas of concern in the space environment, lessons learned from the history of environmental justice movements on Earth, and ideas for ensuring an equitable and sustainable future for humanity in space. Registration is free(!) and open now at this link.