
Imagine How Creepy It’ll Be When Everyone Refuses to Age

A new book sheds light on the latest attempts to hack death.
Image: Sonja Lekovic / Stocksy

Journalist Mark O'Connell has some good news and some bad news. The good news is that humans might be able to outrun death. The bad news? "It's going to be Peter Thiel and a bunch of mega-rich assholes who are going to live forever." In his new book To Be a Machine, O'Connell introduces the "transhumanists" who believe that either through better gene therapy or by uploading our brains to the cloud, we can eventually solve the "modest problem" of death. He visited cryonics facilities where bodies lie waiting in semi-suspension for medicine to bring them back to life; he spoke with Silicon Valley scientists who want to convert our brains to code—making maps of the mind that could run on any platform—or completely merge men and machines. And he developed a very real fear of artificial intelligence (and a weird relationship with his body) in the process.


I sat down with O'Connell to talk scary supercomputers, the inconvenience of occupying a fleshy human form, and how decidedly dystopian beating death would actually be.

First things first: How good do you think my odds are for achieving immortality?
Okay, well, you might not live forever—and transhumanists hate terms like "live forever." They would say, "I'm not talking about immortality, nobody's talking about immortality. I'm talking about eradicating aging as a cause of death." They really stress that point. You could get hit by a bus tomorrow, so it's not immortality.

Right. In terms of eradicating aging, then.
There's something called "longevity escape velocity," which is a term transhumanists are very fond of. The idea is that gerontology and gene therapy and various other biological sciences will keep progressing to the point where eventually, we'll be able to push back the average lifespan of a human being by more than a year for every year that passes. If the average lifespan goes up by 1.2 years every year, then effectively you've reached "longevity escape velocity," whereby you're pushing back the horizon by just a little bit more every year, and the science effectively lets you outrun death.
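To make the arithmetic concrete, here's a minimal sketch of the longevity-escape-velocity idea, assuming medical progress adds a fixed number of years of remaining life expectancy per calendar year. The 1.2-years-per-year figure is O'Connell's; the function name, starting values, and everything else are purely illustrative.

```python
# Illustrative sketch of "longevity escape velocity" (LEV).
# Assumption: medicine adds a fixed `gain` years of remaining life
# expectancy every calendar year; the 1.2 figure comes from the interview,
# everything else here is made up for illustration.

def years_remaining(start_expectancy: float, gain: float, horizon: int = 300) -> list[float]:
    """Track remaining life expectancy year by year.

    Each year you spend one year living (-1) and medicine hands back
    `gain` years. If gain > 1, the horizon recedes faster than you
    approach it; if gain < 1, it eventually reaches zero.
    """
    remaining = start_expectancy
    trajectory = []
    for _ in range(horizon):
        remaining = remaining - 1 + gain
        trajectory.append(remaining)
        if remaining <= 0:  # death outruns medicine
            break
    return trajectory

print(years_remaining(40, 1.2, horizon=5))  # grows ~0.2/year: never hits zero
print(len(years_remaining(40, 0.8)))        # ~200 years -- the clock still runs out
```

The only thing doing any work here is whether the annual gain exceeds the one year you spend living it.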

After meeting people whose ideas range from "longevity escape velocity" to brain uploading, which do you see as, like, the most out there?
There's this whole theory—this theology almost—of what's called "the singularity," which is this idea that our fate as a species is to merge with technology and to literally become technology. Through things like brain uploading, we'll be able to become disembodied, pure minds floating around in the ether, uploading ourselves into the cloud. We'll finally be free of the laws of physics because we [won't] have bodies, and we'll be able to infinitely explore the universe. It's some pretty crazy, out-there stuff.


What's some of the stuff that's more realistic in the short term?
The effects of artificial intelligence are such a big aspect of the book. That feels like something that's close. Some kind of human-ish level artificial intelligence is probably on the horizon, and even artificial intelligence that exceeds human capabilities is on the horizon. I came to realize that this is maybe not the big thing that's approaching us—I think climate change is probably still the most shit-your-pants scary thing that's out there right now—but economically speaking, artificial intelligence is an extinction-level event, I think, for the employment economy.

People are talking about that, but it hasn't really become part of the political conversation, which is kind of scary. Huge numbers of jobs are going to disappear to automation of all kinds, but artificial intelligence particularly is going to be a tricky one.

Right. And as if we weren't troubled enough by everything else that's going on in the world right now, there's apparently a team of scientists who are "urgently" investigating how to protect humanity from AI. How big of a concern is that?
I ended up worrying about my future and my son's future—I have a three-year-old son—a lot. What is he going to do when artificial intelligence takes all the jobs? That's the level I was operating on in terms of my anxiety.

Those guys, the "existential risk" people, are operating on a whole other level. For them, that's not the problem; the problem is that artificial intelligence is going to wipe us all out and obliterate humanity, not through any sort of Terminator-level malice, but just because you give something the wrong instructions. Computers are incredibly intelligent, but they're also incredibly stupid.


Stuart Russell, who's a professor of computer science at Berkeley, came up with this example. You have an artificial intelligence, and you want it to figure out the best way to cure cancer. The thing that it does is wipe out every carbon-based life form on Earth that's capable of abnormal cell division. You inadvertently wipe out all of life, because that's the easiest and quickest way to cure cancer. You have to define, in a really granular way, what you want the computer to do. If you don't, it's going to do it in a way that's going to backfire.
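To see why "granular" matters, here's a toy sketch of the failure mode Russell's example describes. Everything in it is invented for illustration: the objective function, the numbers, and the penalty term are not Russell's actual formulation, just a cartoon of a misspecified goal versus a better-specified one.

```python
# Toy sketch of objective misspecification, riffing on the Stuart Russell
# example above. All names and numbers are illustrative, not from Russell
# or the book.

POPULATION = 8_000_000_000  # organisms capable of abnormal cell division

def cancer_cases(alive: int) -> int:
    """Hypothetical stand-in: cancer incidence scales with organisms alive."""
    return alive // 100

# Misspecified goal: "minimize cancer cases," with nothing else defined.
# Searching over how many organisms to leave alive, the optimum is zero:
# the easiest, quickest 'cure' is wiping out every carbon-based life form.
candidates = range(0, POPULATION + 1, 1_000_000)
print(min(candidates, key=cancer_cases))  # -> 0

# The granular definition the interview calls for: make deaths count too.
def safer_objective(alive: int) -> int:
    deaths = POPULATION - alive
    return cancer_cases(alive) + 1_000_000 * deaths  # heavy penalty for killing

print(min(candidates, key=safer_objective))  # -> 8000000000 (everyone lives)
```

The naive optimizer isn't malicious; it simply finds the degenerate optimum you forgot to rule out, which is the "wrong instructions" problem in miniature.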

So computers will either destroy the economy by making work obsolete or accidentally kill us all.
You get people who believe both things at once, and it's not really a contradiction, I suppose. Some people predict that artificial intelligence, if we don't contain it, is going to wipe us out. And if we do manage to contain the problem, we'll then have a utopian scenario where we're all floating disembodied intelligences and we don't have to worry about death or the inconvenience of having a fleshy human body anymore. One thing I talk about in the book is my inability to get past the paradise scenario that they put forward, because that seems to me completely hellish. I couldn't even get around to worrying about the worst-case scenario, because I was so worried about their best-case scenario. Neither future is particularly appealing to me.

That's what I kept coming back to: that this seems not so "utopian" in the long term.
Most of it is a little scary or dystopian. But at the same time, most of the people that I talked to I found quite appealing on a personal level. The thing that surprised me most was that they were all really, deeply human. And I do identify with these people—the book springs out of this weird identification with transhumanists. Like, yeah, it is completely fucked up that we have to die, that we have to get old in these bodies and watch ourselves and our loved ones decay. That seems wrong and like an inadmissible reality. Transhumanism comes out of that sense of the wrongness of our condition.

In a recent New Republic article, Anna Wiener posed the question of why immortality is suddenly such a booming business. Why do you think that is?
In a way, it's always been the case. I believe there's always been an obsession with immortality, and we always find some story to tell ourselves that it's going to be possible eventually. Those stories have been told through religion and various mythologies, but what's different now is that there's at least a theoretical possibility that technology and science can address these problems. There's a chink of light that actually existing science and technology lets through.

So what's the upshot here? Will we beat death in my lifetime? Is "lifetime" even a measure we can use anymore?
In Silicon Valley, in the tech world, there's a crazy, delirious sense of wild optimism about the power of technology to change the world, to use an overused phrase. In a way, it's amazing and cool that there are people who believe that these problems are solvable and push at them. I wouldn't want everybody in the world to be like me—a stoic, depressive, pessimistic person who's like, "We're all gonna die. It's terrible. What are you going to do about it?"

But there's a huge element of delusion about what technology can do. And what it does best for them is make shitloads of money. Maybe it has something to do with having an almost infinite amount of money and an almost infinite sense of the possibilities of how you make your money, which leads to: I want to spend money forever, why would I die? But it's a frightening prospect as well. I don't go into the socioeconomic implications of immortality all that much in the book. But I think about the scariness of it, of a certain class or caste of people living forever. Let's be honest—it's not going to be this egalitarian thing where you go to the hospital or the National Health Service and get your immortality pill; it's going to be Peter Thiel and a bunch of mega-rich assholes who are going to live forever. Maybe I'm being pessimistic about that again. But that seems to be the most likely scenario.