Writer Douglas Rushkoff: ‘The tech elite are planning for the apocalypse’
In his latest book, the American author examines the mindset of technology billionaires and their pessimistic view of the world
In 2017, Douglas Rushkoff had a revelation. He was invited to a conference at an exclusive resort in the California desert, only to realize it was actually a private meeting with five billionaires. They didn’t want to talk about technology trends, Rushkoff’s specialty as a writer and university professor. Instead, they wanted to discuss how to prepare for a catastrophic event that could destroy civilization: environmental collapse, social unrest, a nuclear explosion, a solar storm, an uncontrollable virus, major computer sabotage or a rebellion by artificially intelligent machines. The billionaires discussed whether to build private, underground bunkers in remote parts of Alaska and New Zealand, whether climate change was a more likely threat than biological warfare, and the best strategies for ensuring that their security guards wouldn’t turn against them during the apocalypse.
The media theorist, writer, columnist, lecturer, graphic novelist and documentarian, best known for his association with early cyberpunk culture and Marxist ideas, was shocked by that meeting. He knew, of course, that the tech elite is immensely rich, but he didn’t know they also assume the world is going to hell and have a Plan B: fleeing somewhere safe. The ultra-rich have grand aspirations beyond luxury bunkers. Jeff Bezos plans to travel to space, while Elon Musk aims to colonize Mars. Venture capitalist Peter Thiel seeks to reverse the aging process, while OpenAI’s Sam Altman and Google’s Ray Kurzweil envision uploading their minds to computers. Meanwhile, Mark Zuckerberg finds solace in the metaverse. These are the various ways they hope to escape the world’s problems, problems they themselves helped create, and leave the rest of humanity behind.
Douglas Rushkoff, author of 20 books, three documentaries and a weekly podcast, explores the thinking and behavior of the tech super-elite in his book, Survival of the Richest (2022). “Many people look at these tech titans as our heroes, as people to emulate,” he writes. “But the vision of the future that these people have is very dark, and I want to help people see that their ideas are silly.”
Question. Have you been contacted by any technology executives since you published the book?
Answer. After the book launch, I had a couple of talks booked — one with a hedge fund and another at a conference for top executives — and both got cancelled. But I receive daily emails from startup owners seeking my feedback on their business plans. You know, I go to business schools and try to convince the young audience that it’s okay to earn just $50 million. That’s what I do. I tell them, if you set your sights on $50 million, you can have a much more relaxed career and a greater probability of success. And you can build a company that does much less damage, that maybe even does good. Usually I start by saying, “Is anybody here willing to make $50 million?” No one raises their hand. And by the end of the talk, maybe I’ll get four or five people out of 100 to say, yes, I’m willing to settle for $50 million and take this other business path. And that’s a start.
Q. Is your main criticism directed towards capitalism itself, the technology tycoons or a combination of both?
A. I explore this question extensively in my book. For over a decade, I attributed the transformation of the open, collaborative and creative culture of the early internet into just another consumer product solely to capitalism. But I’ve since learned to see this kind of techno-autism and extractive capitalism as the same essential urge: ways of understanding the world through the balance sheet, through numbers and symbols, rather than acknowledging other sorts of human values.
Q. The mentality you talk about in the book — how would you define it?
A. It’s the idea that with enough money and technology, these people can escape from the damage they’re creating with money and technology. It’s a way of thinking that there’s always another technological or market solution to the last set of problems. And that wealthy individuals can avoid any harm, that they can keep rising above the rest. This mindset is an extreme form of atheism that says that human beings are just material and have no soul, that there’s no meaning in reality, that all of life is just information, and that all that matters is to spread your genes and your memes. It’s thinking that success is buying Twitter and getting as many women pregnant as possible.
Q. There have always been dominant and powerful elites. What makes the ones today any different?
A. Two things. First, they never had the ability to destroy the world before. Carnegie and Rockefeller owned large monopolies, but tech billionaires go much further. Julius Caesar or Alexander the Great could conquer, kill and rape, but they did not endanger all of humanity. That only came about with the nuclear trigger. The other difference is that, as individuals, they have more power over more things. Elon Musk, for example, not only owns the main digital public square. He also has companies that are leaders in space travel and make the satellite systems that militaries depend on for navigation. These people are not controlled by any government and have very little sense of social responsibility.
Q. You say in the book that they are selling their escape plans as solutions for all of humanity.
A. Bezos has demonstrated that in our world, individuals can accumulate enough wealth to develop their own space program and pursue the ultimate exit strategy. They rely on astute marketing advisors and espouse effective altruism, believing that the progress they create, be it artificial intelligence, robots or augmented humans, is of greater significance than human beings. Their vision extends to trillions of future artificial intelligences inhabiting the galaxy, prioritizing their experiences over the current population of eight billion individuals. They are smart enough and clear-headed enough to see that. They’re not trapped in human emotionality, and they’re able to pull back and see the equation from a much more rational place, like Ayn Rand or Jeremy Bentham.
Q. Some of the billionaires you write about recently testified in the U.S. Senate. Do you think that will bring them back to Earth?
A. When I first saw Sam Altman [CEO of OpenAI] calling for regulation, my first thought was very cynical. Here’s the guy who has the first monopoly on AI, and of course he wants regulation right now. But as I watched him, I started to get the feeling of, here is a nice Jewish boy who built this technology and now realizes he’s in over his head. He doesn’t know what to do. It’s more powerful than he realized it would be, and he’s asking for help. If we don’t engage with them, the Oxford philosopher who invented effective altruism will, and he’ll teach them that it’s okay to make all the money you want and not worry about a thing as long as you give money to charity. And that’s dangerous. So, I think we need to take them at their word that they need to be regulated. We could create some kind of data pool that all AIs have access to and learn from, but its use would be subject to rules, and AIs that don’t behave appropriately would be banned. A friend of mine, science fiction writer David Brin, came up with the idea of incentivizing AIs to monitor each other and report those that are doing something wrong.
Q. In the book you describe many technological solutions developed by this elite to save the planet. Which ones most appealed to you?
A. I mean, the smaller and more local they are, the more appealing they are to me. People who are looking at embracing the complexity of nature rather than oversimplifying it. Regenerative agriculture is interesting to me. And people who want to make the electric grid more intelligent, even using AI to help distribute electricity. I don’t believe we can generate enough power to support the world as we currently operate it. I’m more a fan of what we call degrowth, something as simple as borrowing a tool from your neighbor instead of buying a new one yourself. It’s not a great techno solution, I guess, but I love the idea of libraries where you borrow tools instead of just books so that we don’t need to manufacture so many things.
Q. You say that technology cannot be the solution to our problems because no one has been able to stop fascism when inequality soars, and no society has avoided collapse when resources have been overexploited. Are we doomed?
A. If an addict can’t overcome addiction, he will eventually die. In Porto Alegre [Brazil], there was a tornado two weeks ago that killed 47 people. Floods in Libya may have killed over 10,000. All this is happening right now. We may not be able to avoid catastrophe, but we can choose how we deal with it. Are we going to do it as compassionate human beings who care for each other, or will we each go our own way? That will determine everything. You know, the more we depend on each other, the fewer things we need to buy, the less energy we need to spend, the fewer slaves we need, and the less war and strife we need to create. Most of us don’t need to understand the geopolitics. What we can do is relieve the stress on the global supply chain. We can relieve the stress on world leaders and on geopolitics by taking better care of ourselves and each other.
Q. You compare these billionaires to the cartoon coyote that falls off a cliff chasing the roadrunner and looks down to find nothing underneath its feet.
A. That’s the moment they said, “Uh-oh, I pushed too far.” One of those moments was the election of Trump, when all of the technologists who think of themselves as progressives, liberals and people who care about the climate realized they had created this monster. They realized their platforms helped to generate the confusion. They made a lot of humanity vulnerable to authoritarian rulers. Now they want to fix things using the same kinds of tools that they used to break things. And that doesn’t work.
Q. Why do their plans always involve starting from scratch?
A. I don’t think we need to find another place to start over, whether it’s the Moon, Mars, the ocean or a new piece of land. This is our adolescence as a civilization. We are in that toxic phase prior to a big change. And I believe we can do it. But the change we need is more of a mental shift than a technological one. I hate to say it, but the greatest probability we have of avoiding a major disaster may look more like magic than science. If there was a sudden shift, a global shift in how we think about things... I know it sounds like a fantasy, but that’s what it’s going to take.