AI researcher Gary Marcus: ‘The future of artificial intelligence is darker with Trump in the White House’
In his latest book, the expert argues that, given Washington’s close relationship with Big Tech, citizens must ‘get loud about wanting protection’ from the dangers posed by the technology
Gary Marcus, 54, has spent recent months fervently warning on social media about the dangers of Donald Trump returning to the White House, particularly if it’s with Elon Musk by his side. Now that it has happened, Marcus finds himself deeply discouraged. A leading voice in the U.S. on artificial intelligence (AI) and its risks, he testified last year alongside Sam Altman, CEO of OpenAI (the company behind ChatGPT), before a Senate subcommittee on how to regulate AI. In his latest book, Taming Silicon Valley, Marcus argues that if left unchecked, generative AI — the technology behind tools like ChatGPT and Gemini — will make the world a worse place.
A professor of psychology and neuroscience at New York University, Marcus has spent decades researching the intersection of cognitive psychology and AI. He has also founded two startups. The first, Geometric Intelligence, was acquired by Uber in 2016 and became a deep learning research lab. The second, Robust.AI, which he co-founded with one of the creators of the Roomba vacuum, focuses on developing open-source software for autonomous robots.
Marcus is active on platforms like X, where he is a vocal critic of Musk and engages in daily spats with figures like Yann LeCun, a leading figure in modern AI and Meta’s current AI chief. Marcus is blunt about his views on LeCun: “He is an intellectually dishonest egomaniac who did everything he could to deplatform me when I first criticized large language models [LLMs], only to do an about-face when ChatGPT eclipsed Meta’s work.”
Question. How do you see the future after Trump’s victory in the presidential election?
Answer. Dark. Generative AI comes with many risks, short-term and long, and I think the prospects for meaningful regulation under the Trump administration are poor. The EU has its AI Act; the U.S. has very little law directly around AI to protect its citizens, and I don’t see that changing in the next few years.
Q. There have been some attempts to regulate AI in California. Do you think it’s possible that some states will pass their own AI laws?
A. California did pass some laws around things like data transparency, but lobbyists from Silicon Valley helped block SB-1047, which would have made the companies liable for “catastrophic harm.” This was in my view a mistake. We can still hope that some states try, but it’s going to be an uphill battle, unless citizens get really, really loud about wanting protection. Otherwise, it might take some huge mess, like a giant AI-fueled cyberattack, before anything significant happens on the legislative front.
Q. Trump has chosen Elon Musk to lead the Department of Government Efficiency. What do you expect from him?
A. Elon was one of the first people to warn about the risks of AI, but now he has heavily invested financially in the success of AI, and it is hard to see how that wouldn’t color his recommendations to Trump. I imagine that he will do everything he can to get the government to subsidize the development of AI, including his own companies, despite the risks that he once warned about. Remember, this is the guy who signed the “six-month AI pause” letter and spent those six months amassing a giant GPU cluster for his own AI.
It’s also ironic that Musk made most of his money — and hence gained much of his power and influence — from Tesla, which builds electric cars and is in principle eco-friendly. It is ironic because the form of AI that Musk is excited about is not very environmentally friendly, consuming huge amounts of power and water and creating massive emissions. And yet, I expect the Trump administration to push hard to relax environmental regulations, to allow for more power generation in order to feed AI.
Q. Microsoft, Amazon, Google and Meta are all interested in using nuclear power plants, in some cases their own, to power their data centers. Some of these companies have discussed this with the Biden administration. Do you see this plan as more feasible under Trump?
A. I am fairly confident that the Trump administration will be supportive, unless there is an angle I haven’t seen. I actually think nuclear power makes a lot of sense, but pouring all that power into giant large language models is probably not the best use of that energy, as opposed to reducing our dependence on fossil fuels.
Q. Going back to Musk, what do you think about the government hiring the richest man in the world? Can a member of the government own a major social media platform?
A. I don’t think that Trump has a notion of “conflict of interest,” and he has generally disregarded prior norms. I don’t think it is a good idea for the nation, but I doubt that will stop Trump from proceeding. Who’s going to stop him? America 2025 is almost certainly going to be very different from what it was in previous years.
Q. What do you mean?
A. Trump will disregard previous norms, and to some degree laws in general. He will appoint an attorney general who will be enormously sympathetic [following the interview Trump appointed Matt Gaetz], and the Supreme Court has recently greatly broadened presidential immunity. Trump will take that as a mandate to do whatever he likes, whatever the written law may be, and I don’t expect him to be significantly challenged in doing so.
Q. What about the big tech companies? Do you think they will prosper under his leadership? In his first term, Trump considered Facebook and Twitter to be liberal-leaning companies.
A. Twitter (now X) has changed enormously under Elon. I don’t think Meta has changed as much. I think the biggest problem for Big Tech is that they have invested hugely in generative AI, on the fantasy that it will evolve into “AGI” (artificial general intelligence), and in reality it’s just not a sound enough technology to support the revolutions people are envisioning. If generative AI doesn’t become profitable relatively quickly, the bubble will burst — and neither Trump nor Musk can fix that.
Q. Social media has destroyed privacy and paved the way for surveillance capitalism. What can we expect from AI?
A. Generative AI will advance surveillance capitalism. Some people pour their innermost secrets into chatbots, and the makers of LLMs are hoping they will get access to everyone’s files, emails, calendars, and even passwords. LLMs themselves, by how they answer, can subtly shape people’s beliefs, and even, according to a recent study by Elizabeth Loftus, implant false beliefs. We are giving the makers of LLMs extraordinary power. Meanwhile, LLMs are already being used to generate misinformation, make biased hiring decisions, fuel cybercrime, and more. There are some upsides to the technology, but it is not at all clear they are a net benefit to humanity. Instead, most of the gains will go to their makers, with most of the costs absorbed by society.
Q. You argue in your book that the tech oligarchs will increasingly have more control over American society. Were you already thinking about Trump’s victory when you wrote it?
A. I was concerned that Trump might win, yes, though I think we would have faced challenges either way. But the basic point of the book is now even more urgent: we can’t trust Big Tech to regulate itself, and the U.S. government is too enthralled by Big Tech to give us what we need. The only way that U.S. citizens will be able to protect themselves from AI is if they are very, very loud — maybe through boycotts.
Q. One of the members of that technological oligarchy will have a government position.
A. Correct. And we can expect Musk to have an extremely strong voice in policy, much stronger than most other billionaires have ever had. It would not be surprising to see Trump largely defer tech policy to Musk, despite the immense apparent conflicts of interest. The world that I warned about has arrived. What we do about it is up to us.
Q. How can we tame Silicon Valley?
A. The people of the world have to unite and say: “We don’t want AI that destroys the environment, rips off artists and writers, defames people, and underwrites mass propaganda, especially when the makers aren’t taking any real responsibility for the harms that they cause.” Only if we insist that Big Tech do better will we see real improvement.