
Mustafa Suleyman: ‘Controlling AI is the challenge of our time’

The CEO of Microsoft AI, who co-founded DeepMind in 2010, reflects on today’s technological challenges: ‘My main hope is that everyone alive will feel the benefits of a revolution in intelligence that empowers them to achieve and do more’

Mustafa Suleyman

Over the past decade—and especially in the last five years—artificial intelligence has advanced at a breakneck pace. New AI applications appear daily. Nvidia, whose chips power this revolution, is now the world’s most valuable company, and the seven tech giants known as the “Magnificent Seven” have become the economic engine of the United States. Some warn this boom could mirror the dot-com bubble, but few doubt the profound impact AI will have on human life in the decades ahead.

At the center of this upheaval is British technologist Mustafa Suleyman, co-founder of DeepMind and now CEO of Microsoft AI. When he co-founded DeepMind in 2010, serving as head of product and later overseeing applied AI, the field had been inching forward quietly, almost demurely, for years. Then DeepMind delivered what once seemed out of reach. AlphaGo, the AI system that stunned the world in 2016 by defeating Go champion Lee Sedol 4–1, showed how neural networks combined with large-scale reinforcement learning could generate strategies no human had imagined. It was a true watershed moment.

By the time Google acquired DeepMind in 2014 and the AI race accelerated, Suleyman was already one of the field’s leading explorers. His vantage point gave him a rare view of both the promise and the peril. The biggest challenge, he argues, is the sweeping societal transformation triggered by the convergence of AI, robotics, and synthetic biology, from the future of work and human relationships to the defining issues of our time: climate change, health systems, biogenetics, and geopolitical rivalry.

These ideas shaped The Coming Wave: Technology, Power, and the Twenty-First Century’s Greatest Dilemma (co-written with Michael Bhaskar), his compelling and cautiously optimistic history of technology’s rise and the storms ahead. AI may transform everything, but it also carries real and urgent risks that, if left unmanaged, could fuel social conflict, nihilism, and instability. Suleyman’s engaged, wary optimism underscores the need for a political framework to govern AI. Reaching a consensus in an age of polarization, authoritarian drift, and U.S.–China rivalry will be no small task.

This interview is the product of an extended email exchange.

Q. As a pioneer in AI and co-founder of DeepMind, did you ever imagine in your youth that you would play such a fundamental role in this era of accelerated technological development? How did you personally experience the “frenzy phase” described by my fellow Venezuelan Carlota Pérez, whom you mention in your book?

A. I’ve always been interested in things that can have a massive positive impact on the world, but I had an unusual pathway into artificial intelligence. I became involved in the Copenhagen climate negotiations in 2009. It was an important experience because it taught me that many of the traditional institutions we rely on to solve our biggest and most pressing problems simply aren’t up to the task. At the same time, I could see digital platforms rolling out at massive scale and having huge impact. It seemed to me that artificial intelligence could bridge these worlds. That was the motivation: to build AI that could make a meaningful difference in addressing the huge challenges of our time — climate change, rising healthcare costs, stagnant productivity, loneliness, and disconnection.

More than the frenzy, one of the biggest challenges we faced in the early days of DeepMind was that almost no one was talking about AI. It was unfashionable and seen as pretty weird. Although AI is everywhere now, back then it was only re-emerging after a long “winter.” So we had to work hard just to convince people that AI and Artificial General Intelligence were real ideas worth pursuing.

Q. From your vantage point within the industry, how much control do today’s technologists actually have over the rapid change and disruption they help create, and how do they navigate this environment with competitors and allies?

A. Technologists should always take responsibility. We cannot control everything that happens downstream of what we create, but that doesn’t remove the responsibility to make the right decisions.

Q. Technological advances like large language models have moved from science fiction to everyday reality for billions. What strategies can help society prepare for such sudden, large-scale changes?

A. Transitions like this are complex. In the past, these shifts happened relatively slowly, so their consequences were blurred into the background. No one quite remembers when ATMs appeared or when supermarket kiosks became normal. This transition will be starker because it’s faster, more direct, and will affect nearly everyone. That’s why I’ve argued for containment and guardrails. How fast can we retrain and upskill? How much can the welfare state support people as they switch jobs? That’s probably the biggest challenge we face right now because there are overwhelming forces driving AI’s deployment to millions and billions of people. We have to manage that while softening the transition wherever possible.

Q. You’ve anticipated an era of surprises brought on by new technologies. What kinds of positive and negative surprises should we expect as this wave unfolds?

A. My main fear is that bad actors will use the technology in dangerous ways. My main hope is that everyone alive will feel the benefits of a revolution in intelligence that empowers them to achieve and do more, wherever they are.

Q. Robots, brain implants, genetic editing, synthetic life, and artificial intelligence are, as you write in The Coming Wave, markers of a historic turning point. What are the main promises or positive aspects of this disruption if things go well?

A. AI distills the essence of the world economy — intelligence — into an algorithmic construct. In the short term, AI will help make people more productive, which should drive meaningful global economic growth and offset any losses. But this will require a massive response from governments to ensure that everyone maintains living standards, gets retrained, and enjoys a better quality of life than today. Those building AI should focus on augmentation rather than replacement. Regulators and policymakers should already be thinking about the right tactics and mechanisms to help everyone through this transition. If we get that right, we could tackle some of humanity’s biggest challenges, from clean energy to affordable healthcare for all.

Q. Many fear AI is already making crucial human skills obsolete. How do you envision AI not only compensating for job losses but also tackling massive challenges like climate change — given its own voracious energy demands — or making healthcare more accessible and workers more empowered? As Nobel laureate Daron Acemoglu has argued, the current trajectory of AI seems focused on automation and displacement, not augmentation.

A. I recently announced the formation of a new team at Microsoft AI — the Superintelligence Team — built to pursue a new vision of Humanist Superintelligence (HSI).

Here’s how I define it: Humanist Superintelligence is advanced AI designed to remain controllable, aligned, and firmly in service to humanity. It’s AI that amplifies human potential, not replaces it.

This is our answer to what I see as the most important question of our time: how we ensure the most advanced forms of AI remain under human control while making a tangible difference.

Humanist Superintelligence offers a safer path forward. Imagine AI companions that ease the mental load of daily life, enhance productivity, and transform education through adaptive, individualized learning. Think medical superintelligence delivering expert-level diagnostics with an accuracy and affordability that could revolutionize global healthcare — capabilities already previewed by our health team at Microsoft AI. And consider AI-driven advances in clean energy that enable abundant, low-cost renewable generation, storage, and carbon removal to meet soaring demand while protecting the planet. The prize for humanity is enormous: a world of rapid advances in living standards and science, and a time of new art forms, culture, and growth.

With HSI, I think these are not speculative dreams. They are achievable goals that can deliver for people around the world with concrete improvements to everyday life. We should celebrate and accelerate technology because it has been the greatest engine of human progress in history. That’s why we need much, much more of it.

Q. How close are we to technology surpassing human agency and control? What does it mean to face the “gorilla problem,” that is, the challenge of creating something smarter than ourselves?

A. The goal should be to create AIs that are supportive and empowering to humans. That means building systems that are contained and aligned, designed with clear intent, trade-offs, and proper guardrails. It’s about making key design and engineering decisions early and then sticking to the principles behind them.

Q. AIs are often described as black boxes. Is it realistic to hope that we can control them and ensure humans retain a meaningful role, given their drive toward autonomy?

A. Yes, I think it is. At Microsoft AI, we’re building Copilot, an AI companion for everyone. It’s a very new and different kind of technology — not like any tool we’ve used before. It’s far richer and more dynamic. An AI companion will travel with you through life, grow with you, adapt to your needs and quirks, remember what matters, navigate the web, and act on your behalf — from booking a trip to managing daily admin to helping with complex tasks. And it will do all of this in your corner, aligned to your interests. This is something new: it’s about supporting human roles and bringing out the best of us.

Q. Even AI experts sometimes don’t fully understand how their systems work. What serious risks do you see in this opacity, and how can they be minimized?

A. It’s incredibly important that we take full responsibility and accountability for what we build. Microsoft has one of the strongest security teams in the world, and security is our number-one priority. Containment means that AIs must always be provably accountable and controlled. Accountable means transparent. We should always have a clear explanation for what they’re doing and why. And there must be enforceable limits on their capabilities with provable, verifiable boundaries.

Q. The rise of a techno-elite or “superclass” is shifting power from states toward those who control digital infrastructure, data, algorithms, and biogenetic advances. What threats do democracy and social equity face as a result, and what measures could prevent a future dominated by oligarchy?

A. While training large models isn’t something everyone can do, there are countervailing trends worth noting. The technology is diffusing extremely fast, moving from the cutting edge to open source within months. Small, lightweight models get better every day. That means that although big tech companies will play a role, so will many others. Beyond that, governments and companies still have a huge role in underwriting and supporting our social contract. Both should be vocal about that — I certainly am.

Q. You argue that regulation alone cannot contain these technologies. What would a practical and effective containment strategy really involve?

A. Containment should not only keep technology in check but also manage its consequences for societies and individuals. It must integrate engineering, ethics, regulation, and international collaboration into a single, coherent framework. To manage the AI wave, we need a program of containment working in ten concentric layers, from the technical core outward.

It begins with built-in safety measures — concrete mechanisms to ensure safe outcomes — and continues with audit systems for transparency and accountability. It involves using choke points in the ecosystem to buy time for regulators and defensive technologies; fostering responsible creators who build contained systems, not just critique from outside; and reshaping corporate incentives away from a reckless race. Governments must license and monitor technologies, while international treaties — even new global institutions — will be needed to coordinate oversight. We must also cultivate a culture that embraces the precautionary principle, while social movements press for responsible change. All these measures must cohere into a comprehensive program: mutually reinforcing mechanisms that maintain societal control over technology in a time of exponential advance. Without that, every other debate — about ethics, benefits, or risks — becomes inconsequential. And none of it will be easy.

Q. Is it feasible to push for an international “Paris Agreement” for AI, or to create an independent oversight body with real power, accepted by the major players?

A. Yes, but it will take a huge amount of work. The key is finding ways to create net-win situations where countries can collaborate to secure benefits for their populations while managing risks together. There are good precedents in history: the Montreal Protocol on CFCs, the Paris Agreement on climate change, or weapons bans. That’s the challenge for our time.

Q. What role should the humanities — fields like philosophy, history, ethics, and the arts — play in shaping AI’s research, development, and application? Are interdisciplinary perspectives being adequately integrated, or are we at risk of overlooking crucial humanistic insights in the rush to innovate?

A. I studied the humanities, and it’s deeply important to me. I believe there’s a huge role for diverse backgrounds and perspectives in AI. In fact, it’s essential. We’re now at a point where the tools are so advanced that you don’t have to be an engineer to lead product or engineering teams. We have a new kind of clay to sculpt experiences in new ways. That’s an incredible opportunity for writers and artists to get involved. Many people at Microsoft AI come from diverse backgrounds: educators, therapists, linguists, comedy writers, advertisers, designers, gamers. I’m keen to bring in true creatives who don’t fit traditional molds but have range and breadth, and to put them at the heart of product creation, alongside the engineers and managers.

Q. Are these voices truly influencing core design decisions? Or are they more often used to humanize products already engineered within a corporate framework?

A. My call to action for everyone is to get involved. There’s a huge role for people to play in shaping outcomes; nothing is certain or inevitable, and everyone alive has a stake in what happens next. Ultimately, society will decide what it does and doesn’t build. We tend to overestimate the short-term impact of technology and underestimate its long-term consequences. That means there’s still enormous scope — and time — for all of us to engage, join movements for positive change, and learn how to influence these tools for the best possible outcomes.

Q. Finally, on a human note: can empathy ever be truly programmed into AI, or is that a dangerous illusion?

A. Yes, it can. But we shouldn’t confuse that with human empathy or see it as a replacement. The advances of the past few years show that it’s possible. At Microsoft AI we call it “personality engineering,” and it’s an important part of designing systems that are supportive, accountable, and aligned to your interests. It can create a genuinely rich emotional experience. But it’s not about pretending it’s real emotion or substituting for it. An empathetic AI should help you reconnect with human beings. It won’t pretend to be something it’s not; it will puncture its own illusions. That’s a tricky balance, but one we’re determined to get right.
