Mark Coeckelbergh: ‘Weak democracies, capitalism, and artificial intelligence are a dangerous combination’
The Belgian philosopher says policy-makers should consult experts when regulating new technologies, but must also take into account the rights and concerns of citizens
Before this interview with EL PAÍS, Mark Coeckelbergh (Leuven, Belgium, 48 years old) delivered a lecture to an audience unaccustomed to philosophical debates: engineering students, who packed a room at the Polytechnic University of Catalonia in Barcelona, Spain, to listen to the expert on the ethics of technology, who had been invited to speak by the Institute of Robotics and Industrial Informatics. A prolific author whose numerous published works on the philosophy of robotics and the ethics of artificial intelligence (AI) include, most recently, Digital Technologies, Temporality, and the Politics of Co-Existence (Palgrave, 2023) and Robot Ethics (The MIT Press, 2022), Coeckelbergh knows how important it is to build bridges between those who develop technologies and those who are tasked with thinking about how to use and regulate them.
Question. Do you think engineers, students of robotics, and big tech companies tend to consider the ethical problems posed by AI?
Answer. I think more and more, people realize that the ethics of technology is going to affect their own lives. They see that it’s everywhere. And so I think there is an awareness, but people are confused because the changes are happening so fast, and it’s kind of complex. So I think it’s important that we, as educators, have that awareness, and also that we take an interdisciplinary approach. We need to collaborate. We need to work together.
Q. And to address the politics and policies?
A. Yes, we need to create more links between experts and politics. Okay, so we ask an expert about something, but there’s no systematic or institutional way of connecting the two realms, the political and the technical. How can we do that? How can we organize our democracy so that we take into account the vision of the technical experts, but make the decision ourselves, as a society? The power is more and more on the side of technology. It’s really a problem. The nation-state, for all its sovereignty, is less powerful than these big companies. So in that sense, it’s also a political problem. How much do we place our technological future in the hands of private initiative? And how much in the hands of public and democratic control? This is a big part of the problem.
Q. Is the problem that artificial intelligence is a threat to democracy, or that our democracies are already weak?
A. Democracy is already vulnerable, because we don’t have full democracy. It’s like when Gandhi was asked what he thought about Western civilization, and his response was that he thought it would be a good idea. It’s the same with democracy. It would be nice to have democracy. It’s a great idea. But our democracies are not full democracies. We have majority voting, but majority voting is not enough. It’s too vulnerable to populism, for example. It’s not participatory enough. It doesn’t take citizens seriously enough. People aren’t used to thinking about politics in the broader sense, about how to organize society. For example, is it correct that, in a democracy, the infrastructure for communication is in private hands?
Q. How does artificial intelligence threaten democracy?
A. We use technologies uncritically, but they shape us and they are used as instruments for power and control, for exploiting our data. And we’re not the ones who reap the benefits. We’re creating a society where only a few benefit, while the rest of us are milked for our data. This is also connected to democracy, which is not resilient enough, which is vulnerable to populism and is taking this conservative turn. For example, when we think about social media, with the polarization and the bubbles and so on, it would not be so bad if it were just a technology, but it’s a technology that enlarges tendencies that are already there. Societies are more and more polarized. And so yes, I think this combination of weak democracies, capitalism and technology is dangerous. But I do think we can design the technology differently, try to make our democracies more democratic and more resilient, and use technology in a more constructive way to create a good life for the many, and not just for a few.
Q. Some see artificial intelligence as a way to work less and have more freedom, while others see it as a threat to their jobs.
A. I think AI empowers people who already have a privileged position or a good education: For example, they can start a company using AI. So for them, it works. There will be changes in employment and there will be a transformation of the economy, so we need to be prepared. On the other hand, there’s this idea that automation will lead to a society of leisure and will make everything easier… But in practice there’s a divide between two categories of people in society: those who don’t have jobs — or have very bad jobs, where AI is used for surveillance, like Uber drivers — and people like me, who have good jobs, but are stressed as hell. The argument is always that technology makes things easier. But just take the example of email: It was invented as a solution, it’s easier and faster than writing a letter, but now we’re all slaves to our inboxes.
Q. So the problem isn’t so much technology, but the system?
A. It’s the combination of the two. But I also think these new technological possibilities will force us to question the system more than we did before, because the effects of the system will be more pronounced. Technology today is the site where political struggle plays out.
Q. What effect does this have on media?
A. I think there is a lot of confusion. We are in a difficult epistemic environment, where people are not sure anymore what is true and what is not. And it’s not only about facts; it’s about interpreting the world, trying to understand what is happening. I think this is also why quality journalism is so important, and can help us have a facts-based understanding of the world, even if outlets are using AI to perform some tasks. Philosophers, journalists, people in the humanities, educators and teachers — we all have an important role to play in helping citizens interpret the world, because once your knowledge base is no good, then anyone can come along with a simple solution. And so you get populism and so on, like we see happening already in many places in Europe.
Q. Can technology make governments more technocratic and authoritarian?
A. Politicians panic and feel confused, so they give more of a voice to experts and to these companies. They feel the pressure from lobbyists and they create certain regulatory frameworks, but the question is: where is the citizen in all this? When do I have a say, or did I ever have a say, in any of this? I think states will become more bureaucratic, because they are putting the power in the hands of the people who control AI. And it’s a tempting tool, because, in a welfare state for example, you can monitor poor people more effectively. From the perspective of a management firm, it’s great, because you have more control. But as Hannah Arendt warned, this kind of system, where you treat people as a means to an end rather than as human beings, can lead to all kinds of horrors. So we should, in a democracy, fight against this. Absolutely. We should regulate AI in such a way that some of its political uses, or other uses that are against human rights, that are ethically and politically problematic, are restricted. And we should have regulations that allow us to see how algorithms make the decisions they make, and that show us who is accountable.