Yuk Hui, philosopher of technology: ‘We cannot let economic reason and individualism dominate our use of technology’
The thinker, who was born in Hong Kong and trained as a computer engineer, warns that artificial intelligence has become a financial tool to attract investment
Yuk Hui, one of the most influential philosophers of technology in AI debates, also uses ChatGPT. “Immanuel Kant writes very long sentences in German with no punctuation. It can be very confusing. So, it suggests where to put commas or a full stop, to break the sentence [when translating]. It does it much better than me,” he explains in a small room at the Center for Contemporary Culture in Barcelona, Spain, where he recently gave a lecture. Born in Hong Kong, he never reveals his age. His sober style — turtleneck, black jacket, discreet glasses — and a wise man’s gaze that lights up with curiosity make it impossible to guess.
He studied computer engineering, but the questions he asked himself led him to philosophy. He earned his doctorate from Goldsmiths, University of London, under the supervision of French philosopher Bernard Stiegler, and now teaches at Erasmus University Rotterdam. He has published several books that have been translated into a dozen languages. His position on artificial intelligence differs from the hegemonic view, which expects this technology to reach a point where it either frees us from work or kills us all. Influenced by Gilbert Simondon, Martin Heidegger, Henri Bergson and the science of cybernetics promoted in the 1940s by Norbert Wiener, he tries to understand how our relationship with technology works, and advocates a view that takes into account the diverse forms of knowledge of each culture.
Question. How did you go from computer engineering to philosophy?
Answer. I studied in Hong Kong, and I was very interested in artificial intelligence. I found that what artificial intelligence was doing was actually philosophy. It’s a philosophical question. For example, what is perception? What is action? What is morality? If a robot comes into this room and looks at us, how can it know what is important in this setting? These kinds of questions led me to the phenomenological critique of artificial intelligence that started in the 1960s. There was a famous American philosopher called Hubert Dreyfus who said that the AI they were doing at MIT at the time was Cartesian AI. And he said Cartesian AI is actually a mistake when you look at the history of philosophy. He proposed what he called Heideggerian AI.
Q. What does that mean?
A. It’s an AI that is embodied: it embodies the world and is embedded in the world. He believed that the AI scientists at the time did not really understand what we mean by intelligence, that everyday experience was not well understood in AI research. It was really eye-opening for me. When you are a computer science student, you learn how to program: you know that if you want to do this, you copy and paste this algorithm, but without really understanding what is happening. Now the situation has changed a little bit.
Q. It has also accelerated. Seven years ago, in your essays, you quoted Putin: “Whoever leads in AI will rule the world.” Where are we now?
A. Before Putin, Xi Jinping was saying this. And last week, Emmanuel Macron said that the European countries are moving much more slowly in this domain. So in general, politically speaking, we are in an AI competition. Where are we going? Where are we speeding to? This culminates in what the transhumanists call the technological singularity, a superintelligence with which we won’t need governments anymore. But this narrative of moving towards a technological singularity is more or less a narrative of apocalypse: we move towards somewhere we don’t know. What I’m trying to propose is moving towards a technological diversity, a diversity of thought, a biodiversity, as an alternative to this narrative of apocalypse.
Q. Technology companies also exploit this apocalyptic narrative.
A. AI right now is not just a technology; it is a financial tool to get investment. This fear of AI is what the industry has to say in order to justify what they are doing and to attract investment.
Q. With the tension in the world — in Taiwan, for example — can this competition lead to a war?
A. I am trying to analyze this question in my new book, Machine and Sovereignty. What is the relation between technology and war? Of course, technology can be used in wars, but it’s not so simple. I attempt a new reading of a speech given by the French philosopher Henri Bergson in 1914, right after the outbreak of the First World War. He believed that over the previous 100 years Europe had produced many machines, and that each machine is a new organ for us. Before the First World War we witnessed a sudden expansion of our artificial body, but we were not able to deal with it. That, for him, is the source of war. This hubris, in the Greek sense, cannot be pacified. We see that now with the reactionary movements in Russia. Alexander Dugin was a big voice behind that. There has been a lot of discussion about how Russia was repressed by Europe and by the West, especially in terms of technology and science.
Q. You write about the paradox of intelligence: it produces tools that can threaten it. Is it inherent to human evolution?
A. Humans are technological beings. We invent technology, but at the same time we are invented by technology: we develop our gestures, we reconfigure our central nervous system. And technological evolution is much faster than biological evolution. Before the industrial revolution, craftsmen worked with multiple tools. Then, in the time of the Encyclopédie, there were bigger factories, but people still worked manually, with simple mechanical machines. But then came the industrial revolution with a different kind of machine, the machine that Karl Marx described: closed, autonomous. All the workers have to do is put things in and collect the result at the end. They are no longer using their body as they did before. They lost their knowledge. The machine is purely an externalization of intelligence, but they don’t know how to deal with it. That is one dimension of the source of alienation. Today we are confronting a different kind of machine that is almost biological. It comes from the development of cybernetics, proposed in the 1940s: machines govern themselves through feedback.
Q. How can we understand technology from other places?
A. This is what I call technodiversity. I am not referring to the defense of the local and the traditional that the right engages in. It has nothing to do with identity, but with the fact that each locality has a form and history of knowledge. With modernization, these forms became fragile: indigenous knowledge cannot be used to make a machine. It’s not about preserving local knowledge in a museum; it’s about understanding how it is relevant to what we are doing today, how it helps us understand technology. We cannot let economic reason and individualism dominate our use of technology. Let’s study ways to develop alternatives that serve the community.
Q. Can art play a role in this matter?
A. In the past century, art has been pressured by a kind of technological determinism. As Walter Benjamin said years ago about the work of art in the age of its technical reproducibility: let’s not ask whether cinema and photography are art, but rather how the nature of art is transformed by technology. This still continues. Art, business, everything is transformed by AI. My proposal is to think, through technological diversity and artistic varieties, about how our experience on Earth could help us transform technology.