
Marc Serramià: ‘If we all trust tools like ChatGPT, human knowledge will disappear’

The Spanish engineer talks to EL PAÍS about the need for ethical algorithms, society’s permissive response to AI and why more needs to be done to address the risks of the technology, such as autonomous weapons

Marc Serramià, professor of Computer Science at City, University of London, at the BBVA Foundation in Madrid. Jaime Villanueva
Manuel G. Pascual

Marc Serramià, 30, is concerned that the dizzying rise of artificial intelligence (AI) into our lives has not come with a serious debate about the risks involved with this technology. Given the ethical dilemmas it raises, Serramià has decided to focus his research on developing techniques to ensure “that the behavior of these systems is consistent with human values and social norms.” His work has earned him the Spanish Computer Science Society and BBVA Foundation Award, which every year honors young researchers for their innovative doctoral theses.

The Spanish researcher compares his work in the field of AI with the establishment of behavior standards in traffic regulation. “We have speed limits on the road because we value the lives of drivers more than reaching our destination quickly,” says Serramià, a doctor in Engineering (specialized in artificial intelligence) from the University of Barcelona, who is currently a professor in the Department of Computer Science at City, University of London.

Question. Some experts say that the risks of AI should be taken as seriously as the climate emergency. What do you think?

Answer. I agree. A good example is medication. To put a drug on the market, not only must it be shown that it has a positive primary effect, but its side effects must not be worse than that effect. Why doesn’t the same happen with AI? When we design an algorithm, we know it will perform its main function well, but not whether it will have side effects. I think that in the case of drugs or weapons we see this very clearly, but with AI not so much.

Q. What dangers are we talking about?

A. There are many. One of them, on which I focus part of my research, is privacy. Even if we anonymize data, it is always possible to reverse engineer it and infer things about you that are used for personalized advertising, to decide whether you are granted a bank loan, or for a potential employer to judge whether you fit the profile they are looking for. Our work suggests the following: since we use algorithms to study you, why not use them for good things too, like learning your privacy preferences? In other words, if I tell you that I don’t want you to share my location, don’t ask me again. What we have proposed is that an AI can learn from the user, act as their representative in this process and define their preferences, predicting them from the information it has about them. We built a very simple AI tool, and even so our data shows that it was able to predict users’ actual preferences with good reliability.
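
To make the idea concrete, the sketch below shows one way such a privacy “representative” could work: a tiny classifier trained on a user’s past sharing decisions predicts whether to allow a new data-sharing request. The features, data and model choice are illustrative assumptions, not Serramià’s actual system.

```python
# Illustrative sketch only: predict a user's privacy preference for a new
# data-sharing request from their past decisions. Features and data are
# hypothetical, not the researchers' actual model.
from sklearn.tree import DecisionTreeClassifier

# Each past request: (data type, recipient, purpose) encoded as small integers,
# e.g. data type 0/1/2 = location / contacts / browsing history.
past_requests = [
    [0, 0, 0],  # location, to an app, for ads       -> refused
    [0, 1, 1],  # location, to a friend, navigation  -> allowed
    [1, 0, 0],  # contacts, to an app, for ads       -> refused
    [2, 0, 1],  # browsing, to an app, functionality -> allowed
]
past_decisions = [0, 1, 0, 1]  # 0 = "do not share", 1 = "share"

# Train a tiny model that acts as the user's "representative".
model = DecisionTreeClassifier().fit(past_requests, past_decisions)

# A new request arrives: location, to an app, for ads.
new_request = [[0, 0, 0]]
if model.predict(new_request)[0] == 0:
    print("Deny automatically: the user has refused similar requests before.")
else:
    print("Allow, or ask the user only if the model is unsure.")
```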

In October, Serramià received one of the six awards that the Spanish Computer Science Society and the BBVA Foundation give to promising young researchers. Jaime Villanueva

Q. What other problems would you point out beyond privacy?

A. Smart speakers, like Alexa, were launched on the market very quickly, and they are failing in ways such as sending sensitive conversations to contacts you did not want to share that information with. A less everyday but more far-reaching problem is the danger posed by autonomous weapons.

Q. To what extent should we fear autonomous weapons?

A. They are very advanced at the production level. My thesis director took part in a United Nations conference on this topic, and most of what the politicians and military officials present said was: well, we don’t want them, but if we don’t develop them, another country will. The balance is very complicated. There will always be someone willing [to take the step], and that will drag the others along.

Q. When we talk about autonomous weapons, are we referring to drones?

A. For now, I think drones are the most widespread autonomous weapon. In the future we could be talking about armed humanoid robots. At the moment, drones with explosives are being used in the war between Russia and Ukraine, but they can also be fitted with weapons that shoot.

Q. Is there a way to stop that? Or is the automation of war inevitable?

A. What we recommend is trying to stop or slow down the development of autonomous weapons with decision-making capabilities, because we are creating things without knowing how they work or what effects they can have, and that is very dangerous. The problem is that companies know that if they don’t do it, others will, and in the end a kind of race sets in. It would be good if there were some kind of certification in this area. It should start with consumer products, such as smart speakers: if you go to a store and see one that is certified, according to an ethical study that ensures it respects your privacy, you are likely to buy that one and not another.

Q. Does ethical artificial intelligence really exist?

A. Yes, although it is not very visible. It’s new ground: the first international conference on the ethics of artificial intelligence was held in 2018. One topic I’m working on is using AI to improve participatory budgeting processes, like Decidim Barcelona [a digital platform that aims to give citizens a voice on the future of their surroundings]. One of the problems Decidim Barcelona has is that few people participate, and studies show that the most disadvantaged groups generally vote less, which biases which projects get selected. We built them an algorithm that could represent the value system of people who do not participate, either because they cannot or because they do not want to, in a way that takes their sensitivities into account. The objective is to minimize the possible biases that arise from decisions voted on by only a few. The interesting thing is that in our experiments we have seen that we can find a good balance in which the participants are happy and we also represent those who did not participate.
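
As a toy illustration of that general idea (not the algorithm actually deployed for Decidim Barcelona), one could estimate how well each project serves each value, estimate the value profile of non-participants from surveys, and blend the scores this implies with the actual votes. Every number and the blending rule below are hypothetical.

```python
# Toy illustration of value-based vote completion in participatory budgeting.
# All figures, weights and the blending rule are hypothetical assumptions.

# How strongly each proposed project serves each value (0..1).
projects = {
    "new bike lanes":      {"sustainability": 0.9, "accessibility": 0.4},
    "elevator at station": {"sustainability": 0.2, "accessibility": 0.9},
}

# Share of the vote each project got from actual participants.
actual_votes = {"new bike lanes": 0.7, "elevator at station": 0.3}

# Estimated value profile of residents who did not vote (e.g. from surveys).
non_participant_values = {"sustainability": 0.3, "accessibility": 0.7}

# Fraction of the population that did not participate.
non_participation_rate = 0.8

def blended_score(project):
    """Mix real votes with the score implied by non-participants' values."""
    implied = sum(non_participant_values[v] * w
                  for v, w in projects[project].items())
    return ((1 - non_participation_rate) * actual_votes[project]
            + non_participation_rate * implied)

for name in projects:
    print(f"{name}: {blended_score(name):.2f}")
```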

Q. Is it possible to code algorithms to be ethical?

A. On a theoretical level, yes. My research is limited to that plane; I focus on multi-agent systems [several intelligent systems that interact with each other]. The idea is to think about how to design, for a tomorrow in which AI surrounds everything, a system of norms that ensures these systems are aligned with our values. There is a separate line of research on how to transfer this to a practical level, but we won’t go into that here.

Q. And how can it be done?

A. Artificial intelligence can be seen as a mathematical formula that tries to change the state of the world so as to maximize that formula. Although it seems to have intelligent behavior, it is still an optimization mechanism. You can put rules in the code, or you can modify that mathematical formula so that breaking a rule is penalized. The system just wants to get it right: it will do whatever helps it achieve its design goal, but it doesn’t know what it’s doing.
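
A minimal sketch of that point, with made-up actions, utilities and penalty weight: the optimizer simply picks whatever maximizes its objective, and subtracting a penalty when a norm is violated steers it toward the compliant choice.

```python
# Minimal sketch of "modifying the mathematical formula to penalize when a
# rule is broken". Actions, utilities and the penalty weight are hypothetical.

actions = {
    # action: (task utility, does it violate the norm?)
    "drive at 160 km/h": (1.0, True),   # fastest, but breaks the speed limit
    "drive at 100 km/h": (0.7, False),  # slower, norm-compliant
}

PENALTY = 10.0  # how strongly a norm violation is punished in the objective

def objective(action):
    utility, violates = actions[action]
    return utility - (PENALTY if violates else 0.0)

# The optimizer picks the action with the highest (penalized) score;
# with the penalty in place, the norm-compliant action wins.
best = max(actions, key=objective)
print(best)  # -> "drive at 100 km/h"
```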

Q. But then those algorithms are used by someone who can bypass those rules.

A. Of course, in the end intelligence is as ethical as the person using it. But our research focuses on seeing how we can make algorithms free of bias. It is a theoretical work for a future in which we imagine that we will coexist with sophisticated AI systems.

Q. What do you think of generative AI, the one behind ChatGPT or Gemini? What ethical problems does it raise?

A. The problems there have more to do with explaining what is generated, and with the fact that you cannot guarantee that what is generated makes sense. The algorithm doesn’t understand anything; all it does is find things similar to what you’ve shown it, put them together and generate something. The term machine learning can be misleading, because the machine has not learned or understood anything. It has a sophisticated mathematical formula that gets adjusted, so that if you ask it for an illustration of a cat, it will generate an illustration of a cat, but it does not understand what a cat is.

Q. It’s not yet known how these tools could affect certain profiles. One person killed himself after talking with an intelligent chatbot that encouraged him to take that step.

A. There are several things here. The first is a problem of ignorance: people do not know how these systems work. No matter how human-like the text it produces, it is only returning probable results. It is not at all intelligent, and even less emotional, although it can give that impression. There is also a problem when it comes to education. It is no longer just that students use ChatGPT to do their homework: if we all rely on these types of tools, human knowledge will disappear. The algorithm will make a mistake and no one will notice. And it has already been seen that many models invent answers. Tobacco packets say that smoking kills. The same should happen with AI.

Q. You are referring to a type of label or certification.

A. Exactly. The industry has grown rapidly and governments are always slower. We are at that moment in which there is a lot of development and little certification and regulation. I believe that in the end this will be fixed and we will even be better. But now is a dangerous time.

Q. What do you think of the European Union’s AI regulation?

A. I think it’s a good first step. In any case, perhaps we have been too permissive with generative AI. ChatGPT and other similar tools are language models: their virtue is writing text that sounds human, not writing text that is true. Yet companies are selling them to us as if they could do the latter. Can we be sure that putting a label that says “generated by AI” is enough? Incredibly, people ask ChatGPT things like which party they should vote for in the next election, whether they should hire a certain person, or what medication to take if they have such-and-such symptoms. And let’s not even mention questions like “I don’t want to live, what should I do?” I think more should be demanded of generative AI. There are issues these chatbots should not be able to talk about, and others where, if they are allowed to, guarantees should be demanded of them. Much of the debate so far has focused on copyright, which is also very important, but this other debate seems crucial to me as well.

Q. Should we be afraid of AI?

A. No, I think we should have respect for it. And we should demand as citizens that governments get to work and properly regulate it. We, the consumers, should not use products or services that we believe do not meet certain standards. If we all behave like this, we will force the industry to opt for more ethical options.
