Paula Gordaliza, mathematician: ‘Algorithms do not work alone. Whoever uses them should know what they are doing and why these decisions are being made’
The associate professor at the Public University of Navarra (Spain) has just received the Vicent Caselles Mathematical Research Award for her work to eliminate biases in artificial intelligence
Paula Gordaliza uses mathematics to try to make society a little fairer. The young researcher, winner of the Vicent Caselles prize awarded annually by the Royal Spanish Mathematical Society (RSME) and the BBVA Foundation, has developed a system to correct the bias of artificial intelligence (AI) algorithms, which can make predictions more accurately than a human expert. “The problem is that these decisions are not always socially responsible,” Gordaliza explains in a video call. A researcher at the Basque Center for Applied Mathematics in Bilbao and associate professor at the Public University of Navarra, she began studying a way to eliminate algorithmic bias during her PhD at the University of Toulouse, when AI was not yet under the scrutiny of regulators and public opinion. “Things have evolved very fast during these last five years. Now more than ever, it is important to work on the effects that artificial intelligence has on people’s lives,” says the researcher.
Question. Do you use artificial intelligence a lot in your work?
Answer. I like to remember that, first of all, I am a mathematician, and what I do is research in mathematics. My work consists of laying the theoretical foundations needed to develop any technology, in particular artificial intelligence. So I am more concerned with studying mathematical problems and with how, once these questions have been solved from a theoretical point of view, they can be applied to real problems. In my case, we are talking about machine learning and algorithmic fairness, which fall within the field of artificial intelligence.
Q. How are mathematics and artificial intelligence related?
A. Mathematics is behind all scientific and technological breakthroughs, and in recent years AI has been the most high-profile of them. What mathematics does is establish the theoretical basis for solving the problems we face, which in the case of my research would be the algorithmic biases of artificial intelligence.
Q. What is an algorithmic bias?
A. It is complex to explain, because these words have been used so much that they have taken on different meanings depending on the context. In statistics, something biased is something that does not behave as expected. If we go to the field of artificial intelligence, where the word is widely used, it refers more to inclinations or prejudices for or against a group or an individual on the basis of certain characteristics, such as gender or skin color. Perhaps that is part of why algorithms evoke fear and distrust in people.
Q. Why is that?
A. We are witnessing a widespread use of artificial intelligence systems, particularly algorithms, in aspects that directly affect people’s lives: granting credit, screening candidates for a job or, in the clinical field, deciding who receives a treatment or a diagnosis. There are many more examples, but these are perhaps the most common. And of course, the fact that algorithms can decide on these issues generates fear and uneasiness among the population. This will continue to happen as long as the public does not receive assurances that these algorithms are fair, reliable, and interpretable.
Q. What can be done to allay these fears?
A. This is where you see the importance of mathematics, which helps us to understand how algorithms work and is the tool to open the black box of artificial intelligence. It is important to get the message across that algorithms do not work alone, that whoever is using them knows what they are doing and why these decisions are being made. This would go a long way toward reducing people’s distrust of them.
Q. You spoke of prejudice and discrimination. Are algorithms racist?
A. Algorithms are not racist or sexist. Algorithms learn from data. Machine learning is a form of artificial intelligence that is able to make predictions and find connections in huge databases, which it can process at high speed. The problem comes when this data is not of high quality. For this reason, it is essential to commit to quality databases that are not biased with respect to variables that carry sensitive information, such as race, gender, disability, sexual orientation, or any other information that could lead to discrimination.
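To make that concrete, here is a minimal synthetic sketch in Python. It is not from the interview and uses invented toy data: a standard classifier trained on historically biased hiring decisions ends up reproducing that bias in its own predictions, even though nothing in the code tells it to discriminate.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy, invented data: historical hiring decisions with an unfair advantage for group 1.
rng = np.random.default_rng(1)
n = 5000
gender = rng.integers(0, 2, size=n)          # sensitive variable (0 or 1)
skill = rng.normal(size=n)                   # a legitimate feature
# Historical label: partly skill, partly the unfair advantage for group 1.
hired = (skill + 1.0 * gender + rng.normal(scale=0.5, size=n) > 0.5).astype(int)

# The model only sees the data, so it learns and reproduces the bias.
X = np.column_stack([skill, gender])
model = LogisticRegression().fit(X, hired)
pred = model.predict(X)
for g in (0, 1):
    print(f"group {g}: predicted hiring rate = {pred[gender == g].mean():.2f}")
```

In the sketch, the unfair advantage term stands in for the historical discrimination she describes; nothing else distinguishes the two groups, yet the trained model predicts very different hiring rates for them.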
Q. Is this what your research is about?
A. The idea was to try to create two population subgroups, for example, men and women, that were as similar as possible in all other characteristics. In this way, I tried to eliminate the gender information so that the algorithms were not able to learn about the gender of people and were left with the information provided by the rest of the database.
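Her published work rests on more sophisticated mathematics, which she does not spell out here, but the following illustrative Python sketch, again with invented data, captures the idea she describes: map each group’s values of a feature onto a common distribution, so that the repaired feature no longer reveals group membership while keeping each person’s relative position within their group. The quantile-averaging “target” below is a simple stand-in, not her actual method.

```python
import numpy as np

def repair_feature(x, group, n_quantiles=100):
    """Push each group's values of one feature onto a common distribution,
    so the feature no longer reveals group membership (illustrative sketch)."""
    qs = np.linspace(0, 1, n_quantiles)
    # Target distribution: quantiles averaged across the groups,
    # a simple stand-in for a "middle ground" between the group distributions.
    target = np.mean(
        [np.quantile(x[group == g], qs) for g in np.unique(group)], axis=0
    )
    repaired = np.empty_like(x, dtype=float)
    for g in np.unique(group):
        mask = group == g
        # Rank each value within its group, then read off the common quantile.
        ranks = np.searchsorted(np.sort(x[mask]), x[mask], side="right") / mask.sum()
        repaired[mask] = np.interp(ranks, qs, target)
    return repaired

# Toy example: a feature whose distribution differs between two groups.
rng = np.random.default_rng(0)
group = rng.integers(0, 2, size=1000)
x = rng.normal(loc=group * 2.0, scale=1.0)   # group 1 shifted upward
x_fair = repair_feature(x, group)
print(x[group == 0].mean(), x[group == 1].mean())            # clearly different
print(x_fair[group == 0].mean(), x_fair[group == 1].mean())  # now very close
```

After the repair, the two groups look statistically alike on that feature, so an algorithm trained on the repaired data can no longer infer gender from it and must rely on the rest of the information in the database.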
Q. Society has improved when it comes to discrimination. Why do so many biases still exist?
A. It is not a problem with today’s society. In the end, you are using a tool that learns from historical data, and that data is biased, so that is what the algorithm learns. Moving forward, what should be done is to promote research, because it is a matter of knowledge, and the frontier of knowledge is increasingly complex. If we want to improve, we need multidisciplinary teams made up of mathematicians, statisticians, computer scientists, and other professionals who contribute to the cause. All points of view are needed, not just the mathematical one, which itself still has plenty of room for improvement.
Q. In what way?
A. We should absolutely encourage careers in the academic and research fields, so that there is quality research focused exclusively on artificial intelligence, but with very solid mathematical foundations, to ensure that what is being done with algorithms is reliable, safe, and fair. To achieve this, young people need a reason to be attracted to this career, which in these times is quite difficult. It is important to improve the conditions of this profession, especially in the early stages. In the past, by the age of 30 you could already be a full professor; now, at 29, I am only just starting as an associate professor.
Q. When you talk about motivation, do you mean financial resources?
A. In part, but there are also other factors to take into account. For example, feeling that you are advancing in your career and getting more and more important positions. Feeling valued is essential to stay in Spain and keep trying.
Q. You did your doctorate in France. Do you think there are more opportunities abroad?
A. There are many opportunities abroad, but also in Spain. The mathematics and the research done here are of excellent quality. I have already had the experience of living abroad, and I am sure that throughout my career I will have other opportunities for international stays, which undoubtedly add great value and can have a big influence on your career. But my ultimate goal is to stay in Spain, where research, at least in my field, is advancing a lot, and the growing importance of artificial intelligence will give us plenty to work on.