Gemma Galdón, algorithm auditor: ‘Artificial intelligence is of very poor quality’

The founder of Eticas Consulting advises international organizations on identifying and avoiding bias. She is skeptical of the sector’s expectations: ‘To propose that a data system is going to make a leap into consciousness is a hallucination’

Gemma Galdón, an algorithm consultant and expert in ethics and artificial intelligence, in Madrid. Moeh Atitar

Artificial intelligence is not just for engineers. You can lean more towards soft than hard science and still become a point of reference in the global debate about the social and ethical repercussions of what these systems do. Gemma Galdón, a 47-year-old AI expert from Mataró, in Spain’s northeastern region of Catalonia, graduated in contemporary history and earned a PhD in technology-related public policies. She is the founder and top executive of Eticas Consulting, a company that examines algorithms to ensure their responsible use. “Being aware of how society has solved old problems gives me a useful perspective to work with new problems,” she says, sitting inside a coffee shop in Madrid. “Twelve years ago, when I got my PhD, there were very few people in the social sciences who worked with technology.” Her company currently advises European and American organizations. Galdón’s suitcase is packed: she is about to return to New York, where she lives and where she recently received a Hispanic Star Award, an accolade given to agents of change in the Spanish-speaking community at an event at the United Nations. She had to move to America, she says, because in the U.S. “the market is more receptive to responsible AI.”

Question. What is it like to audit algorithms?

Answer. Well, it involves inspecting artificial intelligence systems to see how they work, but above all to ensure that their impact on society is fair, that there is no discrimination, and, furthermore, that these systems do what they claim to do.

Q. And what problems do you encounter?

A. At first these systems are just as biased as society, but after a very short time they become much more discriminatory than society. That is because what AI does is take a lot of training data and look for a pattern. And the pattern is always a white man with a stable job; in the case of banks, this will be their ideal client. Any profile that belongs to a minority or is anecdotal is eliminated from the sample. So a woman has much less chance of being diagnosed with endometriosis through AI, because historically endometriosis has gone underdiagnosed.
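
The mechanism she describes can be sketched in a few lines of code. In this toy model (all data is invented, and scikit-learn is just one convenient tool), a condition is equally common in men and women, but the historical records the model learns from captured only a fraction of the cases in women, so the model predicts lower risk for a woman with identical symptoms:

```python
# Toy illustration (invented data): a model trained on historically
# under-diagnosed labels reproduces that gap.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
female = rng.integers(0, 2, n)         # 1 = female, 0 = male
symptoms = rng.normal(size=n)          # same symptom distribution for all

# The condition depends only on symptoms...
has_condition = symptoms > 1.0
# ...but historical records captured only 30% of the cases in women.
recorded = has_condition & ((female == 0) | (rng.random(n) < 0.3))

X = np.column_stack([symptoms, female])
model = LogisticRegression().fit(X, recorded)

probe = np.array([[1.5, 0], [1.5, 1]])   # identical symptoms, different sex
print(model.predict_proba(probe)[:, 1])  # lower predicted risk for the woman
```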

Q. There are those who say that AI cannot be thoroughly examined because not even its creators fully understand how it works: it learns on its own.

A. False. That idea of the black box is a myth, pure marketing. I think there is a certain desire on the part of the AI sector to portray it as something magical, to make us believe it is something we cannot understand, and to take away our ability to intervene. What we have seen is that we can audit it when a client hires us and shows us practically everything, but we can also reverse-engineer a system from the outside and see how it works based on its impacts.
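
The outside-in approach she alludes to can be sketched minimally. This is an illustration, not Eticas’s actual methodology: `score_applicant` is an invented stand-in for a deployed scoring system (in practice, an API call), deliberately biased here so the audit shows a gap. The audit compares approval rates between matched profiles that differ only in a protected attribute:

```python
# Sketch of a "black box" audit: probe a scoring system with matched
# profiles that differ only in one protected attribute, then compare
# approval rates between the two groups.
from statistics import mean

def score_applicant(profile: dict) -> float:
    # Hypothetical stand-in for the system under audit; penalizes one
    # group on purpose so the example produces a visible disparity.
    base = profile["income"] / 100_000
    return base - (0.2 if profile["sex"] == "F" else 0.0)

def approval_rate(profiles, sex, threshold=0.5):
    # Approval rate when every profile is assigned the same sex.
    return mean(score_applicant({**p, "sex": sex}) >= threshold
                for p in profiles)

profiles = [{"income": i} for i in range(30_000, 90_000, 5_000)]
gap = approval_rate(profiles, "M") - approval_rate(profiles, "F")
print(f"approval-rate gap between matched groups: {gap:.2f}")
```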

Q. You have advised political institutions on regulating AI. What do they want?

A. What has happened in recent years is that legislators, with very good intentions, have produced a very abstract set of regulations, very much based on principles, and the industry has complained about the lack of concrete practices. We have an industry born in the image of Silicon Valley, accustomed to that idea of “move fast and break things,” without being aware that what it could break are fundamental rights or laws. Sometimes there is a certain obsession with asking for the code or the foundation models. Those have never been useful to me: that is a level of transparency that is not useful for auditing, for inspecting impacts. If you know that there is going to be an inspection in which certain metrics will be evaluated, then you have to start making changes. With this we change the incentives of the technology industry, so that it takes into account impact, bias and any kind of dysfunction.

Q. Are you disappointed or satisfied with the AI law that the European Union has agreed on?

A. It seems to me a giant step in regulation: it is the first law on these issues in the West. What disappoints me is Europe’s role in going further, in creating a market linked to responsible AI. Both the United States and Asia, especially China, are really getting their act together on this.

Artificial general intelligence is as close as when Plato spoke about the possibility of other kinds of worlds

Q. Is everything that is presented as artificial intelligence really AI?

A. We are surrounded by very poor quality artificial intelligence. It is not even an issue of bias; it simply does not do what it says it does, and it makes decisions that a human would never make. An example is the system that was implemented to evaluate teacher performance in the school systems of several U.S. states. Some teachers who saw how their ratings differed between the manual and the algorithmic evaluation took it to court. The court ordered an audit, and it emerged that the only inputs taken into account to decide whether you are a good teacher were your students’ results in math and language exams. It’s a glorified Excel spreadsheet. If the principals of those schools had been offered this system as a spreadsheet that records math and language test results, they would never have bought it.
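
As a rough illustration of what “a glorified Excel spreadsheet” means here, the audited logic reportedly reduced to averaging exam results. The weights and numbers below are invented, not taken from the actual case:

```python
# Illustrative reduction of the teacher-evaluation system: despite the
# "AI" label, the score depended only on students' math and language
# exam results. The 50/50 weights are an assumption for illustration.
def teacher_score(math_results: list[float],
                  language_results: list[float]) -> float:
    """Average student exam results -- a one-line spreadsheet formula."""
    avg_math = sum(math_results) / len(math_results)
    avg_lang = sum(language_results) / len(language_results)
    return 0.5 * avg_math + 0.5 * avg_lang

print(teacher_score([6.1, 7.4, 5.8], [6.9, 7.2, 6.0]))  # => 6.566...
```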

Q. Will responsible AI prevail?

A. I am an optimist. When we audit, we find systems that are biased and also perform poorly. Artificial intelligence is of very poor quality, and at some point the industry is going to have to do better. These systems were born from entertainment tools like Netflix, which can afford a high margin of error: if the movie Netflix recommends is not the one you want to watch, that’s okay. But if AI is going to work in the medical field, recommending treatments; or in personnel selection, deciding whom we hire or fire; or in the allocation of public resources... it has to work well. Right now, the AI we are accepting is not only biased, it also doesn’t work well. The good thing is that both problems are solved at the same time: when the problem of bias is addressed, the other inefficiencies are addressed too.

Gemma Galdón, on November 27 in Madrid. Moeh Atitar

Q. The departure and reinstatement of Sam Altman as CEO of OpenAI has been linked to an alleged sensational advance towards artificial general intelligence (AGI), or superintelligence, something that would threaten humanity. Do you believe it?

A. Artificial general intelligence is as close as it was when Plato spoke about the possibility of other kinds of worlds and lives. Humanity has always dreamed of automatically reproducing human consciousness; we have always been able to dream up science-fiction futures. The current debate about AGI has nothing to do with today’s technological capabilities.

Q. Aren't machines going to surpass humans?

A. The way we humans think, creativity, the new, has nothing to do with AI. Here’s a very simple exercise: if we give a system all of Picasso’s work before 1937 and ask it what Picasso’s next painting will be, it will produce any old thing. Yet in 1937 he painted Guernica. People evolve in the way we express ourselves, love, work, write and create. Proposing that at some point a statistical, mathematical data system will make a leap into consciousness is a hallucination.

Q. What ChatGPT does when it makes up answers to questions is also called hallucination. It’s unreliable, right?

A. Take the case of a lawyer who defends victims of pedophilia: ChatGPT produced a biography of him that made him out to be a pedophile. Why? Because his name appears alongside that word more often than alongside any other, so the system associates the word with him, and that’s it.
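
The association mechanism she describes can be mimicked with a few lines of counting. The corpus and the name “Smith” below are invented; the point is that raw co-occurrence statistics cannot distinguish a defender of victims from an offender:

```python
# Toy co-occurrence count (invented corpus): association by proximity,
# with no understanding of roles, is how a name gets mislabeled.
from collections import Counter

corpus = [
    "lawyer Smith defends victims of pedophilia",
    "Smith represents pedophilia victims in court",
    "pedophilia trial: Smith speaks for the victims",
]

cooccurrences = Counter()
for sentence in corpus:
    words = sentence.replace(":", "").split()
    if "Smith" in words:
        cooccurrences.update(w for w in words if w != "Smith")

print(cooccurrences.most_common(3))
# "pedophilia" and "victims" top the list: the statistics alone cannot
# tell whether Smith defends victims or harms them.
```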

At some point we have to consider removing polluting technologies from circulation, such as cryptocurrencies.

Q. You study the social impact of AI. What about the environmental impact? Because data centers have become big wasters of water and energy.

A. It doesn’t make any sense that right now, when an environmental audit is done at your company, they come to see what kind of light bulbs you have but don’t look at where your servers are located and how far the information has to travel. There has been no will to quantify the environmental impact of data processing, or to encourage the industry to place servers closer to where the information is produced. It is a debate we have not had yet. In the era of climate change, it makes no sense that almost everyone talks about technology as the solution and not as one of the problems.

Q. And that’s before we even mention cryptocurrencies, considering their energy consumption.

A. Just as we are removing polluting cars from the streets, at some point we may have to consider removing polluting technologies from circulation. Maybe we have to start prohibiting blockchain architectures when the social value is not there. What cryptocurrencies provide is a speculative tool, an investment mechanism closer to a pyramid scheme... If they were saving lives, I would say: okay, it’s still justified.

