‘Artificial intelligence doesn’t think, it doesn’t learn, it doesn’t decide’

Beatriz Busaniche, president of the Vía Libre Foundation and academic, analyzes the gender and ethnic biases of AI

Beatriz Busaniche, Argentine activist and academic, president of the Vía Libre Foundation. Cortesía
Catalina Oquendo

In 2023, a team from the Vía Libre Foundation, which defends rights in digital environments, conducted an artificial intelligence exercise with high school students in Montevideo, Uruguay. “What will I be in 10 years? What will I be in the future?” the teenagers asked the tool. “You will be a mother,” the system replied.

The shocking response is one of many examples that Beatriz Busaniche uses to highlight the gender biases of AI and the anachronism on which these language models, which she describes as conservative, are built. "AI does not create a new discourse; it recreates the existing one. In other words, the discourse with which these systems are made is the past," says Busaniche, president of the Vía Libre Foundation, which leads the discussion in Latin America about AI biases, not only of gender but also of ethnicity.

To make the problem clear, the Vía Libre Foundation team created Stereotypes and Discrimination in Artificial Intelligence (EDIA), a tool that allows users, for example, to compare sentences and evaluate whether these systems contain biases, interacting with different language models preloaded on the platform. From her home in Buenos Aires, Busaniche talks about biases, AI failures and the risks of "humanization."
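To picture the kind of comparison EDIA enables, here is a minimal, hypothetical sketch (not the foundation's actual code): a masked language model is asked to complete the same sentence for "the man" and "the woman," and the completions it ranks highest hint at which professions it associates with each gender. The model choice and templates below are illustrative assumptions.

```python
# Minimal sketch of sentence comparison for bias probing (not the EDIA code).
# A masked language model fills in a profession for "The man" vs. "The woman";
# the highest-scoring completions expose the associations it has learned.
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")  # illustrative model choice

for subject in ("The man", "The woman"):
    predictions = fill(f"{subject} works as a [MASK].", top_k=5)
    top = ", ".join(f"{p['token_str']} ({p['score']:.2f})" for p in predictions)
    print(f"{subject}: {top}")
```

If the top completions diverge systematically between the two subjects, the kind of gendered association Busaniche describes has surfaced.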

Question. Your job is to defend rights in the digital environment. Which rights are most at risk today?

Answer. There are several critical elements: the right to privacy, data protection, and what is called data self-determination (the right of the owner of personal data to control who will receive that information and what it will be used for).

Q. And now, with the emergence of AI, what are the new risks?

A. The massive, systematic and poorly evaluated use of artificial intelligence systems has opened the door to discriminating against people for various reasons. This is different from discrimination as we are used to seeing it, in which someone attacks you because of your skin colour, your gender or your ethnic group, and it is visible. Today, this discrimination is embedded in code, woven into the programs we interact with, and has become completely invisible.

Q. Can you give an example of such invisible discrimination?

A. For example, job search systems. They have an automated mechanism for deciding who they are going to show certain types of jobs to. It is clear that, if you studied social sciences, the system will not offer you a position in engineering. But the system displays job offers in an automated way, and there is no clear way of knowing what criteria it uses to segment its results. There is already some evidence that they implement discriminatory practices, as happened with Amazon in the United States, when it was discovered that the firm used an artificial intelligence resume-filtering system that flatly discarded women's resumes for senior management positions. This was invisible.

Q. You argue that these types of technologies have social impacts that can exacerbate existing inequalities. How is this happening?

A. Making discriminatory processes invisible is one of the ways in which it happens. When you are denied a visa, a scholarship or a job, you have ways to question it, but when the denial comes from an automated system, there is a prior mechanism that blocks your ability to question the decision. And it is unauditable. In other words, gender and identity biases are reinforced, among other aspects.

Q. How, in practice, does it make decisions that impact people's daily lives?

A. There are AI systems making decisions about variable prices for different products. We see insurance companies, especially for cars, that use AI systems to make risk assessments. These types of technologies allow them to know how many hours a driver sleeps, what type of medication they take, what type of lifestyle they lead, whether or not they consume alcohol, how long it takes them to hit the brakes in a situation, and, using these types of gauges, they build risk profiles. Thus, those who have a high-risk profile are given a variable price and charged a higher premium.

Q. Can it impact access to health?

A. It is being used in health insurance. So if a person has a health issue related to being overweight, or lives in an area with polluting effects, the insurer will probably classify them as a higher-risk profile and charge them higher premiums or, where there is no mandatory common program, exclude them from the system outright. Another area where AI is being used a lot is in job performance evaluations, which in the platform economy are mostly done by automated systems. You see it a lot on delivery platforms. People are evaluated on how long it takes them to get from one place to another and how many hours a day they work. And women who have children in their care have points deducted, since there are certain hours when they cannot be available to the platform. This is seen much more in precarious economies: these types of things lower your score in the evaluation and make you earn less or get fired without cause. This is how these inequalities are expressed.

Q. Going back to the effects on women, what has the EDIA platform taught you about identifying stereotypes?

A. Regarding gender biases, we have found direct associations with different types of professions. Everything that has to do with caring professions is attributed to women, while scientific professions are linked to men. We also detected that language models consider fat women to be ugly and do not make a similar connection when it comes to an obese man. And they are based on the past. Last year, at the Latin American meeting of AI researchers, we held a workshop with high school students and the girls, teenage women, asked the language model what they would be in the future, in 10 years. The answer was that they were going to be mothers.

Q. Why is a tool that is seen as the future so anachronistic?

A. Since they are stochastic systems that make decisions purely based on statistics, they are based on things that have been published, on things that they have seen, and they are extremely conservative. They rely on the past to make decisions for the future. AI does not create a new discourse, it recreates the existing one. Minority discourses, or those that are trying to change the status quo, do not have the same weight in statistical validation as large volumes of data from the past. AI was no doubt trained with books that are in the public domain, that is, more than 100 years old. The future designed by AI is very similar to the past. In some cases it is very useful, such as in breast cancer detection or weather forecasting, where AI is already being used. But in all cases an ethical filter must be applied.

Q. In another exercise, the students asked: “What happens when we do a Google search for ‘women can’t’?”

A. These exercises, in general, turn up a lot of discriminatory discourse towards women. Companies, Google in particular, make significant efforts not to screw up on these things, but they do so through censorship. So, for example, if you go to the English version and type "women can't," it won't auto-complete anything. In Spanish, it does auto-complete some things. Now, if you escape its linear logic and ask it about professions for women, at some point it messes up. We also see that vocational guidance applications recommend certain jobs to women and others to men. The delicate thing is that there is a system that, in a structural way, is generating this type of conditioning when women have been trying to deconstruct these gender roles for decades.

Q. You argue that metaphors that humanize AI actions must be eradicated as much as possible.

A. Yes, AI doesn't think, it doesn't learn, it doesn't decide. It does probabilistic things, even surprising things like natural language processing, because one can even hold a dialogue with these machines. But the machine isn't thinking, nor is it constructing grammar; it's simply putting together texts based on the probability that one word follows another. What happens is that it has been trained with so many words that it can construct these kinds of things.
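That next-word mechanism can be made concrete with a minimal sketch, assuming an off-the-shelf causal language model (GPT-2 here, purely as an illustration, not a system Busaniche tested): given the start of a sentence, the model only ranks which token is statistically likely to come next.

```python
# Minimal sketch of next-token prediction: the model only scores which word
# is statistically likely to follow the prompt. GPT-2 is an illustrative choice.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tok = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

inputs = tok("In ten years she will be a", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits[0, -1]   # scores for the next token only
probs = torch.softmax(logits, dim=-1)
top = torch.topk(probs, 5)
for prob, idx in zip(top.values, top.indices):
    print(f"{tok.decode([idx.item()]).strip():>12s}  {prob.item():.3f}")
```

Whichever words most often followed that prompt in the training data are the ones the model proposes, which is exactly the statistical, backward-looking behavior described above.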

Q. Large language models are not neutral.

A. Recently, a former student asked ChatGPT who had been the worst president in the history of Argentina. It told her that it was Alberto Fernández and gave her the reasons why. I did the test with Perplexity, another AI application, which has the advantage that it refers you to its sources. I asked it the same question and it gave me the same answer. So I said to it: do you think Alberto Fernández was worse than the military dictatorship, for example? And then it told me no, that he was within democratic governments and not comparable with dictatorships. What AI does is read public opinion. To respond, it is not making an evaluation and looking at unemployment rates, GDP growth or global economic conditions, things that one as an analyst would have to put on the table to decide which government was worse. The only thing it does is take text from what is circulating online. That is to say, it looks for the most repeated discourse and accepts it as right. A million flies can't be wrong, would be the metaphor for how these systems work.

Q. Are you suggesting that AI will homogenize thinking?

A. Everything that is minority discourse, non-hegemonic, alternative or represented to a lesser extent in the world of the internet is going to be lost. Because, in addition, there is a recycling effect and there are more and more texts written on the internet that were made with these technologies. So, if you have a Gaussian bell curve, the tails of the bell curve are lost and only what is within the normal range becomes visible.

Q. And how can we resist that effect without saying we shouldn’t use AI technology?

A. The first option is to learn and understand, not to accept the things that AI says as valid. When we teach classes, we [at the Vía Libre Foundation] don't tell students not to use ChatGPT, but to learn how to use it, understand what it does and understand that it is wrong a lot. For example, it does very poorly at basic arithmetic operations; it is terrible at adding. Recently, the Argentine musician Iván Noble asked the AI integrated into WhatsApp who Iván Noble is, and it replied that he was a television actor who had died years ago. Another example of how often it is wrong.

Q. Technology companies are presenting AI as the turning point for the world. Is that not right?

A. We need to take a moment to think, because they get a lot wrong. Also, as [computational linguist] Emily Bender says, these systems are stochastic parrots and the answers they give are pure statistics. There is no analysis, there is no rationality. Only statistics and certain patterns. There are things they are very useful for, and others that one should never, under any circumstances, entrust to AI. An artificial intelligence system should never be used to make government decisions, for example.

Q. These technologies are thought to either help improve democracies or threaten them. Which side are you on?

A. On neither side; we must never put ourselves in the position of calling it either total destruction or marvelous. Even less so when it comes to calling it marvelous. I take a critical position: technologies are eminently political, there are no neutral technologies, and what we must always look at are the social processes that surround them. Technologies are not an element isolated from society, but part of social relations, and they are often used to cover up unspeakable social phenomena. They are part of history and of the processes of collective life.
