Paola Ricaurte, researcher: ‘Large technology companies are allies of authoritarian governments’
An expert in artificial intelligence and feminism, the Mexican-Ecuadorian professor speaks with EL PAÍS about how big tech is building a model of the world that will deepen inequality
Paola Ricaurte Quijano was born in Bogotá by chance, grew up in Ecuador and has made her home in Mexico. She likes to be called Mexican-Ecuadorian. An associate professor in the department of media and digital culture at the Monterrey Institute of Technology and Higher Education, she’s also a faculty associate at the Berkman Klein Center for Internet & Society at Harvard University. At the latter — when the subject still wasn’t even being discussed — she began to study the effects of artificial intelligence (AI) on the Global South, as well as “technology as a space for colonial reproduction.”
Today, she warns that, if the rules of the game surrounding technological development aren’t changed, inequality will deepen. “Sociotechnical systems aren’t a natural outcome, but rather a set of decisions that drive a new model of the world. We live in a world dominated by one type of technology and a handful of actors,” she affirmed during her speech at the DemocracIA Forum in Buenos Aires. The event was organized by Civic Compass, Luminate, and the International Fund for Public Interest Media (IFPIM).
Ricaurte Quijano is a co-author of the AI Decolonial Manyfesto, co-founder of Tierra Común — an academic initiative for the decolonization of data — a promoter of the debate on feminist AI, as well as a member of the Alliance for Inclusive Algorithms. “In 2018, when I arrived at Harvard, everyone was talking about artificial intelligence. I began to investigate: what is this? How is it consumed? How does it impact the Global South?” she asked in Buenos Aires, while in conversation with Women Leaders of Latin America.
Question: Six years later, and with everyone now focused on the subject, how is AI consumed today?
Answer: Today, AI is on the public agenda… but not in the way I would like it to be. To begin with, it has a rather unfortunate name, because it’s neither intelligence, nor is it artificial. Under this label, a number of very diverse things are grouped together. Firstly, it’s a field of study, but it’s also a set of very diverse technologies and techniques. So, when we want to talk about AI in the public debate, we have to specify what exactly we’re talking about.
Q. What would be a good definition, then?
A. As a technical definition, I like the one used by the OECD: “A system based on a machine that can, for a human-defined objective, make predictions, recommendations, or decisions that influence real or virtual environments.” But the problem associated with the conception of technology is that we think it’s only a mathematical or mechanical procedure when, in reality, these are sociotechnical systems. That is, they emerge from society and, therefore, drag along all the social conditions of their production.
Q. You’ve spoken extensively about a series of non-natural decisions that drive a “new model of the world.” What’s that model like?
A. When we think of technology as part of these social systems, we [have to] understand that we’re not just talking about hardware, algorithms, data, or applications: we’re talking about knowledge production systems, institutions, regulations, infrastructures. We’re talking about social interactions, natural resources, labor.
Q. Regarding the AI-based model that’s being built: what does it propose and who are the dominant actors behind it?
A. What we’re experiencing must be read in terms of historical processes and social relations. In the history of humanity, dominant technologies have served to control nature and to foster an idea of what society, work, and relationships should be like. So, the sociotechnical systems we see today respond to this vision of the world, where the important thing is the accumulation of money, [or] the increase in productivity and efficiency. The problem is that this narrative doesn’t take into account that the world is finite and that the technological development of these sociotechnical systems widens social gaps.
Q. You’re of the opinion that, if we don’t change the rules of the game, inequality will increase.
A. These systems widen inequality because they concentrate all the money, power, natural resources, data, and knowledge in the hands of a few people. They’re systems that are made to optimize the extractive processes of dispossession and violence. I’m very critical of that. I don’t want the model of the world that these hegemonic technologies are reproducing; I want other technologies that contribute to a model where human rights and care are at the center, both for the planet and for people, particularly for those who are suffering from the impacts of technological development in a disproportionate way. Corporations are developing technologies that contribute to this accumulation of wealth and social inequality. But the technologies that we need are technologies that help people like us, [as well as] our most marginalized populations.
Q. World leaders have just met at the UN to address the future. What should they be proposing when it comes to AI?
A. One of the central issues is the governance of these systems. We’re discussing technologies developed by a few corporations, but they’re being used worldwide. Their impact is transnational: it transcends borders and, oftentimes, ideologies. [And the firms’] discourse — which is associated with this idea of advancement and progress — is easily appropriated in contexts of inequality and violence, like those in Latin America. It’s often said that, “well, we’re going to use AI technologies to promote public safety.” That’s when governance matters and questions arise: who are these world leaders who discuss AI? From which countries are they speaking?
Q. The Latin American perspective isn’t taken into consideration by the global debate…
A. There are very few leaders who are controlling the debate and they do it in their favor, so that these technologies are quickly appropriated and governments and citizens continue to be clients. Under this model of the world, companies regulate themselves and aren’t accountable to societies: strong regulations aren’t developed, nor are fiscal policies imposed. When we think about how the global debate is going, [those of us in the Global South] have to be very clear about what those forces and tensions are. We have to insert ourselves [into the debate], because nobody is really defending our rights.
Q. How could Latin America reverse this situation?
A. Our countries have very little negotiating capacity. Europe, on the other hand, is acting as a regulatory front. While [the EU] doesn’t stand out for its capacity to develop the technology, it’s investing a lot of money to compete. What [the EU members] did was ask: “how do we protect ourselves?” Well, [they decided to impose] regulations. They’re very intelligent, because they proposed a sovereign perspective of artificial intelligence at a regional level. But this isn’t a policy focused on rights, either: it’s one that benefits European companies. Why can’t we — when we have a population the size of North America and Europe combined, [as well as] the resources, the workforce, the data — do the same? In other words, we would have more negotiating capacity if we were to articulate a different perspective on the subject of artificial intelligence. But we’re not doing it, because the companies are lobbying [against this alternative vision].
Q. Brazil seems to be going it alone when it comes to regulating AI.
A. Yes, it’s doing it alone. And then we see these narratives that Brazil is authoritarian, when it’s simply putting on the brakes. A foreign company cannot have more political and economic power than a nation state. That’s why I say that this is a direct risk to democracy. Now, many of the governments in the region are developing policies, but they’re aligned with the interests of companies. We’re not seeing laws for the protection of personal data, articulated regulations, or a public policy vision.
“Since these systems aren’t designed for us, all they do is reproduce longstanding inequalities”
Q. Given these uneven playing fields, what options do we citizens have?
A. There’s always the option to make decisions. And I think that, sometimes, we minimize the capacity we have to choose our governments and demand a vision and a policy that defends our rights. Citizens in Latin America and organized civil society groups have always been at the forefront, but we also lack articulation. [We need to] understand that this isn’t a technological issue. We’re talking about a model of society, of the defense of democracy and human rights. We’re talking about the fact that we don’t want inequalities or violence to increase in our countries and about how these systems are contributing to aggravating the problems we already have.
Q. How do they aggravate them?
A. Inequalities are reproduced at the macro level by the concentration of resources and power. The gap between countries that have control over technological development and those that don’t is vast. Then, at the country level, what happens when these systems are installed by governments for public decision-making purposes, regarding access to health services, education, social services, jobs? Well, since these systems aren’t designed for us, all they do is reproduce longstanding inequalities.
Q. In which specific cases can this be observed?
A. For example, predictive systems for preventing teenage pregnancy end up being used to control women’s bodies instead of solving the problem of violence behind it. The same goes for preventing the risks of teenagers committing delinquent acts. Why don’t [policy-makers] address the social issues that put vulnerable teenagers at a disadvantage in the first place? The problem is that governments adopt these sociotechnical systems under the premise that technology will solve a social problem… and it does exactly the opposite. [These technologies] reproduce social classification. If we look closely, these systems act against the poor, women, and people in situations of social mobility. Never against the rich. We don’t create [AI systems] to predict how much the rich will steal or evade taxes… [they’re only made] to deal with the most precarious population. So, not only do these systems not work, but our governments also contribute to reproducing inequalities.
Q. What should feminist AI be like?
A. There are many discussions about gender and racial biases in AI. It’s important to understand technologies as power relations. If you don’t change the power matrix, you’ll continue to have systems that may work, but they’ll continue to reproduce these asymmetrical power relations in society. Therefore, our perspective has to be feminist. We want to [combat] the causes of inequality and asymmetrical power relations and structural violence that are reproduced through these systems at all levels and throughout the life cycle of AI. I want us to consider how [AI] affects labor relations, democratic systems, the environment… as long as we don’t have this slightly more complex vision, we’re not solving any problem.
Q. When we talk about actors in this new model of the world, what role does mass media play?
A. The most urgent task we need from the media is for [outlets] not to follow hegemonic narratives regarding these systems and for them to have a critical position on what they mean for our societies. There’s a very important space for people to be able to understand the concrete, material impacts of the development of these systems on their lives and on those of their people and territories. [The media has to consider] the socio-environmental impact of data centers (which require energy and water to cool them), the whole labor issue, the extraction of minerals (which are used for technological development, such as batteries), the opaque supply chains and the fact that we don’t know the place that Latin America occupies in this updated international division of labor.
Q. What’s the most urgent discussion regarding AI?
A. We need to understand how the development of these technologies is directly linked to political systems and democracy. These large companies are very close allies of authoritarian governments and are driving the automation of violence on a large scale.
Translated by Avik Jain Chatani