
Kate Crawford: ‘We need to have a much more comprehensive form of AI governance’

Considered one of the most respected researchers in the industry, the Australian expert warns about the political and social implications of the misuse of advanced technology

Ana Vidal Egea
Kate Crawford, in New York. Víctor Llorente

In 2016, when Kate Crawford warned that the design of artificial intelligence was leading to discrimination, few people were aware of the social impact it would have. At the time, Crawford gave several examples, the most prominent of which was Google’s photo app, which labeled Black people as gorillas. Although Google publicly apologized, different companies have repeated such blunders over and over again. Crawford says the problem resides in the data being used, which is riddled with bias. This discriminatory data is used to train algorithms and build models that perpetuate a sexist, racist and classist society, because artificial intelligence (AI) reflects the values of those who create it: primarily white men.

Today, at 47, she is regarded as one of the most respected researchers in the AI industry. She has devoted her entire professional career to studying the political and social implications of the misuse of technological advances. During the Obama era, she moderated a symposium on the subject at the White House and has advised the European Commission and the United Nations, among other organizations. She also pioneered many initiatives that are now crucial. In 2017, she and Meredith Whittaker founded AI Now, one of the first institutions dedicated to analyzing the impact of artificial intelligence in the social sphere. And in 2019-2020, she was the inaugural visiting chair in AI and Justice at the École Normale Supérieure in Paris.

Crawford currently works as a senior principal researcher at Microsoft Research, and, among her many other pursuits, she writes. We caught up with her to talk about her latest book, Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence (Yale University Press, 2021), which the Financial Times named one of its best books of 2021. Our interview takes place at New York’s Museum of Modern Art (MoMA), which houses a work that Crawford made with artist Vladan Joler in 2018. The piece is an infographic that shows the human labor, data and resources required during the lifetime of a single device, from the time it is manufactured until it is discarded, using the Amazon Echo (Alexa) as an example. Although Anatomy of an AI System has been part of MoMA’s permanent collection since 2022, Crawford stands in line to enter the museum like any other visitor. We met her as she was waiting to buy her ticket, dressed comfortably in a suit jacket and white sneakers. She was relaxed and approachable, and greeted us with a warm smile that remained on her face throughout the entire interview.

Q. What encouraged you to create this infographic explaining an AI device’s birth, life and death?

A. To me, art has to enlighten and provoke, and Anatomy of an AI System achieves that. I have always believed in the power of critical mapping, and I find it very interesting to graphically show something as complex as how a system functions. I spent two and a half years researching. I traveled to all the key places in the life of an AI system, from where they are born to where they are discarded (Ghana, Pakistan). The piece was exhibited in 60 places before MoMA bought it; it has been successful. But it’s very light compared to my latest installation, Calculating Empires, which I’ll be presenting later this year. It consists of four giant maps that trace the relationship between technology and power since 1500; it aims to offer a different way of looking at the current technological age in historical depth, showing the myriad ways in which power and technology have intertwined over five centuries. It took me four years of research to create it.

Q. Your studies and PhD had nothing to do with technology. How did you end up becoming one of the world’s foremost researchers in the field of AI?

A. I’ve always been interested in the politics of technology, which was certainly part of my PhD. But what was really interesting to me was [that] the more I focused on the shift to the internet and large-scale data, [the more I started] to see…the enormous social transformations that this would have…At that time…no university specialized in that. It’s funny, I actually created the first course studying digital media and its politics at the University of Sydney back in 2002. Of course, I’ve always been somebody who’s coded, you know, built my own technologies…When I was a teenager, I started writing music…with samplers, computers, and that really means that you start doing your own coding. And I was in a feminist electronic band called B(if)tek, and that band was very much about thinking about the relationship of gender and technology in particular.

But the big change came when I was working as the director of a research center in Sydney and the Massachusetts Institute of Technology (MIT) invited me to be a visiting professor. I was also invited to the Microsoft Research lab in New England… They offered me a job. It was the moment when Microsoft was really moving into machine learning, and it was very clear to me that it was a pivotal moment, that this was going to have an enormous impact. So, to me—someone trained in the social sciences—[being] able to see inside how these systems work was an extraordinary privilege, and it became the bedrock of really understanding…the technical side but also, just as importantly, the sociotechnical side.

Q. You’ve been studying AI for 20 years and say we are now experiencing its most dramatic inflection point.

A. It’s gone from being something that I studied for the last 20 years, something that was clearly influential but very often in the background, to this year becoming, honestly, the most rapidly adopted technology in the world. ChatGPT is the fastest adopted technology in history. That’s a huge shift.

Q. And at the same time it’s exploiting workers and polluting the planet?

A. It’s really important to understand that there are people who do what’s called reinforcement learning with human feedback. These are workers, often in the Global South, who are essentially doing content moderation for companies that make AI. In fact, an investigation in Time magazine showed that workers in Kenya were being paid less than $2 an hour to do data cleaning for AI and ChatGPT…cleaning out toxic [content] and giving human feedback. This can be very traumatizing work…it’s dehumanizing. So, there’s always this human layer [in AI].

And then, on the other side, just as important, is that these are systems that are very environmentally demanding. So, if we look at generative AI, doing a search uses five times more energy than a traditional search. That is a huge carbon footprint that in many cases is hidden and unseen by most people. Another really important thing to understand about ChatGPT is that every time you make a query, it’s the equivalent of pouring out a large bottle of fresh water, because it’s also hugely water intensive. Water is used to cool the giant data centers…Each query is effectively around a half liter of water poured into the ground. So, these are the environmental costs—the deep environmental costs—of this turn to AI.

Q. Your latest paper, published with researchers from Harvard and MIT, is entitled “How AI Fails Us.” How could the situation be improved?

A. The dominant view conceives intelligence as autonomous rather than social and relational. That is an unproductive and dangerous perspective, because it optimizes artificial metrics of human replication rather than evolution. It tends to concentrate power, resources and decision-making within a small elite in the technology sector. We put forward an alternative vision based on social cooperation and equity. Wikipedia could be a model. My hope is that more organizations will work toward political and technological pluralism, which would involve a diversity of approaches and tools, regulatory protections, and benefits that are shared [among] many [people].

Q. What is the most important question we need to ask about AI today?

A. The most important question is how we are going to ensure that generative AI systems are equitable and that they encourage human flourishing, rather than concentrating power and increasing inequity… And this is something that is widely recognized. The best example we have is the AI Act in the EU, which has been years in the making.

Q. The United States is the leading country in generative AI and, paradoxically, it’s one of the laxest in terms of regulation.

A. The U.S. has some of the weakest regulatory protections in the world. We have weaker privacy regulation than even China, and it’s becoming increasingly dangerous because we need to have guardrails on these technologies. We need to have a much more comprehensive form of AI governance, which I see as one of the most urgent issues facing us right now. I have worked with Laura Poitras [a prominent documentary filmmaker and the winner of an Oscar and a Golden Lion, among many other awards] for years, and she gave me early access to the Snowden documents. For me, at the time, it was just mind-blowing, because you could see how so many of the machine-learning techniques for tracking and surveillance were being used on civilians.

Q. Michael Hayden, former director of the US National Security Agency and the CIA, famously said, “We kill people based on metadata.”

A. This is something that multiple presidents have practiced with the help of drones. I’m talking about George Bush, but Obama expanded the use of drones. People suspected of being terrorists are hunted down based on their metadata. And they are killed before confirming whether they are [in fact terrorists]; the assumption is that the data is conclusive. This is a violation of human rights. And it is especially serious because we all know that AI can make mistakes; it suffers from hallucinations.

Kate Crawford. Víctor Llorente

Q. Everyone else is being watched as well.

A. There’s an issue where perhaps people are less concerned because they think it’s happening to terrorists or it’s happening to people who are far from home. This idea that we’re always being understood and followed and tracked by our metadata now has become normalized. I think in some ways people have accepted it. And this is very commonly happening with refugee populations…A whole lot of metadata sources [were] used to try and tell if a refugee is a terrorist. And this is an example of testing experimental technologies on the most vulnerable populations. And it is precisely that sort of thing that I think is absolutely the most dangerous, and it’s abrogating human rights because these are people who cannot push back.

Q. You’ve tried to help minorities by devoting more than a decade of research to denouncing discrimination by AI systems. How does that discrimination occur?

A. If you look at generative AI tools like DALL-E, Midjourney or Stable Diffusion and you say “show me an image of a CEO,” you will see rows and rows of images of white men in suits. But if you put in a word like “nurse” or “teacher,” you get rows and rows and rows of images of white women. And then if you put in a term like “flight attendant,” you get all of these images of Asian women. So, you’re seeing these racialized, gendered logics being built into these AI systems. And it’s a known problem. Other researchers and I have been drawing attention to this issue (for almost a decade in my case), and you’re starting to see tech companies try to create hacks to undo these gender and race problems. But they’re always doing it in really strange ways. So, right now, for example, when you say “CEO” or “teacher,” they randomize the results in the background: they insert an invisible prompt so that 1 in 10 images will be a female CEO or a Black CEO. One of my biggest research projects in the last three years has been Knowing Machines, a collaboration between USC, NYU and multiple European researchers. Knowing Machines is really about studying the foundations of AI systems, and we take a particular look at training data.
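To make the “invisible prompt” workaround concrete, here is a minimal, hypothetical Python sketch of prompt augmentation, the general technique Crawford is describing. The generate_image function and the list of qualifiers are illustrative assumptions, not any vendor’s actual code.

```python
import random

# Hypothetical placeholder for a text-to-image API call; not a real library function.
def generate_image(prompt: str) -> str:
    return f"<image generated for: '{prompt}'>"

# Qualifiers a vendor might silently prepend to counteract skewed training data (assumed examples).
QUALIFIERS = ["female", "Black", "Asian", "Latina", "older"]

def generate_with_invisible_prompt(user_prompt: str, rate: float = 0.1) -> str:
    """Quietly rewrite roughly `rate` of requests before they reach the model.

    The user never sees the modified prompt; the bias in the underlying
    model is masked at the interface rather than fixed in the training data.
    """
    if random.random() < rate:
        hidden_prompt = f"{random.choice(QUALIFIERS)} {user_prompt}"
        return generate_image(hidden_prompt)
    return generate_image(user_prompt)

# Roughly 1 in 10 "CEO" requests will quietly become, say, "female CEO".
print(generate_with_invisible_prompt("CEO, professional headshot"))
```

The sketch also shows why Crawford calls these fixes “hacks”: the intervention happens at the interface, while the skew in the underlying training data is left untouched.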

Q. AI systems also discriminate in the hiring process, like Amazon’s resume-scanning service in 2014, which discriminated against women. What happened there?

A. In the case of Amazon and the resume-scanning service, they shut it down because they realized that it was so profoundly discriminatory that no woman was actually getting selected for an interview. Even if you were a man and you mentioned the word “woman” on your CV, you weren’t getting an interview. But now people are building these systems all the time.

Q. Discrimination also occurs in emotion recognition-based classification and decision-making.

A. One of the things that I think is so problematic is the way in which AI systems classify people [with] emotion recognition, for example. I think emotion recognition is fundamentally unscientific. One of the things I did for the book is I studied the deep history of where this idea came from that we all have the same emotional expressions and that our facial expressions represent how we feel inside. It goes back to a psychologist by the name of Paul Ekman, and Paul Ekman was doing these studies in the 1960s and 1970s, which in many cases were misinterpreted.

Q. So emotions were boiled down to facial expressions…

A. Most dangerously, they were hardcoded into AI systems where, basically, AI systems will say if you smile, you’re happy; if you frown, you’re sad. And so, there are these six emotions that are often used by emotion recognition systems that simply don’t do what they say they are [doing]. And we saw them being used in contexts of policing, we’ve seen them used in contexts of hiring, so looking at people’s facial expressions to determine if they’re a good employee. To me, it’s absolutely illogical. It makes no sense, yet because these systems are treated as somehow scientific and objective, they’ve been allowed to become part of our most sensitive institutions. Many start-ups, as well as the largest technology companies (IBM, Microsoft, Amazon), have automatic emotion recognition tools.
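For illustration, here is a deliberately simplistic Python sketch of the hardcoded logic Crawford is criticizing: mapping a detected facial expression straight onto one of the six “basic emotions.” The expression labels and the lookup are hypothetical; real products differ in detail, but the underlying assumption is the same.

```python
# The six "basic emotions" popularized by Paul Ekman's work, as they are
# typically hardcoded into emotion recognition systems (labels are illustrative).
BASIC_EMOTIONS = {
    "smile": "happiness",
    "frown": "sadness",
    "scowl": "anger",
    "raised_brows": "surprise",
    "wrinkled_nose": "disgust",
    "wide_eyes": "fear",
}

def classify_emotion(detected_expression: str) -> str:
    """Map an outward expression label to an inferred inner state.

    This is the leap Crawford calls unscientific: the expression is treated
    as conclusive evidence of how the person actually feels.
    """
    return BASIC_EMOTIONS.get(detected_expression, "neutral")

print(classify_emotion("smile"))  # -> "happiness", regardless of context
```

Everything that makes the approach contestable, such as context, culture and the gap between expression and feeling, sits outside this lookup table, which is Crawford’s point.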

Kate Crawford, in New York. Víctor Llorente

Q. What should be done about AI emotion recognition?

A. Emotion recognition is a really good example of a technology that I think needs to be strongly regulated. In fact, its use needs to be restricted.

Q. Why do automatic emotion recognition tools continue to be built and applied if they have been found to be unreliable?

A. Because it’s big business, a very lucrative sector that promises corporations millions in profits.

Q. In light of the Edward Snowden and Timnit Gebru cases, is there any point in warning about the consequences of AI mismanagement?

A. What we’ve been told [by technology companies] is that ethics statements will be sufficient or that [they] can have a series of principles. I think we’ve seen that simply doesn’t work; that internal self-regulation doesn’t work and that often whistleblowers are not really effective in the long term. What we really need is regulation. We need strong and effective regulation.

Kate Crawford in front of 'Anatomy of an AI System', the work she created with artist Vladan Joler. Víctor Llorente

Q. Geoffrey Hinton, who is considered the godfather of AI, has left Google. What do you think about his departure? And how does working at Microsoft affect your own work and publication?

A. Geoffrey Hinton felt that he couldn’t speak freely when he was in Google. I think that’s a shame. I think we urgently need a culture at AI technology companies where people can speak about things that raise concerns for them and that have democratic consequences, that will actually affect how we all live. So, my hope is that we can have a more open culture inside technology companies because otherwise they will become black boxes. The technology companies will be completely closed, and you’ll never know what’s going on inside them. They have such enormous global democratic impact and that’s a very bad situation to be in. So, we need to have more openness and more public discussion…That’s one of the reasons that I’m at Microsoft Research; they do not have publication review. I can publish what I want. That is a crucial part of why I do the work I do and why I do it there.

Q. You have a 10-year-old son. He’s just on the verge of pre-adolescence. How do you approach the use of new technologies like ChatGPT with him?

A. I sit down with him, and we try to critically analyze the advantages and disadvantages of each device or program. I try to help him understand how they work, what they bring us and what problems they cause. I want to help him develop his own judgment so that he can decide for himself when to use them and how to protect himself from them.

