
Margaret Mitchell: ‘The people who are most likely to be harmed by AI don’t have a seat at the table for regulation’

One of the founders of Google’s Ethical Artificial Intelligence team, who was fired in 2021, warns of the lack of transparency

Margaret Mitchell, expert in the ethics of artificial intelligence, pictured in Barcelona on November 8. Albert Garcia
Josep Catà Figuls

Margaret Mitchell, born in Los Angeles, prefers not to reveal her age. This could be a matter of vanity, or of her tendency to safeguard privacy and the proper use of data. The latter seems more likely: she is one of the leading experts in the ethics of technology and has devoted her career to reducing algorithmic biases. She founded and led Google’s Ethical AI team alongside Timnit Gebru until they were both fired, a few months apart, three years ago. She now oversees the ethics department at Hugging Face, was named one of the 100 most influential people of 2023 by Time magazine, and was one of the most anticipated speakers at the Smart City Expo World Congress held recently in Barcelona, Spain.

Question. How do technology companies react when they are warned about ethical issues?

Answer. The people I worked with [at Google] were absolutely passionate about it. I think at a higher level, it was maybe not as clear why what I was doing was important. Maybe they didn’t understand.

Q. Why were you fired?

A. It wasn’t someone saying “ethical work is bad.” It was more tied up in weird power differentials. It had to do with my co-lead [Timnit Gebru] being treated as less than her peers. That kind of treatment was consistent with racism. I couldn’t not see that, and I couldn’t not say something about it. It was part of a larger discussion of power and systemic discrimination.

Q. Do AI developers care about ethics?

A. It really depends on the developer. I’ve worked with developers who always want to check if they’re doing the right thing. But the development culture, engineering culture, tends to prioritize a sort of alpha behavior where you always want to be the first to put out something or get the highest score. That can work as a disincentive to thinking through ethical considerations.

Q. How do discrimination and bias work in algorithms?

A. It starts with who’s at the table from the beginning. If you don’t have inclusive spaces for diverse people, you’re not going to be able to incorporate a diversity of thinking into your product development. If marginalized people aren’t included, the kind of data that’s collected, and how it’s collected, reflects the viewpoints of the people who have power, who tend to be disproportionately white and Asian men, so the data that gets collected tends to be consistent with their worldviews. And if the viewpoints of marginalized people aren’t treated as important in the development of AI, there will be decisions to launch a technology even if it doesn’t work for those people, or even if it harms them. For example, autonomous cars that fail to detect children, because the data that control them don’t account for children’s more chaotic or erratic behavior. Or airbags: women used to be hurt by them, because crash test dummies were shaped like men’s bodies, so the airbags weren’t designed for bodies with breasts and didn’t actually protect women. You have to make sure that your system works equally well across those different kinds of characteristics and contexts.

Margaret Mitchell, pictured in Barcelona after her talk at the Smart City Expo World Congress. Albert Garcia

Q. What are the groups most discriminated against in AI?

A. Black women, non-binary people, people from the LGBTQ+ community and Latinx people. You also see this in who works in tech companies and who doesn’t.

Q. How can you make sure that technology respects ethical values?

A. One of the things that helps me navigate this is the idea of value pluralism. Different people have different values. Something that really concerns me about AI right now is the idea that you should have one general model that has the right values, as opposed to another approach, called Artificial Narrow Intelligence, where you focus on specific tasks and specific values. You need more individualized kinds of models.

Q. Is there a lack of regulation and transparency?

A. Companies need to disclose the basic details of their training data. They don’t necessarily have to openly share the data with everyone, but at a minimum they should demonstrate that the proportions of different characteristics are roughly equal, and take into account actual context, not a stereotyping context.

Q. What do you think about those who asked for a pause in the development of AI?

A. It was a weird thing. It turned out that the group it came from is closely associated with effective altruism and was trying to gain more power by influencing politicians. This was probably less about doing the right thing and more about acquiring power. Some of the wording in there was super problematic, this idea that we’ve reached a point where AI is beneficial, when it’s not beneficial for everyone; there was a focus on existential threat, as opposed to the individual harms that were already happening to different subpopulations. It really reflected some of the most problematic thinking in tech, and it kind of made me feel sick to my stomach. And now they have a seat at the table; they are getting invited to discuss regulations.

Q. Are you optimistic about the future of AI?

A. No. I think that the people who are most likely to be harmed by this technology are not the people who have a seat at the table for regulation, or a seat at the table of large tech companies. The fact that we’re having discussions about ethics in AI all over the world is something I would not have imagined four years ago. But that doesn’t mean that where we will be in 10 years is the best place for us to be. There are a lot of other paths that I think would be more beneficial for humanity, and I don’t see us following them.


