
Daniel Howden, AI watchdog: ‘These systems are already affecting human and civil rights’

The founder of Lighthouse Reports, a platform created to investigate how governments use algorithms to make decisions, warns about the need to regulate this revolutionary technology

British journalist Daniel Howden in a hotel in Santiago, Chile, on December 4, 2023. Sofía Yanjarí
Antonia Laborde

British journalist Daniel Howden, an expert on corruption and migration, has been a correspondent for some of the most important English-language media outlets for much of his career. Three years ago he founded Lighthouse Reports, a platform dedicated to investigating how central and local governments around the world use algorithms to make decisions, and to holding them accountable. The platform, which partners with newspapers, podcasts and television networks, has worked with more than 120 media outlets to publish its reports, co-created with journalists from those partner organizations.

Howden’s strategy is to work with specialists, without the pressures of a daily newsroom, and to take advantage of the reach of partner outlets and the relationship they have already established with their audiences. In this way, the platform has produced about 20 investigations a year in different corners of the planet. He discussed his findings in this interview, conducted in Santiago, Chile, where he participated in the Ethical Algorithms Project of the Adolfo Ibañez University, with support from the innovation laboratory of the Inter-American Development Bank (IDB Lab).

Question. Why did you want to focus on accountability for the use of algorithms?

Answer. Automated decision-making systems are being implemented around the world in areas like criminal justice, healthcare or welfare services with little or no public consultation. We are being watched, classified and scored by systems that most of us don’t understand. For the average person, that means that decisions are being made about their lives that they have no control over, whether it’s applying for a mortgage, a job, or a government benefit. If the affected person cannot understand how their request was accepted or rejected, it means that it didn’t go through due process, and that they cannot challenge the decision or see the data, which they probably did not even know had been collected. Most of these AI systems are being implemented by governments, cities and public agencies without supervision. That is why journalists have to enter that uncomfortable space to report and defend the need for regulation.

Q. What did you find when you started Lighthouse?

A. What frustrated me was that tech journalism always talked about artificial intelligence as something dark that would take place in the near future: “This is going to happen.” But it ignored the fact that there are already systems present in our lives worth reporting on. If you are in the poorest part of the world, it is quite possible that international support for aid programs is based on an algorithm developed by the World Bank that calculates poverty using a rather controversial methodology.

Q. In what cases is this automated decision-making system being used?

A. In criminal justice sentencing in the United States and, to a greater or lesser extent, in some places in Europe, for example. These systems produce risk scores, which judges then use to pass sentences and determine how much jail time a person should serve. Prison authorities use them to decide who should be sent to a maximum security prison or who should be released on parole.

Q. What information do they use to pass a sentence?

A. It is an interaction of variables. Some are simple, such as age, gender or the classification of the crime, but there are also ethnicity, family size, last known address, financial records... There are factors that cannot be taken into account. We also thoroughly analyze systems that detect citizen fraud in welfare states. One question is how the decision is made about whom to investigate for possible fraud. In the Netherlands there was a system that had 315 variables, of which more than 20 were related to language. Now, what were they trying to control for with language? They were trying to work out who was a native citizen and who wasn’t. But you can’t say that a person is more likely to be committing fraud because they’re an immigrant.

Q. Are the biases in AI systems a reflection of society’s biases?

A. When a technology company comes to sell a system like this, it claims that the system is going to make objective decisions, removing the human bias factor. But it depends on how you train it. If the training data set you have is basically a reflection of years’ worth of biased actions, then that bias is going to come through in the training data. A lot of AI accountability reporting has focused on predictive policing. These systems tell the police: you need to concentrate your policing resources in these areas and at these times, because looking at historical records of criminal offences committed, these are the hotspots where these things happen. In theory, that sounds okay. But the risk is that, for many years, police have been assigned to look for certain kinds of crimes and have been concentrated in particular neighborhoods. Most of the drug dealing in Sao Paulo, for example, is concentrated in the poor neighborhoods, where many of the people who work in the rich neighborhoods live. Statistically, the person living in the wealthy area is more likely to be purchasing and possessing illegal drugs; it’s just that they’re going to do the purchase transaction in the poor neighborhood, so the system will never tell you to look in the wealthy area. That is how bias gets built in.

Q. What can be done about it?

A. In the simplest terms, one side of that argument would say that we can do a better job with the construction and training of these systems. These are the proponents of ethical AI. Another group of people would say that these tools are inappropriate for some of these tasks. But we’re still at the stage of catching the really bad systems that have been very badly trained, and of creating incentives for public authorities to work on better systems and to better understand the technology they’re buying. We’re skipping the part where we make this work in an ethical way, where there is accountability for the mistakes that get made and where you can challenge the outcomes of these systems.

Q. Is artificial intelligence affecting human rights?

A. The way in which AI is being used by public authorities, in governments, cities and national systems, is affecting human rights. The systems that flag possible fraud in welfare programs or determine whether a person has to stay in jail longer are some examples, but AI systems are increasingly being used to decide who gets interviewed for a job or whether you get a loan. That’s why it’s not good enough to say that we can’t move fast enough to regulate AI. We can’t throw up our hands and say that it’s cleverer than we are and that we’re not going to try to assess bias in systems that are going to make very basic decisions impacting our civil rights.

Q. How do we avoid falling into that attitude?

A. What we need to do is step back a bit from the hype, which is very exciting and very frightening and tells us that this is something inevitable, something that is going to remove our capacity to make decisions about how our societies will be and place all of that authority in the hands of a few technology companies. This is great for them, but it’s not so great for anyone who expects to be a citizen of a country and not just a consumer of a product. Think of it like the medicines market: the average politician who decides on regulation is not in a position to test the latest miracle drugs. They rely on public institutions that inspect and regulate how drugs get released to the market. Why are we able to do that? Because we’ve decided that it’s in the public interest to have these safeguards in place.

Q. Is what is happening similar to what occurred with social media?

A. Right now there’s a massive amount of hype around AI. We’re being told that AI is going to fix everything, or that it’s going to kill us all. These are two incredibly intoxicating ideas, but they’re both somehow in the future. What’s much less discussed is what we can do right now about the AI systems that are already in our lives. There are rules about everything else that impacts our lives, yet there’s this idea that AI should be an exception. We heard the same arguments from big tech platforms like Amazon and Airbnb, which told us: you can’t regulate us like just any other retailer, you can’t regulate us like anyone else in the hospitality industry. And Airbnb has had a profound impact on the cost of rents right across the world. They shouldn’t be the dominant voices in this conversation. And it’s okay to expect our governments to think about how to create flexible, future-proof legislation that builds in the same civil and human rights that we already have. We shouldn’t be sacrificing civil and human rights in pursuit of an amazing AI future.

Q. What is the position of the AI industry?

A. What it wants is light-touch regulation that it has a huge say over. Governments can play a role in two different ways. They can lay down a level regulatory playing field, which makes sense for the industry because it means that all of the players have to develop and deploy technology in the same way. And they can set the bar for the systems that they themselves will deploy on the public, be transparent about those systems, and require that technology providers give access to third parties, like the media, inspectors and auditors.

