Alan Mislove, researcher: ‘There are technology companies with an incredible influence on how we speak and think’

The expert points out that algorithms play an ever-growing role in shaping our perception of reality. The difficulty in deciphering them places society at a significant disadvantage relative to major corporations

Alan Mislove, former White House advisor and professor of computer science at Northeastern University, at the Telefónica Foundation in Madrid. Álvaro García

Alan Mislove is a professor of computer science at Northeastern University in the U.S. who has spent years studying how the algorithms that shape our daily lives work: from flight prices and advertisements to credit scoring. His expertise led him to spend a year and a half working at the White House, where he helped craft new legislation on the topic.

During his time there, he observed that government departments were eager to understand how AI would impact their operations. Mislove, 44, also contributed to President Biden’s executive order on AI, which may be at risk when Donald Trump takes office. The New Orleans-born researcher recently visited Madrid to speak at the Internet Measurement Conference (IMC), one of the largest global gatherings on networks, traffic, and security, organized this year by IMDEA Networks and Carlos III University.

Question. Can an algorithm be so important that it affects the future of democracy?

Answer. Yes, of course it can be key. There are a number of populations that don’t get their news from the mainstream media. Platforms like X or TikTok are, in many cases, the way people access information and, with so much content available, you need algorithms to decide what you are going to read. In politics, it is critical to listen to both sides to understand what their arguments are, but that’s not often what the algorithms are trying to do. They’re optimized for engagement, to keep you on the platform longer. The best way to achieve this is to show you more content that you are interested in instead of content that you may not be interested in, but that may be useful for you when you’re making a political decision.

Q. Can algorithms change opinions?

A. It’s an interesting question. I don’t think we really know the answer. Twenty or 30 years ago, there was a certain type of media that most people consumed, so there was a common ground of what was considered generally accepted. Today, that’s no longer the case.

Q. It’s surprising that you’ve been studying this for years and don’t have the answer.

A. There are sociologists who are looking at the direct impact on people, which is something I don’t study much because I focus on the algorithms. These systems change very quickly. We’ve studied everything from e-commerce to ride-sharing apps to advertising. It takes months or years to really understand how one of these systems works. By the time we design an experiment to answer one of these questions, the systems have changed. X [formerly Twitter], for example, has changed dramatically.

Q. Is it worrying not to really know how algorithms affect us?

A. It’s true that it’s becoming increasingly difficult to understand exactly how algorithms are affecting our lives. Many times people don’t even realize that there are recommendation algorithms and other ranking algorithms that sit behind the systems that you interact with. One of the key points we worked on when I was in the White House was defining the practices that the government should follow when using AI. One of the main points was transparency: notifying people when an algorithm makes a decision about them that could negatively affect them, and giving them the ability to appeal.

Q. That’s for algorithms in public systems, but users know that social networks are organized by algorithms...

A. Most people do, but what they don’t realize is how much data these platforms collect. For example, the apps on your phone collect a ton of information about you. People don’t realize how much data goes into these algorithms, or how tipped the scales are: the effort, money and resources that go into designing these algorithms to optimize the content they’re showing you.

Alan Mislove. Álvaro García

Q. The goal of these companies is to make money. If they also change society, is that just an unintended consequence?

A. I can’t speak to their motivations, but yes, the objective of these companies is to monetize you. Some of the largest companies in the world handle an incredible amount of data and have an incredible influence on how we speak and think, whether nationally or globally. It’s easy to imagine how that can spiral out of control.

Q. Is the U.S. government trying to ban TikTok for those reasons?

A. There are a lot of motivations for banning TikTok. There’s growing recognition that these systems are incredibly powerful and have the ability to influence people in a lot of ways. I saw, for example, a graph circulating on X that claimed the platform recommends more right-wing content. Even if that were true, which it could be, it might be because those types of users are more active on X, or it could be because the algorithm is actually recommending it. I don’t know which of the two is true, and I could design an experiment to study that. But there are many potential explanations. Just because you see a difference doesn’t mean it’s the algorithm’s fault.

Q. It’s hard to wait for the experiment when people are already asking about X’s influence on Trump’s win in the U.S. election...

A. We’ve had cases before where we were sure and then we weren’t. For example, 10 years ago, when we were studying price discrimination on flights, there were times when people would say, “I searched for a flight and someone else searched for the same flight, and the prices were different!” We spent a year designing experiments and controlling variables, and we discovered that this was true only in very narrow circumstances. Most of the time, what happened was just random.

Q. Was it all just a coincidence?

A. There were a few exceptions. We found that in some apps the price changes depending on whether you’re on a mobile device or a desktop. In others, it was based on what country they thought you were in: even for the same flight, if I buy it in the U.S. or in Spain, they may publish different fares. Airlines have been doing this for years. There’s a natural tendency to think, every time you see a difference, “oh, an algorithm is screwing me over,” which is sometimes true, but most of the time it’s not. It’s not really discrimination; they’re just trying different things.
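
The basic shape of such an experiment, sketched under invented numbers (the `fetch_price` function below simulates a fare with random churn rather than querying a real airline, and the route and profiles are hypothetical; this is not the team’s actual code):

```python
# Hypothetical paired-query experiment for detecting price personalization.
# fetch_price() simulates a fare with random churn and NO personalization;
# in a real study it would issue a live query for the given client profile.
import random
import statistics

random.seed(1)  # reproducible toy run

def fetch_price(route: str, profile: dict) -> float:
    # Simulated stand-in: base fare plus churn, identical for every profile.
    return 300.0 + random.gauss(0, 20)

def run_trial(route: str, control: dict, treatment: dict, n: int = 200):
    # Query both profiles back to back so ordinary fare churn
    # hits both sides of each pair roughly equally.
    diffs = [fetch_price(route, treatment) - fetch_price(route, control)
             for _ in range(n)]
    return statistics.mean(diffs), statistics.stdev(diffs)

# The two profiles differ in exactly one variable (device type);
# IP address, cookies, and timing must be held fixed in a real run.
mean_diff, spread = run_trial("BOS-MAD",
                              control={"device": "desktop"},
                              treatment={"device": "mobile"})
print(f"mean difference: {mean_diff:.2f}, spread: {spread:.2f}")
# A mean near zero with a wide spread is the "it was just random"
# outcome; a persistent offset would point to personalization.
```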

Q. Some people also suspect their cell phone because they get ads about things they just said out loud...

A. I would say there is a bit of confirmation bias. You only notice when it happens, not when it doesn’t. There are probably a thousand other things you said to your partner that day that didn’t have any effect. Some colleagues of mine ran a study on mobile apps because people are worried about whether Alexa or the apps on their phone are listening to or recording them. They found apps that did other terrible things, but no apps that secretly recorded you. Sometimes there is household-based targeting: data brokers notice that you live with another person, and they associate the profiles in various ways. So in certain contexts it could be that your partner searched for something and you suddenly started getting ads for it. But, again, this could be the explanation, or it could just be randomness and you noticed that one coincidence. Scientifically speaking, we’d have to come up with an experiment to show this was happening systemically.
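
One way to see why coincidence is a live explanation is a back-of-the-envelope simulation. The numbers below are invented, not taken from the study he mentions; they only show that chance matches are common even with no listening at all:

```python
# Toy simulation: how often does an ad "match" something you said,
# purely by chance? All numbers are invented for illustration.
import random

PRODUCTS = 5_000        # distinct products one might mention or be shown
MENTIONS_PER_DAY = 30   # products mentioned in conversation per day
ADS_PER_DAY = 60        # ads seen per day
DAYS = 365

random.seed(0)
match_days = sum(
    1 for _ in range(DAYS)
    if {random.randrange(PRODUCTS) for _ in range(MENTIONS_PER_DAY)}
     & {random.randrange(PRODUCTS) for _ in range(ADS_PER_DAY)}
)
print(f"Days with at least one chance 'match': {match_days}/{DAYS}")
# Expected matches per day is about 30 * 60 / 5000 = 0.36, so a spooky
# coincidence roughly every three days, with zero eavesdropping involved.
```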

Q. Will personalization algorithms become an even bigger field?

A. Over the years, it has only gotten more prevalent. More systems are being personalized simply because there’s more data available. Anytime you have a situation where there’s a ton of content, more than someone can consume, you need some kind of algorithm to say, “Here’s what’s relevant to you.” That’s personalization. These algorithms are only likely to intermediate more of our daily lives.

Q. Are you more afraid or hopeful for the next five years?

A. If you had asked me five years ago, I would have said that people have no idea what is going on; I would have been more pessimistic. Now, I feel that there is more awareness of how much algorithms are involved in these systems. It is part of the public conversation. The optimistic answer is that you see governments really grappling with these issues, which gives me hope that, in five or 10 years, we will have found a way to balance things out. The pessimistic answer is that these systems are moving so fast and the corporations are so powerful that governments are going to have trouble keeping up.
