Carissa Véliz, philosopher: ‘AI presents predictions as facts, and that has profound ethical implications’
In her new book ‘Prophecy,’ the scholar explores how probabilistic thinking, the basic tool of artificial intelligence, has become a mechanism for exercising power
In 2020, a thirty‑something Spanish‑Mexican philosopher burst into the global debate on the effects of technology. In her book Privacy Is Power, Carissa Véliz laid out why the constant intrusion of surveillance capitalism into people’s private lives is unacceptable. Her fresh, rigorous approach quickly turned the book into a touchstone in the field. Six years later, the Oxford University philosophy professor is back in bookstores with Prophecy.
Her new work offers a clear‑eyed analysis of how statistics and predictions — now heavily used by artificial intelligence — have become tools of power that shape the world. “Predictions are often commands disguised as a search for knowledge. The entire data economy has been built because we want to predict; otherwise, we wouldn’t spend our time and energy on it,” she tells EL PAÍS during a visit to Madrid.
Question. What makes prophecies so appealing?
Answer. In recent years, I’ve noticed that a culture of divination is emerging, one that’s closely tied to AI. Machine learning fosters a probabilistic mindset that is spreading alongside prediction markets and that presents predictions as facts. This has profound ethical implications.
Q. You argue in the book that one of the perverse aspects of predictions is that they can be used to shape the world so that they come true.
A. We tend to be very naive about predictions, and my hypothesis is that this is partly an illusion of language because predictions sound like facts, like descriptions of the world, but when you analyze them philosophically, you realize that they are not. In particular, predictions about human beings influence human beings because they affect our expectations, and expectations, in part, shape the world. They therefore have a magnetic pull.
Q. You also note that although mathematics has existed for millennia, the study of probability is actually quite recent.
A. Yes. The Greeks, for example, were philosophically and mathematically very sophisticated, but they didn’t develop a mathematics of probability. We don’t know why, but one hypothesis that seems very plausible to me is that it was incompatible with thinking about gods and fate. If you have the idea that fate is predetermined and that the gods decide, it makes no sense to think of probability as something mathematical.
Q. You draw a parallel between the rise of probability theory and the emergence of censuses, and how both were basic tools of colonialism.
A. I think it’s very important to understand the roots of how population statistics developed. Its origin lies with Francis Galton. It’s truly amazing, because we discovered the normal curve through two independent paths: one is gambling, dice, and games of chance, and the other is the study of the stars. Since it’s so difficult to measure the positions of the stars, because of clouds or because the sky moves, the notion of a distribution of measurement errors was developed. It’s astonishing that someone thought to apply that tool to social issues. And not just in any old way, but in a very normative way, imposing notions of normality on people. If you don’t fit the idea of normality, you’re a deviant. Indeed, it has to do with colonialism, with first controlling the populations we trust less and then the general population.
Q. That’s one of the ideas in your book: that the origins of statistics have a lot to do with social control. And that they are a tool of power. Why is this?
A. Because a prediction that sounds like a fact, once you convince people that that’s the future, is actually a way of creating the future you want. There are statements that seem to describe reality, but what they actually do is give an order. When we hear a prediction and take it as fact, what we’re really doing is obeying.
Q. You argue that categorizing people eliminates the idiosyncrasies of individuals. And that AI is taking that process to the extreme.
A. Exactly. To consider people as mere numbers is to dehumanize them. There comes a point when people have to adapt to the given categories, not the other way around. If you look at professions in France before the development of statistics, they were very flexible and fluid. A person could be part carpenter, part blacksmith, or whatever. But when the government establishes certain categories and creates associated subsidies, if you don’t fit into a category, you start to suffer the consequences. Then the carpenter becomes just a carpenter, and the statistics work much better, but at the cost of creating the very reality they’re supposed to be describing. Statistics are never neutral.
Q. Why did this bureaucracy develop?
A. We trust numbers because we don’t trust people, and we forget that it’s people who have to choose and create the numbers. Furthermore, once the monarchy, whose justification was divine grace, was eliminated, bureaucrats felt vulnerable: they had to prove their worth, to show they had a reason for being. The justification they offer is numbers. Relying on automated processes leads to less accountability, because you no longer know who to turn to. When something goes wrong, everyone hides behind the machine. This impersonal entity we’ve built becomes like a monster with a life of its own, pushing people this way and that.
Q. You criticize the fact that a statement backed by numbers seems more valid than one that isn’t. Why is this the case?
A. If you want to become famous in academia, make a prediction about anything and assign it a number; it doesn’t matter where you get it from. Often, having a number not only doesn’t help but actively confuses, because it makes it seem as if we’re talking about reality, when a completely fabricated number only obfuscates and misleads the public.
Q. AI is based on predictions built from data that are often also predictions. Should we think of AI as a giant house of cards?
A. Absolutely. And the more entrenched we are in the illusion that everything is predictable and that we have everything under control, the blinder we will be to the incontrovertible fact that AI is also generating its own risks, and that these are systemic risks to which no number can be assigned.
Q. How can we get out of this situation?
A. We should be much smarter in our use of forecasting. I’m not saying we shouldn’t use it. I like knowing what the weather will be like tomorrow, but we need to be aware of what can and can’t be predicted. We should focus much more on building a robust society and less on predicting. For example, we know that sooner or later, there will be another pandemic. I don’t know why we haven’t managed to improve ventilation in buildings over the years. Instead of dedicating resources to predicting things that can’t be predicted, we could focus on addressing what we already know can happen.
Q. Does a prediction tell us something about the world, or about our knowledge — or ignorance — of it?
A. When predictions vary widely, as is the case with the future of AI, it’s a sign that we’re not saying anything, that we really have no idea. Another important point: you can be an expert in something, but that doesn’t make you an expert on the future of that something. The future is unknown; it’s not written! I, for example, know about privacy, but if you ask me about the future of privacy, I don’t know any more than anyone else. We shouldn’t fall into the trap of thinking there are experts on the future, even if the person speaking is a Nobel laureate.
Q. You also analyze the role of utilitarians and effective altruists as cogs in the bureaucracy.
A. I recently reread Charles Dickens. He disliked utilitarians intensely, and for very good reasons. They have been incredibly successful in convincing us to think about morality in a certain way. We tend to think that way intuitively because of how successful they have been, over the centuries, in selling us the idea that cost-benefit analysis can reduce moral dilemmas to a sum. It is very important to criticize utilitarians because they have had a significant influence on public policy for a very long time. Effective altruists, who maintain that it is ethically acceptable to become obscenely wealthy because they can then help more people with their donations, are a little quieter now [one of their leading exponents was Sam Bankman-Fried, the crypto-billionaire sentenced in 2024 to 25 years in prison for defrauding his clients of $8 billion]. But they will return. It is the perfect framework for justifying why billionaires do what they do.