How the algorithm ruined your favorite bar: Will everything end up looking the same?

The question comes up more and more often when it comes to aesthetics, music and trends that spread on social media. Content creators are reproducing algorithmic biases that, in turn, were fed by other creators.

Tanja Ivanova (Getty Images)

In the 1980s, the futurologist Hans Moravec warned that, paradoxically, it would be the actions that are easiest for humans (such as holding a piece of sushi with two chopsticks) that would pose the greatest difficulties for robots and computers. On the other hand, very complex tasks such as finding errors in medical prescriptions, distinguishing when a space telescope has detected something interesting, or choosing Christmas presents for the whole family have ended up being enormously simple for algorithms.

“Artificial intelligence already does that,” we hear more and more often. But according to thousands of scientists and philosophers, the label is not entirely appropriate. “Both words (artificial and intelligent) are controversial and highly suspect. I prefer the term machine learning, as it is easier to see what we are talking about: systems that allow machines to learn patterns or correlations and apply them to new situations and data,” explains Justin Joque, who teaches at the University of Michigan and is the author of Revolutionary Mathematics: Artificial Intelligence, Statistics and the Logic of Capitalism.
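Joque’s definition — a system that learns a correlation from examples and applies it to new data — can be made concrete with a minimal sketch. The data points and labels below are invented purely for illustration:

```python
# A minimal example of "machine learning" in Joque's sense: the system
# learns a pattern (here, a straight line) from examples and applies it
# to a new, unseen input. The data is invented for illustration.

def fit_line(points):
    """Ordinary least-squares fit of y = a*x + b."""
    n = len(points)
    mean_x = sum(x for x, _ in points) / n
    mean_y = sum(y for _, y in points) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in points)
    var = sum((x - mean_x) ** 2 for x, _ in points)
    a = cov / var
    b = mean_y - a * mean_x
    return a, b

# "Training": four example pairs of (input, observed output).
train = [(1, 2.1), (2, 3.9), (3, 6.2), (4, 7.8)]
a, b = fit_line(train)

def predict(x):
    """Apply the learned correlation to a new situation."""
    return a * x + b

print(predict(5))
```

There is no understanding involved at any point: the system compresses the examples into two numbers and extrapolates, which is also why it inherits whatever biases the examples contain.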

“It is reasonable that there is some confusion among the general public because these are concepts that are difficult to understand for those without a mathematical background. There is a lot of mysticism surrounding AI, as there is in any other scientific field: studies on cancer, astronomical observatories when we talk about UFOs… These are interesting issues and they are widely publicized, so there will always be those who build a morbid fascination around them,” explains Celsa Pardo-Araujo, a mathematician at the Institute of Space Sciences whose research focuses on the application of machine learning to astrophysics. “What is also clear is that Google, DeepMind and Microsoft are creating algorithms that solve problems that could not be solved before,” she adds.

But here comes the part that affects us: in addition to solving certain problems and being very useful in scientific research, algorithms are also generating content and, above all, ordering and ranking everything we have created ourselves. And this includes both the vast array of universal culture and the last photo we took while having breakfast. What criteria do they use? What are these creations like? That is the most worrying part because, as Kyle Chayka shows in the 2024 book Filterworld: How Algorithms Flattened Culture, the map (that is, the algorithm that rewards some content over others) is already affecting the territory (that is, the form of the content itself and the reality in which we move, especially in cities).

Mensent Photography (Getty Images)

Chayka gives the example of cafés that want to appear sophisticated: if they all offer the same products, if their decor is so similar, and if the clientele that visits them looks so alike all over the world, it is because their managers follow the model that Instagram imposes by prioritizing some images over others. Instagram only drives an audience to the places that upload photos that fit its algorithm, and this is happening in all areas: there are already musicians who teach how to write songs so that they will go viral on TikTok (for example, by placing the chorus very close to the beginning), and many illustrators imitate the Pixar style, whether or not it stimulates them (it is also the style of many automatic image generators), because they have found that it helps them go viral.

A world that increasingly resembles itself

Through empirical research conducted in France in the 1960s, the sociologist Pierre Bourdieu studied the “social bases of taste” and discovered dozens of correlations between issues such as educational attainment, type of employment or disposable income (i.e., social class factors) and aesthetic preferences. Today, when algorithms have much more precise and personalized information about our tastes (we hand it over all the time) and some of their suggestions satisfy us (Spotify is rarely wrong when it designs a playlist for us), we still have the feeling that many platforms only amplify the worst content: the most sensationalist or misleading.

“YouTube’s recommendation system, for example, will have a core initially trained with a certain number of users and then it receives feedback and retrains itself with each stream,” explains Pardo-Araujo. “It is true that algorithms reproduce many biases because you can never train them with the entire population, and you have to be very careful with this process: the distributions must be representative of reality. But it is funny that biases in algorithms generate so much alarm, when we all have so many biases ourselves that we should also eliminate from our consciences. It may be easier to recognize them in algorithms than in ourselves,” adds the mathematician, convinced that algorithms reflect what is already happening in society.
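The loop Pardo-Araujo describes — a model trained on an initial group of users that then retrains on the feedback its own recommendations produce — is what amplifies a small starting bias. A deliberately simplified, deterministic toy (all names and numbers invented; real recommenders are vastly more complex) shows the mechanism:

```python
# Toy recommendation feedback loop, with invented numbers.
# The model's score for an item is the share of clicks it received in the
# previous round, and being recommended makes an item slightly more
# clickable than its share alone would predict (the "exposure bonus").

EXPOSURE_BONUS = 1.1   # recommended content gets clicked a bit more

share = 0.55           # the model starts mildly biased toward "viral" content
history = [share]

for generation in range(10):
    clicks_viral = share * EXPOSURE_BONUS        # shown more -> clicked more
    clicks_niche = (1 - share) * 1.0
    share = clicks_viral / (clicks_viral + clicks_niche)  # "retraining"
    history.append(share)

print([round(s, 2) for s in history])  # the 55/45 tilt keeps growing
```

Each “retraining” step treats the outcome of the previous recommendations as if it were neutral evidence about taste, so the initial tilt compounds generation after generation — exactly the feedback Pardo-Araujo warns must be handled with representative distributions.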

Tara Moore (Getty Images)

But when it comes to algorithms, the line between adapting to user tastes and shaping or directing them is very thin. This is where the feeling emerges that we are being shown and recommended variations of the same thing again and again. For example, some people accused Billie Eilish of writing her songs with TikTok in mind, but isn’t it easier to believe that, unintentionally, they come out that way because, at her age, she has spent hours exposed to TikTok? This algorithmic feedback of already existing trends is what is most worrying in the world of culture. In fact, this is the process that some authors like Chayka call the “flattening of culture” and which gives rise to increasingly conservative works. Creators, consciously or unconsciously, are reproducing algorithmic biases (which, in turn, were those of previous artists and users).

On a technical level, feeding systems increasingly similar samples (or samples directly produced by previous algorithms) constitutes a major threat to their evolution. Systems trained on several generations of AI output quickly degenerate into nonsense, he notes, adding that the risk is real that good-quality, human-generated content could become a finite resource, like oil or coal.
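Researchers call this degeneration “model collapse.” A toy illustration of the mechanism, with all parameters invented: the “model” here is just a normal distribution fitted to samples, and each generation refits itself to the most typical outputs of the previous one, the way popular AI outputs get recirculated while unusual ones do not.

```python
import random
import statistics

random.seed(42)

# Toy "model collapse": a model fitted, generation after generation, to
# samples from the previous model loses diversity. The "model" is a
# normal distribution (mean, std); each generation keeps only its 80%
# most typical outputs before refitting. All numbers are invented.

mu, sigma = 0.0, 1.0          # generation 0: diverse, "human" data
sigmas = [sigma]

for generation in range(10):
    samples = [random.gauss(mu, sigma) for _ in range(2000)]
    samples.sort(key=lambda x: abs(x - mu))
    kept = samples[:1600]     # typical outputs get reused; the tails vanish
    mu = statistics.mean(kept)
    sigma = statistics.stdev(kept)
    sigmas.append(sigma)

print([round(s, 2) for s in sigmas])  # the spread shrinks every generation
```

After a few generations almost nothing remains of the original variety — a crude but faithful picture of why human-generated data is being compared to a non-renewable resource.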

On the artistic level, algorithms “constantly feed back into the current recurring trend,” says Luis Demano, an illustrator and activist against the abusive use of generative AI in his sector. He has identified which images are most rewarded and reproduced by automatic systems: “They tend to be realistic representations close to photographic profiles and with a very characteristic chromatic treatment, forcing a lot of lighting contrasts between warm and cold tones.” In addition to being useful for reducing costs, Demano recognizes that “entering the algorithm’s game” can be rewarding for those who use them: “It rewards us and makes us feel special with the attention we receive. Ego is a very powerful drug.”

Neither artistic nor original: When the algorithm creates and distributes

When the notions of authorship and originality developed after the Enlightenment, art became the most characteristic practice of a new type of individual: creative, autonomous, and free to choose their own rules and those they would apply to their works. The rules that generative AI uses for the works it produces have nothing to do with that: they are a statistical approximation that takes advantage of the characteristics of the works it has been trained on, as well as data on the functioning of the attention market.

“I strongly assert that tech companies steal copyrighted works to train their models,” Demano complains. While originality is a difficult property to define, philosophers are clear: it is not something that can be found in AI products. “Originality is as much a matter of the work of art as it is of the process of creating it,” Joque explains. “Recently, I asked my students to read Borges’s short story Pierre Menard, Author of Don Quixote. The story describes the eccentric author Menard, whose secret job is to rewrite Don Quixote word for word. Borges suggests that writing the exact same words in the 20th century completely changes the work, as Menard gives them a different meaning than Cervantes did when he wrote them in the early 17th century. Although Borges writes this somewhat jokingly, I think he is suggesting that the conditions under which art is created affect how we understand it and whether we find it interesting. Even if an AI could produce a Rothko-style work, doing it automatically in the 21st century could never compare to what Rothko did in the 20th century,” the professor and philosopher explains.

So what exactly does AI do and why do all its works or products look so similar to each other? Demano answers: “They are not designed to create art, but to generate content. The difference between the two terms is established by the function they fulfill. Content serves to make the infinite advertising billboard that is the internet work as a business. Generative AI is the technology industry’s solution to meet this need as quickly and efficiently as possible. Its greatest success has been making us believe that using it can turn us into artists instantly, when in reality we are customers of an on-demand content service.” So, when we find a family resemblance in everything that algorithms generate or offer us, we are not faced with malicious bias or a question of style: it is simply a market imposition.

Understanding how algorithms work helps us understand that they have no political leanings, but rather circulate whatever provokes the most heated reactions in us, requires less concentration or can be consumed more quickly. When the algorithm is deified, we forget that it is a simple mechanism, and that its development and operation involve many human subjects: the person who commissions the code with orders to maximize profits, the person who writes that code as paid work (probably an underpaid freelancer), the person who trains it, in many cases involuntarily, with their creations, and the person who runs it on their computer or phone and at the same time feeds it.

Of course, we should not blame the user, but neither should we blame the mechanism behind which the real operator of this whole process hides: a businessman who does not care what type of content his platform reproduces. In other words: Amazon does not distinguish between distributing a copy of Dostoevsky’s The Brothers Karamazov and the Troll Book by the social media star elRubius. Marx wrote that we often believe that social structures are immovable objects or indisputable natural laws. It is an illusion: all social structures and scientific and industrial constructions (and artificial intelligence is one of them) are a consequence of our actions and relationships and, with sufficient collective force, they can be modified.
