The problem with Xania Monet: The AI-generated singer who has amassed millions of followers
The emergence of an AI-generated R&B artist on the most influential charts suggests that the public is willing to accept artificial songs
The story of Xania Monet has been touted as a small technological revolution: an AI-generated singer who has broken into the influential Billboard charts and signed a $3 million contract. Behind the avatar is a real author, Mississippi poet Telisha Nikki Jones, and a generative AI engine, Suno, capable of converting text into a smooth voice that fits American radio. Her ballad, How Was I Supposed to Know?, debuted at number 30 on the Adult R&B Airplay chart after reaching number one on the digital R&B sales chart, going viral on TikTok, and accumulating more than seven million streams on Spotify. As of the publication of this article, it has almost 12.5 million plays.
But what is it about Xania Monet that connects with the public if she doesn’t physically exist? It’s what commercial music has been honing for decades: an intimate tone, a tale of fragility, and a production that mimics 2000s R&B, aimed somewhere between Brandy, Aaliyah, and Toni Braxton. The song speaks of growing up without a father figure and learning to love oneself. The voice sounds human, the pain seems relatable, and it fits perfectly into any late-night playlist. But it all stems from a statistical calculation about which emotional structure works best.
Authenticity is an effect: if the voice moves you, its origin matters little. In fact, for a large segment of Generation Z, authenticity is no longer defined by who creates a work. According to a 2024 study, almost half of young people in the U.S. were “open to AI-generated music as long as it made them feel something.” The question is no longer “Is this artist real?” but rather “Does this artist match my mood?”
This shift has consequences: if all we ask of songs is that they match our mood, wouldn’t it be enough to optimize the algorithm that decides how loneliness, nostalgia, or overcoming adversity should sound? Xania Monet reveals, in a way, that the romantic ideal of the artist has its days numbered. There’s another unsettling detail: the song meticulously reproduces an archetype linked to Black and female music. A woman without a father who transforms trauma into lyrics. The AI didn’t invent this story; it learned it from a sound archive full of songs that, for decades, have encoded racialized and gendered suffering as proof of authenticity. When an AI model generates this narrative, it simply reinforces the same patterns that the industry has exploited for years.
Meanwhile, Walk My Walk, a country song created with AI, has also reached number one. Singer Emily Portman reported that an album she hadn’t recorded appeared on Spotify under her name. The same thing happened to country musician Blaze Foley, whose Spotify profile was updated with new songs... even though he’s been dead since 1989. Countless YouTube channels feature mashups, unauthorized interpolations, and AI-generated songs that combine two or more artists. Others fabricate nostalgic soul hits that never existed.
At the same time, signs of something even stranger are beginning to emerge: AI-generated culture is being used to train more AI-generated culture. Billboard has documented cases of songs created with the help of generative models that have registered an unusual volume of paid downloads. There’s a simple yet disturbing hypothesis: some creators download these files because they serve as “clean” material for training new generative models. In other words, artificial songs not only compete with those created by people, they also become technical feedstock for the next generation of artificial songs. Generative AI tends toward these closed loops, and many theorists fear what some call Model Autophagy Disorder: that AI models will cannibalize themselves.
In this scenario, charts and specialized media act less as observers than as legitimizing forces. By declaring Xania Monet’s radio debut “historic,” Billboard assigned cultural value to something that could have been treated as a mere technical curiosity. If the only yardstick is audience response and platform traction, anything that works will end up being equated with art. This situation is convenient for the market, but it erodes a fundamental distinction: not everything that is played and consumed expands our cultural experience.
Does this mean we should ban AI in music or bar avatars from the charts? Not necessarily. The tool itself isn’t the problem. The delicate point lies in how it combines with an ecosystem that has already been pushing toward homogeneity for years. AI, in this case, amplifies what we ourselves have already validated. If rankings, awards, and critics give up distinguishing between a work that takes risks and a product that simply fits the mold, the way is clear for this noise to fill the entire space. In the end, the case of Xania Monet matters less for what she is (a well-designed avatar with good, generic songs) than for what it reveals about our current consumption habits and our extremely high tolerance for simulation.
That a voice without a real body can be perceived as sufficient speaks volumes about the level of exhaustion we’ve reached. It’s not about yearning for a “more authentic” past, but rather about asking ourselves what we expect from music in a present where almost everything can be automated. If we accept that everything else is irrelevant as long as the song works, the industry will continue to push in that direction.
Musical justice (or lawless land)
Currently, the legal framework is progressing in fits and starts. The U.S. Copyright Office insists that AI-generated works cannot be registered as copyrighted creations, and Congress is debating laws that aim to protect performers’ voices and likenesses from cloning. On paper, the message is clear, but the reality is quite different. Record labels are suing companies like Suno for using their catalogs while at the same time trading in products created by these very models. The result is a gray area where everyone claims to defend artists while positioning themselves to avoid being left out of the next distribution of royalties.