Artificial intelligence conquers Spotify with viral hits and triggers a crisis in the music industry.

Artificial intelligence (AI) has already conquered the written word and the digital image. Now it is tuning its ear and aiming for the next stage: music. The unsettling thing is that, although these are algorithms stringing together notes with mathematical precision, they achieve something as human as creating chords capable of moving those who listen to them.
It is becoming increasingly difficult for the ear to distinguish a human touch from an artificial one. Meanwhile, many platforms are promoting imaginary artists followed by thousands of fans. Are we witnessing the dawn of a new musical era, or the decline of the traditional composer?
So-called diffusion models are having a profound impact on creative fields. By transforming random noise into coherent patterns, they can generate melodies or video clips guided by text prompts or other input data.
In January 2025, only about one in ten tracks uploaded to Deezer was the work of an AI. But the pace has accelerated: today, around 20,000 AI-generated songs are uploaded to the platform every day, almost double the rate of just six months earlier. If nothing slows this trend, within two years machines could account for 70% of the music catalog.
The Velvet Sundown, a synthetic band that already has three albums.
One of the blind spots of this trend is the lack of transparency. There's no system that allows you to know for sure whether what's playing on your playlist was created by a robot or a person.
The responsibility of investigating the origin of each song shouldn't fall on the listener: access to credits should be easily available, without forcing anyone to become a music detective.
The controversy erupted when it was revealed that The Velvet Sundown, a band that had gone viral in a matter of weeks with over a million streams on Spotify, wasn't real. Everything from their songs to their promotional images and backstory had been created by an algorithm.
The episode sparked a debate about authenticity in the digital age. Music industry experts argue that streaming platforms should be legally required to label AI-generated songs so listeners know exactly what they're hearing.
After being hailed as a breakout band in several specialist magazines, and after a handful of supposed interviews with their vocalist, the absence of any convincing information led to the discovery that The Velvet Sundown was a copy-and-paste experiment. To avoid confusion, Spotify now lists them as "synthetic music" in their bio.
The impact not only shakes the foundations of creation, but also forces us to rethink concepts such as authorship, originality, and intellectual property rights. The question is no longer whether AI can create art, but rather how we will coexist with it in the creative landscapes of the future.
It's well established that Spotify isn't always willing to label music as AI-generated and has been criticized on several occasions for distributing playlists featuring music from "ghost artists."
Among the most suspicious cases is Jet Fuel & Ginger Ales, a band that boasts the "verified artist" badge and has over 414,000 monthly listeners. However, there is no trace of their existence outside the platform, fueling suspicions that this is a laboratory product.
This isn't the only case. Bands like Awake Past 3 and Gutter Grinders have also raised eyebrows: they have thousands of fans, yet their voices sound strangely artificial, their logos look like they're straight from a generic template, and there's no personal information to offer any clues.
Udio, one of the platforms that lets anyone create songs without knowing anything about music.
The music industry has just launched a legal offensive against Suno and Udio, the two most innovative platforms for AI-powered music creation. A consortium of labels filed a lawsuit in a US federal court, accusing them of copyright infringement on a scale they describe as "massive."
This is why Sony Music, Warner Music, and Universal Music, along with the other RIAA-backed plaintiffs, consider it entirely feasible that "machine-generated sounds" could end up competing with those genuinely created by humans.
The music industry still bears the scars left by Napster, and the rise of AI-generated music is once again setting off alarm bells. This time the threat isn't piracy but a new kind of competition: songs created by platforms like Suno or Udio that can sound dangerously similar to copyrighted works, yet pay royalties to no one.
In this scenario, traditional business models face a dilemma: how to protect the value of content when creation no longer depends on an artist, but on an algorithm.
And while the widespread use of these applications would lower production costs (in principle, anyone could become a successful artist), at the heart of the debate is a real concern: the sustainability of the market and the value placed on artistic work.
In a saturated universe, where some 100,000 new songs are released every day, the emergence of these platforms poses a new challenge: how can anyone stand out among voices that don't breathe, yet sound increasingly human?
“Our technology is transformative; it's designed to generate entirely new results, not to memorize and spit out pre-existing content. That's why we don't allow user instructions that reference specific artists,” Suno CEO Mikey Shulman said in a statement.
Suno, the other competitor in the digital music race.
The results that Udio and Suno are posting point to a bold conclusion: there is a growing audience that doesn't care whether the music they listen to was created by hand or by an app.
On these platforms, some profiles already function as real artist pages, with thousands of followers and songs generated entirely by AI, accompanied by fictional portraits also produced by algorithms.
But behind these projects aren't traditional musicians, but rather people who master marketing strategies, curate styles, and assemble pieces that are impossible to attribute to a single author. In this new ecosystem, traditional notions of authorship dissolve, and the line between creation and reproduction begins to blur.
The method used by Suno and Udio is similar to the way humans learn: by absorbing data. Their training is based on the analysis of thousands of songs from different genres, styles, and eras.
From this universe of sound, the system detects patterns, structures, and harmonies, and repurposes them to generate new compositions. In essence, it's not a mechanism so different from the human process of assimilating by listening, comparing, and reconstructing what already exists. The difference is that, in the machine's case, it does so at an extraordinary scale and speed.
Unlike a band that composes in layers—first the piano, then the vocals, then the drums—a diffusion model doesn't follow a sequential process. Instead of building piece by piece, it generates all the elements of the song simultaneously.
It does this through visual logic: it translates the complexity of audio into a waveform, a graphical representation that shows the amplitude of the sound as a function of time. Since these shapes—or their variants, such as spectrograms—can be processed as images, they become ideal raw material for AI models.
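To make that idea concrete, here is a minimal Python sketch of the waveform-to-image step just described. The test tone, sample rate, and window size are illustrative assumptions, not details of Suno's or Udio's actual pipelines:

```python
import numpy as np
from scipy.signal import spectrogram

# Illustrative stand-in for a real recording: one second of a 440 Hz tone.
fs = 22050                                    # sample rate (samples per second)
t = np.linspace(0, 1.0, fs, endpoint=False)
waveform = 0.5 * np.sin(2 * np.pi * 440 * t)  # amplitude as a function of time

# Translate the waveform into a spectrogram: a 2-D grid where rows are
# frequency bins, columns are time frames, and values are sound energy.
freqs, frames, sxx = spectrogram(waveform, fs=fs, nperseg=1024)

print(sxx.shape)  # a 2-D array, i.e. an "image" a generative model can process
```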
The system is trained on millions of musical fragments labeled with descriptions, and then works in reverse: it starts with random noise and, based on the user's instructions, "paints" a new song until the final waveform makes sense. Thus, what appears to be spontaneous art is actually a statistical reconstruction guided by text.
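As a rough sketch of that reverse process, the loop below follows the standard denoising-diffusion recipe: start from pure noise and repeatedly subtract the noise a trained network predicts. The step count, noise schedule, grid size, and the `predict_noise` placeholder are all assumptions for illustration; a real system would replace the placeholder with a large neural network conditioned on the text prompt:

```python
import numpy as np

T = 50                              # number of denoising steps (assumed)
betas = np.linspace(1e-4, 0.02, T)  # noise schedule, as in standard DDPMs
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)

def predict_noise(x, step, prompt):
    """Placeholder for the trained network that guesses the noise in x.

    A real model would be conditioned on `prompt`; this stand-in just
    returns zeros so the loop runs end to end.
    """
    return np.zeros_like(x)

rng = np.random.default_rng(0)
x = rng.standard_normal((128, 128))  # start from pure noise: a "blank canvas"

for step in reversed(range(T)):
    eps = predict_noise(x, step, prompt="dreamy synth-pop, female vocals")
    # Remove the predicted noise (the "painting" step)...
    x = (x - betas[step] / np.sqrt(1.0 - alpha_bars[step]) * eps) / np.sqrt(alphas[step])
    # ...then re-inject a little randomness on every step but the last.
    if step > 0:
        x += np.sqrt(betas[step]) * rng.standard_normal(x.shape)

# `x` now plays the role of a finished spectrogram; a separate stage
# would convert it back into audible sound.
```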
Clarín