Every few weeks for the past half decade, a new headline proclaims that AI is about to replace musicians, composers, producers, and engineers. As someone deeply involved in music production across various projects, from inception to release, I find these narratives not only overblown but also fundamentally mistaken about what music actually is.
Artificial intelligence has undoubtedly made impressive strides in recent years. From algorithmic composition tools to machine learning-based mastering services, AI is touted as part and parcel of the audio and music production landscape. But at least for the time being, AI in music is a tool, not a creative individual. And even if AI technologies eventually catch up with humans, years or decades from now, I believe AI should still only aid creative expression, not assume the profession of a composer or producer.
Music is a profoundly emotional language. It communicates feelings, stories, and experiences that are uniquely human. No matter how advanced algorithms become or how cutting-edge the technologies get, they ultimately operate on patterns, datasets, and programmed logic at their core. They can mimic the surface of emotion, but they can't originate it. The soul of music is inherently tied to human experience; it is deeply personal. No line of code can replicate the moment a guitarist bends a note just slightly out of tune because it "feels" right. I'm neither an AI detester nor a traditionalist. I was heavily involved with technology and new research before pivoting to audio engineering and music technology, and I have always been intrigued by the newest developments and how they could serve the convergence of artistic and technological expression.
In my work at Sterling Sound, Eventide Audio, Rockstar Games, Jungle City Studios, NYU, and elsewhere, I've seen how AI can assist the creative process across entirely different workflows. Intelligent EQs can suggest tonal balances and curves. Adaptive reverbs can respond dynamically to an arrangement. AI-assisted file management can simplify navigating sample libraries containing thousands of samples and loops. These tools save time, spark ideas, and sometimes catch things that human ears might miss after long sessions. They help streamline workflows, allowing engineers and artists to focus on the bigger picture rather than getting bogged down by technical minutiae.
But they should not replace the role of a producer deciding to push a vocal slightly ahead of the beat to create tension. They shouldn’t replace a mastering engineer choosing to leave a small imperfection untouched because it gives a track an identity. These decisions aren’t based on logic—they’re based on instinct, culture, history, and personal expression. A computer can tell you what the “optimal” frequency balance might be, but it can’t tell you when breaking the rules will create something unforgettable, even pioneering.
I've also had conversations with artists who fear that AI will homogenize music, making everything sound the same. That's a valid concern, particularly when companies generate millions of dollars by plundering copyrighted music catalogs. This is where the line blurs and AI starts to become unprincipled. Several companies have released remarkably accurate text-to-music generation models that give users exactly what they ask for. That may be useful for a bedroom producer who can't afford to hire a Grammy-winning musician for their debut EP, but it takes away the satisfaction humans inherently get from trying their hand at something unknown and conquering it themselves. You would never get the gratification, the "stank face," of a slapping bass line that took you five days to perfect if you simply typed a text prompt to get something vaguely similar. In the current AI bubble, companies are trying to recover the heavy development costs they invested in these models at the expense of creators. But the beauty of human creativity is that it's inherently rebellious. AI might suggest a chord progression based on millions of copyrighted popular songs, but an artist's experience can lead them to throw that suggestion out the window and invent something entirely new.
The danger comes when we view AI as a shortcut to creativity instead of an aid to it. If we rely on AI to generate melodies, lyrics, or arrangements wholesale, we risk losing the nuances that make music meaningful. At best, AI should offer raw material or inspiration for human artists to refine and elevate. It should be a partner in the process, not the process itself. Understanding the strengths and limitations of AI allows creatives to leverage its potential without sacrificing authenticity: embracing the efficiency AI offers while preserving the emotional grit that defines great art.
Ultimately, the future of music rests on how people embrace creative collaboration with newer technologies. AI can democratize access to professional-grade tools, assist with tedious tasks, and even open new avenues for exploration. But it's the human spirit, the imperfections, the risks, the emotion, that will always be at the heart of great music.
AI in music is still in its nascent stages yet is being aggressively deployed, and it must be approached with caution. Proceeding carefully and striking an equilibrium between artistic integrity and technological adoption will be key to a more equitable and contented music industry. AI is a brush, not the painter. It's here to help us tell our stories better, not to tell them for us.
Atharva Dhekne
Mastering Production Engineer at Sterling Sound
Audio Engineer at Eventide Audio