
Folk musician Murphy Campbell discovered AI-generated versions of her songs uploaded to Spotify under her name without permission, exposing critical vulnerabilities in streaming platform verification and music copyright enforcement. Her case highlights the growing threat AI fakery and copyright trolling pose to independent artists.
Murphy Campbell, an independent folk musician, found herself at the center of one of the most unsettling collisions between artificial intelligence and music copyright yet documented. Earlier this year, she discovered that several songs had appeared on her official Spotify profile — tracks she never authorized, featuring vocals that sounded like her but were subtly, disturbingly wrong.
The case has since become a cautionary tale for independent artists everywhere, highlighting how the combination of generative AI tools and a creaking copyright infrastructure can leave creators vulnerable to exploitation from multiple directions at once.
In January 2025, Campbell noticed unfamiliar entries populating her Spotify artist page. The songs themselves were recordings she recognized — performances she had originally shared on YouTube. But the versions on Spotify had been altered. Someone had apparently scraped her YouTube content, processed the audio through an AI voice-cloning or vocal manipulation tool, and then uploaded the modified tracks to major streaming platforms under Campbell’s own name.
When one of the suspicious tracks — a rendition of the traditional folk song “Four Marys” — was analyzed using two separate AI-detection tools, both flagged it as likely generated or substantially modified by artificial intelligence, strongly suggesting unauthorized AI manipulation of Campbell’s original performances.
For Campbell, the discovery was jarring. As an artist working within the folk tradition — a genre built on trust, community, and shared heritage — she had assumed her relatively modest profile would shield her from the kinds of AI abuse that had, until then, mostly made headlines when mainstream pop stars and major-label acts were the targets.
Campbell’s situation exposes a convergence of failures that should alarm every working musician, regardless of genre or audience size. Here’s why this matters:
The music industry’s copyright enforcement apparatus was designed for an era of physical distribution and clearly identifiable infringers. It was never built to handle a world where anyone with a laptop can generate plausible imitations of real artists and distribute them globally within minutes.
Platforms like Spotify rely heavily on automated systems and third-party distributors like DistroKid, TuneCore, and CD Baby to handle uploads. These services use metadata — artist names, track titles, ISRCs — to organize content, but they perform minimal verification of whether the person uploading actually has the right to use a particular artist identity.
The result is an environment ripe for abuse. Bad actors can upload AI-manipulated songs under a real artist’s name and begin collecting streaming royalties before anyone notices. Meanwhile, the legitimate artist may face counter-claims or find themselves trapped in dispute processes that were designed to mediate conflicts between labels, not to protect individuals from AI-powered identity theft.
Campbell’s experience is not an isolated incident. Over the past two years, AI-related music fraud has surged dramatically. In 2023, an AI-generated track mimicking Drake and The Weeknd went viral before being pulled from platforms. By 2024, reports of AI covers flooding Spotify and Apple Music had become routine.
What makes the Campbell case distinct is the intersection of AI fakery with copyright trolling — the practice of weaponizing copyright claims for financial gain or harassment. When an independent folk artist becomes a target for both simultaneously, it reveals just how little protection exists for creators who lack institutional backing.
Industry groups like the Recording Industry Association of America (RIAA) and the Human Artistry Campaign have lobbied for stronger AI regulations, but legislative progress has been slow. The proposed No FAKES Act in the United States aims to establish federal protections against unauthorized AI replications of voice and likeness, but it has yet to pass.
Music technology researchers and copyright attorneys have increasingly sounded alarms about this exact scenario, and their consensus is sobering: the existing verification and enforcement systems simply were not built to handle impersonation at the speed and scale that generative AI makes possible.
Murphy Campbell’s ordeal is far from resolved, and the broader systemic issues it illuminates will take years to address fully. In the near term, several developments are worth watching:
Platform reforms: Spotify has introduced some artist verification improvements, but pressure is mounting for more robust identity-protection measures, particularly for independent musicians without label representation.
Legislative action: The No FAKES Act and similar proposals in the EU’s AI Act framework could eventually provide legal remedies, though enforcement across international borders remains a formidable challenge.
Community response: Within the folk music community, Campbell’s story has already sparked broader conversations about digital rights, platform dependence, and whether independent artists need collective advocacy organizations specifically focused on AI threats.
Murphy Campbell set out to share traditional folk songs with the world. Instead, she became an unwilling case study in everything that’s broken about how we protect musicians in the age of generative AI. Her experience should serve as a wake-up call — not just for streaming platforms and legislators, but for every artist who assumes they’re too small to be a target. In 2025, no one is too small. The tools are too accessible, the guardrails too weak, and the incentives for bad actors too strong.
Until copyright law catches up with the technology, artists like Campbell are left fighting a battle on two fronts: against the people who steal their voices, and against the systems that were supposed to protect them.