AI and the rise of ‘music laundering’


By Frederick Gummer

LPC student Frederick Gummer analyses the legal implications of artificial intelligence on the music industry


In April 2023, a track titled Heart on My Sleeve, claiming to feature Drake and The Weeknd, spread rapidly across TikTok and Spotify. In fact, it was not a collaboration between the two artists but an AI-generated song by a TikTok user who had trained an AI model on their musical styles. The incident, fuelled by the rapid dissemination such platforms allow, highlights an emerging challenge in copyright law known as ‘music laundering’.

Music laundering is the practice of presenting AI-generated songs as authentic collaborations between human artists, without proper disclosure. As AI increasingly infiltrates the creative process, the UK music industry faces new complexities in protecting artists’ rights without stifling innovation.

Copyright infringement

In the Heart on My Sleeve saga, Universal Music Group successfully requested the removal of the song from streaming platforms, reportedly relying on the unauthorised inclusion of producer Metro Boomin’s tag in the track, which gave a definitive basis for the takedown. The episode nonetheless underscores the complexities and uncertainties of copyright law as applied to AI-generated content. Specifically, it raises pressing questions: without a straightforward, copyright-protected element like a producer’s tag, what recourse will UK artists have against such imitation tracks, and how might existing copyright protections adapt to address these challenges?

Copyright infringement, as understood in US and UK law respectively, hinges on the creation of works that are “substantially similar” to the original or that copy the “whole or substantial part” of a copyrighted work. In the context of AI, this distinction becomes particularly complex. AI tools, designed to emulate the general sound and style of existing music without directly copying melodies or lyrics, navigate a fine line to avoid infringement claims. To this end, artists must demonstrate copyright infringement in one of two ways: through the inputs or through the outputs. The input question asks whether training AI on copyrighted music without explicit consent infringes copyright or falls within fair dealing exceptions (although the application of fair dealing in this context remains uncertain). The output question asks whether AI-created works, as potentially derivative works, infringe the original copyright holders’ exclusive rights over works based on their own.

The UK’s legislative stance

The UK’s current legislative stance on AI and copyright is characterised by a prohibition on using copyrighted material for AI training, a position that has seen notable shifts and challenges. The government initially considered an exception permitting AI training on copyrighted works but retracted the proposal in the face of strong opposition, highlighting the tension between innovation and copyright protection. This indecision reflects broader disputes, including failed attempts to establish a fair licensing framework and legal battles such as Getty Images’ claim against Stability AI. Given the swirling currents of regulatory change, the prevailing lack of clarity, and the anticipated difficulty of compelling tech companies operating generative AI models to comply with any forthcoming transparency regulations, it is all but certain that more AI-generated copycat tracks are on the horizon.

As a result, until there is reasonable clarity over the copyright status of the input data used to train generative models, enforcement will continue to rely on artists’ copyright in the outputs of these models. Moreover, any transparency requirements, such as those in the EU’s new AI Act, will be met with big tech’s inevitable heel-dragging and jurisdictional jiggery-pokery.

In reality, however, this approach comes with its own issues. Should an AI application replicate specific melodies or lyrics (or even a producer tag, as in the Heart on My Sleeve copycat), it might breach copyright. But pinpointing such direct mimicry can be challenging, as sophisticated AI tools are often engineered to emulate the overall style and ambience of music, partly to circumvent potential infringement claims. Even accounting for the Blurred Lines case, which established in the US that infringement can rest on the emotional or stylistic essence of a song, stylistic imitation may not meet the legal threshold in the UK. New works produced through AI tools or rendered with AI-powered voices are unlikely to breach copyright if they do not contain elements that are “substantially similar” to, or constitute a “substantial part” of, any protected original work.


So, for artists watching as an AI-generated version of their voice gains traction online, what is there to do? There has not been enough transparency over the data used to train these generative models to easily prove that infringing inputs were used to replicate an artist’s voice. Additionally, attempts to reverse-engineer outputs in the Getty Images v Stability AI case have produced images featuring unexpected, irrelevant and absurd elements, a process that not only yields comedic outcomes but requires significant time and expense. Equally, it is difficult to see how a voice or style could reach the legal threshold required to attract copyright protection.

A potential legal remedy in UK law

Moving forward, the UK legal system offers potential recourse through the principle of “passing off”, which prevents false endorsements or misrepresentations. While traditionally applied to visual representations and false endorsements, the tort could potentially be extended to cover AI-generated vocal imitations that suggest an artist’s (unauthorised) endorsement or participation.

The application of passing off in cases like Irvine v TalkSport, where a celebrity’s image was used without permission, sets a precedent. That ruling requires, first, that the celebrity have significant reputation or goodwill at the time of the incident and, second, that the unauthorised use of their image mislead a substantial part of the target market into believing the celebrity endorsed the product. Such claims are uncommon and hinge on the particular facts, as illustrated by Rihanna’s victory over Topshop. There, the court sided with Rihanna not on the basis of a broad image right, but because her well-documented endorsement history could lead many Topshop customers to mistakenly think she had approved the use of her image on T-shirts, when in fact she had not.

Given this context, one could envision a flexible interpretation of these principles being applied to an AI-generated track imitating a well-known artist with a distinctive voice and production style. However, this approach has yet to be tested against synthetic voices, and its effectiveness in that context remains uncertain.

A potential legal remedy in US law: California

To see how such an approach might work in practice, it is worth considering the legal position in another hotspot jurisdiction for music litigation: California. Its legal landscape provides clearer protection for artists through the right of publicity, which recognises the unauthorised commercial use of an artist’s distinctive voice as a violation.

This was established in the landmark Midler v Ford Motor Co case, where the use of a Bette Midler soundalike in a commercial without her consent was held to infringe her publicity rights. The principle was recently invoked in Rick Astley’s lawsuit against Yung Gravy over an imitation of Astley’s voice, suggesting that California’s right of publicity could offer a pathway for actions against vocal imitations made by AI.

While the Astley case involved human imitation, its implications for AI-generated content are significant, offering a potential legal remedy for artists against unauthorised commercial use of their vocal identity. A successful expansion of the Midler judgment to cover any commercial purpose, rather than solely false endorsements, may provide a window into how Irvine could be interpreted in the UK, should it be tested. In turn, this may offer a more realistic option for high-profile UK artists looking to protect their intellectual property rights as this area develops.

To conclude, although the music industry has a history of catastrophising with each major paradigm shift, such as during the introduction of music streaming, the concerns about generative AI and its potential for ‘music laundering’ are not without merit. The existing patchwork of copyright protections does not provide adequate safeguards for artists against copycats. However, there is potential for developments that could enable well-known artists to challenge these imitations through passing-off claims.

Frederick Gummer is an LPC student at The University of Law with interests in entertainment law, copyright and artificial intelligence.

