
Who owns AI-generated content?


By Ahmed Shawqie

Ahmed Shawqie examines how the rise of generative tools is unsettling long-held assumptions about creativity, authorship and rights across the arts

“I run, I run, I run.” These lyrics from the viral song I Run by Haven quickly spread across social media platforms and streaming services, and the song has since attracted millions of listeners, becoming a fixture of online short-form video content. Yet the song’s rise to prominence is not only a story about music; it is part of a much larger cultural shift in how content is created, shared and consumed in the digital age.

Before a song itself goes viral, countless other pieces of content do too. The short-form video ecosystem that platforms like TikTok, Instagram Reels, and YouTube have built thrives on an endless scroll of shareable moments: a cat inexplicably dancing to a pop beat, a kitten strumming a miniature guitar with suspicious timing, a dog reacting to its owner’s dramatic exit. These clips, often algorithmically amplified and passed between family members and friends with the simple caption “this is you,” represent one of the most dominant forms of modern media consumption. They are low in effort, high in emotional reward, and, crucially, almost entirely generated and distributed without any serious consideration of who owns them.

This is the same ecosystem into which I Run landed. The track’s popularity, however, was accompanied by significant controversy: much of the vocal production was reportedly generated using artificial intelligence (AI). The track was later removed from streaming services after Jorja Smith’s label, FAMM, alleged that AI had been used to make the vocals sound like her, raising questions about identity, consent, and royalties in the age of generative audio.

The controversy surrounding the song illustrates a rapidly developing legal issue facing the media sector as a whole. Whether it is an AI-cloned vocal mimicking a chart-topping artist, or an algorithmically generated cat video that accumulates millions of views overnight, a fundamental legal dilemma has emerged: who owns AI-generated content? This question is particularly significant for the media sector, where intellectual property rights underpin the monetisation, licensing, and distribution of creative works.

Want to write for the Legal Cheek Journal?

Find out more

The rise of AI-generated media content

Generative AI (Gen AI) systems are transforming how content is created and distributed. Tools capable of generating music, images, scripts and video content now allow creators to produce entire pieces of media with minimal to no human involvement. On platforms such as Instagram, creators increasingly publish videos generated almost entirely using AI tools.

Similarly, AI-generated music has become increasingly common. Software can replicate musical styles, generate melodies and even mimic the vocal characteristics of existing artists. While these technologies create new opportunities for creativity and efficiency, they also challenge traditional legal frameworks governing intellectual property.

Copyright law and the requirement of human authorship

Copyright law has traditionally assumed that creative works originate from human authors. Generative AI challenges this principle directly. In the United Kingdom, the Copyright, Designs and Patents Act 1988 (CDPA) addresses computer-generated works. Section 9(3) provides that where a computer generates a work, and there is no human author, the author is deemed to be “the person by whom the arrangements necessary for the creation of the work are undertaken.” Section 178 defines “computer-generated” as a work “generated by a computer in circumstances such that there is no human author of the work.”

Although this provision predates modern AI, it is frequently cited as a possible basis for protecting AI-generated content and is now at the centre of debate about who should own purely AI-generated works. Its applicability to contemporary generative models is far from settled and is currently under active policy review by the UK government. In fact, UK law still frames copyright around the idea that protected works must be an author’s “own intellectual creation”, and it is particularly difficult to demonstrate the necessary free and creative choices where no human author can be identified.

Recent commentary suggests that, in an AI context, the “person by whom the arrangements necessary for the creation of the work are undertaken” could be either the developer who designs and trains the model, or the user who configures and deploys it to achieve a specific output. Some scholars have argued that Section 9(3) should be repealed or fundamentally reformed.

In the United Arab Emirates (UAE), copyright is governed by Federal Decree-Law No. 38 of 2021 on Copyrights and Neighbouring Rights. Article 1 defines an “Author” as “the person who creates the work” and “Person” as any “physical or juristic person.” This framework means AI systems which lack legal personality cannot themselves qualify as authors. The legislation does not explicitly address AI-generated works.

The legal treatment of such content in the UAE therefore remains uncertain and untested in reported case law, and commentators suggest that works created solely by AI may not qualify for copyright protection under existing frameworks.


Implications for the media industry

For media companies operating across the UAE and the UK, the uncertainty surrounding authorship may complicate licensing, distribution agreements and IP enforcement. In the UK, this uncertainty has been highlighted in the government’s Copyright and AI consultation, which records concerns from both rights holders and AI developers about unclear rules for the training and use of AI models. In addition, the use of AI systems trained on copyrighted material may expose companies to infringement claims both for the training itself, which potentially infringes reproduction rights, and for specific outputs that are substantially similar to protected works, as illustrated by the High Court’s recent decision in Getty Images v Stability AI. Getty Images, one of the world’s largest stock photo agencies, licenses images to media companies, advertisers and publishers worldwide. Getty alleged that Stability AI had scraped millions of its copyrighted images to train its image-generation model without a licence.

The High Court found sufficient grounds to proceed to trial on claims including primary and secondary copyright infringement and trade mark liability but declined to resolve the central question of whether using copyrighted works to train AI models constitutes infringement. The case, therefore, leaves media companies without clear guidance on whether their existing content libraries are at risk.


Conclusion

The debate over I Run shows that AI-generated hits are no longer a thought experiment but a practical headache for rights holders and platforms. The same questions extend beyond chart-topping songs to the algorithmically generated cat videos and short-form clips that circulate daily across platforms, largely unexamined and unowned. In both the UK and the UAE, core copyright concepts, such as authorship, originality and ownership, were built for human creators, not for autonomous systems that can mimic voices, styles and formats at scale. In the UK, policymakers are openly questioning whether Section 9(3) of the CDPA and its protection for computer-generated works still make sense in an era of generative models, while cases such as Getty Images v Stability AI test the limits of existing infringement rules.

In the UAE, commentators speak of an “authorship gap”, in which works created solely by AI may fall outside copyright altogether because only natural or juristic persons can qualify as authors.

Until legislatures and courts in both jurisdictions provide clearer answers, media companies cannot assume that AI-generated content will fit neatly within familiar licensing and enforcement structures. Treating AI as a powerful but legally sensitive tool, rather than a plug‑and‑play replacement for human creativity, will be essential while the law catches up.

Ahmed Shawqie is a Dubai-based legal professional working in corporate and commercial law, with experience across the UAE and UK. His work involves drafting and reviewing commercial agreements, conducting legal research, and supporting advisory matters. He has a particular interest in media and entertainment law, with a focus on the legal frameworks shaping the creative and sports industries.
