
Copyright in the age of AI: The UK’s contentious proposal 


By Xin Ern Teow (Ilex)

First year law student at Leeds Uni, Xin Ern Teow (Ilex) analyses the UK’s proposals to resolve the tension between copyright and AI-produced content


What happens when cutting-edge technology collides with centuries-old concepts of creativity, ownership, and law? The UK government’s latest proposal to allow AI companies to use copyrighted works for training, set out in its Copyright and Artificial Intelligence consultation, has sparked fierce debate, raising questions about the future of intellectual property in the AI era.

Imagine a world where AI-generated novels outsell human-written ones, where iconic artworks inspire machine-crafted masterpieces, and where centuries of cultural heritage are fed into algorithms to create something entirely new. At the heart of this transformative vision lies a contentious question: Who owns the rights to this creativity, the machines, their makers, or the creators whose works serve as the foundation?

What’s on the table?

The UK government’s Copyright and Artificial Intelligence consultation is the most recent official initiative addressing the intersection of AI and copyright law. The consultation sought public input on how to modernise the UK’s legal framework so that it supports both the creative industries and the AI sector, fostering innovation while protecting creators’ rights.

At the heart of this consultation lie three key objectives:

  1. Control: The framework seeks to ensure that rights holders retain control over their works. This means creators should have the ability to license, monetise, and safeguard their content when used by AI technologies.
  2. Access: AI developers require access to extensive datasets to train their models effectively. The government proposes streamlined access to copyrighted materials to prevent legal barriers from stifling technological progress.
  3. Transparency: The framework aims to establish greater transparency, ensuring all stakeholders — creators, developers, and consumers — understand how AI systems use copyrighted content and generate outputs.

To achieve these goals, the government proposes an “opt-out” system: AI companies could use copyrighted works unless rights holders explicitly object, reducing administrative hurdles for developers. It does, however, place the burden of action on creators, who must proactively protect their intellectual property.

The creative industry backlash

Unsurprisingly, the proposal has ignited strong opposition from the creative industries, which argue that the “opt-out” system threatens their livelihoods and the value of intellectual property. At the heart of this resistance is the “Make It Fair” campaign, backed by a range of news organisations, which underscores the demand for equitable treatment and fair compensation for creators.

The potential loss of revenue is just one part of the broader concern. Creators fear the long-term ramifications of AI on the entire creative ecosystem. If AI systems are allowed to harvest copyrighted works without compensating creators or offering any recognition, it could lead to a “race to the bottom”, where the value of human creativity is overshadowed by algorithmically generated content. In this scenario, emerging creators would struggle to profit from their work, as the very worth of their intellectual property would diminish in an AI-dominated marketplace.


Many critics argue that this shift could foster a monopolistic environment in which only a handful of large tech companies profit from AI-generated content, while individual creators are left with little control or benefit. This concern is poignantly illustrated by the silent album Is This What We Want?, a collaborative protest from over 1,000 musicians, including iconic figures such as Kate Bush and Damon Albarn. The album’s track titles collectively spell out the message, “The British government must not legalise music theft to benefit AI companies”. The symbolic gesture underscores that the issue goes beyond financial gain: it is about recognition and respect for human artistry in a world increasingly dominated by machines.

While there is broad support for fostering AI innovation, many creatives argue that the government’s approach needs to strike a more careful balance. If these concerns go unaddressed, the unrest within the creative community suggests the proposal may face legal challenges and erode public trust in the ethical development of AI.

The benefits of the proposal

While the proposal has faced significant backlash, the UK government has strongly defended its stance, arguing that the benefits far outweigh the concerns. From the government’s perspective, this initiative is crucial for fostering the growth of AI technology, ensuring the UK remains competitive on the global stage, and contributing to economic growth.

With access to vast datasets, AI models can improve and innovate faster, benefiting industries like healthcare, education, and finance. The government believes this will enable AI firms to create groundbreaking technologies without the delays of seeking permissions for every dataset.

Furthermore, by facilitating AI development, the UK aims to attract investment, create jobs, and position itself as a leader in AI research. In a global race for AI supremacy, providing open access to data can help the UK remain competitive, particularly against tech giants in the US and China.

Additionally, AI innovation can revolutionise industries, from self-driving cars to personalised medicine. By supporting AI companies, the UK hopes to foster new industries and technological advancements, which would contribute to long-term national growth and improved societal outcomes.

While the proposal acknowledges creator concerns, the government argues that promoting AI innovation justifies easier access to data. If implemented with a balanced legal framework, the UK’s approach could serve as a model for other nations grappling with AI and copyright challenges.

Conclusion

To sum up, the UK government’s proposal to allow AI companies to train their algorithms on copyrighted works without prior permission highlights the ongoing tension between fostering technological innovation and protecting creators’ rights. While the proposal aims to accelerate AI development and bolster economic growth, it raises critical concerns about the fairness of intellectual property distribution and the potential devaluation of human creativity.

Xin Ern Teow (Ilex) is a first-year law student at the University of Leeds with a strong passion for making a positive impact through volunteering. Her interests also extend to negotiation and exploring strategies for conflict resolution and collaborative problem-solving.

The Legal Cheek Journal is sponsored by LPC Law.
