
AI in court: rights, responsibilities and regulation


By James Bloomberg


Birmingham Uni student James Bloomberg explores the challenges that AI poses to the justice system and concepts of legal personhood


The advancement of artificial intelligence (AI) presents a complex challenge to contemporary legal and ethical frameworks, particularly within judicial systems. This article explores the evolving role of AI in the courtroom, drawing on recent high-profile cases involving fabricated legal citations and algorithmic hallucinations. It examines how AI’s integration into legal research and decision-making strains traditional understandings of accountability, responsibility and legal personhood. The discussion also considers AI’s broader societal impact.

The advancement of technology over recent years has produced a seismic shift in how societies interact, how businesses operate and how governments attempt to regulate that change. AI is now a driving force in how we live our lives and how students work at university, but its ability to make quick decisions raises red flags, especially for law firms. With AI now built into everyday services such as WhatsApp and X (formerly Twitter), a pressing question has emerged: should AI be granted legal rights? This discussion is far from hypothetical. Recognising AI as a legal entity would challenge existing legal frameworks and raise profound societal and ethical questions.

Article 6 of the Universal Declaration of Human Rights addresses legal personhood, the status by which an entity is able to hold rights and duties within a legal system. A legal person can own property, act and be held responsible for those actions, and exercise rights and obligations, for example by entering a contract. Corporations have long been granted legal personhood. Applying the same concept to AI systems such as ChatGPT, however, introduces complexities that transcend current legal definitions. The European Parliament has previously explored whether AI systems should be granted a form of legal status to address accountability issues, particularly in cases where harm is caused by autonomous systems.

In 2024, a Canadian lawyer used an AI chatbot for legal research and presented “fictitious” cases it had generated in a child custody matter before the British Columbia Supreme Court. The error was raised by the lawyers for the children’s mother, who could not find any record of the cases. The dispute concerned a father who wished to take his children on an overseas trip while locked in a separation dispute with their mother. The episode illustrates how dangerous AI systems can be, and why lawyers today need to use AI as an assistant, not a cheat sheet. But who is to blame here, the lawyer or the AI chatbot?


A major argument against granting AI legal personhood is that it would contradict fundamental human rights principles. The High-Level Expert Group on Artificial Intelligence (AI HLEG) strongly opposes this notion, emphasising that legal personhood for AI systems is “fundamentally inconsistent with the principle of human agency, accountability, and responsibility”. AI lacks consciousness, intent, and moral reasoning — characteristics that underpin legal rights and responsibilities. Unlike humans or even corporations (which operate under human guidance), AI lacks an inherent capacity for ethical decision-making beyond its programmed constraints.

Another central issue is accountability. If AI were granted legal rights, would it also bear responsibilities? Who would be liable for its actions?

In another case, a federal judge in San Jose, California, ordered the AI company Anthropic to respond to allegations that it submitted a court filing containing an AI-generated ‘hallucination’ as part of its defence against copyright claims brought by a group of music publishers. The allegation is that an Anthropic data scientist cited a non-existent academic article to bolster the company’s argument in a dispute over evidence. Greater clarity is needed as to whether liability for AI-related harm rests with developers, manufacturers or users.

In the UK, the allocation of liability for AI-related harm is primarily governed by existing legal frameworks: the common law of negligence and product liability principles. Under the Consumer Protection Act 1987, for example, manufacturers and producers can be held strictly liable for defective products that cause damage, which could theoretically extend to AI systems and software if they are deemed products under the Act. Developers and manufacturers may also face liability in negligence if it can be shown that they failed to exercise reasonable care in the design, development or deployment of AI systems, resulting in foreseeable harm. Users, such as businesses or individuals deploying AI, may be liable if their misuse or inadequate supervision of the technology leads to damage. While there is currently no bespoke UK statute specifically addressing AI liability, the Law Commission and other regulatory bodies have recognised the need for reform and are actively reviewing whether new, AI-specific liability regimes are required to address the unique challenges posed by autonomous systems.

Conferring legal personhood on AI may also obscure accountability, allowing corporations or individuals to evade responsibility by attributing actions to an “autonomous” entity.

Further, AI decision-making often lacks transparency because it operates through “black box” algorithms, raising serious ethical and legal concerns, particularly when AI systems make decisions affecting employment, healthcare or criminal justice. The European Parliament’s Science and Technology Options Assessment (STOA) study has proposed enhanced regulatory oversight, including algorithmic impact assessments, to address transparency and accountability. Granting AI legal rights without resolving these issues would only increase the risk of unchecked algorithmic bias.

The ethical implications extend beyond legal considerations. AI’s increasing autonomy in creative and economic spaces, such as AI-generated art, music and literature, has raised questions about intellectual property ownership. Traditionally, copyright and patent laws protect human creators, but should AI-generated works receive similar protection? In the UK, computer-generated works are protected under copyright law, yet ownership remains tied to the creator of the AI system rather than the AI itself: under section 9(3) of the Copyright, Designs and Patents Act 1988, the author of a computer-generated work is “the person by whom the arrangements necessary for the creation of the work are undertaken”. Copyright therefore subsists in AI-generated works, but the rights vest in the human creator or operator, not the AI system. Recognising AI as a rights-holder would challenge these conventions and necessitate a re-evaluation of intellectual property law.

A potential middle ground involves the implementation of stringent governance models that prioritise accountability without conferring rights upon AI. Instead of granting legal personhood, policymakers could focus on AI-specific liability structures, enforceable ethical guidelines, and greater transparency in AI decision-making processes. The European Commission has already initiated discussions on adapting liability frameworks to address AI’s unique challenges, ensuring that responsibility remains clearly assigned.

While AI continues to evolve, the legal framework governing its use and accountability must remain firmly rooted in principles of human responsibility. AI should be regulated as a tool, albeit an advanced one, rather than as an autonomous entity deserving of rights. Strengthening existing regulations, enhancing transparency, and enforcing accountability measures remain the most effective means of addressing the challenges posed by AI.

The delay in implementing robust AI governance has already resulted in widespread ethical and legal dilemmas, from biased decision-making to privacy infringements. While AI’s potential is undeniable, legal recognition should not precede comprehensive regulatory safeguards. A cautious, human-centric approach remains the best course to ensure AI serves societal interests without compromising fundamental legal principles.

While it is tempting to explore futuristic possibilities of AI personhood, legal rights should remain exclusively human. The law must evolve to manage AI’s risks, but not in a way that grants rights to entities incapable of moral reasoning. For now, AI must remain a tool, not a rights-holder.

James Bloomberg is a second year human sciences student at the University of Birmingham. He has a strong interest in AI, research and innovation and plans to pursue a career as a commercial lawyer.

The Legal Cheek Journal is sponsored by LPC Law.
