Should AI be given legal personhood? New Law Commission paper raises ‘radical’ possibility


By Legal Cheek

No legal status… for now


A new Law Commission discussion paper has floated the once sci-fi idea of giving artificial intelligence (AI) systems their own legal personality — meaning they could, in theory, be sued, held liable for harm or even pay damages.

The paper, titled AI and the Law, explores the legal challenges posed by the rise of autonomous, adaptive AI, including who should be liable when AI systems act independently and cause harm. While the paper stops short of proposing specific reforms, it suggests that a “potentially radical option” could be “granting some form of legal personality to AI systems”.

Currently, AI cannot be held legally liable as it has no legal status. But with AI systems becoming increasingly sophisticated and capable of completing complex tasks with little or no human input, the Law Commission warns that “liability gaps” could emerge where “no natural or legal person is liable for the harms caused by, or the other conduct of, an AI system”.

The paper states: “Current AI systems may not be sufficiently advanced to warrant this reform option. But given the rapid pace of AI development, and the potentially increasing rate of pace of development, it is pertinent to consider whether AI legal personality requires further discussion now, in the event that such highly advanced AI arrives in the near future.”

Legal personality, the capacity to hold rights and obligations and to sue or be sued, is currently limited to natural persons (humans) and legal persons (such as companies). Extending it to AI systems would be unprecedented, and the Commission acknowledges this would represent a significant shift in legal thinking.


The core problem arises when AI acts autonomously, making decisions that cannot easily be traced back to a developer or user. The Commission points out that “AI systems do not currently have separate legal personality and therefore can neither be sued or prosecuted”.

In such cases, victims might struggle to obtain compensation, or could be left requiring “assistance at public expense”. The Commission warns that this legal uncertainty could also hinder innovation, for instance by making AI-related risks harder to insure.

While the idea of AI personhood remains speculative, the Commission argues that now is the time to discuss it, given the “rapidly expanding use of AI” and its likely impact across areas including product liability, public law, criminal law and intellectual property.

In the meantime, the Commission plans to monitor the legal impact of AI across its wider law reform work. It has already looked at AI in automated vehicles and deepfakes, with projects underway on aviation autonomy and product liability.

For now, AI remains a tool, not a person, but as the Commission notes: “It is not yet clear that those same [legal] systems will apply equally well to new technology that is also intelligent to varying degrees.”
