The blame game: who takes the heat when AI messes up?

By Megha Nautiyal

Megha Nautiyal, a final-year law student at the University of Delhi, explores the relationship between liability and technology


Imagine a scenario where an AI-powered medical diagnosis tool misinterprets critical symptoms, harming a patient. Or consider an autonomous drone, operated by an AI algorithm, that unexpectedly causes damage to property. As the capabilities of AI systems expand, so too does the complexity of determining legal responsibility when they err. Who should bear the responsibility for such errors? Should it be the developers who coded the algorithms, the users who deployed them, or the AI itself?

In the world of cutting-edge technology and artificial intelligence, we find ourselves at the cusp of a new era marked by revolutionary advancements and unprecedented possibilities. From self-driving cars that navigate busy streets with ease to sophisticated language models capable of composing human-like prose, the realm of AI is reshaping our lives in extraordinary ways. However, with the awe-inspiring capabilities of AI comes an equally daunting question that echoes through courtrooms, boardrooms, and coffee shops alike — who is legally responsible when AI makes mistakes?

Assigning liability: humans vs AI

Unlike human errors, AI errors can be complex and challenging to pinpoint. It’s not as simple as holding an individual accountable for a mistake. AI algorithms learn from vast amounts of data, making their decision-making processes somewhat mysterious. Yet, the concept of holding an AI legally responsible is not just science fiction. In some jurisdictions, legal frameworks are evolving to address this very conundrum.

One line of thought suggests that the responsibility should lie with the developers and programmers who created the AI systems. After all, they design the algorithms and set the initial parameters. However, this approach raises questions about whether it is fair to hold individuals accountable for AI decisions that may surpass their understanding or intent.

Another perspective argues that users deploying (semi-)autonomous AI systems should bear the responsibility. They determine the scope of AI deployment, its applications, and the data used for training. But should users be held liable for an AI system’s actions when they may not fully comprehend the intricacies of the algorithms themselves?

Is AI a legal entity?

An entity is said to have legal personhood when it is a subject of legal rights and obligations. The idea of granting legal personhood to AI, thereby making the AI entity itself liable for its actions, may sound like an episode of Black Mirror. However, some scholars and experts argue that as AI evolves, it may gain a level of autonomy and agency that warrants a legal status of its own. This approach sparks a thought-provoking discussion on what it means to recognise AI as an independent entity and the consequences that come with it.

Another question emerges from this discussion — is AI a punishable entity? Can we treat AI as if it were a living, breathing corporation facing consequences for its actions? Well, as we know, AI is not a sentient being with feelings and intentions. It’s not a robot that can be put on trial or sent to AI jail. Instead, AI is a powerful technology—a brainchild of human ingenuity—designed to carry out specific tasks with astounding efficiency.

In the context of law and order, AI operates on a different wavelength from corporations. While corporations, as “legal persons,” can be held accountable and face punishment for their actions, AI exists in a unique domain with its own considerations. When an AI system causes harm or gets involved in something nefarious, the responsibility is not thrust upon the AI itself; instead, the spotlight turns to its human creators and operators: the masterminds who coded the algorithms, fed the data and set the AI in motion. So, while AI itself may not be punished, the consequences can still be staggering. Legal, financial and reputational repercussions can rain down upon the company or individual responsible for the AI’s misdeeds.


Global policies and regulations on AI

In the ever-evolving realm of AI, a crucial challenge is ensuring that innovation goes hand in hand with accountability. Policymakers, legal experts and technologists must navigate uncharted territory as they craft appropriate regulations and policies for AI liability.

In 2022, AI regulation efforts reached a global scale: an analysis of the legislative records of 127 countries shows a marked rise in AI-related lawmaking. There’s more to this tale of international collaboration. A group of EU lawmakers, fuelled by the need for responsible AI and increasing concerns surrounding ChatGPT, called for a summit in early 2023, summoning world leaders to unite and brainstorm ways to tame the wild stallion of advanced AI systems.

The AI regulation whirlwind is swirling with intensity. Stanford University’s 2023 AI Index reports that 37 AI-related bills were passed into law worldwide in 2022. The US charged ahead with nine laws, while Spain and the Philippines passed five and four respectively.

In Europe, a significant stride was taken with the proposal of the EU AI Act in April 2021. The Act aims to classify AI tools according to their level of risk, ensuring careful handling of each application. Moreover, the European Data Protection Board’s task force on ChatGPT signals growing attention to the privacy concerns surrounding AI.

The road ahead: what should we expect?

As we journey toward a future shaped by AI, the significance of policies regulating AI grows ever more profound. Policymakers, legal experts and technology innovators stand at the crossroads of innovation and ethics. The spotlight shines on the heart of the matter: who should answer for AI’s mistakes? Is it the fault of the machines themselves, or should the burden fall upon their human creators?

In this unfolding saga, the road is paved with vital decisions that will shape the destiny of AI’s legal accountability. The future holds an alluring landscape of debates, where moral dilemmas and ethical considerations abound. Striking the right balance between human ingenuity and technological advancement will be the key to unlocking AI’s potential while safeguarding against unintended consequences.

Concluding thoughts

As we continue to embrace the marvels of AI, the captivating puzzle of legal accountability for AI errors looms large in this ever-evolving landscape. The boundaries between human and machine responsibility become intricately woven, presenting both complex challenges and fascinating opportunities.

In this dynamic realm of AI liability, one must tread carefully through the legal intricacies. The question of who should be held accountable for AI errors must be answered on a case-by-case basis. The interplay between human intent and AI’s decision-making capabilities creates a nuanced landscape where the lines of liability are blurred. Courts and policymakers must grapple with novel scenarios and evolving precedent as they navigate this new challenge.

Megha Nautiyal is a final-year law student at the Faculty of Law, University of Delhi. Her interests lie in legal tech, constitutional law and dispute resolution mechanisms.
