Navigating bias in generative AI

By Charlie Downey

Nottingham PPE student Charlie Downey looks at the challenges around artificial intelligence

While the world lauds the latest developments in artificial intelligence (AI) and students celebrate never having to write an essay again without the aid of ChatGPT, beneath the surface, real concerns are developing around the use of generative AI. One of the biggest is the potential for bias. This specific concern was outlined by Nayeem Syed, senior legal director of technology at London Stock Exchange Group (LSEG), who succinctly warned, “unless consciously addressed, AI will mirror unconscious bias”.

In terms of formal legislation, AI regulation differs greatly around the world. While the UK has adopted a ‘pro-innovation approach’, concerns around bias and misinformation remain.

Elsewhere, the recently approved European Union Artificial Intelligence Act (EU AI Act) is widely regarded as the first comprehensive regulation of artificial intelligence. It is expected to set the standard for legislation around the world, much as the EU’s General Data Protection Regulation (GDPR) did for data privacy. The AI Act incorporates principles that should help reduce bias, such as training data governance, human oversight and transparency.

To really understand the potential for bias in AI, we need to consider where that bias comes from. After all, how can an AI language model exhibit the same bias as humans? The answer is simple: generative AI language models, such as OpenAI’s prominent ChatGPT chatbot, are only as bias-free as the data they are trained on.

Why should we care?

Broadly speaking, the process for training AI models is straightforward. AI models learn from diverse text data collected from different sources. The text is split into smaller parts (tokens), and the model learns to predict what comes next based on what came before, correcting itself on its own mistakes. While efforts are made to minimise bias, if the historical data the AI is learning from contains biases, say, systemic inequalities present in the legal system, then the AI can inadvertently learn and reproduce those biases in its responses.
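To make that mechanism concrete, here is a deliberately tiny sketch in Python: a bigram counter that is nowhere near a real large language model, trained on a few invented sentences. It shows how a next-word predictor simply mirrors whatever patterns its data contains.

```python
from collections import Counter, defaultdict

# Invented training text, deliberately skewed: defendants
# only ever lose in this data.
training_text = (
    "the claimant won the appeal . "
    "the defendant lost the appeal . "
    "the defendant lost the case ."
)

# "Split into smaller parts": here, simple whitespace tokens.
tokens = training_text.split()

# Learn: count which token follows which.
counts = defaultdict(Counter)
for current, nxt in zip(tokens, tokens[1:]):
    counts[current][nxt] += 1

def predict_next(word):
    """Predict the most frequent next token after `word`."""
    return counts[word].most_common(1)[0][0]

# The model mirrors its data: "defendant" is followed by
# "lost" twice and never by "won", so "lost" is the prediction.
print(predict_next("defendant"))  # -> "lost"
```

If the training text only ever shows defendants losing, the model can only ever predict that they lose. At scale, the same dynamic is how subtler historical biases seep into generative AI.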

In the legal profession, the ramifications of these biases are particularly significant. There are numerous general biases AI may display related to ethnicity, gender and stereotyping, learned from historical texts and data sources. But in a legal context, imagine the potential damage of an AI system that generates its responses in a way that unfairly favours certain demographics, thereby reinforcing existing inequalities.

One response to this argument is that, largely, no one is advocating for the use of AI to build entire arguments and generate precedent, at least not with generative AI as it exists in its current form. In fact, this has been shown to be comically ineffective.

So how serious a threat does the potential for bias actually pose in more realistic, conservative uses of generative AI in the legal profession? Aside from general research and document review tasks, two of the most commonly proposed, and currently implemented, uses for AI in law firms are client response chatbots and predictive analytics.

In an article for Forbes, Raquel Gomes, Founder & CEO of Stafi – a virtual assistant services company – discusses the many benefits of implementing automated chatbots in the legal industry. These include freeing up lawyers’ time, reducing costs and providing 24/7 instant client service on straightforward concerns or queries.

Likewise, predictive analytics can help a solicitor build a negotiation or trial strategy. In the case of client service chatbots, the dangers resulting from biases in the training data are broadly limited to inadvertently providing clients with inaccurate or biased information. As far as predictive analytics is concerned, however, the potential ramifications are much wider and more complex.


An example

Let’s consider a fictional intellectual property lawyer representing a small start-up, who wants to use predictive analytics to help with a patent infringement dispute.

Eager for an edge, she turns to the latest AI tool, feeding it an abundance of past cases. Unknown to her, however, the AI favours tech giants over smaller innovators: its learning has been shaped by biased data that leans heavily towards established corporations, skewing its perspective and producing distorted predictions.

As a result, the solicitor believes her case to be weaker than it actually is. That misconception leads her to adopt a more cautious approach in negotiations and accept a worse settlement; she hesitates to present certain arguments, undermining her ability to leverage her case’s merits effectively. The AI’s biased predictions quietly hinder her ability to fully advocate for her client.
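To see the mechanics behind this scenario, consider a toy "predictive analytics" model. The case history below is entirely synthetic and deliberately skewed towards wins by large corporations, so a naive win-rate predictor under-rates the start-up's prospects.

```python
# Entirely synthetic case history: (claimant_size, claimant_won).
# Large claimants are over-represented among the winners,
# mirroring the biased data in the scenario above.
history = [
    ("large", True), ("large", True), ("large", True),
    ("large", True), ("small", False), ("small", False),
    ("small", False), ("small", True),
]

def predicted_win_rate(claimant_size):
    """Predict success as the historical win rate for that group,
    reproducing whatever skew the history contains."""
    outcomes = [won for size, won in history if size == claimant_size]
    return sum(outcomes) / len(outcomes)

print(f"Large claimant: {predicted_win_rate('large'):.0%}")  # 100%
print(f"Small claimant: {predicted_win_rate('small'):.0%}")  # 25%
```

Real litigation-analytics tools are far more sophisticated than a win-rate table, but the principle holds: a prediction is only as even-handed as the history it is computed from.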

Obviously, this is a vastly oversimplified portrayal of the potential dangers of AI bias in predictive analytics. But it shows that even a far subtler bias could have severe consequences, especially in the context of criminal trials, where the training data could be skewed by historical demographic bias in the justice system.

The path forward

It’s clear that AI is here to stay. So how do we mitigate these bias problems and improve its use? The most obvious answer is to improve the training data, which can help reduce one of the most common pitfalls of AI: overgeneralisation.

If an AI system is exposed to a skewed subset of legal cases during training, it might generalise conclusions that are not universally applicable, as in the patent infringement example above. The two most commonly proposed strategies for reducing the impact of bias in AI responses are increasing human oversight and improving the diversity of the training data.

Increasing human oversight would allow lawyers to identify and rectify bias before it could have an impact. However, easily the most championed benefit of AI is that it saves time; if countering bias effectively requires substantial human oversight, that benefit shrinks significantly.

The second, more straightforward-sounding solution is to improve the training data itself, ensuring a comprehensive and unbiased dataset. In our patent dispute example, this would prevent the AI from producing responses skewed towards established corporations. However, acquiring such a dataset is easier said than done, primarily because of incomplete data availability and inconsistencies in data quality.
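One practical, if partial, first step is simply auditing a dataset before training. Here is a minimal sketch of what that might look like, with invented records and a hypothetical "claimant_size" field, flagging under-represented groups.

```python
from collections import Counter

# Invented records; field names are hypothetical.
cases = [
    {"claimant_size": "large", "outcome": "won"},
    {"claimant_size": "large", "outcome": "won"},
    {"claimant_size": "large", "outcome": "lost"},
    {"claimant_size": "small", "outcome": "lost"},
]

counts = Counter(case["claimant_size"] for case in cases)
total = sum(counts.values())

# Flag any group making up less than 40% of this two-group
# dataset; the threshold is arbitrary and for illustration only.
for group, n in sorted(counts.items()):
    share = n / total
    flag = "  <-- under-represented" if share < 0.4 else ""
    print(f"{group}: {n}/{total} ({share:.0%}){flag}")
```

An audit like this can only surface the skew; fixing it still means collecting, cleaning or re-weighting data, which is exactly where the availability and quality problems bite.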

Overall, while a combination of these two strategies would go a long way towards mitigating bias, it remains one of the biggest challenges surrounding generative AI. Incoming AI regulation will undoubtedly expand as it attempts to deal with the range of issues raised by this rapidly advancing technology. And as the legal world increases its use of (and reliance on) generative AI, more questions and concerns will continue to emerge over its risks and how to navigate them.

Charlie Downey is an aspiring solicitor. He is currently a third-year philosophy, politics and economics student at the University of Nottingham.
