
‘Uncritical reliance’ on AI in criminal justice could lead to ‘wrong decisions’, says Law Society

Warning contained in new commission report

“An uncritical reliance on tech” in the justice system is raising alarm bells for the Law Society.

In a report published this week, Chancery Lane highlights a lack of accountability and transparency, as well as potential human rights challenges, around algorithms such as facial recognition, predictive crime mapping and mobile phone data extraction being developed by the police, prisons and border forces.

There are increasing concerns about police forces piloting facial recognition technology that can, for instance, cross-reference someone at a public event against crime data, and about algorithms that predict an individual's risk of committing further crimes over a given period.

Christina Blacklaws, president of the Law Society, said:

“Complex algorithms are crunching data to help officials make judgement calls about all sorts of things … [and] … while there are obvious efficiency wins, there is a worrying lack of oversight or framework to mitigate some hefty risks … that may be unwittingly built in by an operator.”

The 80-page report, authored by a commission set up by the Law Society last year, sets out the challenges that algorithms raise, such as bias and discrimination.

Because algorithms “encode assumptions and systematic patterns”, they can reinforce and then embed discrimination. The report reads: “If, as is commonly known, the justice system does under-serve certain populations or over-police others, these biases will be reflected in the data, meaning it will be a biased measurement of the phenomena of interest, such as criminal activity.”
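
To make the report's point concrete, here is a minimal sketch in Python (entirely synthetic and hypothetical, not taken from the report or any real policing system) of how a risk model trained on over-policed data simply learns the policing pattern back:

# Hypothetical illustration: a risk model trained on biased arrest
# data learns the bias, not the underlying behaviour.
# All names, rates and numbers here are made up.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# True offending rate is identical in two imaginary areas...
area = rng.integers(0, 2, n)        # 0 = area A, 1 = area B
offends = rng.random(n) < 0.05      # same 5% base rate everywhere

# ...but area B is patrolled twice as heavily, so offences there
# are twice as likely to end up in the "arrest" dataset.
detect_prob = np.where(area == 1, 0.8, 0.4)
arrested = offends & (rng.random(n) < detect_prob)

# A model trained on arrests (the measured proxy) with "area" as a
# feature scores area B residents as higher risk, even though the
# underlying behaviour is identical.
model = LogisticRegression().fit(area.reshape(-1, 1), arrested)
print(model.predict_proba([[0], [1]])[:, 1])  # area B scores ~2x higher

In this toy setup both areas offend at the same rate, yet the model rates residents of the more heavily patrolled area as roughly twice as risky, because arrests, not offences, are what the data actually measures.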

There is also a concern that different government agencies are not talking to each other. As Blacklaws puts it: “Police, prisons and border forces are innovating in silos to help them manage and use the vast quantities of data they hold about people, places and events”, but there is an “absence of … centralised coordination or systematic knowledge-sharing between public bodies.”

Chancery Lane makes a number of recommendations as a result of the research findings, including ensuring that public bodies, rather than tech companies, take ownership of the software involved, and setting up a National Register of Algorithmic Systems as an “initial scaffold for further openness, cross-sector learning and scrutiny.”

The commission also mapped all the known algorithms currently being deployed or developed by police in England and Wales.

The commission included members of the Law Society and academics, as well as Andrea Coomber of the all-party law reform and human rights organisation Justice.

8 Comments

Anonymous

Hooman rites innit

Anonymous

Things not working innit. In the Wales pilot, facial recognition software misidentified over 90% of people. This is a government IT system. None of them work.

Anonymous

A pilot scheme in Swansea involving three psychics sloshing around in a paddling pool had moderate success.

Anonymous

So? If it delivers faster, cheaper results that are acceptable to the population as a whole then I say go for it. The criminal defenders and the human rights moaners would rather 99 criminals go free than one person be wrongly convicted. I don’t like the thought of the 99 walking around the streets, thank you very much.

Anonymous

The problem with AI is that you remove the human element and replace it with pure logic that can be misconstrued and misused.

For example, if the algorithm “learnt” that a disproportionate number of BAME persons are convicted each year vis-à-vis the proportion of white privileged people, then the algorithm may conclude that BAME persons are more likely to pose a risk of being criminals.

It would be impossible to programme in the human factors such as poverty, police racism, white privilege etc. that make the issue so much more complicated.

Anonymous

That’s the beauty of the AI. It removes the human element. And it means it can see past apologists like 4:29pm. BAME persons ARE more likely to pose a risk of being criminals. It is just that people like 4:29 want to make excuse after excuse for criminals.

Anonymous

Uprated your own comment?

Anonymous

Maybe a long time in the future you could have robo-justice. AI at the moment is crap. We are nowhere near HAL.
