Robot judges determining guilt through speech and body temperature ‘commonplace’ within 50 years, AI guru predicts

Sci-fi style justice sooner than we think?

It might not be long before we see robot judges handing out justice based on the speech patterns and body temperatures of defendants.

This is the rather terrifying prediction of Terence Mauri, an artificial intelligence (AI) expert and self-described “global disruption thinker”, who claims robo-justices will be “commonplace” in the UK within 50 years.

Researchers say the machines will use a series of cameras to detect behaviour “indicative of wrongdoing or probable falsehoods”, including changes in a defendant’s speech patterns and body temperature, as well as hand and eye movements.

The robot judges will apparently boast a processing power equivalent to 100,000 home computers and detect dishonesty “with 99.9% accuracy”. Good luck, defence lawyers.

Mauri, who founded global think-tank Hack Future Lab, commented: “AI has created unprecedented changes in the way that people live and work by performing complex problems with a level of consistency and speed that is unmatched by human intelligence.”

He continued:

“In a legal setting, AI will usher in a new, fairer form of digital justice whereby human emotion, bias and error will become a thing of the past. Hearings will be quicker and the innocent will be far less likely to be convicted of a crime they did not commit.”

But talk of robot judges is nothing new. Last year we reported that boffins from Estonia were looking to create an AI-powered system to help clear the backlog of small claims clogging up the country’s courts.



Setting aside how polygraphs are not good evidence of anything, see the summer exam fiasco as to why algorithms are not the panacea these snake oil salesmen suggest. People want to be judged by real humans, not a computer. AI may end up assisting judges and juries but it won’t supplant them (not least because this AI can’t decide legal questions).



It is just a matter of time before it gets to the point AI can do that though. In fact, technology is going to move very rapidly over the next 50 years. There are going to be huge problems with disease and famine and we need to move everything and everyone online. Digitising certain aspects of life is just the first stage. Wait and see…



What’s with the dislikes? He’s right you know. Lawyers might not like it any more than taxi drivers like uber or the mom n pop shop down the road likes amazon, but we are expendable and will be replaceable by robots at some point in the future. Whether it happens in 15, 50, or 150 years we can perhaps debate. But so long as humanity still exists by that time and hasn’t completely destroyed itself, the law will fundamentally change to be governed by machines.


Archibald Pomp O'City

“see the summer exam fiasco as to why algorithms are not the panacea these snake oil salesmen suggest”

The article is about AI (a very rapidly-advancing field) in decades to come, you jerk. Not today’s AI. Try reading the article.



How very Daily Mail of 8:48am. The evidence as to the increasing accuracy of AI assessments over human judgement is already astonishing. A brain scan is now a far better predictor of recidivism than a parole officer. Sadly it is backwards views like 8:48’s that hold us back accepting this assistance into our decision making.



So there’s time for this nonsense but not to address blatant institutional anti-disabled bias? Everyone involved in this should be thoroughly ashamed.


Archibald Pomp O'City

You’re being sarcastic, right? Away with your whataboutery!



Criminal magistrate and jury trials are not just about accuracy; they also fulfil the basic premise of the law’s criminal authority that no person should be sanctioned save by the judgment of their peers. The use of AI in determining outcomes is a clear challenge to the legitimacy of that process and risks being seen as a return to pre-medieval principles of criminal justice.

Also, this focus on dishonesty in the article totally misses the point: somebody could be lying through their teeth about points of fact, but that doesn’t necessarily render them guilty of the offence charged.




Historian with an opinion

“pre-medieval principles of criminal justice”? What do you think this phrase means?



I wasn’t practising back then so not sure.


Opinions and stuff

Thinking AI can serve as an adequate substitute demonstrates, I would argue, a fundamental misconception of the point of criminal justice.

It’s about whether your peers are convinced that the actions you are alleged to have committed meet the threshold for the state to restrict certain liberties. AI cannot and should never substitute for this.


Archibald Pomp O'City

“AI can and should never substitute this.”

How does this not beg the question?



Better than determining guilt through gender.



Don’t seem to remember studying R2-D2 v Brown


Alan Robertshaw

It’s funny, the bench book specifically warns against making assumptions based on body language.

For example, often a refusal to look someone in the eye is seen as evidence of evasiveness; but in some cultures averting one’s gaze is a sign of deference.

It’s already understood in AI that even the most seemingly objective algorithm can reflect the biases, subconscious or otherwise, of those programming the system. How does one avoid that in a judicial system? We have a hard enough time mitigating against the biases of the human components.

This is all very interesting, and tech will no doubt play an increasing role in litigation. We’ve seen that during the lockdown.

But there’s millennia of cultural inertia to overcome. It took long enough to move from leaving such matters in the hands of the gods (trial by ordeal etc) to letting humans decide. I suspect it would be just as hard a sell to get people to accept trial by robot.


Archibald Pomp O'City

“in AI, that even the most seemingly objective algorithm can reflect the biases, subconscious or otherwise, of those programming the system. How does one avoid that in a judicial system? We have a hard enough time mitigating against the biases of the human components.”

Good points here, in contrast to much of the reactive drivel below this line


Scouser of Counsel

I hope I’m no longer alive by the time this happens.

I’d rather trust my case to twelve randoms than to a computer.



As someone who has been acquitted by the incorrect assessment of mouth breathing idiots, I must agree with you.



In terms of telling whether someone is lying or not, AI would be much better now than the theatrical excuse of a system we use in criminal trials.



But if ever there was a bunch of moaning Luddites, it’s the criminal bar.



The problem with creating an AI to predict human behaviour is that it is created by humans who still can’t predict human behaviour. All of our discrimination and bias will be built into it. But admittedly humans are very poor lie detectors. See the book Talking to Strangers for evidence.


Alan Robertshaw

There’s quite a good discussion on witness reliability in Gestmin SGPS -v- Credit Suisse (UK) Ltd [2013] EWHC 3560 and subsequent cases; and a number of related articles.

The case points out that inaccurate witnesses can be very compelling, often because they truly believe what they are saying, and unconvincing witnesses may well be telling the truth.

There’s a lot of psychology in the case; but in essence it highlights that memory is really reconstruction not recall.

In practical terms the case stresses focusing on documentary evidence, especially material produced before litigation was contemplated, as that is more likely to reflect the objective reality and intentions of the parties at the relevant time.

For a while after the case there was a tendency for courts to look solely at the documents and pretty much disregard oral testimony. Subsequent cases have clarified that witness evidence can still play an important role, but that when testing the evidence it can be a useful exercise to see how consistent it is with the documentary exhibits.

It’s a very handy series of cases to be familiar with if you do a lot of document-heavy litigation.


Deed U No

So… in the future… those with a nervous, sweaty disposition who speak a “certain way” would be robot-judged guilty!

Will there be other robots sifting through the miscarriages-of-justice pile-up?

