City firms’ ‘worst nightmare’ realised? Machines victorious in lawyer vs robot challenge


But results should be approached with caution

A case-predicting robot has proved victorious in a man vs machine challenge previously described as “arrogant nonsense” by a top solicitor.

The prediction-off was organised by a bunch of Cambridge law students and graduates who first broke into the lawtech scene with their crime-identifying LawBot. Several rebrands later, the team is now called CaseCrunch and decided to prove its tech’s worth by pitting it against the brainpower of some of the country’s top lawyers.

According to CaseCrunch, 112 lawyers took part in the challenge, hailing from the likes of Allen & Overy, DLA Piper, Bird & Bird, Berwin Leighton Paisner and Eversheds Sutherland. These legal eagles were presented with factual scenarios of payment protection insurance (PPI) claims and asked to predict the outcome of each claim, while the same scenarios were fed through CaseCrunch’s prediction bot. Two judges, a Cambridge law lecturer and a big data director, were tasked with making sure the challenge was fair.


CaseCrunch has now announced its win over the human teams, scoring an accuracy of 87% compared to the lawyers’ 62%. The figures are impressive, but should lawyers be worried? Pinsent Masons’ David Halliwell, speaking before the result was revealed, noted that a machine victory could be a nightmare for lawyers.

But let’s approach this victory with caution. In the words of CaseCrunch’s scientific director, Ludwig Bull:

“These results do not mean that machines are generally better at predicting outcomes than human lawyers. These results show that if the question is defined precisely, machines are able to compete with and sometimes outperform human lawyers.”

What the robot cannot do, at this stage anyway, is emulate the personable nature of legal services. This was the ammunition behind top litigation lawyer David Greene’s CaseCrunch-directed outburst, in which he described the challenge as “arrogant nonsense”. Greene later told Legal Cheek:

“That arrogant toad ‘I told you so!’ Susskind has for long been banging on about technology and its effect on legal services. But his is generally about digital technology and the place it has in making the processing and provision of legal advice and assistance more efficient. That does not counter the fundament of the business being a people to people business.”

In response, Bull told us: “machines will not replace lawyers and [we aren’t] trying to change that. Machines can help lawyers understand the law and maybe even make it clearer and more just.”




Yawn. Robot lawyer hype getting boring. At least these self publicists have something for their training contract applications now



Why were the comments closed on this article? Are lawyers afraid of hearing about the true power of technology? I wonder if this is a transformative paradigm shift rather than “arrogant nonsense”?



Robot lawyers were poised and ready to post a load of bragging comments at their typing speed of 100,000 wpm, but luckily the LC tech team have managed to ban them.



Credibility of test ruined by inclusion of ‘top’ lawyers from Eversheds….



The deadline for training contract applications to Skynet & Co is 29 August.



Susskind will be having a massive asphyxo tug over this.



Is it me or is there insufficient detail in the report for this to make any sense?

A number of factual scenarios about PPI cases were sat there…

112 lawyers predicted what the outcome would be.

Someone input the scenarios into casecrunch.

Casecrunch predicted the outcome too and got the right answer most often…

Were the factual scenarios real county court cases?

Did the process have to wait weeks or months for the real data of the case outcome to come from the district judge?

Were the lawyers and casecrunch predicting win, draw or lose, or quantum of award?

What was so poor about the lawyers’ judgment calls exactly? Quantum too low or too high, or win, lose or draw outcome wrong?

What did a factual scenario look like?

What did a prediction look like?

What did a result look like?



What did you look like at your third birthday party chasing bubbles around the garden on a succulent Summer’s Day before the grinding hell of corporate inertia slithered into your weak heart?



Not really a city firm’s worst nightmare. Assuming this software works then it would help a) claims management companies; b) firms engaged in volume PPI work.

PPI has likely been used as an example because it is easy to programme the parameters given a) the relatively narrow issues; and b) the lack of actual litigation in these cases.

The more respectable journals have highlighted that the software was also successful in predicting ECHR judgments. Again, that’s unlikely to trouble city firms. May be of passing interest to academics.



Let me know when it could have predicted the Gina Miller case with a word for word prediction of the wording of the judgement.


Comments are closed.
