Beware the bursting of law’s artificial intelligence bubble


Legal tech is a long story of incremental change, not revolution


First tech came for the music industry, then it disrupted film and TV, soon after moving on to newspapers before hitting the banking industry. Next up after fintech? Lawtech, with added robots…

That was the narrative getting bandied around parts of the technology community last year. It was certainly helped by the landmark case of Pyrrho Investments v MWB Property, which saw the use of artificial intelligence-derived ‘predictive coding’ in document review approved by the English courts for the first time. Further substance was provided by the government’s announcement following its review of civil justice that it was to plough £1 billion into boosting the primitive digital capability of the nation’s creaking court system. Tech had arrived in the legal profession.

Soon a slew of press releases from legal software companies and ‘innovative’ law firms started appearing — many of them seizing on the Pyrrho Investments case to suggest that Blade Runner-style robots were about to take over the legal profession. Amid much ante-upping, the march of the machines accelerated beyond paralegals to lawyers and finally to QCs and judges. Even the Lord Chief Justice got in on the act, telling a conference in autumn that “it is probably correct to say that as soon as we have better statistical information, artificial intelligence using that statistical information will be better at predicting the outcome of cases than the most learned Queen’s Counsel.” The legal press — including Legal Cheek — lapped this stuff up.

But the reality that soon began to emerge was that the artificial intelligence being promoted was ‘soft AI’, a very different type of technology to the sci-fi stuff, which is known as ‘hard AI’ and remains in its infancy. Indeed, some argue that soft AI — algorithms of a not especially new type that are “inspired by” the human brain — is not AI at all. What these computer programmes most certainly need is human beings to place their output in context and make sense of it. Some of the challenges around practical implementation are also significant. Those learned QCs may be OK after all.

By the time everyone had realised this, it was too late. The AI genie was out of the bottle — and under-pressure editors had news quotas to fill. The previously obscure and often ignored legal tech community found that they could get stories about highly niche pieces of new software published if they uttered the magic word ‘AI’ to journalists — and even began to admit this publicly. Non-tech people started pulling similar tricks, with a proliferation of legal recruiters suddenly discovering their inner Silicon Valley to bill themselves as ‘Uber for lawyers’.

Meanwhile, tens of thousands of impressionable wannabe lawyers were taking this stuff deadly seriously. Student reaction to AI falls broadly into two camps. The first is fear that they’ve chosen the wrong career. Rapid technological development means they’ll never get a training contract, and even if they do will be on the scrap heap a couple of years later. Angsty, super-keen LLBers are susceptible to this interpretation. The second camp, by contrast, are highly optimistic that tech will herald some kind of utopian new age where lawyers lounge around chic former warehouses dressed as hipsters not doing much work but earning loads. This latter reaction is common among non-law students doing the GDL because their parents insisted on it.

Both categories of students can be found in training contract interviews around the country earnestly telling panels of faintly baffled corporate lawyers about their plans to learn to code.

Not that this is a bad objective. Just like learning a language, being able to code computer programmes is a useful skill which shows a breadth of interest and could come in handy in an ancillary way in a law firm (although it probably won’t help with ‘AI-assisted’ document review which is conducted via a standard user interface that does not require coding skills). Coding knowledge will also, to a certain extent, help young lawyers better interact with their firm’s technology company clients.

It’s also sensible for students to be thinking about technology in a more general sense. Just because there is hype doesn’t mean some developments aren’t real. Indeed, many large law firms are pouring significant amounts of money into new soft AI and data analysis technology that they believe can, over time, improve the way they operate. Some of this software will work, some won’t — but smart people are judging it worth the expense to find out and they are worth listening to.

There are plenty of interesting conversations to be had about this, including on the topic of whether such software poses a threat to law graduates. This was explored by Legal Cheek’s Tom Connelly and Katie King in their recent interview with Professor Richard Susskind, who did his doctorate on AI way back in the 1980s and has long been one of the most respected voices about technology in the legal sector.

But before you panic, recall the mania about offshore legal process outsourcing that swept the profession from 2010 to 2012 in much the same way as AI has recently. Despite many associated predictions of rookie lawyer carnage, training contract numbers have held steady since then, and actually rose by 9% last year to their highest level since the 2008 financial crisis.

A month into 2017 and, perhaps hastened by the new less tech industry-friendly regime in Washington DC, there is already a sense that AI credulity is swinging towards scepticism. The word in the City is that fintech is looking frothy and may not be the instant gamechanger that some have been presenting it as, while in recent weeks the Financial Times and The Telegraph have run high-profile pieces questioning the benefits of recent developments in the wider AI sector.

An AI hairbrush that remembers the contours of your head is being singled out for particular ridicule amid suspicion that the new frontier of the ‘Internet of Things’ may not be things that consumers want or find useful. Expect this mood to reach the legal profession. And when it does, robot law is going to have to start delivering — and fast — or risk being swept away by the next big trend.




So what you mean, Alex, is having trotted out myriad articles about the AI revolution, which anyone who actually works as a lawyer would be able to tell you isn’t happening…. you’ve now realized that it was, in fact, a complete load of garbage.

Incisive journalism, as always.


Not Amused

But if impressionable young people are trotting out Susskind’s nonsense in interviews (and therefore harming themselves), Alex is right to point that out.



Well yes… but they are trotting it out because Alex has been telling them to by repeating the AI myths again and again…




“The second camp, by contrast, are highly optimistic that tech will herald some kind of utopian new age where lawyers lounge around chic former warehouses dressed as hipsters not doing much work but earning loads. This latter reaction is common among non-law students doing the GDL because their parents insisted on it.”



I’m more wary of your mum’s fat ass bursting.



I’m a former LPC graduate who now works for one of the more well known “Legal AI” tech companies in the UK. I found Alex’s key points pretty bang on. In essence, do not let the hype cloud your judgment of the benefits it can have for the practice of law. Do your research into what legal AI is because there are many different branches and numerous practical applications. It certainly doesn’t aid people’s understanding of a (relatively) new concept that it comes in so many forms and different folks have contrasting views of what constitutes Legal AI. So do some investigation yourself.

In my line of work I see its current-day benefits and limitations but also its huge potential. It’s great to see it having an impact at the highest levels of law already – Leveson’s judgement for SFO vs. Rolls Royce refers to “digital methods to identify privilege issues”. That’s just one example of how legal AI has expedited cases and ultimately saved time and money. But also at the more mundane level of legal service – sacrificing your weekends to crawl through 100s of contracts for the due diligence exercise is close to being a thing of the past. I’m sure most people starting out in law would welcome this.



Hi Richard Susskind!



“a former LPC graduate”

You are either a former LPC student, or an LPC graduate, unless you have had your LPC taken away from you…



I’ll give you the benefit of my experience on technology and disruption, having lived through it in the real world (rather than academia). But first, a short lesson in English: it’s “computer programs”, not “television programmes”. Ok Alex?

Technology has been disruptive… for ever. Since the first hominid worked out that a stone could be a useful tool. Probably the most disruptive technology of all time came in the mid-15th century as Gutenberg developed the printing press (precursors in China and Korea were nowhere near as affective) – culminating in his 1452 Bible.

My personal experience, from the late 1980s and early 1990s, is in design and print. A revolution that completely overturned the old order within about five years. A lot of people lost their jobs, many graduates never managed to find jobs in their field of study – but to be honest they were probably all muppets that didn’t have a clue anyway. On the other hand a lot of muppets, with no design skills/knowledge, but with the right attitude, managed to carve out for themselves a little niche. Overall, what probably happened in that period, and in an ongoing way, is that about the same number of people were employed, but they were perhaps different people, and/or different roles. A designer/artworker could do more work, more accurately, so the cost per task reduced somewhat – more work was done for a wider group of clients.

One can imagine that the law would be much the same. Computer-based resources help one complete research much more quickly than previously, smart programmers develop systems that help one analyse data more easily. More work is done, more cheaply. Pools of poor slobs, dreaming of the cachet of working in law, end up in call centres completing semi-menial tasks. The truth probably being that 20 or 30 years ago those jobs would not have existed, and they wouldn’t have had any chance at all to work in law. There are also a lot of people at the blunt end of society, who think they should be doing better, but aren’t.

Does AI pose an additional threat? Frankly, I find it hard to believe that – short of sci-fi fantasies – we are anywhere near the point of robots taking over the legal process. It seems to me that the current crop of AI systems cannot even understand standard British accents, let alone weave their way through the nuanced language of everyday law. Rather, what we are likely to see is more work being done more efficiently – with the lower end becoming more and more menial, or eventually disappearing.



A short lesson in English: it’s “effective”, not “affective”. Ok Pantman?



Très drôle! Really, he makes one mistake in an interesting post and the anonymous ass jumps in (and thumbs-up himself) without contributing anything!?



If you start a post with grammar pedantry and then make a grammar mistake, you are rather asking for it…



Thanks Alex!



“Student reaction to AI falls broadly into two camps”

There is a third (large) category – those who believe Richard Susskind is a complete and utter charlatan.



This is history repeating itself.

There was a similar AI hype storm 30 years ago about the law being reduced to AI knowledge-based systems (but no one remembers because that’s pre-Twitter, even pre-Web) – one project set out to encode the entire British Nationality Act. At the same time Susskind was right there talking about a system called the Latent Damage Adviser, developed with a firm of accountants, able to “advise” on the Latent Damage Act.

Having worked with AI on and off throughout that period, and having as a practising lawyer founded a company delivering an AI-based system to law firms and legal departments globally – one which has remained in continuous use for the 17 years since it was launched – I’d suggest there are many ways the legal professions could benefit from AI, as much as any other discipline.
The problem, as ever with systems, is understanding which are the best practical methods and tools to deploy and how to overcome legitimate and unjustified fears around change.
Unfortunately progress will be glacial in “the law” as few understand or have time to think about AI and what it could do to improve their practice.
And … Hype storms will continue – probably following Moore’s law, so the next will be in 15 years or less.

