Finding the limits of AI’s advancement in the English common law
There is much talk of the rise of Artificial Intelligence (AI) and its ever-widening place in the legal sector, and much fearful chatter about its eventual takeover of the legal industry.
As with any development, be it social, legal, political or economic, it is useful to plot its course before it has progressed its first mile, to look to the horizon of its seemingly never-ending path. Nothing can develop endlessly; nothing grows infinitely; there is always a limit. Besides, it is time to start thinking realistically about AI's place in English law and how the profession will have to adapt and mould itself around its incursion.
It seems as though AI's position in the administrative and bureaucratic side of law, i.e. the work currently done by paralegals, legal assistants, clerks and unlucky trainees and pupils, is quickly being cemented. At least, the technology exists; it is now a case of law firms and chambers buying and implementing it.
What next though? DoNotPay seems to be the closest we have to a glimpse through the crack in the wall to the future of AI in law. It is a crude system in which key words trigger set responses, and is seemingly the first foray of a robot into advice, litigation and dispute resolution.
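The keyword-trigger model described above can be sketched in a few lines of code. This is an illustrative toy only, assuming nothing about DoNotPay's actual implementation; the keywords and canned responses are invented for the example.

```python
# A toy keyword-to-response lookup of the kind described above.
# Keywords and responses are invented; this is not DoNotPay's actual logic.
RESPONSES = {
    "parking ticket": "You may be able to appeal on procedural grounds.",
    "landlord deposit": "Check whether your deposit was placed in a protection scheme.",
}

def respond(query: str) -> str:
    """Return the first canned response whose keyword appears in the query."""
    query = query.lower()
    for keyword, response in RESPONSES.items():
        if keyword in query:
            return response
    return "No matching guidance found; consult a solicitor."

print(respond("I got a parking ticket outside my flat"))
```

The crudeness is the point: there is no understanding of the query, only pattern matching against a fixed table, which is why such systems suit high-volume, formulaic disputes.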
So how far can we see robots encroaching upon the tasks of solicitors and barristers: giving advice, engaging in manicured correspondence, sifting through issues in a case to discern the most salient points or those to ignore, or ‘advocating’ for a client in court or at mediation? Indeed, at the bottom end of the court hierarchy — the magistrates’ and county courts — how far could we see AI taking the place of judges and deciding outcomes?
The common law is, by definition, judge-made law. Certainly, parliament retains sovereignty to legislate, and does so, but the courts have traditionally been where the law develops.
Through the doctrine of stare decisis, law is made and law is modified. English law, in this way, is a fluid and mutating organism — more so, perhaps, than the laws of civil law jurisdictions, which do not place so much reliance on an adversarial court process.
The involvement of AI in the law of England and Wales is therefore a tricky task to get to grips with. Perhaps a programme could be invented with all the legal tests and prospective outcomes programmed into it, so that one need merely input the facts of a case and have it belch out an outcome. Perhaps, as with the potential of DoNotPay, court waiting times and fees could be greatly reduced as cases come to speedy conclusions, arrived at by a robot with all the relevant statute and case law relating to, for instance, professional negligence programmed into it.
Let's take a professional negligence claim brought by an individual against the solicitor she instructed in relation to a personal injury claim. The solicitor negligently allowed the three-year limitation period to expire. Let's say the original injury was one to the claimant's femur worth around £11,000, and that the original personal injury claim had prospects of success of 60%. Because the professional negligence claim is one for loss of opportunity, the likely damages will be around 60% of the damages originally claimed: £6,600. The claim would therefore be allocated to the small claims track, because the potential damages are below £10,000, and the claimant would likely represent herself, since any solicitors she instructed would be unable to recover their costs from the defendant.
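The loss-of-opportunity arithmetic above is exactly the sort of mechanical step a programme could perform. A minimal sketch, using only the hypothetical figures from the example:

```python
# Loss-of-chance damages calculation from the worked example above.
# All figures are the hypothetical ones given in the text.
original_damages = 11_000      # value of the underlying injury claim (£)
prospects_of_success = 0.60    # chance the original claim would have won

likely_damages = original_damages * prospects_of_success
small_claims_limit = 10_000    # general small claims threshold cited above (£)

print(f"Likely damages: £{likely_damages:,.0f}")                    # £6,600
print("Below small claims limit:", likely_damages < small_claims_limit)  # True
```

Of course, the hard parts of the claim — whether the duty was breached, whether the 60% figure is right — are judgments, not arithmetic; the calculation is only the final step.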
On the face of it, it is a simple issue: one of establishing the standard of the duty of care owed by the solicitor to the claimant, that the solicitor negligently breached that duty, and that said negligent breach caused the claimant’s loss. Seemingly, therefore, it is an issue ripe for automation.
Currently, the claimant would likely go to the Citizens Advice Bureau, or pay a fee to a law firm for advice, before bringing the claim herself. Perhaps, though, with the growth of AI, there is a business in charging that claimant a small fee to have a robot provide instantaneous information and advice once the details of the issue have been inputted and it has drawn on its database of professional negligence knowledge. AI in this instance offers an opportunity for an entrepreneur to sell a service seemingly without taking from existing vocations (aside from the fee a law firm would otherwise charge for initial advice).
Moving away from the hypothetical, let's look at the sort of technology currently being used by major law firms. Programmes like Kim help with managing instructions and providing advice; Beagle scans a contract and presents its important information. Surely these programmes can never escape the shadow of a human hand. When it comes to advice, correspondence and the client-facing aspects of the legal profession, there will always need to be a human element. If so, there would be no reduction in cost, since all that is being added is another layer.
Even behind the curtain, when it comes to back-room research, at least at the more prestigious law firms — necessarily with more prestigious clients — those clients would demand a human eye. They may not trust that a robot could scour everything of note.
Well, what of automating the decision-making process, i.e. replacing the judges in the small claims court with robots? If the issues are simple enough to be advised upon by a robot, are they simple enough to be adjudicated upon by a robot?
Perhaps, is the answer. But, as with the implementation of all technology, it would be folly to assume that because it is possible it is wise.
Firstly, justice is about more than ‘the rules’. As any practitioner will attest, black letter law is often not the fundamental issue. Human psychology is at the root of everything. We don’t make rules free from prejudice or bias or mistake; there is always fallibility in everything humans do. It would be unrealistic, then, to apply the machinations of an unthinking automaton — programmed with certainties and incapable of discretion or empathy — to the adjudication of disputes between members of a species that are inherently imperfect.
Aside from this, humans need to appeal to humans. In the professional negligence case above, there may be any number of mitigating circumstances that only a human mind could properly take into account. Ethos, logos and pathos are as necessary now as they were when Aristotle defined them.
Say, though, that AI was implemented in the courts. Surely it could only ever have an advisory role, requiring knowledge and application of the law as it is, rather than an active role in creating and developing the law. That is, it could surely only ever have a role in the lower courts.
Take, in the professional negligence context, the Bolam test for establishing the reasonable standard of care and skill of a professional, laid down in Bolam v Friern Hospital Management Committee. Recently, in a case in the High Court, O'Hare and another v Coutts & Co, the Bolam test was departed from. Whereas in Bolam the question is whether the professional, in acting as they did, acted in accordance with a practice accepted by a competent, respected body of professional opinion, in O'Hare it was found that the finance profession has no such consensus on how to deal with investors' attitudes to risk. The court therefore opted for the test laid down by the Supreme Court in Montgomery v Lanarkshire Health Board, a Scottish appeal.
O'Hare exemplifies the fluid and mutating nature of English law. Rather than following English precedent, the court opted to take a leaf from Scots law. The decision required knowledge of the financial investment industry, discretion as to which legal test to apply, and creative application to ensure a fair and just outcome. Could we ever entrust the more complex designs of the upper courts to AI?
Maybe not. It seems where discretion is involved, it is better that robots are not involved.
But what of the county and magistrates' courts? Perhaps in cases where the issues are more clear-cut and the values or penalties involved are lower, there may be a place for AI in decision-making. But still, it is human outcomes with which we are concerned. It seems unlikely that the Civil Procedure Rules would be updated to include rules on which cases could be dealt with by robots; to do so would risk state-mandated discrimination between people's cases based on what the state deemed run-of-the-mill and what it deemed worthy of human attention. The decision would therefore fall to the 'market'. As such, it is foreseeable that a de facto 'two-tier' court system could spring up: higher fees to have a court constituted of humans review a case, and a lower fee, or no fee, to have it quickly disposed of by AI.
As stated, this would reduce court waiting times and costs. But it would also produce a legal system that unfairly disadvantages the poor. The justification of the common law is that it sacrifices speed and efficiency on the altar of integrity, clarity and a nearer certainty of justice. Introducing AI into the decision-making process risks exactly that.
To have AI dictating the course of advice, or even the outcome of litigation, seems perverse, reminiscent of some sci-fi nightmare of a higher species holding sway over humans. But it is necessary to view the conceivable limits of a technology in order to begin preparing our adaptation to it. And history shows us, time and time again, that when it comes to technology's exponential growth, if it can be done, it most likely will be done. Perhaps paralegals will be made redundant, and perhaps the vocational training of solicitors and barristers will have to change, but it seems it will be a long time, if ever, before the lawyer is usurped by Wall-E.
William Richardson is a paralegal. He completed his law with business degree at Brighton University and then a master of laws degree at UCL.