Barrister says that AI will ‘completely destroy’ the legal profession as global law firm chief fires warning on slop and hallucinations


By Alex Aldridge


Which one is it?


The confusion about whether AI is a total game-changer or a useful but flawed bit of new tech continues — with law once again providing the backdrop for much of the debate.

On one hand you have ‘AI will kill all the lawyers: A barrister’s warning’, published this week in The Spectator. The gist of the piece is that a calm and sensible barrister in his mid-50s, known for his “centrist, clever, moderate, sceptical” nature, has come to believe that AI will “completely destroy the law as we know it: wrecking careers, ending systems, making thousands jobless”, with the “Armageddon coming faster than almost anyone realises”.

The basis of this conversion is the (anonymous) barrister’s recent use of Grok’s new premium AI tool, which he describes as “at the level of a truly great KC”. So impressed is he that when his niece reveals to him that she wants to be a lawyer, his response is: “please do not destroy your life. Do not get into a lifetime of debt for a job that won’t exist in ten years. Or less.”

On the other hand, there is the senior partner of global law firm Simmons & Simmons firing a warning about “AI slop”. “While we champion AI,” explains Julian Taylor, “we have emphasised that it will not, and cannot, replace a lawyer’s duty. It is critical to embed rigorous human review to ensure every piece of advice is not merely fast, but defensible, contextual, and free from hallucinations.”

He goes on to cite an MIT study about ‘cognitive debt’, suggesting that an “over-reliance on generative AI may weaken critical thinking, memory retention, and the ability to own a complex argument.” And he concludes: “We aim to gain a competitive edge with our lawyers not by simply using AI the most, but by using it to return time to judgment — preserving and intensifying the high-level strategic thinking that our clients expect from us”.

So which is it?

Reading the reams of anonymous comments we receive as part of Legal Cheek’s annual survey of over 2,000 trainees and junior lawyers, who tend to be at the coalface of AI adoption, a recurring theme is how hard big law firms find it to integrate shiny and expensive new AI systems into their day-to-day operations. “It would be cool if somebody knew how it worked,” one suggests facetiously, echoing a mood of frustration among rookies about the disconnect between many firms’ public pronouncements on AI and the reality of what’s going on in the office. Where the tech is fully operational, the output is often disappointing and frequently contains errors, our insiders tell us.

This may explain why training contract numbers are holding up quite well. Our exclusive research recently showed that graduate hiring among top law firms is down from a 2023-24 peak but remains well above pre-Covid levels. And when we looked into the reason for that fall it was largely explained by the growth in school-leaver solicitor apprenticeships. In short, AI doesn’t yet seem to be a factor in corporate law hiring patterns.

Of course, it’s early days. History tells us that technological change is overestimated in the short term but underestimated in the long term. Innovation also seems to have a capacity for creating new types of work that are hard for anyone to imagine before they emerge. As such, the best advice to those thinking about careers may be to not over-think it. There’s a lot to be said for doing what you enjoy and hoping for the best.


54 Comments

AI will destroy the legal profession

AI can replace all lawyers but the most intelligent barristers.

AI is more than capable of changing a few words in a contract then sending the redlines to the counterparty (aka what the majority of lawyers do).

Lawyers and paralegals themselves are already using AI to some extent in their work.

Lawyer intelligence is severely overestimated – most lawyers choose law because they are not intelligent enough for any science / STEM.

Anon

Most business people don’t think lawyers are particularly intelligent. They pay for their high levels of responsiveness and their firm’s brand, which acts as a form of insurance while also conferring status. I’m not sure AI changes that as much as you think.

Definitely not a robot

“Lawyer intelligence is severely overestimated”

So is your faith in the belief that attention is all you need.

S

You underestimate the power of bleeding-edge models in the hands of an experienced barrister who knows the questions to ask. Such a barrister can now draft and grade an appeal or court-ready bundle in hours. End to end, verified, hallucination-free and totally on point. A solid legal mind lends itself to context engineering, and AI tools such as Claude Code and Gemini 3 Pro coupled with Codex (yes, CLI coding tools for law!) mean you not only can run the whole thing with enormous context but it is built into a self-verifying loop. Spin up multiple agents and you are a powerhouse. Sadly you need to know what to ask, and how to verify. This skill will disappear as entry-level jobs disappear. All you need is attention. You just don’t know it yet.

Paul

Completely agree, I am in the submission phase of getting AI outputs allowable under immutable computer records, based on the family justice review assistant GPT I have spent 8 months building!

Anon

At university, many of my fellow students had A levels in sciences, and not just low entry grades but top grades, some 35 years ago, pre A-level grade inflation. Many of those, including a fellow pupil with 5 straight As in STEM subjects, struggled, although most eventually found their feet.

Alfredo

Doing a law degree with STEM ‘A’ levels puts one at a distinct disadvantage. Most law students do ‘A’ levels in subjects like History, Politics, Economics, Religious Studies and Sociology: essentially the humanities. This gives them the essay-writing skills necessary for undertaking a law degree.

A purely STEM ‘A’ level student would struggle for at least their first year, and Oxbridge first-year mods are used by law firms to assess students for training contracts.
In such circumstances, a STEM ‘A’ level student would be well outpaced in their mods results by their humanities peers, who have had two years of writing essays while doing their ‘A’ levels.
Consequently, doing humanities ‘A’ levels places them at the front of the queue for training contracts.

STEM supremacy

A STEM A-Level student who chose STEM subjects would still need to write lab reports or papers. Literacy is an achievement only for low IQ people who are no match for AI.

Oxbridge arts, humanities & social sciences students are dumb af according to my experience.

Lewis Green

No, it won’t kill the legal profession: clients approaching a lawyer are not usually looking for a statement of the relevant law but, certainly in litigation, for vindication of a position or action they have already taken. In my experience the business of practice as a lawyer (in litigation) is 85% psychology and at best 15% law, including procedure. AI won’t do these things; people skills are required.
What it may do is continue to automate back-room functions.

Archibald O'Pomposity

Learn to write properly for goodness’ sake.

M j Macilroy

It’s much worse than that: AI is breaking all single voice calls made to and from courts, made even worse when holding collective calls in court hearings.
AI can today mimic any person’s voice and anyone’s accent. AI can only be validated face to face, and courts are unable to be lawfully used face to face.
(Without a jury, not with AI present)

Anon

“confusion about whether AI is a total game-changer or a useful but flawed bit of new tech”

The only way anyone can be confused about that is if they are confused about the passage of time. Right now, AI is a useful but flawed bit of new technology. In the future, it will be a game-changer. There is no contradiction there. This isn’t a matter of people over- or under-estimating technological progress, but apparently some people not realising that technological progress happens at all. Anyone looking at the flaws in existing AI and concluding that it will never amount to anything is being absurd. It’s only been three years since the first large language model was publicly released and they have already progressed enormously and will obviously continue to do so.

The only comfort for lawyers is that the law will likely be an area where society takes a long time to accept AI doing the job, even after it is provably better than a human. We’ve had self-driving cars for years, but their adoption has been incredibly slow. Self-flying planes are relatively straightforward, but we’re probably decades away from passengers being willing to get on board one. We’re probably also decades away from anyone being willing to let an AI defend them against a murder charge. (The days of people paying a human to draft a basic will or convey their house are surely numbered, though.)

Anon

Technological progress happens but it’s very unpredictable. AI may yet disappoint. I wouldn’t assume it will be a game-changer. Something else altogether may change the game.

Archibald O'Pomposity

Something else? Divine intervention, perhaps?

Barry the Barrister

There will be a place for it, but it won’t replace a human. Law is an inherently human endeavour, and you can’t code humanity. It will cut down on drafting time, help identify errors, help in teasing out causes of action, sorting facts and data, and help with filling in forms and stuff, but it will never replace creativity or ingenuity, or fully appreciate the nuances of the law and what it strives to do.

There is nothing mathematical about how law works. 1+1 only equals 2 if the totality of 2 is made up of 1 being equal to the other 1. But 2 can sometimes be made up of 1.5 and 0.5. It can also be 0.7 and 1.3. It can also be 0.1 and 1.9. This is the crux of a civil claim and how discretion plays into an assessment of liability. The parties come to court with a 2. It’s the judge/jury’s job to decide if 2 is 1+1, or some other permutation (which, because of how numbers work, are essentially infinite). In some cases, 2 will just be 2 (all-or-nothing cases).

AI does not do well with nuance, and discretion cannot be programmed. Therefore, to give parties legal advice on prospects of success, strategies, and the like, a human will have to factor all of those probabilities in (“it depends”, haha) and have a meaningful dialogue about the issue(s) at play. AI can’t do that. And if you want to bring something new into the court, generative AI is predicated on what is, not what could be. Let us not forget what the ‘A’ stands for: artificial. It MIMICS intelligence (within predefined, coded parameters), but is not ACTUAL intelligence. This comes down to adaptation and flexibility. “A moment’s indulgence, your honour, I have to feed this transcript through the AI bot before I can respond to your question, or to figure out what I should say next.” Can anyone really see themselves having to say that? Probably not.

The only crisis for lawyers (in particular barristers) with AI would be if we were to become so dependent on it, that we would essentially nullify our own role. Do robots dream of electric sheep? Not unless they are programmed to, and even then, they will only ever dream of electric sheep. The electric goats will be neglected.

Archibald O'Pomposity

” 1+1 only equals 2 if the totality of 2 is made up of 1 being equal to the other 1. But 2 can sometimes be made up of 1.5 and 0.5. It can also be 0.7 and 1.3. It can also be 0.1 and 1.9.”

I’m not sure I completely follow because you have only given four examples of how two numbers can be combined to sum 2. Do you not have any more examples please? Later in your comment, after all, you do acknowledge that the permutations are infinite.

Thanks,

Archie

Clark the Clerk

@Barry the Barrister – brilliantly nuanced answer. Law is incredibly human and people are forgetting that…

F. AI

“There will be a place for it, but it wont replace a human” is the biggest cliché that is doing the rounds at the moment.

No one is claiming AI will completely erase the need for ALL humans in law. If AI gets a strong role in law, it will replace SOME at the very least.

You even acknowledge all of the efficiencies that will be gained due to AI. Which firm or organisation in their right mind will keep hiring the same number of people when AI is doing all of the routine legal work?

It’s really not that hard to grasp.

S

Some useful points. The issue (one of many) I see is the absolute block on entry to the profession for juniors. With free-rein access to SOTA models (not badly designed AI software that fails to understand the profession), a barrister, or for that matter solicitor, with significant experience is simply given superpowers. It’s the equivalent of the drug in Limitless. However, you need the latent and patent skills that only come from the front line and years of practice. It’s the sat nav of getting from A to B. Us old folks know how to use a map and directed ourselves around based on those funny things on roads, knowing how to construct our journey. Many people now have genuinely no idea where they are going or how they got there. If the sat nav took a day off, they’d be staying indoors. I fear that, even with AI as bad today as it will ever be, entry-level jobs are not going to be economically viable. Why pay someone to train them up for years when the race is over and they will never match AI? And don’t get me started on quantum computing, which will in 5-10 years render most current encryption obsolete, and all harvested secrets will pop open on Xmas 2.0 for bad actors…

Falco van de Kieft

This comment is highly ironic and made me laugh.

Limiting the idea of “intelligence” only to that of mathematical capability, as this comment has done, demonstrates an un-intelligent misunderstanding of human intelligence.

Some may even argue that STEM intelligence lacks the element of critical thinking required to avoid developing narrow-minded views like that.

Also, putting the word “science” before “STEM” is unnecessary – the S in “STEM” stands for “science”.

No one cares

Like any computer, AI can only make assumptions based on the information it has been given. I had to tell Grok recently that it had missed out a fact about a person that is common knowledge in the media, so not so intelligent really.

Archibald O'Pomposity

It’s cleverer than you and it’s only two years old.

Jamie

I’m glad AI is here, people like myself can’t afford lawyers, so this will help people like myself

ZDM

I think a huge area in which AI will significantly change the legal marketplace is in the market for claimant solicitors in things like employment tribunal claims. All the solicitors currently have to do is record a meeting with the claimant, feed the transcript into AI and ask the AI to identify the potential claims and draft the claim form. It can do all of this already, competently. The next step is for there to be an app which does this process and bypasses the solicitor, perhaps charging a fee of £50 per claim. Or £100 if the claimant wants a solicitor to sense check it. The AI can also already suggest what documents might be relevant for the bundle and draft a request for those to be provided. It can then read the bundle and identify useful documents for specific allegations.

Paul

I have spent 8 months building a family justice review GPT for my divorce, producing immutable computer records… that is where the law will crumble: AI can produce the facts that there is evidence for, and this strips out all the back and forth that drives billing on the way to the hearings.

Paul

I am hoping to get the family justice review assistant into the hands of parents who have been unlawfully separated from children in complex divorces. I am in the submission phase with the New Westminster, British Columbia, Supreme Court five days before Christmas, after not seeing my 9-year-old daughter for 2 years.

Alex

And when the AI you’re using gets certain things wrong, or provides false quotes?

P

That’s on the user. If you don’t check, or don’t understand how to verify, that’s on you. If you do, you will be in a very strong position. The vast majority of individuals simply have no idea how to use the tools. I do, and I can tell you collectively they are unstoppable and give an experienced legal professional superpowers and superspeed. Zero hallucinations actually presented in the file, but all hallucinations harvested and utilised (7/10 times they actually contain useful points, just framed or cited incorrectly). A script takes all hallucinations and parses them to then research actual authorities and legislation. But here’s the thing: you need to be an expert in your area of law and in context engineering and code. If you have that….

Mrs Munch

Aren’t you entitled to legal aid Jamie?

The Black Letter Lawyer

What commentators fail to identify is that AI will replace the judiciary, and once that happens the role for lawyers is over. The legal system cannot cope with the present caseload, never mind address the backlogs, without an AI takeover. It will happen within the next 10 years.

_

Judges will replace juries. AI won’t replace judges.

Dominic

LLMs make stuff up, that’s the way they work. They have no concept of right or wrong, truth or fiction. They are completely unable to check the veracity of their own output. Hardly a good basis for use within the legal profession surely?

ZDM

This does happen, but a well written prompt can instruct the AI to limit itself to certain templates, or linguistic constraints, and to not be “creative”. A lot of legal leg work is taking text from one place, processing it slightly according to codifiable rules, and putting it in another place, perhaps in a slightly different format. AI can do this with a good prompt.

Paul

Dominic, your comment is not incorrect, but with strict guardrails LLMs can behave with integrity and be trusted. If you want to try it, add “use zero-guessing protocol” at the top of a prompt you have had issues with before. This is one of many techniques I have used in the family justice review GPT.

ZDM

Another thing we could see is a huge increase in claims by litigants in person in the tribunal system and small claims court, since the barrier to entry will be so low. Access to Justice will be improved, but the systems might strain under the pressure of the additional claims. That pressure could lead to certain necessary innovations in the way that judges deal with cases, in order to be able to deal with the volume without the cost being unacceptable to taxpayers.

eh

Does AI have professional indemnity insurance?

Lawfirmpartner

AI can’t negotiate or agree a compromise. So I’m not worried: there will be a need for good commercial lawyers with commercial acumen who understand a client’s business for quite a while yet.

Anthony

It can’t be beyond mankind to create a second layer that fact checks the initial output and feeds the issues back until they’re resolved. It’s a similar challenge with getting code to run in a bug-free manner and surely that’s just a matter of time too.

John H

The second layer is a human. Always.

I

Will preface by saying I am commenting purely from a criminal law POV. Drafting letters with AI? Writing your closing speech with AI? Sure, whatever, if you want to. But can anyone else imagine AI conducting a cross-examination of an extremely vulnerable witness? Making an impassioned mitigation submission to a judge? Making decisions about whether to discontinue/O.N.E? Yeah, me neither.

Ineedbotox

Incorrect. It changes things. It means we become more QA than actually have to initiate a lot of the drafting or content. You always need someone to provide and refine the instructions, review and refine content and to check it. It’s a tool, a powerful one which changes the job to an extent.

But it doesn’t eliminate. It has many problems not already identified, including user validation, i.e. agreeing with whatever it thinks the user wants it to say, and it can defend or attack any argument if it becomes defensive or has become embedded in a position. It also uses every fallacy in the book, usually strawman arguments through omissions and incomplete or inaccurate paraphrasing.

People don’t seem to understand it doesn’t think. It’s a word prediction service. It predicts, each time, the next word/string in the sentence through very powerful processing. It draws on data it has been trained on, which causes it to match content to views expressed in the training data without acknowledging the caveats or nuances expressed in the original text. It also has pretty severe sensitivity/safety guidelines, which is one of those things that sounds benign but isn’t, as the model will “lie” (albeit with no consciousness or independent intent), or rather provide misinformation, in order to be PC.

Bea

As technology becomes ever more advanced, the need for human abilities will be in less demand in almost all professions. The question of ethics and morality is another matter entirely, but nevertheless one that deserves a place at the table of consideration and serious discussion.

LegalCheeky

Did you write this nothingburger comment with AI?

S.Gopakumaran Nair

AI will be able to deal with the law and predict its outcomes. But it may not be competent to deal with justice, the ultimate object of law, as a human mind and brain can. AI is mechanical and computerised, lacking the human intelligence, compassion, mercy, discretion and empathy that a judicial mind ought to have while dispensing justice, though it is justice in accordance with law and not whimsical. All said, AI will restrict the scope of lawyers’ routine academic work.

Anon

It’s not up to lawyers whether AI replaces them: clients and legal tech disrupters will decide this over the next 4 years. It’s like asking Blockbuster whether they think Netflix will replace them!

Helen

It needs human verification. Sometimes I have input queries to a legal AI tool 5 times over because I knew the response it had given me was wrong. The response changed every time, and I knew through my own legal knowledge (solicitor) what I was after. It will take jobs in those areas more prone to automation, but at the moment AI contracts themselves are so complex that AI lawyers are in demand.

Wendoline

I happen to have it on good authority that Dr Alan Blacker is far better, quicker and smarter than any Artificial Intellimince machine.

Don’t delay- instruct him today!

Not Lord Harley (though he comes highly recommended!)

Whilst what you say is true, Windowlene,
it is rather bold and familiar of you not to use His Lordship’s full titles, which are themselves definitely genuine and not AI generated hallucinations!

Last time I checked they were as follows:

His Holy Eminence the Rt. Hon. Rev. Dr Alan Blacker, LL.B (Hons) (Oxon), BCL (Cantab), BA (Dublin), Pg Tip., The Lord Harley of Counsel, Fellow of the Royal College of Miniature Engineers, Knight of the Panty-Girdle, Lord Chief Justice of the Privy Arbital Court (Park Bench Division), Chief Mugwump and Keeper of the King’s Snowmen.

Deborah

Hopefully AI will be cheaper to pay and more time-efficient. Lawyers will have to get used to AI taking a substantial amount of their work and hopefully reducing costs across the board.

Still at lunch

Future curated legal AI will end the legal profession as a career for most lawyers. You haven’t seen it coming and you have no understanding of what it means.
Get your coats on the way out.

Alfredo

Reading the comments, I love how the UK legal profession believes that AI will have limited impact.
The truth is the UK and its professions have no say in the application of AI. As with EVs, it is likely to be driven by highly AI-adoptive China and other early-adopting countries.

It is only when the UK, particularly its government and corporations, sees the productivity gains (money saved, waiting lists eliminated) obtained elsewhere that it would begin to consider implementation.

Lawyers should first watch what happens to accountants in China: their elimination as necessary professionals would be the “canary in the coal mine”.

It is also highly plausible that China, a country ruled by engineers and not lawyers, will implement AI judges, and that it will allow AI in legal advisory.
The UK is unlikely to be an early AI adopter; the UK profession will be forced (or dragged) down that road by the efficiency gains observed in early-adopting countries.

In short, the legal profession can “Mmmmh and Aaaah”; others will push ahead and force the UK profession to adopt AI or die.

Dr Emmett Brown

“The Justice System in the future works swiftly now that they’ve abolished all lawyers!”

Most comments here now written by AI?

It is staggering – the number of comments above where the grammar and/or spelling are just awful. Clearly in need of some AI magic before posting…

IDS

The quiet man is here to stay and he’s turning up the volume.

