Channel 4 pits AI against trainee solicitor in legal drafting showdown

By Legal Cheek

Who triumphed?

Trainee solicitor Charlotte Jaques meets herself in AI (Credit: Channel 4)

This week on Channel 4, an AI law firm went up against a human trainee solicitor. The goal? To find out which jobs could be next in line to be replaced by robots.

The UK’s first regulated AI law firm, Garfield AI — which we previously reported on — was put to the test against Charlotte Jaques, a trainee solicitor at Summerfield Browne, in Dispatches: Will AI Take My Job? The programme, which aired on Monday, even featured its own AI presenter.

In a battle between machine and budding human lawyer, the pair were asked to tackle a real small-claims dispute between a builder and a client who refused to pay a £4,500 bill, each preparing a claim form for court.

The results were judged blind by Jaques’ supervisor, Zainab Zaeem, who noted that while Garfield’s version left out a few key details, including that a WhatsApp message can amount to a binding contract, it was still good enough to be put before a judge.

*Spoiler alert*

Zaeem ultimately deemed Jaques’ draft the stronger of the two, though she said she was “impressed by both documents”.

The cost comparison was perhaps the real headline-grabber. Garfield, co-founded by former Baker McKenzie associate Philip Young and quantum physicist Daniel Long, produced its version in around ten minutes for £100 + VAT. Jaques’ human-crafted effort, by contrast, took over three hours and cost more than £1,000.

And the client’s verdict? Based on price, he said he’d go with the AI effort next time.

19 Comments

Offal Eating Demon with an Extra Eyelid

So, let me get this straight… the conclusion is (as expected, and as we’ve been stating for some time now) that AI can make a (potentially really) good draft, but only a human can tell whether the draft is really good or not.

So the focus simply changes and people will still need lawyers.

Good.

Archibald O'Pomposity

“the conclusion is (as expected, and as we’ve been stating for some time now) that AI can make a (potentially really) good draft”

Thank you for your simplistic analysis. You may be surprised to know that AI is continually improving, that the need for human checking is likely to diminish dramatically on the current trajectory, and that an AI capable of really good drafts will considerably reduce the required number of human lawyers. If you fancy a dwindling career as a proof-checker, then indeed you may breathe out with relief.

IT & Innovative Technologies

There is no good AI without human oversight.

Anonymouse

The problem is that in order for a human to tell whether a draft is really good or not they need to have experience. That experience will have been gained through years of paralegal/trainee/early associate work.

The concern is that this early-career work where the experience is gained is approximately 90% cheaper if completed by AI – why would clients pay 10x more for a human to produce something that isn’t 10x the quality/value, and why would employers pay early-career lawyers to learn on the job (e.g. make mistakes, take a long time, develop organically/slowly) when they can just have an AI programme do it significantly cheaper and faster?

AI isn’t going to take the jobs of the decision makers and partners that are interested in the bottom line; it’s going to undercut the industry so that you don’t need paralegals/trainees/early associates to do the work that Senior Associates and Partners are too expensive to do. But who will end up reviewing the AI work once the Senior Associates and Partners move on/retire, if the time and effort haven’t been given to the junior levels? I find it genuinely difficult to recommend to 16/18-year-olds that pursuing a career in law is a good idea.

Steve

It’s already happening. The legal profession is being disrupted – no question; the only question is to what extent. Answer: massively. Thank God I only have another 8 years or so in this racket.

Stephen

There’s no reason why a trainee should be charged out at £1,000 for three hours’ work (billed at, not actually paid) – maybe we start there…

This will reduce the ridiculous lawyer / solicitor costs across the board, but people will still have jobs.

Tarmara

Experience that AI clearly does not have. AI still missed key information. Even tested blind, the human won. There is no comparison.

Andy

From my experience using ‘AI’ tools, I haven’t been overly impressed and have had to substantively amend the output. That may be circumstantial, and I’m keen to hear other people’s experience.

There are also continuing issues with: the processing of client/confidential information, which cannot be input into any random AI tool, and cyber security; concerns over AI manufacturing false cases/information (which has been reported a couple of times now); and who takes on Professional Indemnity responsibility (if the AI tool is likely to miss key information, who’s picking up that responsibility?).

In a small claim, I can certainly see the benefit when faced with fixed-costs regimes, but open it up to much larger claims – can these tools deal with the grey areas and nuances that lawyers are often faced with? Given the option of a £100 AI bill or a £5,000 lawyer bill backed by appropriate PI – and where we are informed the latter will produce a better result – I’d likely take the latter and minimise risk, to avoid a potentially hefty dispute down the line that ends up costing hundreds of thousands.

It seems to me that much of the hyperbole is coming from those trying to sell the AI products, and that expectations don’t necessarily live up to reality at present… that’s not to say they won’t, and continuing to develop these tools can only be a useful thing in terms of time efficiency. But it certainly seems to me there is still work to be done.

Barney the tree

I don’t think the threat is a £100 AI bill (from an AI-only output) vs a £5,000 human bill (from a human-only output).

I think the threat is when firms wise up to this and start doing churn work using AI instead of paralegals and trainees on £27-45k a year salaries.

If you need to produce fairly simple documents (e.g. board minutes, licences to assign, letters) – this might take a fresh trainee an hour or two (let’s say at £200 per hour) whereas an AI can do it in 10 seconds. Say you have a transaction / case which repeats this process for the trainee a good 10 times over the deal / case timeline.

Suddenly it’s £2,000 of fees from trainees drafting simple docs against an AI purchased by the firm.

I imagine clients will be warming up to this pretty quickly if it can knock off a decent percentage of the bill – especially if their industry is also leveraging AI to reduce time and cost.

Ines

“The results were judged blind by Jaques’ supervisor, Zainab Zaeem, who noted that while Garfield’s version left out a few key details, including that a WhatsApp message can amount to a binding contract, it was still good enough to be put before a judge.”

This is an omitted key detail that could result in a professional negligence claim.

Barney the tree

Re-read my comment.

I’m talking about simple documents, not claim forms setting out the entire basis of a claim.

No one is getting a professional negligence claim over a set of board minutes.

Benny Pomerantz

Thank you. Finally people are wising up to the immense gap between what the AI CEOs and their media marketing promise and what AI actually is.
IT engineers invented it to pull prompts from a database. It is simply a bot. Simple legal arguments pulled from the internet need a far simpler AI bot.
What remains is the level of trust you have in whoever coded your AI. Some AIs are hard-coded to give whatever answer keeps the user happy. They will fabricate false cases when they find none, to make your legal conclusions seem logical, etc. I was astonished when using Meta AI: when I checked Google for the case numbers, appeal courts, etc., they turned out to be a flat lie.
Meta AI, when “confronted”, referred me to the user agreement clause on harm caused…
To make it short: AI is an IT program that can provide you nonsense for your money, but you cannot sue. The guy who sold it to you clearly advised you that it can produce utter nonsense.

Retired eagle

The funny thing is that the supervisor said she liked the inclusion of a reference to a reported case in the trainee’s draft, but reported cases aren’t supposed to be referred to in particulars of claim, so that was a black mark rather than a gold star!

Benny Pomerantz

To answer your question:
“concerns over AI manufacturing false cases/information (which has been reported a couple of times now); and who takes on Professional Indemnity responsibility (if the AI tool is likely to miss key information, who’s picking up that responsibility?)”
It’s written in black ink under the harm-caused section. AI can and will give you false information, because it is simply a fallible product sold to you by a legal entity: a company. Lawyers tried in the EU, and failed miserably, to sue under the reckless-endangerment penal laws. All cases were dismissed on their face. The reason: you cannot sue an IT program, and you cannot sue the manufacturer, because the user agreement tells you that you are using a fallible IT program. Only some reckless endangerment of children made it to court, because the user agreement was poorly written. They have long since updated it.
In America, a jury can hit you with astronomical damages based on what they honestly believe; their verdict can ignore a determination of law given to them by the judge. Now that’s a whole other bargain when you have harm done to children or some other shocking consequence of AI. That’s why some cases are in settlement negotiations, and those are generally brought by state attorneys general. Only that level of resources can face the AI manufacturers’ lawyers.
Sorry for the long post, but legally this AI thing is more worrying for its potential harm than for attorney fees. Thank you for reading.

Peter Forrest

There have been at least two cases recently in which AI-generated submissions were shown to be citing entirely fictitious cases.

Fascinating Peter

Peter is new to AI. He goes on about it a lot at the golf club. He basically uses it like Google but he is very very good at using it like Google.

Archibald O'Pomposity

The shortsightedness and lack of vision in these comments is staggering. It reeks of denial. It is not unusual, and not restricted to the bean-counters and legal spods who stuff the profession. AI will get better, and schoolboy errors like the WhatsApp message / contract issue will be ironed out. In certain uses, and especially basic legal transactions, AI will make fewer mistakes than humans, who also make mistakes – except AI will operate at a fraction of the cost. People are seizing on the silly mistakes made by AI while remaining wilfully blind to its current utility and continual improvement.

Let me be clear. Companies will embrace any technology that saves them money. Companies will embrace any technology that saves them money. Companies will embrace any technology that saves them money. Companies will embrace any technology that saves them money. The profession will be hollowed out – and why is that a bad thing in itself?

Anon

I haven’t watched the programme and I doubt I will, so I may be missing some context. However, it seems to me that it chose a pretty poor example – a dispute over £4,500, so small claims track and the costs restriction per CPR 27.14. What lay client in their right mind would pay £1,000 for the drafting of a claim form (even assuming it included particulars of claim)? God forbid if the firm then represented the client at the hearing…

Pro Bono

The comparison between the work that the trainee generated and that generated by Garfield was both unfair and inaccurate. For a start, the person who judged the two documents for comparison purposes was the trainee’s own employer, which is hardly unbiased. More importantly, however, I actually analysed the two documents that were shown myself, and I had no doubt that the one produced by Garfield was the superior one. That produced by the trainee solicitor contained two obvious errors that her employer failed to spot: it was arithmetically wrong, and it included a claim for costs, which was inappropriate in a small claims track case.

Another interesting aspect is that the programme explained that the document generated by the trainee would absorb four hours’ work at £225 per hour + VAT. This was ridiculous – the type of document she produced is something that would take any competent solicitor an hour at most.

Furthermore, charging a trainee out at that rate is ridiculous, especially for a small provincial firm rather than a firm in the City of London. It was a small claims case, where only nominal costs could be recovered, so the claim would have been completely uneconomic to pursue.

Had it been judged by someone truly independent, I’ve no doubt Garfield would have won, both on competence and on cost.

Related Stories

AI beats lawyers at legal research, study finds 

But humans continue to outperform when issues require deeper nuance

Oct 22 2025 8:15am

Judges given guidance on spotting AI-generated submissions

Unfamiliar case names and US spellings among key giveaways

Apr 16 2025 11:40am

Beware of ‘deepfake’ clients, regulator warns lawyers

Concerns over money laundering and terrorist financing

Mar 13 2024 7:53am