Barrister becomes latest ‘victim’ of fake ChatGPT cases


By Legal Cheek


Reported to regulator


A barrister has been referred to the Bar Standards Board (BSB) after citing a non-existent case generated by ChatGPT in immigration tribunal proceedings, later arguing he was himself “a victim” of the AI technology.

Muhammad Mujeebur Rahman appeared for an appellant in an immigration matter when he included in grounds of appeal a reference to “Y (China) [2010] EWCA Civ 116”, claiming it supported arguments on delay. The tribunal found the case did not exist.

When challenged at a hearing in June 2025, Rahman initially claimed he had meant to cite other authorities, including YH (Iraq), R (WJ) v SSHD and Bensaid v UK. After being given a break, he told judges he had "undertaken ChatGPT research during the lunch break" and insisted Y (China) was genuine, describing it as a decision of Pill and Sullivan LJJ and Sir Paul Kennedy.

The panel gave him a deadline to either produce a copy of the judgment or explain what had happened if he could not. As the panel moved on to the next case, Rahman handed the tribunal clerk a nine-page internet printout containing “misleading statements”, including references to the fictitious Y (China) case under the citation for YH (Iraq). It made no mention of the key case on delay.

In a follow-up letter submitted before the deadline, Rahman explained that he had meant to cite YH (Iraq) and apologised for failing to provide the full and correct case name.

He attributed the mistake to “acute illness” he had suffered before drafting the grounds, as well as to a trip to Bangladesh during which he was hospitalised with diabetes, cholesterol issues and high blood pressure. He also argued that he should not be penalised for this error, noting that he has five dependants—his wife and four children.

At a further hearing, Rahman finally accepted that he had used ChatGPT to draft the grounds of appeal and to create the document he handed up via the clerk, but argued that he was “misled by the search engine and is thus also a victim”.

The tribunal said he had failed to carry out any checks on reputable databases such as Westlaw, LexisNexis, BAILII or EIN, and that his letter was "a less than honest attempt to pretend" he had simply made a typographical error and had not relied on AI.

Although it concluded there had been no deliberate fraud, the panel said Rahman had not acted with honesty and integrity, and that the use of fake authority likely contributed to permission being granted on one of the grounds.

Referring the matter to the BSB, the tribunal noted that lawyers have a professional duty to verify authorities and warned that “taking unprofessional short-cuts which will very likely mislead the Tribunal is never excusable”.

Rahman, who has since completed further training on immigration law and the use of AI, apologised for his conduct and argued he should not be referred to the BSB. He said he now has a proper understanding of the issues, has been honest, and will act with integrity in future.

4 Comments

well well well

well well well

Definitely not generated by ChatGPT

AI is like a toaster—it can only make what you put in it. If someone’s using it to toast their integrity, that’s a people problem, not a robot uprising.

Disbelief Incarnate

It’s baffling to me how so many keep falling into this trap, particularly in something as niche as law. Ask ChatGPT to explain a first-year undergrad legal concept like objective/subjective recklessness and, if you’re at all well-versed in the topic, you’ll see how it hedges and doesn’t word its answer as precisely as you’d expect even an undergrad student to, even if it is broadly correct.

Imagine trusting it to generate accurate answers on more specific topics. Some people must really think AI is magic. There’s no other word for it: negligent.

Alex

It’s just a new incarnation of an old problem. I’ve been in cases where my opponent only looked at a headnote, didn’t have the latest version of the White Book, hadn’t appreciated that a case had been overturned, etc. LLMs are a very useful and powerful addition, but bad lawyers will always be bad.

