Nearly half of respondents express limited confidence in AI-generated legal outputs

Only one in five lawyers say they place high trust in AI-generated legal work, according to new research that paints a picture of a profession racing to adopt the technology while remaining sceptical of its output.
A survey of more than 250 legal professionals found that 67% have had to override or correct AI-generated legal output, and nearly three in five (58%) said they would not feel comfortable submitting an AI-drafted document to a regulator or court. Meanwhile, 42% reported little to no trust in the technology at all.
The findings, compiled by recruitment firm Paragon Legal, suggest that while firms are pressing ahead with automation, significant internal friction is following close behind. Nearly half of legal professionals (47%) said AI automation has sparked conflict within their team.
Lawyers appear most willing to hand over process-driven, lower-risk tasks. Document classification, compliance alerts, risk flagging, legal research and case law summarisation ranked as the most automated functions. But work involving discretion or professional judgment remains firmly off the table for many. Some 45% said final contract approval is off-limits for AI, 42% drew the line at ethics and compliance judgments, and 37% would not entrust litigation decisions to the technology.
The concerns driving that reluctance will surprise few in the profession. Accuracy and hallucinations topped the list at 57%, followed by data security and confidentiality (51%), liability exposure (45%) and ethical risks (44%). The anxiety is not without foundation, given recent high-profile instances of lawyers being criticised for citing fabricated AI-generated case law in court.
When asked what would increase their confidence, 41% pointed to mandatory human sign-off, while 20% wanted explainable decision-making and 17% called for built-in compliance guardrails. A stubborn 15% said nothing would make them trust AI.
Despite the scepticism, almost two-thirds (62%) expect their team’s AI use to increase moderately or significantly over the next year. That shift is also expected to reshape recruitment, with 43% predicting a reduction in hiring or staffing needs because of automation. By contrast, 21% said they expect to recruit more tech-savvy staff to keep pace.
My firm is pushing heavily for you to utilise AI, but when you talk to partners it becomes clear that there’s no unified position on when and how you should be using it. Personally, I find it good for presenting information, but not good at getting the information correct.
I also hear “the only lawyers that will become obsolete are the ones that don’t use it”. All I would say in response is that I might actually stand a chance of surviving the cull if I haven’t trained the AI in the things I do.
You are highlighting exactly the problem that most SME law firms are facing. An AI policy doesn’t get you very far and often gives a false sense of comfort. Law firms need to take their time early on to set out, share and debate an adoption roadmap to lead people through what will be a long journey with lots of decisions to make. Some do and make this easier – most don’t. See https://www.cartonconsultants.com/ai-adoption-law-firms Always happy to have a chat.
Potentially AI will replace the bottom level of juniors, and could make paralegals obsolete altogether.
I’d expect the future is going to be solicitors turning more into negotiators and advocates; that human connection with a client can never be replaced. Meanwhile AI will do the backend work, providing research, drafting first-round clauses/terms and template documents.
AI will replace a lot of things we do, but it’s going to be a long time before AI replaces us fully. You’re just going to have to learn new tricks that keep you employed until then, and probably also build more communication skills.
Whenever I have used AI, it has at times missed crucial detail. You have to “supervise” it, as if it were a paralegal or an NQ.
Excluding inventing new case law (which lawyers always have to verify), my feeling is that AI makes fewer errors than I made as a first-year associate. Supervise it but use it! Nobody trusted my judgment either (except, I suppose, my supervisors didn’t worry about me fabricating something). I verify cases and make my own judgement.
🎶
Anything you can do AI can do better!
AI can do anything better than you!
No you can’t!
Yes AI can!
No you can’t!
Yes AI can!
No you can’t!
Yes AI can! Yes AI can! Yes AI can!!!!!
🎶
I’d be interested to know when they carried out the survey and when the lawyers reported not trusting it, et cetera. With the speed at which AI is coming on, its levels of hallucinations and mistakes are falling with every month that passes. I created an AI tool to help my son revise for his maths GCSE (albeit not to give him the answers), and I couldn’t believe that less than two years earlier it wouldn’t have been able to do a simple algebraic equation, and it’s improving much faster than that now.
The constant citing of the instances where people have been silly enough to take the output and use it, without reading it or checking it, is in my view just a way to avoid the conversation. Its wide-scale adoption is inevitable.