University College London will ensure most law assessments are artificial intelligence-proof

UCL law school has stated its intention to “secure” assessments against artificial intelligence (AI), framing the shift as a “response to the future” aimed at upholding trust and integrity amid “AI slop”.
In a chunky paper published by the top law school, leading academics analyse AI in legal education. The central message is that the faculty will ensure more than half of the assessments it runs cannot be completed with AI assistance. Two main reasons underpin the shift:
“Our task as a law faculty is to ensure that our degrees are, and continue to be, both transformative educational journeys and powerful, internationally recognised and durable signals of our students’ achievements. AI does not change the core of that”.
The paper defines a “secure assessment” as one which guarantees that “AI does not substitute for the skills or knowledge acquisition being evaluated.” This includes written and oral in-person examinations.
UCL’s university-wide regulations already prohibit the use of AI to “create or alter content” in assessments, including coursework, “unless explicitly authorised…for a valid pedagogical reason”. However, the law school’s move to actively “secure” assessments is a new development that will affect both undergraduate and postgraduate study. The faculty says shifting to 50% or more AI-proof assessments is a return to how things were done before the covid pandemic, which made coursework more common.
During their legal careers, students may find themselves working with clients or in jurisdictions which are “still at the earliest stages of the digitisation of law, let alone the use of AI”, and they need to be prepared for this, the law school claims. Key skills, like thinking on your feet in cross-examination and learning the ethical standards for handling sensitive evidence, cannot be replaced by AI, it argues.
The law school’s new approach responds to AI tools that are continually improving. Assessors were wowed when chatbots scraped a pass mark in a Watson Glaser test in 2022 and in a contract exam in 2023.
Now, educators have to contend with the rise of “AI agents”, which can perform certain tasks independently and proactively. ChatGPT’s recently launched “deep research” function, which can take a research question and spend as long as half an hour scanning the web to generate a lengthier, more accurate essay-style product, is a further challenge.
The academics liken AI uptake to for-profit legal databases being sold cheaply to universities decades ago. The idea, the paper says, was for universities to train students to rely on the tools and so “ensure a pipeline of future customers”. Universities like UCL pushed back, for example by setting up the free case law database BAILII. Now, these academics suggest, AI companies are up to the same trick, encouraging students to rely on their products rather than develop independent skills.
Microsoft’s AI, Copilot, is particularly interesting because, as the paper points out, the company has “embedded” it into its ubiquitous office suite, putting it directly in front of users. Shoosmiths recently announced a £1 million bonus pot if its lawyers enter one million prompts into Copilot, showing the inroads the tool has already made in the legal sector.
The paper acknowledges that lawyers can and do use AI tools, but emphasises that education is different. Lawyers are in part “content creators”, the paper says, whilst students, it argues, are not. The faculty’s “trusted” degrees should test “underlying skills” rather than the ability to produce content. The paper goes further, saying that plentiful AI-generated media makes text “cheap” (what it calls “AI slop”), so “creativity and critical thinking” will be needed for students to use AI “masterfully”.
Educators around the world have been wrangling with AI. Victoria University of Wellington, in New Zealand, this week announced that handwritten exams will return this trimester. UCL Laws uses its paper as a call to action:
“[C]onscious decisions must sit at the heart of universities’ approaches to AI and education. Universities must not be passive rule-takers. We must not simply ‘adjust to’ speculative educational and professional visions of the future marketed by technology firms in order to sell more cloud computing and increase reliance on tools with questionable and uncertain utility. Universities must steer, and if necessary, themselves create, the technology they need for their missions.”
Elsewhere, the legal profession has made historic moves to embrace AI. This month, the SRA approved Garfield.Law, a regulated law firm driven entirely by AI and a first for England and Wales. Over in the judiciary, judges recently received refreshed guidance on using tools like Copilot, as well as on identifying AI-generated submissions.
Meanwhile, this year the Pupillage Gateway prohibited AI use in applications, even as some law firms offered tips on how to use the technology when applying.
The somewhat mixed messaging from regulators, universities and professional recruiters has at times made it difficult for students to know where they stand. Legal Cheek’s latest podcast tackles this issue from aspiring lawyers’ perspectives, reflecting on personal experience and asking whether students should use AI to potentially gain skills, or ignore it and risk falling behind.