News

Westminster Uni academic uses ChatGPT to pass contract law exam

But only just — bot performs as an average law student, Dr Ioannis Glinavos finds

A university academic has pitted ChatGPT against a first-year law degree exam paper in contract law, with underwhelming results.

Dr Ioannis Glinavos, a senior law lecturer at the University of Westminster, asked the bot to answer a selection of sample exam questions taken from a Pearson law textbook available online.

One scenario considered a contract to build a motor yacht for a computer tycoon and required ChatGPT to provide advice in response to supply chain issues and changes to a payment plan. Unfortunately, the bot’s response was marked as “tactical advice, not legal advice”, and deemed unlikely to gain a pass mark in a law examination.

The bot’s responses continued to be hit and miss across all of the questions put to it, leading Glinavos to conclude that its performance was equivalent to that of an average first-year law student. “The bot could sit in my class and I wouldn’t know the difference,” he said in the video.

Like most students, the bot had areas of strength and weakness. Glinavos explained: “It can identify correctly the relevant legal issues but it struggles with analysis and authority.”

“The bot is very close to the average student performance, thus not easily detectable as an AI-produced answer,” said Glinavos. Ultimately, he ruled that the bot’s attempt at the exam paper scored around 45%, which was still good enough to merit a pass.

He finished with a warning that current AI-detection tools for exams are not yet advanced enough to reliably detect students using these types of tools to cheat. “It will eventually lead to the abandonment of online timed exams that are not proctored,” he predicted.

14 Comments

TS Cultist

Who cares if they use ChatGPT since they’re never gonna enter the profession anyway lol

The Voice of the People

Let’s be honest here, the real problem isn’t chat GPT. Before that it was essay mills and ghost writers. Before that traditional book plagiarism.

The issue is the nature of assessment. Universities are simply ill-equipped and unwilling to make the changes needed. Cheating could be virtually eradicated overnight, but this would require major changes in how lessons are taught to actually make them engaging and interesting.

Note – an 80-page PowerPoint on frustration is neither “engaging” nor “interesting” when read by a lecturer in a monotone. It is little surprise that many students put in equally abysmal effort in their coursework when they are asked to answer the same questions that were asked several years before. Sometimes the scenarios don’t even bother to change the names from year to year. Likewise, lecture slides are simply given a new theme from the year before.

I get it, lecturers are underpaid and poorly treated, and international students are cash cows who overwhelmingly attend such universities, so we don’t want to rock the boat too much – but let’s not pretend ChatGPT is the issue here.

How about an assessment that mimics a real-life shipping dispute in real time, with students given materials and background in advance (like, you know, actual practice) and the examiner playing the role of an arbitrator – asking the students to work through the issue and explain their reasoning. All recorded. Students then need to submit work afterwards as an extension of the activity.

But no. We won’t do it. We would rather sit there reading from the same slides lesson after lesson and setting the same boring essay questions that have been set every year for the last decade. Then wonder why the UK is no longer a top destination and we all hate our jobs.

Speaking figuratively, of course.

Legal Officer With A 2.ii

Universities could also offer the kind of tutorial/supervision sessions that Oxbridge does, to make sure students actually read and digest what they learn.

Tuition fees are the same at Oxbridge as at other universities. The preference recruiters have for Oxbridge graduates isn’t going to change whilst other students complete only an essay a month.

SkepticalLecturer

You may be surprised to find that plenty of lower-tier institutions – where there is a focus on teaching, and staff more often come from the professions – do exactly this kind of thing.

It is difficult to scale up because of a) competing demands on our time as staff and b) terrible top-down restrictions on assessment modes and logistics.

Prompt engineer

ChatGPT performed “as an average first year student” because Dr Ioannis Glinavos’s prompts were average and flat. With proper prompting, ChatGPT gives first-class answers.

Anonymous

Literally scrolled to the comments to make the same point as this. Spot on.

Voice of the People

In my ignorance I haven’t tried the most recent version of ChatGPT, so the following point may already be outdated.

HOWEVER, for those of us who are already experts in the subject material, the deficiencies in ChatGPT are very real and obvious. From my experience, its answers have the veneer of excellence but underneath lack any real legal substance. Obviously a student would not be able to see these gaps.

The main issue, I believe, is the use of authorities, which is largely robotic or mechanical (and extremely limited). There is very little legal context given, and the legal analysis of the cases themselves seems extremely limited. It’s very much like answering a question with unlimited access to Wikipedia.

Sentence construction and ordering is superb, but when you look a little deeper there is actually very little there.

Adam

To be honest, I think this says more about the standards at the University of Westminster than it does about anything else.

common sense

I’m sure a first year paper at the University of Westminster is the peak of academic rigour. Has anyone tried it against a decent uni paper?

SkepticalLecturer

You probably don’t realize that the papers for core modules assigned across institutions are basically the same.

common sense

Oh please. Take an Oxford FHS paper in Trusts and compare it to a finals paper at the University of Westminster. Compare the two reading lists for each uni’s contract modules. The standard is very different and you know it.

average student

The paper tested wasn’t from Westminster. ChatGPT would read the article before commenting…

onoz

People forget that ChatGPT is still in beta and is already producing these kinds of results. I can’t even imagine its capabilities with further developments and updates.

Voice of the People

I agree.

Suffice it to say – and I think this point may have been missed – if my average student were scoring 45%, I would have major concerns over the adequacy of either the teaching team or the learning materials. ChatGPT would be the least of my worries!

I suspect that this has been exaggerated, though – I find it hard to believe students are genuinely scoring 45% on average. This is a bare pass. There is little point differentiating on degree classifications if the outcome students can expect to achieve on “average” is just a pass. If this were the case and I were an embassy, I would be removing Westminster from my approved providers. If I were a course leader, which I believe the OP was, I would be reviewing the competencies of my teaching cohort (not making YouTube videos).

Food for thought.
