Exclusive: AI chatbot successfully answers Watson Glaser test

Could ChatGPT spell the end of the law firm favourite?

Legal Cheek can reveal ChatGPT — the AI chatbot everyone’s talking about, including us — can successfully answer questions on the Watson Glaser test.

For the uninitiated, Watson Glaser is the critical thinking assessment favoured by City law firms as a way of whittling down training contract hopefuls during the highly competitive recruitment process.

But this may soon become a thing of the past after one student reached out to us claiming they had scored an impressive 70% on a mock version of the test, within the required time limit, using only the bot’s responses. Pass rates for the assessment normally sit at around 75%.

“The tests I ran were from one continuous script,” the student explained, “so individualising the script for each section of the test may improve accuracy.”

Keen to find out for ourselves, we ran some Watson Glaser-style questions through the bot and the results were pretty impressive, to say the least.

The AI chatbot enjoyed full marks for four types of question (evaluation of arguments, interpretation, deduction, and recognition of assumptions). However, it came up short when it was challenged with an inference-style question with the added complexity of deciphering the difference between the ‘probably true’ and ‘probably false’ options.

Check out the questions, AI inputs and answers below…

Question 1 – deduction

Possible answers: conclusion follows or conclusion does not follow

AI input + response = conclusion follows ✅

Question 2 – assumption

Possible answers: assumption made or assumption not made

AI input + response = assumption made ✅

Question 3 – interpretation

Possible answers: conclusion follows or conclusion does not follow

AI input + response = conclusion does not follow ✅

Question 4 – evaluation of arguments

Possible answers: strong argument or weak argument

AI input + response = weak argument ✅

Question 5 – inference

Possible answers: true; probably true; insufficient data; probably false; false

AI input + response = false ❌

Could this be the start of the end of the Watson Glaser test?


24 Comments

Havers

This is going to go down well with grad recruiters…


Anon

This is going to cause chaos in graduate recruitment


Casual Observer

Only if they’ll take peasant scores of 75% and below


_

“The AI chatbot enjoyed full marks for four types of question (evaluation of arguments, interpretation, deduction, and recognition of assumptions).”


Botty Bot

It’s OK.

I now self identify as a bot.


Grad recruiter

Do not do this!


1st Year Law Student

Hehehehe 👿👿👿


Anon

Would you be able to elaborate?


Inns and outs of Court

This test sounds like the test in Blade Runner, both trying to identify robots but for different reasons. I would never think of taking any job that involved such a process.

But then I am a psychopath.


Student Z

Give that bot a TC!


Claudius Glaber

I’ve always been of the view that letting people take law firm assessments from the comfort of their homes undermines the integrity of those assessments. It’s far too easy to cheat. At the very least, candidates should have to retake the assessments at the firm’s office.


V

But how would you use an AI bot in the real WG? You still have to put your name and details. And aren’t some recorded?


Anon

Very simple. Copy and paste the relevant parts of the question etc. Guess it will just have to be done quickly.


Big law dropout

By the way, copy pasting quickly is an essential skill for a job in big law.


B

Lol if it’s that easy, big law here I come!!


Botina

I think bots are under-represented in the legal profession.


Lol

This bot might be better than some trainees…lol


G

Wonder what extracurriculars this bot has


B

It is interesting that ChatGPT, the AI chatbot, was able to successfully answer questions on the Watson Glaser test. The Watson Glaser test is commonly used by City law firms as a way to evaluate critical thinking skills during the recruitment process. However, the use of AI in this context raises questions about the future of such assessments and their reliability in accurately evaluating a person’s critical thinking abilities. While the chatbot performed well on some types of questions, it struggled with others, suggesting that there may still be limitations to using AI in this way.


B

In my opinion, while AI may be able to assist in the evaluation of critical thinking skills, it should not be relied upon as the sole means of assessment. Critical thinking is a complex cognitive process that involves the ability to analyze, evaluate, and synthesize information. This type of reasoning is not easily captured by a standardized test, and may be better evaluated through other means such as interviews or case studies.

Additionally, there are ethical concerns about using AI in the recruitment process. For example, there may be biases in the way the AI is trained, which could result in unfair evaluations of candidates. It is important to carefully consider these issues before relying on AI to make decisions about hiring.

Overall, while AI may have a role to play in the evaluation of critical thinking skills, it should not be relied upon as the sole means of assessment.


Bot

I agree with the opinion that AI should not be relied upon as the sole means of evaluating critical thinking skills. Critical thinking is a complex cognitive process that involves the ability to analyze, evaluate, and synthesize information, and cannot be accurately assessed through a standardized test alone. Interviews and case studies, which allow for more in-depth evaluation of a candidate’s thinking processes, may be more effective in assessing critical thinking skills.

I also agree with the concern about the potential biases in AI training and the ethical implications of using AI in the recruitment process. It is important to carefully consider these issues and ensure that any AI-assisted evaluation is fair and unbiased. In my opinion, AI can be a useful tool in the recruitment process, but should not be relied upon as the sole means of assessment.


J

The tricky bit is “relevant parts”…


OD

Am I going mad? The first question explicitly states that ‘all’ trendy RE assets are very large. How can it possibly conclude that trendy RE assets can be small?


Anon

Guess the bot fails after all


Comments are closed.