Could you be fired by a robot – and would UK anti-discrimination law protect you?


By Puja Patel


Puja Patel, University of Cambridge law graduate, examines whether the UK’s current anti-discrimination laws are fit for purpose in the age of AI


Imagine if the popular BBC TV series The Apprentice had a robot instead of Lord Sugar sitting in the boardroom, pointing the finger and saying ‘you’re fired.’ Seems ridiculous, doesn’t it?

Whilst robots may not be the ones pointing the finger, more and more important workplace decisions are being made by artificial intelligence (‘AI’) in a process called algorithmic decision-making (‘ADM’). Indeed, 68% of large UK companies had adopted at least one form of AI by January 2022, and as of April 2023, 92% of UK employers planned to increase their use of AI in HR within the following 12-18 months.

Put simply, ADM works as follows: the AI system is fed vast amounts of data (‘training data’), from which it models its perception of the world by drawing correlations between data points and outcomes. These correlations then inform the decisions the algorithm makes.

At first glance, this seems like the antithesis of prejudice. Surely a ‘neutral’ algorithm which relies only upon data would not discriminate against individuals?

Sadly, it would. Like an avid football fan who notices that England only scores when they are in the bathroom and subsequently selflessly spends every match on the toilet, ADM frequently conflates correlation with causation. Whilst a human being would recognise that criteria such as your favourite colour or your race are irrelevant (or, worse, discriminatory) when it comes to recruitment, an algorithm would not. Therefore, whilst algorithms do not directly discriminate in the same way that a prejudiced human would, they frequently give rise to indirect discrimination.
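To make the mechanics concrete, here is a minimal sketch (in Python, using the scikit-learn library) of how a model trained on historically skewed hiring data learns a discriminatory proxy. The dataset, feature names and numbers are entirely hypothetical and serve only to illustrate the correlation-learning step described above; this is not a reconstruction of any real employer’s system.

```python
# Hypothetical illustration: a model trained on historically biased hiring
# data learns to penalise a feature that merely proxies for a protected group.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000

# Synthetic "training data": past applicants.
# Feature 0: years of experience (a legitimate criterion).
# Feature 1: attended a women-only college (a proxy for sex, irrelevant to merit).
experience = rng.normal(5, 2, n)
womens_college = rng.integers(0, 2, n)

# Historical hiring decisions were biased: candidates from the women-only
# college were hired far less often, regardless of experience.
hired = np.where(
    womens_college == 0,
    (experience > 4) & (rng.random(n) > 0.2),
    (experience > 4) & (rng.random(n) > 0.8),
)

X = np.column_stack([experience, womens_college])
model = LogisticRegression().fit(X, hired)

# The model reproduces the historical bias: the proxy feature receives a
# large negative weight, so two otherwise identical CVs are scored differently.
print("learned weights:", model.coef_[0])
same_cv = np.array([[6.0, 0.0], [6.0, 1.0]])
print("predicted hire probability:", model.predict_proba(same_cv)[:, 1])
```

Nothing in the sketch tells the model the candidate’s sex; it simply learns that a correlated, facially neutral feature predicted past outcomes, which is precisely how indirect discrimination can emerge from a supposedly ‘neutral’ algorithm.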

Unfortunately, this has already occurred in real life: both Amazon and Uber have famously faced backlash over allegedly indirectly discriminatory algorithms. According to a Reuters report, members of Amazon’s team disclosed that Amazon’s recruitment algorithm (which has since been removed from Amazon’s recruitment processes) taught itself that male candidates were preferable. The algorithm’s training data, according to the report, comprised resumes submitted to Amazon over a 10-year period, most of which came from men; accordingly, the algorithm drew a correlation between male CVs and successful candidates and so filtered CVs containing the word ‘women’ out of the recruitment process. The Reuters report states that Amazon did not respond to these claims beyond saying that the tool ‘was never used by Amazon recruiters to evaluate candidates’, although Amazon did not deny that recruiters looked at the algorithm’s recommendations.


Similarly, Uber’s use of Microsoft’s facial recognition algorithm to ID drivers allegedly failed to recognise approximately 20% of darker-skinned female faces and 5% of darker-skinned male faces, according to IWGB union research, resulting in the alleged deactivation of these drivers’ accounts and the beginning of a lawsuit which will unfold in the UK courts over the months to come. Microsoft declined to comment on ongoing legal proceedings, whilst Uber says that its algorithm is subject to ‘robust human review’.

Would UK anti-discrimination law protect you?

Section 19 of the Equality Act (‘EA’) 2010 governs indirect discrimination. In simple terms, s.19 EA makes it unlawful for workplaces to implement universal policies which seem neutral but in reality disadvantage a particular protected group.

For example, if a workplace wanted to ban employees from wearing headgear, this would disadvantage Muslim, Jewish and Sikh employees even though the ban applied to everyone. The ban would therefore be indirectly discriminatory and, unless the employer could show that it was a proportionate means of achieving a legitimate aim, the employer would be in breach of s.19 EA.

But here’s the catch. The EA only protects claimants who possess a ‘protected characteristic’, an exhaustive list of which is set out in s.4 EA: age, disability, gender reassignment, marriage and civil partnership, pregnancy and maternity, race, religion or belief, sex and sexual orientation.

The Amazon and Uber claimants fall into the protected categories of ‘sex’ and ‘race’ respectively. Therefore, the EA will protect them – in theory. In reality, it is very difficult to succeed in a claim against AI, because the EA requires claimants to causally connect the criteria applied by the algorithm with the subsequent disadvantage (e.g. being fired). It is often impossible for claimants to ascertain the exact criteria the algorithm applied; even in the unlikely event that the employer assists, the employer itself is rarely able to access this information. Indeed, the sheer number of correlations an algorithm draws between vast data sets means that its inner workings are akin to an ‘artificial neural network’: a black box that even its operator cannot interpret. Therefore, even protected-group claimants will struggle to access the EA’s protection in the context of ADM.

Claimants who are discriminated against because they possess intersecting protected characteristics (e.g. being an Indian woman) are also unprotected, as claimants must prove that the discrimination occurred because of one protected characteristic alone (e.g. solely because they are Indian, or solely because they are a woman). ‘Intersectional groups’ are therefore insufficiently protected despite being doubly at risk of discrimination.

And what about the people who are randomly and opaquely grouped together by the algorithm? If the algorithm draws a correlation between blonde employees and high performance scores, and subsequently recommends that non-blonde employees are not promoted, how are these non-blonde claimants to be protected? ‘Hair colour’ is not a protected characteristic listed in s.4 EA.

And perhaps most worryingly of all: what about those individuals who do not even know they have been discriminated against, for example through targeted advertising? If a company uses AI to advertise a STEM job online, the algorithm is more likely to show the advert to men than to women. A key problem arises: women cannot know about an advert they have never seen. Even if they find out, they are highly unlikely to be able to gather enough data to prove group disadvantage, as required by s.19 EA.

So, ultimately – no, the EA is unlikely to protect you.

Looking to the future

It is therefore evident that specific AI legislation is needed, and fast. Despite this, the UK Government’s AI White Paper confirms that it currently has no intention of enacting AI-specific legislation. This is extremely worrying; the Government’s desire to facilitate AI innovation unencumbered by regulation risks being deeply destructive to our fundamental rights. It is to be hoped that, following in the footsteps of the EU AI Act and pursuant to the recommendations of a Private Member’s Bill, Parliament will at least adopt a ‘sliding-scale’ approach whereby high-risk uses of AI (e.g. dismissals) attract heavier regulation and low-risk uses of AI (e.g. choosing locations for client meetings) attract lighter regulation. This approach would safeguard fundamental rights without sacrificing AI innovation.

Puja Patel is a law graduate from the University of Cambridge and has completed her LPC LLM. She is soon to start a training contract at Penningtons Manches Cooper’s London office. 



8 Comments

Did you know

“Open the pod bay doors – HAL”

Elise

An incredibly insightful article – I am interested to see how the Uber case will develop.

Iyanu

Very thought-provoking – more conversations such as this need to be had.

Rahul

Brilliant – very well articulated and insightful. A paper that truly explores its aim to the fullest in a condensed manner.

Dhruv

Great article – do you think the Private Members Bill, if enacted, would sufficiently address the issue of AI regulation?

Puja

Thank you for your question. There are shortcomings in the Private Members Bill, such as the omission of a specific provision governing enforcement and sanctions for non-compliance (section 8(2) merely states vaguely that regulations under the Act can create offences or penalties/fines/fees). In my opinion, and in response to your question, this means that the Bill would not wholly address the issue of AI regulation. However, its enactment would certainly mark a step in the right direction and would signal to businesses that the government takes the issue seriously, which in and of itself serves as a useful deterrent against the misuse of AI. The Bill would also bring tangible benefits and protections to the sphere of AI regulation, namely the creation of an AI authority (section 1) and the requirement of a proportionate relationship between the intrusiveness of AI regulation and the risks that certain AI uses pose to fundamental rights – the “sliding scale” approach (section 2(1)(c)-(d)). It remains to be seen whether the Bill will complete the notoriously long journey from introduction to enactment; it has passed its first reading in the House of Lords and awaits a date for its second reading.

Ria S

Have there been any changes to the UK government’s stance on legislating on AI regulation – do you think they will change their minds?

Puja

Thank you for your question. Whilst the government currently remains committed to refraining from producing primary legislation to regulate AI technologies, it remains to be seen whether the Private Members Bill will impact this. One hopes it will!

