Can AI be criminally accountable?


By Theo Richardson-Gool

Ex-human rights practitioner Theo Richardson-Gool examines the problems posed by autonomous machines


Artificial intelligence (AI) takes decisions away from humans, but who is accountable when it does? Do different legal standards apply to AI? Here we explore how algorithms can amplify human prejudices and why this is a human rights issue demanding scrutiny, transparency and accountability. We also give philosophical consideration to the ontological effects of AI, where predictive systems may reinforce social tendencies such as homophily, leading to greater social fragmentation and a homogenisation of tastes.

Can AI be criminally accountable?

Is a human-based legal system fit for autonomous machines? Initially it seems logical that, for example, a manufacturer of autonomous vehicles should be held to account when its AI malfunctions and makes a decision which conflicts with our laws, such as running over a pedestrian to avoid injuring the passenger. But is the manufacturer at fault if mens rea cannot be proved? After all, the AI makes decisions autonomously of its manufacturer or programmers. This raises the question: can AI have a guilty mind? If the answer is no, how do you prove criminal liability, given that in jurisprudence a guilty act needs a guilty mind? In other words, it can be argued that, under our laws, AI lacks sufficient mental capacity to be guilty of a crime. This begs the question: do we need different legal tests for AI?

If a different burden of proof is required for AI, is greater oversight of the data fed into the deep-learning process also required? Autonomous machines are trained by being fed data, which in turn shapes the artificial neural network. Yet the user has virtually no understanding of the decision-making process from data input to decision output. This is a ‘black box’ scenario, and it is why Sherif Elsayed-Ali of Amnesty International argues, “we should always know when AI is aiding or making decisions and be able to have an explanation of why it was made.” However, according to associate professor David Reid of Liverpool Hope University, this may not be possible: unravelling the reasoning process of AI is challenging because “the choices are derived from millions and millions of tiny changes in weights between artificial neurons.” In other words, we may not be able to reconstruct the reasoning process. Transparency of the data input is therefore especially important: it shines a light into the ‘black box’, gives us oversight, allows us to reduce potential AI biases, and even lets us re-program or educate AI so that faults are minimised.
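To make the ‘black box’ point concrete, here is a minimal, purely illustrative Python sketch (the network, its size and its numbers are invented, not taken from any real system): even a tiny network’s output is the combined effect of hundreds of weights, none of which maps onto a human-readable reason for the decision.

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy feed-forward network: 10 inputs -> 32 hidden units -> 1 output score.
# Real deep-learning systems have millions of weights; the principle is the same.
W1 = rng.normal(size=(10, 32))   # 320 weights between input and hidden layer
W2 = rng.normal(size=32)         # 32 weights between hidden layer and output

def decide(x):
    """Return a single 'decision' score for input x."""
    hidden = np.tanh(x @ W1)     # every input interacts with every hidden unit
    return float(hidden @ W2)    # the score blends all of those interactions

x = rng.normal(size=10)          # one observation, e.g. a set of sensor readings
print(decide(x))

# There is no single weight you can point to and say "this is the reason";
# the output is the combined effect of all 352 weights acting at once.
```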

Replicating human bias in AI

Any system designed by humans is likely to reflect our biases. Humans have discriminatory preferences, but do we want our prejudicial tendencies to be extended by AI? This is what happened in Britain when the police used facial recognition software which, “through replication or exacerbation of bias”, projected human prejudices into the technology and discriminated against black people.

Concern about AI amplifying existing bigotry is a real problem which can lead to ‘unintentional discrimination’. Dr Schippers called this the “white guy problem”: the fear of racial and gender stereotypes being perpetuated by AI systems in policing, judicial decision-making and employment, inter alia. Further, The Toronto Declaration: Protecting the right to equality and non-discrimination in machine learning systems (2018) called on the public and private sectors to uphold international human rights law against technological advances that undermine human rights. In any case, jurisprudence fit for AI needs oversight, accountability, scrutiny and transparency at the design stage. Furthermore, as decision-making is handed over to AI, we may want to consider creating an ethical AI to guide the underlying AI, so that empathy is supported and unintentional discrimination is minimised.
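As a hedged illustration of how ‘unintentional discrimination’ can arise, the toy Python below does nothing clever at all: it simply predicts the most common historical outcome for each group. If the historical record is skewed against one group, the prediction reproduces that skew faithfully (the groups, outcomes and numbers here are invented for illustration only).

```python
from collections import Counter, defaultdict

# Invented, deliberately skewed "historical" decisions: (group, outcome)
history = [("A", "approve")] * 80 + [("A", "reject")] * 20 \
        + [("B", "approve")] * 30 + [("B", "reject")] * 70

# "Training": record the frequency of past outcomes per group.
outcomes = defaultdict(Counter)
for group, outcome in history:
    outcomes[group][outcome] += 1

def predict(group):
    """Predict by replaying the majority outcome seen in the past."""
    return outcomes[group].most_common(1)[0][0]

print(predict("A"))  # approve
print(predict("B"))  # reject -- the historical bias is faithfully replicated
```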

So far we have considered the potential ramifications of AI biases in a largely ungoverned, unregulated market. But what are the implications of AI for social behaviour?

Should AI encourage dissonance?

Homophily is an inherent human trait which leads us to befriend people similar to ourselves and to socialise with those who hold similar attitudes. The new online social world, driven by algorithms attempting to find likeable tastes, associations and even friends, perpetuates this behaviour. We see it in the way trends develop within certain groups or social media feeds. It accelerates the homogenisation of culture, but it also creates echo chambers that can polarise groups, as the algorithms steer us away from dissonance. We have already seen this with popular culture, as marketers latch on to prefigured tastes to maximise profits; now it is affecting how we interact, as our internalised preferences are confirmed and reaffirmed through AI rather than being challenged.
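A minimal sketch of how such an algorithm can steer us away from dissonance (illustrative Python only, not a description of how any particular platform works): the recommender below simply ranks items by similarity to what the user has already consumed, so content unlike their existing tastes never surfaces.

```python
# Each item is scored on two invented dimensions of "taste".
items = {
    "familiar_video_1": (0.9, 0.1),
    "familiar_video_2": (0.8, 0.2),
    "challenging_video": (0.1, 0.9),
}

user_history = [(0.85, 0.15), (0.9, 0.1)]  # what the user already watched

def similarity(a, b):
    # Dot product: higher when the item resembles past viewing.
    return a[0] * b[0] + a[1] * b[1]

def recommend(history, catalogue):
    # Average the user's past viewing into a taste profile...
    profile = (sum(h[0] for h in history) / len(history),
               sum(h[1] for h in history) / len(history))
    # ...then surface whichever item most resembles that profile.
    return max(catalogue, key=lambda name: similarity(profile, catalogue[name]))

print(recommend(user_history, items))  # familiar content wins every time
```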

Dialectical reasoning invites opposing perspectives in order to establish truth through reasoned argument. But if AI pushes us into silos, where views that do not match our tastes are avoided, then opportunities for dialogical exchange will be limited. That is a concern for philosophers, who are often on the periphery of culture but at the forefront of progress. Consider the ramifications for society if AI deters its users from breaking their social norms: wouldn’t it simply play to our own confirmation biases? Presently, YouTube uses AI to calibrate what keeps you viewing videos, i.e. it establishes your binge tastes. Is this a good thing?

The idea that we are encouraged to indulge our existing tastes rather than open ourselves up to new ideas, concepts or contrary perspectives does not bode well for existential thinking, whether that is Sartre’s idea of radical freedom, in which rebelliousness is a virtue, Foucault’s thought that emancipation can come from breaking norms and categories, or Nietzsche’s resistance to the herd mentality. AI, in other words, perpetuates narrow thinking. Should AI promote diversification by also challenging our tastes, or would that be misleading? Perhaps preference settings are an option, where we choose how much dissonance we want AI to introduce into our lives, a bit like setting the level of honesty you want when deciding which news source to read.
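One way to picture such a preference setting (a hypothetical ‘dissonance’ knob, not a feature of any existing system) is a recommender that blends similarity with deliberate novelty, with the user choosing the weight given to dissonance:

```python
def recommend(catalogue, dissonance=0.0):
    """
    catalogue: item name -> similarity to the user's existing tastes (0..1).
    dissonance: user-chosen weight (0 = pure echo chamber, 1 = pure novelty).
    """
    def score(name):
        similarity = catalogue[name]
        novelty = 1.0 - similarity
        return (1 - dissonance) * similarity + dissonance * novelty
    return max(catalogue, key=score)

catalogue = {"more_of_the_same": 0.95, "mildly_new": 0.5, "very_different": 0.05}

print(recommend(catalogue, dissonance=0.0))  # more_of_the_same
print(recommend(catalogue, dissonance=1.0))  # very_different
```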

Difficult questions about the ethics of AI, and how it is used, arise in the process of adopting it. Several points are clear from these findings. First, greater consideration needs to be given to whether AI can have a guilty mind. Second, we need transparency at the design and programming stage, particularly over the data being input, so that we maintain some oversight and avoid extending human prejudices into AI. Third, we should consider creating an ethical AI system to guide general systems. And last, when applying AI to social media and the internet, serious consideration needs to be given to whether we want a system which perpetuates echo chambers and affirms existing habits and tastes, or not.

Theo Richardson-Gool is a former personal injury and human rights practitioner. He is a graduate of IE Business School and University of London.
