Regulate the robots? Law Society’s incoming president launches review


By Polly Botsford

As Society appoints ‘new’ CEO following dramatic resignation of predecessor

The Law Society has launched a review into the impact of robot-esque technologies, such as artificial intelligence (AI), on human rights and justice.

Unveiled today, the public policy commission is being spearheaded by incoming Law Society president Christina Blacklaws, and will involve interviews with tech companies, government and legal experts.

The adoption of AI in policing and the criminal justice system is emerging in a number of areas in England and Wales (the focus area of the report). Police forces are piloting facial recognition technology to cross-reference individual members of the public at certain events (such as the Notting Hill Carnival) with crime data.

Durham Constabulary has been using an algorithm called HART (Harm Assessment Risk Tool) to predict the level of risk of an individual committing further crimes over a subsequent two-year period. The algorithm then decides whether the individual is a candidate for a specific rehabilitation programme.


There are concerns, however, that such technology and algorithms can be biased and raise human rights issues. Liberty, the civil liberties organisation, is backing a crowdfunded legal challenge by campaigner Ed Bridges against the use of facial recognition by South Wales Police.

The Chancery Lane commission will investigate these and other examples of AI to examine what sort of framework is needed “for the use of big data and algorithms to protect human rights and trust in the justice system”.

Blacklaws, who is commissioning the report, said that the use of AI “could — and sometimes does — keep us safer, preserve scarce resources and expand the reach of increasingly stretched law enforcement”. But she also warned:

“The design, sale and use of algorithms to deliver justice or maintain security also raises questions about unconscious bias, ethics and rights. Further potential risks may emerge when an algorithm is developed by a business focused on profit rather than by an organisation focused on delivering justice.”

Co-commissioners of the report, which is expected in early 2019, are Birmingham Law School professor Sylvie Delacroix and University College London computer science expert Professor Sofia Olhede.

The Law Society also announced today that its interim boss, Paul Tennant, will stay on as chief executive. Tennant took the helm last year following the dramatic departure of Catherine Dixon in early 2017. Dixon quit during a spat with the 100-strong Law Society Council which, she argued publicly, had blocked her attempts at much-needed governance reforms.

