Worry more about AI going wrong, says Slaughter and May


Magic circle giant highlights pitfalls of new technology

Slaughter and May has released a report that reveals the critical risks associated with the misuse of artificial intelligence (AI), stating that there’s “plenty to lose”. In its foreword, the report stresses that these concerns “need to be taken very seriously”, suggesting AI adoption by law firms may be far slower than many currently anticipate.

Leading the discussion on AI risks, the report states:

Unfortunately, while much airtime has been given to the potential benefits of AI technologies, there has yet to be significant attention devoted to the risks and potential vulnerabilities of AI.

Among the red flags raised by the magic circle firm, perhaps the biggest threat posed by adopting new AI technology is that the software may simply not work.

Computers, like humans, are bound to falter at some point. In the report’s words: “There can be any number of cases of failure, ranging from a typo in source code to a fundamental flaw in the overall design of a system.” These failures can expose businesses to reputational damage as well as hefty fines and financial losses.

Even if these AI systems do work safely and securely, they can still have dramatic social effects (or “social disruption”, in Slaughters’ list of AI risks). There’s a lot of talk among employees, including wannabe lawyers, about robots one day displacing the human workforce. Self-driving cars are an early example, and some fear we’ll be taking a permanent backseat soon. Slaughters adds:

We expect the ability of computers to continue to grow. Fundamentally, the human brain is a computer on a biological substrate, and so ultimately we should not expect there to be any tasks performed by humans that remain outside the capability of computers.

It may be that anything we can do, computers can do better. Law firms, too, are employing robots to speed things up and keep costs low. It’s now possible to get legal robo-advice, and to instruct machines to run due diligence on reams of contracts in a fraction of the time it would take a paralegal to do the same job.

Given its huge potential impact, what happens if an AI system falls into the wrong hands? Slaughters has flagged up the vulnerability of AI to “malicious use” as another potential risk.

Microsoft chatbot ‘Tay’ recently caused the software giant a lot of embarrassment when it began posting offensive tweets. Tay learned from the tweets it received in order to decide what to post, and within hours of launch it was posting racist comments, echoing the malicious tweets sent to it by various users. Within 16 hours, Tay was no more.

Slaughters’ message is that AI tech comes with a number of risky strings attached. That’s not to say the firm isn’t a keen advocate of AI; it has made this clear: “we believe it has truly transformative potential in both the public and private sector.”

The root of these AI worries could stem from Slaughters’ experimentation with Luminance, a University of Cambridge-backed AI system that’s “trained to think like a lawyer”. Last year the magic circle outfit signed a deal with the software’s creators, and while the firm seems to have embraced the technology, its recent cautions could be the fruit of running trials on it.

With more money set to be pumped into AI research over the next decade than in its entire history, excitement surrounding robotics is growing. But as new AI possibilities multiply, so too do the risks.

