Book review: Susskind’s ‘How To Think About AI’ 

By Polly Botsford

Polly Botsford delves into Professor Richard Susskind’s ‘darker’ latest book and wonders: ‘Where has Reassuring Richard gone?’


Richard Susskind has always been an optimist, thinking and writing about tech and the law without being a doomsayer, never a naysayer, an everything-possible kind of guy. Even when he was talking about the end of lawyers, it sounded like a positive (even for lawyers).

But his latest book, How To Think About AI, an excellent companion to understanding where we are in all things AI, is darker around the edges. “Balancing the benefits and threats of artificial intelligence – saving humanity with and from AI – is the defining challenge of our age,” he tells us. “I am … increasingly concerned and sometimes even scared by the actual and potential problems that might arise from AI.” So he begins his chapter on ‘Categories of Risk.’ Where has Reassuring Richard gone?

And yet, of course, How To Think About AI is full of clear thinking as well as being a call to action. Susskind covers a lot of ground: alongside an exploration of the various hypotheses on where AI will take us (is it hype or the end of the world as we know it?), he recaps AI’s brief history, starting with Alan Turing’s 1950 paper ‘Computing Machinery and Intelligence’ and including the ‘AI winters’ when progress stalled, for example when we all got distracted by the internet. He explores the alarming risks artificial intelligence brings, and how it might be possible to control those risks. He wraps up with a crash course in consciousness, evolution and the cosmos, exploring the various theories about where all this AI business sits within the grander ideas about life, the universe and everything.

But let’s roll back to the here and now. For lawyers, Susskind gets to the core of how they (like other professionals such as doctors or actuaries) misunderstand AI. He starts by talking us through a distinction between process-thinkers and outcomes-thinkers: the former camp thinks about something in terms of what it does, the latter in terms of the results it produces. A lawyer provides services by building up knowledge and experience and advising a client accordingly. That’s a lawyer’s process. But the client is not interested in the process of law or legal services, nor in a lawyer’s depth of knowledge or fantastic reasoning. The client is interested in specific outcomes: avoiding a legal problem, solving a legal problem, getting a dispute settled, getting certainty.


Susskind points out: “Professionals have invested themselves heavily in their work – long years of education, training, and plain hard graft. Their status and often their wealth are inextricably linked to their craft.” This means their focus is on processes not outcomes. They cannot envisage another way of getting to the result that the client really wants. From an AI point of view, this is inherently limiting.

To elaborate on this point, Susskind lays out the three ways in which AI will change what we do: first, automation, computerising existing tasks; second, innovation, creating completely new technological solutions (we are not designing a car that a robot will drive, we are designing driverless cars); and third, elimination. Elimination is where AI might well get rid altogether of the problems we are trying to solve. In this last category, he gives the example of the Great Manure Crisis of the 1890s, when there was so much horse poo on the streets of the world’s cities that it endangered everyone. What came along was not a machine to get rid of manure but cars. Problem eliminated. Lawyers, like other professionals, cannot imagine AI beyond automation: “They cannot conceive a paradigm different from the one in which they operate.” It’s not that AI will get rid of all legal problems, but many current problems may simply cease to exist (replaced by a whole set of new ones), and professionals are just not thinking imaginatively enough to see this. And thus, Susskind warns us: “Pay heed, professionals – the competition that kills you won’t look like you.”

When it comes to the courts and justice, Susskind has long campaigned for these to be massively updated by using technology. His chapter, ‘Updating the Justice System’, follows in the same vein, only more adamantly so. For instance, on lawmaking, AI will require “legislative processes that can generate laws and regulations in weeks rather than years”. On courts, he reminds us of his book Online Courts and the Future of Justice, where he set out a blueprint for digitised dispute resolution that engaged real judges only as a last resort.

But he also argues we will need new legal concepts. Intellectual property law will need to be completely ‘reconceptualised’. And if AI systems are to be more like “high-performing aliens”, he argues (borrowing a description from a contemporary historian), we will need a new form of legal personality: “if we are to endow AI systems with appropriate entitlements, impose sensible duties on them, and grant them fitting permissions.”

How To Think About AI is an unsettling but informative read. To be enlightened is to be alarmed is to be armed when it comes to AI; so every professional needs to read this book now – before it’s too late.

Professor Richard Susskind OBE is the author of ten books about the future. His latest, How To Think About AI, is now available. He wrote his doctorate on AI and law at Oxford University in the mid-1980s.

5 Comments

Hmmm

One of the central predictions of Richard Susskind’s previous book, Tomorrow’s Lawyers (2013), was that many people who went into law 20+ years ago ended up rich almost by accident, as they never went into the profession for the money, while many of those going into law post-2013 would do so for the money but end up earning much less than they’d hoped because of tech and automation etc.

So far this prediction has been completely wrong, as junior lawyer wages at corporate law firms have continued to soar, alongside partner earnings. I’d like to see Susskind reminded of this. Anyone who followed his advice has lost a lot of money.

Anon

His own son is a commercial barrister.

If it were all doom and gloom for the profession, I’m sure he would have persuaded him to look for other ways to make a living.

_

In fairness, I think he’s always said that the very high end of law will do well. What he’s got very wrong though is BigLaw. No doubt he’ll claim that he’s early and assocs at big law firms will eventually be turned into ‘legal engineers’ or whatever.

Anon

That’s partly true, but who those Associates are has changed.

Back when I qualified (2005), if you weren’t a partner by 7/8 PQE you were over the hill. Most of my trainee supervisors (3-6 PQE at the time, about the same as now) were full equity by the time I was 3 years qualified. This was at an MC firm. Nowadays people are getting made up at 10-12 PQE and the number making it is ~10% of a trainee intake.

So Associate pay has gone up but the real money is getting further away and harder to get into. It would be incredibly rare now for a trainee at a firm that size to have 3 supervisors who were all equity before 10 PQE, not so back then.

Grant Castillou

It’s becoming clear that with all the brain and consciousness theories out there, the proof will be in the pudding. By this I mean: can any particular theory be used to create a machine with adult human-level consciousness? My bet is on the late Gerald Edelman’s Extended Theory of Neuronal Group Selection. The leading group in robotics based on this theory is the Neurorobotics Lab at UC Irvine. Dr. Edelman distinguished between primary consciousness, which came first in evolution and which humans share with other conscious animals, and higher-order consciousness, which came to humans alone with the acquisition of language. A machine with only primary consciousness will probably have to come first.

What I find special about the TNGS is the Darwin series of automata created at the Neurosciences Institute by Dr. Edelman and his colleagues in the 1990s and 2000s. These machines perform in the real world, not in a restricted simulated world, and display convincing physical behavior indicative of the higher psychological functions necessary for consciousness, such as perceptual categorization, memory, and learning. They are based on realistic models of the parts of the biological brain that the theory claims subserve these functions. The extended TNGS allows for the emergence of consciousness based only on further evolutionary development of the brain areas responsible for these functions, in a parsimonious way. No other research I’ve encountered is anywhere near as convincing.

I post because on almost every video and article about the brain and consciousness that I encounter, the attitude seems to be that we still know next to nothing about how the brain and consciousness work; that there’s lots of data but no unifying theory. I believe the extended TNGS is that theory. My motivation is to keep that theory in front of the public. And obviously, I consider it the route to a truly conscious machine, primary and higher-order.

My advice to people who want to create a conscious machine is to seriously ground themselves in the extended TNGS and the Darwin automata first, and proceed from there, possibly by applying to Jeff Krichmar’s lab at UC Irvine. Dr. Edelman’s roadmap to a conscious machine is at https://arxiv.org/abs/2105.10461, and here is a video of Jeff Krichmar talking about some of the Darwin automata: https://www.youtube.com/watch?v=J7Uh9phc1Ow
