
Artificial justice

ANSELM ELDERGILL asks whether artificial intelligence may decide legal cases in the future, in place of human judges, and how AI could reshape the legal landscape

OUR courts are in crisis: buildings are crumbling; serious delays are endemic; legal rules are labyrinthine; few citizens of limited means now qualify for legal aid; and the judiciary in dress and thought is old-fashioned and hierarchical, its often ill-judged attempts at self-reform and modernity distorted by long-entrenched class privilege.

Is artificial intelligence part of the answer to these and other pressing legal problems or part of a dystopian future?

In Samuel Butler’s 19th-century novel Erewhon, the citizens of his imaginary country were concerned that evolution in humans is gradual but in machines rapid, concluding that machines would soon surpass and supplant them.

In a Darwinian world of survival of the fittest, machines would become as superior to animals as animals were to vegetable life. The Erewhonians’ solution was to make “a clean sweep of all machinery that had not been in use for more than 271 years (which period was arrived at after a series of compromises), and strictly forbade all further improvements and inventions.”

This aspect of the novel reflected widespread Victorian anxiety about the effects of industrial progress on human well-being, expressed in an extreme but understandable form by the Luddite destruction of machinery.

A similar anxiety is brewing with the arrival of artificial intelligence (AI), the fourth industrial revolution.

What then is artificial intelligence, what are its uses and limitations, and how does it differ from digitalisation and artificial general intelligence (AGI)? How will it affect our courts and legal system?

Digitalisation is the process of converting information like documents, images or audio into digital formats. It is already widely used in legal settings, for remote court hearings, video conferences, the electronic exchange of documents, digital databases of judicial decisions and other purposes.

Used sensibly, it can provide a faster, cheaper and/or better service to court users and clients in a way that is consistent with their legal rights and the rule of law.

The AI with which most of us are familiar is sometimes called “narrow AI,” and its best-known recent form is “generative AI.” It is a set of technologies that enable computers to perform tasks that typically require human intelligence.

Its uses include text generation, translation, summarising documents, self-driving cars, virtual assistants such as Siri and Alexa, chatbots, financial analysis and medical diagnosis.

The take-up of AI has been rapid and the results sometimes stunning. Some 400 million people a week now use ChatGPT. In 2024, the Nobel Prize for Chemistry was awarded to three scientists who used AI to predict the structure of millions of proteins and to design new ones.

In the legal field, automated tools are available for drafting documents, calculating damages in personal injury cases and assessing maintenance payments for dependants.

Law firms and insurance companies may use AI to predict the outcome of a legal dispute. According to Felix Zimmerman, a partner at City law firm Simmons & Simmons, it is “brilliant” at summarising documents, producing them in about three seconds, which is “undeniably life-changing for practitioners.”

Microsoft Copilot can take “coherent, structured, real-time” notes of meetings.

There are, however, significant limitations to what AI can do. According to Zimmerman, when he asked a question about valuer negligence, the summary provided was “specious, superficial and in some key respects wrong.”

Consistent with this, AI testing by the City firm Linklaters found that the responses were still not always right and lacked nuance. Linklaters concluded that AI tools should not be used for legal advice without expert human supervision.

There have been some well-publicised disasters where lawyers ignored this counsel. In a New York case in 2023, a plaintiff’s attorney cited six cases produced by ChatGPT which the judge found not to be real.

When the chastened lawyer later confronted ChatGPT, asking if the bogus cases were indeed real, it responded that they were and “can be found in reputable legal databases such as LexisNexis and Westlaw.”

It is not surprising that generative AI has difficulty providing accurate, reliable legal advice. Its limitations have been neatly summarised by Bernard Marr, a leading expert in the field: it is essentially a highly skilled parrot, capable of mimicking complex patterns.

However, like a parrot, it does not truly “understand” the content it creates. It operates by digesting large datasets and predicting what comes next, whether that is the next word in a sentence or the next stroke in a digital painting.

When it writes a poem about love, it does not draw on any deep, emotional reservoirs; instead, it relies on a vast database of words and phrases typically associated with love in human writing.
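To make the parrot analogy concrete, here is a minimal sketch, in Python, of that predict-what-comes-next mechanism. It is a toy bigram model, nothing like a real large language model in scale or sophistication, and the corpus and function names are invented for illustration only.

```python
import random
from collections import Counter, defaultdict

# A toy stand-in for the "vast database" of human writing about love.
corpus = (
    "love is patient love is kind love bears all things "
    "love hopes all things love endures all things"
).split()

# Record which word follows which. The model "understands" nothing;
# it only counts statistical patterns in the text it has seen.
follows = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current][nxt] += 1

def next_word(word: str) -> str:
    """Pick a continuation in proportion to how often it was observed."""
    words, weights = zip(*follows[word].items())
    return random.choices(words, weights=weights)[0]

# "Write a poem about love" by repeatedly predicting the next word,
# which is the mechanism described above, only at toy scale.
word = "love"
output = [word]
for _ in range(10):
    word = next_word(word)
    output.append(word)
print(" ".join(output))
```

The output can look fluent, yet at no point does the program draw on any emotional reservoir; scaled up by many orders of magnitude, that is the gist of Marr’s point.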

The holy grail for developers is artificial general intelligence (AGI). This refers to a stage of AI development where machines have the versatility and adaptability of human intelligence. They are able to understand and respond to complex human requests, adapt to new situations and learn from experiences.

If you ask an AI engine what AGI is, it replies that, “AGI is expected to have capabilities similar to human intelligence, including reasoning, problem-solving, perception, learning, language comprehension, and cognitive and emotional abilities.”

AGI is a concept, not yet a reality. Some researchers believe it is decades, perhaps a century or more, away, or may never be achieved at all. Other well-informed experts believe it is between three and 10 years away.

Indeed, Kevin Roose, a technology columnist at the New York Times, who denies being a guy who took too many magic mushrooms and watched Terminator 2, thinks that very soon — probably in 2026 or 2027, but possibly as soon as this year — one or more AI companies will claim to have created an artificial general intelligence.

He posits that when this is announced, there will be debates over definitions and arguments about whether it counts as “real” AGI. However, these mostly won’t matter, because the broader point — that we are losing our monopoly on human-level intelligence and transitioning to a world with very powerful AI systems in it — will be true. The Great Wave is coming.

Predictions about technological developments are, of course, often inaccurate. In 1943, Thomas Watson, the president of IBM, is famously said to have predicted that there would be “a world market for maybe five computers.”

Nevertheless, AI is proceeding at a phenomenal pace. Few people 25 years ago would have foreseen the advances of the last quarter-century. It would therefore not be surprising if by the 2030s we are edging towards AGI systems that can match or outperform humans at many cognitive tasks which today are the exclusive territory of the human brain.

A common prediction is that lawyers need to prepare for a future in which 80 to 90 per cent of their traditional work is fully automated.

Where does that leave the law and society? There are many aspects to consider, and many more that we, or at least I, cannot yet imagine, nor therefore consider. Much or most of what I write here will no doubt turn out to be wrong.

Sophisticated digital and AI systems carry a considerable risk to individual freedoms in the areas of privacy and confidentiality, the processing of personal data, police and private surveillance, and the collection and use of information about us by powerful online corporations.

AI software is already used in automated debt collection to generate enforcement notices and/or documents that can be taken to court for issuing. It is probably only a matter of time before civil courts use AI to generate summonses based on AI-generated claim forms for common actions such as debt collection, bankruptcy or housing possession proceedings.

In the criminal sphere, speed-camera systems already detect speeding vehicles, capture evidence and automatically generate and post penalty notices.

The disadvantage of many automated systems is that they remove human involvement, judgement, compassion and discretion. The person who has a serious illness or a learning disability or has been made redundant receives the same inhuman treatment, without regard for their mitigating circumstances.

If the case is contested, the client company or public authority using the machine cannot give their lawyer instructions about why it acted as it did, because the decision to commence proceedings was that of an algorithm.

There is then the question of who should be liable for decisions made by computers that cause someone harm. With generative AI it could be the software manufacturer in the case of accidents caused by driverless cars, an AI-generated wrongful diagnosis or the use of AI to refuse a person a welfare benefit to which they are entitled. There could also be an action for negligence against a professional who relied on AI without adequately scrutinising its response for accuracy.

New legal principles and rules, such as a presumption of causality, will be needed to deal fairly with many of these situations.

Algorithmic tools are already used to assist judges in ways that are controversial. In the US, the COMPAS algorithm is used by judges and parole officers to inform decisions about bail, sentencing, parole and probation, analysing factors such as criminal history, age and employment history in order to predict an individual’s risk of recidivism (reoffending). An analysis by ProPublica in 2016 suggested that it was more likely to falsely classify black defendants as high-risk than white defendants.
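COMPAS itself is proprietary, so its internals are not public. Purely as a hypothetical illustration of the general shape of such actuarial tools (weighted factors in, a single “risk score” out), here is a Python sketch; every factor and weight in it is invented.

```python
# A deliberately simplified, hypothetical risk scorer. It is NOT the
# real COMPAS algorithm, whose workings are not public; all factors
# and weights below are invented for illustration.

def recidivism_risk_score(prior_convictions: int,
                          age: int,
                          months_employed: int) -> float:
    """Return a score between 0 and 1; higher is treated as riskier."""
    score = 0.0
    score += 0.08 * prior_convictions    # invented weight
    score += 0.02 * max(0, 30 - age)     # invented: youth raises the score
    score -= 0.01 * months_employed      # invented: employment lowers it
    return min(max(score, 0.0), 1.0)     # clamp to the range [0, 1]

# Example: a 22-year-old with two prior convictions and six months'
# recent employment. Nothing in this arithmetic is "scientific":
# bias can enter through the choice of factors, the weights, or the
# historical data used to set them.
print(recidivism_risk_score(prior_convictions=2, age=22, months_employed=6))
```

The ProPublica finding fits this picture: if the historical data or the chosen factors encode existing discrimination, the score reproduces it behind a veneer of arithmetic.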

Algorithms can have an impressive veneer of science and neutrality that is unjustified. This is a concern because of what is called “the sheep effect” (effet moutonnier), where a judge follows the algorithm’s advice without further autonomous assessment of the peculiarities of the case and the applicable law.

This is a particular issue in judicial review cases where historically judges show great deference to the decisions of elected public authorities, and may in future show even greater deference when complicated automated decision-making (ADM) systems are used by them.

Understandability and transparency are essential. ADM systems must be designed in such a way that humans can oversee their functioning and know what is in the “black box.” The algorithm and the input data must be accurate and free of bias and discrimination, and the weight given to different factors must be knowable; otherwise a dissatisfied citizen cannot understand how the decision was reached or be given adequate reasons for it.
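As a sketch of what knowing what is in the “black box” could mean in practice, assuming a simple weighted-factor system like the hypothetical scorer above, a transparent version would record each factor’s contribution so that reasons can be given. All names and weights are again invented.

```python
# A hypothetical "explainable" decision record: the same style of
# invented weights as in the sketch above, but the result comes with
# a factor-by-factor breakdown, so a dissatisfied citizen can be told
# how the decision was reached.

WEIGHTS = {
    "prior_convictions": 0.08,   # invented weight
    "years_under_30": 0.02,      # invented weight
    "months_employed": -0.01,    # invented weight
}

def explain_decision(inputs: dict) -> str:
    """Return a human-readable breakdown of a weighted-factor score."""
    lines = []
    total = 0.0
    for factor, weight in WEIGHTS.items():
        contribution = weight * inputs[factor]
        total += contribution
        lines.append(f"{factor}: {inputs[factor]} x {weight:+} = {contribution:+.2f}")
    lines.append(f"total score: {min(max(total, 0.0), 1.0):.2f}")
    return "\n".join(lines)

print(explain_decision({
    "prior_convictions": 2,
    "years_under_30": 8,
    "months_employed": 6,
}))
```

Opaque machine-learnt models make exactly this kind of breakdown difficult, which is why the “black box” concern arises.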

Some advocates of automated decision-making acknowledge the opacity of many AI applications but consider it to be a price worth paying. Indeed, some believe that the accuracy of ADM systems could outweigh the need for a hearing prior to a decision. Filippo Donati of the University of Florence notes that a lack of uniformity and predictability in human judicial decision-making undermines legal certainty, and some scholars argue that “robotic” justice can rectify this.

AGI — the holy grail — opens the door to computers deciding legal cases in place of judges and making decisions in place of civil servants on public law matters such as entitlement to benefits, social housing or residence in the UK.

In the case of true AGI, the computer is able to exercise judgement, learn from experience and make autonomous decisions. Can such a computer be negligent and how would that be proved?

What if COMPAS had been an AGI system that used its own reasoning powers and judgement to reach decisions that falsely classified black defendants as more likely to reoffend? On what basis would an AGI judge’s decision be open to appeal?

Given economic pressures, it is not surprising that the use of algorithms in the judicial field is spreading in many countries. Proponents are keen to emphasise the benefits for alternative dispute resolution procedures, particularly those involving small claims.

In March, the legal services minister Sarah Sackman KC spoke of the potential of technology to help citizens “solve their legal problems by themselves.” The well-known legal guru Richard Susskind recently wrote that AI systems “will replace conventional legal services … [and] place the law in the hands of everyone.”

Sackman also said that legal services and lawtech will be part of the government’s industrial strategy and are essential to bolstering Britain’s global competitiveness. The Master of the Rolls spoke of “unlocking the economic advantages of digital trading, digital legal services and online dispute resolution.”

The likely consequence of this kind of commercialised justice is an even wider chasm in the service provided by our judges and courts to the rich and poor and an increasing inequality of legal arms.

UK-founded lawtech companies have raised £1.7bn in total investment but most of this is directed at the lucrative “documents and contracts” market. Only 8 per cent of LawtechUK start-ups are classed as access to justice/business-to-consumer ventures.

The service available to the poor without legal aid will be dressed up as innovative, reliable and fair, placing the law in the hands of everyone. Those who cannot afford lawyers or court fees must make do with AI dispute resolution procedures (we don’t want to see you in court), while the commercial and higher courts will continue to offer a lucrative, first-rate service to those who can afford it, such as the City and well-heeled international litigants.

Pope Francis warned last year that the world’s problems cannot be solved by technology alone. Technological developments “that do not improve life for everyone, but instead create or worsen inequalities and conflicts, cannot be called true progress.”

Digitalisation, AI and foreign-controlled social media platforms such as X have already had profound consequences for human beings in terms of toxic texts, misinformation, propaganda and harmful content. The potential for widespread deception “strikes at the core of humanity, dismantling the foundational trust on which societies are built,” according to the Pope.

The US thinker Ross Douthat believes that the virtual age is making many cultural norms and ways of life seem obsolete. People are increasingly not dating, not having children, not making real-life friends and not forming communities.

He predicts that the balance of political and military power will tilt toward the nations that develop and control AI and AGI. In all likelihood, this will be the US and China.

OpenAI, which was co-founded by Elon Musk, has just closed a record-breaking $40 billion funding round, the most ever raised by a private technology company. Musk’s personal wealth stands at $386bn, Jeff Bezos’s at $212bn and Mark Zuckerberg’s at $201bn. Microsoft is valued today at $2.77 trillion. These figures dwarf the wealth of many nation states. Space exploration, once the province of governments, is now increasingly in private hands.

Unlike elected governments, these privateers do not have a democratic mandate. When developing technologies such as AI, they are driven by their corporate interest, not the public good, and in some cases, as with Musk, by a far-right political agenda.

According to Douthat, faced with this AI and AGI onslaught, “Europe as a social-democratic fortress … probably isn’t going to make it.”

We need to try to make it. As Maria Civinini has argued, it is imperative that European governments focus on risk neutralisation and alignment with human values. The recent Council of Europe Framework Convention on Artificial Intelligence and Human Rights and the EU’s AI Act are a good start.

A profoundly difficult balancing act is required between preserving European humanism on the one hand and avoiding stagnation and decline on the other. We must be mindful that the delivery of justice and public authority decision-making by AI and AGI algorithms, and by virtual courts and lawyers, will have serious social costs in terms of personal isolation, dehumanisation, mental ill-health, weakened community cohesion and a diminished ability to form and sustain human relationships.

AI brings into focus what we mean by “justice” and how important our traditional legal process is for the common good. The 21st century needs a new definition of justice, in place of the traditional definition that justice is rendering to everyone their due.

Justice is in fact a human relationship that involves human beings treating and judging each other fairly and rendering to them their due. It has a human face.

Hopefully, some of what I have written — or AI has written for me — is useful.

Anselm Eldergill was until recently a judge in the Court of Protection. He is a solicitor, an associate at Doughty Street Chambers and an honorary professor at Queen’s University Belfast and University College London. This is a monthly column that appears on the second weekend of the month.
