
Consequences of making AI central to legal decision-making ‘terrifying’, says human rights commissioner

Public acceptance of court outcomes depends on humans deciding them, Attorney General says

A policy decision must be made to keep the human at the centre of legal decision making, the Council of Europe’s Commissioner for Human Rights, Michael O’Flaherty, said.

The consequences of putting artificial intelligence (AI) instead of humans at the centre of judicial and legal decision-making are “frankly terrifying”, the Council of Europe’s Commissioner for Human Rights told a top-level law conference in Dublin.

A policy decision must be made to keep the human at the centre of legal decision making, the commissioner, Michael O’Flaherty, said.

Ireland’s Attorney General, Rossa Fanning, took a similar view, saying: “We don’t want to dehumanise the justice system just because we have clever computers.”

While AI can assist dispute resolution, particularly in compilation of data and discovery of documents, an “inherent part” of judicial decision-making requires there be a human being at the end of it for the outcome to be accepted by the public and by the parties to the litigation, he said.


It is possible to feed thousands of sentencing decisions for different types of crime into a computer and for AI, in quite a sophisticated way, to evaluate where on the sentencing scale a particular sentence might fall, he said.

To look at AI in that way “completely overlooks what human beings would expect from the process”, Mr Fanning said. “They will expect a person to listen to their story and engage with the evidence, and they will expect a decision given on the basis of the humane judgment and the exercise of discretion of an experienced judicial decision-maker. Something very essential would be lost without that.”

Marko Bošnjak, president of the European Court of Human Rights (ECHR), said AI posed challenges for human rights.

While the European Convention on Human Rights contains no provisions directly addressing AI, many other challenges had also not been envisaged when it was drafted, he said.

“The convention is a living instrument, we will find ways and means to address this.”

A recent decision of the ECHR concerning the use of facial recognition technology and AI to track down a Russian activist showed the need for proper state regulation of AI to guard against arbitrary interference with rights, he said.

All three men stressed that AI has positive as well as negative potential during a panel discussion on AI and the legal process last Thursday at the European Law Institute’s (ELI) annual conference in Dublin.

The ELI has almost 1,700 individual members from the judiciary, legal professions, and academia across Europe, and some 150 institutional members, including European Union institutions and supreme courts.

Attended by about 400 delegates, the two-day conference discussed issues including the impact of digitalisation on law and society and AI regulation and ethics.

Commissioner O’Flaherty said the EU AI Act 2024 – the world’s first legal framework on AI – is designed to regulate the market and is “good as far as it goes”, but gaps need to be addressed, including in relation to the private, security and defence sectors.

Technology companies, he said, continue to avoid responsibility for the promotion and publication of some “horrific” content on their platforms.

He wished to stress the positive potential of AI but also to point out that the risk to human rights is “very real” and wide-ranging.

The primary driver of AI is not better outcomes and quality but rather efficiency and speed, and that context “must cause us to pause”, he said. AI is only as good as the quality of the data it uses; time and again it has been found to be driven by wrong data, leading to wrong results. There was too little focus on the mistakes it makes, and his concern is that AI should be “trustworthy”.

He expected the ECHR would have to make determinations about the effectiveness and appropriateness of AI regulation by European states. The state of regulation across different countries is in a “sort of chaos” and there is a need for consistency, he added.

Mary Carolan

Mary Carolan is the Legal Affairs Correspondent of the Irish Times