
Advance of AI creating a moral minefield

Artificial intelligence is now capable of making decisions without any human intervention, so how do we navigate the ethical issues raised?

Crash test dummies invented in the 1950s were based on the male physique

Welcome to the moral minefield of AI ethics. Companies and governments now possess vast amounts of data about individuals and the computing power to process it. But, up until quite recently, human intervention was required to make decisions based on the insights produced by the technology. Not any more. AI is now capable of making those decisions without any human intervention whatsoever.

That may sound like technology simply doing things more quickly than humans can, but it goes much further than that. What happens if a gambling company uses the technology to target people with a propensity to wager large sums of money, with no concern for whether those individuals run into financial difficulties as a result?

Or what happens if people find themselves denied credit because of their address or ethnicity? That may sound far-fetched, but racial profiling has long been a problem in law enforcement and other areas.

Of course, there are ways to programme and train the AI so that it has an ethical basis for its decisions. But that depends on the ethics you choose.

Cal Muckley, UCD professor of operational risk in banking and finance, explains that there is a spectrum to choose from, ranging from the Christian golden rule of treating others as you would like to be treated yourself to pure utilitarianism, where the end is used to justify the means.

“It’s a question of Kantian ethics versus the utilitarian ethics,” he explains. “Kantian ethics put humans at the centre. Humans must always be treated with dignity and not as a means to an end. With utilitarianism, if the ends justify the means, then proceed.”

Under utilitarianism, the most ethical choice is the one that will produce the greatest good for the greatest number. Unfettered, that could lead to some startlingly bad decisions and outcomes. For example, if four people whose lives depend on receiving organ transplants are in hospital, one requiring a heart and the others lungs, a kidney and a liver, what happens if a healthy person with those organs comes in to visit? Would that person have their organs harvested to save the other four? You might say that no human would ever make such a callous and amoral decision, but a machine might.

“That’s the ethical spectrum for the deployment of AI,” says Muckley. “The regulators seem to be coming down somewhere between the two. They are outlining principles and expect people to abide by them. The European Union is distinguishing itself in this regard and seems to be looking for the high moral ground. It is applying human rights to a greater extent than other regulators.”

Medb Corcoran, who leads Accenture Labs in Ireland, is also in favour of the human-centred approach. “For me, it really comes down to seeking to ensure that AI is used for the betterment of society and to make the majority of people’s lives better. And that it is not used to harm people. It all comes down to that.”

She points out that predictive models have been used in banking and other areas for 20 years or more but that the difference now is their scale and the speed of the decision-making. Reducing bias and increasing the fairness of those decisions is critically important.

“It can be done quantitatively,” she says. “We can check for bias and the representativeness of the data being used. For example, a company using AI to review CVs for recruitment doesn’t realise that nearly all the past CVs are from males and the AI has learned to repeat previous biases.”
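To make that concrete, here is a minimal sketch of the kind of quantitative check Corcoran describes, applied to hypothetical recruitment data. The data and column names are invented for illustration, and the 0.8 threshold is the common "four-fifths rule" heuristic rather than anything prescribed in the article.

```python
# A minimal, hypothetical sketch of a quantitative bias check on
# historical recruitment data: how well is each group represented,
# and do selection rates differ markedly between groups?
import pandas as pd

# Synthetic stand-in for a pile of past CVs (column names assumed)
cvs = pd.DataFrame({
    "gender":   ["M"] * 90 + ["F"] * 10,                # skewed history
    "selected": [1] * 40 + [0] * 50 + [1] * 2 + [0] * 8,
})

# Representativeness: is one group nearly absent from the data?
print(cvs["gender"].value_counts(normalize=True))        # M 0.9, F 0.1

# Selection-rate parity: the "four-fifths rule" heuristic flags a
# ratio below 0.8 as a possible sign of disparate impact
rates = cvs.groupby("gender")["selected"].mean()
print(rates)
print(f"disparate impact ratio: {rates.min() / rates.max():.2f}")
```

On this toy data the disparate impact ratio comes out well below 0.8, which is exactly the kind of signal that would prompt a closer look at how the historical CVs were gathered.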

One approach is fairness by unawareness, she explains. This screens out ‘protected variables’ such as gender and ethnicity so that the machine is blind to those characteristics. But that has its limitations.

“There are often other proxy variables in the data set,” she points out. “For example, years of military service. In Israel males do three years as opposed to females, who do two.”
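A rough sketch of how that plays out in practice, using assumed data and column names rather than anything from the article: the protected column is dropped before training, but a simple correlation scan shows a proxy still carrying the same signal.

```python
# A minimal, hypothetical sketch of "fairness by unawareness" and its
# proxy-variable weakness. Data and column names are assumptions.
import pandas as pd

applicants = pd.DataFrame({
    "gender":        ["M", "F"] * 4,
    "service_years": [3, 2] * 4,          # mirrors gender exactly
    "exam_score":    [62, 74, 58, 81, 70, 66, 75, 69],
})

# Fairness by unawareness: the model never sees the protected column
features = applicants.drop(columns=["gender"])

# Proxy scan: how strongly does each remaining feature track the
# protected attribute? A perfect correlation means the protection
# is illusory, as with years of military service above.
gender_code = applicants["gender"].map({"M": 0, "F": 1})
for col in features.columns:
    corr = features[col].corr(gender_code)
    print(f"{col}: correlation with gender = {corr:+.2f}")
```

In this sketch, service_years correlates perfectly with the dropped gender column, so "unawareness" alone offers no real protection.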

On the non-quantitative side, she believes having diverse teams creating AI systems is helpful. “For example, crash-test dummies were invented in the 1950s and were based on the male physique up until 2003. Female seat belt wearers were 43 per cent more likely to be injured in an accident. There was no malice involved, it simply didn’t occur to them.”

Explainability is important if people are to trust AI and make their data available to organisations using it, adds Muckley. “AI relies on probabilities to make decisions, but where do these probabilities come from? They are produced by opaque machine-learning algorithms. People don’t know what’s driving the decisions. We are doing a lot of work on explanatory AI. We are trying to open up the black box. We are at a fulcrum point between a utopian and dystopian society. It’s an exciting place to be and hopefully we can do something to push it in the right direction.”
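One established way of prising that black box open, offered here as an illustrative sketch rather than a description of the UCD group's methods, is permutation importance: shuffle each input in turn and watch how much the model's accuracy suffers. The data below is synthetic and the feature names are assumptions.

```python
# An illustrative sketch of one explainability technique, permutation
# importance: shuffle each feature and measure how much the model's
# score drops. Synthetic data; feature names are assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
income = rng.normal(50_000, 15_000, 500)
age = rng.integers(18, 70, 500).astype(float)
X = np.column_stack([income, age])
# Decisions in this toy world are driven almost entirely by income
y = (income + rng.normal(0, 5_000, 500) > 55_000).astype(int)

model = RandomForestClassifier(random_state=0).fit(X, y)
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, imp in zip(["income", "age"], result.importances_mean):
    print(f"{name}: mean importance = {imp:.3f}")   # income dominates
```

Techniques like this do not fully explain a model, but they at least show which inputs are driving its decisions, which is the first step towards the trust Muckley describes.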

If this all sounds very tricky, things are only going to get more complicated and head-wrecking. Psychologists claim that emotions are critically important for ethical decision-making, and there are already claims that Google’s LaMDA AI technology is displaying signs of sentience and emotional capacity. When does AI cease to be a machine and what will that mean for our efforts to guide it ethically? Over to you, Mary Shelley.

Barry McCall

Barry McCall is a contributor to The Irish Times