Facial recognition technology is not advanced enough to be safely used for criminal investigations by the Garda Síochána, an artificial intelligence expert has warned.
Michael Madden, a professor of computer science at NUI Galway, flagged legal and ethical risks for the State if Minister for Justice Helen McEntee presses ahead with her proposals to allow its use by gardaí as early as the end of the year.
“I’m just not sure the technology is sufficiently mature enough yet for widespread deployment,” he told The Irish Times. “It hasn’t been validated and tested enough. The technology is not as mature as the vendors would claim. That is just one concern.”
Prof Madden said none of the current technology is “100 per cent accurate”, adding: “That is a real issue here when very serious decisions could be made to consider someone as a (crime) suspect.”
The academic pointed to a 2018 test of Amazon's facial recognition software by the American Civil Liberties Union (ACLU), in which it falsely identified 28 members of the US Congress as people who had been arrested for crimes.
Amazon had been selling its controversial Rekognition software to police forces in the US. But along with other industry big-hitters Microsoft and IBM, it suspended sales for law enforcement purposes in 2020 after a backlash over police brutality and the rise of the Black Lives Matter movement.
“If companies in the US are rowing back from offering facial recognition services to police forces then we need to reflect on that. We need to be concerned,” said Prof Madden. “The US is, in general, more open to surveillance and the use of public data than Europe is, so if US companies are opting out it is certainly a strong signal to us in Ireland to be cautious.”
Prof Madden said companies behind the technology like to claim a high degree of accuracy but “demonstrate this by testing on a very simple set of data”.
“When it is deployed in the real world it is never as accurate as it would have been expected,” he added. “There are cases in the US where these problems have been demonstrated. Users really think just because a computer says something is true, then it must be true.”
According to Prof Madden, the State and Garda could also face legal and ethical challenges in using images often collected from public sources to develop databases.
Where databases are heavily biased towards images of white men - which they have been - the system is better at distinguishing white men from each other, but struggles “to distinguish people who don’t fit that profile... building in biases which are racist”, he added.
Private companies providing the service are also coming under increasing scrutiny for data protection and privacy breaches, he said. He pointed to market leader Clearview AI, a US-based company hit with a £7.5 million fine by the UK’s Information Commissioner’s Office this week for scraping 20 billion images from Facebook and other social media platforms.
The Metropolitan Police and the UK’s National Crime Agency are among the company’s previous clients.
“Companies argue people putting their images on social media is fair game, but it is not really,” said Prof Madden. “People post these images for a purpose, and companies are using it for a completely different purpose for which people have not given their consent.”
Kevin Winters, who heads human rights law firm KRW, highlighted a report by the University of Essex into the facial recognition system used by the Metropolitan Police in London “which found an incredible 80 per cent incorrect identification rate”.
“So I have to ask, has this report and other empirical research from elsewhere in the world been taken into account by the Garda and the Minister for Justice before trying to introduce these changes?” said Mr Winters. “In terms of policing and fighting crime this is a step backwards. We aren’t yet at the point where technology can replace traditional human recognition as identification evidence.”
Resources would be better spent on protecting and supporting witnesses in the criminal justice system, according to Mr Winters.
“This is a worrying development,” he added. “I despair about where this will go. It could be very costly in terms of droves of citizens agitating on the back of wrongful recognition because of these changes.”
Another human rights lawyer in Ireland, who asked not to be identified, warned that Ireland’s “very poor” record in “maintaining the integrity of personal records” is a further cause for concern.
“I think this is open to exploitation, I don’t think the safeguards will be robust enough,” the lawyer said. “Let’s say I’m at a protest about something I feel strongly about - say the National Maternity Hospital or climate change or the war in Ukraine, and something kicks off. The next thing is I’m identified from facial recognition technology and I’m being investigated. There is a whole consent issue here too around the use of an image.
“If I’m one of 10,000 protesters, then I’m just a face in a crowd, part of a group image, not an individual image. When that is being used then as an individual image, then there are privacy rights, personal data rights.”
The lawyer questioned why facial recognition technology is a priority for a Garda force struggling to operate at capacity, and at a time when there is a “big wave of prosecutions coming before the courts” for money laundering and electronic crime.
“There are no faces in those crimes. What good is facial recognition technology in prosecuting those? No use whatsoever. Is this really a good use of resources at this time? Is it just another big project that is going to sit on the shelf?”