AI’s decisions should be explained to people affected, Oireachtas told

Call for more worker representation on Govtech, the board responsible for regulating the safe use of AI in the public sector

Professor Greg O'Hare of Trinity College Dublin told the Oireachtas enterprise committee on artificial intelligence there should be 'appropriate considered engagement with all of the stakeholders involved'. Photograph: Oireachtas TV/PA Wire

Artificial intelligence’s decision-making must be explained to the people impacted as its use increases, an Oireachtas committee has heard.

Speaking before the Oireachtas Joint Committee on Enterprise, Trade and Employment on the issue of AI in the workplace, Professor Greg O’Hare of Trinity College Dublin said that the legislation relevant to AI needs to “support explainable AI”.

Professor O’Hare was responding to a question from Sinn Féin TD Louise O’Reilly on the issue of transparency in the use of AI in the workplace. Ms O’Reilly warned that workers may find it “very hard to understand” decisions made by AI that impact their employment.

“If you were to show me an algorithm, it wouldn’t mean anything to me,” said Ms O’Reilly. “If you were to tell me that the algorithm was the reason why I, as a delivery rider, didn’t get any shifts last week and don’t have any money to pay my rent, I would find that very hard to understand.

“It’s not just about publishing the algorithm, there has to be a deeper understanding for workers to get to grips with. How can we do that?”

In response, Professor O’Hare said the question was “profoundly difficult” to answer.

“Even were it to be the case that someone like myself, a professor of artificial intelligence, were to look at a picture of an AI application that was using deep learning, I would have great difficulty in being able to establish on the surface how it arrived at its deduction and its recommendation or its conclusion.

“Whenever we talk about transparency, we really do mean things like audit-ability and explainable AI. It is incumbent that such systems, and the legislation is mandating that such systems, are able to support explainable AI.

“Some of these systems are huge in their extent, their complexity is enormous, and I would counsel people that of course it is crucial that we have appropriate considered engagement with all of the stakeholders involved. Such engagement ought not be rushed, and necessarily takes quite some considerable time, but the velocity at which the uptake and the deployment of AI systems is being deployed does not afford us with that level of time.”

Earlier in the meeting, those appearing before the committee agreed that Govtech, the board responsible for regulating the safe use of AI in the public sector, ought to see more worker representation on its board.

“The Govtech board is made up of representatives of the Enterprise Advisory Forum, which is made up of big IT companies,” said Ms O’Reilly. “There’s no representatives that I’m aware of representing workers, representing human rights, putting a different perspective on this. It seems to me there’s no balance. Should we ensure that the Govtech board is stronger or more fit for purpose?”

In response, Ronan Lupton of the Bar Council of Ireland said that the issue of representation ought to be reassessed.

“Taking your question head-on in relation to the board and functionality, you’re always going to have stakeholders across the spectrum, and if the observation is that the spectrum isn’t covered properly, then the answer to your question is that needs to be looked at by the minister again,” said Mr Lupton.

“It might be a recommendation from this deliberation that occurs. I think the answer to the question is, ‘It needs to be looked at’.”

Nathan Johns

Nathan Johns is an Irish Times journalist