Special Reports
A special report is content that is edited and produced by the special reports unit within The Irish Times Content Studio. It is supported by advertisers who may contribute to the report but do not have editorial control.

Shining a light into the black box

Algorithmic or ‘coder’ bias can give rise to prejudiced AI-informed decisions and, thus, to unfairness


AI systems used for credit scoring, insurance pricing and job candidate selection have been criticised as too clinical, but they may also carry biases picked up from the humans who train them.

AI software selects suitable candidates by utilising machine learning algorithms to analyse extensive data sets of applicant information. These algorithms identify patterns and correlations between applicant qualifications and job requirements, enabling the software to shortlist candidates who best match the criteria.
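
As a purely illustrative sketch of the pattern-matching involved (not any vendor’s actual system), the Python snippet below, which assumes scikit-learn and uses invented job and CV text, ranks CVs by their textual similarity to a job description.

```python
# A minimal, purely illustrative sketch (assuming scikit-learn is installed)
# of ranking CVs against a job description; the text below is invented and
# real screening systems are far more elaborate.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

job_description = "Data analyst: SQL, Python, dashboards, statistics, reporting"
cvs = {
    "candidate_a": "Five years of SQL and Python; built reporting dashboards",
    "candidate_b": "Retail manager; rostering, stock control, customer service",
}

# Represent the job spec and each CV as TF-IDF vectors, then rank the CVs
# by their cosine similarity to the job spec.
vectorizer = TfidfVectorizer(stop_words="english")
matrix = vectorizer.fit_transform([job_description] + list(cvs.values()))
scores = cosine_similarity(matrix[0:1], matrix[1:]).ravel()

for name, score in sorted(zip(cvs, scores), key=lambda pair: -pair[1]):
    print(f"{name}: similarity {score:.2f}")
```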

“To improve accuracy, the software should be trained on diverse data sets and regularly updated to adapt to changing job market needs. Prioritising job-related skills and experiences over irrelevant factors ensures the selection of candidates who are most suited for the role. Given the higher risk associated with AI per the AI Act, measures such as transparency, regular audits, bias mitigation, and human oversight are essential,” says Onatkut Varis, director, Technology Risk Advisory, Deloitte.

In today’s fast-paced world, talent teams are often inundated with a staggering number of CVs, making it humanly impossible to review each one thoroughly.


“We need to look at AI, including Generative AI, as a supporting tool to HR teams – often just a single person – to analyse, summarise, and extract key information from CVs. Crucially however, the focus of AI should never be to select relevant candidates but to help talent teams and hiring managers in the shortlisting process. At the end of the process, it is always the human-in-the-loop that makes the final decision,” says Aarthi Kumar, director, data and AI at EY Ireland.

Jobs are deconstructed into component skills, and large language models (LLMs) can quickly pre-screen CVs to identify relevant or transferable skills for specific roles, reducing the time and cost to recruit. Furthermore, AI can analyse body language, tone, and facial expressions in video interviews, with emerging start-ups even promoting AI interviewers.

“However, there are serious considerations when using AI, such as the potential for hidden biases and its handling of nuanced or complex roles. Additionally, candidates are likely to become increasingly adept at using AI to optimise their CVs to appear relevant to AI screening tools. The entire recruitment process will transform, so recruiters maximise the benefits of AI for themselves while neutralising the distortion of AI on the candidate side,” says Tania Kuklina, management consulting director, KPMG in Ireland.

Professor Cal Muckley of UCD points out that while it is correct to acknowledge the potential ‘emotional’ and ‘prejudicial’ shortcomings of AI in recruitment, the HR experts in this process can also make decisions which are less than optimal: unfair and irrational.

Algorithmic or ‘coder’ bias can give rise to prejudiced AI-informed decisions and, thus, to unfairness. This bias can be built into the machine learning model itself, eg by including sensitive trait predictors such as ethnicity or, more likely, a variable correlated with ethnicity such as postcode or job title. It can also be inherited from the data set, eg historical loan officer credit decisions which may have been prejudiced, whether deliberately or inadvertently.
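
A toy sketch, with entirely invented data, of how this proxy problem plays out: even when the sensitive trait is excluded from a model’s inputs, a correlated variable such as postcode can reproduce the same prejudiced pattern.

```python
# A toy demonstration (invented data, assuming pandas) that removing a
# sensitive trait from a model's inputs is not enough when a proxy remains.
import pandas as pd

# Historical decisions: the model never sees 'ethnicity', but in this
# invented data set 'postcode' is almost perfectly correlated with it.
df = pd.DataFrame({
    "postcode":  ["D01", "D01", "D01", "D02", "D02", "D02"],
    "ethnicity": ["A",   "A",   "A",   "B",   "B",   "B"],
    "approved":  [1,     1,     1,     1,     0,     0],
})

# Approval rates grouped by the proxy mirror the rates grouped by the
# sensitive trait, so a model trained on postcode inherits the prejudice.
print(df.groupby("postcode")["approved"].mean())
print(df.groupby("ethnicity")["approved"].mean())
```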

“How to keep it out? Ethics training for the financial data science team is important, especially regarding data set preparation and maintenance; constraints in the optimisation process of the machine learning models and post-estimation tests for equity can also be implemented,” suggests Prof Muckley. “Machine learning has the potential to play a valuable role in the selection of top leadership in the future.”

Looking then at decisions in insurance or loans, it is recognised that many forms of bias can exist, including bias in those responsible for building the models.

Jean Rea, consulting partner at KPMG in Ireland, suggests a multifaceted approach will help to manage this risk. This could include implementing AI frameworks that incorporate ethical and bias considerations, and raising awareness through culture and training programmes.

“For use cases that have significant impact on the firm or consumers, explainable algorithms could be utilised where the importance of each model feature would be explained and rationalised. This could be supplemented by independent review and challenge of the modelling by second-line model risk or model validation teams. Model performance should be monitored, including the development of fairness and non-discrimination metrics. Finally, diverse teams used in building and validating models can help to identify and challenge biases,” she says.
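
As a hedged illustration of what one such fairness metric could look like, the sketch below computes a demographic parity difference, ie the gap in approval rates between two groups, on invented data; real monitoring would track several such metrics over time.

```python
# A hedged sketch of one fairness metric, demographic parity difference:
# the gap in approval rates between two groups. Data here is invented.
import numpy as np

approved = np.array([1, 0, 1, 1, 0, 1, 0, 0])          # model decisions
group    = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])

rate_a = approved[group == "A"].mean()
rate_b = approved[group == "B"].mean()

# A gap near zero means similar approval rates across groups; a large gap
# would be flagged for review by a model validation team.
print(f"demographic parity difference: {abs(rate_a - rate_b):.2f}")
```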

To ensure transparency and accountability in AI, developers should adopt frameworks incorporating explainable AI techniques. These methods enable users to understand decision-making processes. It’s vital to establish clear ethical frameworks and integrate human oversight. Detailed documentation of AI design, training data, and decision-making processes must be maintained and accessible for examination.
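
One widely used explainability technique of this kind is permutation importance, sketched below on synthetic data (assuming scikit-learn): it estimates how much a model relies on each input by measuring how far accuracy falls when that input is shuffled.

```python
# A sketch of permutation importance (assuming scikit-learn), one common
# explainability technique; the data set is synthetic, for illustration.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

X, y = make_classification(n_samples=200, n_features=4, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

# Shuffle each feature in turn and measure the drop in accuracy: the
# features whose shuffling hurts most are the ones the model relies on.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, importance in enumerate(result.importances_mean):
    print(f"feature_{i}: importance {importance:.3f}")
```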

“Regular audits, third-party evaluations, and bias detection algorithms enhance accountability. Clear ethical guidelines and regulatory frameworks are essential to uphold high standards, fostering trust and reliability in AI’s impact on critical decisions,” says Varis.

Kumar agrees: “At EY, we understand the importance of a human-centric approach to AI. We bring a multidisciplinary and diverse view to help our clients design accountable and fair AI models. It is about finding the right balance between leveraging technological efficiency and maintaining a human, empathetic touch in decision-making. AI is here to support, enhance and be a co-pilot for human decisions.”

“Indeed, depending on the use case, it may not be appropriate to use AI as, even with overlaying governance measures, the residual risk may still sit outside risk appetite,” says Rea.

Prof Muckley sees further benefits in the introduction of AI. “We show that credit decisions (eg loan decline for LGBTQ people) by fintech, relative to those by traditional loan officers, are comparatively less prejudiced.

“It would appear that not having a loan officer ‘eyeballing’ an applicant can reduce the extent of unlawful discrimination in loan decisions,” he says.

Jillian Godsil

Jillian Godsil is a contributor to The Irish Times