Special Reports
A special report is content that is edited and produced by the special reports unit within The Irish Times Content Studio. It is supported by advertisers who may contribute to the report but do not have editorial control.

‘Cybercriminals are leveraging AI to craft highly realistic scams’

AI tools allow hackers to create incredibly sophisticated fakery

CommSec founder David McNamara: 'Cyber security is now the biggest threat to business continuity'

Generative artificial intelligence (GenAI) is arming cybercriminals with tools once reserved for elite hackers, transforming both the scale and the sophistication of attacks. Fake voices, cloned video and hyperrealistic emails now slip past traditional defences, enabling even low-skill attackers to mount convincing scams and deploy adaptive malware.

Shomo Das, head of cyber strategy and architecture, PwC Ireland

“Cybercriminals are leveraging AI to craft highly realistic scams, including convincing emails, fake voices, videos and online profiles,” says Shomo Das, head of cyber strategy and architecture at PwC Ireland. This machine-crafted deception extends to romance fraud, deepfake extortion and fabricated news campaigns designed to manipulate opinion at speed.

He argues that defence demands a cultural as well as a technical shift. Organisations must model new AI-driven threats, harden architecture to detect and adapt, train staff against evolving scams, enforce strict access controls and scrutinise partner security. In a threat landscape reshaped by AI, vigilance is no longer optional; it is collective survival.

The regulatory environment is also tightening. From NIS2 to the AI Act, compliance now aims to shield critical infrastructure such as water, power, food and logistics from both state-sponsored and independent attacks. “Cybersecurity is now the biggest threat to business continuity,” says David McNamara, founder of CommSec.

While larger firms often have the resources to adapt, smaller businesses, mistakenly believing they are too insignificant to be targeted, risk becoming weak links in the chain. Sanctions under NIS2 can reach up to €10 million or 2 per cent of global revenue, alongside GDPR penalties. Avoiding them requires more than box-ticking: regulators expect genuine progress, robust defences and embedded governance.

For those unprepared, compliance can be burdensome. But for those who embed cybersecurity into daily operations, it becomes routine. McNamara warns that doing nothing is an open invitation to be made an example of, and that enforcement will target not just the largest entities but also well-resourced SMEs to send a message across the market.

Subhalakshmi Ganapathy, chief IT security evangelist, ManageEngine

Generative AI’s impact is not limited to phishing emails and social engineering. It is now producing deepfake video and voice, crafting personalised attacks at scale and automating the exploitation of zero-day vulnerabilities. “Generative AI allows attackers to lower their barrier to entry while simultaneously raising the bar in terms of attack complexity and realism,” says Subhalakshmi Ganapathy, chief IT security evangelist at ManageEngine.

Her prescription is precise: AI-powered detection tools, zero trust architecture, advanced phishing protection, unified security operations and rigorous third-party monitoring. In such an agile threat environment, cybersecurity must be a boardroom imperative, not a technical afterthought. She stresses that the sophistication of attacks demands the same sophistication in defence, a combination of technology, governance and sustained training.

Dani Michaux, EMA cyber lead, KPMG Ireland

The criminal economy itself is evolving. “Resilience is key within the mindset of dealing with new and unexpected situations,” says Dani Michaux, EMA cyber leader at KPMG in Ireland. She notes that generative AI is now embedded across the cybercrime life cycle, from persuasive phishing to deepfake authorisation of fraudulent payments. Agentic AI is automating and accelerating every stage of intrusion, while AI-enabled tools for deepfake creation and malware are being sold or rented on dark markets, enhancing the “cybercrime as a service” model.

Defence in this new economy means embedding trust in AI, harnessing AI for detection and response, securing digital identities, protecting IoT systems and designing for recovery. Enterprises must map AI use across the business, assess AI-specific risks, embed governance and invest in training. Humans, Michaux stresses, remain the cornerstone of resilience, even in an AI-driven threat landscape.

Yevheniia Broshevan, chief executive and co-founder, Hacken

Yevheniia Broshevan, co-founder and chief executive of Hacken, is blunt in her assessment: “Generative AI has changed the game for cybercriminals. Hackers no longer need to be skilled coders. With tools like WormGPT or FraudGPT, they can launch complex attacks at speed and scale we have never seen before.”

She outlines how criminals are weaponising AI: hyper-personalised phishing, smishing and vishing campaigns that scrape open-source intelligence to mimic individuals or organisations almost perfectly; the automation of phishing kit creation, malicious payload drafting and credential-stealing code; and even registering domains without human intervention.

More advanced threats are emerging, such as prompt injection and polymorphic malware that can alter itself to avoid detection. Some attackers fine-tune their own models, such as “ThreatGPT” or “Occupy AI”, to automate reconnaissance and run full attack chains. Deepfakes add another potent layer, enabling executive impersonation and voice phishing that can trigger fund transfers or data disclosure in a single convincing call.

Her defence strategy starts with AI-augmented threat detection, using machine learning to identify anomalies in email, login attempts and user behaviour, and extends to human readiness. She advocates simulated phishing campaigns, deepfake voice call drills and prompt-injection attack exercises so staff can recognise and respond to new tactics. Technical defences must be layered: strong data hygiene, sandboxing, model monitoring, output validation, AI-driven red-teaming, zero trust, multifactor authentication and network segmentation.

Broshevan’s conclusion is unequivocal: “AI has supercharged cybercrime, but organisations that innovate and prepare can still stay one step ahead.”

Jillian Godsil

Jillian Godsil is a contributor to The Irish Times