
Roadmap for AI in health is welcome but five years is an eternity

Many doctors admit privately that, when pushed for time, they will use their phones to consult a publicly available AI tool

The Irish Times – Letters to the Editor. Illustration: Paul Scott

Sir, – The recently launched “AI for Care: The Artificial Intelligence Strategy for Healthcare in Ireland 2026-2030” is a welcome, if extremely cautious, roadmap for our health service. However, as someone working at the coalface of technology, I believe the strategy overlooks a critical behavioural risk: the “I get better at home” problem.

The nature of AI is uniquely democratised. Unlike previous generations of enterprise software that required massive infrastructure, the world’s most powerful frontier models are now available to anyone with a smartphone and a €15 a month subscription. This creates a dangerous capability gap between the tools a doctor or nurse uses in their personal life and the official, governed systems they encounter at work.

I have already seen this tension play out. Several hospitals have recently issued stern reminders to staff that the use of personal AI instances is strictly forbidden, yet those same institutions have been slow to provide any functional, safe alternative.

Many doctors admit privately that, when pushed for time in a clinical environment, they will simply pull out their phones to consult a publicly available AI tool. This isn’t a lack of professionalism; it is a pragmatic response to an institutional information bottleneck.

The “AI for Care” strategy emphasises a phased, five-year roll-out that is only just commencing. In the world of AI, five years is an eternity. If the HSE’s official tools are cumbersome, overly restrictive, or lagging generations behind consumer tech, we will not see “patiently awaited adoption”.

Instead, we will see medical staff bypassing official channels to seek quick answers from personal instances of ChatGPT, Gemini, or Claude.

This is not merely an efficiency issue; it is a significant security and data privacy risk. If the public service does not dramatically accelerate its delivery of easy-to-use, high-performance AI tools, it will inadvertently encourage a “shadow AI” culture in which sensitive medical context could be leaked into unsecured, personal consumer models.

To ensure public trust and data security, as the strategy intends, the HSE and the broader public service must move at a speed that acknowledges the reality of the consumer market.

If the workplace version is inferior to the one in a clinician’s pocket, the “human in the loop” will simply find a better loop. – Yours, etc,

DARA McGANN,

Drumcondra,

Dublin 3.