“Hello, my name is Dr ChatGPT. How can I help you today?”
Imagine a medical consultation with a robotic creature. Would you feel comfortable? Would you trust this product of artificial intelligence to look after you effectively and safely?
I’m not sure how far away such a reality is, but it is certainly attracting heavy investment. The latest version of artificial intelligence (AI) to hit the headlines, ChatGPT, has been creating a stir since the US start-up OpenAI made the text-based dialogue system accessible to the public in November 2022. ChatGPT stands for Chat Generative Pre-trained Transformer.
ChatGPT may be a long way from the AI technology that would fuel a functioning robotic doctor, but there is a clear intent that machines will eventually replace human physicians. Those arguing for this eventuality say that deep-learning AI systems continually integrate new knowledge and perfect themselves with a speed that humans cannot match. They also highlight the benefits of using AI to treat patients, including increased availability, lower costs and no risk of mutual infection.
Sceptics argue that AI in healthcare is overhyped, profit-driven and not always in patients' best interests. But even if we were to have high-level evidence of the superiority of AI over medical professionals, would that justify replacing humans with machines?
In an editorial in the British Medical Journal, Dr Vanessa Rampton says the question asks us to distinguish between the technical prowess of AI and the more fundamental issue of whether human physicians can provide something that machines never will.
In my opinion, robot-simulated empathy can never replicate human forms of communication. Human doctors can relate to patients as fellow mortals and vulnerable beings. Patients need to be cared for by people, especially when they are ill and at their most vulnerable. A machine will never be able to offer true comfort.
In the intimate crucible of a doctor-patient consultation, there is a need to appreciate a patient's values, their non-verbal communication and their social circumstances. These factors become especially important when a patient has symptoms for which no diagnosis can be found, or when cure is not an option.
According to Rampton, patients emphasise that sensing their doctor truly cares about what they are going through, really wants to help and is able to establish a “genuinely intimate and empathetic connection” makes a big difference to their ability to manage their health.
Research from Yale University, published last year, offers an up-to-date understanding of patients’ views of AI in healthcare. Most of the 926 respondents were very concerned or somewhat concerned about AI’s unintended consequences, including misdiagnosis (91.5 per cent), privacy breaches (71 per cent), less time with clinicians (70 per cent), and higher healthcare costs (68 per cent).
Patients acknowledge that AI could help physicians integrate the most recent scientific evidence into medical care. But there is a strong feeling that AI in medicine should be disclosed and controlled to protect patient interests and meet ethical standards.
In the radiology arena, people seem reasonably relaxed about having diagnostic technology work hand in hand with the radiologist. Recent research shows that AI software can detect TB from chest X-rays with an accuracy comparable to, or better than, that of the radiologists tested.
AI may have the potential to become a useful and innovative aid in healthcare, but I believe there will always be room for humanity. If digital technologies enable the development of new forms of knowledge and diagnostic accuracy, it would seem foolish not to welcome them. But a key question remains: as technology continues to change relationships between patients and doctors, how can we maintain an essential trust in the process?
Without this bedrock of trust, we may need to slow down the trend towards more automation in the consulting room.