Overreliance on generative artificial intelligence (AI) risks eroding future doctors’ critical thinking skills while reinforcing existing biases, a British Medical Journal editorial has warned.
The authors, from the University of Missouri, urged medical educators to exercise vigilance and to adjust curricula and training to mitigate the technology’s pitfalls.
In the article, they pointed to the widespread use of generative AI (GenAI) tools and expressed concern over a lack of institutional policies and regulatory guidance.
Overreliance on AI by medical students and trainee doctors poses several risks, the editorial said, including automation bias, deskilling, cognitive offloading and the outsourcing of reasoning. Uncritical trust in automated information after extended use of AI tools, it added, can undermine critical thinking and memory retention, both crucial to the profession.
The article highlighted security issues and the potential for breaches of privacy and data governance as particularly important risks, given the sensitive nature of healthcare data.
The editorial advised designing critical-skills assessments that exclude AI. These assessments would use supervised stations or in-person examinations, with a focus on bedside communication, physical examination, teamwork and professional judgment.
It also recommended evaluating AI itself as a competency, because “data literacy and teaching AI design, development, and evaluation are more important now than ever, and this knowledge is no longer a luxury for medical learners and trainees”.
The authors also emphasised that medical trainees need to understand the principles and concepts underpinning AI’s strengths and weaknesses, as well as when and how AI tools can be usefully incorporated into clinical workflows and care pathways.
The article urged regulators, professional societies and educational associations to take action by producing and regularly updating guidance on the impact of AI on medical education.
The detrimental impact of AI use in medicine has already been documented in recent research.
One series of studies by researchers at leading US and British universities suggested that medical AI tools powered by large language models (LLMs) tend to downplay the severity of symptoms in female patients, while also displaying less “empathy” towards black and Asian people.
AI is increasingly being used across the Irish health system, particularly in medical imaging and diagnostics; earlier this year the Mater hospital in Dublin launched a new centre for AI and digital health.
In October the Medical Council published a position paper on the use of AI in the profession, stating that advancements in AI bring “significant” ethical, legal, regulatory and professional challenges for doctors.