Doctors may not be able to recognise inaccurate advice from clinical artificial intelligence tools, raising questions over how to tackle automation bias in the treatment room.
As the range of applications for AI grows, the data-intensive healthcare field is becoming a hotbed for new uses, with medical software giant Cerner already providing several AI-based clinical decision-support programs.
According to a recent study published in Nature's npj Digital Medicine, AI technology for healthcare settings should be designed to increase physician-computer collaboration rather than to have AI act as a substitute decision-maker.
The study, led by researchers Susanne Gaube and Harini Suresh, compared how a trial group of roughly 260 radiologists and internal medicine doctors responded to inaccurate advice when it came from a colleague, as opposed to when it came from an AI system.
Of the two groups, the internal medicine doctors were generally less experienced in the task at hand – examining an X-ray – and were found to be more trusting of the AI's advice than the radiologists.
“Overall, the fact that physicians were not able to effectively filter inaccurate advice raises both concerns and opportunities for AI-based decision-support systems in clinical settings,” the authors wrote.
At the crux of this phenomenon, according to the authors, is that when an AI system offers a diagnosis, the physician is prompted to verify that suggestion rather than to search the case for evidence of their own.
Dr David Lyell, a researcher with the Australian Institute of Health Innovation at Macquarie University, told Wild Health that this changed relationship between the doctor and the task at hand was a form of automation bias.
“[Using AI-based decision support tools] changes the nature of the relationship between the clinician and the task, because instead of actively performing the task, they’re now supervising a machine that’s performing the task,” he said.
Another growing concern, said Dr Lyell, involved alert fatigue, where clinicians receive so many routine alerts that they “switch off” and miss a critical one.
Safely implementing AI in healthcare would mean striking a balance between over-reliance and under-use, Dr Lyell told Wild Health.
“We need people to pay attention to important alerts, and to utilise advice they get from decision support systems when it’s the right advice,” he said.
“But at the other end of the spectrum, we have [issues with] over-reliance on decision support, and that can lead to errors [as well].”
To combat automation bias, the study's authors proposed providing AI decision support only upon request, and developing tools that convey the degree of uncertainty attached to each AI suggestion.
“In our study, we found that while physicians often relied on inaccurate advice, they felt less confident about it,” the authors wrote.
“Tools that can understandably communicate their own confidence and limitations have the potential to intervene here and prevent over-reliance for these cases where physicians already have some doubt.”
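The kind of uncertainty-aware, on-request decision support the authors describe might look something like the sketch below. This is a minimal, hypothetical illustration, not code from the study: the confidence threshold, names and message format are all assumptions. The idea is that advice is surfaced only when the clinician asks for it, always carries a calibrated confidence figure, and flags itself explicitly when that confidence is low.

from dataclasses import dataclass
from typing import Optional

CONFIDENCE_FLOOR = 0.70  # hypothetical threshold below which advice is flagged as uncertain

@dataclass
class AiSuggestion:
    diagnosis: str     # the model's suggested finding
    confidence: float  # calibrated probability between 0 and 1

def request_advice(suggestion: AiSuggestion, clinician_asked: bool) -> Optional[str]:
    """Return the AI's advice only when the clinician explicitly asks for it."""
    if not clinician_asked:
        return None  # advice stays out of the way unless requested
    message = f"AI suggestion: {suggestion.diagnosis} (confidence {suggestion.confidence:.0%})"
    if suggestion.confidence < CONFIDENCE_FLOOR:
        message += " -- low confidence: verify against the image and clinical context."
    return message

if __name__ == "__main__":
    advice = AiSuggestion(diagnosis="right lower lobe pneumonia", confidence=0.62)
    print(request_advice(advice, clinician_asked=True))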