16 March 2023

Ethical traps ahead for AI in health


A landmark study published in the UK this week has proposed a comprehensive set of practices and principles designed to ensure artificial intelligence (AI) is used ethically in healthcare and medicine.

The study calls for AI-generated content to be clearly labelled as such; for copyright, liability and accountability frameworks to be established where appropriate; and for AI to always be considered as a tool to assist human decision makers, not replace them.

Published in The Lancet’s eBioMedicine journal, it details how large language models (LLMs) could potentially transform information management, education and communication workflows in healthcare and medicine. LLMs are a key component of generative AI applications such as ChatGPT, which create new content (text, imagery, audio, code and video) in response to text or voice prompts.

But according to the paper’s author, Australian AI ethicist Dr Stefan Harrer, LLMs also remain one of the most dangerous and misunderstood types of AI.

The study, he says, is “a plea for regulation of generative AI technology in healthcare and medicine and provides technical and governance guidance to all stakeholders of the digital health ecosystem: developers, users, and regulators – because generative AI should be both exciting and safe”.

According to Dr Harrer, there is an inherent danger in LLM-driven generative AI, since it can authoritatively and convincingly generate and distribute false, inappropriate and dangerous content on an unprecedented scale.

But that point is getting lost in the noise surrounding the newest generation of powerful chatbots, such as ChatGPT, he says.

Dr Harrer emphasises that this danger lies in the AI’s inability to comprehend the material it analyses.

“In the generation of medical reports, what is currently a manual process – ploughing through records from various sources in various forms – can with this technology, theoretically, become an instruction to the AI to potentially read all those documents and then produce a summary or a diagnostic and medical report,” he tells Wild Health.

“There are two steps: the clinician giving the prompt, and the AI responding to it.

“That’s very, very tempting, but the thing is the generative AI does not understand the content; the language is not comprehended by the AI. It has no way to assess whether any of the content it created is correct, whether it omitted something, or whether it created an incorrect statement in its output.

“This means that every output created by generative AI needs to be closely vetted by human subject matter experts to make sure there’s no misinformation or wrong information in there,” he says.
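The shape of that workflow, and the vetting step it demands, can be made concrete. The following minimal Python sketch is illustrative only: `summarise_records` is a stand-in for whatever model service is actually used, and all the names are hypothetical. The point is the gate at the end, where nothing is released without a named clinician signing off.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Stand-in for an LLM call; a real system would invoke a model API here.
def summarise_records(prompt: str, records: list[str]) -> str:
    return "DRAFT SUMMARY: " + " / ".join(r[:40] for r in records)

@dataclass
class DraftReport:
    text: str
    ai_generated: bool = True        # label AI output as such
    approved_by: str | None = None   # set only after human review
    created_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

def generate_draft(prompt: str, records: list[str]) -> DraftReport:
    """Step one and two: the clinician prompts, the AI drafts."""
    return DraftReport(text=summarise_records(prompt, records))

def release(report: DraftReport) -> str:
    """The draft leaves the pipeline only after a clinician has vetted it."""
    if report.approved_by is None:
        raise PermissionError("AI-generated report has not been vetted")
    return f"[AI-assisted, approved by {report.approved_by}] {report.text}"

draft = generate_draft("Summarise this admission", ["ECG normal", "BP 140/90"])
draft.approved_by = "Dr A. Example"  # recorded after manual review of the text
print(release(draft))
```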

Dr Harrer – who is chief innovation officer of the Digital Health Cooperative Research Centre, a major Australian funder of digital health research – proposes a regulatory framework with 10 principles to mitigate the risks:

  1. Design AI as an assistive tool for augmenting the capabilities of human decision makers, not for replacing them.
  2. Design AI to produce performance, usage and impact metrics explaining when and how AI is used to assist decision making and scan for potential bias.
  3. Study the value systems of target user groups and design AI to adhere to them.
  4. Declare the purpose of designing and using AI at the outset of any conceptual or development work.
  5. Disclose all training data sources and data features.
  6. Design AI systems to clearly and transparently label any AI-generated content as such (see the sketch after this list).
  7. Audit AI on an ongoing basis against data privacy, safety, and performance standards.
  8. Maintain databases for documenting and sharing the results of AI audits, educate users about model capabilities, limitations and risks, and improve performance and trustworthiness of AI systems by retraining and redeploying updated algorithms.
  9. Apply fair-work and safe-work standards when employing human developers.
  10. Establish legal precedent to define under which circumstances data may be used for training AI, and establish copyright, liability and accountability frameworks for governing the legal dependencies of training.
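Several of these principles map directly onto engineering practice. As a rough sketch of what principles 2, 4, 6 and 7 could look like in code, the snippet below attaches provenance metadata to every output and appends it to an audit log; the file name and field names are assumptions for illustration, not part of the study.

```python
import json
from datetime import datetime, timezone
from pathlib import Path

AUDIT_LOG = Path("ai_usage_audit.jsonl")  # hypothetical audit store

def record_ai_usage(model: str, version: str, purpose: str,
                    output_text: str, flagged_for_bias: bool) -> dict:
    """Attach provenance to an AI output and append it to the audit log."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model": model,
        "model_version": version,              # which algorithm produced this
        "declared_purpose": purpose,           # principle 4: declare the purpose
        "ai_generated": True,                  # principle 6: label the content
        "flagged_for_bias": flagged_for_bias,  # principle 2: scan for bias
        "output_text": output_text,
    }
    with AUDIT_LOG.open("a") as f:             # principle 7: ongoing audit trail
        f.write(json.dumps(entry) + "\n")
    return entry

record_ai_usage("example-llm", "2023-03", "pre-authorisation letter draft",
                "Dear insurer, ...", flagged_for_bias=False)
```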

The study scrutinised several AI tools for ethical design, release and use principles, and for performance, including OpenAI’s chatbot ChatGPT, Google’s medical chatbot Med-PaLM, Stability AI’s image generator Stable Diffusion, and Microsoft’s biomedical language model BioGPT.

“As impressive as their performance is in many ways, you cannot use them in an evidence-based sector, such as healthcare and medicine, without a mechanism in place – a human in the loop – to check the outputs before you act on them,” Dr Harrer says.

“The problem we see right now is that the latest generation of generative AI has essentially been unleashed onto the public but, generally, with very insufficient guardrails, often with insufficient education of users as to how they work, what they are and what they are not. People have essentially been invited to experiment with these tools widely.”

Distributing incorrect data can have a further, compounding effect, he says.

“If you ingest, re-ingest or amplify incorrect content created by generative AI and then return it to contaminate the public knowledge database, then use it as you go forward to train generative AI or to feed into other AI, you amplify the falsification in the source data – and that’s worrying as it is.

“But if you think about the scale at which this can happen with the generative AI tools that we have now, then it’s truly worrisome.”
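A toy calculation shows why the compounding worries him. Suppose (the numbers here are illustrative assumptions, not measurements) that each retraining cycle mixes a share of AI-generated text back into the training pool, and that this text carries extra errors on top of whatever the previous model already got wrong:

```python
# Toy model of contamination compounding across retraining cycles.
# All numbers are illustrative assumptions, not measurements.
baseline_error = 0.01    # error rate of human-curated source data
ai_error_excess = 0.05   # extra errors added per AI generation step
ai_fraction = 0.30       # share of AI-generated text re-ingested each cycle

error_rate = baseline_error
for cycle in range(1, 6):
    # The pool mixes clean data with AI output trained on the previous pool.
    error_rate = ((1 - ai_fraction) * baseline_error
                  + ai_fraction * (error_rate + ai_error_excess))
    print(f"cycle {cycle}: estimated error rate {error_rate:.3f}")
```

Even in this simple model the error rate roughly triples before settling, and it climbs further the more aggressively AI output is re-ingested.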

The study highlights and explains many key applications within healthcare and medicine, including:

  • assisting clinicians with the generation of medical reports or pre-authorisation letters;
  • helping medical students to study more efficiently;
  • simplifying medical jargon in clinician-patient communication (sketched after this list);
  • increasing the efficiency of clinical trial design;
  • helping to overcome interoperability and standardisation hurdles; and
  • making drug discovery and design processes more efficient.
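To give one of these a concrete shape, the jargon-simplification use case amounts to a constrained prompt plus the same human-review gate described earlier. In the sketch below, `call_llm` is a hypothetical placeholder for whatever model endpoint is used:

```python
# Hypothetical LLM client; stands in for a real model API.
def call_llm(prompt: str) -> str:
    return "You had a heart attack caused by a blocked artery."

def simplify_for_patient(clinical_text: str) -> str:
    """Draft a plain-language version of a clinical note, pending review."""
    prompt = ("Rewrite the following note in plain language for a patient. "
              "Do not add, remove or reinterpret any clinical facts.\n\n"
              + clinical_text)
    draft = call_llm(prompt)
    # Per the study's guidance, the draft is labelled as AI-generated and
    # must be vetted by the treating clinician before it reaches the patient.
    return "[AI-generated draft, pending clinician review]\n" + draft

print(simplify_for_patient("Acute myocardial infarction secondary to LAD occlusion."))
```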

Dr Harrer isn’t pessimistic about the development of ethical AI models in healthcare and medicine. He predicts the field will move from the current competitive “arms race” to a phase of more nuanced, risk-conscious experimentation with research-grade generative AI applications.

However, a paper by Monash University researchers, published in the Journal of Applied Gerontology last month, found that introducing AI into aged care homes can exacerbate ageism and social inequality.

The technology has been used in areas ranging from addressing residents’ loneliness through chat, video and image sharing, to medical diagnosis and assessment tools. But its value can be compromised by carers’ choices about how best to use the technology for older people in these settings.

“AI can perpetuate ageism and exacerbate existing social inequalities,” says lead author Dr Barbara Barbosa Neves, senior lecturer in sociology and anthropology at Monash.

“When implementing AI technologies in aged care, we must consider them as part of a suite of care services and not as isolated solutions.”

The study reveals that more work is needed on how older people are represented in the design and implementation of AI technologies. The findings show that ageism can be built in when designs treat older people as dependent, incompetent and disinterested in technology.

“The use of AI in aged care must be done with consideration of the potential impact of these technologies on the wellbeing, autonomy and dignity of older residents,” Dr Neves says.

According to Dr Harrer, the enthusiasm with which some AI applications have been received has meant the area is “extremely supercharged right now”.

“But what that means is everyone needs to take a step back, take a deep breath and think about how to develop this from here on in a responsible, ethical way,” he says.

“If we don’t, I believe it would be one of the greatest missed opportunities in the field of science – churning out ever-more complex models trained on ever-more data but forgetting ethical and responsible design, deployment and use frameworks”.
