New guidelines have set out the do’s and don’ts of using AI in clinical practice.
The Australian Commission on Safety and Quality in Health Care has released new guidance on the daily clinical use of AI tools, including a series of concise guides on AI usage.
These guides cover everything from understanding the potential shortfalls of AI and the need for human oversight, to continuous evaluation and transparency of its use.
“The rapid advancement and adoption of AI can result in new and increased risk, especially as evidence of safety and efficacy may lag behind implementation,” the Commission said in its clinical use guide.
“This guidance and associated clinical scenarios support clinicians, together with their patients, in using AI safely and responsibly in patient care and are structured to support the steps of ‘before you use’, ‘while you use’ and ‘after you use’ AI tools.”
The Commission consistently reiterates that while AI is a developing technology, its use must still adhere to professional and legal obligations as part of best practice.
Another key message was that AI should serve purely as an assistive tool, not a directive one, hence the Commission’s focus on human oversight.
“To safely integrate AI into clinical workflows, clinicians must familiarise themselves sufficiently with the intended use of each AI tool, to understand the benefits and potential harms,” the Commission said.
“AI development often occurs in highly controlled and idealised settings which can outpace the development of robust evidence in ‘real-world’ clinical settings.
“Clinicians, together with their patient, must carefully and transparently weigh the potential harms against the anticipated benefits prior to use in clinical settings.”
Transparency is another key point of the guidelines, which recommend the establishment of informed consent procedures as part of routine practice.
“The broad and varied uses of AI make consent complex and there is no single approach,” the Commission said.
“The method of disclosure and type of consent required will normally be determined by your organisation and depend on the nature of the AI tool.”
A common limitation listed in the guidelines was the potential for AI “hallucinations”, which arise from errors in how models learn from their training data.
These hallucinations are fabricated outputs that typically occur when a model’s training data lacks diversity or is not representative of the patient cohort.
“It can lead to biased outputs. Over time, these outputs may also change,” the Commission outlined.
“There are documented cases where AI tools have disadvantaged certain patient groups due to under-representation in the training data.
“This bias raises important ethical and equity concerns, along with potential clinical risks such as inappropriate treatment recommendations, inaccurate healthcare records and diagnostic errors.”


