AI in healthcare: unrealised benefits, irreversible consequences

The Department of Health, Disability and Ageing appears set on allowing AI into healthcare in some capacity. Here’s what stands in its way.


A new government review has laid out the biggest concerns about artificial intelligence within the healthcare industry, including the emerging risk of patient data being re-identified.

Completed earlier this year but published this week, the Department of Health, Disability and Ageing review examined Australia’s legislative and regulatory settings on AI and consulted industry stakeholders including the RACGP, the Pharmacy Guild of Australia and the AMA.

In some senses, AI in healthcare is a horse that has already bolted – or is at least nosing its way out of the stable door.

According to the report, AI is already being used in consult rooms as a scribe, in aged care homes as a companion, on healthcare websites as a chatbot, as a predictive tool in medical records and as a screening tool in oncology and dermatology.

The other facets of healthcare which could eventually be touched by AI, according to the DoHDA report, include insurance, billing, training, consent, privacy and health data.

The last of these – privacy and health data – formed a major area of focus.

“Stakeholders highlighted that, given the many data sources now available, even the most robust techniques for deidentification may no longer be sufficient to safeguard patient privacy,” the report said.

“Some respondents pointed out that reidentification of patient data is a likely outcome.

“Several clinical stakeholders raised that certain patient data, such as skin scans and genetic data, is impossible to deidentify.

“In these instances, deidentification cannot be assumed to be a safeguard for patient privacy.”

People who live in smaller communities – and are therefore more easily re-identified – and children were specifically mentioned as being at higher risk if their health data were to leak.
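To see why small communities raise the risk, consider a toy linkage sketch – not drawn from the report – in which a “deidentified” health record is joined to a public dataset on three quasi-identifiers (postcode, birth year and sex). Every name, postcode and record below is invented for illustration.

```python
# Toy illustration (not from the DoHDA report): "deidentified" records
# can be re-identified by joining them to public data on quasi-identifiers.
# All names, postcodes and records here are invented.

deidentified_health = [
    {"postcode": "2898", "birth_year": 1961, "sex": "F", "diagnosis": "melanoma"},
    {"postcode": "2000", "birth_year": 1990, "sex": "M", "diagnosis": "asthma"},
]

# A hypothetical public dataset (e.g. an electoral roll or leaked profile dump).
public_records = [
    {"name": "Jane Example", "postcode": "2898", "birth_year": 1961, "sex": "F"},
    {"name": "John Example", "postcode": "2000", "birth_year": 1990, "sex": "M"},
    {"name": "Sam Example", "postcode": "2000", "birth_year": 1990, "sex": "M"},
]

QUASI_IDENTIFIERS = ("postcode", "birth_year", "sex")

def reidentify(health_rows, public_rows):
    """Link each 'anonymous' health record to every public identity sharing
    its quasi-identifiers; a unique match is a re-identification."""
    for record in health_rows:
        key = tuple(record[q] for q in QUASI_IDENTIFIERS)
        matches = [p["name"] for p in public_rows
                   if tuple(p[q] for q in QUASI_IDENTIFIERS) == key]
        if len(matches) == 1:
            print(f"Re-identified: {matches[0]} -> {record['diagnosis']}")
        else:
            print(f"Ambiguous: {len(matches)} candidates share {key}")

reidentify(deidentified_health, public_records)
# The sparsely populated postcode yields a unique match (a re-identification);
# the dense city postcode leaves two candidates and stays ambiguous.
```

In the small community, three ordinary attributes pick out exactly one person; in the dense postcode they do not – precisely the asymmetry the report flags.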

“Data leakage is one-way: the breached data generally cannot be retrieved or deleted, so the damage may continue for years after the original exposure,” the report said.

“For children, who are not able to consent initially, the impacts may be felt for the long term.

“There is also the possibility of AI introducing changes to patient records that are difficult to reverse or irreversible, such as errors arising from ‘hallucinations’ or inaccuracies.”

The report also noted disagreement amongst the consultation cohort over who owns patient data – the patient or the health service provider – and whether patients should be remunerated if their data is sold on.

Just over half of the respondents said they thought personal healthcare information should be kept in Australia.

There were also differing attitudes toward what constituted “low risk”; the report pointed out that while the risk of any single AI scribe mistake may seem very low, that risk compounds over time and is inversely correlated with the tool’s performance.
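To illustrate how a “very low” per-use risk compounds, here is a back-of-the-envelope calculation; the error rate and consult volume are invented for illustration, not figures from the report.

```python
# Assumed numbers, for illustration only: a 0.1% chance that any single
# scribe-generated note contains an error, across 10,000 consults a year.
per_consult_error = 0.001
consults_per_year = 10_000

# Probability of at least one erroneous note over the year.
p_at_least_one = 1 - (1 - per_consult_error) ** consults_per_year
print(f"{p_at_least_one:.4%}")  # ~99.9955% – near-certain at this volume
```

A failure rate that looks negligible per consult becomes a near-certainty across a practice’s annual workload.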

In terms of future direction, the report identified a need for “national and centralised policy leadership to steward equitable benefit realisation”.

In other words, there needs to be official advice tailored to the health sector.

“The existence of low-quality and misleading information about AI in health care can adversely impact decision making,” the report said.

“Further, the use of AI to generate information about health care can result in poor-quality outputs.

“Having access to a trusted source of accurate, reputable, and timely information about AI in health care would support its safe and responsible use.”

It also raised the possibility of an incentive framework for the medical technology industry, one that would reward AI development and usage practices delivering high-quality, accurate and safe products to market.

More than 70% of the consultation respondents supported a dedicated body that would oversee AI in healthcare.
