The AI horse has already bolted

And it’s the health sector’s responsibility to learn how to ride.


As healthcare becomes more demanding and complex, it’s time to harness the potential of AI to work smarter, not harder, said an expert panelist at today’s RACP Congress plenary session.

“To say the AI horse has bolted is an understatement,” cardiology trainee and panelist Dr Anupam Rao told delegates.

“The Melbourne Cup is over, the punters have gone home, and then a bunch of physicians have turned up.

“Medicine is becoming complicated, disease is becoming complicated, what is our solution to this?

“To work smarter.”

According to Dr Rao, that will involve leveraging AI tools.

But for professionals in the health sector, navigating the currently available, imperfect AI systems within Australia’s current regulatory framework was a tough ask.

Paediatrics director and researcher at the University of Sydney, Dr Sandra Johnson, told delegates that the responsibility for duty of care when using AI technologies remained in the hands of individual physicians.

And step one of that journey was informed consent.

“Our duty of care is to look after patients and to provide reasonable care and skill in every aspect of the work we do in medicine,” said Dr Johnson.

“We must protect patient data and consider the issue of data ownership.

“Where the doctor does not understand the validity, reliability or trustworthiness of the system, the doctor should not use the tool.

“Our responsibility is to understand how the AI was trained – is the data that was used in training representative of the population where your system is going to be used? What were the questions that were asked of the system? — so that you can explain this to your patients and therefore obtain informed consent.”

But while Australia’s current regulatory framework means responsibility lies with individual practitioners using AI systems, ultimately accountability should be shared with manufacturers, said Dr Johnson.

Panelist Kai Van Lieshout, who had firsthand experience implementing AI in the health sector through his AI software, Lyrebird’s Medical Scribe, said that the company worked hard to lift some of the burden from practitioners.

“What we have done is removed the onus from the doctor,” he said.

“We have a full patient portal of explanatory pages where we explain AI, we explain how we keep everything secure, and especially data sovereignty.

“Removing that burden from the clinician in the moment of the consult has been key for us.”

The panelists agreed that there was a growing responsibility on physicians to educate themselves on the basics of artificial intelligence.

“We’re on this road and we’re all learning together,” said Dr Johnson.

As with any innovation in medicine, physicians must have the knowledge to discuss the technology, medicine or procedure with patients in accessible terms.

“We should all get used to a shared vocabulary,” said Dr Rao.

“In every field of medicine, there are new techniques, there are new procedures that come along.

“We all have to upskill and we all have to learn.

“The college needs to give us a platform where we can update members and fellows and trainees in a concise and consistent way.”

Speaking to The Medical Republic’s sister publication Health Services Daily, director of the Australian Institute of Health Innovation Professor Enrico Coiera said that balancing safety with progress was a constant challenge.

“This is one of the reasons why regulation is so important because it provides protection,” he said.

“There’s an endless tension between innovation and being safe.

“If you look at federal governments, they talk about risk-based regulation for AI, and when you consider what a high-risk setting is, the exemplar is always healthcare.

“In Silicon Valley, they say move fast and break things, but in healthcare the things you break are people, so maybe let’s not move too fast.”

Professor Coiera said that, moving forward, it was imperative to prioritise patient safety.

“I think there’s nothing wrong with being considered,” he said. “You shouldn’t be a laggard but equally, don’t do things that are unsafe.”

But beyond the responsibility of individual practitioners, it’s up to leadership organisations to bring recommendations to governments and maintain pressure to ensure the system becomes fit for purpose, said Professor Coiera.

“We need to work together in regulation and governance,” he said.

“What I love about AI is that it encourages collaboration, consensus, cooperation and less competition.

“It’s time we stop trying to compete with our physician colleagues; it’s time for us to work together and think together so we can all move forward into the future alongside this very, very rapidly developing technology.”
