Building trust for AI in healthcare

What do we need to be more concerned about – AI, or the humans who use it?


Some people would trust AI with their lives, while others don’t trust AI as far as they could throw it (which raises the interesting question of how one would throw different types of AI technology).  

Dr John Lambert, chief clinical information officer at NT Health, told delegates at the Digital Health Festival in Melbourne last week that he had cognitive dissonance when he thought about the topic. 

“I can see how amazing and incredible generative AI is … But I’m really worried about the consequences of these tools. I think they are untested [and] are being used in ways that interfere directly with patient care,” he said. 

“We’re using them to write summaries of patient notes, for God’s sake. You cannot get more invasive into the care pathway [than that] – and we’ve got nothing really evidence-wise to say that they can’t cause tremendous harm.” 

Dr Lambert remains optimistic about the use of generative AI in healthcare, so much so that he isn’t worried about the AI itself. It’s the humans using the AI that concern him. He cited recent research comparing AI-generated responses to patient messages with clinician-generated responses. 

“When they looked at all the responses, the manually generated responses had more action statements and urgent care recommendations [compared to the AI-generated responses].  

“But what was most interesting was when clinicians were presented with the [AI-generated] drafts and could modify them, the increment in action and urgent care statements was tiny compared to the percentage that would have said something if they hadn’t had the [AI] model.” 

To translate: clinicians weren’t editing the AI responses to include the recommendations they would have made in their own replies, and that sets off alarm bells for Dr Lambert. 

“The model is changing the way clinicians are thinking. It’s changing their behaviour. 

“Now, that might be fantastic. Maybe we’re being overreactive and calling too many people in, and we’re going to save the health system billions of dollars through less callbacks. But maybe not. 

“And until we research it, we don’t know. I think the responsibility on healthcare clinicians and healthcare systems is to research this stuff at pace to work out whether it is safe or not, because if it’s safe, it’s bloody brilliant.” 

Dr Bryan Tan, chief health officer at Salesforce, told delegates about the results of The Trust Imperative 4 (GenAI: The Trust Multiplier for Government) report, produced in collaboration with the Boston Consulting Group as part of the 2024 Digital Government Citizen Survey. Approximately 10% of the 40,000 respondents came from Australia and New Zealand. 

Twenty percent of Australian and New Zealand respondents felt the benefits of generative AI outweighed the risks. One-third of respondents felt the risks outweighed the benefits, while another third felt the benefits and risks were about equal. The remaining respondents were unsure if the benefits outweighed the risks. 

A closer look at the data revealed an interesting, but not surprising, association between familiarity with AI and trust.  

“Those who used AI quite regularly [and had the highest knowledge of AI] were four times more likely to say the benefits outweigh the risks compared to respondents with just a basic knowledge of AI,” Dr Tan explained. 

“And those who had the most knowledge of AI were 15 times more likely to accept the benefits outweigh the risks than the cohort who had no knowledge of AI.”  

But having greater exposure to, and experience with, generative AI does not automatically mean people are completely comfortable with using the technology in healthcare. 
