We’re not making this up…



Unlike these AI radiologists, which really are.


When it comes to the ever-increasing incursions of AI technology into healthcare, your Back Page scrawler often feels as if he’s watching a slow-moving train wreck.

How can one not feel a sense of impending disaster when we learn that, according to a report in Radiology Business, the CEO of a New York-based hospital public benefit corporation has expressed a desire to replace highly trained radiologists with visual language AI models?

Speaking to a business panel late last month, the boss cocky of NYC Health + Hospitals, Mitchell Katz, told participants that “a great deal of radiologists” could be replaced by AI right now “if we are ready to do the regulatory challenge”.

Those words carry some weight, because NYC Health + Hospitals is the largest municipal healthcare delivery system in the United States, operating 11 hospitals, with 45,000 employees looking after at least 1.4 million people.

By way of example of how this job-shredding technology might work, Katz said women’s healthcare, in particular, could be improved by automating breast cancer screening with AI tools.

By sidelining radiologists until an AI system flagged a reading as abnormal, hospitals could achieve “major savings”, he said.

Unsurprisingly, US radiologists beg to differ.

San Diego-based Mohammed Suhail, for example, told media that Katz’s comments were “undeniable proof that confidently uninformed hospital administrators are a danger to patients,” and were “easily duped by AI companies that are nowhere near capable of providing patient care”.

“Any attempt to implement AI-only reads would immediately result in patient harm and death, and only someone with zero understanding of radiology would say something so naïve.”

One could argue that radiologists naturally would say that, given what they have to lose – but they do have some unsettling research to back up their fears that AI in the X-ray room ain’t all it’s cracked up to be.

A case in point: Stanford University boffins reveal in a yet-to-be-peer-reviewed study that AI-powered chest X-ray tools have the unfortunate habit of – how shall we put this? – “just making shit up” if they can’t actually answer a question.

The paper, titled The Illusion of Visual Understanding, shows how the AI tools successfully passed medical benchmark tests without ever seeing actual images of X-rays.

The AI tools “readily generate detailed image descriptions and elaborate reasoning traces, including pathology-biased clinical findings, for images never provided”, the Stanford boffins said.

The research team labelled this phenomenon “mirage reasoning”, adding that because mirages weren’t based on anything real, the usual hallucination safeguards used in AI tools were not enough to deter them.

Which, prima facie, would suggest the US radiologists might have a point – not that these facts are ever likely to get in the way of hospital administrators and the chance to save a bucketload of money.

Of course, here in Australia we could never be so pea-brained as to entrust public welfare outcomes to the vagaries of an automated decision-making technology, could we?

What’s that you say? Robodebt?

Help keep fallible humans gainfully employed by sending story tips to Holly@medicalrepublic.com.au.
