How do we make sure that all our doctors are competent to use AI, or to start using it, and remain that way year on year, given how fast this technology is evolving?
Would it surprise anyone to know that there are certain persons in government currently weighing up the idea of a licensing regime for doctors around the safe use of medical AI?
Yep, an AI driving licence.
It’s in the “thinking it through” stage currently. But thinking fairly seriously.
Mostly, how it could be introduced relatively seamlessly, without the usual blowback from the various peak doctor bodies, or upsetting doctors unnecessarily.
If the Medical Board of Australia was in the least bit agile, it would have moved on this idea already in a very easy and relatively non-controversial manner: a compulsory unit of CPD for every doctor in Australia on using AI safely.
Would the peak bodies really stare down the idea that an AI medical licensing regime would just be more government red tape keeping our doctors from what they do best, or that this would be somehow questioning the initiative of our already highly trained medical workforce in an unfair way?
It’s hard to see on what logic they could.
AI is a real-time live experiment in complex medical information management, patient empowerment and doctor productivity which, though exciting in its potential, is so far pretty dangerous given the level of change being thrust on doctors with no formal training or education in what they're getting themselves into.
You never see a big new drug class introduced without significant CPD preparation before actual market release.
Doctors can already easily cause harm. With AI, they could feasibly do it far more efficiently if they don’t understand the basics or use it the wrong way.
As far as the provider-side AI vendors are concerned, it's all care and virtually no responsibility. They make it very clear that their AI isn't a replacement for a doctor making their own decision.
Everything a doctor gets out of their AI during a consult, even the decision support stuff, has to be reviewed and thought about carefully, say the vendors. Any mistakes made by the AI are not on them, they state.
Human nature tends to suggest doctors won’t always be following this sage advice.
What if, for example, you're in your 16th hour of a registrar hospital stint, with way too many patients to get through?
An obvious thing for the government to consider in helping ameliorate this rapidly evolving problem is some sort of regularly tested licensing regime.
Which does, initially at least, sound like a bit of a horrifying extension of compliance if you're a doctor with 12 to 15 years of education and training behind you, trying to get your 50 hours of CPD done each year.
But in what other way could the government, and presumably the Medical Board of Australia, address this immediate problem quickly?
They need to get every doctor in Australia base educated in a uniform manner on the specifics of how medical AI is unfolding in Australia and the risks, especially in a clinical setting, and then somehow test that they have a base level of understanding.
There are so many moving parts in medical AI at the moment – the TGA position, any number of vendors offering any number of variations on a theme, the rapid iteration of patient-side AI via ChatGPT and Claude, the integration of all of this into existing doctor EMRs, from the GP to the hospital, and so on – that you'd have to think that at this point in time, this might initially have to be a yearly compliance process.
Although we can’t see that any government overseas has yet instituted a formal licensing regime for AI for their doctors, a few relevant ones are moving rapidly towards the idea. It makes sense if it can be done with tact and collaboration.
You can imagine the hoo-ha that would erupt if the Department of Health, Disability and Ageing (DoHDA) suddenly announced, without much if any consultation, that as well as – but much more likely as part of – a doctor's compulsory yearly 50 hours of CPD, each doctor using AI in practice would need to pass a regular AI driving test.
Notwithstanding, the concept at least is both logical and simple.
AI is flooding the zone in day-to-day medical practice and the spectrum of knowledge of what’s available, how to use it safely and what the risks are for the patient and doctor is now wide and getting wider each day.
The government has some duty of care to try to get every doctor in the country to demonstrate a uniform understanding of the basics. It will likely want to do that initially on an annual basis given just how fast the technology is moving.
Of course, nothing like this is ever all smooth sailing.
Like a driving test, you would be able to fail. Peak bodies and doctors will not love that idea.
Because if you did fail, you would technically not be able to formally use AI in a consult until you passed. And, like driving, if you used AI without a licence you might get into trouble, probably from AHPRA – ouch, that's a nasty but effective stick to use.
I’m a doctor and I’m not using AI
What if a doctor arcs up and says “baaah, I hate AI, I’m a good doctor without it”?
Then fine, you conceivably would not need an AI medical driving licence.
It’s doubtful the government could force a doctor to get one, if someone decided they weren’t going to use AI in practice.
But you’d probably have to prove you weren’t using it and make it clear to a patient you weren’t using it going forward as a part of your yearly AHPRA registration.
That's an ironic flip of today's dynamic, where the onus is generally on a doctor to say to their patient, "I am using AI – are you OK with that?"
Might this mean we start seeing signs like this in some doctor’s receptions?
“This practice is both bulk billing and we do not use any AI, so you can feel safe [from Skynet] here”.
You’d feasibly have to take out the Skynet bit.
And any claims as to being safer than AI practices wouldn’t likely fly either – it might be deemed misleading advertising under certain circumstances, at least according to the Competition and Consumer Act 2010.
How times might be changing.
AI vs non-AI practices and patient dichotomy?
And what of the likely majority of doctors who, for productivity and patient experience reasons, decide to work with AI? Studies out of the US suggest that upwards of 85% of all doctors are now using it or testing the use of it.
How might they compete with the above quaintly framed piece of Orwellian schadenfreude?
"This practice is bulk billing and uses AI… errh, because it makes our doctors smarter and more efficient"?
Maybe not that.
What about:
“Do you use AI to help you on your health journey? Great. So do we. Come on in and our AI can talk to your AI so we can decide together on what we do next”.
The dynamic of AI vs non-AI practices, which feels entirely likely, may result in a new dichotomy of AI vs non-AI patients and doctors.
But that is probably the least of the worries of a government weighing up the pros and cons of introducing some sort of licensing regime.
The major hurdle would be convincing the peak bodies that such a regime is practical enough to introduce given the circumstances and, at the same time, that its introduction could be seamless enough not to perturb the collective emotional state of an already overstretched workforce – particularly some GP groups and hospital doctors.
Major independent CPD providers think it could work
Both major independent providers of CPD to GPs in Australia, HealthEd and Medcast, think that, depending on the details of what the government actually wants to get across and test, the idea is a sound one which could be implemented fairly seamlessly inside the current CPD framework.
"I think it's a good idea in principle," says Dr Ramesh Manocha, CEO of HealthEd, our largest independent GP CPD educator, noting he "can't see any difficulties" incorporating AI training into existing CPD programs.
But he does say the devil will inevitably be in the detail of exactly what the government is trying to achieve.
“What the curriculum actually is” and “what exactly the government wants to test” will really be the litmus test on whether it would be seamless or not, he says.
“The question is what the curriculum is. Do they just want to address skilling, risk or both,” he says.
“What does the government want to licence specifically?”
Medcast’s Dr Stephen Barnett agrees there is “a need for us to stay current in how to use AI”, and to get some sort of baseline in for all doctors using AI, describing it as “a new intervention…a new device in healthcare”.
He also thinks it can be done, but the challenge would be “how they [government] implement and how they do that thoughtfully”.
Both CPD leaders pointed to the compulsory CPR CPD module – under which every doctor has to complete two to three hours of supervised training every three years – as something recognisably similar to the medical AI licence idea.
Says Dr Manocha, “with CPR… it’s every three years… but for AI… I don’t think every three years is going to work. It’d have to be every year to start because the technology is moving so fast and things are changing so fast in government around their governance of it”.
The other recent example of something resembling an emergency "licensing regime" was the covid vaccine rollout, during which all doctors and health professionals involved were required to complete mandatory, specialised training before administering covid vaccines. The COVID-19 Vaccination Training Program (CVTP), managed by the then Department of Health and Aged Care, was compulsory for authorised providers and remained mandatory until October 1, 2023.
From any perspective, the impact of AI on how a doctor practises – particularly now that major vendors like Heidi are introducing a decision support layer integrated with their scribing and summaries functionality – points to the need for some sort of baseline education and training for every doctor, and fast.
Who would pay for it?
A new national CPD program that had to be completed by all doctors within one year of its introduction would not come without a significant new price tag.
But doctors would almost certainly not be asked to contribute, given the nature of the situation – the impost of yet another compliance burden on them.
If the government is really going to go this way then they are going to have to make the idea as easy to swallow as possible. Which means they will likely pay for any new infrastructure requirements such as online testing.
A doctor may still have to pay one of their CPD providers to attend their usual education day or webinar where the baseline education is carried out as part of the normal program, but that would be no more investment than they already have to make.
What would such a program most likely look like?
The government isn't trying to boil the ocean on AI safety, risk and governance at this point in time. All it wants (we think) is to make sure every doctor has done baseline education and maybe even a bit of supervised training.
That could easily be done by attending a couple of webinars or live lectures and sitting an online driving test sometime soon afterwards.
If you pass the test, you get the same sort of regulatory AHPRA tick you get for having passed the CPR course – although no one actually fails that course.
Should you fail the test, you could quite simply sit it over and over until you pass. Just like a driving test.
At that point the government could at least be assured a doctor has some sense of the basics, including one really important theme: if you are using decision support or generating patient care plans and the like using your AI, you must always personally review it and make the final decision yourself. That means the buck for any clinical decision-making errors will sit with the doctor, not the software.
At the moment AI decision support layers are still making some very basic mistakes, in some studies at alarmingly high rates.
A key message of AI licence training would likely be: you sign it off, and if you don't, and it's a screw-up, it's on you, not the AI vendor.
This is a message all the AI vendors have been keen to tell their users with a view to not being entangled in complex "software as a medical device" regulations.
In Australia there are still some who argue that vendors pushing decision support modules inside AI, like Heidi's Evidence layer, are crossing the line into "software as a medical device" territory. The feeling is that this part of the product needs to be captured by TGA regulation.
But this would surely be a productivity quagmire.
Doctors have used sophisticated information-based decision support tools such as UpToDate and ClinicalKey for years now without anyone asking for them to be regulated. All the new AI vendors are doing is incorporating similar knowledge bases into the context of the consult created by the AI summaries. And they're doing it at a price every doctor can afford.
But whether or not Heidi can use as a defence that all its software is provided on the strict condition that no decisions are autonomous on the part of its AI, doctors of all shapes and sizes are adopting the technology at such speed and in such numbers that the TGA could never hope to rein them all back in now, even if it wanted to.
The numbers are overwhelming. Trying to stop doctors using the new emerging AI evidence layers to help them make decisions in a consult would be like trying to make consumers not use ChatGPT, Claude or even Dr Google in helping them on their health journey.
The medical AI in consults ship has well and truly sailed, complete now with comprehensive decision support, and probably soon with other novel information features that your average doctor may not understand how to use without them or their patients being at some sort of new risk.
Medical AI driving school, to achieve the same baseline level of AI driving knowledge across the board, poor implementation issues notwithstanding, feels like a win-win for all healthcare system stakeholders.



