What does that mean for health information providers?
ChatGPT is starting to roll out its Health feature to a limited number of early users in Australia, with its sights set on integrating with providers and booking systems in the future.
OpenAI said access to ChatGPT Health would be expanded to users currently on a waitlist within the next few weeks, with broader access to roll out in February.
Crucially, OpenAI is hoping that users will happily upload their medical data and connect their wellness or fitness apps to ChatGPT Health, which is “designed to help you understand and navigate health and wellness information more effectively – by optionally connecting your own health data”.
“To keep your health information protected and secure, Health operates as a separate space with enhanced privacy to protect sensitive data. Conversations in Health are not used to train our foundation models,” OpenAI said.
Just a few days after OpenAI’s announcement, Anthropic unveiled Claude for Healthcare, which enables US subscribers to upload their health records to get more personalised responses.
“When connected, Claude can summarise users’ medical history, explain test results in plain language, detect patterns across fitness and health metrics, and prepare questions for appointments,” Anthropic said in a statement.
The rollouts come as Google removed some AI health summaries following a Guardian investigation that found those AI overviews were putting people at risk of harm by presenting false and misleading information.
In one case, Google’s AI Overviews gave inaccurate information about the “normal” range in liver function tests and did not account for an individual patient’s demographics, which experts said was dangerous and alarming.
Dr Ben Hurst, CEO of booking engine HotDoc, said the launch of ChatGPT Health was a significant improvement on searching online to get medical information.
ChatGPT Health was probably more trustworthy and reliable than “Dr Google”, he said.
“But the obvious challenge is that patients don’t always have their complete context. They can omit important information. It’s an improvement on Dr Google, but it’s not as good as your regular GP,” he said.
For people in countries with limited access to medical services, ChatGPT Health had opened up “a world of possibility” where people could get access to good information.
“In Australia, where people do have access to a good GP, I think it has to be seen with some level of scrutiny and that it’s not there to replace what is already a good thing, which is access to really good primary care services across the country in the large part,” Dr Hurst said.
HotDoc was “an enabler between your existing provider and a patient”, he said.
“So I don’t think that in the current state of ChatGPT, it’s something that’s going to disrupt our model,” he told TMR.
“Over time patients might use ChatGPT as a navigational tool and so services like HotDoc, Healthengine, Healthdirect, which offer methods of connecting with a service provider, we probably need to think about agentic SEO, so that the patient is able to connect with the right doctor when their entry point is through a ChatGPT-like interface. That’s something we’re definitely thinking about but it’s still very early days.
“By far, the vast majority of patients when they are searching, they either come direct to HotDoc or they do a search through Google.”
Dr Hurst said integration would become more of a consideration for HotDoc if consumers started using AI agents to not only help navigate, but to make the end-to-end booking.
“It’s one thing to book a hotel, it’s one thing to book a restaurant, but there are … more challenges around integration, around privacy when it comes to handling healthcare bookings and any exchange of any healthcare information,” he said.
“So I think it’s something that will happen in the future, but I think it’s years, not months.”
Healthdirect Australia CEO Bettina McMahon said the launch of ChatGPT Health was “a really positive thing and very exciting,” but it needed to be done in a way that retained trust.
“It would be really frustrating if this was done in a reckless way, and it caused a whole lot of distrust for the Australian population, because that could put us back. And I see this as such a valuable tool,” she told TMR.
“Large language models assisting people as personal health assistants … are a great opportunity. I would hate to see it squandered early by a reckless implementation.
“I see this is a really exciting development, and it’s one we were expecting, and it’s one at Healthdirect and at board level we’ve been discussing and preparing for.
“Our concern is making sure that these agents pick up quality information.
“I would love to have more formal partnerships with some of them, whether it’s ChatGPT Health, Gemini or any of those models that are getting health questions, so that there’s a hand-off into our digital symptom checker or 1800 Medicare where they can speak to a registered nurse and get that human involvement if they need to.
“If that’s simplified and made easier for Australians, then I think it could avoid a whole lot of the clinical safety risks that people are concerned about.”
Ms McMahon said Healthdirect had been anticipating that fewer people would be accessing the Healthdirect website directly through search engines.
“We’ve been thinking about how we achieve our mission of people making healthy choices in their lives when they’re finding information and consuming it from a different source.”
The launch of 1800MEDICARE this year was part of that, she said, and Healthdirect was now optimising content for agentic AI use.
“We want our content, which is clinically and quality assured, to be feeding their AI agents, rather than commercially driven or non-evidence-based information.
“Our mission isn’t to drive people to our website. Our mission is to make sure Australians make better informed decisions about their healthcare and to interact with the health system when they need to.
“We’ve been thinking over the last six months about what we foresee as a massive transition as people move away from routine web search into agentic conversations using their AI tool of choice, and that AI tool will then be managing the search engine interface.”
According to a report by OpenAI, 5% of all ChatGPT messages globally relate to healthcare. “That equates to 40 million people every single day around the world asking ChatGPT about their health,” Ms McMahon said.
And AI overviews have led to a dramatic drop in click-through rates, with a report from Authoritas showing that sites previously ranking first in Google results could lose up to 79% of their organic traffic if their link appeared below an AI overview.
Healthdirect still gets 75% of its traffic through search engines, and less than 1% of its traffic comes through AI conversational agents, Ms McMahon said.
But that number grew 80% last year, she said. “When you’re on an exponential curve like that, even though it’s very small absolute numbers, we expect that to go up.”
“If that continues at the rate of 80% a year growth or more, then we want to make sure that ChatGPT Health and others are using and reaching into Healthdirect information as a priority, and so we’ve been optimising our content so they can do that easily.”
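The arithmetic behind that expectation is simple compounding. As a rough illustration only (the sub-1% starting share and a constant 80% annual growth rate are simplifying assumptions drawn from the figures Ms McMahon quoted, not a forecast), a small traffic share grows quickly on an exponential curve:

```python
# Illustrative sketch: how a small AI-agent traffic share compounds
# at 80% growth per year. Starting share and constant rate are
# assumptions for the example, not Healthdirect projections.

def project_share(start_share: float, annual_growth: float, years: int) -> list[float]:
    """Return the projected traffic share for year 0 through `years`,
    compounding once per year."""
    shares = [start_share]
    for _ in range(years):
        shares.append(shares[-1] * (1 + annual_growth))
    return shares

shares = project_share(start_share=0.01, annual_growth=0.80, years=5)
for year, share in enumerate(shares):
    print(f"year {year}: {share:.1%}")
```

On these assumptions, a 1% share passes 10% within five years, which is why small absolute numbers on an exponential curve still prompt content optimisation now.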
Ms McMahon said ChatGPT was really good at the initial data intake, ingestion and analysis, and at feeding information back to a consumer, but less good at understanding the pathways of care in Australia.
“For Healthdirect, we’d rather them go to the right place first time, rather than going to an inappropriate provider and then having to pay a gap fee and then come back out and end up in a general practice anyway.
“I think the bigger questions are, what are the new models of care that can cope with finding people with early warning signs? Can the health system cope with it?
“And how do we, rather than just tipping them into the same old models of care, do this at scale in a way that the country can afford and the health system can cope with?”
ChatGPT Health is not available in the EU, UK or Switzerland due to “complex privacy and regulatory environments” in those regions but is being rolled out in Australia and most of Asia, Africa, the Americas, and the Middle East.
When TMR asked ChatGPT which international and Australian databases it was using to source information, it said:
“ChatGPT Health’s ability to answer general health questions (like “what is cholesterol?” or “how does blood pressure affect risk?”) is based on the same large language model used by ChatGPT generally – trained on a broad mixture of licensed data, data created by human trainers, and publicly available information, including medical literature, textbooks, and health-related resources.
“This training lets it generate coherent and informative responses, but it doesn’t access real-time clinical databases or directly pull from specific journals or proprietary medical systems at the moment of your query.”
For instance, it doesn’t “live query” Mayo Clinic, WebMD, or other specific sites each time a question is asked, and it doesn’t have live access to these websites or their proprietary databases.
But if users explicitly opt in by connecting personal health data such as medical records, Apple Health data and wellness and fitness apps, “ChatGPT Health can ground its responses in your own personal health information”.
ChatGPT Health won’t directly pull information from private Australian medical databases, websites or guidelines – such as myDoctor, Victoria’s Better Health Channel or clinical and immunisation guidelines, it said.
And OpenAI has not announced direct integration with national or government-run health record systems outside the US, ChatGPT said.
“In Australia, the government operated My Health Record system is not currently directly accessible by ChatGPT Health. There’s no official integration at this stage; ChatGPT can only use data you voluntarily upload or connect via apps,” it said.
ChatGPT says it doesn’t have live access to clinic booking systems or real-time availability, it can’t make appointments or guarantee who is taking new patients, and it doesn’t replace referral pathways.
But it said ChatGPT Health could help patients find a local doctor or specialist by using their context – medical condition, location and preferences – to find the right type of clinician, and “help you prepare for the visit – what to bring, what to ask, and how to describe your symptoms clearly”.
Based on how ChatGPT Health was being built, it was confident of a “realistic path” in which it would link to Australian providers and booking systems in the future.
“What is very likely in the future” was a second phase of deep linking to Healthdirect search results and HotDoc/HealthEngine provider pages, it said.
That would be followed by a third phase of full integration into real-time booking and telehealth – if regulations, clinical governance and partnerships allow it, ChatGPT said.
“No company has publicly committed to this yet – but it’s clearly where the industry is moving,” it said.
As for the privacy trade-offs around connecting your health data, ChatGPT said users would gain more personalised explanations of test results, better trend tracking, help preparing for doctor visits and less generic advice.
But “you’re trading some privacy for convenience and insight”, it said. Risks lie in potential data exposure, in the fact that OpenAI is not a healthcare provider, that personalised answers can feel more authoritative than they should, and that ChatGPT Health does not have the same legal protections as those covering doctors or hospitals.
In response to the launch of ChatGPT Health, Professor Chris Trudeau – a US expert on how people read, interpret, and act on legal, governmental and medical information – said health literacy now required AI literacy.
“The problem is that when AI starts explaining symptoms, test results, and treatment options, a person’s health literacy alone is no longer enough,” Professor Trudeau, a professor at the University of Detroit Mercy School of Law, wrote on LinkedIn.
“The risk is not misinformation. It is confident information delivered without context, limits, or accountability.”
Disclaimers about limitations are all very well, but they don’t erase the reality that users will lean on ChatGPT Health, he said.
Professor Trudeau said answers grounded in personal records would feel safer and more authoritative, and there was a real risk that users would place too much trust in OpenAI.
“The risk is not simply that the model can be wrong. The risk is that it can be wrong while sounding calm, confident, and complete. That combination is persuasive, especially for people who already feel overwhelmed.”
Professor Trudeau said if ChatGPT Health was going to be a “net positive”, it needed to clearly distinguish between what it is quoting directly from the medical record, what it is inferring from patterns or context, and what it does not know.
It should also ask a patient to explain information back in their own words in order to test whether the explanation worked.