Psychologists warn that people are increasingly turning to chatbots for friendship, advice and even love.
The Australian Clinical Psychology Association is urging GPs to be aware of the potential negative impacts of AI-powered chatbots on patient wellbeing, as use of the tools skyrockets.
AI therapy chatbots, for instance, have proliferated over recent years. A Google search for “therapy chatbot” returns advertisements for at least six different AI services but just three articles giving general information about the safety and risks of using such a service.
ACPA president Dr Erika Penney, who also researches generative AI at the University of Technology Sydney, told The Medical Republic that very few of the AI therapy tools available on the web were TGA-regulated or Australian Privacy Principles-compliant.
“I personally don’t actually ever prescribe chatbots to clients because of that fact,” she said.
“However, that might change if we start to see more tools that have that compliance.
“Sometimes clients will tell me that they might use tools like Wysa, which does have healthcare compliance with the NHS in the UK, and I feel a lot less worried [because] … I know there are a lot of RCTs and research looking at its impact on depression, anxiety or chronic pain, than I do if someone says they’re using ChatGPT, because these open source, non-purpose-built for healthcare tools do not maintain confidentiality and privacy.”
Dr Penney urged GPs to consider screening some patients – particularly those who are younger or more tech-savvy – for AI misuse.
She suggested asking general questions about how often they use AI, whether they feel better or worse after using AI tools and whether they have a sense of dependence on AI.
“I would be looking for things like … the sense that there’s more secrecy about AI use, … is there a lot of emotional dependence that you’re hearing about in the way they interact with a chatbot? Is it giving harmful or quite romanticised advice?” Dr Penney said.
“There’s a lot of examples of people being in quite unhealthy romantic relationships with AI, and we do see a worsening of mental health symptoms or a withdrawal from real world supports.”
Online forums like r/MyBoyfriendIsAI have tens of thousands of users who share updates on their ‘relationships’ with chatbots. One popular post features an AI companion’s marriage proposal, while another post states that the user is no longer considering dating people in the real world.
“It is important for GPs to be aware of this emerging term of ‘AI psychosis’, where people have these prolonged or intense interactions with AI large language models, and it ends up developing or exacerbating delusional thinking, paranoia or manic episodes,” Dr Penney said.
If a doctor suspects that a patient may have an unhealthy reliance on AI, she said it was important to avoid shaming them.
“As clinicians, we can get a bit judgmental sometimes about technology use,” Dr Penney said.
“I think the first thing is to normalise that … a lot of people find talking to AI bots helpful, and there’s nothing wrong with wanting to connect or reach out for help.
“I want to destigmatise that; people are often on these tools because they want help or they want connection, and those are valid things to want.
“But in that normalisation process, I would then talk to someone about how I feel some concerns about whether [their AI use] is helping them move towards their values and goals, or whether it’s taking them away from those.”