AI cybercrime has arrived, and health is most at risk

Two reports highlight the risks AI poses to data privacy and security. In healthcare, the proportion of workers using organisation-approved AI has jumped from 18% to 67% in the past year, outpacing the cross-industry average.


Artificial intelligence data leaks, particularly from healthcare organisations, are a growing cybersecurity threat, two new industry reports have found.

As hospitals, clinics and health services adopt generative AI tools to help with documentation and administration, investigators are seeing a pattern in which sensitive information is inadvertently exposed through prompts and uploaded documents.

This article originally ran on TMR’s sister site, Health Services Daily. TMR readers can sign up for a discounted subscription.

The trend raises questions about whether the rapid uptake of AI in healthcare is outpacing governance, monitoring, and data-loss prevention safeguards.

Two reports, two perspectives

The findings come from Netskope’s Threat Labs Report: Healthcare 2026 and the CyberCX Threat Report.

CyberCX analysed data from more than a hundred serious cyber incidents it responded to in Australia and New Zealand during 2025. For the first time in several years, the firm found financial and insurance services overtook healthcare as the most impacted sector.

The Threat Labs report provided a more global perspective. It analysed anonymised usage data from a subset of Netskope healthcare customers worldwide, collected between December 2024 and December 2025 with prior authorisation.

Despite the different datasets, both reports point to the same emerging risk: sensitive information increasingly leaving healthcare environments through everyday digital tools.

AI adoption creating new pathway for data leaks

While not yet the most damaging cyber threat facing healthcare, both reports suggest AI technology is reshaping the risk landscape.

CyberCX warned that attackers are already beginning to use generative AI to accelerate cybercrime.

“For the first time, CyberCX saw threat actors using generative AI to create custom, bespoke commands and malware to reduce the time between initial access to an organisation and achieving their malicious objectives,” the report said.

Investigators said the more immediate risk is internal, however. Both reports highlighted a growing number of “data spill” incidents where employees upload sensitive material to public AI tools.

“In many of these instances, the organisation did not have enterprise licensing for the AI platform, data loss prevention (DLP) controls in place, or adequate network logging, often making it impossible to identify or quantify the data spillage,” the CyberCX report said. 

Healthcare workers are often turning to AI tools to save time in documentation-heavy roles. Clinicians and administrators may use AI to summarise notes, draft letters or prepare reports. But doing so can involve copying internal information into external systems.

That behaviour becomes particularly risky when the data involves identifiable patient information.

Healthcare data is far more likely to be exposed

The Threat Labs report found healthcare organisations are far more likely than other industries to accidentally expose regulated information when using AI tools.

“Analysis of data policy violations in the healthcare sector shows that regulated data overwhelmingly dominates exposures, accounting for 89% of incidents, a figure significantly higher than the global average of 31%,” the report said.

The risk stems partly from how AI tools are used.

“Whether a user asks an AI system to summarise documents, datasets, or code, or relies on it to generate text, media, or software snippets, the workflow almost always requires uploading internal data to an external service or otherwise connecting your internal data stores to an external AI app. That requirement alone creates substantial exposure risk,” it said.

Part of the issue is the proliferation of personal genAI accounts being used at work.

The report found 43% of healthcare workers are using personal genAI accounts, making it difficult for security teams to monitor for data leaks.

To address this, healthcare organisations are increasingly pushing staff to use company-approved genAI. In the past year, the proportion of workers using organisation-approved AI has jumped from 18% to 67%, outpacing cross-industry averages (26% to 62%).

But experts warn that simply adopting enterprise AI tools is not enough.

“It is vital to ensure that secure adoption of AI platforms is followed by enforcing strict data-handling controls, continuous monitoring, and implementing clear data governance frameworks,” the CyberCX report said.

Cloud storage and personal apps another weak point

Another key area of risk is personal cloud applications. Workers might inadvertently upload sensitive information or regulated data to accounts such as Google Drive, Gmail, or Microsoft OneDrive.

Employees rely on these accounts for convenience, collaboration, or access to AI tools, but “this behaviour introduces significant challenges for organisations trying to protect sensitive information,” the Threat Labs report noted.

Attackers may exploit the trust employees place in these applications to distribute malicious files.

“In healthcare, Azure Static Web Apps, GitHub, and Microsoft OneDrive were the platforms most frequently exploited by attackers for malware distribution, with 8.2%, 8%, and 6.3% of organisations detecting employees attempting to download malware from each app, respectively,” the report said.

“Strengthening DLP coverage, improving employee education, and rigorously enforcing clear data-handling policies to contain the growing threat of both accidental and malicious data exposure can be an effective strategy for reducing risk in both areas,” the Threat Labs report recommended.

Email scams remain the most common healthcare attack

While AI-related data leaks are gaining attention, the CyberCX report found the most common cyber incident affecting healthcare organisations remains business email compromise (BEC).

These attacks occur when criminals gain access to an organisation’s email system. CyberCX said small to medium healthcare organisations are particularly vulnerable.

“CyberCX believes this reflects the large number of small and medium-sized entities, such as doctors’ practices and clinics, which tend to have a lower maturity security posture and limited guidance on best practices, which are often outsourced entirely to third-party managed services providers.”

Once attackers gain access, they may send fraudulent invoices, redirect payments, or continue spreading phishing emails across the organisation.

Technology adoption is moving faster than security policy

Taken together, the reports suggest the healthcare sector is entering a new phase of cyber risk driven by rapidly evolving digital tools. This shift does not replace existing threats but layers new and evolving risks on top of them.

The Threat Labs investigators said strengthening oversight, data loss prevention (DLP) and AI-aware security will become even more important over the coming year.

They recommended:

  • Inspecting all HTTP and HTTPS downloads, including all web and cloud traffic, to prevent malware from infiltrating your network;
  • Blocking access to apps that do not serve any legitimate business purpose;
  • Using DLP policies to detect potentially sensitive information;
  • Using Remote Browser Isolation (RBI) technology.

Ray Canzanese, director of Netskope Threat Labs, said companies that operate without security guardrails governing cloud and AI usage are likely to suffer leaks of regulated patient and clinical data, and potentially face heavy regulatory penalties.

“Deploying company-approved applications that meet employees’ demands for convenience and productivity, along with relevant security tools that offer full visibility and control over usage and data movements, should be a high priority for healthcare organisations to strike a balance between modernisation and security,” he said.

Download the Threat Labs Report: Healthcare 2026

Download the CyberCX Threat Report.
