Healthcare workers put security in virtual bin

Unworkable IT systems have inspired doctors and nurses to come up with creative hacks just to get through their day.


Cybersecurity, as this organ regularly reminds readers, is important in healthcare.

It’s certainly more important than doctors and nurses doing their jobs – at least if you ask the people who design permissions, authentications, deauthentications and other cyber protections for hospitals.

Every GP will have a favourite practice management software bug, and be familiar with the notorious PRODA, which, per a recent RACGP submission begging for a more streamlined PBS authority system, “lacks basic functionality”. This is a polite way to describe a system that logs you out after five minutes of inactivity, requires dual authentication at every log-in, fails to save patient details and cannot handle drugs with multiple authority access codes.

But GPs may be getting off relatively lightly, according to a draft paper brought to our attention by the tech blogger Fred Herbert, which finds security protections in hospitals are so impractically stringent that flagrant workarounds and violations are the norm.

The paper is titled “Workarounds to Computer Access in Healthcare Organizations: You Want My Password or a Dead Patient?” (snappy but also sloppy – the choice is password or a live patient).

It starts by saying cybersecurity efforts in healthcare are often defeated and evaded, not by dastardly black-hat hackers but by “clinicians trying to use the computer system for conventional healthcare activities”.

Without cyber defence tools, hospitals are vulnerable to attack. “Unfortunately, all too often, with these tools, clinicians cannot do their job – and the medical mission trumps the security mission.”

The team interviewed healthcare and IT workers and bosses and shadowed clinicians at work to come up with an entertaining-in-an-infuriating-way list of problems and their ad-hoc solutions. Here are a few examples of both:

Permissions – systems assume clinicians only need access to one speciality when many work across multiple fields; juniors change responsibilities like socks, and have to get their access reconfigured every time, losing access to previous patients’ data.

Password authentication – users write down and share passwords. “Sticky notes form sticky stalagmites on medical devices and in medication preparation rooms. We’ve observed entire hospital units share a password to a medical device, where the password is taped onto the device. We found emergency room supply rooms with locked doors where the lock code was written on the door – no one wanted to prevent a clinician from obtaining emergency supplies because they didn’t remember the code.”

Routine password expiry – doctors who do monthly hospital rounds spend half their visiting time with the help desk just to be allowed in. Every. Month.

Deauthentication – one clinic’s dictation system has a five-minute timeout and a one-minute login process, so that in a 14-hour day an estimated 1.5 hours is spent just logging in; clinicians defeat proximity-sensor timeouts by putting Styrofoam cups over the detectors; a junior goes around pressing the space bar on every keyboard to prevent timeouts.

Doctors and nurses keep “shadow notes” alongside the official records – and it’s the shadow notes they actually use.

This one isn’t even cybersecurity-related but was too good to leave out: “In one EHR, patients meeting protocols for blood thinners prophylaxis force clinicians to order blood thinners before they can end their computer session – even if the patient is already on blood thinners. Clinicians must carry out a dangerous workaround of ordering a second dose (lethal if the patient actually receives it), quit the system, then re-log-in to cancel the second dose”.

There very much is a need to protect patient data. But is it too much to ask that developers and their bosses find out what doctors’ and nurses’ jobs entail, minute to minute, and feed that information into their designs?

Yes, it is, according to comments from some of Herbert’s readers, who have worked on the other side:

  • “… every decision involved someone saying ‘well, how can we know how the end users would use this’, and me gesturing emphatically out the conference room window at the hospital across the street. I wanted to walk over and ask people for help, but they hated that idea, in favour of working from even-then-outdated advice like the password expiration nonsense mentioned early here …”
  • “at every company i have worked with i have made a fuss about talking to real users, and trying to push product and engineers to talk to the real end users – NOT just the upper management or financial people that placed the order. they are always hugely resistant, it blows my mind”
  • “at my job, I’m specifically and explicitly kept from learning how internal teams, much less end users, use my software. I only learn of our mutually wrong assumptions from tech support tickets, and only when they escalate to my level”
  • “Our most recent IT system got installed after one hour of ‘training’ for everyone, followed by almost a full year of taking no feedback whatsoever from anyone actually using it”

The authors end their paper on a curiously cheerful note: “[I]n the inevitable conflict between even well-intended people vs. the machines and the machine rule makers, it’s the people who are more creative and motivated.”

Yay for human creativity and motivation under frustrating conditions, and a big shrug for data security, we guess.

Send all your passwords and story tips to penny@medicalrepublic.com.au.
