All that glitters is not necessarily good for us. The more glittery the ‘solution’, the more likely I am to be wary.
I was born in an era of scientific miracles: the trip to the moon, the invention of the cochlear implant, the beginnings of organ transplantation and many, many others.
At the time, science gave medicine its power. Community trust in doctors was underpinned by the miracles offered by scientific discovery. The white coats, the Latin terms and the various medical technologies all coded as “science-y” to the general population, and that invoked respect and trust.
I often think the iron lung was only possible because of the belief in the magic of science. Would people have tolerated this treatment if the community’s faith in science had been less complete?
Nursing uniforms, technological equipment and archaic language all signified something bigger and more complex than the average person could understand, certainly before the internet democratised knowledge. Belief in the power of medicine rested on faith that doctors and nurses had special scientific knowledge and understanding.
The importance of objectivity
In my view, however, the dominance of science was beginning to wane by the time I finished medical school at the end of the 20th century. Other ways of knowing and being were growing, and the patient’s voice was more prominent.
Nevertheless, numbers, graphs, flowcharts and other visual indicators of quantitative data still scream credibility, objectivity and TRUTH, and this brings me to faith in numbers.
In my lifetime, anything with numbers has been seen as more reliable, more valid and more real than anything with words. Quantitative methods imply the data is objective, with PROPER evidence. There may be some stories and anecdotes scattered through a report to keep the reader interested, and to make a nod to lived experience, but “truth” is based on science.
The problem is that the scientific method has been co-opted by people not well versed in it to boost their own credibility. In some cases, science has become a bit of an approving bumper sticker; pop in a nod to “evidence-based practice” and a reference, and you immediately establish yourself as a credible source.
This means the scientific method is sometimes compromised to drive a particular agenda.
Diluting science to make an argument
Health services research is prone to using data of dubious scientific quality. There is, of course, excellent health services research, but there is also research that is designed to push a certain model, point of view, product or service. Frankly, it is marketing disguised as science.
Those of us of a certain age remember the Pond’s Institute: a long-running series of ads in which gorgeous blondes in white coats promoted skin products. They claimed their products would “increase radiance by 92%” or “reduce the seven signs of ageing”. Sometimes they backed it with an anecdote from a validated expert, usually a doctor of dermatology. It was all nonsense, of course, dressed up as science.
Health services research can use similar strategies, so here’s how to recognise the tactics.
The research is designed to prove something
Good trials start with a null hypothesis.
Researchers try to disprove the hypothesis they hold dear, and there are good reasons for this. A null hypothesis forces researchers to search for an alternative point of view. It’s hard to do, but good researchers in clinical trials have to be prepared for the outcome where the placebo is better than the fancy new drug. And these outcomes need to be published.
In health services research, there are too many trials designed to prove something works. Often there is no attempt to measure the alternative at all.
If you see a trial where a Likert scale on patient satisfaction asks you to indicate whether your health experience is “somewhat awesome, highly awesome or extremely awesome”, there is a problem. That sort of design is, unfortunately, not uncommon.
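For contrast, here is what an honest test looks like in miniature. This is a sketch in Python with invented counts, not any particular trial: a two-sided test that is just as capable of finding that the placebo won as that the drug did.

```python
# A minimal sketch of a two-sided null-hypothesis test, with invented
# counts for a hypothetical two-arm trial and a binary "improved" outcome.
from scipy.stats import fisher_exact

improved_drug, n_drug = 18, 40        # hypothetical treatment arm
improved_placebo, n_placebo = 14, 40  # hypothetical placebo arm

table = [
    [improved_drug, n_drug - improved_drug],
    [improved_placebo, n_placebo - improved_placebo],
]

# alternative="two-sided" is the point: the test is allowed to find
# that the placebo did better, not just that the drug did.
odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.3f}")
```

If the p-value stays high, the honest conclusion is that the trial found no difference – and that result needs publishing too.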
The funders, researchers and evaluators benefit from a positive outcome
Conflict of interest exists, and it affects the design, outcomes and publication of trials.
We all know about pharmaceutical companies and vested interests, but business owners, funders, entrepreneurs and people seeking greater clinical and organisational power are conflicted too. If the whole team, including the steering group, are desperate for a positive outcome, it is not surprising that they get one. Whether it is conscious or not is irrelevant.
It is worse if the research leaders are also on the team that designs national policies, particularly when they drive a “solution” based on the trials they have conducted. It’s the reason the world got into trouble with OxyContin: the pharmaceutical companies designed, ran, published and profited from the trials. Health services can do the same.
The research uses numbers to represent concepts you can’t count
I loathe the use of clinical outcome tools in mental health research. I understand why they need to be used, but they are deeply flawed. Why?
Let’s take a Likert scale item measuring “hopelessness”. In the K10, people are asked “About how often did you feel hopeless?” and can tick responses from “none of the time” to “all of the time”.
If I score a 4 and my friend scores a 2, do I have twice the hopelessness that she does? What does that even mean? Are we talking about the same thing? If ticking the right boxes is what gets me into a service, will I tick the box I think will help me get in? Are there cultural reasons why I will or won’t identify “hopelessness” as part of my symptoms? These scales may be reliable, but I do wonder if they’re valid.
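The arithmetic problem is easy to demonstrate. The response options are ordered, but the numbers attached to them are a choice, and any order-preserving choice is equally defensible. A sketch, using made-up codings rather than the official K10 scoring:

```python
# A small illustration of why ordinal scores don't support ratio claims.
# Any order-preserving relabelling of the response options is an equally
# "correct" numeric coding; both codings here are invented.
coding_a = {"none": 1, "a little": 2, "some": 3, "most": 4, "all": 5}
coding_b = {"none": 1, "a little": 2, "some": 4, "most": 9, "all": 25}

me, friend = "most", "a little"

for name, coding in [("coding A", coding_a), ("coding B", coding_b)]:
    ratio = coding[me] / coding[friend]
    print(f"{name}: my score is {ratio:.1f}x my friend's")
# coding A: my score is 2.0x my friend's
# coding B: my score is 4.5x my friend's
```

The ordering of responses survives any recoding; the “twice as hopeless” claim does not.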
There aren’t enough people in the “trial” to prove anything
I read a trial recently that suggested rolling out an intervention to all older Australians. The method? Nineteen people who did NOT have the condition in question and who filled out at least some of the survey. More than half suggested the intervention MIGHT be helpful.
That is guesswork, not data.
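To put a number on the guesswork: suppose (my assumption, for illustration) that 10 of the 19 respondents said the intervention might help. A 95% confidence interval on that proportion, computed with the standard Wilson score formula, covers an enormous range:

```python
# A rough sense of how little 19 respondents can tell us, assuming
# (hypothetically) that 10 of 19 said the intervention might help.
# Wilson score interval for a proportion, 95% confidence.
from math import sqrt

count, n, z = 10, 19, 1.96
p = count / n

denom = 1 + z**2 / n
centre = (p + z**2 / (2 * n)) / denom
halfwidth = z * sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom

print(f"{p:.0%} observed, 95% CI {centre - halfwidth:.0%} to {centre + halfwidth:.0%}")
# 53% observed, 95% CI 32% to 73%
```

A result consistent with anything from roughly a third to nearly three quarters of respondents cannot support a conclusion about all older Australians.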
The conclusion doesn’t match the results
Sometimes the conclusion statement is a bit, well, overstated.
Like the study above, which suggested the intervention would change outcomes for all older Australians. On the basis of a study of 19 people. Without the disease. Who finished part of the survey.
It’s impossible to see a bad outcome
If we don’t ask about negative outcomes, the reader thinks there aren’t any. Especially if the researchers are invested in positive results. Approaching these papers with that in mind is … prudent.
The data is old
The choice of papers is important. In the National Digital Health Strategy (2021), the writers cite a paper from 2010 – the only systematic review in the document.
You have to ask why you would use a 2010 paper when there are so many more current research studies available.
Critical thinking: more necessary than ever
As critical thinkers, it is important that we call out poor research when we see it.
Doctors review papers for publication, sit on committees that consider them and advocate for better evidence. Our roles are especially important when there are vested interests involved, particularly those funded by governments with an obvious desire to prove a particular point of view. The more experts contest the outcomes of poor science, the safer policies can be.
The theology of numbers
The problem is that people have faith in numbers, which means data looks more real, more solid and more persuasive than it should. We need to keep this in mind when we interact with entrepreneurs who try to persuade us that their Big Data and innovative technology are demonstrably awesome.
They may be, but we shouldn’t be taking their claims on faith and we shouldn’t be swayed by the technological glitter.
It’s hard not to be impressed when there are shiny systems with impressive-looking sites and deep investment by reputable companies, but we need to be careful. A lot of these systems will be helpful, but so was OxyContin.
This careful scrutiny is where we belong as doctors.
New interventions can be impressive steps forward, or expensive and dangerous dead ends. It is easy to be cast as cynical Luddites who oppose change. Maybe this is where I am, but I doubt it.
I have lived in an era of scientific miracles, and I have also seen extraordinary scientific harm. The more glittery the “solution”, the more likely I am to be wary. And maybe that’s not a bad thing.
Associate Professor Louise Stone is a working GP who researches the social foundations of medicine in the ANU Medical School. She tweets @GPswampwarrior.