How the pressure of publish or perish affects us all



The hyper-competitive research space and drive to be published may be leading to intellectual malnutrition

When the internationally renowned Baker IDI Heart and Diabetes Institute revealed that one of their top researchers had faked data and retracted two papers in September, it made world news.

The lead author admitted to the fraud after a separate sub-analysis uncovered anomalies in the research, which had suggested a popular blood pressure drug could help patients increase their physical fitness.

The paper and a sub-analysis were retracted in JAMA and Circulation Research.

This comes off the back of another high-profile event in the world of retractions. In the first three months of this year, a number of major journals had to retract around 170 academic articles for peer-review fraud.

So what’s happening? Are researchers just losing their way? Or are we just more aware of retractions when they happen?

Websites like Retraction Watch have brought an exposure and scrutiny to retractions that has never been seen before.

Now, the interested public gets updates several times a day as new corrections and retractions are announced. A far cry from back when retractions and corrections took place largely behind the closed doors of universities and research institutes.

However, it’s important to note that not all corrections to scientific papers are retractions, and not all retractions are due to fraud or misconduct, Professor Virginia Barbour, chair of the Committee on Publication Ethics (COPE), says.

With a spotlight on retractions, the public may think that researchers are doing terrible things and that we need to be very careful about what is in the published literature, Barbour says.

But there is a driving factor that goes beyond simply the technological ability to spot errors in need of correction, or a few bad apples acting in unethical isolation.

“Actually underlying this [is a] much bigger thing, which is this pressure to publish,” says Barbour.

Researchers are many, and competition for funding and employment is fierce in academic institutions. So how does an employer or funding body judge the achievements of a researcher?

One simple metric is the number of research papers to their name.

“We know that researchers experience this pressure to publish and this does actually lead them to cut corners [which] can lead to papers that may not be reproducible again down the line,” she says.

“It’s driven essentially by an obsession for how institutions are ranked,” notes Barbour. “They are very dependent on the ranking of journals because funding flows to institutions in terms of how they rank.”

“If you only reward people for one thing, that’s what they’ll work toward.”

And work toward it they have.

Estimates from PubMed and other databases put the number of articles published each year at around 1.4 to 2 million.

About half of them are never cited again, says Dr Stephen Leeder, emeritus professor of public health and community medicine at the University of Sydney.

The former MJA editor-in-chief says there is a real concern that the quality of publications “is seriously outstripped by the quantity at the moment, and it would be better to have fewer high quality publications than more low quality publications”.



In 2013, the British physicist Peter Higgs raised his own concerns about the competitive demands on researchers in an interview with the Guardian.

The man who gave his name to the Higgs boson said that he probably wouldn’t have a job in today’s environment because he would not be considered “productive” enough, having published fewer than ten articles in his life.

Leeder agrees, and says he is concerned that such a myopic take on good researchers may become a barrier to creating research “that is innovative, that describes genuine new knowledge, that advances society’s understanding of itself and the world.”

“There’s a dilution of good quality research by this pressure to generate large numbers of publications,” he says.

Even doctors who aren’t active in research nevertheless depend on research for the new knowledge that drives new therapies and new diagnostic tests, Professor Leeder said.

“So you can’t be indifferent to this any more than somebody in the community should be indifferent to the quality of the food they eat.”

“This is the equivalent of junk food, which is junk publications,” he said. “And those sort of things just are intellectual malnutrition.”

Unfortunately for an individual researcher, publications are necessary for career development, especially promotion, he says.

He has had conversations with young research workers in public health who say that they would like to publish fewer papers and of higher quality, ‘but the pressures are such that every time I turn around I have to produce a publication’.

“I think what it does do is make research workers sensitive to pressures that aren’t necessarily those that generate the best new knowledge for society.”

“And the institution needs you to publish in high impact journals so that it can claim credit for that when it comes to determining funding for research and funding for academic positions,” Professor Leeder said.

“So on both counts there’s a lot of pressure to publish.”



Proof of alien life, evidence that vaccines cause autism, spurious research discounting climate change.

These curious pieces of research first exposed Associate Professor Michael Brown, from the Monash University School of Physics and Astronomy, to the world of predatory publishing.

Brown started noticing that people would sometimes quote sources and papers that looked legitimate in online debates, “but when you dug under the surface, the quality of papers was very low and very often you’d find they’re in these very obscure journals”.

A bit more digging revealed that these publishers were taking part in what’s called predatory publishing, even a form of vanity press.

Where traditional publishing companies largely make money from individual and institutional subscriptions, the open access model has shifted the burden to the researcher to pay fees to have their work published.

And rather than relying on a printing press to get a hard copy of a journal out to readers, research is now primarily accessed online.

This means that the cost of production has collapsed, Professor Brown explains, “so if you can produce a suitably formatted PDF then you can run [a publishing company] from home on a relatively modest computer”.

“There’s effectively no quality control of these publications, so they’re published with typos, if it’s scientific there’s no robust statistics [and] they often misrepresent literature,” he says.

There are certainly some researchers whose CVs are littered with suspect journals, whose motives Professor Brown questions.

But others are more legitimately duped, he says.

Names and faces of esteemed researchers are often used to fill editorial boards, without permission, to lend the journal an air of credibility.

And some journals actively spam researchers to get them to publish in their journals, often not being completely up front about the fees involved, Brown explains.

“They’re seeking to extract as much money as possible,” he said. “In some cases [charging] thousands of dollars.

“So often when I get a spam email it’s offering [me] to be guest editor of a special issue, which is for a significant journal and is a big deal,” Brown says. “Things like that are playing a little bit to a researcher’s desire to have certain achievements.”

Associate Professor Jeffrey Beall, an academic librarian at the University of Colorado in Denver, is renowned for launching a website dedicated to calling out predatory publishers and journals.

“Many of these publishers are corrupt and exist only to make money off the author processing charges that are billed to authors upon acceptance of their scientific manuscripts,” he explains in his blog, Scholarly Open Access.

And business is booming.

The number of predatory publishers has skyrocketed from 18 to 693 between 2011 and 2015, according to Professor Beall’s figures.

Another recent analysis published in medical journal BMC Medicine found that the number of articles published in predatory journals jumped from 53,000 in 2010 to around 420,000 in 2014.

This means that between one fifth and one third of all published papers now potentially appear in these journals.

While the report shows a majority of these articles are published in Africa and Asia, at an average cost of US$178, the problem isn’t limited to developing countries.

One of the largest companies accused of predatory publishing, OMICS Group, was recently outed for having half of its 700 peer-reviewed journals defunct, “and the rest suffering a credibility problem”, Radio National’s Background Briefing reported.

Hundreds of Australian academics are associated with the group, whether knowingly or not, and OMICS is organising a series of medical conferences in Australia for 2016.

“Just like being aware of Nigerian prince scams that arrive via email, the only way of fully defending against these is by being aware of these predatory publishers and how they operate and what they do,” says Professor Brown.



“I think the time has come to find another way to publish science,” BMJ’s former editor of 25 years, Professor Richard Smith, told The Medical Republic.

He doesn’t think the endless publishing of small articles is translating into good health outcomes for patients or doctors.

Instead, Professor Smith said he would like to see the focus shift from where an article is published, and instead to factors like how it is being used in the real world and how it is impacting health.

For example, the number of citations an article attracts, how often it is tweeted, or how much it is incorporated into guidelines are alternative metrics that could be used.

“I think journals have been important since the 17th century when they were invented. But I think now in the age of the internet, particularly now that we can publish the full data with the study, I think that moves things into a very different space.”

The main reason that journals persist is that they are the primary way that academics are assessed, yet the idea that what is published in the big journals is more significant or more likely to be right “is just plain wrong”, Professor Smith argues.

He says the big journals so often focus on randomised trials, particularly ones with funding ties to pharmaceutical companies.

“Seventy percent of the trials published in the New England Journal of Medicine and the Lancet are funded by pharmaceutical companies,” Smith said. “And there’s a real worry that [pharmaceutical companies] are publishing their big positive results in those big journals, and other research is either not being published at all or is being published in obscure places.

“It doesn’t mean that if you’re published in the New England Journal it’s automatically a wonderful paper, in fact there’s quite a lot of evidence that papers published in the big journals are more likely in the longer run to turn out to be wrong, because they’re kind of new and exciting.”



While the publish or perish refrain is still heard echoing around university halls, a change may be underway.

In 2009, the Australian Research Council (ARC) introduced a research evaluation exercise that prioritised the quality rather than the volume of research presented.

“We have certainly seen a shift in behaviour, and you see far fewer lower quality papers being published,” ARC CEO Aidan Byrne says. “In the past when you’re just counting papers, I think there’s a tendency to take work and atomise it up.”

The catch, Byrne said, is that a good paper takes longer and is harder to produce, and there are still pressures within institutes to be seen as active.

“Having that as a pre-eminent attribute is a useful way to stop the trivial gaming of the academic system – which is just to produce more papers,” says Byrne. “So it’s decreased the bias, but it doesn’t remove it.”

The ARC and NHMRC funding bodies are currently updating their 2007 Australian Code for the Responsible Conduct of Research, and it is expected to have more teeth in tackling academic misconduct.

However, the pressures on researchers are similar to those in other domains, Byrne says.

“You think about cycling and people taking drugs to enhance performance; in some ways academic integrity is just that in the academic domain,” says Byrne.

“It’s a very competitive field and resources aren’t enough for everyone,” says Byrne. “People are performing at the world standard and competing on a world stage, so there’s always a temptation, and we haven’t got a system able to remove the temptation.”



This is part two of a three part series on publishing. For more, check out

Part one: The paywall paradox

Part three: The gold standard: what you should know about peer review
