The gold standard: what you should know about peer review


“Is this journal peer reviewed?” is one of the most common questions people ask when appraising new research.

Even now, at a time when the publishing landscape is rapidly changing and old models are in the throes of disruption, traditional peer review is often seen as the keystone of research credibility. A shortcut that signifies an academic paper can be trusted.

And yet, the past few years have seen high profile failures of the peer review process, which have shone a spotlight on its flaws and limitations.

In March 2015, BioMed Central (BMC) retracted 43 papers after it emerged that reviews had been fabricated across at least 13 different journals.

This scam, like several before it, involved a form of self-peer review: fake email addresses were supplied to the journals for suggested referees, and those addresses were then used by the authors themselves, or their associates, to write positive reviews.

The BMC editors became suspicious after noticing misspelled names and mismatched email addresses.

An investigation revealed that the fraud appeared to have been orchestrated by a third-party company enlisted by the researchers to help with language editing and improving the manuscripts.

The papers were retracted “because the peer-review process was inappropriately influenced and compromised,” the journal websites said.  “As a result, the scientific integrity of the article cannot be guaranteed.”

Another example of flawed peer review was the publication of a widely criticised study by NASA scientists claiming to show that arsenic had replaced phosphorus in the DNA of a bacterium.

Historically, peer review has not been the gatekeeper for much of the most important scientific research.

The practice has been around in some form for 300 years. However, it became more formalised in the 20th century, with Nature implementing a formal peer review system as late as 1967.

Einstein’s work was largely published without peer review, as was Watson and Crick’s landmark paper that went on to win them a Nobel Prize.

And it is only recently that the process itself has begun to be studied in earnest.

 

WHAT IS PEER REVIEW?

People have described peer review in the same terms Winston Churchill used for democracy: it is the worst system, except for all the others that have been tried.

But what is peer review, and how does it work?

Traditionally, after a researcher writes up their work, the process of trying to get it published begins. The choice of where to submit varies from researcher to researcher, but a high impact factor is often taken as a marker of a prestigious journal, and those with the highest aspirations might aim for the NEJM or Science.

On receiving the manuscript, the journal’s editorial team will generally look over the paper to see if it’s appropriate for their journal. This isn’t necessarily an assessment of the research itself; rather, it’s a judgement about whether the work is the kind that will create a splash.

If it’s deemed worthy, the manuscript will be sent to between two and four experts in the field considered able to evaluate the paper on technical grounds.


These peer reviewers look at things like the clinical impact and timeliness of the work, as well as whether the writing is clear. They also evaluate the soundness of the methods, whether the conclusions are justified, whether the results are likely to be reproducible, whether appropriate statistics were used, and whether the authors have referenced prior work properly.

At this point the reviewers can make one of three recommendations: accept the paper, reject it, or reject it while requesting changes or further data.

“As academics, we’re rewarded on the impact factor of the journal. But, of course, the ones with the highest impact factor are selective and only choose a fraction of what’s sent to them,” said Associate Professor Lachlan Coin, in the Genomics and Computational Biology Division at the University of Queensland.

And the big-ticket journals are highly competitive. The NEJM, for example, publishes only around 5% of the 5000 submissions it receives each year.

“This means that we, as scientists, are encouraged to always send to the most prestigious journal and then typically get rejected, and move down the ladder,” he said. “Each set of reviews may take months and months and that slows down the whole process.”

It depends on the field of research, but Professor Coin said that two or three months is considered a rapid turnaround and that some papers can take up to a year to get through the peer review process.

Peer review has been criticised as a bottleneck in the publication of research. In the case of medicine, this delay can affect real-world health outcomes.

 

TICK TOCK

“One of the big reasons is that it’s hard to get people to do the review because it’s a voluntary process,” said Professor Coin.

“For big journals it’s not so much a problem because they have prestige and reviewers might hope that it will help them get accepted down the track. But for medium to lower impact journals, it’s said that you need to ask ten people to get two people to agree.”

Frustrated with the time and cost associated with peer review, Professor Coin launched a website called Academic Karma that provides reviewers with karma credits for each manuscript they review.

Researchers can have their own work reviewed by cashing in their karma. The end goal is to incentivise more academics to review, and to provide an external metric showing how much reviewing individuals are doing.

In the case of Academic Karma, the idea is for authors to take their revised manuscript and reviews to a journal to publish, thereby eliminating the need to resubmit and having to wait for reviews several times over.

Other approaches to pre-publication review operate differently, though.

Publons, a site dedicated to giving academics credit for peer review, shows that some of its top reviewers complete a review every two or three days, and some as many as two per day.

At that rate, some question just how rigorous the evaluation can be.


Regardless, an analysis by cell biologist Stephen Royle found that, across all journals on PubMed in 2013, the average time from an article being received to being published was 239 days.

Editors from the NEJM have also acknowledged that peer review is not perfect, but maintain it is the best way to go about publishing medical research.

“Peer review is labour-intensive and sometimes time-consuming, but without it physicians themselves would have to assess the validity of new medical research and decide when to introduce new treatments into practice,” they wrote.

Dr Richard Smith, a former editor of the BMJ who spent 25 years at the journal, thinks differently.

“I think the whole business of pre-publication peer review [is flawed], there’s lots of evidence to show how it slows everything down, that it’s biased, that it doesn’t pick up errors, that it doesn’t pick up fraud. There are all kinds of problems with it.”

“All the research we’ve done shows that, and nobody really has been able to show the upside,” Dr Smith told The Medical Republic.

In 2007, the Cochrane Collaboration reviewed the evidence and concluded there was “little empirical evidence to support the use of editorial peer review as a mechanism to ensure quality of biomedical research, despite its widespread use and costs”.

They reported that blinding had little effect on the quality of assessment, that checklists for reviewers had some evidence to support their use, and that training appeared to have no impact on reviewers.

Overall, more research on the effects of peer review was needed, the authors concluded.

Despite the labour and expense invested in it, peer review is “based on faith in its effects, rather than on facts”, they wrote.

 

A NEW WAY OF SCIENCE

To be published in a scholarly journal stamps a work with the “imprimatur of scientific authenticity”, physicist John Ziman once said. But it’s not clear this is still the best option for the scientific and medical communities.

Instead, there is a growing demand for openness and transparency, and in a world where the limits on data storage and distribution are rapidly shrinking, the call is for all research data to be made available, warts and all.

As part of the open access movement, researchers are being encouraged to deposit their pre-publication manuscripts in a database that everyone can access, even those who can’t (or don’t want to) pay the subscription rates of the big, traditional publishers.

While this is taking off in a big way in disciplines like maths and physics, biomedical researchers are also slowly coming around.

Dr Smith is working as an advisor for F1000, an open access and open peer review publisher.

Where most journals peer review and then publish, F1000 has adopted a post-publication peer review model.

After an initial screen by an editor, the article is made public and given an identifier to make it citable. Reviewers are then asked to look at the piece and their reports are published alongside the article on the site.

This decoupling of the peer review process and publishing removes the delay between article production and article access.

“Closed (and sometimes biased) review processes can often take many months, sometimes even years, and may allow competing papers to be published first. The F1000 research model removes the possibility of a paper being deliberately blocked or held up by a single editor or referee,” the F1000 website says.

Moreover, a scientific paper tends to be “frozen in time” in the publishing system we currently have, Dr Smith says.

“But that doesn’t need to be the case,” he says. “It could be a living document, and it could continue adding information and correcting things and bringing in different views.”

Unlike some other models, F1000 requires all reviewers to publicly list their name and their institute affiliation.

It also posts each review and version of the manuscript so that the community can read and engage with the modifications and understand how the final version came to be.

“I think [the traditional model of publishing] continues because there’s a huge vested interest in it continuing,” says Dr Smith. “If you abandoned pre-publication peer review tomorrow, a lot of people would be out of work and a lot of journals would be out of a job.”

 

BLINDING

Peer review can be double blind, single blind or open. As the name suggests, a double blind review means that neither the author nor the reviewers know each other’s identities. Single blind means that authors don’t know the names of the reviewers, and open means that the identities of all involved are known.

Across Wiley’s 1593 peer reviewed journals, two thirds are single blind and the other third are double blind. When it comes to the health sciences, however, 95% are single blind.

Bias is a big concern in the peer review process, with single blind review criticised for the potential for reviewers to treat manuscripts differently based on the author’s name, institution or geography.

While it seems like double blind reviewing would counteract that bias, studies suggest that it makes little difference.

In small fields it may not be too difficult for the reviewer to guess who the author is, and authors may in fact try to make it easier if they know their reputation may influence the reviewer.

Additionally, when the researcher and their reviewer work in such a niche, highly competitive field, closed peer review creates the potential for a reviewer to intentionally delay or even sabotage the manuscript in an attempt to get their own work out first.

On the positive side, anonymous reviewing and commenting is also a way for people to flag potential problems and fraud, “without revealing their own identity and drawing the ire from powerful scientists”, Professor Coin explains.

However, the type of anonymous reviewing allowed by PubPeer has led to abuse and trolling. Researchers aren’t above this type of behaviour, and in the highly competitive world of research they have additional motivations to get catty.

A former editor of the BMJ, Stephen Lock, decided to test whether peer review really was superior at assessing quality.

Based on his own judgement, he divided submitted articles into those he would publish and those he wouldn’t. When he compared his selection with the papers chosen through the peer review process, the differences were minimal.

Dr Smith is also critical of the peer review process, saying it is inconsistent and subject to variation across journals.

He recalls how two researchers once challenged him to publish an issue of the BMJ containing only papers that had failed peer review, to see if anyone noticed.

His reply was “How do you know I haven’t already done it?”

And studies into the ability of peer review to sift out major errors don’t exactly inspire hope. One study showed that when errors were intentionally added to papers before peer review, only a quarter of the errors were flagged, on average, with some reviewers not catching a single one.

 

MORE OF AN ART THAN A SCIENCE

“I think that peer review will still have a place in the world of publishing,” says Virginia Barbour, chair of the Committee on Publication Ethics (COPE).

We aren’t at the point where we can do away with pre-publication peer review just yet, and there is a need to be careful when it comes to clinical research that has a direct impact on patient safety, Dr Barbour said.

But peer review is not a panacea, Dr Barbour said.

“It can’t tell you everything you need to know about whether the research is right or wrong,” she says. “It’s simply a mechanism of telling you whether a paper has gone through a particular process.”

Readers should still use their own critical thinking skills to evaluate the research in the context of what they know, says Dr Barbour.

“Peer review is absolutely not a shortcut to establishing quality or truth.”

 

This is part three of a three-part series on publishing. For more, check out:

Part one: The paywall paradox

Part two: How the pressure of publish or perish affects us all
