Bold attempt at studying open peer review flops badly

Richard Smith

Personal commentary from Richard Smith, chair of Open Pharma

A good study has both internal validity (the conclusions reached are justified) and external validity (it tells you something useful in the real world), but unfortunately a recent study (http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0179031), which concludes that open peer reviews are of poorer quality than conventional closed peer reviews, has neither.

I’m in some ways reluctant to be so critical of the study because I welcome studies of peer review. I’ve long argued that peer review, which is central to all science, is paradoxically based on faith rather than evidence. We have many studies showing the downsides of peer review (it is slow, expensive, inefficient, largely a lottery, biased, anti-innovatory, and fails to detect errors or fraud) but few showing benefit. This may, however, be an absence of evidence rather than evidence of absence of effect, which is why I welcome more studies.

The authors from the British Journal of Surgery set out to see whether “an innovative online peer review process” would bring benefit to the journal. Unfortunately, they adopted a system that nobody has advocated and that was bound to fail. They put the papers online (although accessible only to potential reviewers) and emailed 7000 reviewers asking if they might post reviews of the papers online. Everybody knows that if you email 7000 people asking them to do something you will get few responses, and the responses you do get are likely to be strange. In contrast, if you email one person asking them to do something, he or she usually will. Journals and platforms that use open online peer review do not issue a general invitation but invite specific reviewers, just as conventional journals do.

In the non-randomised comparison between open and conventional review, the authors asked the authors of 265 papers whether they were willing to have their papers reviewed online as well as in the usual way. A total of 112 (42%) declined, showing right away that the system would not be viable if you want to put every paper through the same peer review process. Forty-three of the remaining papers were rejected by editors in the usual way, leaving 110 to be reviewed both openly and conventionally. As I would expect, only 44 of the 110 papers (40%) received online reviews. The total number of online reviews was 100, but, again as you would expect, the reviews were unevenly distributed: one reviewer produced 15 reviews, one paper received 10 reviews, and another received eight. Furthermore, the whole system fizzled out over time, with 11 of the final papers receiving only one review each and the remaining two receiving two each.

It’s reasonable to conclude that the online open system was a failure, but I doubt that anybody would have expected it to succeed.

The comparison between the open and conventional reviews is therefore clearly meaningless.

Randomised studies of open versus closed review, some of which I’ve been part of, compare reviews gathered in the same way except for the openness, and they usually conclude that there is little difference in the quality of the reviews, with a small tendency for open reviews to be better.

So for Scholarly Kitchen to report the study from the British Journal of Surgery with the headline “Study Reports Open Peer Review Attracts Fewer Reviews, Quality Suffers” (https://scholarlykitchen.sspnet.org/2017/07/11/open-peer-review-attracts-fewer-lower-quality-reviews-study/) is less than scholarly.

Competing interest: Richard Smith is a longstanding critic of peer review and has received consultancy payments from F1000Research, which uses an open online peer review system (but not one resembling the system described in the study).

Disclaimer: This post is Richard Smith’s personal opinion, not necessarily that of the Open Pharma group or its participants. This post, our website, and the project are for scientific exchange, not commercial gain.