Monday, October 7, 2013

Failures and virtues of peer review

For a few days now, scientists have been discussing a small experiment conducted by a Science correspondent. He invented variations on a bogus cancer-research study containing several fundamental flaws and sent them out to 304 open access journals. Of these, 255 reached a decision, and 157 of those (slightly more than 60%) accepted the paper.

Predictably, this experiment is being discussed along what we might call partisan lines. Those skeptical of the OA model see it as one more illustration of the problems of a system in which journals earn their money based on the number of accepted papers, pretty much regardless of their quality. Those advocating the OA model consider the affair a piece of propaganda for the traditional commercial publishing industry and stress that the experiment did not include a control group of subscription-based journals to see whether they would have had a lower acceptance rate.

I have already written about my concerns about the OA model, so I will not repeat myself here. Note simply that I would very much prefer a system in which the rainbow press of science is seen for what it is, in which all major journals are run as non-profit public utilities, and in which the results of scientific research are as available to the public as publishing venues are to authors who cannot afford $1,300 in article processing fees. I am merely worried about the incentives publishers operate under in the OA model.

Instead, I wonder about the general issue of peer review. Michael Eisen, who thinks the problem is not with OA but with peer review in general, calls peer review a "joke" in his linked blog post, and similar sentiments are being expressed elsewhere. Others stress that this process is the gold standard that allows us to distinguish real scholarship from crankery. What gives?

I have to say, I have become a bit disillusioned with supposedly prestigious journals over the last two years or so. Two examples will illustrate why.

About two years ago I refereed a manuscript for a rather small journal. The authors clearly understood neither many of the methods they were using nor all of the theory behind their science. Worse, the data they presented were quite simply insufficient to support any of their conclusions. There was no choice but to recommend rejection, and the editor, who trusted my judgement in such matters, consequently rejected the manuscript. A few months later, the very same paper was published in a considerably more prestigious phylogenetics journal (with an impact factor about three times higher than the first) and, crucially, it was essentially unchanged. In other words, the editors and reviewers of a much "better" journal accepted a paper that was, by any objective standard, unacceptable.

Only this year I was, for the second time in my life, asked to review for what may be the most prestigious systematics journal of all (obviously not counting the scientific rainbow press; I am talking about specialist journals here). The manuscript in question presented something methodologically new, so it was definitely worth publishing. There was just one snag: in one step, the authors quite openly applied an analysis while violating its assumptions. In my report, I pointed out that this was clearly unacceptable and that they would have to redo this step correctly. Ultimately, I was shocked to find that the editor did not make this re-analysis a condition for acceptance. This is not a question of taste or opinion; what they did was an absolute no, a deal-breaker, and on top of that redoing the analysis would not have been hard.

So I have reasons to be frustrated. Sometimes things go wrong. Sometimes referees overlook a glaring problem. Sometimes editors don't care. Sometimes editors wave through a sub-par paper from a personal friend, and sometimes a referee trashes a good paper because it was written by an enemy. The process is run by humans, what do you bloody expect?

But peer review a joke? That is throwing out the baby with the bathwater. When I think about all the crap that would be published without peer review, I flinch. It does separate the wheat from the chaff, even if a bit of chaff sometimes makes it through the gate and some of the wheat needs three attempts to pass.

And do those who dismiss it have a better idea, an alternative? So far I have yet to hear one. The most radical proposal is post-publication review, but what incentive would authors have to make changes, no matter how helpful the criticism, when they can already list the publication on their CV?

Open peer review sounds great at first because it makes referees more accountable, but, well, what if postdoc Sandie Hoping-For-A-Grant has to tell Professor Frank McInfluential that his manuscript sucks? Do you think that would help Sandie's career? Do you think Frank's manuscripts would be treated as critically as the first work of a grad student? Ha. I would rather suggest going double-blind instead, so that the referee does not know the authors' names. Really, in an ideal world even the editor should not know them until they have decided whether to accept.

Yes, the current system has problems. But again: Baby, bathwater.

2 comments:

  1. I have always seen the reviewer's job as helping colleagues publish better papers, or saving them the embarrassment of publishing a bad one. The first paper I ever reviewed was based on an arithmetic mistake. It was rejected, on my recommendation, and that saved a couple of colleagues and friends some embarrassment.

  2. The way you are putting it shows that you are a kinder person than I am. But when writing 'all the crap that would be published' I am not really thinking of otherwise competent and well-meaning colleagues who happen to make a blunder sometimes. We all do. It is more about the flood of unpublishably atrocious papers that some journals get inundated with, about cranks, and about pseudo-scientists.
