In an ideal world, peer review weeds the ‘bad’ science out from the ‘good’. It also nudges the former toward the latter through constructive criticism and suggestions for improvement, often to the author’s dismay…
Nobody likes rejection, but I imagine it is particularly vexing to see countless hours of almost-sound work effectively deemed inferior to a perfectly fraudulent piece of almost-immediately-obvious junk. We’ve even seen it with a less-than-subtle fabrication of gold nanorod ‘chopsticks’ – so crudely fabricated, in fact, that I find it hard to believe it wasn’t put out deliberately to shine a light on the shortcomings of peer review. But I think the common expectation that the first chronological step in peer review should be capable of policing fraud is an unnecessary one.
Even among the minority of surfaced cases of bad science, or non-science, passing through the apparently not-so-tough barricades of peer review, it is only the rarer cases where the failure hasn’t been spotted quickly that have the potential to be truly troublesome. The longer bad work goes unnoticed, the greater the chance that irretrievable sums of funding and lengths of hard work will have gone to waste – and, in the worst cases, done more harm than good. Part of the problem is that once a paper is published, many take it as assured. Perhaps there needs to be a greater distinction between ‘published’ and ‘generally accepted by the scientific community’.
Though peer review is not perfect at its job (being a process controlled by humans), it is better than nothing at all. What matters is whether peer review in its current state is the best approach possible – a question complicated by the variability and lack of transparency in standards and processes.
Reviews can be single-blind (the current norm – only the reviewers’ identities hidden), double-blind (both authors’ and reviewers’ identities hidden) or open (all identities known). The common preference for double-blind over single-blind makes sense – while not completely eliminating bias, it is a push in that direction. Sure, in some cases the high specialisation of a field may undermine the anonymity of authors, but the alternative is no better in this regard.
Recently, post-publication peer review platforms such as PubPeer have become more widely used, going hand in hand with the movement from print to online publication, unrestricted by printing costs. Stating the obvious: here, studies are reviewed after publication, voluntarily or by invited reviewers.
I favour a system in which trained reviewers act in a first phase to check the validity of the study and, where appropriate, the raw data itself. A second (voluntary) post-publication phase would allow the further spotting of errors and the recording of repeated tests. Once results have been reproduced, readers can start giving them real weight.
Is peer review working? It’s working pretty well, I would say, and though not at its optimum, its future seems bright.
Image credit: Courtesy of James Yang, http://www.jamesyang.com