Peer review is a central pillar of modern science. The cumulative nature of scientific knowledge demands integrity in the peer review process. As the great Isaac Newton once told us, “If I have seen further it is only by standing on the shoulders of giants”. No respectable scientist would wish to base their research on a fallacious or inaccurate paper; trust in the reliability of others’ work is therefore crucial, and the giant’s shoulders must be checked thoroughly before we are willing to stand on them. For this reason, peer review has been fundamental to scientific discourse for well over 300 years, and yet its functionality and legitimacy are now continually being called into question.
A system which has existed for as long as peer review must have its merits. Working at its best, peer review advances and improves academic fields, with scientists constructively criticising one another and providing expert insight into how pieces of research can be enhanced. It gives authors a strong incentive to take heed of this feedback, bettering not only their own understanding but also benefiting science as a whole.
Any journal of worth uses peer review to help decide which papers to publish. Despite its many flaws, peer review certainly saves editors a great deal of time and money; it is difficult to envisage the current system of scientific publication surviving without it in place. Even with reviewers working as volunteers, the large amount of secretarial work means a quality peer review process can be expensive. Nevertheless, the fact that peer review allows scientific publishing to prosper certainly doesn’t mean it is good for science in general.
Peer review’s vulnerability to fraud is well known. In 2012, the unofficial record for the highest number of faked articles was set by the Japanese anaesthesiologist Yoshitaka Fujii. A startling 172 of his published papers were retracted because of the discovery of fabricated results. The research had been published over a period of some twenty years, and like many cases of fraud it raised the question of how the misconduct went undetected for so long. Scandals such as this may highlight a deep cultural problem that science faces, but peer review is generally not expected to act as a fraud detection system. Data which have been manipulated carefully are often too difficult to spot during the review process. However, such forgery does not tend to stand up to the more intense scrutiny of the wider scientific community. Replication of research, another cornerstone of modern science, acts as back-up quality control to the leakiness of peer review.
Primarily in place to ensure the logic of a paper is sound, peer review checks for methodological errors and confirms the findings are of enough significance to warrant publication. Just how well does it do this? In 2008, Fiona Godlee, editor of the prestigious British Medical Journal, decided to test peer review’s ability to spot scientific errors. She and her colleagues introduced nine major and five minor methodological mistakes into a paper that was ready for publication. It was then sent to 420 peer reviewers. The results were shocking – not one individual managed to spot more than five errors, and 16% found nothing at all, altogether painting a rather bleak picture of the effectiveness of peer review. Along with the fact that reviewers seem powerless to police fraudulent research, there is little evidence that the process improves the quality of papers at all. In concluding the study, Godlee herself accepted that the question of how best to fix peer review still looms.
The evidence clearly shows that peer review isn’t working; it must take a new approach if it is to continue as the gold standard of scientific publishing. The internet revolution has brought about the birth of post-publication review, a system which allows online readers, not just selected referees, to review and comment on a paper. The hope is that the crowd will be able to catch things the reviewers miss. This type of review can either be publisher-driven or separate from a formal review that may have already occurred. In the former, the publisher chooses the criteria which determine who can review the papers posted online. This can mean, for example, that reviews can only be submitted by scientists with a certain number of published articles to their name. On the other hand, some journals allow reviews to be left by any registered user.
Journal-independent processes, however, take place on blogs or third-party sites. PubPeer allows anonymous comments on any article with a DOI (digital object identifier), or on those published as pre-prints on the website arXiv. Its goal is to create an environment where the problems of misleading, misconceived or fraudulent work can be discussed, adding another dimension to an article’s ‘impact’, independent of the name of the journal in which it was published. The founders themselves have chosen to remain anonymous, to, as they put it, “avoid disgruntled authors from pressuring us into removing comments on their papers.”
While these attempts to revolutionise peer review are admirable and perhaps even essential, there are serious flaws that plague the post-publication system as it stands. When Nature trialled a two-stage peer review system, allowing online users to comment on submitted papers, they discovered that “although most authors found at least some value in the comments they received, they were small in number, and editors did not think they contributed significantly to their decisions.” Then there is the issue of “trolls”. An online environment which allows anonymous posting, especially on controversial topics, often descends into irrelevant discourse and ad hominem attacks. A solution to both of these problems could be found in removing anonymity. This would motivate scientists to participate and be objective: writing high-quality reviews and well-thought-out comments would be a way of boosting the reviewer’s reputation.
Importantly, these systems mark a move away from the antiquated idea that the publication of a paper – particularly in a more esteemed journal – should mean the research is valid or relevant indefinitely. Even when peer review is devoid of errors, it only represents the opinions of a small number of people at a fixed point in time. The introduction of post-publication criticism brings to life an idea which seems to have been forgotten: peer review is the start of the scientific process, not the end.
The overhaul of peer review is not going to be easy – a deep change in scientific culture does not happen overnight. Despite the scale of the task, it has to be confronted; it is the responsibility of scientists from every field to wake up and engage in peer review’s reconstruction.