Problems in Science.

Luke Holder.

Science has its problems. It is not the idealistic discipline we wish it could be, and many of these problems are rooted deep within the publishing system, ranging from minor issues, such as the way individuals perceive data, to the bias that exists towards specific institutions. These are problems we must address rather than sweep under the rug. This article aims to provide an insight into the problems facing science today, along with possible explanations and solutions.

Science has come a long way. Many ground-breaking discoveries have revolutionised the way we live, think and work. Today, the hunt for the next big revelation leads to a flood of papers and fierce competition to be first. As a result, work can slip through that is not as well thought out as it should be. We must consider the way data is perceived by individuals and how that affects their conclusions. Does this then lead to assumptions which may be wrong? Perception of information is controlled by the brain, and our brains can interpret the same information in different ways. There are many situations in which information can be perceived differently, from simply having a brain that works differently to someone else's, to being under pressure to find something you expect to be there.

A well-known example of people interpreting the same information differently is 'the dress'. To some people the dress appears blue and black; to others it appears white and gold. The colour you see depends on the way your brain interprets the image: some brains discount the blue part, whilst others try to get rid of the gold part [1]. This difference in interpretation could impact any area of science. At first glance it may seem trivial, but the fact that people have seen something as commonplace as a dress in such different ways suggests that the same thing could happen elsewhere without our knowing. Only by being prominent on the news and social media was it brought to light to a wider audience.

Nanoscale images require interpretation to understand, and different interpretations lead to different conclusions. Being under pressure can result in interpretations being made to fit what the researcher wants to find. In 2013, images were published which had been interpreted as showing hydrogen bonds [2]. There is doubt as to whether the features in those images are real or an artefact: similar features have been observed between two nitrogen atoms [3], where no hydrogen bonding should occur, which suggests that such features cannot simply be interpreted as intermolecular bonds. This shows that we must be very careful in how we interpret data, especially images. One safeguard against this kind of problem is to have experts read papers and check whether they agree with the interpretation given. Such a process is already in place, called peer review, and it has problems of its own.

On paper, peer review sounds like a good idea. A paper is sent to experts in the field who read through it to make sure it is acceptable for publishing. But it is not so simple, as different people have different views on what makes a quality paper. Even so, surely the worst papers with terrible evidence are caught by this process? Unfortunately, this is not the case. In June 2013, a paper was published in Nano Letters, one of the most highly regarded journals for nanoscience research, containing images of nanorods in 'chopstick' shapes [4]. On inspection of the images, it is obvious that they were made using editing software, and the paper was withdrawn in August of the same year. That it was published in the first place shows that something is wrong with the peer review process. Peer reviewers must be experts in the field to be able to verify the contents of a paper, yet they are not paid for this work. Experts have a great deal of other work to do, and even when multiple people review a paper there is a small chance that something like this slips through the net. In this case, it took only two months for the flaws to be pointed out and the article withdrawn.

Peer review is supposed to be an unbiased assessment of the work; unfortunately, bias exists in the system, sometimes held by the reviewers themselves. Sometimes sexist views influence the outcome. Fiona Ingleby and Megan Head conducted a study of gender-based differences in the progression of 244 researchers from PhD programmes into postdoctoral jobs [5]. They submitted their work to PLOS ONE, a journal run by the online publisher PLOS, which then chose the anonymous peer reviewers. The work was assessed by a single reviewer, who said the article would be better if it had a male co-author. More sexist comments were made in the anonymous review, and the paper was rejected. By publicising this, the problem has been brought to light and should not be overlooked. Fortunately, the review has since been removed from the record, the paper sent for another review, and the writer of the review asked to step down from the editorial board.

This is not the only type of bias present. Institutional bias, where more 'prestigious' institutions are assumed to produce better quality work and therefore to be better, also rears its ugly head. A study by D. P. Peters and S. J. Ceci [6] shows compelling evidence for this type of bias in the field of psychology. Twelve previously published studies from prestigious institutions were retyped, minor changes were made to the title, abstract and introduction, and the authors' names and institutions were changed, the institutions being given less prestigious-sounding names such as the Tri-Valley Center for Human Potential. The papers were then resubmitted to the same journals that had originally published them. Eight were rejected for poor quality. That papers the journals had previously accepted were now judged to be of poor quality strongly suggests that the journals in question are biased towards more prestigious institutions. This is bad for science: it shouldn't matter where a paper was written; the quality of the research should be the main factor in publishing. Of course, quality is subjective. What one person deems good quality may be perceived very differently by another. A real example of drastically differing views on the same paper is the following [7]. Reviewer A: 'I found this paper an extremely muddled paper with a large number of deficits'. Reviewer B: 'It is written in a clear style and would be understood by any reader'. Discrepancies like this make getting a reviewer who will pass the paper a game of chance. It takes a long time to re-edit a paper, send it off again and see it published, and in the competitive environment of cutting-edge fields this could make all the difference in being the first to get results out there.
This bias exists not only towards institutions but also towards individuals. A reviewer had to consider a paper submitted to the BMJ with Karl Popper's name on it [7]. The reviewer was unimpressed and did not want to publish the paper, but it was published anyway. Big names and institutions carry far more influence than they deserve, as quality is not guaranteed to come from them.

A common method of assessing the scientific output of a researcher is the h-index, an attempt to combine the number of papers and the number of citations they receive into a single number: a researcher has an h-index of h if h of their papers have each been cited at least h times. One of its biggest flaws is that papers in niche fields are not cited as often as those in more popular fields. This means that cutting-edge papers, the most advanced in their field, will not attract many citations simply because very few people need to cite them yet. Combine this with the fact that a paper given more time for research and writing is likely to be of higher quality, and a researcher who writes papers at the cutting edge of their field and takes a long time between them will end up with a low h-index. An example is Don Eigler: the first person in history to move and control a single atom, the maker of the first quantum corrals, and the builder of nanoscale logic circuits using individual carbon monoxide molecules. His work has had a massive impact on nanoscience, yet his h-index is only 24 [8]. This is shockingly low considering his influence on the field, and it shows that the h-index is not an accurate measure of a researcher's scientific output. Sadly, many institutions use this number in tenure and promotion decisions [9].
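To make the flaw concrete, the h-index calculation can be sketched in a few lines of Python (the citation counts below are invented for illustration, not Eigler's real figures). Notice how a researcher with a handful of extremely influential papers is capped at a low h-index, while a prolific author of moderately cited papers can easily overtake them:

```python
def h_index(citations):
    """Return the h-index: the largest h such that h papers
    each have at least h citations."""
    # Sort citation counts from highest to lowest, then find the last
    # 1-based rank at which the count still meets or exceeds the rank.
    h = 0
    for rank, count in enumerate(sorted(citations, reverse=True), start=1):
        if count >= rank:
            h = rank
        else:
            break
    return h

# Hypothetical researcher A: 5 landmark papers, enormously cited.
print(h_index([2000, 1500, 900, 600, 400]))   # -> 5

# Hypothetical researcher B: 30 papers with 30 citations each.
print(h_index([30] * 30))                     # -> 30
```

Researcher A's citations dwarf researcher B's in total, yet B's h-index is six times higher, which is exactly the distortion described above.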

To summarise, there are a few glaring problems at the heart of scientific research. Bias from peer reviewers, in the examples shown, has the strongest impact when there is a sole reviewer; most of the time there are multiple reviewers, but sometimes only one is given the paper. Along with the inconsistency of the reviewers themselves, this can seriously delay publishing. These inconsistencies arise from the varying interpretations of different people: from how we perceive information to the beliefs we hold, many factors affect how we view things. The desire for an easy-to-understand measure of scientific productivity has produced a number that misses the point of scientific productivity. This attempt to cut corners, instead of judging for ourselves whether work is of high quality, has resulted in a world where someone with a low h-index is not even considered because they did not gather many citations. The h-index problem arises from trying to quantify a combination of quantity and quality of research, where quantity is quantifiable but quality is subjective and not always reflected in the number of citations. These problems are not easily solved, but addressing them is important.

The problems lie close to human nature itself. The effect that people's own views have on reviews is something that can affect any of us. Our bias towards seeing the best institutions as always producing consistently high-quality work is harmful to less prestigious institutions that put out research just as good. Our desire for a simple system to 'rank' researchers has led to one that does not truly consider the most important factor: the quality of the research. Only by being careful, setting aside personal beliefs and being more sceptical of the systems in place can these problems be properly addressed.

[1] –

[2] – J. Zhang et al., "Real-Space Identification of Intermolecular Bonding with Atomic Force Microscopy", Science, 1 Nov 2013.

[3] – S. K. Hämäläinen et al., "Intermolecular Contrast in Atomic Force Microscopy Images without Intermolecular Bonds", Phys. Rev. Lett. 113, 186102, 31 Oct 2014.

[4] – R. Anumolu et al., "Chopstick Nanorods: Tuning the Angle between Pairs with High Yield", Nano Lett., 2013 (withdrawn).

[5] –

[6] – D. P. Peters and S. J. Ceci, "Peer-review practices of psychological journals: the fate of published articles, submitted again", Behav. Brain Sci. 1982; 5: 187–255.

[7] – R. Smith, "Peer review: a flawed process at the heart of science and journals", J. R. Soc. Med., Apr 2006.

[8] –

[9] –

