DO WE KNOW ANYTHING FOR CERTAIN?

Tania Jane LaGambina

What if the scientific ‘facts’ we know are not true? As absurd as this sounds, it is not technically impossible. It may be so because of how scientific conclusions are reached, because of falsification by scientists we trust, or simply because science is constantly changing. I will entertain this seemingly sensational idea and offer reasons why we should keep an open mind about what we believe.

SHOULD WE QUESTION OUR SCIENTIFIC PHILOSOPHY?

Scientific conclusions are not set in stone. A conclusion is drawn when something is very likely to be the case, and scientific consensus can then pass that conclusion as evidence for a particular hypothesis. How do we logically reach these conclusions? The majority of science is conducted using inductive reasoning: we move from a conclusion made on a limited set of data to a more general one [1]. This is because when we conduct an experiment exploring a hypothesis, only a limited set of data is available to study; unfortunately, we cannot test a theory on the entire Universe. The Scottish philosopher David Hume suggested that we place so much faith in inductive reasoning because we presuppose a ‘uniformity of nature’. Yet our only grounds for presupposing this uniformity are themselves inductive: we have observed the Universe to follow its laws fairly consistently up to now. This is how humans naturally perceive the world. However, this ‘uniformity’ cannot be proven, as we have only observed the Universe over a limited time and space. This circularity is known as Hume’s problem of induction, and it suggests that the foundations of science are not as solid as they seem [1].

We see inductive reasoning everywhere, from medical studies performed on a particular set of patients to tests of the theory of gravity. For example, we can only trial a drug on a small number of subjects, trying our best to assemble the most representative data set possible; we can never fully represent the entire human race. Our tests of gravity on Earth are another example. We cannot confirm beyond doubt that the laws are the same across the entire Universe, because we have no means of testing the entire Universe; we can only extrapolate from what we are able to observe. Our inductions have worked well so far to describe the Universe around us, making it very unlikely that they are wrong. Even assuming that the sun will rise tomorrow is inductive reasoning: it has risen every day before now, so we assume it will again tomorrow. Questioning this seems ridiculous. Yet we cannot call the possibility of it not rising impossible.

Our favourable assumption of the ‘uniformity of nature’ illustrates how we like to perceive the Universe as elegantly simple, and this also applies to how we choose which conclusions are correct. We use ‘inference to the best explanation’ (IBE), and in practice we tend to assume that the ‘best’ explanation is the simplest one. But do we actually have any reason to believe that the Universe is simple rather than complex? We don’t. It just makes the science neater. Darwin’s theory of evolution is a good example of this. This one fairly simple theory provides an easy explanation for the development of all species, instead of explaining how each species was created individually. Having one theory that explains it all rather elegantly leads us to assume it is correct. If we didn’t take this approach, our understanding of the Universe could be entirely different.

SHOULD WE TRUST SCIENTIFIC RESEARCH?

When we consider how science is actually conducted by humans, the question of which conclusions deserve our faith becomes more interesting. Humans are naturally biased, and we should account for this in the scientific method. As sterile and devoid of emotion as science may seem as a body of knowledge, human thoughts and feelings lie behind the entire process. We need to ask why the data were taken, and by whom.

When we perform an experiment, we are acting to prove a hypothesis; this, broadly, is how the scientific method is practised. We are trying to describe the world around us, to support ideas we already hold. With this in mind, how can experiments possibly be unbiased? Results are usually expected to match the hypothesis, after all. Of course there are exceptions; if results are consistently unexpected, groundbreaking discoveries may be made. There is a danger, though, that a result which does not match the hypothesis may simply be discounted as wrong, and that scientists may work and work until they reach the results required to prove their theories.

Millikan’s oil drop experiment illustrates this effect, which we call confirmation bias. Confirmation bias describes the situation where decision-makers actively seek out and assign more weight to evidence that confirms their hypothesis, whilst underweighting evidence that would disconfirm it [2]. In this experiment, the charge on the electron was measured using falling oil drops. Millikan obtained a value of 1.5924 × 10⁻¹⁹ C, whereas we now know the accepted value of the elementary charge to be 1.602 × 10⁻¹⁹ C. The slightly incorrect value was attributed to Millikan using the wrong value for the viscosity of air. However, upon inspection of his lab books, it was also discovered that Millikan had excluded various parts of his data without valid justification, presumably to manipulate the results. When presenting his findings he stated that ‘no single drop was omitted’, yet certain drops were in fact omitted when they veered from the expected values. As a result, the error bar around Millikan’s charge value was smaller than it should have been, and further research in the area was affected [3].

Feynman described the consequences in his Caltech commencement address in 1974. He recounted how the accepted value of the electron charge crept upwards over time as Millikan’s experiment was repeated, until the values settled around the figure we accept today [4]. This illustrates a flaw in the scientific method arising from the human bias to fit results to expectations. Millikan’s results were Nobel Prize-winning, after all, so scientists may have been reluctant to question them. When researchers obtained a value above Millikan’s, they assumed they were wrong and eliminated the readings they deemed too high; if their results were close to the original value, they considered them correct [4]. The outcome was a series of slightly inaccurate results published in the wake of Millikan’s initial research.
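The drift Feynman described is easy to reproduce in a toy model. The sketch below is purely illustrative and is not fitted to the historical data: it assumes each new lab makes an honest, unbiased measurement but reports a weighted compromise between that measurement and the previously accepted value, with the anchor weight and noise level chosen arbitrarily.

```python
import random

TRUE_E = 1.602e-19       # modern accepted value of the elementary charge (C)
MILLIKAN_E = 1.5924e-19  # Millikan's slightly low original result (C)
ANCHOR_WEIGHT = 0.7      # assumed pull of the previously accepted value
NOISE = 0.004e-19        # assumed spread of honest experimental error

random.seed(1)
accepted = MILLIKAN_E
for repeat in range(10):
    # An unbiased measurement scattered around the true value...
    measured = random.gauss(TRUE_E, NOISE)
    # ...but the reported value is dragged towards the previous accepted
    # one, because readings 'too far' from it get trimmed or re-measured.
    accepted = ANCHOR_WEIGHT * accepted + (1 - ANCHOR_WEIGHT) * measured
    print(f"repeat {repeat + 1}: reported e = {accepted:.4e} C")
```

Run this and the reported values creep from Millikan’s figure towards the true one rather than jumping there in a single honest step, which is exactly the pattern Feynman pointed out.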

But why would a researcher ever compromise their scientific integrity?

In the modern world, reaching an experimental result that matches a hypothesis can carry high stakes, creating pressure to falsify results when they go against expectations. Publishers share the blame for this pressure through the phenomenon of publication bias, whereby the results of published studies differ systematically from the results of unpublished ones [5]. Around 80% of published papers report positive results for a hypothesis, when it is estimated that no more than 10% of hypotheses should actually be supported [6]. A hypothesis may therefore be tested over and over until a positive, publishable result is reached [6]. This matters because publication is tightly connected to a researcher’s scientific career: scientists are often measured by their h-index, a single number reflecting how many highly cited papers they have published [7]. A greater h-index brings more research and grant opportunities. It is clearly in scientists’ interests to achieve the positive results they need, unfortunately providing motives that may lead to falsification. A flaw in the system is evident here: reducing a scientist’s ability to a single number could not possibly work accurately across the whole scientific world.
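For concreteness, the h-index is standardly defined as the largest number h such that a researcher has h papers with at least h citations each. Here is a minimal sketch of that definition; the citation counts in the example are invented for illustration.

```python
def h_index(citations: list[int]) -> int:
    """Largest h such that h papers have at least h citations each."""
    # Sort citation counts from highest to lowest, then walk down the list:
    # the h-index is the last rank at which a paper's citations still
    # meet or exceed its rank.
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Two very different careers can share an h-index:
print(h_index([50, 40, 30, 4, 3, 2]))   # 4 -- a few highly cited papers
print(h_index([5, 5, 5, 4, 4, 4, 4]))   # 4 -- many modestly cited papers
```

Note that a career built on a handful of landmark papers and one built on many modest papers can land on exactly the same number, which is one reason a single index is such a poor proxy for ability.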

Publication bias can have a large impact on drug companies and drug studies. Here, the pressure for a positive result can push a product to market and generate profit for the company; in this situation, however, the consequences may be extremely dangerous. The lorcainide case illustrates them. In the development stages of this anti-arrhythmic drug, intended to restore normal heart rhythm after a heart attack, 95 patients were tested: 9 of the 48 given lorcainide died, compared with 1 of the 47 given a placebo [8]. The drug was deemed a failure and its development ceased. However, the negative results were never published, because positive results are preferred in publication. A few years later, other companies took up the idea of anti-arrhythmic drugs, which were heavily prescribed by doctors unaware of the unpublished dangers. By the time those dangers finally surfaced, an estimated 100,000 deaths had occurred. Hauntingly, without publication bias, this probably could have been avoided [8].
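The mortality signal in those raw counts was not subtle. As a quick illustration (my own back-of-the-envelope check, not the analysis the original investigators performed), a standard Fisher’s exact test on the trial’s 2×2 table already flags the difference:

```python
from scipy.stats import fisher_exact

# 2x2 contingency table from the lorcainide trial [8]:
#                 died   survived
# lorcainide        9       39      (9 of 48)
# placebo           1       46      (1 of 47)
table = [[9, 39], [1, 46]]

odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"odds ratio ≈ {odds_ratio:.1f}, p-value ≈ {p_value:.3f}")
```

The test returns a p-value well below the conventional 0.05 threshold: evidence of excess deaths that, because it stayed unpublished, never reached the doctors prescribing similar drugs.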

We should also consider what kind of test a result must pass to count as ‘evidence’ for a hypothesis. The usual gatekeeper is the p-value, ‘the level of marginal significance within a statistical hypothesis test representing the probability of the occurrence of a given event’ [9]. The conventional threshold of significance is 0.05 or less, which is taken as evidence against the null hypothesis, the scenario in which there is no real effect. On this basis the validity of research can be accepted or discarded by a single number, which is problematic. When the statistician Ronald Fisher introduced the p-value in the 1920s, it was never intended as a definitive test, merely a guide to judging the significance of results [10]. Just as the h-index has its problems, scientific research should ideally not have its success validated by a single number; if it were not, perhaps lower-quality research could be filtered out of the system more reliably.
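A short simulation shows why a 0.05 cut-off is such a leaky filter, especially when hypotheses are retested until something ‘works’. In the sketch below (a toy model with arbitrary group sizes), the ‘drug’ genuinely does nothing, yet roughly 5% of experiments still clear the significance bar:

```python
import random
from scipy import stats

random.seed(0)

# Simulate 1000 trials in which the treatment truly has no effect:
# both groups are drawn from the same distribution (the null is true).
n_trials = 1000
false_positives = 0
for _ in range(n_trials):
    control = [random.gauss(0, 1) for _ in range(30)]
    treated = [random.gauss(0, 1) for _ in range(30)]  # no real effect
    _, p = stats.ttest_ind(control, treated)
    if p < 0.05:
        false_positives += 1

print(f"{false_positives} of {n_trials} null experiments came out 'significant'")
```

A researcher free to rerun or tweak a null experiment until one of these chance ‘significant’ results appears will eventually obtain a publishable p-value, which is exactly the retesting problem described above.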

WHAT IF OUR VIEW OF SCIENCE IS CONSTANTLY CHANGING?

More abstractly, we can question what we believe by considering how science is constantly changing. What we currently believe may well not be the case in a few hundred years; there is no reason to think our generation is the one holding the correct scientific facts. We should remain open-minded to this in our quest for understanding.

Take gravity, for example. Currently, ‘if an otherwise well-executed argument contradicts the principles of gravity, the argument is inevitably altered to make sure that it does not’ [11]. Yet in the words of theoretical physicist Brian Greene of Columbia University, ‘there is a very, very good chance that our understanding of gravity will not be the same in five hundred years’ [11]. Our understanding of gravity has already changed dramatically over time. The first theories came from Aristotle, whose law of gravity stated that ‘any material body falls from above to below because it seeks its correct place in space’, where this correct place is the ‘bottom’ [12]. Aristotle considered the ‘bottom’ to be the centre of the Universe, which was also assumed to be the Earth; hence any material body would fall towards the Earth, as that is where it belonged. This went unchallenged for almost 2,000 years, until Isaac Newton formulated an explicit law that logically explained Galileo’s discovery of the constant gravitational acceleration g alongside Kepler’s discoveries of planetary motion [12]. Newton suggested that, rather than the Earth’s position defining where things belonged, an invisible force ruled how objects moved around the Universe [11]. This in turn was the accepted theory for another 200 years.

Then, a little over 100 years ago, Einstein suggested that gravity is not only a force but also a warping of space and time, and our perception of this mysterious force changed again. It seems bizarre that mankind believed Aristotle’s false idea of gravity for 2,000 years. Why shouldn’t our Newtonian-Einsteinian idea of gravity appear equally strange in another 2,000 years? Can we assume that our more refined scientific method has reached the correct understanding of gravity? We shouldn’t, and science should keep an open mind to progress forward without clinging unnecessarily to old hypotheses.

CAN WE EVER HOPE TO ACHIEVE CERTAINTY? 

The answer is no. But we try the best we can to get as close to certainty as possible, and the scientific method, as we have seen, is not without its flaws.

How could we overcome these flaws? The American psychologist Jonathan Schooler put forward an idea in Nature in 2011 that I find particularly appealing: an open-access repository for all research findings, to which scientists would add their hypotheses and methodologies before an experiment and their results afterwards, regardless of outcome [13]. The ‘successful’ studies, more likely to be published, could then be compared against the full set of studies actually conducted. This would bring more transparency to the scientific community, and perhaps publication bias, and the reduction of results and researchers to single numbers like the p-value and h-index, would be less of a problem. Even so, perhaps our current scientific method cannot be improved much more for now; despite its flaws, it is probably the best method we have to place our trust in. As long as scientists remain open-minded to new ideas, science will continue to progress as it should.

We have only observed the ways of the Universe over a very limited time and in a very limited space. Considering this, how can we expect to understand the Universe completely? Instead we should keep an open mind, and explore the Universe without bias. After all, it’s impossible to understand the world of today until it has become tomorrow [11].

REFERENCES

1 – Okasha, S. (2016) Philosophy of science: Very short introduction. Oxford, United Kingdom: Oxford University Press.

2 – Confirmation bias (2017) Available at: https://www.sciencedaily.com/terms/confirmation_bias.htm (Accessed: 15 January 2017).

3 – Inglis-Arkell, E. (2014) Did a case of scientific misconduct win the Nobel prize for physics? Available at: http://io9.gizmodo.com/did-a-case-of-scientific2-misconduct-win-the-nobel-prize-1565949589 (Accessed: 15 January 2017).

4 – Feynman, R. (no date) Cargo cult science. Available at: http://www.lhup.edu/~DSIMANEK/cargocul.htm (Accessed: 15 January 2017).

5 – Song, F., Hooper, L. and Loke, Y. (2013) ‘Publication bias: What is it? How do we measure it? How do we avoid it?’, Open Access Journal of Clinical Trials, p. 71. doi: 10.2147/oajct.s34419.

6 – Germ, E. (2015) 6 reasons you can’t trust science anymore. Available at: http://www.cracked.com/article_22712_6-ways-modern-science-has-turned-into-giant-scam.html (Accessed: 15 January 2017).

7 – Downing, K. and Ganotice, F.A. (eds.) (2016) World university rankings and the future of higher education. United States: Information Science Reference.

8 – krish (2013) The case of publication bias in evidence based medicine – science and technology. Available at: https://blogs.jobs.ac.uk/science-and-technology/2013/12/19/the-case-of-publication-bias/ (Accessed: 15 January 2017).

9 – Investopedia (2008) ‘P-value’. Available at: http://www.investopedia.com/terms/p/p-value.asp (Accessed: 16 January 2017).

10 – Nuzzo, R. (2014) ‘Scientific method: Statistical errors’, Nature, 506, pp. 150–152.

11 – Klosterman, C. (2016) But what if we’re wrong? Thinking about the present as if it were the past. United States: Blue Rider Press.

12 – Sachs, M. (2004) Quantum mechanics and gravity (The Frontiers Collection). Berlin: Springer-Verlag Berlin and Heidelberg GmbH & Co. K.

13 – Schooler, J. (2011) ‘Unpublished results hide the decline effect’, Nature, 470, p. 437.
