What role do social media play in science?

Jorge Poole

The quarterly results announced by Facebook on 4 November 2015 revealed something truly staggering: on average, 1.1 billion people use the social media site every day. To put that incomprehensibly large number into context, that is more than one seventh of the entire population of Earth. Twitter, Instagram and the professional network LinkedIn can all boast hundreds of millions of subscribers. On these platforms, people do everything from gossiping about celebrities to following breaking global news stories. With these sites now inexorably interwoven into our daily lives, shaping the way in which we interact with each other and consume content, it is becoming increasingly important for science to exploit these tools to remain part of the broader social discourse.

The sheer speed at which social media has grown in the past decade is frightening. While the origins of social networking can be traced back as far as the 1970s and Bulletin Board Systems (BBS), online meeting places accessed over telephone lines via a modem, it was not until the dawn of the World Wide Web that the social network was able to become the omnipresent cultural phenomenon it is today. Myspace launched in 2003 and Facebook in 2004, both to great fanfare, and within five years the two shared a combined user base of over 300 million. This explosion in engagement has produced an endless fascination with the ways in which social media is shaping our society. It has connected friends and family across the world and blurred geographical borders. It has revolutionised journalism and the way in which people acquire news. It is changing the battleground for political debate. It has even become a tool for terrorist recruitment and propaganda. In 2011, that influence was evident in Egypt, where interim Prime Minister Ahmed Shafiq resigned from his post: the announcement was made initially by the Supreme Council of the Armed Forces on its official Facebook page, allowing instantaneous communication with its more than 700,000 “friends”.

The drastic transformations of journalism and politics have been firmly in the public eye, but another, somewhat quieter revolution has been brewing in the arena of social media: the birth of social networks geared towards scientists. Labroots, Quora, ResearchGate, Academia.edu and Mendeley have all entered the market with the same basic premise: building a global network of scientists who share, collaborate and build a new generation of scientific research. The idea of international collaboration is nothing new to the world of scientific research. Contrary to the popular depiction in traditional media of solitary scientists toiling in their labs, virtually all scientific discoveries have been made by groups working together.

The revolutionary aspect of scientific social networks, however, is that where in the past collaborative networks may have been confined to a geographical area or educational institution, today there is simply no limit to their reach. Collaborations can span continents and can even proceed without the need to meet face to face. So much of modern scientific research requires cross-disciplinary expertise that these social networking sites have begun to demonstrate their importance. Research teams that were traditionally made up of a large number of on-site scientists with varying skills can now be replaced by small core teams supported virtually by experts from across the globe. Social networking collaboration is also likely to appeal to funding panels, as it provides a far more cost-effective method of carrying out high-level research.

The startup that seems to be spearheading this social movement among scientists is ResearchGate. With over 8 million users, the platform has grand ambitions to become the fundamental venue for collaborative discussion and peer review. ResearchGate has gained such traction that investors have been lining up to pump money into the company, including Bill Gates, the man crowned richest individual in the world. His high-profile financing of $35 million comes on top of two previous rounds of undisclosed investment. It is plain to see why researchers are rushing to join the platform. Emmanuel Nnaemeka Nnadi was a microbiology PhD student based at the University of Jos, Nigeria. His area of study was mutations in Candida fungal species, but he lacked the funds or sponsorship to perform molecular analyses. Using ResearchGate, he was able to collaborate with Orazio Romeo, a pathogen researcher from Italy who could provide the equipment and expertise required.

The explosion in social media for science, however, has come as a surprise to some, and not without reason. The list of failed attempts to create a Facebook for scientific collaboration runs into the dozens, including SciLinks, 2collab, the well-backed Nature Network and Epernicus. The wariness of researchers to share papers and data online, for fear of being scooped by competing research groups, seemed the obvious explanation. In particular, early-career academics worry that exposing research too early in a public domain such as a social network will open them up to ridicule and perhaps stunt their career progression. In the case of Nnadi, the opposite proved true: following the exposure he received from the publication of his collaborative work, he gained several other research invitations from all over the globe. Success stories such as his continue to draw scientists to the networks, providing them with a burgeoning collection of research papers. The growth of such platforms threatens to upend the paywall model that has sustained the multi-billion-dollar scholarly publishing industry.

In today’s consumerist world, the rate at which modern technologies are rushed from the lab to the marketplace means that media outlets are extremely important for building public awareness of new breakthroughs in science and technology. The major problem for science is that these news reports very often do not distinguish between well-founded and illegitimate scientific findings. There is a tendency in the media to sensationalise and embellish in order to gain an audience, with little thought given to the resulting misapprehensions of the public. However, with the rise of social media, the way in which we access the news is changing rapidly.

A new study, conducted by the non-partisan American think-tank Pew Research Center, found that the share of Americans for whom Twitter and Facebook serve as a news source is rising dramatically. A clear majority of Twitter users (63%) and Facebook users (63%) now turn to the platforms to find out what is going on in the world, a substantial increase from the 52% and 47% respectively recorded in 2013. Twitter in particular has shown its strength in news distribution. For example, the US Geological Survey (USGS) turned to Twitter after an 8.0-magnitude earthquake struck Sichuan, China. By analysing the millions of tweets from users reporting quakes, the USGS was able to pick up on an aftershock in Chile within one minute and 20 seconds, with only 14 tweets from the filtered stream needed to trigger an alert. The news dissemination powers of social media present a golden opportunity for scientists to take back control from media outlets over how scientific news is broken, and to exercise their responsibility to inform the public accurately.
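The detection scheme described above – counting keyword-filtered tweets and raising an alert once a small burst of them arrives in a short time – can be sketched in a few lines. This is an illustrative toy, not the USGS’s actual system; the function name, threshold and window length are invented for the example.

```python
from collections import deque
from datetime import datetime, timedelta

def make_quake_detector(threshold=14, window_seconds=80):
    """Return a function that is fed the timestamp of each
    earthquake-related tweet from the filtered stream and reports
    True once `threshold` tweets fall inside a sliding time window."""
    recent = deque()  # timestamps of tweets still inside the window

    def saw_tweet(timestamp):
        recent.append(timestamp)
        cutoff = timestamp - timedelta(seconds=window_seconds)
        while recent and recent[0] < cutoff:
            recent.popleft()  # drop tweets that fell out of the window
        return len(recent) >= threshold

    return saw_tweet
```

A sudden burst of matching tweets crosses the threshold and triggers the alert, while sparse background chatter never accumulates enough reports inside the window.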

Many scientists have already taken full advantage of social media to develop their public profiles, with some even gaining celebrity status through their virtual following. Science magazine has put together a list of “The top 50 science stars of Twitter”, topped by Neil deGrasse Tyson, the American astrophysicist whose musings reach an audience of four and a half million. There was a time when many in the scientific community dismissed social media as superfluous, merely a tool for distracting ourselves from real events and discussions. However, it is becoming increasingly apparent that social media has the power to make scientists’ craft more pertinent than ever before. It can empower debate and collaboration; communicate advancements; engage and inspire a new generation of researchers. In light of this, scientists must overcome any preconceptions and join the rest of the world on social media in order to facilitate scientific progress.


Scientific Language in Public

Tom Peterken asks whether it matters if we mix up our words

In December 2011, Professor Brian Cox presented a televised lecture covering a number of different concepts in quantum physics. Although he lectures physics undergraduates at the University of Manchester and is renowned for presenting science to public audiences, this particular case stirred up some controversy.

In “A Night with the Stars”, Cox presented us with a diamond. The diamond itself is not relevant, but using a stick of chalk or a ping-pong ball, as another lecturer might, would seemingly not live up to the VIP attendees’ glamorous expectations. Anyway, the Professor professed that by rubbing the diamond between his hands, he was heating up the electrons within it, and that therefore, according to the Pauli Exclusion Principle, as the energy states of our glittery VIP electrons changed there in Manchester, “all the electrons across the universe instantly but imperceptibly change their energy levels”.

This instantaneously (relatively speaking) generated some healthy discussion, and some unhealthy Twitter sniping, among physicists on social media about whether the energy states of all electrons change at all, and if so, whether that has anything to do with the Pauli Exclusion Principle anyway. Aside from some glowing embers on Reddit (see r/Physics/), the consensus appears to be that no, Brian Cox got it sort of wrong here. The most popular explanation of why was written by Sean Carroll on Discover Magazine’s blog (http://bit.ly/1MUXgtA).

However, this “error” was probably just a result of Cox’s attempt to simplify the truth to make it more accessible to his primarily non-scientist audience. The skill of relating this famously complicated topic to the lay person is something Brian Cox is undeniably good at, whether you like him or not.

Regardless of whether he was right, wrong, or somewhere in between in his simplification, this author believes no harm was really done. It would be surprising if anyone went home and propagated this exact half-truth to their friends, family, or the Daily Mail. The likelihood is that whether Cox was right or wrong, most of the guests at this lecture (or at least, the non-scientists among them) took home the message “wow, physics is really cool!”

We can hope.

“It’s just a theory”

As with many things, what is more worrying is when these errors are made not on TV by a celebrity but during serious discussions. Particularly concerning is when politicians exploit unfamiliarity with scientific terminology to their own ends, be it unintentionally or maliciously.

Those of us keeping an eye on political developments across the Atlantic will have been especially horrified when one of the multitude of candidates for President of the USA claimed that, despite his background as a highly successful neurosurgeon, he does not believe in evolution.

We hear frequently from his particular wing of his party that evolution, climate change, and other concepts which do not benefit their worldview are “just theories”.

As members of the scientific community, we can (and often do) explain why evolution and climate change are real and well-documented effects, and discuss the evidence we have obtained and its implications for the “debate”. Many see this as a futile exercise, possibly rightly so.

But I imagine most of us would agree that, sadly, we cannot tell deniers what to think. Doing so would only bolster the belief that we, as keepers of “the truth”, are suppressing contradictory views for our own ends, further entrenching their position.

What we can do (and often do), however, is explain that dismissing scientific evidence on the basis that it is “a theory” is ludicrous.

As most readers will be aware, a scientific theory does not carry the same implications as the word “theory” in everyday usage. A scientific idea describing the particular mechanisms by which various phenomena occur (be it the evolution of species from common ancestors, the recent trends in global climate patterns, or how planets move around the Sun) can only be called a theory when it is substantiated by enough evidence to be accepted as accurate in predicting and explaining those phenomena.

This contrasts with the everyday usage of the word to describe a general idea, often formed on a whim, before it has been tested. In many areas of science, what many people think of as a “theory” would more closely fit the definition of a hypothesis.

Let’s be clear here though. We can’t change how people use a particular word. Not in the immediate future at least.

What we can do, though, is try to ensure that non-scientists are aware that certain words have a different (or perhaps more specific) meaning when used in scientific contexts. We cannot and should not argue against evolution or climate change being theories. What we can do is question people’s concept of a theory.

This cannot be achieved overnight; it should be introduced in general science education at a young age.

Creative advertising, or just plain lies?

It is not a secret that the art of advertising is not always the most truthful. This is another part of everyday life in which the lack of scientific knowledge can be exploited by those who should know better.

An obvious example, which variously irritates or infuriates many scientists, is the frequency with which products are labelled “chemical-free” when such products are, in fact, invariably made of chemical substances.

This particular type of advertising is therefore plainly wrong. However, for the sake of argument, if we give the manufacturers of these products and their advertisers the benefit of the doubt for a moment, we could perhaps assume that their intentions are honest. As with Brian Cox’s quantum confusion, we might accept that the difficult act of conveying a scientific message to a non-scientific audience has resulted in some inaccuracies.

So although, on the surface of it, it doesn’t seem hugely likely that such advertising would have a significant impact on anything (at least until chemtrail conspiracy theorists hold significant public office), the question arises as to where to draw the line. If we are to grudgingly accept that chemical-based substances can be advertised as “chemical-free”, can we then accept the use of physics to justify pseudoscientific concepts such as quantum healing? Is this a creative or permissible form of simplification for the purpose of advertisement?

I think not.

If the former case can be assumed to be a bending of the truth (and I’m sure many readers will not take kindly to such an assumption), the latter is clearly an outright lie, preying on the understandable general confusion about the implications of quantum physics. Neither case is a correct use of scientific language or concepts, and such statements hinder the scientific community’s outreach work, undermining, for example, Brian Cox’s aforementioned attempts to bring scientific interest to public attention.

We must therefore not accept the false appropriation of scientific terminology and concepts in advertising, be it a well-meaning conveyance of a difficult message or deliberate deception surrounding certain scientific concepts. Again, this should start with better education in scientific ideas and terminology, teaching people how to spot the bull. It is also this author’s opinion that the scientific community should lobby for stricter controls on the truthfulness required in advertising, particularly when complicated science is involved.

So let’s accept some mistakes sometimes, but…

We must accept that sometimes, when people like Brian Cox bring a complicated concept into the public eye, some simplifications are going to occur. Nobody should aim to prevent the online scientific community from pointing out where those simplifications break down. However, when people and corporations of real authority prey on a lack of scientific understanding, we must criticise sharply.

As a community, we should work hard to ensure confusion of vocabulary is kept to an absolute minimum, and that the theft of scientific ideas and terminology for advertising is sharply constrained.

But the next time your grandma asks how your astrology studies are going, let’s try to remember that there are more important issues to focus on.


Scientific scepticism

Information overload has led to a dramatic rise in false information. Nathan Goodey asks whether we should be sceptical of the information we read.

Major breakthroughs seem like a common occurrence these days. In the last two months alone, NASA has revealed that water exists on Mars, the WHO has shown that processed meats can increase the likelihood of cancer, and scientists have developed a self-healing, flexible sensor that mimics the properties of human skin. These headline results represent only a tiny fraction of the thousands of papers published across the globe. Some boast work that Einstein would be proud of, others provide evidence to back up theories – but how many contain incorrect data? Motives and opinions differ everywhere you look, and knowing what to believe can be difficult.

We live in a society where information is easily accessible, at any place, any time. We read other people’s work, other people’s opinions and often we take their story to be true. It is easy to forget that this is not necessarily the case. When it comes to science, we must put every ounce of trust in the people who have collected the data. Without having done the research ourselves, we must blindly judge between what we agree and disagree with. The truth is, most of the time, we don’t know any better. We take scientific results to be justified – we have no reason to believe otherwise. In all honesty, to believe these ‘professionals’ is the very essence of faith.

Take yourself back to 2011, when disbelief quickly spread around the scientific community after OPERA scientists released results that appeared to show neutrinos travelling faster than the speed of light[1]. Special relativity was suddenly under question, with no definitive proof that the data were correct. Unsurprisingly, the results were eventually found to be wrong, the fault of an equipment failure. What was simply an honest mistake led to the circulation of false information and unwarranted media attention. I believe the information should have been reviewed more thoroughly before being announced.

As well as honest misconceptions, fraudulent data is a serious problem in science. In June 2014, two stem cell papers were retracted when several other groups failed to replicate the results[2]. Furthermore, a paper on ‘chopstick nanorods’ was removed in August 2013 because an image showed rods copied and pasted into position using computer software[3]. Justice was served when the PhD student who had tampered with the results had his doctorate revoked, but questions remain over how the information was released in the first place. Problems exist throughout the publishing process that allow these erroneous papers to be printed. For instance, peer reviewers read and edit papers voluntarily, sometimes with very little relevant background in the topic or understanding of the science. Unless it is glaringly obvious, a reviewer cannot be expected to know whether a paper is forged without taking the time to replicate the experiment – which they cannot be expected to do. Sometimes, if it concerns a major topic or discovery, a scientific journal may go ahead and publish a paper anyway, even if there are questions surrounding it: the journal would not want to reject work that could enhance its reputation, reap financial reward, or aid a rival journal if turned away.


Who to trust? Flawed nanorod images[3]

All or nothing

Making up data seems irresponsible, mindless and dangerous. Why do some people go to such extreme measures? Money, reputation and personal gain all play major roles. Projects need financing, which is more likely to arrive with ‘successful’ results. For some people it is worth the risk of manipulating data if it means they can secure the funding to continue their research. Standing out for investment is also difficult when you are up against research groups from across the globe, which adds the pressure of publishing conclusions by specific deadlines. Results can end up neglected, with only those deemed important selected for publication. Under such time constraints, it may also be impossible to double-check raw data before publication.

The Schön scandal is a good example of just how easily data can be created, manipulated and dispersed around the world. Jan Hendrik Schön’s field of research was condensed matter physics, and he was hired by Bell Labs in 1997[4]. His findings on semiconductor materials were published in the journals Science and Nature and received worldwide attention – yet no one could reproduce his results. By 2001, he was publishing one paper every eight days; at this rate, alarm bells should have been ringing. Eventually, his results were found to be forged when someone compared graphs from two of his papers and discovered that they were identical. Following a formal investigation, the majority of his papers were retracted and his doctoral degree was revoked for dishonourable conduct. This remains one of the biggest known data frauds to date. Although the scandal did bring to light that peer review required reassessment, how can the publication of dozens of completely fraudulent papers be explained? One can only guess that they were printed on the strength of his reputation, rather than the work he had actually written.

Concluding that fake information is only the fault of scientists would be naïve, however. Large corporations are just as likely to manipulate data for their own benefit. In 2006, the consumer goods giant P&G was in dispute with Aubrey Blumsohn, who was refused access to drug trial data needed to verify conclusions published in his name[5]. The company had concealed evidence and presented the conclusions it wanted rather than those the data supported. Of course, corporations want only a positive public image; any negatives are swept under the rug. It is more cost-efficient for them to alter their outcomes than to adjust their products.

Judgement day

It is a relief that faulty data has been discovered and rectified – but the worry is that it happened at all. While some erroneous results are honest mistakes, falsification leads to wasted time, misconceptions and difficulty in discerning the truth. It should not be so easy to publish fake results, and such circumstances should be avoided at all costs.

With the vast amount of information we see every day, be it through education, television or the internet, it will only become more difficult to know what to believe. Unlike the few who know the ins and outs of a specific subject, the vast majority of us are left to decide whether to trust the data and conclusions of those who undertook the project. It is easy to be sceptical of what we read, and to an extent that is a good thing. Asking why and how, and not always taking something to be true, can enhance our understanding; but being suspicious of everything we read is counterproductive – common sense should be used where possible.

I expect that only more cases of incorrect, fraudulent and manipulated data will arise in the near future and that attempts to fix it, most likely through the re-evaluation of peer review, will be undertaken. However, I cannot see a simple method of removing the problem. It is inevitable that mistakes are going to be made. What is important is that we are careful, we stay vigilant and we don’t believe everything we read.


[1] https://en.wikipedia.org/wiki/Faster-than-light_neutrino_anomaly

[2] Papers on ‘stress-induced’ stem cells are retracted, David Cyranoski, doi:10.1038/nature.2014.15501

[3] http://cen.acs.org/articles/92/web/2014/11/University-Utah-Concludes-Investigation-Controversial.html

[4] https://en.wikipedia.org/wiki/Schön_scandal

[5] https://www.timeshighereducation.com/news/payout-in-pg-drug-data-row/202393.article


Overcoming The Gender Gap in Physics

Samantha Flavell

It is a sad fact that in 2010, only 21% of Physics Bachelor degree students in the UK were female [1], a statistic which would undoubtedly come as little surprise to the majority of the population. Tracing this gender imbalance one step further back reveals a very similar statistic: in 2011, 20% of physics A level students were female [2]. This certainly has some role to play in the UK’s damaging deficiency of women working in STEM, which, according to a government publication on women in scientific careers [3], could have adverse effects on the country’s economy. Our economy requires more skilled scientists and engineers, and this demand cannot be met without increasing the proportion of women in these careers from the dismal 13% that it stands at today [4]. Clearly, in order to address the lack of female physicists, efforts need to be directed at an early stage in education, namely towards those students in the impressionable years of adolescence.

An Institute of Physics publication, ‘Closing Doors’ [5], draws attention to the difference in A level take-up between females in co-educational and single-sex schools. My experience of a single-sex secondary school gives me an insight into this particular area; any notion of physics as a ‘boys only’ subject had no effect when I made the choice to continue studying physics at A level. In my opinion, our school felt sheltered from any stereotypes which could dissuade us from physics, and the all-female staff in our well-stocked physics department further buffered us from them. However, the classroom still felt a little bare, particularly compared to my A level English class. The Institute of Physics has revealed that such a situation is common throughout the UK, as shown in Figure 1 [5]. In 2011, only 4.3% of female pupils in single-sex schools went on to study physics A level, compared with 18.7% in biology. While this number is small, it is more than twice the percentage of female pupils who chose physics in co-educational schools, which stood at 1.8% in 2011. This difference indicates that the mixed-gender environment is having an effect. This could be because there is indeed gender stereotyping in schools, or because some schools aren’t doing enough to eradicate the gender biases that pupils may be experiencing in other areas of their lives.


Figure 1: Percentage of girls and boys who went on to take science A levels in 2011 from maintained co-ed and single-sex schools [5]

An OFSTED report [6] into the career aspirations of female pupils reveals that children are aware of gender stereotypes from a young age, and already perceive certain occupations as either ‘girls’ jobs’ or ‘boys’ jobs’. Secondary schools, therefore, must actively counteract these misconceptions, rather than simply expressing a view of gender equality. The report goes on to explain that while career guidance is given in schools, not enough is being done to challenge these stereotypes and promote STEM careers to female pupils. One particular shortfall in career guidance is the lack of insight into specific careers. It was noted that the girls defying the trend and choosing to pursue male-dominated fields had gained first-hand experience of a STEM career, through either work experience or conversations with a professional.

The statistic quoted in the introduction, that the national average of female A level physics students is 20%, encompasses a wide range of individual school performances. Some schools are managing to soar above the national average and are heading towards a gender balance in the subject, while, at the other end of the scale, a shocking 49% of state-funded, co-educational schools sent no girls on to study A level physics in 2011 [5]. Clearly, workable approaches to this problem exist. In a 2012 Guardian article [7], a teacher from Lampton School in Hounslow revealed how they achieved an above-average intake of female A level physics students, describing the school’s efforts to change the perception of physics by inviting speakers to come and talk to the pupils. Another school defying the trend is Cheney School in Oxford [8]. This school also actively encourages participation in physics and, through its WISE (Women in Science and Engineering) club, explicitly addresses the perception that physics is less accessible to women. Another significant contribution to its success is the proportion of female teachers in its science department. Having these role models, in the form of teachers and professionals who come into a school to inform and inspire the pupils, is vital to encouraging a gender balance in A level physics.

The influence of enthusiastic, strong teachers with a proactive attitude towards this issue should not be underestimated, and these teachers don’t necessarily have to be female. This is why the shortfall in the number of physics teachers beginning teacher training, shown in Figure 2 [9], combined with the large numbers leaving the profession, will stunt progress towards a gender balance. How can pupils be inspired to learn physics without teachers to inspire them? In a survey by the National Science Learning Network involving 1,200 science teachers, 61% had considered a career change [9], many citing excessive paperwork and unrealistic expectations. This calls for changes to the education system to lighten the workload on teachers and keep these valuable professionals happy in their career choice.


Figure 2: Graph to show the shortfall in the number of trainee teachers recruited in September 2015 [9]

The government’s strategies to address the lack of teacher trainees involve some very tempting bursaries for those beginning training in 2016 [10]. While it gives me great comfort to know that the year of teacher training awaiting me upon graduation will be cushioned by a potential £30,000 of ‘free money’, I can’t help but be a little sceptical about the need for such a large sum. The money I will receive in my training year will exceed the salary I expect to earn in my first year in the profession, which will be around £22,244 [11]. Surely there are better ways to invest at least a portion of that money. Those of us who have the enthusiasm and motivation to pursue teaching do not require such a hefty pay-out to persuade us into training, and in my opinion, those who enter a teacher training programme for the money are unlikely to spend as long in the career, or to have as much of an impact on their pupils.

The gender imbalance in physics is a crucial issue that all schools need to prioritise. More attention and funding needs to be directed towards promoting physics as an interesting subject that can lead to a wide range of careers. Secondary school pupils are required to make A level choices that will determine their career path, and they need to be equipped with enough knowledge and forethought to ensure they are making the right decisions.



  1. Table 7 in https://www.iop.org/publications/iop/2012/file_54949.pdf
  2. Institute of Physics, ‘It’s Different For Girls’, October 2012, http://www.iop.org/education/teacher/support/girls_physics/file_58196.pdf
  3. House of Commons, ‘Women in Scientific Careers’, January 2014, http://www.publications.parliament.uk/pa/cm201314/cmselect/cmsctech/701/70105.htm#a1
  4. WISE (Women in Science and Engineering) publication, July 2015, https://www.wisecampaign.org.uk/uploads/wise/files/WISE_UK_Statistics_2014.pdf
  5. Institute of Physics, ‘Closing Doors’, December 2013, http://www.iop.org/publications/iop/2013/file_62083.pdf
  6. Ofsted, ‘Girls’ career aspirations’, April 2011, https://www.gov.uk/government/uploads/system/uploads/attachment_data/file/413603/Girls__career_aspirations.pdf
  7. http://www.theguardian.com/education/2012/dec/30/why-dont-girls-study-physics
  8. http://www.bbc.co.uk/news/science-environment-25243275
  9. http://www.theguardian.com/education/2015/sep/15/not-enough-teachers-science-shortage-teaching-jobs
  10. https://getintoteaching.education.gov.uk/bursaries-and-funding
  11. https://getintoteaching.education.gov.uk/why-teach/competitive-salary-and-great-benefits



Context and Short Papers

Background and context often take a back seat in scientific writing. Matthew Blindt explains why they shouldn’t and suggests a way to add them to papers without increasing strain on journals.


More scientific papers are being written than ever before. Science blogger Duncan Hull of the University of Manchester calculated that roughly 1.29 papers are published per minute, so it should come as no surprise that high-impact journals simply don’t have enough pages to keep up. This lack of space encourages editors to publish only shorter papers and articles so that they can include more of them per issue.

Because of the pressure on academics to have a high rate of publication in reputable journals like Nature or Science, they tend to write papers whose lengths are chosen to maximise their chance of being published – not to maximise the amount of relevant information they contain. To pander to editors, scientists have to cut sections from their papers, and context is an obvious choice.

By context I do not only mean prerequisites to understanding the topic being discussed, but also recent results from relevant studies, the setting in which the phenomenon occurs, and the areas in which findings could be applied. Context is a vital part of scientific writing that can help readers make sense of complicated findings, but it is all too frequently absent in modern scientific literature.

It is harder to write concisely than to be verbose, and it makes sense that writers are asked to put in a little additional effort to make their information succinct. However, imposing word limits means that on occasion scientists are forced to sacrifice some content (and, worse, clarity) in order to be published. For instance, Nature states that articles should contain around 3000 words. But what can be explained adequately in 3000 words can often be explained well in 5000. Why should we settle for adequate? We can do better.

What if journals continued to print the same length of paper, but offered writers the opportunity to submit a supplementary article to be made available exclusively on the journal’s website? This would allow writers who are particularly keen on conveying their research to a wider audience to include background, context and even discussion about what results mean and the hidden complexities in the method. In addition to putting the context back into scientific writing, this would increase the transparency and the accessibility of the research.

I’m going to refer to the two parts of the paper as the (published) core paper and the (online) supplementary information. The core paper would be no different to those which are currently published by journals and is the first point of contact for readers. By keeping the same short format in print, articles may maintain their impact and punch to initially captivate the audience. The supplementary information is for people who wish to better understand the topic or find more details and background.

For this idea to work, it is important that the core paper and the supplementary information are both hosted by the same journal. It will already have the readership, the editors and the online infrastructure to easily and expertly publish both parts – and vitally, it will have the ability and incentive to advertise the supplementary piece to readers of the core paper. This is because journals will now be able to receive traffic from readers who previously would have had to look elsewhere for extra information to help them to understand a paper. This will benefit the journals, as more traffic on their websites will generate more revenue for them.

You may be wondering why writers don’t ignore print journals altogether and instead submit long papers online. The problem with doing this is that online journals care as much about impact as paper journals do. Readers are put off by longer papers, and if people don’t read them, then online journals won’t publish them – it’s that simple. The short paper format is still crucial to draw in readers in the first place.

Many writers will not feel like they need to use additional online space to fully make their points – some topics can be covered well within a word limit. However, by giving writers this choice we avoid punishing scientists who study more complex phenomena. At present, scientists who have performed more involved research or obtained richer results will in general appear to convey their information less clearly, as they have the same amount of page space to convey a greater amount of information. This no longer needs to be the case. Now they are able to clarify and offer more detailed explanations to interested readers online without having to direct them away from the jurisdiction of the journal in which their core paper is published.

Both parts of a paper will still almost always be written by scientists, for scientists. One big difference, however, is that where presently a cosmology paper would rarely be understood by a scientist outside of the field, now the supplementary information might help the reader to understand it without having to perform a large amount of external research – and hence previously unlikely collaborations are made possible.

An obvious drawback to this idea is the increased write-up and editing times (and of course since both scientists and editors are on a salary, this will increase the write-up and editing costs). This is a valid criticism. However, I believe that this is a small price that many writers and journals will be willing to pay in order to better inform their readership. Journals may even see a long term increase in traffic revenue that offsets the editing cost. Either way, for a fairly insubstantial cost, we get the best of both worlds – we get more information in our journals and we avoid making them any fuller than they already are.

At a conference celebrating the 350th anniversary of the world’s oldest journal (Philosophical Transactions), Stuart Taylor, Publishing Director at the Royal Society, argued that “we are limiting ourselves in the present model by thinking of science journals as products purchased by a consumer.” Cameron Neylon, Advocacy Director at the Public Library of Science, added, “Scientific communication is a means of dissemination, it is not a product.” Perhaps after 350 years of scientific journals of the same format it is time for a change. If we do as Neylon suggests and view papers solely as a mechanism for distributing scientific information, then investing in passing on more information to the reader is clearly worth it. A system that permits scientists to include context and tell the whole story rather than forcing them to adhere to an arbitrary and restrictive word limit is long overdue.

Matthew Blindt is a fourth year student studying mathematical physics at the University of Nottingham. Comic by Mari-Anne Copeland.

Exams, Students, and The Invisible Hand

Joseph Shankland

The thought of an invisible hand aiding students during an exam is sure to fill the sternest of invigilators with terror. Yet with exam boards across the UK being governed largely by free-market economics, it is something that has been occurring, and distorting the UK’s education system, for over a decade. Further, there is a clear chasm between the content of current science GCEs and those taken by previous generations, particularly in the case of A-level Physics, from which the very language physics is expressed in, calculus, is missing. This lack of integrity in current GCEs raises a plethora of questions: how can top universities determine the calibre of students when one in four attain A*–A grades? And how can undergraduate ‘physics students’ be prepared to delve into the realms of physics without ever having been exposed to its true language and context? The antinomy of a system consistently awarding students higher grades, to the detriment of meritocracy, is, I am sure, not lost on many. Perhaps the rhetoric of ‘Education, Education, Education’ has not given people a hand up, but instead pulled the bar down.

In fairness, to identify the inflation of grades as a political by-product of here-today, gone-tomorrow politics is, at best, flippant. Instead it is caused by a much more intrinsic problem stemming from the way qualifications are administered within the UK. The way GCEs are awarded in the UK differs widely from that of other countries, in that there is no single unified body that awards them. Instead, private companies design and create courses based on a government-stipulated curriculum. These companies are then allowed to compete with one another to solicit contracts with schools and colleges across the UK – an industry estimated to be worth £300 million. Naturally, with private, profit-driven businesses competing for market share, market forces begin to operate. The forces operating, and the mechanism through which grade inflation has occurred, are relatively simple to understand: if a product is intrinsically better (i.e. an awarding body statistically gives higher grades), then demand for that product increases. Factoring in the desperation of schools trying to climb the league tables, it is no surprise that exam bodies have continually (until recently) been awarding higher grades. Moreover, there have been several public spats in which awarding bodies accused one another of setting exams that are too easy – proof that the issue exists, and that free-market economics is still influencing the education system.


This acknowledgement of an increase in higher grades being awarded is not new, and has been brought up year upon year as each new set of GCE results comes in. However, in 2012, and for the first time, Ofqual admitted that there had been systematic and continued grade inflation throughout GCEs in England. One of the solutions mooted during the circus of events surrounding Michael Gove’s tumultuous reign as education secretary was to create a single nationalised exam board. This would supposedly stem the ‘race to the bottom’ by removing the competition between different awarding bodies. A Tory minister advocating the dissolution of a free-market industry, the abolition of competition and a new nationalised body?

It is no wonder that, like many of Gove’s policies that were either swiftly implemented or quickly forgotten, this notion was swept, yet again, under the rug. Instead Gove – who was once called a ‘demented Dalek’ in response to his suggested policy of punishing pupils with ‘community service’ (and who is, incidentally, now the minister for justice) – instructed Ofqual to make GCEs harder. In fairness to his directives, this is exactly what has happened. The first set of GCE results released under Gove in 2012 showed, for the first time, a reduction in the number of A*–A grades awarded, and since 2012 there has been a consistent deflation in the proportion of higher grades. If this trend continues, or at least settles at some respectable equilibrium in which there is sufficient distinction between the calibre of students, would it be fair to accept the current system as adequate?

Unfortunately it isn’t so simple. Even if the distribution of grades awarded is ‘correct’, the issue of having several awarding bodies still undermines meritocracy, since even after grade deflation, awarding bodies can still diverge in the spectrum of grades they award. For example, in 2015 the difference between the proportion of A*–A grades awarded by AQA and Edexcel in A-level Physics was 1%, so a student would not be meaningfully disadvantaged by the exam they sat. Compare this, however, to A-level Mathematics, where 4% more candidates attained A*–A grades with Edexcel than with AQA. On the surface 4% doesn’t seem substantial, but considering the bell-curve nature of mark distributions and the number of students who potentially missed out on a university offer – which they might have met had they sat an exam from a different board – it is surely unjust.

I understand that, to some reading this, I may sound bitter – that I may have missed a university offer by the slimmest of margins, one I would have received had my school opted for a different awarding body. In actuality, my A-levels in Economics, Politics, History and Psychology have had next to nothing to do with my university course. Instead I opted for a foundation year at the University of Nottingham, where the course was essentially tailored to the corresponding undergraduate physics degree. This is relevant because, during first year, I found myself substantially better prepared to tackle the course than my peers, the vast majority of whom had come straight from A-levels. We had been, albeit minimally, exposed to the use of calculus in physics and understood the context in which physics has to be approached – mathematically. This was in stark contrast to some peers, who became wide-eyed at the realisation that a degree in physics was not as wondrous and mesmerisingly philosophical as Brian Cox would have us believe. Therein lies yet another way the education system has failed some students: the course itself, in the most basic way possible, has failed to prepare them for their future endeavours – which is surely the most fundamental point of education.

Regardless, it seems incredibly unlikely both that calculus will be re-incorporated into A-level physics and that the government will move to a single exam board, despite Nick Gibb recently suggesting as much in August. Further, I’m sure the Old Etonian lobbyists working at the behest of the awarding bodies would prove incredibly influential with old boys Dave and George. Thankfully, grade inflation has started to reverse, and the regulation introduced has begun to curtail the free-market behaviour of the awarding bodies. Yet, looking at the graph of grade inflation, one cannot help but think about how the supposed invisible hand theorised by Adam Smith has developed a very physical touch.

