Do social media have a role to play in the scientific process?

Is social media a tool to shape the future of science? 

Ryan Jones

Images like the Facebook thumbs-up and the Twitter bird are everywhere, online and offline. Features like ‘share’ buttons now create a practically seamless join between social media and the rest of the internet, and therefore with any device from which we access it. In this article, I want to discuss the following questions: do social media have a relevant impact on science, and should we be encouraging this relationship?

The ‘edutainment’ revolution

In its mere nine years of existence, YouTube has created a mind-bendingly large catalogue of user-generated video content, available for view on an expanding number of platforms. It’s now easier than ever for us to share everything – our ideas, experiences, and even our unfocused ramblings – with the internet. Members of the scientific community have contributed to this platform in a variety of ways. For me, the most important way in which YouTube aids the scientific process is in the distribution of knowledge, whether new or existing. The vast majority of content is free, so for the first time in history people have the opportunity to learn about science whenever they want, wherever they want. Video content is available at pretty much any level, from the concise, digestible explanations of Minute Physics, to the introductory courses offered by the Khan Academy, right up to the entire university-level lecture series published by MIT through their OpenCourseWare initiative.

What I find really appealing about this new genre of ‘edutainment’ is how accessible it is. YouTube has permeated our society deeply enough to reach a huge number of people – importantly, those who would otherwise be left without access to a good science education. A good education is important to the scientific process – how can we continue science if we can’t inspire the next generation to get involved? If YouTube inspires even a small number of otherwise uninterested people to become scientists, I think that is a success.

I could rave on about the successes of YouTube for a long time, but it does have its flaws. One disadvantage of this method of information transfer is that there is very little control over how far it travels. In the case of YouTube, building up a large number of subscribers can provide a guarantee of viewership, but building up this base in the first place is by no means an exact science. It’s hard to get noticed on such an expansive platform, especially now that large companies have established an online presence, with recognisability and brand loyalty on their side. It’s definitely easier to get your content shared around social media circles if people are already viewing your content via other means, and for this reason I think that social media could contribute to the idea that brand image is more important than content.

Another problem is that in a learning environment, misinformation and information don’t necessarily travel at the same speed. Consider the following example: some exciting new discovery in science is talked about in a YouTube video. For whatever reason, the video gains a lot of traction. Later, it transpires that the discovery is a hoax. There is no real guarantee that this correction will reach the same number of people, or even the same people. All we can really do is put the news out there and hope for the best. This is not something characteristic of social media specifically, but rather of media in general. If we wish to use YouTube as a method of communication, clearly we also have to deal with the consequences of doing so.

The Importance of Presence

Like many other industries, science is developing its presence on social media sites. Stephen Hawking created a Facebook page last month. ATLAS has its own Twitter account. This presence means that keeping up with science can now be as natural a process as keeping up with other people on social media. What does that mean for science?

Social media has created a means of direct contact between researchers and the public. In general, platforms like Facebook and Twitter are more informal and personal than other media, and by putting themselves out there, I believe that researchers are breaking down any perceived social barriers between themselves and the general public. As well as generating a huge amount of exposure for science, use of these sites creates a largely open forum. Communication like this was not impossible before social media, but it has never been more casual, convenient and widespread. By existing in the same social media sphere as everyone else, researchers have opened their doors to pretty much anybody who wants to express their concerns, ask questions, or just have a conversation. Obviously, this has constructive and destructive consequences, as anyone who has invested enough of their time in internet debates will know!

Maybe this type of interaction could be beneficial to the scientists too. Social media produces feedback like no other platform, in terms of its sheer volume and diversity. It is arguably one of the most genuine and unfiltered forums there is, and I think scientists should embrace this. Millions of people use social websites every day anyway, so why not take advantage of this to get feedback on your thoughts and ideas, and hear directly what the public thinks is important, rather than relying on facts and statistics?

Thanks to social media, the gap between science and the public has never been narrower. I hope that this is a trend which will continue, in order to develop a society which is more scientifically informed. A mutually beneficial relationship is formed, as the public can become better informed about science through direct interaction with the people doing the research, and researchers can easily take on board the ideas and concerns of the public. This interaction should lead to a scientifically informed society, and an informed society is one which, in general, makes better decisions about its future, after all.


Is science in popular media damaging to academic science?

Rebecca Andrews

Enhance. Enhance. Enhance it some more!

…say the impossibly beautiful characters in white lab coats while poring over some grainy CCTV footage. An instruction to “increase the resolution” yields a perfect image from a reflection in someone’s eyeball. From CCTV footage. (I’m not kidding – this actually happened on an episode of CSI: NY.)
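The reason the “enhance” trope is pure fiction comes down to information: detail the camera never recorded cannot be conjured back by any amount of processing. As a toy sketch of my own (a made-up example, not anything from a real forensics tool), here is the idea in a few lines of NumPy:

```python
# Toy illustration: once an image is captured at low resolution,
# the fine detail is gone, and upscaling cannot recover it.
import numpy as np

rng = np.random.default_rng(0)
original = rng.random((64, 64))  # a hypothetical high-detail scene

# Simulate a low-resolution CCTV capture:
# average each 8x8 block of pixels into a single pixel.
low_res = original.reshape(8, 8, 8, 8).mean(axis=(1, 3))

# "Enhance": upscale back to 64x64 by repeating each pixel.
enhanced = np.kron(low_res, np.ones((8, 8)))

# The upscaled image has the right size but not the original information.
error = np.abs(enhanced - original).mean()
print(f"mean reconstruction error: {error:.3f}")  # far from zero
```

Any upscaling algorithm, however clever, can only interpolate or guess between the pixels that were actually captured; the reconstruction error above never reaches zero because the fine detail simply no longer exists in the data.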

There’s also a scene in the latest Star Trek movie where the Starship Enterprise spectacularly crashes due to Earth’s sudden gravity, which can be proved illogical using some fairly simple physics.
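As a back-of-the-envelope sketch (my own numbers, assuming the ship starts from rest at roughly the Moon’s distance from Earth, as the film suggests, and falls under ordinary Newtonian gravity):

```latex
a = \frac{GM_\oplus}{r^2}
  \approx \frac{(6.67\times10^{-11})(5.97\times10^{24})}{(3.8\times10^{8})^2}
  \approx 2.7\times10^{-3}\ \mathrm{m\,s^{-2}},
\qquad
t_{\text{fall}} = \pi\sqrt{\frac{r^3}{8GM_\oplus}}
  \approx 4\times10^{5}\ \mathrm{s}.
```

That is an acceleration thousands of times weaker than at Earth’s surface, and a fall time of several days – not the minutes of screen time the crash takes.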

TV and films are undoubtedly the worst culprits for presenting as real science something that is just plain wrong. There are countless examples of this (admittedly of varying severity) in the countless episodes and films churned out into the public consciousness every year. And on a ‘scientific’ show like any of the CSI franchise, an audience’s attitude could well be: “well, they wouldn’t get the science wrong on a science show!”

The “fiction” part of science fiction should of course always be borne in mind – artistic licence can be invoked to move the plot along, whatever the cost. But if the audience comes away with false ideas about what is being, or could be, achieved, is that not damaging to the perception of science as a whole? The so-called ‘CSI effect’ has meant that juries in America place too much importance on forensic evidence, and have inflated expectations of what it can do.

However, I think we assume a large amount of naivety in the audience here – while watching a TV show like CSI or a film like Star Trek, no one other than the most pedantic scientist is going to be looking for errors. Any high-tech or futuristic content is assumed (if thought about at all) to either be based on actual science, or is so outrageous that it clearly isn’t possible; in which case all is forgiven in the name of suspension of disbelief.

So while an infinitely slow-moving laser or an infinitely resolvable CCTV image may rile up scientists no end, most people dismiss it in order to keep watching these plucky lab geeks catch their next serial killer.


So let’s look at a slightly more rigorous format of displaying science to the public: documentaries and other educational TV programmes. Television is arguably one of the best ways of educating the public at large, so how does it fare against fiction?

The most obvious example of an educational science show to choose is perhaps Brian Cox’s ‘Wonders of the Solar System’ and its later spin-offs. The first series attracted over 6 million viewers, and many people (myself included) can at least partially credit his programmes, and others like them, for getting them excited about science and astronomy in particular. I will go out on a limb and say that his shows probably didn’t contain any glaring scientific errors (although I’m sure someone will correct me), and on the whole did a great deal to educate and interest the general BBC-viewing public.

But Brian Cox has long been accused of ‘dumbing down’ the science to the point where it is, at worst, just plain wrong. A video on the University of Nottingham’s ‘Sixty Symbols’ YouTube channel discusses a misleading sentence in his televised lecture on quantum theory. But, as is discussed in the video, he got the general message across, and only the most pedantic scientists even picked up on his choice of wording.

Whatever your opinions on Brian Cox, it’s hard to argue that he hasn’t done his bit for educating the public.

And what about other TV programmes? ‘Bang Goes the Theory’ is another obviously educational programme, whereas ‘Brainiac’ and its American counterpart ‘Mythbusters’ are slightly more roguish and playful. And it’s in looking at these that we find the real problem that scientists have with their discipline being represented in popular media – a lack of rigour. The ‘Mythbusters’ team are constantly performing experiments, jumping up and down, and excitedly claiming “this is real science, folks!”

Except it’s not. ‘Real Science’ quotes prior research, involves lots of reading, and is peer-reviewed and made to jump through publishing hoops before anyone can see it. At no point does ‘Real Science’ blow something up just to see what would happen (at least without having a good idea of what would happen first).

But it’s exciting, and relatable, and gives the viewers a feeling of “anyone could do this!” And that’s the trade-off that shows of that kind have to make.

There are many other ways of simplifying science and presenting it in bite-size chunks. This is really well illustrated by the hugely popular ‘I F***ing Love Science’ Facebook page, which has over 1.85 million followers, and the ‘This Week In Science’ images which regularly make the rounds on sites like Facebook and reddit. Both of these attempt to showcase recent scientific breakthroughs in short and snappy articles, or with just a picture and a single sentence. Seeing “prostate cancer could be ‘switched off’” next to a picture of a man in a white coat may well be misleading, and maybe inevitably so, but these social media quips do keep people up to date with the latest in science, and may well spur them on to finding a longer article that explains it better.

And what’s the alternative to this headline-grabbing, ‘click-bait’ approach in the blink-and-you’ll-miss-it world of the internet? Somehow I doubt that posting a link to a scientific paper is likely to spark quite as much excitement.

With all this excitement in the air, I’d like to showcase a webcomic that gets trotted out (by some quite condescending scientists) when spectacular scientific breakthroughs happen and some member of the public gets a bit more animated than usual.

This highlights the issue of lack of rigour again, and a feeling that ‘proper science’ shouldn’t be all about the fun parts.

But hang on a second. Are we really lamenting the fact that the general public don’t find the minutiae, the self-labelled “boring bits” of what is essentially just another day job, exciting enough? Surely we would say the same for lawyers, doctors, teachers or politicians? We can’t expect non-scientists to enjoy the boring bits of science – if they did, they’d be called scientists.

This interest in the cool and exciting bits of science is exactly what we need to persuade more people to take up the sciences at A-level and degree level, which can only help academic science as a whole. It seems to have worked so far: recent figures reported in the Guardian show that more students than ever are going into the sciences.

Another thing that could be said for the recent trend of science coming into the public domain through popular media is that it has done something towards changing the stereotype of ‘scientist’. To generalise hopelessly, it has gone from requiring a mass of crazy white hair and a blackboard, through the CSI lab technicians with perfectly coiffed hair, to a more normal, relatable figure. The control room pictures which emerged after Curiosity’s Mars landing and Philae’s comet landing have most certainly helped to push towards a view that scientists are normal people, usually doing awesome stuff – many comments were made about Bobak Ferdowsi’s punky mohawk, and of course Dr Matt Taylor and his questionable shirt choices are a long way from the old stereotypes.

Maybe it’s just my dislike of the show, but the ever-popular The Big Bang Theory doesn’t help the effort to dispel a harmful stereotype. Yes, it has brought a science-based show into popular culture, and yes, it is a sitcom in which the majority of characters are scientists. But it has only cemented the ‘geek’ stereotype which, as much as it shouldn’t, will push children away from taking science, especially physics. The jokes in the show come at the expense of these nerdy scientist characters – we are expected to laugh at them, not with them.

On the whole, although outlandish sci-fi technology doesn’t particularly hinder mainstream science, any damage it has done can be easily mitigated by persuading people that science is not some immensely complex and highly demanding elite clique, and by persuading more young people to go into the sciences. Science is, and should be, accessible to all, so any effort to get people excited about it (however fictional the setting) is an excellent thing to do.


Should scientists have to justify their research in terms of socio-economic impact?

Piyumu Athulathmudali

Welfare, adjustment to the changing environment, food security, global threats to security, cultural heritage, transportation, the digital economy and a myriad more. The mix of social and economic factors that might come into play when assessing the worth of scientific research is extensive, and a lot of it depends on who you ask. Who decides what these factors are, and which should come at the forefront of decisions? How much does public opinion have to do with any of this, and how much should it?

The UK spends 1.79% of its GDP on research and development, falling below the EU average of just over 2%. With public R&D funding limited as ever, these questions continue to be important.

How should public funding be prioritised? Science for science’s sake doesn’t do well in the popularity contest of driving forces. It is hard to argue its case in any compelling way without appealing to emotion at the expense of logic and impartiality.

Much of the science that inspires us falls under the category of ‘blue-skies research’ – curiosity-driven, fundamental research. The ideas underlying works of science fiction – unexplained phenomena, the ultimate fate of the universe, interstellar travel – often do a lot for the image of science as a desirable field to be involved in. This is an easy idea to overlook or dismiss when considering the impact of research, but inspiring future generations is vital. Exactly how vital is difficult to quantify.

A popular belief is that the government ought to seek out research that is more focused on pressing matters, with more apparent goals or potential outcomes. Should anything else be considered a luxury that we cannot afford? Or a luxury on which we can afford to allocate a tiny sum (of the already tiny sum available)? Could the money spent on the Large Hadron Collider have been better spent?

There is another way of looking at this. Private enterprises are usually concerned with funding those projects for which the likelihood of a profitable outcome is high enough, and obviously so. This is easier to do when the outcome is a specific one. Perhaps then, scientific progress would be optimal if the government more readily focused on those other projects, for which the consequences are much less certain – less obviously likely, but not less likely, to have beneficial outcomes.

It is difficult to quantify the direct socio-economic effects of scientific research, and things are further complicated when indirect effects are considered. No effective way to measure or quantify the socio-economic impact of research has yet been demonstrated. Indeed, it might be easier to show, from the nature of scientific advancement itself, that no reasonable evidence-based approach to measuring impact is feasible, than it would be to actually devise one.

Unlike prediction as a stage in the scientific method itself, predicting the consequences and impact of a piece of research is vastly more complicated. Given the complex causality involved, and the heterogeneity of the innovation process, this complexity should be expected – but is it? We have all heard of unexpected, even accidental discoveries, penicillin being a notable example. But does the general public write these off as a few rare occurrences, or do we really understand how it can all pan out?

“The fewer the facts, the stronger the opinion”, as humourist Arnold H. Glasow put it, is an idea often apparent in views surrounding science. Should greater effort be put into informing the public about the unpredictability of research outcomes, to shed light on worries about wasted taxpayers’ pounds? NASA has made a point of doing this with its Spinoff publication. A NASA ‘spinoff’ refers to a technology originally developed for NASA’s mission requirements that has since been implemented in day-to-day life through commercial products and services. Memory foam, tumour-fighting light-emitting diodes, invisible braces, and enriched baby food are just some examples.

It can often take a long time to commercialise research results, and time is money. But again, how are these costs and benefits to be assessed? When can we say that the investment is justified?

The Haldane Principle

One of the principles behind British research policy is the Haldane Principle – the idea that researchers, rather than politicians, should decide how research funding is dispensed. Its effect seems to be to boost curiosity-driven research over goal-driven research. Sometimes accused of being out of date, the principle is thought by some to be ripe for abandonment. Others agree with the Royal Society’s view that “Rather than a debate about what Haldane meant in 1918, we need a better understanding about the way in which the Government now interprets the Haldane Principle”. In my view, it should continue to be implemented, but it should also be extended so as not to impede more goal-driven research.

 Public opinion

Certain topics shine through as flashpoints of public opinion on the methods and directions of scientific research. Animal experimentation yields particularly strong opinions, and opinions driven by morality are the least likely to change. An important consideration, especially under the Haldane Principle, is how representative the scientific community is of the general public in terms of moral values. Is there any reason why views should diverge in this regard? If so, where would the differences lie between the views of scientists, as a sample of the population, and the average view of the population? Would these differences be eliminated if the public were better informed? Better information would give people a firmer basis on which to assess benefits and risks, and to make their own informed decisions about controversial issues such as GM crops. The more familiar the public is with the facts, the more beneficial it becomes to heighten their involvement in policy-making decisions.

A new balance   

Curiosity-driven research drives curiosity itself. It also has the potential to bring about many unplanned benefits, as we have seen in the past. Perhaps in some cases this potential seems remote: the Ig Nobel Prize honours “achievements that make people laugh, and then think.” Last year a prize went to a group in Japan for “assessing the effect of listening to opera, on heart transplant patients who are mice.” Would Ig Nobel-worthy research qualify as something the government should be willing to fund? Is this a step too far?

Mission-driven research is no doubt important in conjunction with curiosity-driven research. Will we find an ideal balance between the two, one that works best for us (if such a thing exists)?

Should scientists have to justify their research in terms of socio-economic impact?


Owen Letts


Research within science is often done to benefit society. Sometimes, however, research is done without forethought to the possible consequences. Is responsible research and innovation (RRI) a possible solution?

What is RRI?

Responsible research and innovation (RRI) is not a new concept, but significant development of the idea has only recently – within the last few years – been explored within the EU [1]. Because the discussion surrounding RRI is so recent, no established definition is available. However, the general concept is that science and technology researchers should bear a responsibility to help achieve social or environmental benefits, whilst also considering the future positive and negative impacts of their research.

This is particularly applicable to nanotechnologies, genomics, synthetic biology and geoengineering, due to the possible safety risks involved in these emerging technologies. RRI is a much broader issue than these particular sub-fields, however. For example, EPSRC is investigating how it can embed responsible innovation into its funding schemes [2].

Another aspect of RRI is to create a stronger link between the public and science. It has been suggested that this be done through greater transparency about what is being researched, and through more engagement with the public.

Why do we need it?

The obvious reason for RRI to be employed is the risks posed by certain areas of research. In the past, technologies such as asbestos and CFCs have proved to be serious problems in terms of their safety to the general public, as well as their damage to the environment. The idea is that, by applying RRI, these negative implications would be dealt with before research into new technologies is even carried out, both saving time and preventing unintended consequences.

Relatedly, RRI would also allow research into the technologies most beneficial for current global issues to be prioritised. Specific areas of research could be highlighted for the benefit of society, such as fuel cell science, renewable energy and carbon capture [3]. This would not just benefit society as a whole but could also help stimulate, and thus strengthen, certain areas of scientific research by giving them more of a political mandate, as those areas would be chosen via public institutions.

This leads on to the final advantage: restoring faith in government. Although that may not sound like an advantage to many, it most certainly is for research science. A lack of public trust in government leads to a lack of public trust in where the government invests its money – including research science. Using RRI to engage with the public, and to show that plenty of thought has been put into where their hard-earned taxes are going, allows a level of confidence in the research as well as a greater understanding of it. This could benefit not just the research itself but could also inspire many of the public to get more involved in the development of scientific innovation.

How would RRI be implemented?

No set structure for RRI has been agreed yet, and its development is definitely still an ongoing process. A major attempt at setting a framework has, however, been made. In 2011 a ‘stage-gate’ review was set up for the Stratospheric Particle Injection for Climate Engineering (SPICE) project. ‘Stage-gating’ is used to split research and development into specific stages; in this case, one of the gates was a decision point at which the research would continue only once social, environmental and technical criteria were fulfilled. A panel of relevant experts, ranging from a social scientist to an aerospace engineer, was assembled to develop this stage-gate. The criteria included: identifying risks; complying with government regulations; communicating the reasons for the project to stakeholders (people and organisations affected by the research); reflecting on the applications and impacts that the research could create in the future, i.e. a much broader vision of the project’s possibilities; and attempting to understand stakeholders’ views on the research [2].

What harm can it do?

The SPICE project case study, however, shows one of the possible major flaws that RRI would impose upon research scientists: the amount of added work in starting up any new field of research would be huge. Of the five criteria outlined for the stage-gate, only two were deemed successfully achieved by the independent panel: the initial risk assessment, and compliance with the already-in-place government regulations. The other three still required work, and thus the project was postponed. From a scientist’s point of view this type of process would be extremely frustrating, considering that applying for a research grant from the EPSRC already requires a 34-page guide [4].

Another, perhaps more left-field, disadvantage of RRI is what it could mean for traditional science – science for science’s sake. This type of research is very hard to justify in terms of short-term benefits to society or the environment, as it is done simply because “it is there”, rather than to solve a problem. Any RRI applied to the distribution of funding would therefore probably dismiss such blue-skies research. This in turn could mean fewer fundamental theories being produced, and thus science becoming less innovative, as it would be constrained to only the fundamental theory we have already discovered.

Good intentions

Weighing the respective arguments on both sides, it seems that ethically and morally RRI would be the correct thing to do: why should research science be free of scrutiny? However, current solutions would harm research and innovation through their bureaucratic nature and their rigidity towards developing in areas of which we have no understanding. So it seems that RRI will require more thought into how it can be applied before it can truly be built into the various research councils’ funding requirements.


  1. Owen, R., P. Macnaghten, and J. Stilgoe, Responsible research and innovation: From science in society to science for society, with society. Science and Public Policy, 2012. 39(6): p. 751-760.
  2. Stilgoe, J., R. Owen, and P. Macnaghten, Developing a framework for responsible innovation. Research Policy, 2013. 42(9): p. 1568-1580.
  3. Sutcliffe, H., A report on responsible research & innovation.
  4. EPSRC, Funding Guide – Arrangements and procedures for research grants and research fellowships. 2014, EPSRC: Polaris House, Swindon.



Should scientists have to justify their research in terms of its socio-economic impact?

Myles Selvey

What has this world we live in become?

Does everything have a value?

Even knowledge?

It appears that in the age we live in, everything has to have some form of commercial value. TV shows are all about improving your home to sell it for more, or selling off your old antiques. But how, over the last few decades, has this mindset managed to transfer to ideas themselves? Ideas are becoming products.

Back when modern science was still young we, as a human race, had a completely different view of the world. In the 15th and 16th centuries the work of Copernicus revolutionised science with the idea of heliocentrism. This was later built on by many astronomers and researchers, yet at the time it was kept relatively quiet. His work would bring no further income to his country; it was undertaken because he wanted to understand the world around him and explain what he saw. In the 17th century Galileo was not funded by a large organisation, and he was not set to make any major financial gain from his work. He simply pursued knowledge. On several occasions he was called upon to renounce his beliefs and accept the ideas in the Bible. Yet he persevered.

Yet what have we done with his work since? This is where his legacy lies. Almost every advancement in astronomy since has been built on the fundamental ideas that Galileo and Copernicus put forward. If we still believed ourselves to be at the centre of the universe, would we even consider the possibility of life on other planets, or test what we can get into space – maybe a dog, a monkey, a human? The scientific advancements believed to be spin-offs from NASA are claimed to number over 1,500 [1]. As recently as 2005, a system used to clean the ground and equipment after a rocket launch provided a solution for dealing with underground pollution and improving the safety of local fresh water. New safety devices, new materials, new software, even new medicine have all come from this initial drive to explore. Yet it is possible to ask someone on the street “what is the point of exploring space?” and they will not have an answer for you.

People would classify this type of work as research for the sake of research. There is no clear end goal that benefits the wellbeing of mankind, and it will not even give us a new app for our phones. From what I see around me, most people want something they can see, or touch, or understand; yet a lot of the work being done, especially in theoretical fields, doesn’t provide that. So it is easy to see why this type of research seems to be in an ever-shrinking market. And that is just the word for it: market.

One of the biggest pieces of news lately is the possible creation of the world’s first quantum computer [2]. If true, this is one of the largest jumps in technology we have seen in a long time. A quantum computer has an obvious socio-economic impact on the world, and it has received media attention to match. Media attention drives interest, which drives investment, which drives results; from this, work on quantum computing will gain even more momentum, and thus more media attention, and so on. This is not to say that the development of quantum computing is a bad thing – in fact it is a good example of where having an immediate socio-economic impact is a good thing. It could be argued that any attention and money directed towards the sciences is good. But there is only a limited amount of money, so which field will it be taken from?

We are losing our ability to see the value of research done purely for the sake of research, or what at least appears on the surface to be so. The fundamental problem is a lack of foresight: everything needs an instant return. Just last week, for example, I saw a Facebook status asking what the point of landing on a comet is when we should be using the money to cure Ebola or cancer. This does raise some interesting points. The output of research does not have to be economic; it could be something health-related, which can be interpreted as falling under the “social” heading.

It has to be taken into consideration that when a cure for a disease is discovered and produced, the drug will not be free, and in some cases will not be cheap. But the overall reasoning behind the research is not economic; it is to save lives. And this is the work that is most respected, and seen as most justified, by the public. There is a clear, good-willed aim, and this is one of the few areas of research funded by charities. You do not see someone collecting in the street for research into black holes. And I think this points towards the real meaning behind the socioeconomic justification of science: does it fulfil the selfish act of helping yourself, or the selfless act of helping others? If not, is it justifiable?

As stated earlier with regard to the work done by NASA and the spin-offs it created, it could be. At the time, Copernicus and Galileo could never have known what their work would lead to, and neither do we. The search for dark matter may just appear to be an attempt at finishing a puzzle, but what if decades, centuries or millennia down the line we discover a use for it? We do not know. We cannot know. It is impossible to disregard any piece of research as pointless, as one advancement in a field leads to another, which leads to another, which may lead to something that will fulfil the aforementioned naïve description of a socioeconomic impact in the eyes of the public.



Should Scientists Have To Justify Their Research In Terms Of Its Socioeconomic Impact?

The Perception of Scientists

A response to Isabel Clarke’s blog post: ‘Have social media improved the perception of science?’

Mitchell Guest

Ask a primary-school-age child to draw you a picture of a scientist, and most of us know exactly what they will draw. Inevitably, they will sketch out a white, middle-aged man with unkempt hair, in a white lab coat and glasses. This impression is one that many scientists have tried to dispel, using a variety of media and approaches. In her blog post ‘Have social media improved the perception of science?’, Isabel Clarke argues that by making science more accessible, by simplifying world-leading research articles, the barrier between scientists and the general population can be broken down. There are many people and organisations attempting to do just that, and Isabel points to the likes of Henry Reich, creator of MinutePhysics (2.58 million YouTube subscribers), and Elise Andrew, mastermind of the IFLS site (18.5 million Facebook followers). Science organisations have also caught the social media hype, with the likes of NASA, CERN and New Scientist all running official Twitter accounts with followers numbering in the millions. But are the new ways of presenting ground-breaking scientific discoveries really doing the scientists justice, and are people’s perceptions of scientists really being challenged?

Historically, scientists have kept themselves to themselves, creating a perceived blockade between themselves and the general public. A perfect illustration of this is the literature published by research groups. The language used is incredibly technical, and in some cases obstructive. Now, I recognise that science is not trivial. Not all theories can be simplified so that the general public can understand them, and in fact I think that the continued popularisation of science is damaging the image of scientists by making the work done seem almost childlike. However, when fellow researchers also struggle to understand the language used in papers, there must be an issue. I am in no way placing all the blame on the individual authors. As Jeremy Miles articulately writes in his blog, the target audience of scientific papers is not the general public. It is not even, he argues, researchers in the field. The real target audience is the peer reviewers of the journal, for if you do not please them, your work will not get published. Consequently, in the publish-or-die world of modern science, the only real option is to please the journal’s referees. That is, however, an issue for a whole new article!

This hindrance has created a whole new area of science: science communication. Examine some of the many fantastic YouTube science channels, and you will likely be fascinated by the diversity of topics covered, as well as the filmmaking ability behind what are, for the most part, amateur productions. Brady Haran, based here in Nottingham, is responsible for a variety of fantastic channels covering a wide range of science topics. To name just a few: Numberphile concentrates on explaining mathematical concepts, Sixty Symbols on ideas in physics, and Periodic Videos has made a short video on every element in the periodic table. The language used in the videos is non-technical and accessible, the concepts are explained using helpful analogies, and the amount of mathematics, equations and graphs presented is kept to an absolute minimum. Does it work? I would argue yes, especially after reading a recent blog post from Brady (a really reassuring read!).

However, if I refer back to the typical five-year-old’s drawing of a scientist, I find that, in the case of the Sixty Symbols contributors, the illustration is almost entirely accurate. Of the 17 physicists appearing regularly in the videos, all 17 are white, 15 are male, and most would fit the public’s perception of a typical physicist. Now of course, all the Sixty Symbols scientists are part of the University of Nottingham physics department, and it has long been recognised that there is an issue promoting science careers to young girls in school, so maybe we shouldn’t be surprised at the gender imbalance. However, other examples of science communicators outside the world of the internet seem to confirm the stereotype: Carl Sagan, Sean Carroll, Brian Cox, Bill Nye, Matt Taylor (of recent shirtgate “fame”), even Richard Dawkins all fit the mould. There are, of course, important exceptions to the rule, and it would be remiss of me not to mention them. Neil deGrasse Tyson has done wonders to encourage ethnic minorities in America, and around the world, to consider physics as a career choice.

But I think it is fair to say that the people we have chosen to represent the work that scientists undertake are not diverse enough. The names above are a perfect example of the obstacle young viewers must overcome. Female secondary school students watching the Sixty Symbols channel see video after video of male physics educators, and must at some level feel disheartened at the lack of female participation in the productions.

There is not a simple solution to this conundrum. I think modern science communicators do a fantastic job of making science more interesting and accessible. But they fall down in encouraging a diverse range of students to chase jobs in STEM fields, not through what they are saying, but purely through the lack of diversity among the people presenting the compelling arguments. In my opinion, we need to see more women, ethnic minorities and even non-scientists showcasing the most amazing discoveries to the public; only then will we make science truly accessible. Perhaps then scientists’ views will become more respected, and their findings will carry more weight on issues such as climate change, anti-vaccine campaigns and evolution. In the end, that can only be a good thing for society.



Matthew Cherukara

Last year, the UK government gave £2.88 billion[1] to the seven Research Councils for distribution to universities and other research institutions in the form of “grants, studentships and fellowships”, as well as £1.5 billion[2] for “quality-related research” purposes. For a university like Nottingham, this comes to around £64 million in research grants, which makes up 57% of total research-related income (of the remainder, 15% came from EU grants, and the remaining 28% from foreign and domestic charities and corporations)[3].
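The income split above can be sanity-checked with a quick back-of-the-envelope calculation. This is only a rough sketch using the article’s rounded figures, so the implied totals are approximate:

```python
# Rough check of the research income split quoted above, assuming
# ~GBP 64m in Research Council grants is 57% of research-related
# income, with 15% from EU grants and 28% from charities/corporations.
council_grants = 64_000_000            # GBP, stated as 57% of the total
total_income = council_grants / 0.57   # implied total research income
eu_grants = 0.15 * total_income        # implied EU grant income
other_sources = 0.28 * total_income    # charities and corporations

print(f"Implied total research income: GBP {total_income / 1e6:.1f}m")
print(f"EU grants:                     GBP {eu_grants / 1e6:.1f}m")
print(f"Charities and corporations:    GBP {other_sources / 1e6:.1f}m")
```

On these figures, total research-related income comes out at roughly £112 million, with around £17 million from the EU and £31 million from other sources.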

This public spending on research is set to decrease in the years to come, in line with the Coalition’s austerity programme, and as commentators from the left and right agree, such a loss of funding would be devastating: to university staff who risk losing their jobs, to the science sector as a whole, and to Britain’s long-held status as a world leader in science and innovation.

Is there a reasonable solution to this crisis of funding? There are those who would argue that only large-scale, government-enforced wealth redistribution could possibly rescue curiosity-based research from the death-grip of fiscal conservatism, and while such a solution may work, it still does not answer the question of how exactly funding is to be distributed between the tens of thousands of scientists and hundreds of institutions who would all claim that theirs is the cause most worthy of cash.

Since 1918, the British government has funded research according to the Haldane principle[4], which states that “decisions on individual research proposals are best taken by researchers themselves, through peer review.”[5] This was implemented to detach basic science research from political pressures, giving scientists (as a community) the freedom to direct their work in the direction they (collectively) desired. In practical terms, this has resulted in the creation of seven Research Councils (corresponding to seven broad areas of research) which are responsible for distributing the funding they have been allocated across individuals and institutions within their field.

The problem with such a system was originally pointed out by communist crystallographer J.D. Bernal in his 1939 treatise The Social Function of Science[6] where he argued that science ought only to be done in order to support society, for the common good. Bernal contended that research should be planned with clear goals in mind, and that this focus was the only way to consistently improve the quality of life for everyone in a society.

It is clear that these two approaches are overly simplistic. Basic science for science’s sake, and applied science for society’s sake are not mutually exclusive, as history has shown us again and again. The development of X-ray imaging by Wilhelm Röntgen in 1895 was largely the accidental result of curiosity-based experiments on cathode rays[7]. On the other hand, early advances in computational methods by Richard Feynman and his students were the direct result of the highly applied Manhattan Project[8].

Understandably, the vague hope that one day some aspect of science research may find its way into everyday life is a poor motivator for public spending. Governments and the people they represent are becoming more and more accustomed to rapid advances in technology and they expect immediate returns on their investments into science. These expectations are identified formally in the six “priority areas” the UK Research Councils are aiming at[9], which include adapting to climate change, developing sustainable energy and ensuring global food security. All of these are noble goals, but none of them come close to the purity of Vannevar Bush’s Endless Frontier of “creating new scientific knowledge.”[10]

While Bernal argued that public benefit must be the result of all research, Bush held that science could only progress if it was free from public accountability or scrutiny. A solution which may actually keep both sides happy is crowd funding, where researchers present their ideas online, and members of the public are able to contribute to causes that matter to them. This allows researchers to continue to seek funding for projects that they are genuinely interested in, while giving the people direct control over what their money is spent on.

Crowd-funded science is still in its infancy, although the technique has been hugely successful in raising money for projects in the arts, with $1.4 billion raised for 70,000 projects through the crowd-funding site Kickstarter since its inception in 2009[11]. The quantities of money being raised for science and technology projects through dedicated crowd-funding platforms are minuscule compared with public spending and donations from charities and corporations, but the potential is there for massive public involvement in the sciences, and this is an opportunity that should not be missed.
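For a sense of scale, the Kickstarter figures quoted above imply a modest average raise per project, orders of magnitude below a typical research grant. A minimal sketch, using only the article’s figures:

```python
# Average amount raised per Kickstarter project, from the figures
# quoted above: $1.4bn across 70,000 projects since 2009.
total_raised = 1_400_000_000  # USD
projects = 70_000
average_per_project = total_raised / projects

print(f"Average raised per project: ${average_per_project:,.0f}")
```

That works out to about $20,000 per project, which underlines how far crowd funding currently is from replacing grant-scale income.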

One of the strongest arguments against large-scale crowd funding of science is the risk that it would become an entirely results-driven enterprise, at the expense of free-minded exploratory science that may or may not yield results. While this is a real concern, it is one shared by government-controlled funding bodies, and by the research and development branches of charities and companies. Ultimately, any expenditure of money by an individual, organisation or society must be justified in terms of its benefit to the spender or their interests.

A major advantage of crowd funding is the necessity of accessibility, which is to say that scientists looking to raise money must make their work appealing to a broad enough audience. People are certainly interested in tangible outcomes for themselves (many crowd funded projects offer small rewards to those who contribute) or society at large, but we also have a desire for knowledge of the world about us, and the direct engagement between scientists and the people who support them makes an excellent conduit for disseminating interesting information.

Today, researchers vying for funding often saturate their proposals with technical jargon to sound impressive to those in their field, and unintelligibly pioneering to those outside it, in order to cajole boards into forking over money. And most journals present results in a way indiscernible to anyone who isn’t already an expert. The average crowd-funding donor may be impressed by big words, but big words alone probably won’t convince them to part with their hard-earned cash: for that, the ideas they are being asked to support must be explained clearly and concisely.

Part of what will be required in order to effectively crowd fund any research venture is publicity. This is sorely lacking in today’s science sector, which appears (for the most part) content to leave the mainstream media to sensationalise a few unsensational findings here and there, while most actual news of scientific advancements is circulated only within the academic community. If scientists are truly interested in expanding human knowledge, they should be acutely concerned with broadcasting their work as widely as possible. Crowd funding would force this into the forefront of researchers’ minds.

An increased awareness of science in society will surely lead to an increased willingness on the part of the general public to participate through donations, and will help ensure that academia retains the trust of society for years to come. It will also give members of the public direct influence over the subjects being researched. While the argument for adhering strictly to the Haldane principle (for the sake of protecting curiosity-based research) is strong, it is a fundamental principle of democracy and the free market that people be given at least some degree of choice as to how their money is spent.

This may seem like an inappropriate time to expect individuals to donate money to causes unlikely to affect them directly, but if marketed properly, the opportunity to participate in genuine science will be relished by some. Generosity towards academia is on the rise, too, as alumni donations to universities increase year on year.[12]

To conclude, crowd funding presents an excellent opportunity for researchers to engage directly with the public in order to secure monetary stability in the face of public spending cuts. Increasing public interest and involvement in science can only be a good thing, and donating to a specific cause or project gives individuals a sense of collaboration and unity of purpose that is often lacking. 

[1] Table 3,

[2] Table 1,

[3] Table 3, p.27,

[4] For an overview, see:


[6] Reviewed by Nature here:



[9] pp7-8