Delving too deep?

Bryn Hazard.

It may be a mere decade before we find ourselves being served by steely-faced AIs as we buy our weekly groceries, loading up our self-driving cars while hundreds of whirring drones swarm overhead. We switch on the radio only to hear reports that American scientists have leaked a deadly super-virus, while the UK government issues pain guns to local police. Will we only then begin to realize that our world of advanced technologies has spiralled out of control? The whole dark circus recorded from the safety of satellites, a tragic film left behind for celestial forms of life, documenting the demise of humankind.

Although the scenarios above may sound like the products of science fiction, they represent only a small number of the developing technologies, and the ethical questions they raise, included in the John J. Reilly Center's 'List of Emerging Ethical Dilemmas and Policy Issues in Science and Technology' for 2015 [1]. It is not my intention to speculate on these emerging technologies and the issues they comprise, but rather to highlight a remarkable flaw in modern society's treatment of new technologies, evidenced by current trends in online sharing, security, genetics, and weaponry.

Although the development of the internet has paved the way for global connectivity and the sharing of ideas, it has also become a host for the circulation of darker content, from uncensored pornography and paedophilic images to illegal trading. In a similar vein, high-tech surveillance has helped keep us safe and assisted in the conviction of criminals, but is now beginning to raise issues of personal rights to privacy. Every technological advancement appears to be dual in nature. Yet this technological 'revolution' differs in character from those of the past. Where the French Revolution was a mobilisation of the people, our current age of technology has become a movement of corporations, conducting their own research and distributing life-altering products worldwide in quick succession. In essence, technology is fundamentally changing the way we live. But are our international governments capable of regulating current and future technologies to ensure their benefits to humanity far outweigh the risks?


Legislative progress

Fortunately, governments do take the potentially detrimental effects of technology seriously. Consider the Chemical Weapons Convention (CWC), which entered into force in 1997: an international treaty whose parties are 'determined for the sake of all mankind, to exclude completely the possibility of the use of chemical weapons,' and which considers 'that achievements in the field of chemistry should be used exclusively for the benefit of mankind' [2]. The success of the CWC is visible in the astounding 188 of 195 sovereign states that have agreed to it, and in the destruction of 79.9% of the world's declared category 1 chemical weapon stockpiles as of March 2013 [3]. The CWC thus offers an ideal model of how governments can collaborate on technologies whose influence is international in scale.

General awareness of future technologies and how they may affect us has continued to increase in recent years. The expanding capabilities of AI are a hot topic amongst academics, with figures such as Elon Musk and Stephen Hawking offering insights into the potential risks. Hawking has even gone so far as to say, 'The development of full artificial intelligence could spell the end of the human race.' Such warnings have not fallen on deaf ears: a pre-emptive £1.4 million research project exploring the ethics of AI has been devised at the universities of Sheffield, Liverpool, and Bristol, set to run until 2018 [4]. Professor Michael Fisher, principal investigator at Liverpool, said the project will 'develop formal verification techniques for tackling questions of safety, ethics, legality and reliability across a range of autonomous systems.'

Pre-emptive legislation has also been put in place with regard to genetics, evidenced by the Genetic Information Nondiscrimination Act of 2008 (GINA) [5], introduced to protect individuals from genetics-based discrimination in matters such as health insurance and employment. With biological advances such as embryonic augmentation and genetic testing on the horizon, GINA gives hope to those fearing the eventual emergence of a social hierarchy founded on genetics. Furthermore, the FAA (Federal Aviation Administration) and EASA (European Aviation Safety Agency) are drawing up drone regulations, and sixteen US states have introduced legislation on self-driving cars, in preparation for when such technologies become commercially available [6][7][8].


Limited control

The efforts of governments across the globe mentioned above are promising. However, recent events such as the chemical weapons unleashed in Syria emphasize their limitations [9]. Whilst 98% of the world's population belongs to a state that has signed the CWC, eight states (North Korea included) have not. It is thus difficult to assess the effectiveness of such legislation against dangerous technologies on a global scale, for it only takes one person to light a fire.

Unfortunately, even acts such as GINA are not yet fully developed, with severe limitations on which advancements they cover. Privacy law, too, remains under constant reinterpretation, as the Snowden revelations demonstrated in exposing the scope of government access to private conversations over social media [10]. Perhaps, then, current laws and general consensus are not yet developed enough to ensure the safe practice of sensitive technologies.

The hacker group Anonymous, and their use of the internet to destructive ends, underlines yet another failing of current legislation. Despite numerous cybercrime laws, the group persists in cyber-attacks targeting corporations, individuals, and governments, all whilst eluding top cyber investigators and the FBI. The capability of Anonymous, and their manipulation of a technology to serve their own ends, brings a vast problem to the forefront: even with sound universal laws regulating technology, we remain unable to guarantee that rogue groups and individuals will respect them, or to enforce compliance. One can only imagine the destructive potential if groups such as Anonymous prove capable of 'hacking' into AI.

If governments are unable to guarantee and enforce laws and protocols regulating potentially devastating innovations, perhaps the question becomes whether we should be pursuing them at all. After all, it takes immense collaboration and investment to bring technologies such as AI and drones to fruition; it would be very difficult indeed for a rogue group to conduct such research undetected. Has humanity really learnt the crucial lessons that Hiroshima and the Cold War had to teach? We are on the precipice of some potentially catastrophic innovations, but are we sure we are prepared to pursue them? Perhaps instead it is time we began to nurture a newfound scepticism towards scientific and technological advances.







[5] Department of Health and Human Services (HHS), 'The Genetic Information Nondiscrimination Act of 2008: Information for Researchers and Health Care Professionals', 6 April 2009.




[9] K. R. Timmerman, Weapons of Mass Destruction: The Cases of Iran, Syria, and Libya, 1992.




