As technology continues to improve and the development of artificial intelligence hurtles forward, it’s no wonder we find ourselves asking whether we should be worried. What will artificial intelligence be like? Will we retain control, or will something go wrong? Will it help us or hinder us? There is certainly enough science fiction to suggest the latter. Books and movies are littered with examples of rogue robots and AIs gone bad: machines that take over the world and enslave the human population or, worse, kill us all off. But that’s just fiction. And besides, we’re a long way from walking, talking AIs who are part of our everyday lives, right?

A Present Problem

Wrong. We already encounter artificial intelligence in much of the technology that we use every single day. Apple’s Siri and Microsoft’s Cortana are, after all, forms of AI, albeit more basic than those we see roaming the streets in the latest sci-fi movies. Then there are smart cars that can drive themselves, iRobot Roomba vacuum cleaners that guide themselves around your room and return to their charging stations, and security surveillance systems that can track potential crime without human control. There are fraud detectors, predictive tools for retailers, recommendation services like those you find on Amazon, and even automated online customer service support[1].

And that’s just the beginning. In Japan, the tech firm SoftBank has released a best-selling humanoid robot named Pepper who can recognize emotions and respond accordingly[2]. There are sex robots currently in production (alongside a wave of controversy), automated weapons are being developed, and more and more AIs are taking the jobs of human beings. So what seems like a future problem is actually something we are very much embroiled in already.

The Not-So-Distant Future

What’s more, experts say that those walking, talking AIs from the movies are not that far off either. A recent conference of nine scientists and experts concluded that superhuman artificial intelligence is not only inevitable but also a lot closer than we think[3]. Even the founders of internet giant Google wrote that “the computer itself – whatever its form factor – will be an intelligent assistant helping you through your day. We will move from mobile first to an AI first world”[4] and talked excitedly about the prospect of an affordable, ethereal presence that will never leave your side. With that future right on our doorstep, it’s no wonder we are beginning to question whether we should be afraid.

The Fearful Future

Elon Musk, founder and CEO of SpaceX among other ventures, has issued perhaps the most famous warnings against the development of AIs, despite being heavily involved in this sort of technology himself. He donated $10 million to projects designed to keep AI under control and warned that we were “summoning the demon” and that eventually we as human beings will become pets to our AI masters[5]. Jerry Kaplan, author of Humans Need Not Apply, similarly warned that we would soon be the inhabitants of a human zoo. “Will they enslave us?” he asks. “Not really. More like farm us or keep us on a reserve, making life there so pleasant and convenient that there’s little motivation to venture beyond its boundaries”[6].

Blind panic aside, however, there are certainly questions we need to ask. How will the law work, for example? Ryan Calo, a law professor at the University of Washington, examines practical issues such as whether we would allow robots to reproduce autonomously, potentially leading to an explosion in the AI population[7]. How can we be sure, moreover, that AIs will interpret our requests as intended? Philosopher Nick Bostrom suggests that “we may set the AI a goal to make humans smile and the super-intelligence may decide that the best way to do this would be to take control of the world and stick electrodes in the cheeks of all humans”[8], which is not quite what the original request intended. In fact, many argue that, as in the majority of science fiction, it’s this potential for unforeseen or unintended consequences that should be feared, rather than the AIs themselves.

The Beauty of Unintended Consequences

That fear, although not unfounded, is not necessarily as scary as it seems. As writer Andrew Keith Walker points out, it’s not just us laymen who fear those unintended consequences, and it’s not just us who have read the sci-fi novels or watched the movies. Scientists and engineers have too, and they’re a smart bunch of people. They are aware of the dangers, and the potential consequences have already been mapped out and thought through by hundreds, possibly thousands, of philosophers, creators, writers, and directors, as well as other scientists and engineers, in a way that rarely, if ever, happens in other fields of science and development[9]. What’s even more thought-provoking is that unintended consequences are not always bad. Walker points out that penicillin is the result of a science experiment gone wrong, for example, and that non-stick frying pans were an unintended consequence of the space program[10].

A Fearless Frontier

Cognitive psychologist and computer scientist Geoffrey Hinton, moreover, argues that, despite what others say, we’re still a long way off and it makes no sense to guess at potential consequences yet. “Even the biggest current neural networks,” he says, “are hundreds of times smaller than the human brain,”[11] and while AIs can process information in ways that human beings can’t, they certainly don’t have conscious, independent thought. As scientist Andrew Ng points out, there is a massive difference between a machine being intelligent and it being sentient, and we have no reason to believe that making something more intelligent will ever make it sentient[12].

A Lack of Humanity

In essence, we’re pretending that robots will be like us, that they will have the same drives and desires as us (if they have drives and desires at all), which is not necessarily true. Neil deGrasse Tyson said in a HuffPost interview, “seems to me, as long as we don’t program emotions into robots, there’s no reason to fear them taking over the world,”[13] and although someone is bound to program emotions into an AI one day, such machines are unlikely to have mass-market appeal and, as a result, are unlikely to be mass-produced[14]. Even if they are, what makes us believe they would work together as a single unit to overpower humanity? Even the most peaceful groups of sentient beings are divided to some extent, and chances are the same would happen with AIs[15].

Feeling the Fear but Doing It Anyway

Of course there are dangers, as there are with any new development, technological or otherwise, but isn’t it a trait of human nature that we regularly fear the new, the different, the unknown? As Peter Diamandis points out in a HuffPost article, we consistently overreact to new technology because fear is our automatic response[16], but that doesn’t mean we shouldn’t press on, especially when the potential benefits far outweigh the potential dangers. AIs may eventually become autonomous, but they will also still be programmable and predictable[17], just like any other machine, and as philosopher Nick Bostrom points out, we can learn to control them before we create them. Besides, perhaps the most fearful outcome of all this is not the AIs themselves but the potential for the technology to be misused: it’s not rogue robots we need to fear, but rogue humans who use the technology to carry out their less than desirable actions, and that’s not so different from where we are today.


[1] Dann Albright, 2016, 10 Examples of Artificial Intelligence You’re Using in Daily Life (online). Available at:, accessed 11.16.2017

[2] Jane Wakefield, 2015, Intelligent Machines: The truth behind AI fiction (online). Available at:, accessed 11.11.2017

[3] Kevin Kelly, 2017, The Myth of a Superhuman AI (online). Available at:, accessed 11.11.2017

[4] Cited by Navneet Alang, 2016, Who's Afraid of Artificial Intelligence? (online). Available at:, accessed 11.11.2017

[5] Jane Wakefield, op. cit.

[6] Cited by Jane Wakefield, op. cit.

[7] Ibid.

[8] Cited by Jane Wakefield, op. cit.

[9] Andrew Keith Walker, 2017, 5 Really Obvious Reasons Why We Shouldn't Fear AIs (online). Available at:, accessed 11.11.2017

[10] Ibid.

[11] Cited by Jane Wakefield, op. cit.

[12] Ibid.

[13] Self-Aware Patterns, 2015, Should we fear AI? Neil deGrasse Tyson's answer is the right one (online). Available at:, accessed 11.11.2017

[14] Ibid.

[15] Andrew Keith Walker, op. cit.

[16] Peter Diamandis, 2017, Why I Don't Fear Artificial Intelligence (online). Available at:, accessed 11.11.2017

[17] Karla Lant, 2017, 7 Reasons You Should Embrace, Not Fear, Artificial Intelligence (online). Available at:, accessed 11.11.2017