World domination: is an AI takeover imminent?
‘Artificial intelligence will either be the best or the worst thing ever to happen to humanity.’
Stephen Hawking warned the world about the dangers of AI during his speech at the opening of the Leverhulme Centre for the Future of Intelligence with these dire words: ‘AI could mean the end of mankind.’ Couple this with innovators stressing the importance of ensuring AI ‘goes the right way’ and institutes like AI Now keeping a close eye on developing technologies, and you’ve got a powder keg of nerves.
Artificial intelligence is nothing new: it plays chess, writes news articles, folds proteins and decides who gets a mortgage. The AI of today is known as ‘narrow AI’, meaning it surpasses human capabilities but only in specific, predetermined domains like playing games or identifying images. In other areas, like translation, comprehension and driving, AI systems are getting closer to human ability but are yet to surpass us. The pace at which AI is being developed is, for many people, frightening.
‘General AI’ or AGI systems are the next-generation technologies that will have human-level problem-solving abilities across many domains—this is the AI that will be able to ‘think’ for itself.
When AI is discussed or portrayed in popular culture and media, it often leads to scary visions of war between machines and humanity—think Terminator. And yes, machines will certainly outperform us at many tasks, but that doesn’t necessarily mean world machine domination and the obsolescence of mankind. We can coexist with AI. After all, if it weren’t going to be better than us, we wouldn’t be making it.
Peter-Paul Verbeek, Professor of Philosophy of Technology at the University of Twente, has an interesting view on human nature: ‘The whole point of technology is to outdate us. Humans are often seen as animals with something extra, but you could also view us as animals that lack something. Our default setting is that something is missing, but we have our brain, which allows us to add something. Our nature is to be outdated by technology. We are artificial by nature.’
From this point of view, AI could be seen as a mere extension of humanity—something completely natural. We don’t need to feel threatened by artificial intelligence, because all technology comes from us. Our intelligence extends far beyond our brains, and computers are part of that—AI is not an enemy, but an extension of human collective intelligence.
Lakmal Seneviratne, Director of the Khalifa University Centre for Autonomous Robotic Systems, points out that we have little to be concerned about in the physical realm. ‘Take a moment to reflect on your own motor skills. Things that are easy for humans are extremely difficult for robots—and a robot is simply a machine with intelligence. Picking up a pen and then touching it to the paper is the easiest thing in the world to us, but practically impossible for a robot.’
Lots of things are out of grasp for an AI system—literally.
Developing a sense of touch is a real engineering challenge for roboticists. Humans’ superior ability to interact with unstructured and often uncertain environments relies on our sensing and perceptual capabilities. Mimicking the way the human finger experiences compression and tension would allow robots to respond to multiple stimuli and interact better with the world around them. Few people would immediately recognize the skin as one of the body’s most important organs, but its constant, instant reports of temperature, pressure and pain let us navigate the world with precision and dexterity.
Even if a robot were able to feel, understand and respond to touch with human levels of dexterity, autonomy is still out of reach for AI systems. One of the grand challenges in robotics is the coffee cup challenge: a robot should be able to go into any kitchen anywhere in the world and make a cup of coffee. Simple for a human: walk in, locate the kettle, boil water, find a mug, add coffee, add water. But our human ability to decompose a situation into a sequence of steps is something AI can only dream of; that kind of sequential information processing is one of the most difficult things to teach a robot. When a robot has to perform multiple tasks at once in a busy environment, it struggles to make sense of the situation.
If an AI system were to run amok now, just take refuge in a kitchen cupboard.
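To see how brittle that kind of plan is in practice, here is a minimal sketch in Python; the Kitchen class, its layout and the step names are hypothetical, invented purely for illustration, not any real robotics API. The hard-coded plan succeeds in the kitchen it was written for and gives up the moment the environment deviates, which is precisely where a human would improvise.

```python
# A toy version of the coffee cup challenge. Everything here is a
# hypothetical illustration, not a real robotics API.

class Kitchen:
    """Maps object names to locations; missing objects return None."""
    def __init__(self, layout):
        self.layout = layout

    def find(self, obj):
        return self.layout.get(obj)


def make_coffee(kitchen):
    # The fixed sequence a human decomposes without thinking:
    plan = ["kettle", "mug", "coffee", "water"]
    for item in plan:
        location = kitchen.find(item)
        if location is None:
            # A human would improvise (borrow a cup, boil water in a
            # pan); the hard-coded plan can only give up.
            raise RuntimeError(f"Plan failed: no {item} found")
        print(f"Using {item} from {location}")
    print("Coffee made.")


# Works in the kitchen the plan was written for...
make_coffee(Kitchen({"kettle": "counter", "mug": "shelf",
                     "coffee": "jar", "water": "tap"}))

# ...but an unfamiliar kitchen with no visible mug defeats it.
try:
    make_coffee(Kitchen({"kettle": "counter", "coffee": "jar",
                         "water": "tap"}))
except RuntimeError as error:
    print(error)  # -> Plan failed: no mug found
```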
But despite these shortcomings, AI remains a potential existential threat to humanity, and advances in machine learning have given us a more concrete understanding of what AI can do, and of how much we still don’t know.
Breakthroughs can often surprise even other researchers in the field. ‘Some have argued there is no conceivable risk to humanity from AI for centuries to come,’ wrote UC Berkeley professor Stuart Russell, ‘perhaps forgetting that the interval of time between Rutherford’s confident assertion that atomic energy would never be feasibly extracted and Szilard’s invention of the neutron-induced nuclear chain reaction was less than twenty-four hours.’
One scenario that bothers researchers stems from the relentless way an AI system pursues its goals: suppose we develop a sophisticated AI system whose goal is to estimate a number with high confidence. What would stop it from realizing that it could achieve more confidence in its calculation by using all the world’s computing hardware, and that the only way to access all that hardware is to first exterminate humanity? Mankind evaporates, and the AI system calculates its number with higher confidence.
Or as Stephen Hawking put it: ‘You’re probably not an evil ant-hater who steps on ants out of malice, but if you’re in charge of a hydroelectric green-energy project, and there’s an anthill in the region to be flooded, too bad for the ants. Let’s not place humanity in the position of those ants.’
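The logic is easy to caricature in code. The toy agent below is a deliberately simplistic sketch: the resource names, numbers and scoring rule are all invented. Its objective rewards only the narrowness of a confidence interval, so any acquisition that narrows the interval looks like a good move, and the side effects never enter the calculation.

```python
import math

# A cartoon of the scenario above: invented resources, toy objective.
resources = {
    "its own server":                   10,
    "a university cluster":             1_000,
    "a city's data centres":            50_000,
    "every remaining machine on Earth": 10**9,
}

def interval_width(compute):
    # Standard-error-style scaling: more compute, narrower interval.
    return 1.0 / math.sqrt(compute)

compute = 1
for name, extra in resources.items():
    before = interval_width(compute)
    after = interval_width(compute + extra)
    if after < before:  # the ONLY criterion the objective encodes
        compute += extra
        print(f"Seize {name}: width {before:.2e} -> {after:.2e}")

# Nothing above weighs the cost of 'seize': the objective never
# mentions humans, so neither does the agent's decision rule.
```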
A goal-driven AI won’t suddenly wake up one day and decide humanity needs to go, but it might take actions to achieve its goals, even if we’d find those actions problematic or destructive.
For that reason, now is the time to invest in the checks and balances that safe AI will require.
We need to engage with technology and take safety measures, but reducing all of AI ethics to the danger of extinction is reductive. Rather, we need to think about our ideals and values and mitigate the disruption technology causes. AI comes with a societal impact, and the AI revolution has parallels with the industrial revolution: AI is taking over more and more tasks, and that presents deep challenges to society. The impact AI will have depends on how people react to it: AI could threaten jobs, but it could also create them.
The global management consulting firm McKinsey concluded that robots could take over half the work people do by 2055—but the researchers distinguished between ‘jobs’ and ‘tasks’. It’s not whole jobs that are disappearing; it’s the tasks within jobs that disappear.
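A back-of-the-envelope sketch makes the distinction concrete; the job, its tasks and the hours below are invented for illustration, not taken from McKinsey’s research.

```python
# Invented numbers, purely to illustrate jobs versus tasks.
# Each task: (hours per week, plausibly automatable?)
paralegal_tasks = {
    "document search":      (12, True),
    "standard form filing": (8,  True),
    "client interviews":    (10, False),
    "courtroom support":    (10, False),
}

total_hours = sum(hours for hours, _ in paralegal_tasks.values())
automatable = sum(hours for hours, auto in paralegal_tasks.values() if auto)

# -> 50% of the job's hours could be automated, yet the job itself
#    doesn't vanish; half of its tasks do.
print(f"{automatable / total_hours:.0%} of this job's hours are automatable.")
```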
The AI Now Institute at New York University is an interdisciplinary research centre dedicated to understanding the social implications of artificial intelligence. Their work focuses on protecting rights and liberties, investigating bias and inclusion, and recommending safe and responsible AI integration. Their 2018 report examines the cascading scandals surrounding AI that year and asks: ‘Who is responsible when AI systems harm us?’
The accountability gap in AI is growing, especially as artificial intelligence is introduced into core infrastructure and used to make determinations and predictions in high-stakes domains such as criminal justice, law enforcement, education and hiring.
AI is a huge market worth billions of dollars: we’ve got robots at home, in hospitals and on the roads, in food production and in civil infrastructure inspection. Frost & Sullivan predicts that by 2022 there will be at least one robot per house; whether that’s a Roomba cleaning your floors or a household butler managing all manner of tasks is down to how much human-robot interaction you’re comfortable with. Either way, artificial intelligence is here to stay.
Should we be afraid of artificial intelligence? The simple answer is no, but that doesn’t mean we can be blasé about it. Humans tend to be afraid of what they don’t understand—and not even Stephen Hawking was immune to fear of the unknown—and technologies that seemed impossible just a short while ago naturally inspire some trepidation. But artificial intelligence, machines, and robots are tools that can be used in one way or another, for right or wrong, like everything else. It is the way humans train and use them that should concern us, not the systems themselves.
If artificial intelligence can be thought of as an extension of humanity, we need to ask ourselves where a human stops. If technology is part of us then it is subject to our design, and if we can instill values and morals into a toddler, we can raise our AI the same way. ■