Fears that the development of artificial intelligence poses a threat to humanity are misguided, according to AI expert Sir Nigel Shadbolt.
“It's not artificial intelligence that worries me. It's human stupidity,” Shadbolt told Techworld in an interview.
Artificial intelligence is already widely used in systems that assist decision-making in medicine, insurance and finance, and in consumer devices such as Apple's AI assistant Siri, he explained.
“We are a very, very long way away from self-aware or even generally intelligent computers,” he said.
Professor Stephen Hawking, PayPal cofounder Elon Musk and Microsoft cofounder Bill Gates are just a few of the prominent figures to warn that computers could 'overtake' humans in terms of intelligence.
Last December Hawking warned: “The development of full artificial intelligence could spell the end of the human race”.
There are lots of misconceptions about the technology, which is largely about using algorithms to identify patterns and gain insights from large amounts of data, rather than Hollywood's portrayal of “conniving self-aware robot armies”, he said.
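As a toy sketch of the kind of pattern-finding Shadbolt describes (a hypothetical example, not from the article), the snippet below fits a straight line to a small made-up dataset by ordinary least squares and uses it to make a prediction; the dataset and variable names are invented for illustration:

```python
# A toy illustration of "identifying patterns in data": fit a straight
# line (ordinary least squares) to a small dataset, then predict.

def fit_line(xs, ys):
    """Return the slope and intercept minimising squared error."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - slope * mean_x
    return slope, intercept

# Hours of study vs. exam score (made-up numbers with a clear trend).
hours = [1, 2, 3, 4, 5]
scores = [52, 55, 61, 64, 70]

slope, intercept = fit_line(hours, scores)
prediction = slope * 6 + intercept  # extrapolate to 6 hours of study
```

Real AI systems use far more elaborate models, but the principle is the same: extract a regularity from data and apply it to new cases — which is a long way from self-aware machines.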
“AI is an aid to augment our intelligence. It's making us smarter and quicker at what we do and in some areas can take over quite a lot of the operation, be it flying planes or driving cars,” Shadbolt explained.
“What we'll see is a world of what I call 'micro intelligences': limited, task-achieving programmes that are great for achieving x, y and z,” he said.
“But the thing the machine can't do but a three-year-old child can is take his knowledge about one board game and translate it over to the next one. Task transfer is very elusive, very hard to understand, like lots of aspects of human problem solving,” he added.
Instead of worrying about theoretical risks, we should focus on being “smart about what control we give over to our systems”, Shadbolt said.
“We've got to think ethically about how technology is being used – in drones, or even in financial markets. You have to put in a set of rules of behaviour to bring technology into line,” he explained.
As we hand over more tasks to AI, there are also serious questions to ask about accountability, according to Shadbolt.
“When there's a crash between the first two driverless cars, what will the legislation and judgements be? Who's responsible?” he said.
“If we're stupid and decide to put lots of AI technology into seek and destroy robots, and we don't have a way of reinserting ourselves in the loop or deciding when they should or shouldn't hit the kill button, then we have been really stupid,” he added.
Like nuclear technology, biological science and chemistry, AI is not morally 'good' or 'bad' in and of itself, Shadbolt said.
“People can use AI to do great things, or it can be used on the battlefield or as a gas agent. And that's our decision. That's not the machines',” he added.
Jobs destroyed and created
There have been apocalyptic headlines about AI's potential to destroy various areas of employment. Almost half of existing jobs in the US could become obsolete within two or three decades, according to a 2013 paper by Oxford University academics.
However, Shadbolt disagrees. “I don't think we will see large-scale mass destruction of jobs in the way people imagine.”
Although it will cause a lot of upheaval, Shadbolt believes AI will help to create as well as remove jobs. It has already led to new, previously unimagined job titles like 'database custodian', he said.
“There are a whole bunch of knowledge-intensive jobs nowadays that exist that wouldn’t have existed, editing online books or online content, for example.
“Look at the overall balance. Some professions where relatively routine knowledge is involved will come under more automation. But as soon as it gets complex, as soon as you need to know the limits of your understanding, that's what people are able to do that machines can't,” he said.
Despite his generally upbeat tone about the future of his field, there is one trend within the AI sector that does worry Shadbolt: the fact that a handful of US tech giants regularly buy AI startups as soon as they start to flourish.
Google acquired British AI startup DeepMind last year, plus two Oxford University spin-off companies, Dark Blue Labs and Vision Factory, to name just a few examples.
“I'm worried about so few people buying these [startups] up, that is an issue I think. You don't want dominant monopolies. You want a vibrant competitive landscape,” Shadbolt said.