General artificial intelligence, a machine capable of human-level expertise across multiple tasks, was the hot topic during the morning of the Rework Deep Learning Summit in London yesterday, with two of the UK's best AI companies, Google DeepMind and SwiftKey, weighing in on the advances being made and how far we are from a truly human-like AI.

In his seminal piece about DeepMind for Wired magazine in June 2015, David Rowan wrote: "[DeepMind] showed that their artificial agent had learned to play 49 Atari 2600 video games when given only minimal background information. The deep Q-network had mastered everything from a martial-arts game to boxing and 3D car-racing games, often outscoring a professional (human) games tester."

What this glossed over was that the deep neural network learned to master each game one at a time. The same network couldn't, for example, flick between two different games and retain its skill the way a human would.

DeepMind

Speaking yesterday morning, DeepMind research scientist Raia Hadsell explained why this is such a challenge as the company works towards creating a general artificial intelligence.

Traditionally, a deep learning network is trained through deep reinforcement learning (DeepRL): it is fed huge amounts of data and given time to learn how to perform a single task, such as recognising the elements of an image, mastering Space Invaders, or beating Lee Sedol at Go.
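
To make that concrete, here is a minimal sketch of the kind of deep Q-learning (DQN) update that DeepRL builds on. It is illustrative only: the toy network sizes, the discount factor and the replayed batch are assumptions for this example, not DeepMind's actual configuration.

```python
# A minimal DQN-style update step (illustrative sketch, not DeepMind's code).
import torch
import torch.nn as nn

q_net = nn.Sequential(nn.Linear(4, 64), nn.ReLU(), nn.Linear(64, 2))   # toy state/action sizes
target_net = nn.Sequential(nn.Linear(4, 64), nn.ReLU(), nn.Linear(64, 2))
target_net.load_state_dict(q_net.state_dict())                          # target starts as a copy
optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)
gamma = 0.99                                                            # discount factor (assumed)

def dqn_update(states, actions, rewards, next_states, dones):
    """One gradient step on a replayed batch of (s, a, r, s') transitions."""
    # Q(s, a) for the actions that were actually taken
    q_values = q_net(states).gather(1, actions.unsqueeze(1)).squeeze(1)
    # Bootstrapped target: r + gamma * max_a' Q_target(s', a'), cut off at episode end
    with torch.no_grad():
        next_q = target_net(next_states).max(dim=1).values
        targets = rewards + gamma * next_q * (1 - dones)
    loss = nn.functional.mse_loss(q_values, targets)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

The only training signal here is the score: the network improves by bootstrapping from its own future value estimates, which is part of why it needs so much data and time per task.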

"These are each really powerful and each of these can achieve a superhuman level at those tasks," says Hadsell, "but each network is separate. There is no neural network in the world yet that can be trained to both identify images, play Space Invaders and listen to music."

"We can’t even learn multiple games. Let's make it easier and say we want one neural network that can learn 10 different Atari games, as probably any self respecting 10 year old can do. This is extremely hard. If you try to learn all of them at once, the rules for playing Pong or Qubert interfere with each other. If you try to learn one at a time that's fine, but you forget the ones you learned first."

Continual deep learning

So, unlike a human who never forgets how to ride a bike, a neural network doesn't retain the ability to perform a task once it is taught a new one. This is where Hadsell's area of expertise, continual deep learning, comes in.

Hadsell laid out what her research aims to achieve: "We would like to start with a task, get to expert performance on it, then move to sequential tasks, using the same neural network to get to expert performance on all of these tasks without catastrophic forgetting of the earlier ones, and including transfer from task to task. I want task one to have a positive transfer to task four if they are similar. I would like to play one, know that it is safely encoded in my neural network, and move to the next one."

Hadsell and her team at DeepMind have been working on progressive neural networks to try to get closer to this goal than has previously been possible.

The key to progressive neural networks is how they are architected. Instead of a single neural network that performs a single function, DeepMind wants to be able to link several networks together.

Hadsell calls each neural network a column, and these are linked together "laterally at each layer and I am also going to freeze the weights [parameters of the model] so that when I train the second column I am going to learn how to use the features of column one but I'm not going to overwrite them."
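
In code, that idea might look something like the simplified two-column sketch below: column one is trained on the first task and frozen, and column two receives a lateral connection that reads column one's hidden features. The layer sizes, and the use of a single hidden layer rather than laterals at every layer, are simplifications for illustration.

```python
# Simplified two-column progressive network (illustrative sketch, assumed sizes).
import torch
import torch.nn as nn

class ProgressiveNet(nn.Module):
    def __init__(self, in_dim=4, hidden=64, out_dim=2):
        super().__init__()
        # Column 1: trained on task one, then frozen
        self.col1_h = nn.Linear(in_dim, hidden)
        self.col1_out = nn.Linear(hidden, out_dim)
        # Column 2: trained on task two
        self.col2_h = nn.Linear(in_dim, hidden)
        self.col2_out = nn.Linear(hidden, out_dim)
        # Lateral adapter: lets column 2 reuse column 1's features
        self.lateral = nn.Linear(hidden, hidden)

    def freeze_column1(self):
        for p in list(self.col1_h.parameters()) + list(self.col1_out.parameters()):
            p.requires_grad = False   # task-one knowledge can no longer be overwritten

    def forward_task1(self, x):
        return self.col1_out(torch.relu(self.col1_h(x)))

    def forward_task2(self, x):
        h1 = torch.relu(self.col1_h(x))                     # frozen features from column 1
        h2 = torch.relu(self.col2_h(x) + self.lateral(h1))  # column 2 plus lateral input
        return self.col2_out(h2)
```

Because column one's weights are frozen, training on the second task can only add to what the network knows, never erase it.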

It's all pretty technical, but the result would be a cluster of linked-up neural networks which would resemble the way a human brain learns and retains information.

In terms of early results, Hadsell and her team have been training a single column to teach a simulated robotic arm to reach for objects, then catch falling objects, and finally track a moving object. Using simulated data (a CGI-animated version of a Jaco robotic arm), DeepMind was able to train the robot to perform these tasks in just one day, whereas "if this had been done on a real robot it would have taken 55 days to work", Hadsell says.

Drawbacks

The limitations of progressive neural networks come down to scaling. Hadsell explained, in fairly technical terms: "As I keep on adding columns and adding these lateral connections then I have a problem of scaling and I will quickly end up with something that will be too large to be tractable because the parameter growth is quadratic."
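
A back-of-the-envelope count shows where the quadratic growth comes from: every new column brings lateral weights from each column that came before it. The widths and depths below are illustrative assumptions, not DeepMind's figures.

```python
# Rough parameter count for a progressive network (illustrative sizes).
def progressive_params(num_columns, width=64, depth=3):
    per_column = depth * width * width            # each column's own weights, roughly
    laterals = 0
    for k in range(num_columns):
        # column k receives lateral weights from each of the k earlier columns
        laterals += k * (depth - 1) * width * width
    return num_columns * per_column + laterals

for n in (1, 2, 5, 10):
    print(n, "columns ->", progressive_params(n), "parameters")
```

The columns themselves grow linearly with the number of tasks, but the lateral connections grow with k(k-1)/2, which quickly dominates.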

The nature of the system does mean that the scaling issue partly solves itself, though. Hadsell said: "Our analysis shows that a new column you learn, say the fifth column, or game, actually uses very little of that new column because so many of the features have already been learned and transferred as being useful to this game [task]."

Oceans apart

Later in the day Ben Medlock, CTO of UK AI startup SwiftKey, which makes a machine learning-powered keyboard and was acquired by Microsoft earlier this year, weighed in on the impediments to general AI. He said that fundamentally deep learning "has always been supervised pattern recognition" and that this is "oceans apart" from how the human brain learns.

He said: "The recent advances in DeepRL (AlphaGo and IBM Watson, for example) are a different model and it feels like a step in the right direction, but we still require learning from vast quantities of data and fundamentally the trained human brain learns from very few data samples."

So, how close is DeepMind to creating a general artificial intelligence? Naturally, the company was coy about timelines, with Hadsell telling Techworld: "As we have accomplishments, we keep on raising the bar and changing the target that we are trying to get to.

"I think we are doing an extremely good job within the same domain, so being able to have multiple skills within a grasping robot but, for instance, starting to integrate in visual perception and auditory perception and different types of domains into the same network is still further out."