One of the pioneers in deep learning and artificial intelligence, Google's DeepMind, has said that general artificial intelligence capable of truly thinking like a human being is still "a long way off", and dismissed the ethical concerns surrounding such research as premature.

Speaking at TechCrunch Disrupt in London, DeepMind cofounder Mustafa Suleyman said: "I think when we say that [general artificial intelligence] is 20 years out, or decades away, what we are saying is that it is so far out that we can't really measure it.


"So anything beyond that sort of time horizon means it is very difficult for me to say the difference between 20 years and 50 years. So I think it is definitely a long way off, it's just very far from the sort of practical things we can make today."

Read next: Google DeepMind: What is it, how does it work and should you be scared?

General artificial intelligence

This is despite the fact that DeepMind is working hard on creating general artificial intelligence.

Read next: Inside DeepMind's latest attempts to achieve a general artificial intelligence: What are progressive neural nets?

DeepMind research scientist Raia Hadsell told Techworld earlier this year that the company is making good progress in teaching machines to master multiple tasks, instead of just one game such as chess or Go. "As we have accomplishments then we keep on raising the bar and changing the target that we are trying to get to," Hadsell said at the time.

She was similarly reluctant to provide a timeline: "I think we are doing an extremely good job within the same domain," she said. "[But] starting to integrate in visual perception and auditory perception and different types of domains into the same network is still further out."

However, the company published an academic paper earlier this week that appears to mark a breakthrough in this area. The paper, which Hadsell helped to write, tackles the problem of "catastrophic forgetting", where machines cannot retain multiple skills the way humans do, which is a severe barrier to general artificial intelligence.

The paper states: "We show that it is possible to overcome this limitation and train networks that can maintain expertise on tasks which they have not experienced for a long time."
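The paper's technique, known as elastic weight consolidation, works by penalising changes to the network weights that mattered most for an earlier task, so learning a new task cannot silently overwrite them. A minimal sketch of that penalty term (the numbers and the `lam` value here are illustrative, not taken from the paper):

```python
import numpy as np

def ewc_penalty(theta, theta_star, fisher, lam=1000.0):
    """Quadratic penalty that anchors weights important to an old task.

    theta      -- current weights (flat array)
    theta_star -- weights learned on the earlier task
    fisher     -- per-weight importance (diagonal Fisher information estimate)
    lam        -- strength of the anchor (illustrative value)
    """
    return (lam / 2.0) * np.sum(fisher * (theta - theta_star) ** 2)

# Weights that matter for the old task (high Fisher value) are pulled
# back hard; unimportant weights are free to adapt to the new task.
theta_star = np.array([1.0, -2.0, 0.5])
fisher     = np.array([5.0,  0.0, 1.0])   # second weight is unimportant
theta      = np.array([1.1, -0.5, 0.5])

print(ewc_penalty(theta, theta_star, fisher))   # 25.0
```

During training on the new task, this penalty is simply added to the ordinary loss, trading off new performance against retained expertise.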


Suleyman also returned to the issue of the AI ethics board at DeepMind, with the company continuing to refuse to name the people who sit on its internal ethics board.

"We've always said that it's going to be very much focused on full general purpose learning systems and I think that's very, very, far away," he said. "We're decades and decades away from the kind of risks that the board initially envisaged."

Read next: Google's DeepMind promises openness as it begins public consultation over healthcare plans

This runs contrary to Suleyman's earlier comments about transparency. Speaking about DeepMind's recent collaborations with health organisations, the cofounder reiterated: "We want to be as innovative and progressive and open with our governance as we are with our technology" and that "trust is a function of control and transparency."

Suleyman has said in the past that he would like to reveal the makeup of the ethics board, but the decision appears to still be above his pay grade. He did state that the recent Partnership on AI with other tech giants like IBM, Amazon and Facebook is a step in the right direction, though.

Read next: Google must consider 'unintended consequences' of AI says machine learning expert

Lastly, Suleyman said that DeepMind is currently working on a technical capability, based on blockchain technology, to log data movements in a way that cannot be tampered with.

He explained: "We are pioneering a general transparency architecture, this is a distributed, verifiable logging architecture that describes where data has moved when.

"We believe we can build a system like that in a very distributed and ultimately un-tamperable way. We can generate a log of who has interacted with that data and where it has moved in a way that is mathematically proven to be un-tamperable."
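Suleyman gave no implementation details, but the core idea behind such a verifiable log is a hash chain, the same primitive blockchains are built on: each record's hash covers the previous record's hash, so altering any past entry breaks every link after it. A minimal sketch (the field names and entries here are hypothetical, not DeepMind's design):

```python
import hashlib
import json

def append_entry(log, actor, action):
    """Append a record whose hash covers the previous record's hash."""
    prev = log[-1]["hash"] if log else "genesis"
    record = {"actor": actor, "action": action, "prev": prev}
    record["hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()
    log.append(record)
    return log

def verify(log):
    """Recompute every link; return False if any record was altered."""
    prev = "genesis"
    for record in log:
        body = {k: v for k, v in record.items() if k != "hash"}
        expected = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        if body["prev"] != prev or record["hash"] != expected:
            return False
        prev = record["hash"]
    return True

log = []
append_entry(log, "hospital_a", "read record 42")
append_entry(log, "research_group", "copied record 42")
print(verify(log))   # True: the chain is intact

log[0]["action"] = "nothing to see here"   # tamper with history
print(verify(log))   # False: the stored hash no longer matches
```

A real system would distribute copies of the log across independent parties, so that rewriting history would require forging every copy at once, which is what makes the log "mathematically proven to be un-tamperable" in practice.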