Ever since IBM Watson won the hugely popular US quiz show ‘Jeopardy’ back in 2011, cognitive computing has been a hot topic for discussion. 

Those who have had a glimpse of this new technology recognise its importance and the potential it has to change the way we look at computers in the future. However, there exists a degree of confusion over what it is, how it works and why it’s different. That’s entirely understandable – it’s truly cutting edge and requires a significant mind-shift to comprehend what ‘cognitive’ means in practical terms.


Turing – setting the scene

Computing pioneer Alan Turing neatly summed up the theory of cognitive computing in his 1950 paper ‘Computing Machinery and Intelligence’:

‘Instead of trying to produce a programme to simulate the adult mind, why not rather try to produce one which simulates the child's? If this were then subjected to an appropriate course of education one would obtain the adult brain.’

Like a child, cognitive (‘thinking’ or ‘learning’) computers need to be taught rather than programmed. They use what they are taught, absorbing information from both structured (documentation, manuals, product and customer information) and unstructured (blogs, review sites, social feeds) data, to adapt and evolve, and they understand context. This allows them to weigh the evidence and select the best available answer to a question, rather than looking up a single pre-programmed ‘correct’ one, much as humans do.
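
To make that idea a little more concrete, here is a deliberately simplified Python sketch of answering by weighing evidence rather than looking up a single programmed response. The snippets, candidate answers and scoring rule are all invented for illustration; it shows the general shape of evidence-based ranking, not how Watson actually builds or scores its hypotheses.

```python
from collections import Counter

# A handful of snippets standing in for ingested documents and reviews.
EVIDENCE = [
    "The X200 tablet has the longest battery life in its class.",
    "Reviewers praise the X200 for battery life and screen quality.",
    "The Z10 tablet is the cheapest option but reviewers report short battery life.",
    "The Z10 is popular with students on a budget.",
]

def score_candidates(question_terms, candidates, evidence):
    """Score each candidate answer by how often it appears in passages
    relevant to the question's key terms, then normalise to a confidence."""
    scores = Counter()
    for passage in evidence:
        text = passage.lower()
        relevant = any(term in text for term in question_terms)
        for candidate in candidates:
            if relevant and candidate.lower() in text:
                scores[candidate] += 1
    total = sum(scores.values()) or 1
    return sorted(((c, scores[c] / total) for c in candidates),
                  key=lambda pair: pair[1], reverse=True)

# "Which tablet has the best battery life?" - return ranked answers, not one lookup.
for candidate, confidence in score_candidates({"battery", "life"}, ["X200", "Z10"], EVIDENCE):
    print(f"{candidate}: confidence {confidence:.2f}")
```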

From a practical perspective, this means products using cognitive computing get better the more we use them. Instead of purchasing a finished item crafted to the last detail by developers, users take on some of the responsibility for teaching the product to meet their needs. This allows these products to keep pace with a complex world and adapt to unforeseen changes that weren’t covered by the original programming.
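
As a toy picture of that shift in responsibility, the sketch below shows a ‘product’ whose knowledge is extended by its user after delivery. The shop scenario is invented and the mechanism is deliberately naive (a plain dictionary rather than any statistical learning), but it captures the practical point: the system becomes more useful with use instead of being fixed by its developers.

```python
class TeachableAssistant:
    """A deliberately naive 'teachable' helper: corrections from the user
    become part of its knowledge, so answers improve after purchase."""

    def __init__(self):
        # Starts almost empty, like Turing's "child" machine.
        self.knowledge = {"return period": "30 days"}

    def answer(self, question):
        for topic, fact in self.knowledge.items():
            if topic in question.lower():
                return fact
        return "I don't know yet - can you teach me?"

    def teach(self, topic, fact):
        # The user, not a developer, extends what the product can do.
        self.knowledge[topic.lower()] = fact

assistant = TeachableAssistant()
print(assistant.answer("What is the return period?"))   # known at ship time
print(assistant.answer("Do you ship to Ireland?"))      # gap in its knowledge
assistant.teach("ship to Ireland", "Yes, delivery takes 3-5 working days.")
print(assistant.answer("Do you ship to Ireland?"))      # learned from the user
```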

Learning from the past

Historical cognitive projects including Carnegie Mellon’s Speech Understanding Research programme, LISP machines developed by MIT and Deep Blue (another IBM project which famously beat Garry Kasparov at chess) have paved the way for today’s ‘thinking’ computers.

More recent developments that build on these foundations include:

  • Wolfram Alpha, a ‘computational knowledge engine’ that uses externally sourced, curated data to provide direct answers to users’ questions rather than lists of documents or web pages where the answer might be found
  • The Numenta Platform for Intelligent Computing (NuPIC) open source project, which encourages developers to use learning algorithms that replicate the way the brain’s neurons learn (a simplified sketch of this learning-by-example idea follows this list)
  • IBM Watson, which represents a new era of computing backed by a billion-dollar investment from IBM; it processes information more like a human than a computer, understanding natural language, generating hypotheses based on evidence and learning as it goes
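
To give a flavour of what learning at the level of a single artificial neuron can look like, the sketch below trains one neuron with a simple error-driven weight update. This is a far older and cruder idea than NuPIC’s Hierarchical Temporal Memory or anything inside Watson, and the patterns are made up for illustration, but the behaviour does emerge from exposure to examples rather than from hand-written rules.

```python
def train(patterns, n_inputs, epochs=20, rate=0.1):
    """Toy single-neuron learner: the 'programme' (the weights) emerges
    from repeated exposure to taught examples."""
    weights = [0.0] * n_inputs
    for _ in range(epochs):
        for inputs, target in patterns:
            # Activation: a thresholded weighted sum of the inputs.
            output = 1 if sum(w * x for w, x in zip(weights, inputs)) > 0.5 else 0
            # Error-driven update: nudge each weight towards producing
            # the taught answer for this pattern.
            for i, x in enumerate(inputs):
                weights[i] += rate * x * (target - output)
    return weights

# The neuron is "taught" to fire only when the first two inputs are active.
patterns = [([1, 1, 0], 1), ([1, 0, 0], 0), ([0, 1, 0], 0), ([0, 0, 1], 0)]
print("learned weights:", [round(w, 2) for w in train(patterns, n_inputs=3)])
```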

Why cognitive computing is important

Until recently, the main ways we worked with computers – GUIs, spreadsheets, search engines and so on – all required us to learn their highly specialised language, something that demands dedication and at least a passing interest in the technology behind them.

Computers that can learn and speak our language represent a revolution in the way we work with them. Making interaction and systems development more natural and intuitive means more people can use them, more often. Businesses that train hundreds of employees a month can use the same methods to train their cognitive computers. Their ability to process unstructured data in the form of millions of documents, then use the information to provide contextually relevant answers, offers unprecedented opportunities across almost all industries – diagnosis (health), product, sales and tariff information (retail) and textbooks, papers and academic material (education) to name just three.

Natural language processing – the future of cognitive

While the ability to analyse documents and other content to provide more ‘human’ responses to written questions is undoubtedly a terrific development, there’s a reason why every computer appearing in sci-fi uses speech as its primary method of communication. Natural language processing via the spoken word is the next logical step towards parity of intelligence – or at least a plausible imitation.

Intelligent personal assistants Siri and Cortana have capitalised on this idea, using voice recognition software to retrieve information and personal details when requested by the user. Though not strictly ‘cognitive’, they present a good-enough facsimile, sufficiently ‘futuristic’ to satisfy the market.

Truly cognitive computers such as Watson, on the other hand, have the power to process human speech and use it to deliver ‘human’ answers. Much of the meaning of language comes from context, subtlety and nuance. For example, a sentence like ‘which is the best tablet for me?’ means different things depending on whether you’re in Boots or the Apple Store. This is where contextual capabilities come into play – educating cognitive computers to interpret meaning correctly based on appropriate data.
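
A crude way to picture that contextual step: route the same question to different readings depending on cues about where it was asked. The senses, cue words and overlap scoring below are invented for illustration; a real cognitive system would rely on far richer statistical models of context.

```python
# Toy contextual disambiguation: the same question gets a different reading
# depending on cues about the setting in which it was asked.
SENSES = {
    "medicine": {"pharmacy", "chemist", "headache", "dose", "prescription"},
    "computer": {"electronics", "screen", "apps", "battery", "charger"},
}

def interpret(question, context):
    """Pick the sense of the ambiguous word 'tablet' by overlap with context terms."""
    context_terms = set(context.lower().split())
    best_sense = max(SENSES, key=lambda sense: len(SENSES[sense] & context_terms))
    return f"{question!r} read as a question about a {best_sense} tablet"

print(interpret("Which is the best tablet for me?",
                "customer at the pharmacy counter asking about a headache dose"))
print(interpret("Which is the best tablet for me?",
                "customer in the electronics aisle comparing screen size and battery life"))
```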

The rise of the machines?

At its core, cognitive computing is the democratisation of development – ‘supercomputers’ are no longer the sole preserve of computer scientists. Watson, for example, is available through the cloud and hundreds of partners are building apps with it – cognitive is out there and available to anyone who wants to put the next important phase of computing to the test.

Of course, some among the general population fear that once computers become ‘intelligent’, it’s only a matter of time before Skynet becomes self-aware and replaces us all with machines. On this point, IBM couldn’t be clearer about Watson’s aims – it is designed to be ‘a natural extension of what humans can do at their best’, a commitment to assisting humanity rather than replacing it. We can rest easy knowing that cognitive computing is set to make our lives easier – nothing more sinister than that.