Given the ubiquity of the term 'artificial intelligence', you might assume everyone knows what it means. Not so.  

The Oxford English Dictionary defines artificial intelligence as: "the theory and development of computer systems able to perform tasks normally requiring human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages."

As a field, AI is simultaneously over-hyped and genuinely changing the world as we know it.

Perhaps the word 'intelligence' is misleading. Computers cannot think for themselves. When people hear the phrase artificial intelligence, they may equate it with some form of sentience – which machines most definitely do not possess. Sentient machines remain confined to the realm of science fiction.

Weak and strong AI

Arguably, AI is such a broad term as to be almost meaningless. It may be worth narrowing it down to two categories: weak AI and strong AI.

Weak AI is the only kind that actually exists. It means AI that specialises in one specific area. For example, DeepMind's AlphaGo program has beaten the world's best human players at the game Go, but it can't take that knowledge and apply it to different situations. This ability is called 'task transfer', and it's something AI is not very good at.

Strong AI, or 'artificial general intelligence', refers to AI that is as clever as a human and able to perform any task a human can. But here's the catch: it doesn't exist. It may one day, but that remains a distant prospect. So let's not worry about it here.

History of AI

Artificial intelligence has its roots in the Second World War, when fields like computing and neuroscience were starting to emerge. One notable contributor was the mathematician Alan Turing, who helped to break German naval codes during the war. We have him to thank for the 'Turing Test', in which a machine passes if it can trick a person into thinking they are talking to another human. The term artificial intelligence itself, however, was first coined by the American computer and cognitive scientist John McCarthy at a summer conference at Dartmouth College in 1956.

Despite a lot of hype around AI in the 1960s, excitement receded in the 1970s, when millions had been spent on development with little concrete return. Interest picked up again in the 1980s, when companies began to focus less on 'general intelligence' and more on AI's ability to solve narrow tasks. In 2008, Google launched an AI-powered speech recognition app, and the field's applications have grown wider and wider ever since.

Real-world examples of AI

Real-world examples of AI are everywhere you look: your Gmail account and Google Maps, Siri, Cortana and Alexa, Amazon's recommendation engine, parking-assist features in cars and Facebook's facial recognition technology.

AI is used to assist all sorts of everyday decisions, from parsing legal contracts to electronic trading to medical diagnoses. Cutting-edge applications of AI include driverless cars, detecting emotions from people's faces and autonomous delivery systems.

Subsets of AI

Once you go deeper than surface-level detail within a field as technical as AI, things can get very complex, very quickly.

There are a number of AI subsets, but broadly they can be categorised as machine learning and deep learning. Machine learning is a subset of AI, while deep learning is a subset of machine learning (got it?).

Machine learning solves a task by learning from data: it is the practice of using algorithms to analyse data, learn from it, and then make a determination or prediction about something in the world. In essence, it involves 'training' computers to perform tasks rather than explicitly programming them.
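To make that concrete, here is a minimal sketch in Python using the popular scikit-learn library. The tiny fruit dataset is invented purely for illustration; the point is simply that the model is trained on labelled examples and then makes a prediction about data it has never seen.

    # Train a model on labelled examples, then predict a label for
    # unseen data. The tiny fruit dataset is invented for illustration.
    from sklearn.tree import DecisionTreeClassifier

    # Each example: [weight in grams, skin smoothness on a 0-10 scale]
    features = [[150, 9], [170, 8], [130, 3], [120, 2]]
    labels = ["apple", "apple", "orange", "orange"]  # the known answers

    model = DecisionTreeClassifier()
    model.fit(features, labels)       # 'training': learn patterns from the data

    print(model.predict([[160, 7]]))  # -> ['apple']: a prediction, not a hand-written rule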

Deep learning is one of the main techniques fuelling the current AI boom. It involves passing data through 'neural networks', webs of algorithms whose layered structure loosely mimics that of the brain.
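For a feel of what a neural network looks like in code, here is a toy one defined with PyTorch, a widely used deep-learning library. The layer sizes and random input are arbitrary choices for illustration, and the network is untrained, so its outputs are meaningless until it has learned from data.

    # A toy neural network: data flows through stacked layers of weighted
    # connections, each followed by a non-linear activation.
    import torch
    import torch.nn as nn

    network = nn.Sequential(
        nn.Linear(4, 8),  # input layer: 4 features in, 8 'neurons' out
        nn.ReLU(),        # non-linearity lets the net learn complex patterns
        nn.Linear(8, 2),  # output layer: scores for 2 possible classes
    )

    x = torch.randn(1, 4)  # one random example with 4 features
    print(network(x))      # raw scores; training would make these meaningful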

The first successful use of deep learning was large-scale automatic speech recognition. Other applications include image recognition and computer vision, natural language processing, recommendation systems and fraud detection.

Future of AI

Given the breakneck pace of recent innovations, it seems risky – indeed foolish – to set too much store by predictions of how artificial intelligence will develop.

Predictions of 'super-intelligent' AI, while widely popularised in the mainstream media, seem unlikely to come true any time soon.

However, it is probable that AI will continue to develop and enhance research in fields like automated transportation, 'cyborg' or biohacking technology, virtual assistants and robotics.

AI, like roads or electricity, is at root a neutral technology. How it is applied is what will dictate any moral judgement. Worryingly, AI is increasingly being adopted by military forces around the world, with a number of commentators even dubbing this 'the new space race'.

On a more positive note, the World Economic Forum has suggested AI could help save the world, for instance by helping to address climate change, one of our biggest problems.

Ultimately, it's down to humanity and how we choose to use it.

As AI expert Sir Nigel Shadbolt put it: "It's not artificial intelligence that worries me. It's human stupidity."