Almost every business today seems to be deploying artificial intelligence. Analyst house Gartner believes that by 2020, it will be pervasive in almost every new software product and service. But how many companies have an effective AI system in production?

According to Satalia CEO Daniel Hulme, the answer is none.

Image credit: WCIT

"Nobody’s doing AI," Hulme argued at the WCIT conference in Yerevan, Armenia this week. "People are using machine learning to find patterns from data. Machine learning is not AI."

Hulme is far from the first to dispute the industry's loose use of the term. Its misuse has become so prevalent that it has spawned the phrase "AI washing", describing companies that falsely imply a certain level of intelligence in their technology stack.

Research by MMC Ventures shows how prevalent AI washing has become. The London-based investment firm found that two-fifths of 2,830 European companies that call themselves AI startups don't actually use the technology at all.

But even this figure may understate the problem. The MMC study deemed machine learning a subset of AI; Hulme argues that this definition is inaccurate, as all machine learning does is find patterns in data.

"The definition of intelligence is goal-directed adaptive behaviour," he said. "If you build a system that's able to make a decision, learn about whether that decision is good or bad so it makes a better decision tomorrow, I would argue that's AI. I haven't seen a single successful system in production in my life that does that yet."

The MMC study also illustrated the appeal of AI washing: it found that self-described AI startups attracted median funding rounds around 15 percent higher than those of other software companies.

Hulme argues that the roots of the problem run deeper than business, into the academic institutions where the techniques are first developed.

He has first-hand experience of this from both sides of academia: Hulme is currently UCL's computer science entrepreneur in residence, and previously earned a Master's and a Doctorate in AI at the same university.

"There's confusion in academia about what AI is," he said. "You see a lot of academics jumping on the bandwagon because they've got machine learning experience and because over the past two decades, machine learning has been very exciting, they have rebranded themselves as AI. But I think that actually AI is a combination of technologies. Within academia they're starting to realise that."

Machine learning's reliance on pattern-finding also means it doesn't always adapt in predictable or intended ways. Microsoft found this out the hard way in 2016 when the company launched Tay, a chatbot designed to learn from its interactions with users of the social network Twitter. Bad actors promptly embarked on a campaign to teach the nascent chatbot racist and sexist language. Within a day, Microsoft had suspended the account. If the same situation occurred with autonomous weapons, the risks could grow from causing offence to causing mass destruction.

Hulme founded Satalia in 2007 to reduce this risk through a method called optimisation, which applies explicit constraints to an objective function to find solutions that are both safe and efficient, and that can adapt based on the outcomes of their own decisions.
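To make that idea concrete, the following is a minimal sketch of constrained optimisation in Python, using scipy.optimize.linprog: an objective function is minimised subject to explicit constraints. The delivery scenario, costs and limits are invented for illustration and are not Satalia's actual model.

```python
# A toy constrained optimisation problem: choose how many deliveries to
# run on two routes so that total cost is minimised without breaching
# explicit constraints. All numbers are illustrative assumptions.
from scipy.optimize import linprog

# Objective: minimise total cost. Route A costs 4 per delivery, route B costs 3.
cost = [4, 3]

# Constraints, expressed as A_ub @ x <= b_ub:
#   2 driver-hours per route A delivery, 1 per route B delivery, 40 hours available;
#   at most 30 deliveries in total across both routes.
A_ub = [[2, 1],
        [1, 1]]
b_ub = [40, 30]

# Each route must fulfil at least 10 orders.
bounds = [(10, None), (10, None)]

result = linprog(cost, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
print(result.x, result.fun)  # optimal deliveries per route, and the minimum cost
```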

For example, one customer, the furniture retailer DFS, used this method to improve its delivery schedules. These were previously managed by human operators, who would call customers to arrange deliveries and manually create a schedule once all the orders were in, which meant that customers couldn't be offered specific time slots.

Satalia replaced this system with a custom-built algorithm that continuously re-optimises routes and schedules whenever a new order is placed. The algorithm accounts for constraints including vehicle restrictions, loading times, driver shifts, product types and locational data to predict how long each delivery will take. These predictions are then fed back into the algorithm, allowing it to adjust over time.
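That feedback loop can be sketched in a few lines. This is a hypothetical illustration, not Satalia's implementation: the update rule (an exponential moving average) and all names are assumptions, and the real system would solve a full routing problem rather than summing durations.

```python
# A hypothetical sketch of predict-observe-adapt: predicted delivery
# durations are compared against actual ones, and the estimates are
# blended towards reality so the next schedule improves.

# Current duration estimates (hours) per product type - illustrative values.
estimates = {"sofa": 1.5, "armchair": 0.75}

ALPHA = 0.2  # how strongly one new observation shifts an estimate

def record_outcome(product: str, actual_hours: float) -> None:
    """Fold an observed delivery duration back into the running estimate."""
    estimates[product] = (1 - ALPHA) * estimates[product] + ALPHA * actual_hours

def predicted_schedule_length(orders: list) -> float:
    """Sum predicted durations for a batch of orders - a stand-in for the
    real re-optimisation step, which would solve a routing problem."""
    return sum(estimates[o] for o in orders)

print(predicted_schedule_length(["sofa", "armchair"]))  # before feedback: 2.25
record_outcome("sofa", 2.0)   # the sofa delivery actually took longer
print(predicted_schedule_length(["sofa", "armchair"]))  # adapted: 2.35
```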

Hulme believes this approach of constant iteration can overcome the inherent limitations of traditional machine learning techniques.

"It's a completely different set of skills to make decisions. These are very hard problems to solve, and once you've made a decision you need to learn about whether that decision is good or bad. Then you have to adapt your own model of the world," he said.