Humans are handing over ever more autonomy to machines, but our faith in their objective efficiency may be misplaced. Artificial intelligence still needs our help. It needs skilled people to oversee the data and develop and train the systems that process it.
"In the enterprise, I have yet to see any scenario where a non-human assisted AI process is showing a lot of success," says Praful Saklani, the CEO and founder of Pramata. His company uses human-assisted AI to help customers including CenturyLink and Hewlett Packard Enterprise to identify where they're losing on revenue or profit in their commercial relationships.
The root of the challenge is that the data is inherently unstructured. The contracts are written in an assortment of ways, and pricing models can vary immensely.
"AI may only be able to give you 60 to 70 percent of the answer, and you're going to need humans to fill that remaining 30 percent. It's not an either/or, it's a combination of both and mastering how that combination works is the difference between success and failure."
He points to contract renewals as an example of the problem. In around 80 percent of the customer relationships that Pramata analyses, the contracts include no renewal date. They instead contain a renewal term, such as two years from the date of signature with a 90-day advance notification period. AI can identify that more information is needed to understand the renewal date, but it needs help to make that information actionable.
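The renewal-term example above can be sketched in code: once a human (or a trusted extraction step) has connected the signature date with the term, the concrete dates fall out mechanically. This is a minimal illustration only; the field names and function are invented, not Pramata's actual pipeline.

```python
from datetime import date, timedelta

def resolve_renewal(signature_date: date, term_years: int, notice_days: int):
    """Turn a relative renewal term ("two years from signature, 90-day
    notice") into concrete, actionable calendar dates."""
    # Resolve "N years from signature" to an actual renewal date.
    renewal_date = signature_date.replace(year=signature_date.year + term_years)
    # Deadline by which advance notification must be given.
    notice_deadline = renewal_date - timedelta(days=notice_days)
    return renewal_date, notice_deadline

renewal, deadline = resolve_renewal(date(2016, 3, 1), term_years=2, notice_days=90)
# renewal: 2018-03-01, deadline: 2017-12-01
```

The arithmetic is trivial; the hard part, as Saklani notes, is knowing which pieces of data to connect in the first place.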
"That's where you can have a skilled human come in to play, to say we have to connect this piece of data with this piece of data, and now we actually have something that's useful for the end customer.
"Just using an AI algorithm alone won't work. Just using people alone would be very time-intensive and perhaps error-prone. Combine both things and you can get a very actionable result for the end customer."
Getting the balance right
Human input is essential for almost all AI to perform successfully, from Google auto-suggestions, which improve as people supply information that trains the algorithms, to Facebook photo-tagging, which refines its accuracy through the corrections made by users.
Each has strengths that complement those of the other. People can't compete with the statistical analysis done by computers, while computers struggle in areas that come more naturally to humans, such as communication, emotional intelligence, formulating hypotheses and thinking outside the (black) box.
"In the enterprise, one of the big issues is the datasets are very unclean to begin with, so even curating the initial dataset for training is something that requires a lot of human intervention," says Saklani.
"But then even as you get the data out of the set and continue to tune the algorithms to make sure that you're getting better and better results, the results you get invariably need to be augmented with some kind of analysis before action is taken.
"The issue that I see hear in the Valley and in the industry in general, is AI is being portrayed as some kind of silver bullet, and not enough attention is being paid to the various curation processes both around the input and the output to make this powerful set of technologies useful."
If enterprises can't standardise unstructured data as it arrives, they should instead transform it into structured data. Saklani recommends businesses identify the problem that needs solving and then structure relevant data around that – bucking the trend of gathering as much data as possible.
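That problem-first approach might look like the sketch below: extract only the fields the renewal question needs from free-text clauses, and flag anything the patterns miss for a human rather than guessing. The regular expressions and field names are assumptions for illustration, not a description of any vendor's system.

```python
import re

# Digit-based patterns for the two fields the renewal problem requires.
TERM_RE = re.compile(r"(\d+)\s+year", re.IGNORECASE)
NOTICE_RE = re.compile(r"(\d+)[- ]day", re.IGNORECASE)

def structure_clause(clause: str) -> dict:
    """Reduce an unstructured clause to a small structured record,
    routing incomplete extractions to human review."""
    term = TERM_RE.search(clause)
    notice = NOTICE_RE.search(clause)
    return {
        "term_years": int(term.group(1)) if term else None,
        "notice_days": int(notice.group(1)) if notice else None,
        # Missing fields are flagged for a person, not guessed.
        "needs_review": term is None or notice is None,
    }

record = structure_clause(
    "Renews for two years from signature with a 90-day notification period."
)
# The spelled-out "two years" slips past the digit pattern, so the record
# comes back with needs_review=True – the "remaining 30 percent" a human fills.
```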
The "actionability threshold"
Search technologies that scrape through documents to find specific information show the risks of brute-force automation: without a large amount of post-processing, the chance of omissions is very high.
"You're going to get a lot of static or stuff that shouldn't be in there, and you're going to miss a lot of stuff that should be in there without that nuance of those extra processing steps that require human intervention," says Saklani. "That's a big risk."
There is a tendency to believe the more data the better, but businesses would often be better served by prioritising the quality of the data. To achieve this, companies need to define the business problem as precisely as possible, and then collect only data that could provide the solution.
The objective is to identify what Saklani calls "the actionability threshold" - the point at which the information is accurate, timely and complete enough to take action.
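One way to picture the threshold varying by problem is the sketch below, which routes a record to automated action or to a human analyst depending on a problem-specific confidence cut-off. The threshold values and problem names are invented for illustration.

```python
# Problem-specific cut-offs: the cost of a wrong action determines
# how much confidence is needed before the machine acts alone.
THRESHOLDS = {
    "contract_renewal": 0.95,  # expensive to get wrong: demand near-certainty
    "lead_scoring": 0.70,      # cheap to act on: tolerate more noise
}

def route(problem: str, confidence: float) -> str:
    """Act automatically above the threshold; otherwise hand off to a human."""
    return "act" if confidence >= THRESHOLDS[problem] else "human_review"

route("contract_renewal", 0.90)  # -> "human_review"
route("lead_scoring", 0.90)      # -> "act"
```

The same 90-percent-confident extraction clears the bar for one problem and falls short for another, which is the interplay of machine and human work Saklani describes.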
"For each different type of business problem, the actionability threshold is slightly different," he says. "I think that that's really what determines the balance of what you can do with the machine versus where does the interplay with human analysis or processing steps come into play."
An equal footing for the future
In 2005, eight years after Deep Blue's seminal victory over world chess champion Garry Kasparov, amateur chess players Steven Cramton and Zackary Stephen and their personal computers entered an advanced chess tournament comprising teams that paired humans and machines.
The friends from New England defeated grand masters using supercomputers on their way to winning the title. The decisive factor in their victory was their understanding of when to use the computers and when to use human judgement.
They proved that despite the advantage of AI in a head-to-head showdown, the strongest chess players remain human-computer combinations. Enterprises that have shown the most effective uses of AI have a similar faith in this model of human-computer symbiosis.