Public fears over the control of AI are mirrored by the companies that deploy it. Their worries are growing as AI becomes more complex and more difficult to explain, let alone understand.

Their concerns extend to the very top of the tech world. Elon Musk, an investor in AI companies including DeepMind and OpenAI, has called for regulation to stop AI going rogue and developing into a "fundamental risk to the existence of human civilization." AI already determines life-changing judgments around sentencing, welfare benefits and insurance claims, and those affected understandably want to know how these decisions were made.


The impending General Data Protection Regulation (GDPR) will make it mandatory for organisations operating in the EU to explain such decisions.

Transparency could provide a solution for them all. But if companies want to communicate the workings of their AI to the public, they first need to understand how these models function.

Clarifying opaque AI

AI can be divided into two broad categories: transparent and opaque. The former uses self-learning algorithms that can be audited to reveal the route they take to their decisions. The latter reasons inside a black box, free of the constraints that would make that route traceable. It is a truer form of artificial intelligence, as the systems don't follow a programmed path but instead work things out for themselves.
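To make the distinction concrete, here is a minimal sketch in Python. The choice of scikit-learn is an assumption, as the article names no particular toolkit: a shallow decision tree can be audited by printing the exact rules behind each prediction, while a neural network trained on the same data offers no comparable route map.

```python
# A minimal sketch of the transparent/opaque distinction. scikit-learn is an
# assumption here; the article names no particular toolkit.
from sklearn.datasets import load_breast_cancer
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_breast_cancer()
X, y = data.data, data.target

# "Transparent" model: its decision route can be read back as if/else rules.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
print(export_text(tree, feature_names=list(data.feature_names)))

# "Opaque" model: predictions emerge from thousands of learned weights, with
# no human-readable path from inputs to output.
net = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=500, random_state=0).fit(X, y)
print(net.predict(X[:1]))  # a verdict, but no route to audit
```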

Opaque AI is common in more sophisticated forms of AI such as deep learning, which mimics the activity of neurons in the human brain. Like its progenitor, it follows complex pathways to the decisions that it makes. You and I would struggle to explain how our minds distinguish one person's face from another's, and so do our artificial imitators.

"When Facebook does facial recognition, or Google does speech-to-text recognition, those algorithms are pretty accurate, but they're unable to explain why, for example, this picture is a picture of somebody we know,” explains Don Schuerman, CTO of Pegasystems, a software company that uses AI to improve customer engagement.

"Organisations don't always want opaque. They want justification."

The simpler forms of automated decision-making, such as predictive analytics, tend to follow fairly transparent models. Opaque AI can form deep insights beyond the imagination of its developers, but in exchange it takes from them a measure of control.

Microsoft's chatbot Tay showed the dangers of companies ceding their authority. Shortly after the bot was released on Twitter in March 2016, it began to spout a stream of racist invective in response to the provocation of other Twitter users.

This unpredictable behaviour can damage a company's reputation. The risks are greater when AI is used to make important decisions. Banks now use it to flag fraudulent transactions, but a customer may claim that its verdict is incorrect. If an opaque AI model can't demonstrate how the decision was made, it's difficult to justify any actions based on its conclusion.
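As an illustration of what such a justification could look like, the hedged sketch below uses a transparent logistic regression model on invented data: the contribution of each input to a flagged transaction can be read straight from the model's coefficients. The feature names and figures are hypothetical, not drawn from any real bank.

```python
# Hypothetical sketch: if the fraud model is transparent (here, a logistic
# regression), each input's contribution to a flagged transaction can be read
# directly from the coefficients. Feature names and data are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

features = ["amount", "hours_since_last_txn", "foreign_merchant", "new_device"]
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 4))
y = (X[:, 0] + 2 * X[:, 3] + rng.normal(size=1000) > 1.5).astype(int)

model = LogisticRegression().fit(X, y)

# Per-feature contribution to the log-odds that this transaction is fraudulent.
txn = X[0]
contributions = model.coef_[0] * txn
for name, value in sorted(zip(features, contributions), key=lambda p: -abs(p[1])):
    print(f"{name:>22}: {value:+.2f}")
```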

The control issue can ruin lives when AI is applied to public policy. It can lead to welfare claims being refused, or criminals receiving unjust sentences based on built-in racial discrimination.

"What is then happening is public policy is encoded in the algorithm and you no longer have the ability to challenge it, because someone decides that it's a trade secret," says Suman Nambiar, the head of AI at IT consulting firm Mindtree.

"The data needs to be more transparent so that people can understand what bias is getting enshrined in the data, because otherwise they're going to think this is an impartial, infallible oracle when it's not. It's basically preserving our biases and hard-coding them."

Both the organisations that deploy these systems and the individuals they affect need evidence that the decisions these systems make are accurate and fair.

Different needs for transparency

Businesses need to balance the benefits of opaque AI against its risks and a growing need to gain the public's trust.

The level of transparency required will depend on the purpose of the AI. Decisions made on loans, insurance, or criminal sentencing need high levels of transparency to justify the outcomes, whereas image recognition used to flag stock carrying outdated branding would be safe using a more opaque model.

In some cases, companies may want to run multiple AI models simultaneously, each with a different level of transparency.

"They might be running a risk calculation model to find out what the risk level of a customer is, and at the same time they'll run a product selection algorithm," says Schuerman. "The two should be complementary. If the algorithm can explain itself to you, you can scan the predictors for bias. If the algorithm can't, you might need to put in a more rigorous testing model to look for bias, not in the algorithm itself but in the outcomes."

There's a growing trend for organisations to make their algorithms public. OpenAI, a non-profit AI research company supported by the aforementioned Musk, publishes its work online. Apple, a company renowned for its secrecy, has also started releasing its AI research, while DeepMind made the freedom to publish its research papers a condition of its sale to Google.

Transparent AI by definition reveals the inner workings of a system that generates unique benefits for a business, but Nambiar argues that the true value lies elsewhere.

"What's going to be your competitive edge is the data and the way you train the models," he says. "Google can publish every algorithm it uses, but nobody on earth is going to have the mass of data that they have."

Schuerman agrees, adding the importance of AI strategy to the mix.

"What's becoming differentiated isn't the algorithms but how the business chooses to wrap a strategy around it," he says. "For example, how a company chooses to balance its cost of service against customer satisfaction. These are strategic choices a company makes, and it's where a company differentiates.

"Opaque AI will become more commoditised. Putting facial recognition – opaque AI –  in my banking mobile app is fine, but I can use transparent AI that feels very personalised, real and meaningful to that client – and that's something that's very differentiated."

Making AI more transparent

Software companies are creating ways for their clients to improve AI transparency. In September, Pegasystems announced a capability designed to reduce these risks by opening up the black box of opaque AI.

The system, known as T-switch, lets organisations set a threshold on a sliding scale for how transparent or opaque each AI model they use should be. It combines the algorithms they use with their own business rules and strategies and the ethical values they want associated with their brand. They can then deploy AI algorithms based on the requirements of their business.

"They can dial up how much transparency they want in their decisions," says Schuerman.

"For example, with risk [assessment] I probably want to be very transparent. But with algorithms doing text analytics maybe I'm willing to accept a little more accuracy in exchange for being less transparent. And I'll be knowing full well that the more opaque I get, the more I'll need to do more bias-testing."

Governments and regulatory bodies have been slower to respond. Counterintuitive as it may seem, the calls for AI regulation are primarily coming from within the industry, through the voices of tech luminaries such as Musk.

"If you look at the people banging the drum for these issues, it's coming from universities and technologists saying that someone needs to wake up and appreciate the consequences for society as a whole," says Nambiar.

Pioneering computer scientist Ben Shneiderman used his 2017 Turing Lecture to call for a National Algorithm Safety Board to provide the necessary independent governance to ensure algorithmic accountability.

Nambiar believes the oversight will need to come from a combination of the academics researching AI, the companies that commercialise it, and the public interest groups monitoring its deployment. This would help ensure that any resulting regulation and legislation is fair and effective.

"It has to be some kind of organisation in which everyone can have some faith, but without compromising commercial interests," he says.

Educating the public, says Schuerman, is another piece of the puzzle.

"The more we build data science education, the more we have a better understanding of what's going on inside these devices and applications we use every day."
