We live in the age of the algorithm, where processes and rules are increasingly determined by computers. But we might be on the cusp of a backlash.

Public trust in algorithms has been damaged in recent months by a spate of scandals over their role in propagating bias, manipulating public opinion and deepening inequality. Algorithms have been used to determine who gets job interviews or loans, to falsely flag black defendants as future criminals in court sentencing, and to misinform the public in political campaigns.

Image: Alchetron

Renowned computer scientist Ben Shneiderman has a plan to ensure algorithmic accountability. The University of Maryland professor and founder of its Human-Computer Interaction Lab outlined his strategy at the 2017 Turing Lecture on Tuesday.

"What I’m proposing is a National Algorithm Safety Board," Shneiderman told the audience in London’s British Library.

The board would provide three forms of independent oversight: planning, continuous monitoring, and retrospective analysis. Combined, they provide a basis for ensuring that the right system is selected and then properly supervised, and that lessons are learnt to build better algorithms in future.

Growing data concerns

We disseminate and consume ever-growing sets of data, governed by algorithms. They are in our smartphones, in IoT devices and in our use of social media. For many of us, most of what we do is now online.

Fears over privacy, bias and inequality, and the dangers of artificial intelligence, have become public concerns as stories such as the Trump campaign's use of social media for targeted advertising receive mainstream media coverage.

Shneiderman wants to promote the positive power of data and algorithms, and the role they can play in improving lives around the world.

"I want to see big data [and] powerful technologies put to work to promote positive outcomes for people," he said.

"We have serious problems in our time from healthcare delivery and energy sustainability, from environmental destruction, of computer safety, of cyber security, and of course the key thing - British Airways planes flying. All those things are really important and we need national and effective ways of dealing with big data."

The Obama administration recognised its potential in the "Big Data Initiative" of 2012, and the challenges of developing scalable algorithms for imperfect data and effective human-computer interaction tools. The White House's 2016 "Strategic Plan for Big Data Research and Development" added concerns over the trustworthiness of data, and the importance of transparent use and an auditable pathway to its results.

[Read next: Developing a data ethics framework in the age of AI]

The dangers were further analysed in a book by Cathy O'Neil released later that year, titled "Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy".

She described the opacity, scale, and damage that can be inherent in algorithms, through examples including a single way of assessing credit scores or job performance that unfairly judges individuals based on unclear and insufficient data.

Shneiderman believes the risks are even worse when data guides life-critical systems such as healthcare, air traffic control, and nuclear control rooms.

"I think that the algorithms can be biased, harmful, and even deadly," he said.

The solution

Shneiderman's research has contributed to the touchscreen keyboards used on smartphones, to photo tagging and to the blue hypertext links that we still use on the web today.

But the work he most frequently returns to when discussing algorithmic accountability is his seminal textbook on human-computer interaction and information visualisation, "Designing the User Interface".

The first edition called for "balancing automation and human control", but by the sixth and most recent edition the emphasis had shifted to "ensuring human control while increasing automation".

We have become too reliant on, and too trusting of, the data and the systems that analyse it. Shneiderman wants humans to take back control.

Apple appears to agree. The company's "iOS Human Interface Guidelines" state that "people – not apps – are in control." It's a claim that Shneiderman thinks the company fulfils, by providing users with fine-grained control over their data. Your Facebook news feed, however, does not.

"[There's] no level of certainty what's going on," he said. "You don't have control over it. It’s not predictable in its behaviour.

"When you go to systems of richer complexity, you have to adopt a new philosophy of design."

The design of algorithms, therefore, "requires independent oversight, to mitigate the dangers of biased, faulty or malicious algorithms."

The independent oversight he envisages would involve open adversarial reviews, advisory boards, planning commissions, and internal and external audits. Transparency would come from opening the black box of algorithms, accountability from open failure reporting, and liability from ending the "hold harmless" agreements that shield parties from liabilities arising during a contract.

The future

A list of seven basic principles for algorithmic transparency and accountability was recently published by the United States Public Policy Council of the Association for Computing Machinery. It's a start, says Shneiderman, but a lot more needs to be done, starting with the establishment of a National Algorithms Safety Board.

Those responsible for providing oversight would need a high level of knowledge, but not so close a relationship with the industry that they risk losing their independence. Shneiderman pointed to the Federal Reserve Board as a model. It requires overseers of banks to rotate every few months so they don't become too friendly with those they must monitor.

They would also need sufficient powers to subpoena information and enforce recommendations, to ensure their decisions have a real impact.

[Read next: Sir Tim Berners-Lee lays out nightmare scenario where AI runs the financial world]

Framing the question as whether the algorithm or the human provides better judgment is a simplistic dichotomy, and a dangerous foundation for future human-computer interaction. The two need to work together, but the human must maintain control.

"Every rule has enforcement, exemptions, enhancement and education, and I think that's what it takes," said Shneiderman. "The goal is to clarify responsibility, so as to accelerate quality. That's the key thing."