The UK's first major Parliamentary inquiry into Artificial Intelligence has called for a new cross-sector ethics code to ensure that the country becomes a world leader in AI.

Lord Clement-Jones, the chairman of the House of Lords Select Committee on Artificial Intelligence, told Techworld that an ethical approach was essential to ensure public support for AI.

"What we want is to make sure that the public is fully trusting in this technology, and you can only do that if they believe it's for the benefit of them and others when they're being applied, and also that it's transparent and unbiased in its application," he said.

The proposed "AI Code" could attract public support by creating consistent guidelines for developing and using AI across all organisations and companies in both the public and private sectors.

In a report titled AI in the UK: Ready, Willing and Able?, the committee set out five principles to form the basis of the code, which could be adopted internationally:

  1. Artificial intelligence should be developed for the common good and benefit of humanity.
  2. Artificial intelligence should operate on principles of intelligibility and fairness.
  3. Artificial intelligence should not be used to diminish the data rights or privacy of individuals, families or communities.
  4. All citizens have the right to be educated to enable them to flourish mentally, emotionally and economically alongside artificial intelligence.
  5. The autonomous power to hurt, destroy or deceive human beings should never be vested in artificial intelligence.

Legislation more urgent than regulation

This AI code could provide the basis for future statutory regulation, but the committee stopped short of recommending new regulation specifically for AI at this point.

Lord Clement-Jones said that "the overarching ethics are much more important at this stage", and that existing regulatory bodies remain best placed to judge the ethics of AI systems under their jurisdiction due to their expertise in the sector.

"The guys who really know what they're doing in regulating different sectors are those sector regulators, but we think there's a common set of principles that should be adopted across the board," he said.

Developments in AI may, however, require new legislation. It is difficult to assign accountability for decisions made or informed by AI systems, and this becomes more pressing when those decisions have a major impact on citizens' lives, such as assessing mortgage applications, diagnosing illnesses or guiding autonomous vehicles.

AI systems can malfunction or make erroneous decisions that have harmful consequences. This danger is amplified when algorithms learn and evolve. The committee concluded that it was not clear whether the existing mechanisms for legal liability were sufficient for these scenarios, and recommended that the Law Commission consider the adequacy of existing legislation as soon as possible.

"What we're really saying is it is not entirely clear just what the liability is, where the accountability for a particular decisions lies and so on when we're talking about AI, but we would like to make sure that our law does cover it, and therefore the law commission would be the ideal people to do it," said Lord Clement-Jones.

Effective transparency

Transparency is crucial for achieving public trust and developing ethical AI systems, but the committee concluded that achieving full technical transparency for some systems is difficult, if not impossible, and would, in any case, not always be appropriate or helpful.

Lord Clement-Jones explained that the level of transparency should be proportional to use. Netflix recommendations, for example, would require a lesser degree of transparency than mortgage decisions.

"Those sorts of very sensitive areas we think should either have clear ex-ante explainability in the highest case of sensitivity, or at the very least explainability after the event, and we think it's only fair and right that those should be explainable and transparent," he said. "But there are others where that level of transparency may not be quite so necessary."

It can be difficult to make advanced algorithms transparent. The route that some AI systems take to arrive at a decision is so complex that even their own engineers do not always know how they got there.
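The committee's distinction between ex-ante explainability and "explainability after the event" corresponds to what practitioners call post-hoc explanation. As a rough illustration only, and not something drawn from the report, the sketch below trains an opaque model on synthetic data and uses permutation importance from scikit-learn to rank which inputs its predictions relied on; the dataset and feature names are hypothetical stand-ins for something like a loan-approval system.

```python
# A minimal sketch of post-hoc explainability: train an opaque model on
# synthetic data, then use permutation importance to estimate which inputs
# drove its predictions. The data and feature names are illustrative only.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a decision dataset (e.g. loan approvals).
X, y = make_classification(n_samples=1000, n_features=5, random_state=0)
feature_names = ["income", "loan_amount", "credit_history", "age", "postcode_index"]

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The "black box": an ensemble whose individual decisions are hard to trace by hand.
model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# Explainability after the event: shuffle each input in turn and measure how much
# accuracy degrades, giving a rough ranking of what the model relied on.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for name, importance in sorted(zip(feature_names, result.importances_mean),
                               key=lambda pair: pair[1], reverse=True):
    print(f"{name}: {importance:.3f}")
```

Techniques like this do not open the black box itself, but they can supply the kind of after-the-event account of a decision that the committee has in mind for more sensitive applications.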

Lord Clement-Jones believes that these issues should not become a barrier to transparency.

"If they are black boxes they may have to open them up," he said. "How you design algorithms and AI systems is going to be pretty important in the future but people in the tech industry do need to know what the requirements are.

"I think it would be just unfair to suddenly impose this, and that's why we raised these issues because this may be something that we ought to be addressing in the near term."

Creating a consistent approach to AI ethics

The UK government has been relatively active in developing ethical principles for AI. In May 2016, Matt Hancock MP announced the first Data Science Ethical Framework for public consultation. The Government Digital Service is now updating this following feedback from the British Academy, the Royal Society and Essex County Council.

The government has also announced the creation of the Centre for Data Ethics and Innovation, "to enable and ensure safe, ethical and ground-breaking innovation in AI and data‑driven technologies", and a new Digital Charter to establish norms for the online world.

The inquiry found that these varied programmes lacked coordination, and called for "clear and understandable guidelines governing the applications to which AI may be put, between businesses, public organisations and individual consumers."

"We do think the government is going in the right direction," said Lord Clement-Jones.

"We just need greater coordination of the various bodies that they've suggested, and we haven't suggested any new bodies ourselves. Indeed, we haven't suggested any new regulators. A lot of this is about getting the right impetus and the right level of coordination."