Data ethics has exploded into mainstream consciousness in recent weeks, with media coverage of adverts appearing alongside terrorist content on YouTube, Cambridge Analytica using Facebook posts to personalise election campaigning, and the endless stream of scandals engulfing taxi-hailing app Uber.

Principles and rules are struggling to keep pace with technological development. With ethical notions of consent and privacy constantly stretched by the latest advances in tech, a new framework is needed to establish criteria for protecting data. A panel of experts assembled by techUK discussed how to ensure principled behaviour.

"You need standards that give you certainty to innovate," says Royal Statistical Society Executive Director Hetan Shah. "Without public trust you could lose your license to operate."

The NHS lost that licence in the care.data debacle. Despite widespread support for the concept of using NHS records to improve the health service, the mishandling of data protection caused an outcry that destroyed the scheme.

Public attitudes to data use

Recent scandals around data use have left public trust at a low ebb. The Royal Society recently asked members of the public how much they trust an institution, and how much they would trust that institution with their data.

"The second answer was always lower than the first," says Shah "You'd never trust an institution with your data more than you'd trust it in general. There's a trust deficit, and that's a societal problem."

Claire Craig, the director of science policy at the Royal Society, has been involved in a study asking British citizens of different socio-economic backgrounds their views on the use of data. The qualitative research revealed how the public weighs the risks and benefits of data use.

"The main message is the importance of context," says Craig. "Their basic criteria for judging risk and benefit of a particular application very much start with a perceived motivation.

"They really care why a new technology has been introduced, why a new application, what data, and the purpose. And they care about the beneficiaries. In particular they’re more supportive if they see it helping them, people like them, groups like theirs, and society more generally."

In short, there need to be direct benefits to consumers. Profit was not a problem so long as the application also helped people.

Any autonomous decision-making driven by data would be assessed on the perceived level of risk and culpability. Amazon's product suggestions, for example, would be viewed with far less concern than self-driving cars.

Participants were supportive of applications that enabled more human contact, such as those that free up time to spend with friends and family.

They feared, however, that over-reliance on technology could lead to people permanently losing skills they have held for generations.

Technology needs to be proven to augment humanity rather than undermine it. Helping professionals save time for more important work would be widely supported, but automation that could replace them would unsurprisingly make them wary.

"There were big concerns about the being replaced [and] the future of work," says Craig. "Where's the voice for potential new jobs?"

These worries extended to existential fears: technology could set us on an inevitable path towards depersonalisation, challenging the essence of what it means to be human and of value if a computer can do everything better than we can. And if an algorithm had the authority to determine your choices, restrictions on freedom in areas such as career, education and financial support seemed an inevitable consequence.

Building trust

Positive uses of data rarely receive the same exposure as the negative ones. For every Las Vegas casino that uses data science to estimate your spending threshold as you walk in, and offers you a drink when you reach it to tempt you to break it, there's a Streams app improving health outcomes and saving nurses hours each day by scanning patient data to predict the risk of acute kidney injury.

Trust would be boosted by publicising the positives, such as TfL's use of open data to predict when a bus is coming, or the Food Standards Agency (FSA) monitoring social media to track the spread of norovirus.
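The bus-arrival example also shows why open data matters: anyone can build on it. As a rough illustration, the sketch below polls a public arrivals feed of the kind TfL publishes; the endpoint shape and field names are assumptions based on TfL's documented Unified API, and should be checked against the current developer documentation before use.

```python
# A minimal sketch of consuming open bus-arrival data. It assumes TfL's
# public Unified API exposes a StopPoint arrivals endpoint returning JSON
# records with lineName, destinationName and timeToStation (seconds);
# verify field names against api.tfl.gov.uk before relying on them.
import requests

def next_buses(stop_point_id: str) -> list[str]:
    url = f"https://api.tfl.gov.uk/StopPoint/{stop_point_id}/Arrivals"
    response = requests.get(url, timeout=10)
    response.raise_for_status()
    # Sort predicted arrivals soonest-first and format for display.
    arrivals = sorted(response.json(), key=lambda a: a["timeToStation"])
    return [
        f'{a["lineName"]} to {a["destinationName"]} '
        f'in {a["timeToStation"] // 60} min'
        for a in arrivals
    ]
```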

A recent Frontier Economics report predicted that AI could add an additional US$814 billion to the UK economy by 2035, raising the annual growth rate from 2.5 percent to 3.9 percent.

"The hard numbers are very impressive, but still sell short the way in which these data-driven technologies like data analytics, like artificial intelligence, really can positively transform every aspect of our economy, every aspect of our lives, and really help us build richer, healthier, cleaner communities," says Microsoft UK Government Affairs Manager Owen Larter.

Algorithmic pattern recognition is already being used in the US healthcare system to address preventable errors in hospitals, the third largest cause of death after cancer and heart disease. Such systems can flag deviations from established clinical best practice to clinicians and prevent these errors from causing significant harm.
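To make "flagging deviations from best practice" concrete, here is a deliberately simplified sketch. The drug names, dose ranges and record format are hypothetical; real clinical systems are far more sophisticated and typically learn expected patterns from historical data rather than hard-coding rules.

```python
# A toy illustration of rule-based anomaly flagging against clinical
# guidelines. All drug names, dose ranges and the record format are
# hypothetical, invented for this example.

# Hypothetical guideline: acceptable dose range in mg per administration.
SAFE_DOSE_MG = {
    "drug_a": (50, 200),
    "drug_b": (5, 20),
}

def flag_anomalies(orders: list[dict]) -> list[str]:
    """Return human-readable warnings for orders outside guideline ranges."""
    warnings = []
    for order in orders:
        bounds = SAFE_DOSE_MG.get(order["drug"])
        if bounds is None:
            warnings.append(f"{order['patient']}: no guideline for {order['drug']}")
        elif not bounds[0] <= order["dose_mg"] <= bounds[1]:
            warnings.append(
                f"{order['patient']}: {order['drug']} dose {order['dose_mg']}mg "
                f"outside guideline range {bounds[0]}-{bounds[1]}mg"
            )
    return warnings

print(flag_anomalies([
    {"patient": "P001", "drug": "drug_a", "dose_mg": 500},  # flagged: too high
    {"patient": "P002", "drug": "drug_b", "dose_mg": 10},   # within range
]))
```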

The UK today has an ethical imperative to answer challenges such as the healthcare costs of an ageing population, which will mean the NHS of today is no longer affordable tomorrow.

The data troves we have built up can be used to save lives, but only if the data is released can its potential be unleashed, argues techUK deputy CEO Antony Walker.

"I would argue that we have an ethical responsibility to our children and grandchildren if we want them to have a free health service," says Walker. "The only way we can do that is through the use of data."

Attributing accountability can be hard when it comes to code, but making the processes auditable, so that how a system works can be explained, would help build public trust. There also needs to be a response to concerns raised by the public, Craig argues.

"Transparency is necessary but by no means efficient," he says. "Knowing what is happening is only the starting point."

Accountability, responsibility and liability form a complicated triangle. The last is where it really hurts a company, and where transgressions need to be corrected.

In addition to the aforementioned issues around privacy, governance and consent, public fears persist around data equity and bias. Algorithms are often given more credence than opinions, despite being a product of human sentiments and prejudices.

Data science remains a relatively new discipline, so its practitioners need thorough training in data ethics and standards. The outcomes of the algorithms they design should be audited to ensure transparency and safety.

"It's hard to peer into the black box of algorithms," says Shah.

He wants an independent data ethics council set up, and would rather give existing regulators additional powers than establish new ones.

Into the future

Luciano Floridi, Professor of Philosophy and Ethics of Information at the University of Oxford, has been analysing the potential future of data and the ethical implications that will emerge in the coming years.

He describes two possible outlooks for how AI will develop: "the swimming pool model", in which the whole world is filled to the brim with it, and his personal prediction, "the pothole model", where drips fall on everything but only fill certain holes to the top.

"In terms of linking the holes, it won't be AI doing all the work, it'll be humans, it'll be us," he says. "And there will be a lot of ethical issues we'll have to understand about how we work as interfaces between an AI app, and another AI app, and another system that needs to be linked and so on. How you link all this is entirely unchartered territory."

New technology always offers new opportunities for crime, but private companies, public bodies and law enforcement agencies can be slow to catch on. Europol’s 2016 Internet Organised Crime Threat Assessment (IOCTA) has a section for all manner of established cyber crimes, but barely a mention of artificial intelligence.

"There’s a lot of talk about using machine learning, AI to fight organised crime. Is anyone talking about how organised crime is going to use the same technology?” he asks."If you find something in an operating system in some kind that is a vulnerability, can you imagine what that is going to look like once it's a vulnerability that is hackable within an AI system. The only people I know that are talking about this is the car industry."

Algorithms are promoted for their potential to strengthen computer security, but they have a similar capacity to overcome it. Attackers will no longer have to hack every individual car's computer when automation has them all running on the same system.

There are some positive developments inside the private sector, however. Companies increasingly see data ethics as an asset that’s good for business, particularly once they move from a startup to a large enterprise.

Technology can change ethics: the contraceptive pill provided the protection that triggered a sexual revolution, with public attitudes following afterwards. If data is to fulfil its transformative potential, ethical rules must be established, along with a framework to enforce them. It's up to government and industry to put a new system in place.
