Imagine you're hurtling down a track at high speed in a runaway trolley (a highly probable scenario, of course). A little way down the track, 50 innocent people are tied down, screaming and struggling in terror. A lever will switch the trolley onto an alternate track. That route is not unimpeded either, but only one person is strapped there. What do you do?

A version of this dilemma constitutes the 'trolley problem' and has preoccupied generations of philosophers and thinkers. Although for most it might seem obvious to divert the trolley and careen into one person rather than 50, what if there were a large man on a bridge above the tracks - would you push him onto the rails in the hope of derailing the trolley?


Aside from unearthing interesting quirks of moral reasoning, this problem has acted as a lens through which to examine sociological phenomena and societal norms. Different elements of the problem can be tweaked to gauge the calibration of society's moral compass. For example, would you kill ten murderers over one nun? What about ten homeless people over one successful businessman?

While this problem can appear esoteric, it sometimes becomes a real-life dilemma, when drivers involved in accidents are forced to make a split-second judgement call about whether to swerve or crash into an oncoming object. It's also a problem that self-driving cars may well have to deal with at some point down the line.

Although they've been hailed as capable of reducing road accidents by up to 90 percent, there will still inevitably be moments of malfunction, human error or technological failure that could result in this kind of problem arising. The question is: can a decision that is currently left to the momentary gut instinct of humans be codified into a series of complex algorithms? And more importantly, should it?

Earlier this year, Techworld covered the research of MIT's Media Lab group, Scalable Cooperation, which was attempting to gauge global preferences across different iterations of the trolley problem, and hence how people think self-driving cars should behave in the real world. Still available online, the Moral Machine interactive quiz presents the viewer with two courses of action in each scenario, varying key dimensions to tap into the moral codes of different participants.

One of the diagrams shown to participants in the Moral Machine (Image credit: Moral Machine)

Some months later, the researchers have analysed 39.61 million decisions made by 2.3 million participants worldwide - 130 of the countries sampled contributed more than 100 participants each - in a study published in Nature.

The study discovered that there were some important geographical delineations when it came to predicting preferences. Three broadly homogeneous groups emerged: the east (far eastern countries such as China, along with many Islamic countries), the west (countries including the US and Christian European countries) and the south (including most of South America).

These areas differed in their preferences across the nine dimensions interrogated by the study: sparing humans (over animals); preferring inaction (over swerving the car); sparing more people; sparing women; sparing the young; sparing the higher status (e.g. businesspeople over the homeless); sparing the lawful; sparing the fit; and sparing passengers (over pedestrians).

The cultural differences between preferences can be vast. As the infographic shows, Argentina, compared to China (the country it was most dissimilar to), had a much stronger preference for sparing pedestrians and younger people. Chinese respondents showed greater ambivalence about saving more people rather than fewer, a greater preference for saving 'lawful' people (i.e. those not jaywalking) and a slightly weaker preference for prioritising the lives of women over men.

Never the twain shall meet

East is East, and West is West, and never the twain shall meet. This quote, taken from "The Ballad of East and West", a poem by Rudyard Kipling, has long signified the apparently insurmountable divide between eastern and western cultures. But within the Moral Machine, the most potent cultural difference was the contrast between western, individualistic cultures, and collectivist, eastern cultures. The former has been linked to the prioritisation of individual success and emotions such as pride, whereas the latter is linked to the prioritisation of group success and happiness.

This dichotomy is manifested in various psychological phenomena. In studies where small children are shown a picture, for example, those from western backgrounds tend to focus on the 'characters' in the foreground, while children from eastern cultures are more likely to scan the background for information about context.

In terms of the Moral Machine simulation, this difference manifested as a preference for saving older people in eastern cultures, and for saving younger people in western cultures. This is because older people generally command more respect in eastern cultures, due to their accumulated contribution to the community and society, whereas younger people are accorded a lower status. In individualistic countries like the US, however, this is reversed: older people are viewed as having 'had their time' enjoying life on the planet, while young people are still owed the chance to experience life.

Inequality breeds contempt

Aside from individualism and collectivism, another strong predictor of preferences by geography was the level of inequality in each country. The Gini index rates countries by their degree of inequality: Scandinavian countries rank among the most equal, while countries rife with corruption and gender inequality sit at the other end of the scale.

In the Moral Machine, the countries experiencing greater levels of inequality showed a greater preference for sparing the high status, and sometimes also the physically fit. For example, Norway, where levels of inequality are low, showed a preference for sparing the high status far below the global average, while Angola, where corruption is rife, showed the second-highest such preference of all the countries polled, far above the global average.

Central American countries like Guatemala and Nicaragua also greatly preferred saving the physically fit over the unfit, while showing little regard for lawfulness.

Aside from offering a fascinating glimpse into how cultural factors shape morality and our perceptions of the varying 'value' of other human beings, this also presents a potential predicament for the development of autonomous vehicles, and could hint at a future lack of global consensus on an ethical code.

Reality bites 

Daimler AG executives may have earned ire with the - hastily retracted - assertion that Mercedes-Benz autonomous vehicles would 'protect their passengers at all costs'. But how far away are we, really, from autonomous vehicles having to make these kinds of decisions in the wild?

For Alan Winfield, one of the few researchers working in the area of robot ethics (to be clear, the ethical actions and understanding of robots themselves, rather than of the humans programming them), this eventuality is so far off as to render research such as this farcical.

"It's obviously a solid piece of social sciences research, but the basic problem of the study is the working premise that a driverless car could in fact sense that it's facing an ethical dilemma and then make a decision," says the Professor of Robot Ethics at the University of West England, "that, I'm afraid, is complete fantasy."

Another researcher in this area, Bernd Stahl, concurs. He is Director of the Centre for Computing and Social Responsibility at De Montfort University, and currently a lead on the SHERPA project, which is investigating how to develop a code of ethics for various different technologies in Europe.

"We might be in a similar situation where we have to make a decision to do A or B, but the trolley problem seems to assume that we actually know the outcome, that we have complete information about what the situation is and that we can realistically assess the consequences, and none of those are true," he tells Techworld. "I think the trolley problem is an interesting philosophical problem, but it's got absolutely nothing to do with the practise of building AI."

Some would argue that decisions such as the trolley problem do not rely on a car having moral 'awareness' as such, simply on encoding a set of rules defining the preferred course of action. However, even in terms of purely technological considerations, Winfield warns we are decades away from having a car that could accurately recognise attributes such as 'young' or 'old', 'wealthy' or 'poor'.
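As a purely illustrative sketch, not any manufacturer's or researcher's actual logic, such a hard-coded preference ordering might look something like the following. The Outcome fields, the weights and the two scenarios are all hypothetical stand-ins for attributes a car would somehow have to perceive.

```python
# A purely illustrative sketch: the fields and weights below are hypothetical,
# not any real vehicle's rules. It shows how a 'decision' can be nothing more
# than an explicitly coded ranking of outcomes, with no moral awareness at all.
from dataclasses import dataclass

@dataclass
class Outcome:
    people_harmed: int       # how many people this manoeuvre would hit
    passengers_harmed: int   # how many of those are the car's own passengers
    unlawful_crossing: bool  # were the pedestrians crossing against the light?

def harm_score(o: Outcome) -> float:
    """Lower is 'preferred': a crude, human-authored ranking of outcomes."""
    score = 10.0 * o.people_harmed
    score += 2.0 * o.passengers_harmed   # mild bias towards sparing pedestrians
    if o.unlawful_crossing:
        score -= 1.0                     # mild discount for jaywalking victims
    return score

def choose(options: list[Outcome]) -> Outcome:
    # The decision is just picking the pre-ranked outcome with the lowest score.
    return min(options, key=harm_score)

swerve = Outcome(people_harmed=1, passengers_harmed=0, unlawful_crossing=False)
stay_on_course = Outcome(people_harmed=2, passengers_harmed=0, unlawful_crossing=True)
print(choose([swerve, stay_on_course]))  # prints the less 'costly' option: swerve
```

Even in this toy form, every 'ethical' judgement is baked into the weights by a human, and the hard part is not the arithmetic but filling in the Outcome fields reliably in the first place, which is exactly where the sensing problems below come in.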

He says that right now, cars would struggle even to work out whether a large 'blob' was a person or several. "What if the three people over on one side are in a queue which means that you really only see one person? What if they're in a huddle?" he says. "The problem is that I think there is a huge misunderstanding about how advanced robot sensing is."

But even if a car was advanced enough technically to discern these qualities, should it? "I'm not sure that I want robots to make ethical choices because what you're doing is that you're delegating moral responsibility to a machine," says Winfield. 

Although it's still early days, the answer from one country is also a resounding 'no'.

In 2017, Germany announced ethical guidelines for self-driving vehicles stating that cars should only be governed by the impetus to cause as little harm as possible, and that any discrimination on the basis of factors such as race, gender, age or disability would be illegal. In other words, all human life is equal. However, it's unclear if this extends to unlawful victims, given German respondents' strong preference in the Moral Machine for mowing down jaywalkers over lawful pedestrians.

An emerging area that could aid the development of these capabilities is imitation learning, which involves training algorithms on sets of videos depicting how a car should behave in an ideal situation.

"It's still very early stages, it's all new in the research world, but you can show the machine, you can show that A.I. what is the correct thing to do in this case," Head of AI Research at Cambridge Consultants, Dominic Kelly, tells Techworld. Although not accurate enough yet and still associated with inherent risk, this could be instrumental in teaching autonomous vehicles ethics - such as to crash into a wall and self-destruct rather than knock into two citizens on the road in front. 

But could a lack of consensus between countries on the ethical standards of self-driving vehicles prove problematic? 

"If we would like AI to be reflective of our social mores, it stands to reason that ethical codes being developed will have to maintain some level of flexibility to vary by country," says tech ethicist, David Polgar. "There are baseline universal considerations, however, that should be incorporated throughout the world that echoes our shared values and also basic safety." However, he says that there could be an issue with certain 'ethical' technologies being able to scale up globally. 

"A major conflict in the coming years will be the right of consumers to choose products that align with their individual moral code versus restrictions placed on products that may be deemed against the good of humanity," says Polgar. 

A potential concern for companies is whether or not they would be able to sell their vehicles internationally if their ethical codes contradicted those of another country.

All of these issues breed yet more questions. For Stahl, one of the most interesting is the concept of culpability.

"Will we ever deem a piece of technology to be responsible?" he ponders, saying at the moment it's unlikely, but this could change as these vehicles gain more autonomy. "What if our autonomous vehicle does something that was unpredicted, who's going to be held liable? How will this play out in the courts of law? What will be the future legal environments where these sorts of decisions are made?" For Stahl, and for the rest of society, questions like these are becoming ever more compelling.