Social media has transformed the way we consume information and engage with politics. Allegations that the Russian state used social media platforms to interfere in elections have raised concern that they could be a threat to democracy, by spreading information designed not to inform but to influence.

"We're up against a very complex information pollution system where we are only just starting to understand the breadth and therefore we can only just start to think about remedies," professor of digital media at Manchester School of Art, Manchester Metropolitan University Farida Vis told a House of Lords Committee investigating the issue on Tuesday. "But in terms of what is the threat to liberal democracies, I think we're at crisis point."

Social media companies provide the perfect platform to exploit the weaknesses in our political systems, as their business models rely on manipulating human emotions.

"Their goal is to maximise the amount of time and attention people are spending with them," James Williams, a doctoral candidate researching design ethics at the Oxford Internet Institute, told the committee.

"So in order to do that, because there's so much competition for people's attention, what has to happen is resorting to exploitation of these psychological biases, appealing to the lower and lower parts of ourselves."

A recent report on information disorder by the Council of Europe warns that the nature of social media content creation and amplification has created a new form of information pollution at a global scale.

The authors highlight three separate components of information disorder: misinformation, false information that is shared without intent to cause harm; disinformation, false information that is spread with malicious intent; and "malinformation", genuine information disseminated with the desire to cause damage, often by leaking private data to the public.

What are the barriers to regulation?

Enhanced regulation is the obvious solution, but the nature of the problem makes it complicated to define and enforce. Social media platforms operate internationally, with complex systems that can be controlled by unidentifiable accounts, and they have outgrown the confines of a traditional technology company to operate across an ambiguous variety of sectors.

They've moved away from their initial design towards resembling media or utility companies, as Zuckerberg himself once described Facebook. Both industries have their own regulatory regimes, but social media companies don't fit neatly into the definitions of either category.

Regulating the content adds another layer of complexity. Social media companies argue that it's impossible to check everything that's published on their platforms. If they could, political advertising would be relatively straightforward to identify, but many posts don't take the form of official paid advertisements.

"We are not dealing with paid-for content in the way that perhaps some of the debates have been framed," says Vis. "Certainly someone is being paid, but it's not an ad; it is a person that has been created to shape political discourse."

As is often the case with emerging technologies, regulation moves too slowly to keep up with the rate of digital change. The social media companies predictably prefer to address this through self-regulation, but such a system is unlikely to gain the public's trust.

Vis nonetheless believes that effective accountability is more likely to be found through collaboration between governments and social media companies on what can be done at the platform level, backed by education and stronger legal frameworks in external areas such as advertising.

"We need to find a middle ground, a third space for having these discussions," she says.

"Because I think where a lot tends to break down is finger pointing and culpability - that this is the fault of the platforms, as if this was somehow their intention or as if they have easy and ready control over what happens on their platforms.

"I think those are simplistic understandings of the breadth of the problem, and I also I think they don't lead necessarily to productive dialogue."

"We need to find a space where everybody can have a buy-in and it becomes something that everybody can get behind. And for some actors that might be purely financial, but if we can frame this in such a way that there is a financial incentive to do better, my personal opinion is that that may work a great deal better with these companies than a punitive measure."

How bots thrive on social media

This collaborative approach has resulted in all the social media companies implementing their own initiatives, from flagging "fake news" to educational outreach, but their efforts appear to have had little effect.

Their platforms continue to carry an abundance of false information shared for potentially nefarious purposes, showing that they still lack the ability to control the content they host.

Their desire to act is often questioned, but much of the problem lies beyond their power.

The US congressional hearings into election meddling through social media have focused largely on bots. These range from simple automation software that spouts messages to more sophisticated bots that respond to specific comments with relevant arguments.
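
To make that distinction concrete, here is a minimal illustrative sketch in Python of the two archetypes described in the hearings: a simple bot that blindly spouts canned messages on a timer, and a reply bot that scans comments for keywords and fires back a scripted argument. The post function, message lists and keyword rules are hypothetical placeholders rather than any platform's real API, and genuinely sophisticated bots use far more advanced language generation than keyword matching.

```python
# Minimal sketch of the two bot archetypes described above.
# post() and the message/keyword tables are hypothetical placeholders,
# not any real platform's API.
import random
import time

CANNED_MESSAGES = [
    "Divisive talking point #1",
    "Divisive talking point #2",
    "Divisive talking point #3",
]

KEYWORD_REPLIES = {
    "election": "Scripted argument about the election",
    "immigration": "Scripted argument about immigration",
}

def post(message: str) -> None:
    """Stand-in for a platform's 'create post' call."""
    print(f"[posted] {message}")

def spouting_bot(iterations: int = 3) -> None:
    """Archetype 1: blindly posts canned messages on a timer."""
    for _ in range(iterations):
        post(random.choice(CANNED_MESSAGES))
        time.sleep(1)  # real bot networks randomise timing to evade detection

def reply_bot(comments: list[str]) -> None:
    """Archetype 2: scans incoming comments, replies when a keyword matches."""
    for comment in comments:
        for keyword, reply in KEYWORD_REPLIES.items():
            if keyword in comment.lower():
                post(reply)
                break  # one reply per comment

spouting_bot()
reply_bot(["What do you think about the election?", "Nice weather today."])
```

Even at this toy level the asymmetry is visible: the spouting bot needs no input at all, and the reply bot needs only a crude trigger to appear responsive, which is part of why bot networks are cheap to spin up and hard to distinguish from genuine users at scale.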

Bot networks appear from nowhere and disappear again just as quickly. Tracing where such ephemeral content comes from is hard, particularly when it's produced by highly sophisticated actors with state backing.

As it's difficult to identify, it's also difficult to remove.

How the messages are created, how they function and the influence they exert are only beginning to be understood. Even when it's possible to close specific accounts, doing so reveals the detection technology developed to address the problem, giving the perpetrators a chance to adapt.

There are also limitations in defining false information. In the wake of the March terrorist attack in Westminster, the SouthLoneStar Twitter account accused a Muslim woman photographed walking along Westminster Bridge of ignoring the victims. The claim was unfounded, but the photograph was authentic and emotive and tapped into national sentiment, which helped the image quickly go viral.

Twitter later concluded that the SouthLoneStar account had been created by the Internet Research Agency, a secretive company based in St Petersburg that is alleged to operate on behalf of the Russian government.

This complex web is difficult to unravel. In the case of SouthLoneStar, Twitter apparently untangled the source of the story, but as its conclusions were based on an internal investigation and the analysis of private data, independent verification is difficult to achieve. Both the government and public are thus forced to trust in the investigations conducted by social media companies.

If their conclusions are trusted, they lead to further complications around how to react. Issue-based messaging, such as that disseminated through the SouthLoneStar account, is harder to recognise than traditional political campaigning.

"Advertising can be somehow a misleading term. What we're dealing with is messages that are designed to persuade," says Vis.

"I know that, of course, that is a classic definition of advertising, but when we think about advertising we still think about messages that we can recognise as advertising."

Does the public have an appetite for truth?

Another way to reduce the influence of false information is by improving critical analysis. Professor Vis backs overhauling the national curriculum to teach citizens how to deal with information online, but acknowledges that critical thinking may be losing its appeal.

Confirmation bias, echo chambers, filter bubbles and shrinking attention spans are changing the ways in which we both consume and interpret information.

In a recent Ofcom survey, only 20 percent of respondents said they always check other sources to get a range of views on a story. Almost the same proportion said they rarely (19 percent) or never (18 percent) do this.

"Forty percent of people have no interest whatsoever in contextualising what they're consuming," said Vis. "That's not a technology problem."

Regardless of the public's appetite, the changing nature of its access to news makes information difficult to verify.

"I think the scale, the speed, the way in which this information is packaged is entirely different," explained Vis.

"What we are also faced with now is that there is a breaking of the connection between the content and the source, whereas previously it was much easier to say this information comes from that source and therefore I can form my opinion about that as a package of content.

"If we are now dealing with content that looks like it comes from a reputable source but isn't because these platforms essentially focus on the spreading of content and the source can be divorced from that, then I think we are dealing with something very different because we've lost that connection."

The platforms could improve the public's critical thinking by being more transparent about their persuasive design goals, but they are reluctant to reveal anything that could harm their image and brand value. They are accountable to their shareholders rather than to our democracies.

As Williams argued in his Nine Dots Prize-winning essay Stand Out of Our Light: Freedom and Resistance in the Attention Economy, digital technologies are "designed to exploit our psychological vulnerabilities in order to direct us toward goals that may or may not align with our own."

They thereby "frustrate and even erode the human will at individual and collective levels, undermining the very assumptions of democracy".

He believes that the changing priorities of the public represent a greater threat to democracy than the spread of false information.

"I think that the risk of this is not that there will be fake news, but that people won't care if it's fake or not."
