Facebook, Twitter, Microsoft and YouTube have created a joint effort to flag and trace online terrorism propaganda, but only Microsoft will explain its definition of terrorism.

Back in May, EU Justice Commissioner Vera Jourova hauled the four over the coals over their perceived slowness to react to hate speech online.

[Image: ISIS flag, via Wikipedia]

At the time, rather than drafting new European legislation, the companies entered into a “voluntary agreement” to ensure that content promoting hate was addressed within 24 hours. But officials said that, in practice, the businesses “only reviewed 40 percent of recorded cases in less than 24 hours”.

This week they announced a shared database to collaborate on the removal of the “most extreme and egregious terrorist images and videos”.
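
None of the companies has published technical detail, but industry hash-sharing schemes of this kind generally work by exchanging digital fingerprints (hashes) of flagged files rather than the files themselves, letting each platform spot re-uploads without keeping a central archive of the material. The sketch below illustrates the idea in Python; every name is hypothetical, and the plain SHA-256 digest is a simplifying assumption, since production systems are understood to rely on perceptual hashes (such as Microsoft’s PhotoDNA) that still match resized or re-encoded copies.

```python
import hashlib
from pathlib import Path

# Hypothetical in-memory stand-in for the shared industry database.
# In practice this would be a service queried by each participating
# company, holding fingerprints rather than the media itself.
shared_hash_db: set = set()

def fingerprint(path: Path) -> str:
    """Return a SHA-256 hex digest of a file's bytes.

    Simplification: a cryptographic hash only matches byte-identical
    copies; real systems are reported to use perceptual hashes that
    also survive resizing and re-encoding.
    """
    return hashlib.sha256(path.read_bytes()).hexdigest()

def flag_content(path: Path) -> None:
    """Add a file's fingerprint to the database after human review."""
    shared_hash_db.add(fingerprint(path))

def is_known_content(path: Path) -> bool:
    """Check whether an upload matches a previously flagged file."""
    return fingerprint(path) in shared_hash_db
```

A match in such a database would only tell a platform that another participant flagged the file; each company would still apply its own policies in deciding whether to remove it, which is where the definitional questions begin.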

But what does this mean – precisely – when evocative words like ‘terrorist’ are so open to interpretation?

All but Microsoft were unwilling to comment beyond the original statement.

To its credit, Microsoft acknowledges the difficult business of defining terror: “There is no universally accepted definition of terrorist content. For purposes of our services, we will consider terrorist content to be material posted by or in support of organizations included on the Consolidated United Nations Security Council Sanctions List that depicts graphic violence, encourages violent action, endorses a terrorist organization or its acts, or encourages people to join such groups. The U.N. Sanctions List includes a list of groups that the U.N. Security Council considers to be terrorist organizations.”
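
Microsoft’s test has a clear two-part structure: the material must be posted by, or in support of, a group on the UN list, and it must show at least one of four behaviours. A rough sketch of that logic in Python follows; the field names and list entries are invented for illustration, and the real process is human policy review, not an automated check.

```python
from dataclasses import dataclass

# Placeholder entries; the real source would be the Consolidated
# United Nations Security Council Sanctions List.
UN_SANCTIONED_GROUPS = {"example listed group"}

@dataclass
class Post:
    # Group the material is posted by or in support of.
    supported_group: str
    depicts_graphic_violence: bool
    encourages_violent_action: bool
    endorses_group_or_acts: bool
    encourages_joining: bool

def is_terrorist_content(post: Post) -> bool:
    """Apply the two-part test in Microsoft's statement: a listed
    organisation AND at least one of the four listed behaviours."""
    listed = post.supported_group in UN_SANCTIONED_GROUPS
    behaviour = (post.depicts_graphic_violence
                 or post.encourages_violent_action
                 or post.endorses_group_or_acts
                 or post.encourages_joining)
    return listed and behaviour
```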

Microsoft goes into some specifics about the actions it will take in a comprehensive blog post.

The statement from Twitter is here and Facebook’s is here.

Listen: The UK Tech Weekly Podcast discusses internet companies defining terrorism.

Techworld asked all four businesses if they could provide some transparency into the content they were monitoring: how fixed are their definitions of terror? Do they extend beyond violent, radical Islam into Basque separatism or Irish republicanism? Will the database extend to far-left and far-right ideologies?

All refused to delve any further into the question, so aside from Microsoft, there is zero transparency about exactly what sort of content will end up in the shared database. It should be noted that Twitter has recently become more willing to shut down racist and gendered abuse online, following years of criticism over perceived inaction.

But Techworld understands that each company has its own policies, practices and definitions relating to terrorism.

Facebook’s community standards, for example, state: “We don’t allow any organisations that are engaged in the following to have a presence on Facebook: terrorist activity, or organised criminal activity. We also remove content that expresses support for groups that are involved in the violent or criminal behaviour mentioned above. Supporting or praising leaders of those same organisations, or condoning their violent activities, is not allowed.”

Techworld understands that Facebook is focusing on the most extreme examples of terrorist propaganda, such as recruitment videos from ISIS.

The community standards page is unclear about which legal definition it is referring to. Would expressing solidarity with the Kurdish separatist PKK, which is currently (somewhat contentiously) listed by the US Department of State as a terrorist organisation, be enough to have a Facebook user’s post added to the shared database?

Although the joint effort is difficult to argue with, it also raises questions about the power these organisations hold in controlling and removing information, and a debate will be necessary to ensure the companies do not overreach, as they have promised, for now.