Facebook is admitting that it can do a better job in removing certain types of hate speech on the site, and is introducing some new policies to improve its track record.
The social network's current Statement of Rights and Responsibilities prohibits "hate speech," but in recent days the company's systems for identifying and removing such speech have not worked as effectively as they should, particularly around gender-based hate, Marne Levine, Facebook's vice president of global public policy, said in a post on Tuesday.
The changes also come amid concerns voiced by groups like Women, Action and the Media and the Everyday Sexism Project about hateful and violent content targeting women appearing on Facebook.
In some cases, for instance, content has not been removed quickly enough, or content that should have been removed was evaluated using outdated criteria, Levine said.
In essence, the guidelines the social network has been using to respond to reports of violations have failed to capture all the content that violates the company's standards, Facebook said.
"We need to do better -- and we will," Levine said.
As part of the changes, which take effect immediately, Facebook will solicit feedback from legal experts and others, including representatives of women's groups, to update the guidelines it uses to evaluate hate speech reports. The same experts will also help update the training for the teams that review and evaluate reports of hateful or harmful content, Facebook said.
One of the more interesting changes is aimed at holding accountable people who post content that does not qualify as actionable hate speech but is still cruel and insensitive. Facebook began testing this new requirement a few months ago, by requiring any person who posts content containing "cruel or insensitive humor" to include his or her authentic identity for the content to remain on the site, Facebook said.
The idea is that when people stand by their content, other users can hold them accountable and object to it directly. Facebook will continue to develop this policy based on its results, which so far "indicate that it is helping create a better environment for Facebook users," the company said.
Facebook, whose corporate mantra is "to make the world more open and connected," acknowledges that it walks a fine line between facilitating free speech and keeping its platform safe and respectful.
The company has also struggled to distinguish hate speech from comparatively less offensive content like distasteful humor. "In these cases, we work to apply fair, thoughtful and scalable policies," Levine said.
As the site has grown to attract more than 1 billion users, "we're constantly re-evaluating our processes and policies," the company said.