Why Facebook’s harassment guidelines fail to protect women

It’s become such a common social media hassle that it barely merits mention: A woman reports threatening, misogynist or just plain hateful abuse, only to be told (by someone—an automated reply? A language-processing bot? A human moderator?) that it doesn’t meet some standard for “real” harassment. Milo Yiannopoulos, for example, effectively convinced video game nerds that Nazism was a hip new lifestyle, and he didn’t get kicked off Twitter until he went after Ghostbusters’ Leslie Jones.

You don’t have to look very hard to find Facebook groups that promote hate speech and yet somehow fail to violate any official “terms and conditions.” And thanks to a leak last week, we now know for sure what many of us have suspected: it’s not a glitch in the system.


Ghostbusters’ Leslie Jones was attacked with a series of racist tweets 

Last week, the Guardian obtained and published Facebook’s internal abuse guidelines; the documents paint a grim picture—harassment and intimidation tactics are features of our online existence. They haven’t been overlooked—they’ve been accepted.

Let’s start with the death threats: “People commonly express disdain or disagreement by threatening or calling for violence,” the guidelines’ section on “credible violence” tells us, but it insists these threats are made “in generally facetious or unserious ways” and therefore don’t warrant removal from the platform. Facebook will, in fact, only act on a threat if it’s either made toward a “vulnerable person”—heads of state qualify; so do journalists—or includes specific details like time or place. In the examples given by Facebook, “I will kill the Taiwan head of state in 48 hour” or “I’ll destroy the Facebook Dublin office” would be credible; “I hope someone kills you” or “unless you stop bitching I’ll have to cut your tongue out” would not.

Disturbingly, Facebook terms the latter threat an “aspirational/conditional” statement. These have to be backed up with some kind of substantiating detail, even when “vulnerable persons” are being targeted; on Facebook, it is more objectionable to write “I will kill you” than it is to write “I will kill you unless you do what I say,” even though that “unless” is often what makes a threat effective. Evidently, attempts to intimidate a target into silence are acceptable: Both “stop bitching” and “little girl needs to keep to herself before daddy breaks her face” are included on Facebook’s list of “aspirational” comments that moderators can leave in place.

Facebook’s definition of hate speech

By now, you may have noticed a pattern. Facebook technically prohibits hate speech, but defines it as narrowly as possible. The documents note, for example, that Facebook must be more serious about threats directed at “vulnerable groups.” But its definition of vulnerable groups is…limited. So limited that there are only four: “homeless people,” “foreigners,” and “Zionists,” along with, in the Philippines, people who sell or use drugs. Women—or black people, or trans people, or queer people more generally, who routinely face abuse online—don’t make the cut. Per the leaked guidelines, “We should put all foreigners into gas chambers” is flagged (as it should be), but “To snap a bitch’s neck, make sure to apply all your pressure to the middle of her throat” is totally fine.

I should lay my cards on the table here: I’ve had to report harassment to Facebook twice, which is a low number for a woman in my profession. In one instance, someone who didn’t like an article I wrote had set up a fake profile under my name; it hosted scintillating content such as pictures of dicks drawn on my author photo. In the second, more serious case, a stalker who’d been fixated on me for over a year had found my Facebook page and begun leaving comments, letting me know he was still watching me. He’d left a barrage of insults on nearly every social media presence I had and sent multiple e-mails; he’d threatened my husband and contacted friends to harass them for supporting me; his social media profiles were filled with what seemed to be hundreds of posts about me, sometimes going up every few hours but never more than a few days apart. Blocking him was not enough, because he just set up new accounts and sent new messages.


Mark Zuckerberg has spoken out on the Facebook comments policy time and again

In both cases, Facebook’s initial response was that the reported accounts did not violate its standards. In the first case, the profile did not meet Facebook’s standards for “impersonation”; I was able to get in touch with a staff member, and the page was eventually deleted. In the second, “stalking” wasn’t even on Facebook’s list of reporting options. According to Facebook’s rubric, there was no box to check or form to fill out that would adequately explain the situation. I couldn’t provide the documentation that might show a moderator why this person being able to contact me through their platform was a problem, or why blocking him was not a solution. I was stuck.

And I’m not the only one. Users have turned to the site’s “help” boards in vain; one explained her harassment this way: “[it] includes them trying to pay me for sex and after I say no, they continue trying to push the issue. Or they’ll change the subject and then bring up sex and sexual acts again later. They’re also sending me nude/sexual pictures… [but their] account doesn’t fit under any of the reporting options.” In another widely circulated incident, writer Clementine Ford was not able to get her abusers banned, even when they sent over 1,000 messages in 48 hours. She was, however, banned herself when she re-posted some of those messages publicly. Apparently, comments like “get socked fucking feminist” or “how about a suck on my single barrel pump action yoghurt riffle [sic]” (complete with unsolicited dick pic) were fine to send privately. When shown to the public by the victim, they became offensive content.


Sonam Kapoor recently spoke up against internet trolls and anonymous hate on social media

Incidents like these, common though they may be, are entirely avoidable. But to avoid them, Facebook would have to listen to women. Mob harassment, unsolicited sexual content, and stalking are ubiquitous features of women’s online harassment stories. The Pew Research Center reports that both men and women experience harassment along the lines of “being called names” or “being purposefully embarrassed”—in fact, men experience it slightly more often than women do—but when it comes to “severe harassment” (which Pew defines as stalking, physical threats, sexual harassment, and/or harassment over a sustained period), the victims are predominantly female. These forms of abuse may not constitute “credible threats” the way a man with a gun showing up at your house does, but they do have real-world effects, including emotional trauma, reputational damage, and difficulty finding future employment.

To be fair, it’s not just Facebook that has a problem. Compared to a site like Twitter—which once proudly declared itself “the free speech wing of the free speech party,” turning it into the hangout of choice for neo-Nazis and free-floating harassment mobs—life on Facebook is relatively peaceful. (Though one live-streamed murder is, I think, undoubtedly one live-streamed murder too many.) And some of the trouble comes down to plain human error: according to the Guardian report, these complaints are reviewed by actual human moderators, not bots, but the moderators are so overworked that they often have ten seconds or less to make a decision in each case.

Still, it’s the guidelines’ laissez-faire attitude, as much as their implementation, that’s disturbing—not least because they don’t seem to have a realistic grasp of how sexist harassment works. In a response to the Guardian leak, Facebook’s head of product policy, Monika Bickert, claimed that “we face criticism from people who want more censorship and people who want less. We see that as a useful signal that we are not leaning too far in any one direction.” (ELLE.com reached out directly to Facebook for comment, but did not hear back before press time.) But calls to remove harassment and abuse from the network are not demands for “censorship.” They’re demands for features that marginalized users need in order to use the site. These kinds of harassment are not a prelude to violence. They are violence—and their removal would signal that the site takes marginalized users’ rights to privacy and security seriously.

To its credit, Facebook has committed to working with feminist organizers on this. Soraya Chemaly, the director of the Women’s Media Center Speech Project, has been working with Facebook since 2013, when she took part in protesting their policy regarding graphic images of sexual violence.


Soraya Chemaly

“In my personal experience it took the threat of significant monetary losses to get their focused attention,” Chemaly told me, “but once we began working together they have been responsive and engaged.” She characterizes Facebook as more earnestly willing to address these problems than most and attributes the flaws in their policy to wider societal issues: “The company’s guidelines may be disturbing, but they are squarely in the mainstream in terms of legal interpretations of free speech in the United States, where hate speech, for example, is not barred by law…. We don’t, as a culture, value human safety and dignity above profit.”

“Their systems, initially built on and defended on the bases of a ‘neutral’ technology and an ‘objective’ platform, continues to reflect and amplify social inequalities, despite the good intentions of any individual actors,” Chemaly continues. When a social network’s harassment policy fails to explicitly account for the forms of abuse most likely to affect women, the result is a systemic bias against female users. To fix the situation at Facebook—or on any other social media site—we would have to be willing to see emotional abuse as real abuse and harm done to women as real harm. Until we’re able to see, hear, and trust women’s accounts of the violence they experience online (and everywhere else), the rules we create will never be enough to protect them.

From: Elle USA
