A white man called her kids the n-word. Facebook stopped her from sharing it.
By Tracy Jan and Elizabeth Dwoskin, July 31 at 6:02 PM
Francie Latour was picking out produce in a suburban Boston grocery store when a white man leaned toward her two young sons and, just loudly enough for the boys to hear, unleashed a profanity-laced racist epithet.
Reeling, Latour, who is black, turned to Facebook to vent, in a post that was explicit about the hateful words hurled at her 8- and 12-year-olds on a Sunday evening in July.
“I couldn’t tolerate just sitting with it and being silent,” Latour said in an interview. “I felt like I was going to jump out of my skin, like my kids’ innocence was stolen in the blink of an eye.”
But within 20 minutes, Facebook deleted her post, sending Latour a cursory message that her content had violated company standards. Only two friends had gotten the chance to voice their disbelief and outrage.
[Video: Inside Facebook: How developers say they police violent live content. The Washington Post got rare access inside Facebook's headquarters to talk to the people behind the Facebook Live platform about the challenges of policing sensitive and violent material. (Lee Powell/The Washington Post)]
Experiences like Latour’s exemplify the challenges Facebook chief executive Mark Zuckerberg confronts as he tries to rebrand his company as a safe space for community, expanding on its earlier goal of connecting friends and family.
But in making decisions about the limits of free speech, Facebook often fails the racial, religious and sexual minorities Zuckerberg says he wants to protect.
The 13-year-old social network is wrestling with the hardest questions it has ever faced as the de facto arbiter of speech for the third of the world’s population that now logs on each month.
In February, amid mounting concerns over Facebook’s role in the spread of violent live videos and fake news, Zuckerberg said the platform had a responsibility to “mitigate the bad” effects of the service in a more dangerous and divisive political era. In June, he officially changed Facebook’s mission from connecting the world to community-building.
The company says it now deletes about 288,000 hate-speech posts a month.
But activists say that Facebook’s censorship standards are so unclear and biased that it is impossible to know what one can or cannot say.
The result: Minority groups say they are disproportionately censored when they use the social-media platform to call out racism or start dialogues. In the case of Latour and her family, she was simply repeating what the man who verbally assaulted her children said: “What the f--- is up with those f---ing n----r heads?”
Facebook has acknowledged in a blog post that “too often we get it wrong,” particularly in cases where people use certain terms to describe hateful experiences that happened to them. The company has promised to hire 3,000 more content moderators before the year’s end, bringing the total to 7,500, and is looking to improve the software it uses to flag hate speech, a spokeswoman said.