Artificial intelligence isn't smart enough to stop hate speech yet, Facebook says
One of the most daunting, complicated issues facing social media sites is hate speech.
A couple of years ago, Pew found that more than half of internet users had seen someone called offensive names online, or targeted by deliberate attempts to embarrass them.
Facebook, even if it seems like your parents (or grandparents) are slowly taking it over, isn't immune. And the company defines hate speech narrowly: a direct attack on someone based on race, ethnicity, religion, national origin, sexual orientation, sex or gender identity, or serious health issues.
But how do you tell actual hate speech from, say, satire, which can use terms tied to hate speech to make a point? Or from self-deprecating comments that use offensive language?
That big question – What is hate speech? – is at the center of this blog post by Facebook VP of Public Policy Richard Allan. As he explains, the first step to stopping hate speech is to define it. But that's incredibly difficult, because it's so much about context, the writer's intention, and personal opinion.
"What does the statement 'burn flags not fags' mean?" Allan writes, noting it could be an attack on the gay community – or an attempt to "reclaim" the word. Or if in the UK, an encouragement to not smoke cigarettes. The word "dyke" could also be used as a direct attack on someone; but there are also groups like Dykes on Bikes MPLS, where the term is clearly not used as hate speech.
So how does Facebook stop it?
Over the past two months, Facebook has deleted an average of 66,000 posts a week that were reported as hate speech, Allan says. He also acknowledges the company makes mistakes, sometimes missing posts that are clearly hate speech while deleting others that aren't.
Facebook does use artificial intelligence to flag posts from people who may be considering suicide, and to detect some content that violates its standards, but the technology isn't at the point where AI can solve the hate speech problem.
"Technology will continue to be an important part of how we try to improve," Allan writes. "But while we’re continuing to invest in these promising advances, we’re a long way from being able to rely on machine learning and AI to handle the complexity involved in assessing hate speech."
Basically, a machine can't understand things like context, intent, and personal opinion.
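To see why context trips up software, consider a deliberately naive keyword filter, written here as a hypothetical Python sketch. This is an illustration only, nothing like Facebook's actual system, whose details aren't public; the keyword list and example posts are built around Allan's own "Dykes on Bikes MPLS" example.

```python
# A minimal, hypothetical sketch (not Facebook's actual system) of the kind of
# keyword filter Allan's examples defeat: it can't tell a direct attack from a
# reclaimed, self-referential use of the same word.

SLUR_KEYWORDS = {"dyke"}  # toy list based on the example in Allan's post

def naive_flag(post: str) -> bool:
    """Flag a post if any keyword appears in it, ignoring all context."""
    words = [w.strip(".,!?").lower() for w in post.split()]
    return any(key in word for word in words for key in SLUR_KEYWORDS)

posts = [
    "You dyke, stay off this page.",          # attack: should be flagged
    "Proud member of Dykes on Bikes MPLS!",   # reclaimed: should not be
]

for post in posts:
    print(naive_flag(post), "->", post)
# Prints True for both posts: the filter can't distinguish the attack from
# the group's name, which is exactly the context problem Allan describes.
```

Real classifiers are far more sophisticated than this, but Allan's point stands: the word alone doesn't carry the intent.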
That's why Facebook is so reliant on people reporting offensive posts. A team of 4,500 people works around the world to review what gets reported and determine whether it violates the company's standards. And Facebook says it plans to add 3,000 more reviewers over the next year.
People have widely differing opinions on this too. Americans are more tolerant of offensive speech than people in other countries, Pew Research found.
Age matters too. Another Pew survey found people ages 18 to 34 were more likely than any other age group to support the government censoring offensive statements about minority groups. But even then, only 40 percent of millennial respondents supported it.