A document obtained by an Australian news outlet shows how Facebook keeps tabs on – and tries to guess – its users' moods. It also raises questions about how that information might be made available to advertisers.
The story broke in The Australian, which says it got hold of a 23-page document labeled as "Confidential: Internal Only." The document, which is from this year, lays out how Facebook can monitor the mood of a user (as young as 14 years old) based on their posts, and can guess when they feel "useless," or like a "failure," or "anxious."
With that, the social media network can find "moments when young people need a confidence boost," the document says, according to The Australian. The document was put together by a couple of Facebook execs in Australia as part of a presentation for one of the country's biggest banks.
Facebook declined to say if this was happening outside Australia, but The Australian notes ad products are generally introduced on a regional or global basis.
We know Facebook is already using AI to monitor that type of behavior, specifically anything that might point to suicide. In March, Facebook rolled out updates that try to identify when somebody might be struggling, including the ability to recognize posts that seem "very likely" to include thoughts of suicide.
Facebook sent a response to The Australian, apologizing for "the process failure" and saying it will look into what happened, noting that appropriate discipline is a possibility.
In another comment published Sunday, Facebook called the article "misleading."
"Facebook does not offer tools to target people based on their emotional state," the site said.
Instead, Facebook says the analysis was done to help marketers understand how Facebook users are expressing themselves.
"It was never used to target ads and was based on data that was anonymous and aggregated," Facebook continued.
However, the social media platform did acknowledge something wasn't done the way it was supposed to be. It said the research "did not follow" the established review process and that it's working to "correct the oversight."
More ad questions
This is the second time in the past six months that Facebook has faced questions about how it lets advertisers use data.
Last fall, ProPublica published a story about advertising tools that let a company exclude racial groups from seeing ads. (An advertiser could, for example, make sure an ad didn't reach black Facebook users.)
In February Facebook updated its policies to make it clear advertisers can't discriminate against users based on "race, ethnicity, color, national origin, religion, age, sex, sexual orientation, gender identity, family status, disability, medical or genetic condition."
Facebook also faced questions after a man in Cleveland posted video of himself shooting and killing someone, then livestreamed a confession – footage that was up for two hours before Facebook took it down.