Facebook defines graphic violence as content that glorifies violence or celebrates the suffering or humiliation of others; such content, it says, may be covered with a warning and prevented from being shown to underage viewers.
This report covers Facebook's enforcement efforts between October 2017 and March 2018, spanning six areas: graphic violence, adult nudity and sexual activity, terrorist propaganda, hate speech, spam and fake accounts. While it has blocked more than 500 million fake accounts, Facebook estimates that about 3 to 4 percent of accounts on the site are fake. Today's report said Facebook disabled 583 million fake accounts during the first three months of this year, down from 694 million during the previous quarter. Facebook claimed that in the first three months of 2018 it was able to detect 98.5% of fake accounts soon after they were created.
Facebook plans to continue publishing new enforcement reports and will refine its methodology for measuring how much bad content circulates on the platform. Last week, Alex Schultz, the company's vice president of growth, and Rosen walked reporters through exactly how the company measures violations and how it intends to deal with them.
Though Facebook extolled its forcefulness in removing content, the average user may not notice any change. As Sheera Frenkel reported, Facebook has been under pressure to remove nudity, violence and hate speech, among other "inflammatory content".
Hate speech was the only category that Facebook's detection technology could not quite get a handle on: only 38 percent of the hate-speech posts that were removed had been flagged by its systems before users reported them. Elsewhere, the company attributed gains to "improvements in our ability to find violating content using photo-detection technology, which detects both old content and newly posted content".
Facebook's internal technology flagged adult nudity or sexual content about 96% of the time before it was reported by users, according to the report. Facebook boasts 2.2 billion monthly active users, and if Facebook's AI tools hadn't caught these fake accounts flooding the social network, it would have gained more than a quarter of its total population in just 89 days.
Facebook's report suggests its investment in AI that can help moderate objectionable content is slowly paying off.
Facebook released the data not to brag; rather, the company said in a statement that it is offering up its statistics so users can judge its performance for themselves. In this case, 86% of the removed content was flagged by its technology, and most often it was spam.
"Yes there are clear skews in many of these metrics", said Schultz.
"All of this is under development. And it's created to make it easy for scholars, policymakers and community groups to give us feedback so that we can do better over time". To that end, the company is scheduling summits around the globe to discuss this topic, starting Tuesday in Paris.
Representatives will also visit Oxford, England, on Wednesday, May 16, and Berlin on May 17.