Facebook assigns trustworthiness scores to users

The reputation scores are meant to help the company identify users who may be abusing Facebook's content-flagging system by reporting real news as fake, The Washington Post reports.

But there are obvious pitfalls to this kind of system, with Tessa Lyons, the Facebook product manager in charge of fighting misinformation, admitting that it is "not uncommon for people to tell us something is false simply because they disagree with the premise of a story or they're intentionally trying to target a particular publisher".

"The idea that we have a centralized "reputation" score for people that use Facebook is just plain wrong and the headline in The Washington Post is misleading".

SAN FRANCISCO: Facebook acknowledged Tuesday that it has developed tools to identify users who "indiscriminately" flag news as fake, as it refines its effort to combat misinformation.

A low trustworthiness score doesn't entirely determine a person's credibility, Lyons said, and users don't get a single unified score.

The system Facebook built for users to flag potentially unacceptable content has in many ways become a battleground.

The specific factors considered when assigning users a score are likely to remain a mystery, as the tech giant attempts to develop algorithms that cannot be gamed.

However, the reputation score is only one measurement "among thousands of new behavioral clues" that the company uses to assess whether a user poses a risk, the Post said. Lyons's reluctance to share details stems from not wanting to tip off bad actors about how the process works, since doing so could allow them to easily game the system.

It's a tricky position for a company claiming commitment to transparency.

In 2015, Facebook gave users the ability to report posts they consider to be false, Lyons said in the interview.

Facebook, like many others in the social media sphere, relies on users to flag problematic content. However, many users report content they merely disagree with as untrue, flooding Facebook's fact-checkers with unfounded claims. When a user flags a post as containing false news, that user's score is adjusted based on whether the tip turns out to be accurate.
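
As a rough illustration of that adjustment step, the sketch below keeps a per-user tally of flags submitted and flags later confirmed false by fact-checkers, and derives a 0-to-1 score from the ratio. Everything here is an assumption made for the example: the class and method names, the neutral starting value, and the ratio itself are hypothetical, and Facebook has not disclosed how its actual scoring works.

```python
# Hypothetical sketch only -- not Facebook's actual scoring logic.
from dataclasses import dataclass


@dataclass
class UserReputation:
    """Tracks how often a user's false-news flags are confirmed by fact-checkers."""
    flags_submitted: int = 0
    flags_confirmed: int = 0

    def record_flag(self, confirmed_false: bool) -> None:
        """Update the tally once fact-checkers rule on a post the user flagged."""
        self.flags_submitted += 1
        if confirmed_false:
            self.flags_confirmed += 1

    @property
    def score(self) -> float:
        """Fraction of this user's flags that were borne out (0.0 to 1.0)."""
        if self.flags_submitted == 0:
            return 0.5  # assumed neutral starting point before any flags are reviewed
        return self.flags_confirmed / self.flags_submitted


# A user whose flags are rarely confirmed ends up with a low score,
# so their future reports could be given less weight.
reporter = UserReputation()
for outcome in (False, False, True, False):
    reporter.record_flag(confirmed_false=outcome)
print(round(reporter.score, 2))  # 0.25
```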

This article was originally published by The Washington Post.
