Facebook is taking steps to combat the ever-growing fake news problem that we face on the internet.
The social media giant has teamed up with independent third-party fact-checking organizations that adhere to Poynter’s International Fact-Checking Code of Principles – the global standard for fact-checking. These organizations will be responsible for identifying fake news stories, and Facebook will use their verdicts to reduce how widely that content spreads on its platform.
Alongside these new partnerships, Facebook is launching a number of new on-platform features today.
First, users will be able to flag stories as fake news. A heavy volume of reports, combined with other proprietary behavior-based signals from users on the platform, will result in the content being sent to the fact-checking organizations. From there, if an organization deems the story fake, it will be flagged as “disputed” in Facebook’s system.
Once a piece of content has been marked as “disputed,” a label will appear below the link preview on Facebook posts with a large red caution symbol. If interested, users can click to read a report of why the content was marked as such.
If a user attempts to share a post that has been marked as disputed, they’ll see a second warning explaining that the content they’re about to share has been disputed by fact-checkers.
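The flow described above can be sketched in a few lines of code. Everything here is illustrative: the class, the threshold value, and the warning text are invented for the sketch, since Facebook has not published its actual implementation.

```python
# Hypothetical sketch of the flow described above: user reports accumulate
# until a threshold is crossed, the story is queued for third-party
# fact-checkers, and a "fake" verdict marks it as disputed. The threshold
# and all names are illustrative, not Facebook's real values.

REPORT_THRESHOLD = 100  # invented cutoff for illustration only

class Story:
    def __init__(self, url):
        self.url = url
        self.reports = 0
        self.status = "normal"  # "normal" -> "under_review" -> "disputed"

    def flag_as_fake(self):
        """Called each time a user reports the story as fake news."""
        self.reports += 1
        if self.status == "normal" and self.reports >= REPORT_THRESHOLD:
            self.status = "under_review"  # sent to fact-checking organizations

    def apply_fact_check(self, verdict_is_fake):
        """A fact-checking organization returns its verdict."""
        if self.status == "under_review" and verdict_is_fake:
            self.status = "disputed"  # triggers the on-platform warning label

    def share_warning(self):
        """Warning shown when a user tries to share a disputed story."""
        if self.status == "disputed":
            return "Disputed by 3rd-party fact-checkers"
        return None

story = Story("http://example.com/article")
for _ in range(REPORT_THRESHOLD):
    story.flag_as_fake()
story.apply_fact_check(verdict_is_fake=True)
print(story.status)           # disputed
print(story.share_warning())  # Disputed by 3rd-party fact-checkers
```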
Read, but not shared
Not every user is passionate enough to report content, but many are savvy enough to know when something shouldn’t be shared. As an additional defense against fake or misleading news, Facebook is beginning to test a new ranking signal: if the algorithms determine that reading an article makes users less likely to share it, there’s a good chance the article misled them in some way.
“We’re going to test incorporating this signal into ranking, specifically for articles that are outliers, where people who read the article are significantly less likely to share it,” wrote Facebook.
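One way to picture the "read but not shared" signal is as an outlier check on share rates. The sketch below is a guess at the shape of such a signal, not Facebook's method: it computes each article's share-per-read rate and flags articles whose rate sits far below the norm. The z-score cutoff and the sample data are invented for illustration.

```python
# Illustrative outlier detection for the "read but not shared" signal.
# An article whose readers share it far less often than readers of
# comparable articles gets flagged. Cutoff and data are hypothetical.

from statistics import mean, stdev

def share_rate(reads, shares):
    """Fraction of readers who went on to share the article."""
    return shares / reads if reads else 0.0

def misleading_outliers(articles, z_cutoff=-1.5):
    """Return ids of articles whose share rate is a strong negative outlier.

    `articles` maps article id -> (reads, shares). A strongly negative
    z-score means people who read the article rarely shared it.
    """
    rates = {aid: share_rate(r, s) for aid, (r, s) in articles.items()}
    mu = mean(rates.values())
    sigma = stdev(rates.values())
    return [aid for aid, rate in rates.items()
            if sigma and (rate - mu) / sigma < z_cutoff]

data = {
    "a": (1000, 300),
    "b": (1000, 280),
    "c": (1000, 320),
    "d": (1000, 290),
    "e": (1000, 20),   # widely read, almost never shared
}
print(misleading_outliers(data))  # ['e']
```

In a real system this signal would feed into ranking rather than act as a hard filter, consistent with Facebook’s statement that it is only being incorporated as one signal among many.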