Facebook ‘quality news’ prioritization may result in bias & won’t change quality
Facebook is now planning to change the way it presents news to users by introducing a ranking system aimed at defining what it deems “quality news.”
“We are, for the first time in the history of Facebook, taking a step to try to define what quality news looks like... I think we would agree that not all news is created equal, and this is a big step for us to begin thinking about that,” Facebook’s Head of News Partnerships Campbell Brown told the Recode Code Media conference in California on February 12.
The world’s biggest social network says the move will help make it a more credible source of information and that it will change how it prioritizes stories to prevent fake news. Co-founder Mark Zuckerberg said earlier that the company will rely on user surveys to help determine which sources are “trustworthy.”
RT spoke to internet law expert and social media solicitor Yair Cohen, who thinks that social media companies aren’t revealing the real criteria behind their filters.
RT: The Facebook news head said news isn't created equal. Who should decide what's quality among all the quantity of stories?
Yair Cohen: The original idea was that Facebook users would decide what is quality and what is not. Now, it seems that Facebook wants to make this decision itself. I think that Facebook is clearly concerned about governments stepping in to moderate and remove content from its platform. So, Facebook has decided to take the initiative and moderate its own content.
…Currently, Facebook has approximately 7,000 individuals who are responsible for creating its policies, policing those policies, and then deciding what content should and should not be viewed by users. But these 7,000 are not operating with any form of transparency. Nobody really knows what they are doing, and I think this is quite dangerous.
RT: When we look at fake news or quality news, what criteria can be used to create a ranking system?
YC: It is a subjective issue; the most important thing is that people know what criteria are being used… I just came back from a conference in California. It was the first conference of its kind, where leaders of the largest social media companies got together to speak publicly for the first time about how they moderate and remove content. And it was very clear that the whole process is being done in secret. There are no published criteria. And the reason they give for not publishing the criteria is that they don’t want people to manipulate the content. So, the whole process is highly secretive. Provided we know what the criteria are, we can judge for ourselves whether it is right for Facebook or any other organization to remove content. But as long as we don’t know, this whole regime seems to me pretty much totalitarian.
RT: This ranking system in essence could be called biased if a single group is making all the decisions, would you agree?
YC: I certainly agree with that. It is a very small group which is not accountable to anyone; all the decisions are being made in secret, and the algorithm is secret. There is going to be artificial intelligence, there is going to be some machine-driven content removal, but quite a lot of decisions will still be made by humans. We don’t really know who these people are. I think it is highly dangerous that… a very small group of people might decide that it is, in their view, in people’s interest to promote a political party; they might have their own bias about what is biased and what is not.
RT: Can this new initiative really help eliminate fake news?
YC: I think this initiative is really designed by the social media companies to try and prevent government intervention in the content removal process. It will certainly not affect the quality, so to speak, of any news.