6 Mar, 2020 02:28

War on ‘fake news’ made Facebook users more gullible – just in time for the 2020 election! Is anyone surprised?

Facebook’s efforts to crack down on “fake news” have actually made the problem worse, a recent study has found. Who knew that asking users to outsource their critical faculties to the platform would make them more credulous?!

When Facebook sent its army of fact-checkers to do battle with the disinformation scourge, ordering them to tag all “fake news” with a label to warn future readers, they must have known that even the fiercest truth-warriors couldn’t possibly get to every single false story. Facebook claims to have 1.25 billion daily users, and mere human moderators are hopelessly outmatched against the sheer volume of (dis)information transmitted on the platform.

Add to the mix that Facebook, working hand in hand with ideological actors like the Atlantic Council, is not just tagging obviously false stories, but also stories that counter the narrative certain political interests want to pass off as fact, and the task becomes even more Sisyphean.

Fact-checkers have to weigh a given story against both reality and Approved Reality™ before determining whether to slap the label on, and even the supposedly reliable “fact-checkers” Facebook uses have biases of their own that must be factored in.

If Facebook had been more vocal about its limitations when it rolled out the fact-checking feature, perhaps Massachusetts Institute of Technology professor David Rand would not be writing about the “implied truth effect.” The paper he published earlier this week showed in no uncertain terms how the fact-checking initiative had backfired.

It really shouldn’t be a surprise that Facebook users are more likely to share fake stories that merely haven’t been labeled as such (36.2 percent, according to the study) than they are to share the same stories on a platform with no fact-checking at all (29.8 percent). The “ideal” Facebook user – the one who trusts the platform unconditionally – believes its moderators-in-shining-armor can catch every single fake story and label it before it reaches users’ eyes, and that is an image the platform has cultivated every step of the way.

“Putting a warning on some content is going to make you think, to some extent, that all of the other content without the warning might have been checked and verified,” Rand explained, speaking for those who take Facebook at its word. But when a platform treats users like infants, in need of a mental babysitter (a NewsGuard, as it were) to protect them from the scourge of fake news, some will eventually come to rely upon that babysitter to vet their thoughts before they think them.

Facebook has bigger problems than increasingly gullible users, of course. It has staved off government regulation with a promise to scrub disinformation from its platform, and if word gets out that its fact-checking efforts have actually made users more susceptible to the dreaded “foreign meddling” campaigns they were instituted to protect against, who knows what kind of profit-squelching government controls might be unleashed? Accordingly, Facebook announced in December that it was hiring “community reviewers” to vet stories flagged by its “fake news”-hunting algorithms.

Doubling down on its attempt to pre-chew news for its users is the wrong way for Facebook to arrest its “fake news” spiral. There will always be some stories that slip through the cracks. More importantly, users should be encouraged to think for themselves. If the low-quality memes the Internet Research Agency pushed in 2016 really conned a bunch of voters into electing a candidate they would not have otherwise chosen – as mainstream media still insists is true, in stories that pass Facebook’s fact-checks with flying colors – it would be in everyone’s best interest to shore up Americans’ critical capacity, right?

Of course, no one in Washington really believes those memes (or a herd of rampaging “Russian bots”) swayed the 2016 election, and no one really wants to deal with the threat a well-informed electorate would pose in 2020.

US politicians are more likely to embrace Professor Rand’s disturbing “solution” to the “fake news” problem: “If, in addition to putting warnings on things fact-checkers find to be false, you also put verification panels on things fact-checkers find to be true, then that solves the problem because there’s no longer any ambiguity.”

Such a system would complete the process of taking critical thinking out of the user’s hands and placing it into the hands of ideologically motivated ‘fact-checkers’ – an idea that should chill any free-thinking individual to the bone.

The statements, views and opinions expressed in this column are solely those of the author and do not necessarily represent those of RT.
