4 Mar, 2020 18:12

The ‘implied-truth effect’: Why labelling things as ‘fake news’ simply doesn’t work the way Big Tech wants

Ever since the shock outcomes of the 2016 US presidential election and Brexit referendum, Big Tech has vowed to stamp out fake news once and for all. However, a new study indicates it may have missed the point entirely.

Big Tech employs hordes of fact-checkers whose sole purpose is to separate the 'truth' from the plague of Fake News™. Yet an unintended consequence of this war on disinformation is that it doesn't necessarily foster critical thinking among social media users; instead, it sends people into the arms of the next great swindler who has yet to be found out by the heroic fact-checkers.

In other words, marking one story as fake news doesn't protect people from the myriad of untagged stories that also contain misleading information or outright falsehoods. 

That’s according to a new study co-authored by MIT professor David Rand, who proposes an "implied-truth effect" in news consumption, whereby stories that have been spared the ever-watchful gaze of the fact-checkers benefit from a perceived endorsement by sheer virtue of their positioning. This selective labelling of false news makes other stories seem more legitimate, regardless of their content.

"Putting a warning on some content is going to make you think, to some extent, that all of the other content without the warning might have been checked and verified," says professor Rand.

"There's no way the fact-checkers can keep up with the stream of misinformation, so even if the warnings do really reduce belief in the tagged stories, you still have a problem, because of the implied-truth effect," he explains.

Rand adds that it's perfectly rational for readers to make this assumption of implied truth, and argues that if we are to clean up the online information ecosystem, stories should also be marked as "verified," separating the diamonds from the rough rather than merely tagging the fakes.

Rand and his team of researchers conducted a pair of online experiments involving a total of 6,739 US residents, who were presented with a variety of true and false headlines in a social media-style news feed.

The false stories were chosen from the popular (but itself somewhat flawed) fact-checking site Snopes.com, and included headlines such as "BREAKING NEWS: Hillary Clinton Filed for Divorce in New York Courts" and "Republican Senator Unveils Plan To Send All Of America's Teachers Through A Marine Bootcamp."

Participants were divided into three groups and asked which stories they would consider sharing on social media. 

One group was shown the news feed with only some of the false stories marked as "False"; the second group saw a feed in which some stories were marked as "False" and others carried "True" verification stamps; the third, a control group, saw the feed with no labels at all and had to judge the stories unprompted.

Overall, the researchers found that marking stories as false does make people less inclined to share them. 

In the control group given no labels, participants said they would consider sharing 29.8 percent of the false stories mixed into their feed. That figure dropped to 16.1 percent for false stories carrying a warning label. However, readers in the warning-label group were still willing to share 36.2 percent of the false stories that bore no label, a higher rate than in the unlabelled control feed.

"We robustly observe this implied-truth effect, where if false content doesn't have a warning, people believe it more and say they would be more likely to share it," Rand said. 

When the combination of "True" and "False" labels was applied, participants were much less likely to share the stories across the board, sharing only 13.7 percent of the headlines labelled as false and 26.9 percent of the false stories without a warning label.

"If, in addition to putting warnings on things fact-checkers find to be false, you also put verification panels on things fact-checkers find to be true, then that solves the problem, because there's no longer any ambiguity," Rand says. "If you see a story without a label, you know it simply hasn't been checked."

Rand also suggests that the labels had the powerful effect of helping people transcend ideological biases often perceived as immovable.

"These results are not consistent with the idea that our reasoning powers are hijacked by our partisanship," Rand says.

Rand advocates continued research into the phenomenon of fake news and online bias but, in the meantime, proposes a simple way for social media giants to achieve their stated goal of overcoming the scourge of fake news: label both the good and the bad.
