“Falsehood flies and the truth comes limping after it,” wrote Jonathan Swift in 1710. Now a group of scientists say they have found evidence Swift was right – at least when it comes to Twitter. In a paper published in the journal Science, three MIT researchers describe an analysis of a vast amount of Twitter data: more than 125,000 stories, tweeted more than 4.5 million times in total, all categorised as true or false by at least one of six independent fact-checking organisations. The findings make for unhappy reading. “Falsehood diffused significantly farther, faster, deeper and more broadly than the truth in all categories of information,” they write, “and the effects were more pronounced for false political news than for false news about terrorism, natural disasters, science, urban legends or financial information.”
How much further? “Whereas the truth rarely diffused to more than 1,000 people, the top 1% of false-news cascades routinely diffused to between 1,000 and 100,000 people,” they write. In other words, facts don’t get retweeted, while too-good-to-be-true claims are viral gold. How much faster? “It took the truth about six times as long as falsehood to reach 1,500 people, and 20 times as long as falsehood to reach a cascade depth of 10” – meaning that it was retweeted 10 times sequentially (so, for example, B reads A’s feed and retweets a tweet, C then reads B’s feed and retweets the same tweet, and so on down the chain). The researchers speculate that falsehoods spread so fast because they fulfil our desire for novelty. True news, hamstrung by the requirement that it has to have happened, tends to be much the same, but fake stories can surprise and entertain without limit. The scientists posit that “when information is novel, it is not only surprising, but also more valuable, both from an information theoretic perspective [in that it provides the greatest aid to decision-making] and from a social perspective [in that it conveys social status on one who is ‘in the know’ or has access to unique ‘inside’ information].”
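The “cascade depth” metric described above can be made concrete with a short sketch. This is an illustration only, not the researchers’ code: the data layout (a mapping from each retweeter to the account they retweeted from) and the function name are hypothetical assumptions.

```python
# Illustrative sketch of the "cascade depth" metric: the longest chain of
# sequential retweets leading back to the original tweet. The dict-of-parents
# representation is a hypothetical simplification, not the paper's data format.

def cascade_depth(parents: dict[str, str]) -> int:
    """Return the length of the longest retweet chain.

    `parents` maps each retweeting account to the account it retweeted from;
    the original tweeter appears only as a value, never as a key.
    """
    def depth(user: str) -> int:
        d = 0
        while user in parents:  # walk back towards the original tweeter
            user = parents[user]
            d += 1
        return d

    return max((depth(u) for u in parents), default=0)

# A chain of nine sequential retweets, as in the example above:
# B retweets A, C retweets B, and so on through J.
chain = {chr(c): chr(c - 1) for c in range(ord("B"), ord("K"))}
print(cascade_depth(chain))  # 9
```

A cascade that merely fans out – thousands of people all retweeting the original tweet directly – has a depth of only 1, which is why depth measures something different from raw reach.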
Despite the recent focus on Twitter “bots,” automated accounts seemed to have little influence on the spread of false rumours. The researchers initially ran the analysis after removing all the bots they could find, but even when they added them back in, the overall conclusions remained the same. The only major change was that bots sped up the spread of all news, true and false. “This suggests that false news spreads farther, faster, deeper and more broadly than the truth because humans, not robots, are more likely to spread it,” they write. “This implies that misinformation containment policies should also emphasise behavioural interventions, like labelling and incentives to dissuade the spread of misinformation, rather than focusing exclusively on curtailing bots.” Nor is false news spread mainly by a few bad actors who deliberately set out to mislead. Instead, the difference in distribution speed between true and false rumours seems to be wholly down to the ordinary people in the middle of the chain: those who decide whether or not to hit retweet on any given tweet they see.
Focusing only on reliable news organisations might help users avoid sharing false rumours – if there were any agreement on which organisations are reliable. Instead, the researchers report, “there is no correlation between the degree to which the American public finds a source ‘reliable’ and the fraction of its verified stories which are true” (as measured by Politifact). Fox News is trusted by more than three times as many Americans as Bloomberg, while all the major TV networks are trusted by more Americans than the Wall Street Journal and the New York Times, despite being rated less reliable.
Credit: Alex Hern for The Guardian, 8 March 2018.