Facebook, Twitter and the Digital Disinformation Mess

The kind of disinformation now known as fake news has tainted public discourse for centuries, even millennia. But it’s been amplified in our digital age as a weapon of fearmongers, mob-baiters and election-meddlers that can widen social fissures, undermine democracies and bolster authoritarian regimes. As voters in some of the world’s most-populous countries headed to the polls in 2019, governments began to respond. Companies such as Facebook, Twitter and Google have come under increasing pressure to take action.

1. What’s disinformation?

It’s often defined as false content spread with the specific intent to deceive, mislead or manipulate. (That’s different from misinformation, which is erroneous but spread unintentionally.) Disinformation can take the form of legitimate-looking news stories, tweets, Facebook or Instagram posts, advertisements and edited recordings distributed on social media or via messaging apps. A new worry is what are called deepfakes: video or audio clips in which computers can literally put words in someone’s mouth.

2. What’s different in the internet age?

Barriers to mass communication are lower. There’s no need for a printing press or broadcast tower, or to get the attention of traditional media gatekeepers. With platforms such as Facebook and Twitter, modern-day purveyors of disinformation need only a computer or smartphone and an internet connection to reach a potentially huge audience — openly, anonymously or disguised as someone or something else, such as a genuine grassroots movement. In addition, organized groups of paid posters, known as troll farms, and so-called internet bots (software that performs automated tasks quickly) can be deployed to drive large-scale disinformation campaigns.

3. What’s the harm?

If the global reach of social media were being used merely to spread messages of peace and harmony — or to make money — maybe there wouldn’t be any. But the purposes are often darker. In what’s known as state-sponsored trolling, for instance, governments create digital hate mobs to smear critical activists or journalists, suppress dissent, undermine political opponents, spread lies and control public opinion.

4. Who produces it?

The discussion often starts with Russia. A Rand Corp. study of the unresolved conflict in eastern Ukraine, which has claimed some 13,000 lives since 2014, found the Russian government under President Vladimir Putin ran a sophisticated social media campaign that included fake news, Twitter bots, unattributed comments on web pages and made-up hashtag campaigns to “mobilize support, spread disinformation and hatred and try to destabilize the situation.” A NATO report noted that, rather than using cyber attacks to take out physical networks, the campaign targeted “the minds of the people.” Another Russian disinformation campaign, the one aimed at the 2016 U.S. presidential election, reached millions of American voters with phoney posts and ads that sought to exploit divisions on hot-button issues. One study found more than 6.6 million tweets linking to fake news and conspiracy theories on Twitter in the month before the 2016 vote. The actual impact on the result is contested.

5. Who else?

Other examples:

  • Before India’s elections in April and May 2019, shadowy marketing groups connected to politicians used the WhatsApp messaging service to spread doctored stories and videos denigrating opponents. The country also has been plagued with deadly violence spurred by rumours that spread via WhatsApp groups.
  • A study of 100,000 political images shared on WhatsApp in Brazil in the run-up to its 2018 election found that more than half contained misleading or flatly false information; it’s unclear who was behind them.
  • In countries such as Sri Lanka and Malaysia, fake news on Facebook has inflamed conflict between Buddhists and Muslims. In one instance in Sri Lanka, posts falsely alleging that Muslim shopkeepers were putting sterilization pills in the food served to Buddhist customers led to a violent outburst in which a man was burned to death. In Myanmar, a study commissioned by Facebook blamed military officials for using fake news to whip up popular sentiment against the Rohingya minority, helping to set the stage for what UN officials have described as ethnic cleansing.
  • A 2018 report from Oxford University found evidence of organized social-media manipulation campaigns in 48 countries, up from 28 in 2017.

6. How does it work?

A blatant falsehood might spring up on something that resembles a legitimate news website — with names such as newsexaminer.net or WorldPoliticus.com — and go viral when it’s tweeted by someone with lots of followers or turned into a “trending” YouTube video. The most sophisticated disinformation operations use trolls, artificial intelligence and internet bots to flood the zone with social-media posts or messages to make a fake or doctored story appear authentic and consequential. Fake news can be a complete fabrication (the pope didn’t really endorse Donald Trump), but often there’s a kernel of truth that’s taken out of context or edited to change its meaning.

7. What are social-media companies doing?

Facebook and Google, under pressure from lawmakers and regulators, have started requiring political ads in the U.S. and Europe to disclose who is behind them, as in other media. Google’s YouTube division has started to change its “up next” algorithms to limit recommendations for suspected fake or inflammatory videos, a move it resisted for years as it sought to maximize viewership. WhatsApp now limits to five the number of people or groups to which a message can be forwarded. Its parent company, Facebook, said it spent 18 months preparing for India’s 2019 election: It blocked and removed fake accounts, looked for attempts at meddling and partnered with outside fact-checkers (albeit relatively few) to combat fake news. Facebook also has developed artificial intelligence tools to help identify content that’s abusive or otherwise violates the site’s policies. In the wake of the March 15, 2019, massacre in Christchurch, New Zealand, Facebook, Google and Twitter signed a voluntary agreement with world leaders pledging to fight hate speech online.

8. What about governments?

Singapore lawmakers passed a bill allowing for criminal penalties of up to 10 years in prison and a fine of up to S$1 million ($740,000) for anyone convicted of spreading falsehoods online. The responsibility for identifying falsehoods detrimental to the public interest was given to government ministers. Malaysia enacted a similar law that the government, elected in 2018, is trying to repeal. Indonesia set up a 24-hour “war room” ahead of its 2019 elections to fight hoaxes and fake news. France has a new law that allows judges to determine what is fake news and order its removal during election campaigns. In the U.S., efforts to crack down on disinformation can run up against the constitutional guarantee of free speech, although some platforms have begun to restrict postings by anti-vaccine activists, for example. Then there’s the Philippines, where the government of President Rodrigo Duterte encourages “patriotic trolling” to undermine his critics.

The Reference Shelf

  • Facebook is finding out how hard it is to clean up toxic content.
  • The Knight Foundation studied disinformation campaigns on Twitter.
  • The Guardian looks at Britain’s covert propaganda in the U.S. during World War II.
  • How a handful of people tried to stop fake news in India, and one determined cop’s fight.
  • A three-part series on social-media algorithm manipulation on YouTube.
  • Foreign Policy magazine says the U.S. and Europe are unprepared for disinformation wars.
  • In Ukraine’s last election, Russia tried a new Facebook tactic.

Credit: Shelley Banjo for Bloomberg Businessweek, with assistance by Marie Mawad and Grant Clark. 19 May 2019.