Many social media users politely correct fake news articles by replying with a more accurate source. However, an experiment found that this approach makes the problem worse instead of fixing it. The researchers behind the endeavor noted that offering corrections led to less accurate news being retweeted and to “greater toxicity” from the corrected users.
Social media platforms such as Facebook and Twitter have constantly grappled with misinformation proliferating on their sites. A huge number of articles flagged as misinformation revolved around the Wuhan coronavirus and the vaccines made to address it. Twitter said back in March of this year that it had removed more than 8,400 tweets and challenged 11.5 million user accounts worldwide over COVID-19 misinformation.
Thus, a team of researchers from the University of Exeter and MIT Sloan School of Management conducted an experiment on Twitter to examine how users respond to accuracy checks. They created a series of Twitter bot accounts that appeared to be genuine human profiles. These accounts were in existence for at least three months and had at least 1,000 followers.
Alongside this, the researchers also identified 2,000 Twitter users who had tweeted any one of 11 frequently shared fake news articles. These 11 articles had been debunked by fact-checking site Snopes.com. When the bots found any of the 11 claims being posted, they sent out an automated polite reply with a link to the Snopes fact-check.
But according to lead study author Mohsen Mosleh, their paper’s findings were “not encouraging,” as they suggested that politely pointing out falsehoods was counterproductive. “After a user was corrected, they retweeted news that was significantly lower in quality and higher in partisan slant. [Their] retweets [also contained] more toxic language,” he said. Because of these findings, Mosleh and the paper’s other authors think people should be wary about “going around correcting … other [users] online.”
The research team also evaluated more than 7,000 retweets with links to the 11 articles – retweets that the 2,000 observed users posted in the 24 hours after the bot accounts issued the corrections.
The team found that the accuracy of news sources retweeted by the users declined by about one percent in the 24 hours following correction. They also found a one percent rise in partisan lean – the tendency to favor a party or person – in the retweeted content. Moreover, the toxicity of the language used in the retweets also increased by three percent.
However, the research team noted a distinction between the retweets and the primary tweets they observed, and this distinction held across all three areas analyzed for the study – accuracy, partisan lean and language. Original tweets did not see diminished news quality; only retweets became less accurate.
According to the authors, Twitter users spending more time drafting primary tweets and spending less time on retweets played a role in this distinction. They surmised that the social media site’s retweeting capability contributed to the proliferation of fake news there. (Related: Scientific study of fake news RETRACTED because it was found to be fake science.)
Study co-author David Rand said: “Our observation that the effect only happens to retweets suggests that the effect is operating through the channel of attention.” He added that he and the other authors had expected users to focus their attention on accuracy when corrected – but they did not. “It seems that getting publicly corrected by another user shifted people’s attention away from accuracy … to other social factors such as embarrassment,” he said.
The authors also noticed that the effects on the retweets slightly increased when the correction came from a user with the same political affiliation as the original poster. “This shows how complicated the fight against misinformation is, and [this study] cautions against encouraging people to go around correcting each other online,” the paper concluded.
Prior to the study, Twitter had already implemented measures to combat the proliferation of fake news on the site. Reuters reported back in May 2020 that the social media platform would add warning labels to tweets containing inaccurate information. It added that the labels would also apply to tweets posted prior to the announcement – and to anyone on the platform. (Related: Twitter to add warning labels to “misleading” coronavirus tweets – but who’s fact-checking Twitter?)
According to the company, the move was part of a new approach to misinformation – which would eventually extend to other topics. Twitter Policy Strategy, Development and Partnerships Head Nick Pickles said: “One of the differences in our approach here is that we’re not waiting for a third party to have made a cast-iron decision one way or another. We’re reflecting the debate, rather than stating the outcome of a deliberation.”