December 1, 2024
Feeling Outraged? Think Twice Before Hitting “Share.”
Misinformation fuels outrage—which in turn leads to mindless social-media shares, a new study finds.
Illustration by Yevgenia Nayberg.
Misinformation runs rampant on social media. And concern over its influence is only growing with the emergence of AI-generated misinformation.
Policymakers, regulators, and other stakeholders in content moderation have proposed a variety of methods to counter or prevent the spread of misinformation on social media. For example, there are accuracy nudges that discreetly remind people to keep an eye out for misleading information. There are efforts to debunk misinformation head on. And there are “prebunking” methods that preemptively address falsehoods to help build public resilience to misinformation.
In large part, these strategies rest on the idea that people generally care about the accuracy of information, says William Brady, an assistant professor of management and organizations at the Kellogg School. But some types of content are particularly good at getting people to overlook accuracy.
“There’s a class of misinformation that we should pay special attention to,” he says, “because of the very fact that it tends to put us in a motivational state where we’re not actually going to be paying that much attention to accuracy. And this would be misinformation that evokes moral outrage.”
The relationship between misinformation and outrage—and how it affects people’s behavior on social media—is the subject of new research by Brady and his colleagues Killian McLoughlin, Ben Kaiser, and M.J. Crockett of Princeton; Aden Goolsbee of Yale; and Kate Klonick of St. John’s University.
Across ten studies analyzing more than a million social-media posts, the researchers find that misinformation is more likely to trigger outrage than trustworthy news.
Outrage, in turn, drives people to share or retweet these often-misleading social-media posts. And it makes them more willing to do so without actually reading the article linked in the post, even if they are otherwise good at identifying misinformation.
“We actually find people are not terrible at discerning between misinformation versus trustworthy news,” Brady says. “But here’s the key: if you give them an outrage-evoking article with misinformation, that ability they have, it goes out the window; they are more likely to share it anyway.”
Misinformation and outrage
For their research, Brady and colleagues conducted eight observational studies across two social-media platforms, multiple time periods (2017 and 2020–2021), and similar networks of people. They examined 1,063,298 posts on Facebook and 44,529 tweets by 24,007 people on X (Twitter at the time of the study), covering a range of topics. Each post contained a link to a news article.
They categorized a post as misinformation if its link came from a “low-quality” news source known to produce false or misleading content, based on reports by independent fact-checking organizations.
In contrast, they considered a post trustworthy if it came from a source rated “high quality.” The sources classified as misinformation were six times more likely than the trustworthy ones to publish content that independent fact-checkers rated false or misleading.
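To make that labeling step concrete, here is a minimal sketch of how a post could be tagged by the quality of the source it links to. The domain names, ratings, and lookup table are hypothetical stand-ins; the study relied on source-quality ratings derived from independent fact-checking organizations, not a hand-built list like this.

```python
# Minimal sketch of the source-quality labeling step; not the authors' pipeline.
# The domains and ratings below are hypothetical stand-ins for ratings derived
# from independent fact-checking organizations.
from urllib.parse import urlparse

SOURCE_QUALITY = {
    "example-reliable-news.com": "high",   # hypothetical high-quality outlet
    "example-clickbait-news.com": "low",   # hypothetical low-quality outlet
}

def label_post(post_url: str) -> str:
    """Tag a post by the quality rating of the news domain it links to."""
    domain = urlparse(post_url).netloc.removeprefix("www.")
    rating = SOURCE_QUALITY.get(domain)
    if rating == "low":
        return "misinformation"
    if rating == "high":
        return "trustworthy"
    return "unrated"

print(label_post("https://www.example-clickbait-news.com/story"))  # misinformation
```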
The researchers also conducted two behavioral experiments with about 1,500 participants, who rated either the accuracy of twenty news headlines or their likelihood of sharing them.
“All of these factors, when you put them together, cover a wide range of data that sets the study apart,” Brady says. “It was very ambitious in its scope.”
Groupthink
Three notable findings about misinformation on social media emerged from these studies.
First, “misinformation is highly likely to evoke moral outrage, more so than factually accurate information,” Brady says.
Misinformation was more likely than trustworthy news to lead to outrage on both social-media platforms, regardless of audience size. On Facebook, misinformation received more anger emojis than did trustworthy news, while on X, it spurred more outrage-filled comments, according to machine-learning-driven sentiment analysis.
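As a toy illustration of the X side of that measurement, a crude outrage score for a post’s replies might look like the sketch below. The study used a trained machine-learning classifier, not keyword matching, and the word list here is invented for illustration.

```python
# Toy outrage score for a post's replies. The study used a trained
# machine-learning sentiment classifier; this keyword check and the word
# list are invented purely for illustration.
OUTRAGE_WORDS = {"outrageous", "disgusting", "shameful", "appalling"}

def outrage_score(replies: list[str]) -> float:
    """Fraction of replies containing at least one outrage-associated word."""
    if not replies:
        return 0.0
    flagged = sum(
        any(word in reply.lower() for word in OUTRAGE_WORDS)
        for reply in replies
    )
    return flagged / len(replies)

print(round(outrage_score(["This is disgusting!", "Thoughtful piece.", "Shameful."]), 2))  # 0.67
```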
Second, when people feel outraged by news on social media, they are more likely to share it with others. “Outrage combined with the social-media context tends to put people in this kind of impulsive-sharing mode,” Brady says.
This is true whether or not a post contains misinformation. People were more likely to share a Facebook post the more anger reactions it had, and they were more likely to retweet an X post the more outrage it elicited in the comments.
And finally, outrage increases people’s willingness to share posts without checking them for accuracy.
The more anger reactions a Facebook post had, the more willing people were to repost the news without first reviewing it. The effect was stronger for posts linking to misinformation than trustworthy sources.
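One rough way to picture that relationship is a logistic regression of share-without-reading decisions on anger-reaction counts. This is not the authors’ statistical model, and the data below are synthetic, generated only to make the sketch runnable.

```python
# Synthetic illustration only: simulated data and a simple logistic
# regression, not the authors' analysis, relating anger reactions to
# the chance of sharing a post without reading it.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(seed=0)
anger = rng.poisson(lam=20, size=500)              # anger reactions per post
# Simulate higher odds of blind sharing as anger reactions increase.
p_share = 1 / (1 + np.exp(-(0.1 * anger - 2.5)))
shared_blind = rng.binomial(1, p_share)            # 1 = shared without reading

model = LogisticRegression().fit(anger.reshape(-1, 1), shared_blind)
print(model.coef_[0, 0])  # positive slope: more anger, more blind sharing
```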
Of note, one of the behavioral studies confirmed that people were able to correctly discern between trustworthy news and misinformation regardless of how much outrage it evoked. In other words, outrage does not necessarily make people less capable of detecting misinformation. Rather, people seem willing to overlook the accuracy of content on social media when they feel outraged, in part because the feeling puts them in a “group identification mindset.”
“When we’re in a context where our political-group identities become salient, it starts to make us think about ourselves more in terms of the group’s outrage rather than in terms of the self,” Brady says. “That is why people are thinking less about accuracy and are more likely to express outrage on behalf of the group.”
Countermeasures
One of the implications of these findings is that many of the existing measures to reduce or counter misinformation on social media, like accuracy nudges, may be less effective when it comes to outrage-evoking content.
The results suggest that people are often willing to share misinformation, especially when it evokes outrage, as a way of signaling their political affiliation or moral stance. And they can always defend themselves by claiming they only meant to flag the content as “outrageous if true,” the researchers note. In this way, people can exploit outrage-inducing misinformation to gain engagement or visibility on social media.
Even people who disagree with the misinformation they see on social media might unintentionally be helping to promote the content simply by interacting with it, Brady notes. “The misinformation ecosystem is not just driven by user behavior, it’s also driven by algorithms,” he says. “When you engage with misinformation—even in disagreement—you’re actually contributing to the increase of misinformation in the ecosystem because you’re letting the algorithm know that it’s drawing engagement.”
For policymakers, the research offers concrete evidence to consider when designing responses to misinformation, an effort that becomes especially important when politics takes center stage.
“Content moderation always becomes a big deal during political seasons,” Brady says. “If you want to predict the misinformation that is most likely to spread through a network, then you need to be measuring its potential to elicit outrage among political groups … because that’s the misinformation that spreads the most and that people are not reflecting on very much.”
Abraham Kim is the senior research editor at Kellogg Insight.
McLoughlin, Killian L., William J. Brady, Aden Goolsbee, Ben Kaiser, Kate Klonick, and M. J. Crockett. 2024. “Misinformation Exploits Outrage to Spread Online.” Science.