Politics & Elections | Social Impact | Oct 1, 2022

How We Justify Our Unpopular Opinions

The tactic makes controversial views more palatable to others—and has implications for the rampant spread of fake news.

Illustration by Lisa Röper

Based on the research of

Leonardo Bursztyn

Georgy Egorov

Ingar K. Haaland

Aakaash Rao

Christopher Roth

In our polarized world, taking a public stance on a political issue can be nerve-wracking, especially if you think your view might be unpopular with many in your social circles.

So, what might induce someone to share a dissenting opinion? In a new paper, Georgy Egorov, a professor of managerial economics and decision sciences at the Kellogg School, and colleagues looked at how so-called rationales can tip the scale.

Rationales are narratives supporting a particular viewpoint that may emerge organically or be pushed by political actors, social movements, and media outlets. Sometimes they are an attempt to persuade people of a position on its merits. More often, though, they function as social cover, a means of making the unpalatable appear more acceptable: less a real argument than a justification for a preexisting preference, one that has the benefit of appearing reasonable to others.

Imagine someone who loves pineapple on pizza, and who, in announcing this wildly controversial opinion, posts an article about the many health benefits of pineapple. This health rationale has nothing to do with the person’s preference, but it might make them feel less sheepish to admit it—and encourage other pineapple-pizza fans to come forward as well.

In the paper, Egorov and his colleagues find that rationales make people more likely to share opinions they would otherwise keep private. What’s more, rationales succeed in changing how audiences interpret those dissenting opinions. In particular, when people use rationales to share positions that could be attributable to prejudice, audiences are less likely to make that unfavorable connection.

“People speak openly and express their preferences depending on what they think about their audience and who they think their audience will be,” Egorov says. “What our paper shows is that rationales, or social covers, play a major role in enabling people to share views that are not concordant with their audience.”

Rationales Make It Easier to Express an Unpopular Opinion

Egorov conducted the research with Leonardo Bursztyn of the University of Chicago, Ingar Haaland of the University of Bergen, Aakaash Rao of Harvard University, and Christopher Roth of the University of Oxford. They began by looking at efforts to defund the police in the United States. They picked this particular topic because, despite the popularity of “defund the police” as a slogan, public opinion polls suggest that only 25 percent of Democrats support cutting police budgets in their areas.

However, the researchers write, because the defunding movement is aligned with concerns about racial injustice, “it seems … plausible that many liberals would feel uncomfortable publicly voicing opposition to defunding.” Would a rationale make it less uncomfortable?

To find out, the researchers recruited a group of 1,122 Democrats and Democrat-leaning Independents, all of whom had active Twitter accounts and agreed to install an application with permission to send tweets from their accounts.

As part of the experiment, participants were shown an op-ed from The Washington Post in which the author, a Princeton University criminologist, argued against defunding the police, citing a significant body of evidence showing that increased policing decreases violent crime.


After reading the op-ed, participants were asked whether they would privately join a campaign to oppose defunding the police. About half said no. The 529 participants who agreed to join the campaign continued on in the study and, importantly, were shown the same article again.

These remaining participants were divided into two groups and presented with one of two tweets. For some participants, in what the researchers call the no-cover group, the tweet read, “I have joined a campaign to oppose defunding the police. After I joined the campaign, I was shown this article written by a Princeton professor on the strong scientific evidence that defunding the police would increase violent crime,” followed by a link to the op-ed. In the cover group, the tweet instead indicated the participant had seen the article before joining the campaign. Participants were then asked whether they wanted to post the tweet.

Both versions of the tweet were accurate—participants did see the op-ed both before and after agreeing to join the campaign—but they conveyed subtly different messages. The tweet in the cover group implied participants might have been persuaded to join the campaign by the compelling scientific evidence they’d seen in the article, whereas the tweet for the no-cover group suggested they were already opposed to defunding the police before they saw the article.

This one-word difference had a significant influence on participants’ willingness to voice opposition to defunding the police, the researchers found. In the cover group, 70 percent of participants authorized the tweet, as compared with 57 percent in the no-cover group.

Interpreting Intent

In a second experiment, the researchers looked at whether rationales such as the argument in The Washington Post would change how the tweets were received. In particular, the researchers suspected that hesitation to publicly oppose defunding the police might stem from concerns about appearing racially prejudiced, and they wanted to understand whether rationales offered any protection against that perception.

A new group of 1,040 Democrats and liberal Independents was recruited, divided in half, and told they’d been matched with another participant from an earlier study. Then, participants saw a tweet ostensibly from their matched partner—either the cover or no-cover tweet from the previous experiment.

The researchers told participants their partner had been given an opportunity to authorize a $5 donation to the NAACP and then asked them to guess whether their partner had authorized the donation or not. If participants believed their partner donated to the NAACP, an organization that fights for racial equity, it suggested they believed their partner did not harbor prejudice, despite their opposition to defunding the police. Participants were also asked whether to authorize a $1 bonus to their partner—a measure of whether they would socially sanction the tweeters for holding a controversial view among Democrats.

The use of the article from The Washington Post as a cover-providing rationale changed how participants interpreted the tweeters’ motivations. Among participants who saw the no-cover tweet, just 27 percent guessed their partner had donated to the NAACP, as compared with 35 percent in the cover group. Accordingly, 47 percent of participants in the no-cover group denied their partner a $1 bonus, as compared with 40 percent of participants in the cover group.

“The pattern is clear,” Egorov says. “The participants in the first experiment were more hesitant to authorize a tweet without a social cover because they were afraid of social punishment, and they were correct. They probably thought that doing so would signal their racial prejudice, and they were correct as well. The behavior of the person posting the tweet and the audience’s reactions—this all squares up.”

How Rationales Shape Outgroup Perceptions

The first two experiments focused on ingroup perception: that is, how liberals perceive fellow liberals who express dissenting views. To broaden their findings, the researchers decided to study outgroup perceptions as well.

In their next two experiments, they focused on a conservative stance that is stigmatized by liberals and even some Republicans: the belief that all Mexican immigrants living in the country illegally should be deported immediately.

The researchers used a setup identical to that of the first experiment, recruiting a group of 1,130 Republicans and right-leaning Independents and showing them a clip from Tucker Carlson Tonight in which Carlson cites statistics from the U.S. Sentencing Commission to argue that illegal immigration is linked to violent crime. The 517 participants who agreed to join a movement supporting immediate deportation were then presented with a tweet indicating they’d seen the Tucker Carlson clip either before or after agreeing to join the movement.

As before, significantly more participants in the cover group—65 percent—authorized sending the tweet, as compared with 48 percent in the no-cover group, suggesting participants believed this tweet would be more palatable to their audience.

To see how these tweets would be interpreted by an outgroup audience, the researchers recruited 1,082 Democrats and left-leaning Independents—the people most likely to strongly oppose the message—and used the same experimental setup as before.

Because the researchers believed reluctance to openly support deportation stemmed from concerns about appearing anti-immigrant, they tested whether rationales protected against that perception. As in the earlier study, participants guessed whether their partner had authorized a donation to the U.S. Border Crisis Children’s Relief Fund—an organization that supports migrant children—and they also chose whether to authorize or deny a $1 bonus to their partner.

The use of a rationale influenced both measures of audience perception, the researchers found. In the cover group, 13.4 percent of participants believed their partner had donated to the relief fund, as compared with 8.5 percent in the no-cover group. Seventy-four percent of cover-group participants denied their partner a bonus, as compared with 80 percent in the no-cover group.

Rationales, Fake News, and Misinformation

It may seem surprising that liberals would give any credence to a Tucker Carlson clip as a rationale. But it’s important to remember that “rationales don’t have to be persuasive,” Egorov says—they just have to convince the audience that someone else could have been convinced. Carlson’s show is popular and easy to find, and it’s possible to imagine someone else finding and being swayed by it, even if you aren’t. That, in turn, makes it easier to imagine the person isn’t necessarily anti-immigrant. In other words, Egorov explains, rationales “make it more difficult to infer the true reasons for holding a certain opinion. They introduce noise.”

Egorov believes that the water-muddying quality of rationales helps, in part, to explain the power of misinformation online.

“If you think of the role of fake news as being not so much persuasive but as being a social cover, it explains why misinformation can have power to influence people without persuading them,” he says. There’s a kind of snowball effect: posting misinformation as a rationale makes it more acceptable to express a stigmatized view and encourages other people to express it as well. Over time, a view that once seemed fringe can move toward the mainstream.

The research also suggests a possible remedy. Several social-media sites have experimented with labeling misinformation as such. Egorov and his colleagues suggest such labels would show audiences that the original poster knew the information was false and opted to share it anyway, possibly reducing the degree of social cover the post provides. And without that protection, fake news is no longer a good rationale—it’s just false.

Featured Faculty

Georgy Egorov
James Farley/Booz, Allen & Hamilton Research Professor; Professor of Managerial Economics & Decision Sciences

About the Writer

Susie Allen is a freelance writer in Chicago.

About the Research

Bursztyn, Leonardo, Georgy Egorov, Ingar K. Haaland, Aakaash Rao, and Christopher Roth. 2022. “Justifying Dissent.” Working paper.

