Is There a Bot Behind That Tweet?
Jun 1, 2023


When we see messages that contradict our political ideology, we are more inclined to attribute them to bots. It’s making society even more polarized.

Illustration by Yevgenia Nayberg: a donkey and an elephant tweeting at each other.

Based on the research of Shane Schweitzer, Kyle Dobson, and Adam Waytz

When Elon Musk bought Twitter in 2022, he cited as one of his motivations the desire to rid the platform of bot accounts—fake profiles set up to post everything from relatively innocuous spam to intentionally divisive political content.

While the presence of bots on Twitter is undeniable, the scale of the problem is hard to pin down. Figuring out whether something was posted by a bot account or a real user can be surprisingly complex.

But the presence of bots, regardless of exactly how many there are, has important ramifications for how all content on social media is received, according to a new study by Adam Waytz, a professor of management and organizations at the Kellogg School. The research, led by Shane Schweitzer of Northeastern University and coauthored by Kyle Dobson of the University of Virginia (both formerly PhD students at Kellogg), finds that partisanship influences whether posts are perceived to be generated by humans or bots.

“When you see something that disagrees with you, you’re more likely to attribute it to a bot than people who agree with it,” Waytz explains. And believing content was written by a bot, the study shows, “makes you discount the information”—a cycle that may contribute to further political conflict.

Understanding the political bot bias

The researchers conducted several experiments to understand how partisanship interacts with perceptions of tweet authorship.

In the first, they recruited 491 Americans for an online study. Participants first read about the existence of Twitter bots, then indicated whether they personally identified more as Democrats or Republicans.

Next, participants saw four real tweets from media organizations, each of which was followed by two replies—one stereotypically liberal and the other stereotypically conservative. Then, participants rated from one to seven the extent to which they thought each response came from a real human or a bot.

Both Republicans and Democrats saw tweets from across the aisle as more bot-like than members of the other party did. On the one-to-seven “human to bot” scale, Republicans rated conservative tweets as less bot-like (3.82) than Democrats did (4.69), while Democrats rated liberal tweets as less bot-like (3.17) than Republicans did (3.69). The researchers dubbed this pattern “political bot bias.”

The researchers noticed another interesting trend: conservative tweets were overall more likely to be attributed to bots than liberal ones. The perception is partly accurate, explains Schweitzer. “There is some evidence that bot-produced content tends to be conservative-leaning,” he says. Even so, “Democrats viewed conservative tweets as more bot-like than Republicans,” suggesting that partisan bias has a role to play.

Comparing bots and humans

While the first experiment revealed the existence of the political bot bias, it did not directly test how people perceived human-generated versus bot-generated tweets. Would the bias still hold, the researchers wondered, with real human- and bot-generated tweets? To answer that question, they gathered a selection of real tweets, some authored by humans and some by bots, posted in the lead-up to the 2016 U.S. presidential election.

They presented a new group of 498 participants with tweets on each of four topics that were likely to elicit partisan responses: Trump, Clinton, Black Lives Matter, and Make America Great Again. Specifically, for each topic, participants were shown a conservative tweet from a human, a conservative tweet generated by a bot, a liberal tweet from a human, and a liberal tweet generated by a bot. Once again, participants were asked to rate from one to seven how human or bot-like they perceived the tweets to be.


On average, participants accurately identified bot-authored tweets as more bot-like than human-authored tweets: bot tweets got an average rating of 4.26, as compared with 3.29 for human tweets. However, partisanship still had a significant effect, with Republicans seeing liberal tweets as more bot-like than did Democrats and Democrats seeing conservative tweets as more bot-like than did Republicans.

This offered important confirmation of what the researchers saw in the first experiment, Schweitzer says. “The political bot bias emerged for both bot and human tweets, meaning that people weren’t simply recognizing actual bots but were also more likely to believe verified humans were bots.”

Discounting bot content

The first two experiments showed that partisan bias influences which social-media posts seem bot-like. In the last experiment, the researchers wanted to understand the downstream effects of that bias—in other words, what are the consequences of believing a post comes from a human as compared with a bot?

A new group of 500 participants first answered a variety of questions designed to gauge their partisan affiliation. Democrats were then presented with a tweet praising Republican Senator Ted Cruz, while Republicans saw a tweet praising Democratic President Joe Biden. Critically, some participants were told that the tweet was authored by a real person, while others were told that the very same tweet had been authored by a bot.

Then, participants answered a series of questions about the tweet they saw, rating from one to seven the extent to which they thought the tweet’s author was capable of complex thought, how seriously they thought the tweet should be taken, and how much they thought it should be trusted. Not surprisingly, tweets attributed to bots were rated much lower on all three scales than those attributed to humans.

“When people believe they are interacting with a bot, they trust online discourse less and show less willingness to engage with it,” Schweitzer says. Viewed alongside the earlier studies, which show that people perceive opposing views as more bot-like, the results of this experiment “suggest that the political bot bias may contribute to markers of political polarization.”

The dangers of bots (and bot hype)

To Schweitzer, the research suggests how deep the American partisan divide goes. “I was surprised by how consistently this bias emerged—people are very quick to dismiss other viewpoints as not only incorrect but also as not even human,” he says.

The current conversation around bots may not be helping any, Waytz points out. The belief that bot accounts are widespread may have the effect of poisoning the well and making all online content seem suspect. The research, he says, “does make me wonder whether all the hype around bots might actually have more damaging effects, or more biasing effects, than the bots themselves.”

But understanding the problem is critical. After all, “Twitter isn’t the only place you can come across technology posing as human. ChatGPT, deepfakes, and robocalls are just a few examples of how humanlike technology is reaching other parts of society,” Schweitzer says. “More research is necessary to understand the human psychology of this increasingly technological world.”

Featured Faculty

Adam Waytz, Morris and Alice Kaplan Professor of Ethics & Decision in Management; Professor of Management and Organizations; Professor of Psychology, Weinberg College of Arts & Sciences (Courtesy)

About the Writer

Susie Allen is a freelance writer in Chicago.

About the Research

Schweitzer, Shane, Kyle S. H. Dobson, and Adam Waytz. 2023. “Political Bot Bias in the Perception of Online Discourse.” Social Psychological and Personality Science.

