In middle school, how did you learn what fashions and bands were considered cool? It’s possible someone told you, but there’s a good chance you picked it up simply by watching what your classmates were listening to and wearing. (Certainly, that’s how yours truly ended up with butterfly clips and several Backstreet Boys CDs.)
This process of observing and mimicking is called social learning, and it’s one of the most important ways we make sense of the world around us. But according to William Brady, an assistant professor of management and organizations at Kellogg, social media has thrown social learning into turmoil, and the combination of the two is driving misperceptions about the world and the spread of extreme views.
Insight recently wrote about new research from Brady that explains how social media has disrupted our social-learning processes, why it matters, and what we can do about it.
Strategies for social learning
When we’re in the thick of social learning, we use shortcuts to determine which information is most important. After all, we can’t pay attention to everything.
So instead, we direct our attention toward information that Brady dubs “PRIME”: prestigious, in-group, moral, or emotional. Prestigious information comes from someone who is considered successful, in-group information comes from a peer, moral information gets at whether people are behaving in line with shared ethical norms, and emotional information is emotionally charged (and often negative).
There are good reasons to pay attention to PRIME information. For instance, “it can be useful to have a bias toward information learned from successful individuals. They have some kind of knowledge that has helped them succeed in their environment,” explains Brady.
Critically, PRIME information is most useful when it is both rare and diagnostic: rare in the sense that it stands out from the everyday flow of information, and diagnostic in the sense that the person who seems successful actually is successful, or the behavior characterized as unethical actually is unethical.
When PRIME information breaks down
Social-media algorithms are designed to maximize engagement: clicks, likes, time spent on the platform, and so on. And because our brains are biased to see PRIME information as important—and therefore engaging—algorithms have learned over time to serve us a whole lot of it. As a result, there’s an incentive for users to post in ways that appeal to our taste for prestigious, in-group, moral, and emotional content.
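To make that dynamic concrete: the researchers don’t publish any platform’s ranking code, but a bare-bones, engagement-first ranker might look something like this sketch (in Python, with invented field names and scores):

```python
from dataclasses import dataclass

# A hypothetical, minimal sketch of engagement-based ranking -- not any
# platform's actual algorithm. Because PRIME posts reliably draw clicks
# and likes, a ranker that optimizes predicted engagement will tend to
# fill the feed with them.

@dataclass
class Post:
    text: str
    predicted_engagement: float  # e.g., the output of a click/like model
    is_prime: bool  # prestigious, in-group, moral, or emotional

def rank_feed(posts: list[Post], feed_size: int = 10) -> list[Post]:
    """Pure engagement ranking: whatever is predicted to engage wins."""
    return sorted(posts, key=lambda p: p.predicted_engagement,
                  reverse=True)[:feed_size]
```

Nothing in that loop mentions PRIME content, and that is the point: the bias creeps in through the engagement predictions, not through any explicit rule.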
Often, this results in outright fakery. Take a picture in what appears to be a private jet, and you’ll look like you’ve attained prestige and wealth (even if it’s just a $64-an-hour photo studio). And when algorithms amplify social-media influencers who are successful in appearance only, and other users take lessons from their words or actions, the bias toward prestigious sources of information stops working as intended.
This same breakdown can happen with other types of PRIME information, too. The social-learning bias toward in-group information has historically fostered cooperation and understanding within a community—to get along, it helps to have a shared set of norms.
But online, it can play a more divisive role. It’s easy for in-group information to foster groupthink and, eventually, extremism. And when social-media users see extreme views regularly and accompanied by lots of likes, they may begin to believe the viewpoint is more common than it is.
“Take the right context and the right people, add algorithmic amplification, and it can skew social learning in ways that make extreme views seem more legitimate and widely held,” Brady says.
Fixing the problem
So, engagement-based algorithms and PRIME information make for a conflict-laden, unhelpful combination. Can anything be done?
Brady and his coauthors propose two solutions. One is to increase the transparency of social-media algorithms. Simply telling users why they are seeing a given post—because a close friend shared it or because the platform felt it would be engaging, for example—would help users understand how the technology works and think more deeply about what they’re consuming online.
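The paper doesn’t prescribe an interface, but as a rough illustration, transparency could be as simple as attaching a plain-language reason to every post the algorithm surfaces. The categories and threshold below are invented for the sketch:

```python
# Illustrative only: pair each feed item with a plain-language
# explanation of why the ranker surfaced it. These reasons are
# assumptions for the sketch, not any real platform's taxonomy.
def explain_placement(shared_by_friend: bool,
                      predicted_engagement: float) -> str:
    if shared_by_friend:
        return "You're seeing this because a close friend shared it."
    if predicted_engagement > 0.8:
        return "You're seeing this because we predicted you'd engage with it."
    return "You're seeing this because it's recent activity from your network."
```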
The researchers call the other solution “bounded diversification.” This approach involves tweaking algorithms to limit the amount of PRIME information users see—while still serving lots of the funny memes, historical photography, and cute puppy videos they love.
Brady sees the change as one platforms could stomach. “We propose this approach because it would still give people content that they find interesting but wouldn’t be so dominated by PRIME information,” he says.
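The authors describe bounded diversification as a design principle rather than code, but a toy re-ranker in that spirit might cap the share of PRIME posts in each feed and let predicted engagement fill in the rest. The 30 percent cap here is an arbitrary number chosen purely for illustration:

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    predicted_engagement: float
    is_prime: bool  # prestigious, in-group, moral, or emotional

def bounded_feed(posts: list[Post], feed_size: int = 10,
                 max_prime_share: float = 0.3) -> list[Post]:
    """Toy 'bounded diversification': rank by engagement, but stop
    admitting PRIME posts once they hit a fixed share of the feed."""
    prime_cap = int(feed_size * max_prime_share)
    feed, prime_count = [], 0
    for post in sorted(posts, key=lambda p: p.predicted_engagement,
                       reverse=True):
        if post.is_prime:
            if prime_count >= prime_cap:
                continue  # quota reached; memes and puppies fill the rest
            prime_count += 1
        feed.append(post)
        if len(feed) == feed_size:
            break
    return feed
```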
You can read the whole article here.
“Investors care about boycotts, not only because they threaten their reputation, but also sometimes because they just reveal new information about the leadership of a company.”
— Brayden King, in BBC News, on the impact of consumer boycotts.