Social Impact Jun 1, 2024

The Stereotypes Lurking in Our Language

A new tool can shed light on intersectional biases—and how they may change over time.

Based on the research of

Tessa Charlesworth

Mahzarin R. Banaji

Kshitish Ghate

Summary: A new tool allows scholars to look at the language used to describe intersecting sociodemographic groups (e.g., defined by gender and race simultaneously). An initial analysis using the tool found that historically powerful groups—e.g., rich and white—dominate in terms of how frequently they are discussed. Words used to describe these groups are overwhelmingly positive, too, while words used to describe historically overlooked groups tend to be negative.

It’s a well-known riddle: A father and his son are involved in a car crash. They are rushed to the hospital and when the child is wheeled into the operating theatre, the surgeon steps back and says, “I can’t operate on this boy. He’s my son.” How can this be?

The answer, of course, is that the surgeon is his mother.

The setup is used to reveal the hidden biases we share about different groups of people, in this case, women. More recently, however, scholarly investigations of bias have expanded to examine not just a single sociodemographic group on its own—defined by gender, say, or race—but the interaction of several, intersecting groups together (e.g., defined by gender and race simultaneously).

“White, rich women experience a different world than white, poor women. Similarly, Black women experience a different world than Black men, and so on,” says Tessa Charlesworth, an assistant professor of management and organizations at Kellogg. “Because of this, it’s important to understand how to study this intersectionality at scale.”

A new tool developed by Charlesworth, Kshitish Ghate of Carnegie Mellon, Aylin Caliskan of the University of Washington, and Mahzarin Banaji of Harvard provides a way to do precisely this.

Using the novel Flexible Intersectional Stereotype Extraction procedure, or FISE, the researchers analyzed 840 billion words of text to unearth biases surrounding different intersectional groupings of people.

In addition to providing a proof of concept for their procedure, the researchers’ initial analysis found that historically powerful groups—rich and white—dominate in terms of how frequently they are discussed. The words used to describe these groups are overwhelmingly positive, too, while words used to describe groups that are historically overlooked tend to be negative.

The best of both worlds

For decades, scholars have looked to language as a reflection of the biases we hold, the idea being that how we talk about something opens a window into how we think about that thing.

With this logic in mind, social scientists have had success mapping a single trait, like gender, against associated stereotypes. However, mapping the intersection of several traits has proven trickier. Computer scientists, meanwhile, have had more success mapping stereotypes onto intersectional traits, but their methods are technically complex and computationally intensive. They have also typically relied on group information encoded in names (e.g., Jamal versus John), leaving unexplored more concealable groups (those not easily decoded from names), such as sexual orientation, religion, or age.

FISE addresses these shortcomings. “It’s a much lighter model than those typically used to look at intersectionality,” Charlesworth says. “That means it doesn’t require heavy computing resources, and it is more flexible for use in different language contexts. For example, with this tool, we’ll be able to flexibly look at intersectionality across non-English languages or even in historical texts from 200 years ago.”

The tool works as follows. First, researchers identify the terms they want to study: descriptive qualities such as warm, cold, enthusiastic, and friendly. Second, by scanning huge archives of Internet text from Wikipedia and Common Crawl, the model calculates how closely these words are associated with terms along a first group dimension, like social class (with rich and affluent signaling wealth, and poor or needy signaling poverty). Third, the model calculates the difference between how closely a descriptive quality like warm is associated with wealth versus poverty. Finally, the same process is carried out along a second group dimension, like race. These differences in how warmth is associated across both race and class can then be mapped into four quadrants (rich–Black vs. rich–white vs. poor–Black vs. poor–white), revealing intersectional stereotypes about people.
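
For readers who want to see the mechanics, here is a minimal sketch of the quadrant logic described above. It is not the authors’ released code: it assumes pretrained GloVe vectors fetched through gensim’s downloader, and the anchor and trait word lists are illustrative stand-ins for the much larger lexicons used in the paper.

```python
# Minimal sketch of the FISE quadrant logic (not the authors' released code).
# Assumes pretrained static word embeddings via gensim's downloader; the
# anchor and trait words below are illustrative placeholders.
import numpy as np
import gensim.downloader as api

emb = api.load("glove-wiki-gigaword-300")  # pretrained GloVe vectors

def centroid(words):
    """Average vector of the anchor words found in the vocabulary."""
    return np.mean([emb[w] for w in words if w in emb], axis=0)

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Dimension 1: social class; Dimension 2: race (anchor words are illustrative).
rich = centroid(["rich", "affluent", "wealthy"])
poor = centroid(["poor", "needy", "impoverished"])
white = centroid(["white", "caucasian"])
black = centroid(["black", "african"])

def quadrant(trait):
    v = emb[trait]
    class_diff = cosine(v, rich) - cosine(v, poor)    # > 0 leans toward "rich"
    race_diff = cosine(v, white) - cosine(v, black)   # > 0 leans toward "white"
    label = ("rich" if class_diff > 0 else "poor") + "–" + \
            ("white" if race_diff > 0 else "Black")
    return label, round(class_diff, 3), round(race_diff, 3)

for trait in ["warm", "greedy", "honest", "friendly"]:
    print(trait, quadrant(trait))
```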

Though most of the analyses Charlesworth and her colleagues ran used just two dimensions, “theoretically, you could expand this analysis to as many group dimensions as you want. We could simultaneously look at race, class, and gender, or even more dimensions,” she says. “The interpretation does become trickier, though.”
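
In code, that extension amounts to letting each trait fall into one of 2^k cells when k group dimensions intersect. The sketch below is an assumption about how such a generalization might look, not part of the published procedure.

```python
# Sketch of the multi-dimension extension (an assumption about how it might
# look in code, not the published procedure): with k group dimensions,
# every trait falls into one of 2**k cells.
from itertools import product

def cell(diffs, poles):
    """diffs: association differences along each dimension;
    poles: the (positive_label, negative_label) pair for each dimension."""
    return tuple(pos if d > 0 else neg for d, (pos, neg) in zip(diffs, poles))

poles = [("rich", "poor"), ("white", "Black"), ("male", "female")]
print(cell([0.12, -0.03, 0.05], poles))  # -> ('rich', 'Black', 'male')
print(len(list(product(*poles))))        # 8 cells when three dimensions intersect
```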

How biases get amplified

Now that the researchers had built their tool, they wanted to test it against a “ground-truth” by analyzing how accurately it associated gender and race with 143 different occupations.

To do this, they ran FISE, sorting each of these occupations into four quadrants: white–male, white–female, Black–male, and Black–female.

They then cross-checked the results of FISE against a 2022 Bureau of Labor Statistics (BLS) report. If the model worked as intended, then occupations associated with, say, Black men in the real-world labor data should show the same association in FISE’s language-based analysis.

The tool worked. FISE associated, for instance, 59 percent of occupations with white men, compared with 48 percent in the BLS data, and 9 percent with Black women, compared with 5 percent in the BLS data. The authors note that though these figures aren’t identical, they are not statistically different. Similar levels of accuracy were found for the class-by-gender and race-by-class comparisons.
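
As a rough illustration of this kind of congruence check (not the authors’ exact statistics), one can compare the two figures as shares of the same 143 occupations; the rounding and the choice of a two-proportion test here are assumptions.

```python
# Rough illustration of the congruence check described above (not the
# authors' exact statistics). Assumes both percentages are shares of the
# same 143 occupations, so they can be compared as two proportions.
from statsmodels.stats.proportion import proportions_ztest

N = 143  # occupations analyzed

def compare(label, fise_share, bls_share):
    counts = [round(fise_share * N), round(bls_share * N)]
    z, p = proportions_ztest(counts, [N, N])
    print(f"{label}: FISE {fise_share:.0%} vs. BLS {bls_share:.0%} -> z = {z:.2f}, p = {p:.3f}")

compare("white-male occupations", 0.59, 0.48)
compare("Black-female occupations", 0.09, 0.05)
```

Under these illustrative assumptions, neither gap reaches conventional significance, in line with the authors’ observation that the figures are not statistically different.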

Having demonstrated congruity between FISE and the occupational data, the researchers next used the model to analyze the association between qualities like honest, courageous, greedy, and so on with race, gender, and class. The model revealed two key insights.

“The first is what we call the dominance of powerful groups,” says Charlesworth. “When you look at intersectional groups like white men versus Black women, you see really clear patterns of white men dominating the language space, purely in terms of the percentage of traits associated with that group.” Fifty-nine percent of all traits analyzed refer to white men, while five percent refer to Black women. Thirty percent relate to white women and six percent to Black men.

The second insight was that the traits associated with white men and women are largely positive, while those associated with Black men and women tend to be negative. (To determine this despite the lopsidedness of white men’s dominance in the language space, the researchers forcibly assigned an equal number of descriptive traits to all four groups.)
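
A sketch of that equal-count comparison might look like the following. The association scores and the valence lexicon are tiny placeholders for the full trait lists and valence norms used in the paper; the point is the bookkeeping, not the numbers: rank traits within each quadrant, keep the same number from each, then average their valence.

```python
# Sketch of the equal-count valence comparison (not the authors' code).
# Association scores and valence ratings below are illustrative placeholders.
from collections import defaultdict

# (trait, class_diff, race_diff): association differences from the FISE step.
traits = [
    ("honest", 0.12, 0.08), ("courageous", 0.09, 0.04),
    ("ambitious", 0.07, -0.03), ("greedy", 0.15, -0.05),
    ("stubborn", -0.06, 0.02), ("friendly", -0.03, 0.05),
    ("warm", -0.04, -0.02), ("lazy", -0.10, -0.06),
]
valence = {"honest": 0.8, "courageous": 0.7, "ambitious": 0.5, "greedy": -0.7,
           "stubborn": -0.4, "friendly": 0.6, "warm": 0.6, "lazy": -0.5}

K = 2  # force the same number of traits into every quadrant
buckets = defaultdict(list)
for word, class_diff, race_diff in traits:
    quad = (("rich" if class_diff > 0 else "poor"),
            ("white" if race_diff > 0 else "Black"))
    strength = abs(class_diff) + abs(race_diff)  # rank traits within a quadrant
    buckets[quad].append((strength, word))

for quad, items in sorted(buckets.items()):
    top = [w for _, w in sorted(items, reverse=True)[:K]]  # top-K traits only
    mean_valence = sum(valence[w] for w in top) / len(top)
    print(quad, top, f"mean valence {mean_valence:+.2f}")
```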

That said, when the researchers introduced the dimension of class into their analysis, they found that intersectional biases involving social class had the strongest positive and negative associations of all. The white–rich and male–rich combinations were overwhelmingly positive, while the white–poor and male–poor combinations were overwhelmingly negative.

“Though I tend to focus on race and gender in my research, social class kept popping out as the most important dimension that defines which intersectional groups are seen as positive,” Charlesworth says. “Interestingly, although class was definitive when thinking about the quality (positivity/negativity) of traits, it was relatively less important when thinking about the frequency of traits. When it came to frequency, race and gender were the most important dimensions to explain which groups dominated the language space.”

Shining a light through language

Charlesworth’s tool will allow scholars to track how biases, including intersectional ones, change over time.

Using FISE, researchers can now see when and how ideas about, say, Black women have been shifting. Or they could use FISE to determine how long it takes for demographic shifts in occupational statistics (i.e., changes in the percentage of managers who are Black women) to trickle into the stereotypes embedded in language.

Or, as Charlesworth wonders, perhaps the opposite is true; perhaps changes to language act as prophecy. “It could be that we first see the possibility of women doctors as ‘a thing’ showing up in language. Maybe it’s just a possibility raised about white women at first, or wealthy women. But we can look into how language is a harbinger of change in the world,” she says. “One of the great things about FISE is that it allows us to look into the directionality of this effect and, for the first time, study the evolution of intersectional stereotypes over history.”

But as her initial analysis suggests, a tool like FISE can also shed light on longstanding concerns about algorithmic bias. As Charlesworth notes, we have a problem if white men dominate the training data and greater wealth is synonymous, in our language, with greater goodness. Whatever output AI produces is going to be much more problematic when it comes to poor or underrepresented groups.

We see disparities emerging already: AI-generated faces are judged as more realistic than natural human faces, but only for white faces due to their dominance in training data. And the frequency of women or men in Google search outputs, itself a product of societal bias, becomes self-reinforcing by shaping beliefs about the default “person.”

Charlesworth summarizes: “With AI and large language models (LLMs) becoming part of the bread and butter of our everyday lives, we, as consumers, researchers, or leaders, need to understand how bias is baked into these tools. In particular, understanding the unique patterns of intersectional biases in AI and LLMs will be necessary to make more fair technologies for the future.”

About the Writer

Dylan Walsh is a freelance writer based in Chicago.

About the Research

Charlesworth, Tessa, Kshitish Ghate, Aylin Caliskan, and Mahzarin R. Banaji. 2024. “Extracting Intersectional Stereotypes from Embeddings: Developing and Validating the FISE Procedure.” Proceedings of the National Academy of Sciences.

