AI and the Social Sciences Used to Talk More. Now They’ve Drifted Apart.
Innovation · Social Impact · Jul 1, 2019


Research shows that the gap between these disciplines is growing, which could make it harder to address social and ethical problems.

AI researchers and scholars of the humanities and social sciences sit at different tables, reflecting the growing gap between their disciplines. (Illustration: Michael Meier)

Based on the research of

Morgan R. Frank

Dashun Wang

Manuel Cebrian

Iyad Rahwan

Artificial intelligence researchers are employing machine learning algorithms to aid tasks as diverse as driving cars, diagnosing medical conditions, and screening job candidates. These applications raise a number of complex new social and ethical issues.

So, in light of these developments, how should social scientists think differently about people, the economy, and society? And how should the engineers who write these algorithms handle the social and ethical dilemmas their creations pose?

“These are the kinds of questions you can’t answer with just the technical solutions,” says Dashun Wang, an associate professor of management and organizations at Kellogg. “These are fundamentally interdisciplinary issues.”

Indeed, economists seeking to predict how automation will impact the labor market need to understand which skills machines are best suited to perform. At the same time, the engineers writing software to diagnose tumors may want to know what philosophers have to say about the moral conundrums their technology poses. And coders and psychologists will need to work together to ensure that algorithms in recruiting software do not amplify human biases.

Some researchers have managed to cross departmental barriers. For example, a groundbreaking study last year explored how millions of people across the globe would make the difficult decisions that autonomous vehicles face (e.g., given a choice between killing a pedestrian or a passenger, whose life would they favor?). The researchers aim to use this work to ensure that new technologies reflect universal values.

Yet a new paper by Wang and collaborators finds that the link between AI and the social sciences (and other fields) has weakened over time.

The researchers analyzed several decades of papers published in the field of AI, as well as those in the social sciences, humanities, natural sciences, engineering, and medicine. They found that, more and more, computer scientists are facing social questions on their own, without relying deeply on insights from scholars who study them. At the same time, scholars of the social sciences, physical sciences, and humanities seem to be losing touch with rapid advances in AI as well.

Taken together, the results speak to a renewed need for researchers to collaborate across disciplines, Wang says.

“Just when AI is becoming more and more relevant to any corner of society, it’s becoming more and more isolated,” Wang says. “We really need to close that gap.”

Artificial Intelligence in Society Raises New Issues

People have been grappling with the social and philosophical consequences of technology for centuries, Wang points out. Take, for example, the 1818 novel Frankenstein. “AI was born from this kind of fascination,” he says. “It had very deep roots in social sciences.”

More recently, AI researchers have begun to face the real-life quandaries that the technology introduces.

Consider, for instance, Amazon’s attempt to develop machine-learning tools to score job candidates. The software used data on past applicants to predict which candidates were best suited for the company, and a glaring problem emerged: because most previous applicants were men, the program penalized resumes that contained the word “women’s” or listed certain all-women’s colleges.

Wherever bias already exists, AI “is just going to magnify that bias,” Wang says. (The Amazon program has since been discontinued.)
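As a toy illustration of this magnification effect (hypothetical data, and not Amazon’s actual system), a model that simply learns per-token hire rates from skewed historical outcomes will carry forward any penalty attached to tokens that co-occurred with past rejections:

```python
def token_hire_rate(training, token):
    """Learned hire rate for resumes containing a given token.

    training: list of (set_of_resume_tokens, hired: bool) pairs,
    standing in for historical hiring decisions.
    Returns None if the token never appeared in the training data.
    """
    outcomes = [hired for tokens, hired in training if token in tokens]
    if not outcomes:
        return None  # token unseen; no learned score
    # Fraction of past resumes with this token that were hired --
    # a biased history yields a biased score for the token.
    return sum(outcomes) / len(outcomes)
```

If resumes containing “women’s” were hired less often in the historical data, the learned rate for that token is lower, and any ranking built on such scores penalizes future resumes containing it, regardless of the candidate’s qualifications.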

“Just when AI is becoming more and more relevant to any corner of society, it’s becoming more and more isolated.”

Wang wanted to know how often AI researchers were engaging with disciplines such as psychology, philosophy, economics, and political science, which could help them address these inevitable ethical and social issues. One measure of this engagement is whether AI researchers are citing other disciplines in their academic papers. To investigate, Wang collaborated with Morgan Frank, Manuel Cebrian, and Iyad Rahwan of the Massachusetts Institute of Technology.

The team took advantage of a newly available dataset from Microsoft Academic Graph (MAG), which indexes scholarly papers. The data included traditional journal citations as well as conference proceedings, a major venue for AI findings. And it captured citation relationships between papers—that is, whenever one study referenced another.

The AI Clique

Wang and his collaborators examined MAG data from 1950 to 2018. They found that the number of publications in AI and related subfields (such as computer vision and natural language processing) rose exponentially during that time, from hundreds to tens of thousands of papers per year. These fields now dominate computer science research.

To quantify the interactions between AI and other disciplines, the team developed a measure that captured how frequently papers in one field cited another field, controlling for the total number of papers published in the second field.
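A minimal sketch of such a normalized measure (an illustration of the idea, not the authors’ actual code): take the fraction of a field’s outgoing citations that go to a target field, and divide it by the target field’s share of all published papers, i.e., the fraction expected under random citing.

```python
def citation_propensity(citations, papers_per_field, source, target):
    """Ratio of observed to expected cross-field citation frequency.

    citations: list of (citing_field, cited_field) pairs
    papers_per_field: dict mapping field name -> number of papers published
    Returns >1 if `source` cites `target` more than chance, <1 if less.
    """
    outgoing = [cited for citing, cited in citations if citing == source]
    if not outgoing:
        return 0.0
    observed = outgoing.count(target) / len(outgoing)  # actual citation share
    expected = papers_per_field[target] / sum(papers_per_field.values())
    return observed / expected
```

On this scale, a value of 5.0 corresponds to the 1960s finding described below: AI papers cited psychology five times as often as random citing would predict.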

First, the team looked at how AI papers cited other academic fields. They found that in the 1960s, AI researchers cited psychology papers more than five times as frequently as would be expected if they had instead chosen papers to cite randomly out of a hat. Today, however, they cite psychology papers less than half that often.

Similarly, dramatic drops occurred in citations of philosophy, economics, and art. Not surprisingly, today’s AI papers cite computer science and math the most heavily.

Next, the researchers considered the reverse problem: How often did other disciplines cite AI papers, controlling for the growing number of AI publications each year? Here, they found that fields such as psychology, philosophy, business, political science, sociology, and economics have all become less inclined to draw on AI research. For example, psychologists in the 1960s cited AI papers about four times as much as would be expected by chance. Today, however, they cite AI less frequently than if they were selecting papers to cite entirely at random.

The overall conclusion: “AI has become more and more cliquish,” Wang says.

One possible explanation is that it has simply become harder for social scientists to keep up with rapid advances in increasingly complex AI research.

Furthermore, the swell of interest in AI could, paradoxically, help explain its isolation. Some AI conferences are so in demand that social scientists may have trouble getting in, according to Wang’s coauthor Morgan Frank. In a blog post, Frank noted that one popular meeting “sold out of registration spots in under 15 minutes, thus making attendance difficult for active AI researchers—let alone interested scientists from other fields.”

Who’s Doing AI Research?

Another factor is the shift in who dominates AI research today.

The researchers examined which institutions were publishing the most “central” AI papers—those papers that were most frequently cited by other highly cited papers.
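One standard way to operationalize this recursive notion of centrality (the study may use a different metric; this is an illustrative sketch) is a PageRank-style score over the citation graph, in which a paper’s score grows when it is cited by papers that themselves score highly:

```python
from collections import Counter

def citation_pagerank(edges, damping=0.85, iterations=50):
    """PageRank-style centrality over a citation graph.

    edges: list of (citing_paper, cited_paper) pairs
    Returns a dict mapping each paper to its centrality score.
    """
    nodes = {paper for edge in edges for paper in edge}
    out_degree = Counter(citing for citing, _ in edges)
    score = {n: 1.0 / len(nodes) for n in nodes}
    for _ in range(iterations):
        # Each paper keeps a small base score, plus credit passed
        # along from the papers that cite it.
        new = {n: (1 - damping) / len(nodes) for n in nodes}
        for citing, cited in edges:
            new[cited] += damping * score[citing] / out_degree[citing]
        score = new
    return score
```

Papers that cite nothing leak a bit of score mass in this simplified version; a full implementation would redistribute it, but the ranking intuition is the same.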

While schools such as MIT, Stanford, and Carnegie Mellon once served as powerhouses for the most central AI research, the researchers found that today these papers were increasingly likely to come out of private companies like Google and Microsoft.

That may be because these firms have the resources to acquire expensive infrastructure. “This is not a cheap sport,” Wang says. “You need to have a whole stack of graphics processing units and computational power and storage.”

“It goes both ways. AI has to pay more attention to social science. Social scientists have to pay more attention to AI.”

That might help explain the growing disconnect between AI and the social sciences. In their study, Wang and collaborators found that researchers in sociology, philosophy, political science, business, and economics are less likely to cite publications produced by companies than those from academia. As such, the concentration of AI research in private industry could be contributing to the weakened relationship with the social sciences.

And once they get a head start, big companies are more likely to continue producing a disproportionate share of the research—a “rich-get-richer phenomenon,” Wang says. Industry teams develop better systems, attract more users, and generate more data, which can then be used to train their systems to become even more accurate. “It’s a self-reinforcing mechanism.”

Bridging the AI Divide

Despite the rapid growth of AI, Wang fears that new technology will fall short of its full potential if it does not better incorporate insights from social science and other fields.

To bridge this gap, Wang recommends that universities encourage more collaborations between AI and other departments. For example, Northwestern University has started a program called CS+X, which connects computer scientists with researchers in fields such as medicine, journalism, law, and economics.

Some existing research hints at how AI developers can effectively integrate findings from other fields. For instance, the study exploring how self-driving cars can better reflect human morality (coauthored by Wang’s collaborator Iyad Rahwan, an AI scholar) drew upon research from psychology, moral philosophy, economics, and even science fiction.

However, the fact remains that such wide-ranging bibliographies are relatively rare.

And just as computer scientists need to consult experts outside of their discipline, Wang says, social scientists can no longer afford to ignore developments in AI. As machines reshape how we work, think, and make decisions, he argues, it is becoming more crucial than ever that economists, philosophers, and psychologists stay abreast of the latest developments in computer science, and vice versa.

“It goes both ways,” Wang says. “AI has to pay more attention to social science. Social scientists have to pay more attention to AI.”

Featured Faculty

Dashun Wang, Professor of Management & Organizations; Professor of Industrial Engineering & Management Sciences (Courtesy); Director, Center for Science of Science and Innovation (CSSI)

About the Writer
Roberta Kwok is a freelance science writer based near Seattle.
About the Research
Frank, Morgan R., Dashun Wang, Manuel Cebrian, and Iyad Rahwan. 2019. “The Evolution of Citation Graphs in Artificial Intelligence Research.” Nature Machine Intelligence 1: 79–85.