Featured Faculty
Dashun Wang
Professor of Management & Organizations; Professor of Industrial Engineering & Management Sciences (Courtesy); Director, Center for Science of Science and Innovation (CSSI); Co-Director of the Ryan Institute on Complexity
Artificial intelligence researchers are employing machine learning algorithms to aid tasks as diverse as driving cars, diagnosing medical conditions, and screening job candidates. These applications raise a number of complex new social and ethical issues.
So, in light of these developments, how should social scientists think differently about people, the economy, and society? And how should the engineers who write these algorithms handle the social and ethical dilemmas their creations pose?
“These are the kinds of questions you can’t answer with just the technical solutions,” says Dashun Wang, an associate professor of management and organizations at Kellogg. “These are fundamentally interdisciplinary issues.”
Indeed, economists seeking to predict how automation will impact the labor market need to understand which skills machines are best suited to perform. At the same time, the engineers writing software to diagnose tumors may want to know what philosophers have to say about the moral conundrums their technology poses. And coders and psychologists will need to work together to ensure that algorithms in recruiting software do not amplify human biases.
Some researchers have managed to cross departmental barriers. For example, a groundbreaking study last year explored how millions of people across the globe would make the difficult decisions that autonomous vehicles face (e.g., given a choice between killing a pedestrian or a passenger, whose life would they favor?). The researchers aim to use this work to ensure that new technologies reflect universal values.
Yet a new paper by Wang and collaborators finds that the link between AI and the social sciences (and other fields) has weakened over time.
The researchers analyzed several decades of papers published in the field of AI, as well as those in the social sciences, humanities, natural sciences, engineering, and medicine. They found that, more and more, computer scientists are facing social questions on their own, without relying deeply on insights from scholars who study them. At the same time, scholars of the social sciences, physical sciences, and humanities seem to be losing touch with rapid advances in AI as well.
Taken together, the results speak to a renewed need for researchers to collaborate across disciplines, Wang says.
“Just when AI is becoming more and more relevant to any corner of society, it’s becoming more and more isolated,” Wang says. “We really need to close that gap.”
People have been grappling with the social and philosophical consequences of technology for centuries, Wang points out. Take, for example, the 1818 novel Frankenstein. “AI was born from this kind of fascination,” he says. “It had very deep roots in social sciences.”
More recently, AI researchers have begun to face the real-life quandaries that the technology introduces.
Consider, for instance, when Amazon attempted to develop machine-learning tools to score job candidates. The software used data on past applicants to predict which people were best suited for the company, and a glaring problem emerged: because most of those past applicants were men, the program learned to treat male-typical resumes as its template for success and penalized candidates whose resumes contained the word “women’s” or listed certain all-women’s colleges as an alma mater.
Wherever bias already exists, AI “is just going to magnify that bias,” Wang says. (The Amazon program has since been discontinued.)
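The underlying dynamic is easy to reproduce in miniature. The sketch below is purely illustrative, using synthetic data, an invented “women’s college” feature, and a plain logistic-regression screener rather than Amazon’s actual system: a model trained on hiring decisions that were already skewed learns a negative weight on the gender-correlated feature, even though gender itself never appears as an input.

```python
# Illustrative sketch only: synthetic data and a simple classifier, not
# Amazon's model. It shows how a screener trained on biased hiring
# decisions learns to penalize a gender-correlated resume feature.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5_000

skill = rng.normal(size=n)                       # hypothetical skill score
womens_college = rng.binomial(1, 0.15, size=n)   # resume lists a women's college

# Historical outcomes: driven by skill, but past recruiters also hired
# these applicants less often -- the bias baked into the training labels.
hired = (skill - 1.0 * womens_college + rng.normal(scale=0.5, size=n) > 0).astype(int)

X = np.column_stack([skill, womens_college])
model = LogisticRegression().fit(X, hired)

# The learned weight on the women's-college flag is negative: the model
# reproduces, and would act on, the bias it was trained on.
print(dict(zip(["skill", "womens_college"], model.coef_[0].round(2))))
```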
Wang wanted to know how often AI researchers were engaging with disciplines such as psychology, philosophy, economics, and political science, which could help them address these inevitable ethical and social issues. One measure of this engagement is whether AI researchers are citing other disciplines in their academic papers. To investigate, Wang collaborated with Morgan Frank, Manuel Cebrian, and Iyad Rahwan of the Massachusetts Institute of Technology.
The team took advantage of a newly available dataset from Microsoft Academic Graph (MAG), which indexes scholarly papers. The data included traditional journal citations as well as conference proceedings, a major venue for AI findings. And it captured citation relationships between papers—that is, whenever one study referenced another.
Wang and his collaborators examined MAG data from 1950 to 2018. They found that the number of publications in AI and related subfields (such as computer vision and natural language processing) rose exponentially during that time, from hundreds to tens of thousands of papers per year. These fields now dominate computer science research.
To quantify the interactions between AI and other disciplines, the team developed a measure of how frequently papers in one field cited another field, relative to what would be expected by chance given the total number of papers published in the cited field.
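The article does not reproduce the paper’s exact formula, but a reasonable reading of this measure is the ratio of observed cross-field citations to the number expected if references were drawn at random in proportion to each field’s share of the literature. A minimal sketch under that assumption:

```python
# Sketch of a normalized cross-field citation measure (an assumption about
# the paper's metric, not its published code): observed citations from
# field A to field B, divided by the citations expected if references were
# drawn at random in proportion to field B's share of all papers.
from collections import Counter

def citation_ratio(citations, paper_field, field_a, field_b):
    """citations: iterable of (citing_id, cited_id); paper_field: id -> field."""
    field_sizes = Counter(paper_field.values())
    total_papers = sum(field_sizes.values())

    outgoing = [(c, d) for c, d in citations if paper_field[c] == field_a]
    observed = sum(1 for _, d in outgoing if paper_field[d] == field_b)

    # Expected count if each reference landed on a randomly chosen paper.
    expected = len(outgoing) * field_sizes[field_b] / total_papers
    return observed / expected if expected else float("nan")

# A ratio near 5 means field_a cites field_b five times more often than
# chance would predict; a ratio below 1 means less often than chance.
```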
First, the team looked at how AI papers cited other academic fields. They found that in the 1960s, AI researchers cited psychology papers more than five times as frequently as would be expected if they had instead chosen papers to cite randomly out of a hat. Today, however, they cite psychology papers less than half that often.
Similarly dramatic drops occurred in citations of philosophy, economics, and art. Not surprisingly, today’s AI papers cite computer science and math the most heavily.
Next, the researchers considered the reverse problem: How often did other disciplines cite AI papers, controlling for the growing number of AI publications each year? Here, they found that fields such as psychology, philosophy, business, political science, sociology, and economics have all become less inclined to draw on AI research. For example, psychologists in the 1960s cited AI papers about four times as much as would be expected by chance. Today, however, they cite AI less frequently than if they were selecting papers to cite entirely at random.
The overall conclusion: “AI has become more and more cliquish,” Wang says.
One possible explanation is that it has simply become harder for social scientists to keep up with rapid advances in increasingly complex AI research.
Furthermore, the swell of interest in AI could, paradoxically, help explain its isolation. Some AI conferences are so in demand that social scientists may have trouble getting in, according to Wang’s coauthor Morgan Frank. In a blog post, Frank noted that one popular meeting “sold out of registration spots in under 15 minutes, thus making attendance difficult for active AI researchers—let alone interested scientists from other fields.”
Another factor is the shift in who dominates AI research today.
The researchers examined which institutions were publishing the most “central” AI papers—those papers that were most frequently cited by other highly cited papers.
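The study’s precise centrality measure is not spelled out here; one natural way to operationalize “frequently cited by other highly cited papers” is a recursive score such as PageRank over the citation graph. A hedged sketch along those lines:

```python
# Illustrative only: PageRank on the citation graph is one way to score
# papers that are cited by other highly cited papers. The study's own
# centrality measure may differ.
import networkx as nx

def central_papers(citations, top_k=10):
    """citations: iterable of (citing_id, cited_id) pairs."""
    g = nx.DiGraph(citations)  # each edge points from citing paper to cited paper
    # With edges citing -> cited, a paper's score rises when the papers that
    # cite it are themselves highly scored (i.e., heavily cited).
    scores = nx.pagerank(g)
    return sorted(scores, key=scores.get, reverse=True)[:top_k]
```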
While schools such as MIT, Stanford, and Carnegie Mellon once served as powerhouses for the most central AI research, the researchers found that today these papers were increasingly likely to come out of private companies like Google and Microsoft.
That may be because these firms have the resources to acquire expensive infrastructure. “This is not a cheap sport,” Wang says. “You need to have a whole stack of graphics processing units and computational power and storage.”
That might help explain the growing disconnect between AI and the social sciences. In their study, Wang and collaborators found that researchers in sociology, philosophy, political science, business, and economics are less likely to cite publications produced by companies than those from academia. As such, the concentration of AI research in private industry could be contributing to the weakened relationship with the social sciences.
And once they get a head start, big companies are more likely to continue producing a disproportionate share of the research—a “rich-get-richer phenomenon,” Wang says. Industry teams develop better systems, attract more users, and generate more data, which can then be used to train their systems to become even more accurate. “It’s a self-reinforcing mechanism.”
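A toy way to see why an early lead tends to persist is a preferential-attachment simulation, an assumption-laden illustration rather than anything from the study: each new user, and the data that user generates, goes to a firm in proportion to the data the firm already holds.

```python
# Toy sketch of the "rich-get-richer" loop Wang describes; the
# preferential-attachment rule is invented for illustration, not a model
# taken from the study.
import random

def simulate(head_start=10, rounds=10_000, seed=0):
    random.seed(seed)
    data = {"big_firm": head_start, "startup": 1}
    for _ in range(rounds):
        total = sum(data.values())
        # A new user joins a firm in proportion to its existing data stock,
        # then contributes one more unit of data to that firm.
        pick = "big_firm" if random.random() < data["big_firm"] / total else "startup"
        data[pick] += 1
    return data

# The firm with the early lead typically retains the dominant share of an
# ever-growing pool of data, so the absolute gap keeps widening.
print(simulate())
```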
Despite the rapid growth of AI, Wang fears that the technology will fall short of its full potential if it does not better incorporate insights from the social sciences and other fields.
To bridge this gap, Wang recommends that universities encourage more collaborations between AI and other departments. For example, Northwestern University has started a program called CS+X, which connects computer scientists with researchers in fields such as medicine, journalism, law, and economics.
Some existing research hints at how AI developers can effectively integrate findings from other fields. For instance, the study exploring how self-driving cars can better reflect human morality (coauthored by Wang’s collaborator Iyad Rahwan, an AI scholar) drew upon research from psychology, moral philosophy, economics, and even science fiction.
However, the fact remains that such wide-ranging bibliographies are relatively rare.
And just as computer scientists need to consult experts outside of their discipline, Wang says, social scientists can no longer afford to ignore developments in AI. As machines reshape how we work, think, and make decisions, he argues, it is becoming more crucial than ever that economists, philosophers, and psychologists stay abreast of the latest developments in computer science, and vice versa.
“It goes both ways,” Wang says. “AI has to pay more attention to social science. Social scientists have to pay more attention to AI.”