Innovation · Strategy · Oct 7, 2013

When the Experts Are Biased

Do experts on committees help or hinder decision making?

Based on the research of

Danielle Li

Each year, committees composed of scientists review thousands of grant applications for the National Institutes of Health (NIH). Reviewers are tasked with deciding which of their peers will be funded and which will be left scrambling for other ways to pay for their laboratories, research assistants, or even their own salaries. The process is not for the faint of heart: exact approval rates vary, but nearly eighty percent of applicants are typically rejected.

Since not all projects can be funded, the NIH naturally wants to use its limited resources to fund only the best projects. But of course no single person can know which labs consistently produce the highest quality research, or which theoretical frameworks are most promising. Hence, the NIH must rely on experts—peer reviewers. “If you’re evaluating something complicated about neurotoxicology and you don’t know much about it, you’re going to want to ask a neurotoxicologist,” explains Danielle Li, an assistant professor of management and strategy at the Kellogg School. It is these experts who are best equipped to discern high-quality projects from low-quality ones—and the closer a reviewer’s own work is to the work she is reviewing, the better informed she is likely to be.

Seeking a Balance

And yet, these experts are also most likely to be biased toward (or against) certain investigators, fields, or research agendas for reasons having nothing to do with quality. Scientists tend to like their own fields; they may also personally benefit when similar research is funded and conducted. As a scientist asked to review two different researchers, says Li, “I’m just going to on average prefer the one who works in my area, and that’s going to lead to cases in which less-qualified people in my field are funded ahead of more-deserving people in other fields.”

In this way, expertise and bias are inextricably bound—a problem not unique to the NIH. Who, for instance, is best suited to help regulate Wall Street? Should the government hire someone from Goldman Sachs, who would bring deep knowledge of current banking practices but also potential conflicts of interest, or would a complete outsider be preferable? The problem also seeps into medical care. “A lot of doctors get their information about new drugs from representatives of the pharmaceutical company,” says Li. “Those people actually do know a lot about the drugs. But of course they want to sell the drugs to you. The question is, when someone gives you advice and you know there might be bias—and you know that there might also be information—how should you think about that balance?”

Simulating the Review Process

Li built a mathematical model to capture these competing interests. She then applied her model to a dataset of more than 100,000 actual NIH grant applications, taking into account the score each received from reviewers, whether or not it was funded, and the names of both applicants and reviewers. In addition, Li determined how many members of each review committee worked in areas related to the applicant’s research, as well as the actual quality of the applications.

The latter was especially challenging to measure, says Li. She could not simply look at the publications and citations that resulted from the grant applications, as this would have favored those applications that were actually funded, making committees look considerably better at detecting quality than they actually were. Instead, Li looked only at publications and citations that resulted from the preliminary data used by committees to review the applications. (“Even when people don’t get funded, the research that went into their preliminary results tends to be advanced enough that they publish it anyway,” explains Li.)

The Importance of Expertise

She found that, as expected, both bias and expertise affected the NIH’s funding decisions. Reviewers did appear more likely to fund work in their own fields: for every additional permanent member of the review committee whose work was related to an applicant’s, the chances of the grant being funded increased by 3%. (Permanent members are considered more influential than those who serve only on a temporary basis.) Also as expected, reviewers who did related work were better at distinguishing between high-quality and low-quality research: a permanent reviewer with related interests increased the sensitivity of the committee to an application’s quality by over 50%.


Indeed, says Li, in this case the upsides of expertise actually outweighed the downsides of bias. This suggests that the NIH should be cautious about overly restricting reviewer relationships in an effort to stamp out bias. “For the average committee, if the thing that you want to maximize is the citations that accrue to the body of grants that you fund, then … you don’t want to have such a strong set of conflict-of-interest rules that if we work closely we can’t review each other’s work,” explains Li. “You lose too much information.”

In other words, biases are not always bad things, if only because of the qualities with which they are so tightly associated. “In lots of cases it is impossible to disentangle the information someone has from the preferences they have about it,” says Li. “And in a world where you have a setting like that, you have to be aware of the trade-offs as opposed to viewing them as two separate things.”
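
Li’s estimated model is richer than this, but a minimal Monte Carlo sketch can illustrate why a biased but well-informed reviewer can still outperform an unbiased but poorly informed one. The noise levels, the 0.3 bias, and the 20% funding rate below are assumptions chosen for illustration, not figures from the study.

```python
# A minimal Monte Carlo sketch of the expertise-bias trade-off.
# All parameters are illustrative assumptions, not estimates from Li's paper.
import random

random.seed(0)

def mean_funded_quality(n_apps=100_000, fund_share=0.2, noise=1.0, bias=0.0):
    """Score applications as true quality + reviewer noise (+ bias when the
    applicant works in the reviewer's area), fund the top fund_share, and
    return the average true quality of the funded pool."""
    apps = []
    for _ in range(n_apps):
        quality = random.gauss(0, 1)      # true quality, unobserved by the committee
        related = random.random() < 0.5   # does the applicant work in the reviewer's field?
        score = quality + random.gauss(0, noise) + (bias if related else 0.0)
        apps.append((score, quality))
    apps.sort(reverse=True)               # rank applications by reviewer score
    funded = apps[: int(n_apps * fund_share)]
    return sum(q for _, q in funded) / len(funded)

# An unbiased outsider reads quality through a noisy signal ...
print("outsider:", round(mean_funded_quality(noise=1.5, bias=0.0), 3))
# ... while an expert is biased toward related work but far more precise.
print("expert:  ", round(mean_funded_quality(noise=0.5, bias=0.3), 3))
```

Under these assumed numbers, the expert’s funded pool ends up with a noticeably higher average true quality despite the bias, matching the qualitative pattern Li finds: the information gained from expertise can swamp the distortion from favoritism.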

The Dangers of Bias

Still, Li is careful to point out that biases are in no way benign. When it comes to, say, financial regulation, the downsides of bias may well outweigh the benefits of expertise: at this point, we just do not know. Even at the NIH, simply ignoring reviewer biases could be extremely detrimental. Imagine a scenario, says Li, in which a researcher knows ahead of time that the NIH is biased against her field or her institution. “Am I going to invest as much in doing good research? Am I going to put effort into making a good application?” she asks. “Bias can still have strong costs in the dynamic sense because if people perceive that there’s going to be bias in the future, it’s going to change their investments earlier on.”

Featured Faculty

Danielle Li, former member of the Strategy Department faculty

About the Writer
Jessica Love is the staff science writer and editor for Kellogg Insight.
About the Research

Li, Danielle. 2012. “Expertise vs. Bias in Evaluation: Evidence from the NIH.” Working paper.

