When the Experts Are Biased
Innovation Strategy Oct 7, 2013


Do experts on committees help or hinder decision making?

Based on the research of

Danielle Li

Each year, committees composed of scientists review thousands of grant applications for the National Institutes of Health (NIH). Reviewers are tasked with deciding which of their peers will be funded and which will be left scrambling for other ways to pay for their laboratories, research assistants, or even their own salaries. The process is not for the faint of heart: exact approval rates vary, but generally almost eighty percent of applicants are rejected.


Since not all projects can be funded, the NIH naturally wants to use its limited resources to fund only the best projects. But of course no single person can know which labs consistently produce the highest-quality research, or which theoretical frameworks are most promising. Hence, the NIH must rely on experts: peer reviewers. “If you’re evaluating something complicated about neurotoxicology and you don’t know much about it, you’re going to want to ask a neurotoxicologist,” explains Danielle Li, an assistant professor of management and strategy at the Kellogg School. It is these experts who are best equipped to discern high-quality projects from low-quality ones, and the closer a reviewer’s own work is to the work she is reviewing, the better informed she is likely to be.

Seeking a Balance

And yet, these experts are also the most likely to be biased toward (or against) certain investigators, fields, or research agendas for reasons having nothing to do with quality. Scientists tend to like their own fields; they may also personally benefit when similar research is funded and conducted. As a scientist asked to review two different researchers, says Li, “I’m just going to on average prefer the one who works in my area, and that’s going to lead to cases in which less-qualified people in my field are funded ahead of more-deserving people in other fields.”

In this way, expertise and bias are inexorably bound, a problem not unique to the NIH. Who, for instance, is best suited to help regulate Wall Street? Should the government hire someone from Goldman Sachs, who might hold deep knowledge about current banking practices but also conflicting interests, or would a complete outsider be preferable? The problem also seeps into medical care. “A lot of doctors get their information about new drugs from representatives of the pharmaceutical company,” says Li. “Those people actually do know a lot about the drugs. But of course they want to sell the drugs to you. The question is, when someone gives you advice and you know there might be bias, and you know that there might also be information, how should you think about that balance?”

Simulating the Review Process

Li built a mathematical model to simulate these competing interests. She then applied her model to a dataset of over 100,000 actual NIH grant applications, taking into account the score each received from reviewers, whether or not it was funded, and the names of both the applicants and the reviewers. In addition, Li determined how many members of each review committee did work related to the applicant’s, as well as the actual quality of the applications.

The latter was especially challenging to measure, says Li. She could not simply look at the publications and citations that resulted from the grant applications, as this would have favored those applications that were actually funded, making committees look considerably better at detecting quality than they actually were. Instead, Li looked only at publications and citations that resulted from the preliminary data used by committees to review the applications. (“Even when people don’t get funded, the research that went into their preliminary results tends to be advanced enough that they publish it anyway,” explains Li.)

The Importance of Expertise

She found that, as expected, both bias and expertise affected the NIH’s funding decisions. Reviewers did appear to be more likely to fund their own fields: for every additional permanent member of the review committee whose work was related to an applicant’s, the chances of the grant being funded increased by 3%. (Permanent members are considered more influential than those who serve only on a temporary basis.) Also as expected, reviewers who did related work did a better job distinguishing between high-quality and low-quality research: a permanent reviewer with related interests increased the sensitivity of the committee to an application’s quality by over 50%.
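Li’s actual analysis is econometric, but the trade-off she identifies can be illustrated with a toy simulation. The sketch below is entirely hypothetical (the noise levels, bias bonus, and field shares are invented for illustration, not taken from her data): an expert reviewer observes each grant’s quality through a sharper signal but adds a flat bonus to grants in her own field, while a non-expert is unbiased but noisy. Funding the top fifth of grants by observed score, as the NIH roughly does, shows how a sharper signal can outweigh a modest bias.

```python
import random

def simulate(n_grants=10_000, expert=True, seed=0):
    """Stylized sketch (not Li's model): score each grant, fund the top 20%,
    and return the mean true quality of the funded grants."""
    rng = random.Random(seed)
    grants = []
    for _ in range(n_grants):
        quality = rng.gauss(0, 1)                       # true quality, unobserved by reviewers
        same_field = rng.random() < 0.5                 # half the grants share the reviewer's field
        noise_sd = 0.5 if expert else 2.0               # expertise = a sharper signal
        bias = 0.3 if (expert and same_field) else 0.0  # bias = a flat in-field bonus
        score = quality + rng.gauss(0, noise_sd) + bias
        grants.append((score, quality))
    grants.sort(reverse=True)                           # rank by observed score
    funded = grants[: n_grants // 5]                    # ~20% approval rate, as in the article
    return sum(q for _, q in funded) / len(funded)

expert_avg = simulate(expert=True)
novice_avg = simulate(expert=False)
print(f"expert committee: {expert_avg:.2f}, non-expert committee: {novice_avg:.2f}")
```

Under these made-up parameters the biased expert still funds a higher-quality portfolio than the unbiased non-expert; shrink the noise gap or grow the bias bonus and the ranking can flip, which is exactly the balance the article describes.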


Indeed, says Li, in this case the upsides of expertise actually outweighed the downsides of bias. This suggests that the NIH should be cautious about overly restricting reviewer relationships in an effort to stamp out bias. “For the average committee, if the thing that you want to maximize is the citations that accrue to the body of grants that you fund, then … you don’t want to have such a strong set of conflict-of-interest rules that if we work closely we can’t review each other’s work,” explains Li. “You lose too much information.”

In other words, biases are not always bad things, if only because of the qualities with which they are so tightly associated. “In lots of cases it is impossible to disentangle the information someone has from the preferences they have about it,” says Li. “And in a world where you have a setting like that, you have to be aware of the trade-offs as opposed to viewing them as two separate things.”

The Dangers of Bias

Still, Li is careful to point out that biases are in no way benign. When it comes to, say, financial regulation, the downsides of bias may well outweigh the benefits of expertise: at this point, we just do not know. Even at the NIH, simply ignoring reviewer biases could be extremely detrimental. Imagine a scenario, says Li, in which a researcher knows ahead of time that the NIH is biased against her field or her institution. “Am I going to invest as much in doing good research? Am I going to put effort into making a good application?” she asks. “Bias can still have strong costs in the dynamic sense, because if people perceive that there’s going to be bias in the future, it’s going to change their investments earlier on.”

Featured Faculty

Danielle Li

Former member of the Strategy Department faculty

About the Writer

Jessica Love is the staff science writer and editor for Kellogg Insight.

About the Research

Li, Danielle. 2012. “Expertise vs. Bias in Evaluation: Evidence from the NIH.” Working paper.

