Economics · Strategy · Feb 2, 2011

Expert or Charlatan?

A test to tell the difference between authentic experts and flimflam artists

Based on the research of Nabil Al-Najjar, Alvaro Sandroni, Rann Smorodinsky, and Jonathan Weinstein

When ancient Greeks had questions about the future, they consulted the Delphic oracle. Today, individuals with queries on such issues as future stock prices, energy trends, football point spreads, and even next week’s weather consult experts in those fields. Many of those experts base their advice on scientific reasoning. But some rely on clever marketing, pseudoscience, and carefully calibrated predictions rather than genuine understanding of the areas on which they prognosticate. All too frequently, those individuals maintain their reputations for predictive skill because it is surprisingly difficult to tell the difference between bogus and genuine experts. Now, however, a Kellogg team has produced a test that does precisely that in certain circumstances.

“Our test looks at informed and uninformed experts,” says Alvaro Sandroni, a professor of managerial economics and decision sciences (MEDS) who worked on the project with Nabil Al-Najjar, also a professor of MEDS, Jonathan Weinstein, an associate professor of MEDS, and Rann Smorodinsky, an associate professor at the Technion–Israel Institute of Technology. “If you know what’s happening, you will pass the test. If you don’t know, we’re not saying that you won’t pass the test, but there’s no absolute guarantee that you’ll pass.”

Sandroni continues, “It’s part of a research agenda underlain by a simple question: How do we know that an expert has information that we don’t, and how do we know that science is based on something and goes beyond ordinary understanding of things?” Weinstein adds, “Expertise is very important to evaluate. We need to be able to tell whom to trust.”

Al-Najjar puts the work into a broader context. The basic idea, he says, “is to understand the boundaries between parts of knowledge that are testable and others that are not. This is the first paper that introduces restrictions on the structure of beliefs that make these beliefs testable.”

A Startling Finding

The research expands on a series of studies on testing expertise that have reached what the team’s paper calls a “most robust—and startling—finding…that all reasonable tests can be manipulated.” That fact may be counterintuitive, but researchers universally accept it. “It is possible to hide absolute ignorance through the language of probability, and showing up that ignorance is very difficult,” Sandroni summarizes. “You give a false impression of expertise when there is nothing but ignorance. Of course, you have to do it in a very special and specific way—extremely carefully and in a very precise manner—to be successful.”

Weinstein points out the finding’s implications for would-be testers of experts. “It means that we can’t have a perfect world with a perfect test,” he says. Tests with certain restrictions remain possible, however. The Kellogg team set out to identify limitations that are neither too restrictive nor too lenient, and to incorporate them into a test of expertise.

“We assume the worst case: that the false experts are very understanding of the test and how to manipulate it. We assume that there are master manipulators out there,” Weinstein says. “But we recognize that there’s a fine line. If your test is too restrictive, real experts will fail. But if it’s not fine enough, you’ll pass people who try to flimflam you.”

Learnability and Predictiveness

The test, developed using standard mathematical tools, relies on two key phenomena: learnability and predictiveness. “We look at predictions and what actually happened, and based on that, we want to know if the predictor knows something about what he’s trying to predict,” Sandroni says. Weinstein outlines the process in detail. “First of all, experts being tested have to set in advance an amount of time they’ll need to learn something about what they’re trying to predict; that’s learnability,” he explains. “Then, when they reach that deadline, they have to make a very specific prediction that can be checked—for example, that over fifty days the market will move up 80 percent of the time; that’s predictiveness.”

The team sums up those two requirements in its paper. “There must be a point at which [the expert’s] theory makes predictions that can be tested,” the researchers note. In addition, Sandroni says, “the expert has to give reasons for why he is predicting this way or that way—reasons based on a bunch of parameters. Then data is used to a certain point and the parameters are tested separately. The test follows customary scientific procedures to identify core elements using data and, having identified core elements, uses more data to confirm them. The point is that, in this particular context, our test cannot be manipulated while other similar-looking tests can be manipulated.”
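
To make those two requirements concrete, consider a minimal sketch, in Python, of a frequency test in this spirit. It is only an illustration of the learn-then-commit-then-check idea, not the construction in the paper; the simulated data stream, the `frequency_test` function, and the tolerance are all invented for the example.

```python
import random

def frequency_test(outcomes, learning_period, predicted_freq, tolerance):
    """Pass the expert only if the frequency of 1s observed after the
    learning deadline matches the frequency the expert committed to at
    that deadline, within a pre-announced tolerance."""
    check = outcomes[learning_period:]            # data after the deadline
    actual_freq = sum(check) / len(check)
    return abs(actual_freq - predicted_freq) <= tolerance

# Weinstein's example: after a 50-day learning period, the expert commits to
# "the market will move up 80 percent of the time" over the next 50 days.
random.seed(0)
stream = [1 if random.random() < 0.8 else 0 for _ in range(100)]  # simulated ups/downs
print(frequency_test(stream, learning_period=50, predicted_freq=0.8, tolerance=0.1))
```

The discipline comes from the order of events: the prediction is fixed at the deadline, before the checking data arrives, so a charlatan cannot quietly adjust it after the fact.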

“The more restrictions there are, the harder it is to manipulate the test, because you have to manipulate it in a certain way.” — Sandroni

As Weinstein’s example of market prediction shows, the test requires experts to make their prognostications on the basis of probabilities, rather than simple yes or no answers. Weather forecasters illustrate that criterion. They typically frame their forecasts in terms of the percentage likelihood of rain, snow, sunshine, or other meteorological phenomena. “This is becoming a proper way of presenting consultations in other fields, such as political analysis, medical studies, and sports betting—any area in which consultations come in terms of odds,” Sandroni says. The demand for percentages puts an automatic restriction on the test. It requires what Weinstein calls “a fairly long repeated data stream”—continuously variable factors such as financial information that moves with the market or point spreads that change on the basis of injury reports and game-day weather forecasts.
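
The weather-forecaster example points to a standard way of scoring probability forecasts against a long data stream: group days by the stated probability and compare each group’s stated odds with what actually happened. The sketch below is a generic calibration check over invented data, not the paper’s test.

```python
from collections import defaultdict

def calibration_report(forecasts, outcomes):
    """Group days by the stated probability of rain and compare each group's
    stated probability with the fraction of those days it actually rained."""
    buckets = defaultdict(list)
    for prob, rained in zip(forecasts, outcomes):
        buckets[prob].append(rained)
    for prob in sorted(buckets):
        days = buckets[prob]
        print(f"said {prob:.0%}: rained {sum(days)}/{len(days)} "
              f"= {sum(days) / len(days):.0%} of the time")

# Invented data: a forecaster who says "70%" on days when it rains 7 times
# in 10 is well calibrated on that bucket, whatever else she knows.
forecasts = [0.7] * 10 + [0.3] * 10
outcomes  = [1] * 7 + [0] * 3 + [1] * 3 + [0] * 7
calibration_report(forecasts, outcomes)
```

Notably, plain calibration checks of this kind are among the “reasonable tests” that the earlier literature showed a strategic forecaster can pass while knowing nothing, which is exactly why the paper’s added restrictions on learnability and precision matter.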

How Much Leeway?

Restrictions such as that add to the value of the new test. “The more restrictions there are, the harder it is to manipulate the test, because you have to manipulate it in a certain way,” Sandroni points out. But the test also gives its subjects a certain amount of latitude. “We try to acknowledge that people can make accurate predictions even if they’re not precise,” Weinstein says. “We have to allow them some leeway. The issue is how much leeway to give.”
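
How much leeway is sensible can be gauged with ordinary sampling arithmetic; the back-of-the-envelope calculation below is purely illustrative (the paper sets its tolerance within its own formal framework). Even a genuinely informed expert who predicts an 80 percent frequency over 50 days will see the realized frequency wobble around that value.

```python
from math import sqrt

# Sampling noise for a committed frequency p checked over n independent days.
p, n = 0.8, 50
sigma = sqrt(p * (1 - p) / n)                    # about 0.057
print(f"one standard deviation: {sigma:.3f}")
print(f"two-sigma leeway:       {2 * sigma:.3f}")  # about 0.11
```

A tolerance of roughly ±0.11 would pass an honest, informed expert about 95 percent of the time under these assumptions; tighten it much further and real experts start failing, loosen it much further and the flimflam artists slip back in.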

Sandroni emphasizes that the test focuses on individuals who set out to give false impressions of their predictive skills. “The honest uninformed expert would be very easy to differentiate from the honest informed expert,” he says. “We are looking at the dishonest expert who can maintain a false reputation by making strategic predictions.”

What is the takeaway message of the research project? “If we make specific requirements on how long the expert has to learn and the precision with which he has to predict, then we can tell the difference between the flimflam artists and the people who are able to make that kind of prediction,” Weinstein says. Al-Najjar looks at the implications of the work for corporations and other institutions. “The tension between commitment and flexibility has been recognized by philosophers and strategists,” he explains. “This paper’s higher-level message is that this tension is rooted in the problems of testing and learning. Both testing one’s frameworks and learning from a changing environment are essential to a dynamic, adaptive organization.”

Related reading on Kellogg Insight

When What You Know Is Not Enough: Expertise and gender dynamics in task groups

The Price of Advice: Why do consultants charge fees depending on their clients’ decisions?

Featured Faculty

Nabil Al-Najjar, John L. and Helen Kellogg Professor of Managerial Economics & Decision Sciences

Alvaro Sandroni, E.D. Howard Professor of Political Economy; Professor of Managerial Economics & Decision Sciences

Jonathan Weinstein, faculty member in the Department of Managerial Economics & Decision Sciences until 2013

About the Writer
Peter Gwynne is a freelance writer based in Sandwich, Mass.
About the Research

Al-Najjar, Nabil I., Alvaro Sandroni, Rann Smorodinsky, and Jonathan Weinstein. 2010. “Testing Theories with Learnable and Predictive Representations.” Journal of Economic Theory 145: 2203–2217.

Read the original
