The Insightful Leader
Sent to subscribers on April 15, 2026
The data that matter

We live in an age where we have access to incredible amounts of data that can inform our decision-making. But leaders often have to figure out the best way to sift through all that data before making their next moves.

Kellogg’s Amine Bennouna and colleagues have developed an algorithm that can help determine which data to use for a particular decision.

We also look at how biases make their way into AI models—and what that means for business leaders. 

A new tool for choosing the right data 

In general, business leaders like to make informed decisions. Given the speed with which AI can process large amounts of data, quantitative approaches to decision-making have never been more readily available. 

But some tasks just need the right information, rather than the most information. A new algorithm co-created by Bennouna offers leaders a roadmap to the most-relevant data for an optimal solution—and can save time and resources in the process.

“It’s not about the size [of the data] itself; it’s about what data matters,” Bennouna says. “Instead of scaling and scaling, it’s more strategic to target where to study your system or where to get data.”

For example, an engineer planning a new subway line can’t afford to study every potential route. The new model can help a team in that position pinpoint the most-critical places to conduct field studies, so the team can make the best decision more efficiently.

“A million data points can be equivalent to two data points depending on how relevant they are for what we’re trying to do with them,” Bennouna says. “We want to reduce the uncertainty that matters most for the decision—determining precisely the data that enables you to find the optimal decision.”

In the real world, that decision-making process also involves budgetary constraints—so the researchers are working on an extension to their algorithm that can crunch data for “good-enough” solutions.

“We want to be able to quantify the trade-off of optimality of the decision and type and size of data,” Bennouna says. “The key idea is finding data that informs decisions in the best way possible.”
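
The intuition can be sketched in a few lines of code. The toy example below is not the researchers’ algorithm; the two-route setup and every number in it are invented for illustration. It shows why a measurement that can change the decision (a soil survey affecting only one route) is worth far more than one that shrinks uncertainty both options share (a utility-relocation estimate that hits both routes equally).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: two candidate subway routes with uncertain costs.
# "soil" affects route A only; "utilities" affects both routes equally.
# Every number here is invented for illustration.
N = 100_000
soil = rng.normal(10, 3, N)        # route A's uncertain soil cost
utilities = rng.normal(5, 3, N)    # relocation cost shared by both routes

cost_A = soil + utilities
cost_B = 11 + utilities            # route B's soil cost is already known

def expected_cost_after_learning(observed):
    """Expected cost if we measure `observed` first, then pick the route
    with the lower conditional expected cost (crude bin-based conditioning)."""
    edges = np.quantile(observed, np.linspace(0, 1, 21))
    which = np.digitize(observed, edges[1:-1])
    total = 0.0
    for b in np.unique(which):
        m = which == b
        total += m.mean() * min(cost_A[m].mean(), cost_B[m].mean())
    return total

baseline = min(cost_A.mean(), cost_B.mean())  # best decision with no new data
print("value of a soil survey:       ",
      baseline - expected_cost_after_learning(soil))
print("value of a utilities estimate:",
      baseline - expected_cost_after_learning(utilities))
```

Run as written, the sketch reports a clearly positive value for the soil survey and a value near zero for the utilities estimate: both measurements reduce uncertainty, but only one can change which route wins.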

Read more in Kellogg Insight.

Beware the biases lurking in AI

As organizations across the global economy integrate AI into their strategies and operations, Kellogg professors Tessa Charlesworth and William Brady are closely monitoring how these models exhibit signs of a very human trait—bias.

“Bias is coming in from many angles, all at once. We are interested in identifying all the places across the whole AI pipeline where biases—especially psychological biases—can creep in,” Charlesworth says. 

Bias can be introduced at several levels: what data the model was trained on, how its creators tested and annotated the model, and how it is continuously recalibrated according to user feedback and other factors. 
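
The last of those channels, recalibration on user feedback, is easy to see in a toy simulation. The sketch below is hypothetical: the numbers and the “retrain on whatever users engaged with” rule are invented for illustration. But it shows how a model that updates only on the data its own rankings surfaced can drift steadily away from the ground truth.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical feedback loop: a model scores content, users mostly engage
# with what it ranks highly, and the model retrains only on that
# engaged-with slice. All numbers are invented for illustration.
true_mean = 0.0   # the "real" average interestingness of content
estimate = 0.0    # the model's current belief, correct at the start
for round_ in range(6):
    items = rng.normal(true_mean, 1.0, 100_000)  # fresh batch of content
    engaged = items[items >= estimate]           # users see top-ranked items
    estimate = engaged.mean()                    # retrain on the biased sample
    print(f"round {round_}: estimate = {estimate:+.2f} vs truth {true_mean:+.2f}")
```

Each round, the biased sample pulls the estimate higher, even though the underlying content never changed.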

Those biases can be hard to root out, Brady says, especially since people tend to prefer models that reflect their worldviews rather than those that challenge them. So if a model is being designed for engagement, it may default to “in-group information,” which can have negative consequences such as polarization.

“Anytime you choose, ‘I want to optimize for X,’ that’s a biased decision: by definition, the system will amplify X at the expense of other things,” Brady says. “And the reason why that’s important is because the people deciding what to optimize an AI system for have specific incentives that may not always be aligned with the incentives of the consumers of the AI system.”
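
Here is what that optimization pressure can look like in miniature. The sketch below is hypothetical (the content pool, the in-group engagement boost, and the feed size are all invented), but it shows how ranking purely on predicted engagement turns a modest per-item preference into a feed dominated by in-group content.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical content pool: half the items are "in-group" for a given user.
# Suppose in-group items get a modest engagement boost because they match
# the user's existing views. All numbers are invented for illustration.
n_items = 1_000
in_group = rng.random(n_items) < 0.5
quality = rng.normal(0.0, 1.0, n_items)  # engagement unrelated to group
engagement = quality + 0.5 * in_group    # the "X" the system optimizes for

# Rank purely by predicted engagement and serve the top 100 items.
feed = np.argsort(-engagement)[:100]

print(f"in-group share of the pool: {in_group.mean():.0%}")
print(f"in-group share of the feed: {in_group[feed].mean():.0%}")  # skews well above 50%
```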

To stay aware of these influences, Charlesworth and Brady have advice for business leaders and other users looking to incorporate AI into their workflows: the more transparent the AI model, the better.

“Because then, when they’re implementing it into their workflows, they can actually understand what went into that cocktail of bias,” Charlesworth says. 

Read the whole conversation in Kellogg Insight.

“To look at larger structures and make breakthroughs, you have to put all these separate points of view back together. And that’s really what teams do.”

Brian Uzzi in The Wall Street Journal discussing his research on teamwork and the “myth of the inventor.” Read more on Uzzi’s research at Kellogg Insight.

© Kellogg School of Management, Northwestern University. All Rights Reserved.