
Like many of you, I’m trying to figure out what artificial intelligence can do for me. The promises made by AI companies are so broad and bold that it’s hard to zero in on what I actually find useful.
When it comes to preparing business leaders for the future of AI, it’s not one-size-fits-all either. Today, we hear about how Kellogg faculty are drawing upon their research expertise to create a new AI course that adapts this multifaceted technology for different audiences.
Plus, one way AI is repeating the mistakes of humans.
AI from every angle
If you are a regular reader of Kellogg Insight, you know that Kellogg researchers are exploring AI from a variety of angles, from the technology’s impact on creativity and science to its role in workplace inequality and misinformation.
The school’s MBA program is leaning into that broad expertise to help aspiring business leaders learn about the aspects of AI most relevant to their interests and career goals.
A recent piece in the business education publication Poets & Quants highlighted Kellogg’s new AI Foundations for Managers initiative.
Instead of a uniform curriculum, AIML 901 offers five different sections, each one led by faculty from the departments of operations, finance, marketing, strategy, and management and organizations. The goal, says Kellogg’s Sébastien Martin, is to provide training and assignments that are most relevant to each field—while also keeping pace with a fast-moving technology.
“AI arrives and then for a second you’re like, ‘Oh my God, what’s happening?’” Martin says. “But at the same time, it feels like a huge opportunity. Everything is moving so fast.”
Martin, who will lead the operations section, has previously built an AI teaching assistant and studied the potential and consequences of tools like generative AI and self-learning algorithms. In the class, he’ll have students build AI agents and use AI-driven case studies but won’t neglect the “soft” skills needed to get the most out of these innovations.
“AI automates things,” Martin says. “But it automates technical things. The things it can’t do—human interaction, leadership, structure, creation—are exactly what MBAs are trained for.”
In the finance section, Kellogg assistant professor Bryan Seegmiller will draw on his research into the opportunities created by AI to teach students how AI models arrive at their conclusions.
“This foundation is crucial: without basic knowledge of how these technologies work, it’s impossible to critically analyze outputs or choose the right tool for a given problem,” Seegmiller says. “My rule of thumb: if you can’t reconstruct and critique the reasoning behind an AI-generated result, you’re relying on it too heavily.”
Read more in Poets & Quants.
When AI is too human
Recent research from Kellogg professor Blakeley McShane echoes that point: before you trust an AI insight, make sure you know what assumptions it’s making.
More specifically, he found that AI models fall victim to the same mistake humans make in interpreting research results. Because of overreliance on a common statistical-significance cutoff, both computers and humans tend to read experimental outcomes as black-or-white successes or failures rather than as points on a continuum of evidence.
“AI that better mimics humans generally seems like a good thing,” says McShane, a professor of marketing. “But when AI mimics human errors, that’s obviously a bad thing when accuracy is the goal.”
For example, when AI models were fed hypothetical results that just missed the standard statistical significance threshold of 5 percent, they often tossed those findings out in favor of results that just barely cleared the test. AI turned out to be no better than humans at viewing scientific results as continuous, rather than as a dichotomy of “it worked” or “it didn’t.”
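The flaw is easy to see with numbers. In the sketch below (illustrative only; these p-values are made up and not taken from the study), two hypothetical experiments sit on opposite sides of the 0.05 cutoff even though the underlying evidence, measured by the z-score, is nearly identical:

```python
from statistics import NormalDist

# Hypothetical two-sided p-values from two otherwise similar experiments:
# one just misses the conventional 0.05 cutoff, one just clears it.
p_values = {"study_A": 0.051, "study_B": 0.049}

norm = NormalDist()
for name, p in p_values.items():
    # Dichotomous reading: a hard "significant" vs. "not significant" verdict
    verdict = "significant" if p < 0.05 else "not significant"
    # Continuous reading: convert the p-value back to a z-score;
    # the two studies' evidence differs only in the second decimal place
    z = norm.inv_cdf(1 - p / 2)
    print(f"{name}: p = {p:.3f}, z = {z:.2f} -> {verdict}")
```

Treating the two studies as a success and a failure, as both humans and the AI models tended to do, discards how close together they actually are.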
“People are asking AI models to do things that are much more complicated than the basic little multiple-choice questions that we asked,” McShane says, “but if they perform so erratically on our questions, it raises doubt about its capability for these much more-ambitious tasks.”
Read more in Kellogg Insight.
“When change or a negative event happens, you’ve got to be able to own it, accept it, and move past it. Part of it has to do with the stories we tell ourselves about why things happen and reframing challenges to better serve us if the story we’re telling doesn’t serve us well.”
— Carter Cast, in the new issue of Kellogg Magazine, on how to stay resilient in uncertain times.
See you next week,
Rob Mitchum, editor in chief
Kellogg Insight