The Insightful Leader | Sent to subscribers on March 11, 2026
“Aeroplanes for the mind”

When I attended the Kellogg Annual Technology Conference last month, the main subject of nearly every panel was, predictably, artificial intelligence. But most of the current excitement seemed to cluster around a particular flavor of AI: agents.

While coverage of AI’s various applications can sometimes suffer from hype and exaggeration, agents are easier to grasp. Imagine a fleet of computerized interns handling the tedious tasks while you spend your time on more-thoughtful work. It’s appealing.

This week, Kellogg’s Dashun Wang talks about the great potential—and unexpected challenges—of agentic AI for scientific research. His insights apply just as well to business leaders tantalized by this new opportunity.

Plus, we look at what one Kellogg professor learned when he pitted his students against AI on a creative task.

Aeroplanes for the mind

Steve Jobs once described the computer as “a bicycle for our mind”—a tool that enables people to travel further and faster with less effort.

In an article for Nature, Wang proposes a different metaphor for agentic AI: aeroplanes for the mind. The upgrade in transportation reflects how the technology raises both the risks and the rewards.

“[Agents] can speed things up for humans even more than bicycles do, but they are harder to control, and the consequences of mistakes can be huge,” says Wang, Kellogg Chair of Technology and a professor of management and organizations.

Wang highlights several lessons from his team’s construction of SciSciGPT, a suite of AI agents that supports research in the field of the science of science. The model performs many scientific tasks, from statistical analysis to data visualization, much more efficiently than do human researchers.

But with this added efficiency and speed comes a new appreciation for the role of humans in keeping AI accountable and transparent. “Fast science without reflection risks converging on mistakes at scale,” Wang says.

“When bicycles crash, the consequences are generally localized. Aeroplanes are different: when they crash, it can be catastrophic for everyone on board, often with collateral damage on the ground. That is the difference in scale that we face with AI agents.”

Wang’s recommendation is a “pilot-in-command” model, where the researcher is assisted by a crew of agents. That arrangement is about more than just trust; humans also bring the creative spark that often generates the biggest breakthroughs.

“In the long run, AI’s value will depend not just on how well agents perform but on how differently they ‘think’ from each other. Ultimately, just as our collaborators shape us, sometimes in profound and unexpected ways, so, too, will AI agents as partners in discovery.”

Read more in Nature and Kellogg Insight.

The machine plays the greatest hits

After a recent classroom experiment, Kellogg’s Brian Uzzi reached a similar conclusion to Wang’s.

“Creativity is a very human activity,” Uzzi says. “If you want a competitive advantage that builds on what makes you special, don’t ask a bot what to think; ask a bot how to think.”

Uzzi asked his students to complete a standard creativity test where a person has four minutes to come up with a list of ten words that are as different as possible from one another. They did it themselves first, then asked an AI bot to do it for them.

Though most students predicted they’d lose to the computerized competition, that didn’t happen. The class average matched the bot’s, and a few students beat the AI handily.

The result emphasizes a key limitation of today’s AI models: their tendency to produce unexceptional ideas instead of outliers.

“Human language has around 50,000 words, and that’s where all the separate human perspectives can be so powerful,” Uzzi says. “The machine plays the greatest hits, so to speak, over and over again—and it misses out on those gems.”

That middle-of-the-road approach may work well for some use cases, like when you’re trying to find the answer to a well-studied question. But for creativity and innovation, it’s better to treat AI as more of an advisor than an answer machine.

This framework can help organizations think about how to apply AI for business decisions. A director, for instance, might ask AI how to assess locations for a new factory. The system would outline the key factors to consider, and the director could then add their own knowledge before asking the bot for feedback.

Uzzi calls this kind of back-and-forth “the best kind of collaborative AI work.”

Read more in Kellogg Insight.

“When you see an ad that feels completely weird, sometimes you take a step back and you’re like, ‘Is this not targeting consumers? Is this influencing stakeholders, politicians, or public opinion?’ Maybe they’re just trying to paint themselves as, ‘Hey, we’re good guys.’”

Kevin McTigue, in Marketing Brew, on recent marketing stunts by prediction platforms Kalshi and Polymarket focused on high grocery prices.

See you next week,

Rob Mitchum, editor in chief
Kellogg Insight

© Kellogg School of Management, Northwestern University. All Rights Reserved.