3 Questions to Ask Before Launching a New AI Tool
Jan 21, 2025

Like any other new technology, AI should be vetted through a strong product-development cycle.

Based on insights from Hatim Rahman and Elizabeth Gerber

Summary: The race to capitalize on generative AI has spawned a slate of tools without a clear application, according to Kellogg’s Hatim Rahman and McCormick’s Liz Gerber. Instead, the products most likely to succeed will have strong answers to questions about their purpose, whom they affect, and the potential issues that could arise—key tenets of the traditional product-development process.

There have always been new technologies with the potential to change the way we work, but AI’s capacity to upend business as usual feels more radical.

Predictably, some organizations have reacted by pretending the technology doesn’t exist. Northwestern’s Liz Gerber has also seen the other extreme.

“I’m hearing, ‘Everyone is doing it. If we don’t, we’ll be left behind.’ And this kind of anxious thinking is not allowing people to really think through what the business case really is,” said Gerber, a professor of mechanical engineering and communication studies at Northwestern.

Gerber, who has researched and consulted on innovation for years, says that a better approach to incorporating AI into an organization is already familiar to most leaders: the product-development process.

She spoke about this at a recent installment of The Insightful Leader Live alongside Hatim Rahman, an associate professor of management and organizations at Kellogg. The two shared tips for navigating the tougher parts of the product-development process for AI tools in particular.

Here are some highlights from their discussion.

“What is your end goal?”

Why should your AI product exist in the world? What void is it filling, either for potential customers or within your own company?

A comprehensive answer will save you trouble down the line, says Rahman, who offers the example of adopting large language models without sufficient forethought.

“We’ve known now for a long time that we have too many emails and too much information. And so just saying, ‘we can use [AI] to generate more content, more information,’ maybe that makes sense. But for a lot of organizations, you actually need less generation of emails and less overall information,” Rahman says.

By contrast, Rahman gave the example of a hospital system that is developing an AI-powered ultrasound probe to improve pregnancy outcomes, especially in countries with sparse access to clinical care. The AI model was initially designed so that anyone using a handheld ultrasound probe could scan the fetus and learn relevant information, like the expected due date.

From a product-development perspective, the need for the device was pretty clear: to help pregnant patients with limited resources decide whether additional care was worth the expense.

“How does this fit into existing processes?”

Gerber and Rahman recommend that companies be thoughtful not just about the end goal of an AI tool, but also about how it will fit into people’s workflows. After all, AI usually works best alongside humans.

“What are tasks that people should be doing, want to be doing, and are good at doing?” Gerber asks. “Slowing down and developing cognitive models of what people are doing is critically important, as opposed to just assuming AI is going to take over the task.”

This question is all the more crucial because not everyone will be enthusiastic about the addition of AI. Gerber gave another example from a hospital environment: a hypothetical AI tool that would help hospitals get a handle on the spread of infection. The tool would need data to work, especially data about who had interacted with whom. But not everybody working at a hospital will necessarily want to be tracked. They may feel scrutinized and distrusted.

In other cases, people may be quite willing to adopt a new tool—but doing so might disrupt their workflow in difficult-to-anticipate ways. In the case of the AI ultrasound probe, for instance, the hospital system soon realized that standard procedures—such as a nurse pausing to show the patient when the baby was doing something interesting—would actually interfere with data collection. To get usable data from the scans, the system would need to retrain its staff on how to use the ultrasound probe.

In other words, companies would be wise to remember that any potential AI tool will exist in a complex ecosystem of human beings, habits, and processes—and plan accordingly.

“What will you do when things go awry?”

Speaking of planning: the researchers also discussed the importance of developing a plan for how an organization will respond when (not if) its rollout of the new technology fails.

Gerber shared an anecdote from a company that often deployed new software. “When they deployed the software,” she explained, “they gave everybody a punching bag. Like, literally a punching bag to put on their desk. ... They already are assuming that people are going to be frustrated with this ... and they want to have a close feedback loop with those folks so that they can correct it as soon as possible.”

You can learn more by watching the rest of Gerber and Rahman’s webinar here.

Featured Faculty

Hatim Rahman, Associate Professor of Management and Organizations

About the Writer

Isabel Robertson is a freelance writer and audio producer in Chicago.
