What Should Leaders Make of the Latest AI?
Leadership Apr 26, 2023


As ChatGPT flaunts its creative capabilities, two experts discuss the promise and pitfalls of our coexistence with machines.



Based on insights from Brian Uzzi and David Ferrucci

The latest artificial intelligence tools like ChatGPT are already capable of developing a travel itinerary, generating a simple business plan, and writing marketing copy—and the technology is getting better.

Large language models like GPT-4, the model behind ChatGPT, are trained on massive amounts of human writing, which in turn makes them very good at sounding human. While they lack the kind of logical understanding of the world that even young children possess, they are expert pattern-matchers, predicting which word will come next based on distributions they've learned from how often words co-occur.

“These distributions have such powerful and accurate probabilities because of how expertly they’ve been trained on so much data,” says Dave Ferrucci, the founder, CEO, and Chief Scientist of the hybrid AI company Elemental Cognition. “They produce these really coherent linguistic structures.”
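
To make that idea concrete, here is a minimal sketch of next-word prediction from co-occurrence counts. It is only an illustration, not how ChatGPT is built: real large language models use neural networks trained on vastly more data, and the tiny corpus and helper function below are invented for this example.

```python
# Toy next-word prediction from word co-occurrence counts (a bigram model).
# This only illustrates the basic idea described above; it is far simpler
# than the neural networks behind ChatGPT, and the corpus is made up.
from collections import Counter, defaultdict

corpus = (
    "the leader reads a report . "
    "the leader writes a plan . "
    "the analyst briefs the leader ."
).split()

# Count how often each word is followed by each other word.
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def predict_next(word: str) -> str:
    """Return the word most often seen after `word` in the toy corpus."""
    counts = following.get(word)
    if not counts:
        return "<unknown>"
    return counts.most_common(1)[0][0]

print(predict_next("the"))  # -> "leader": it follows "the" more often than any other word
```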

So what should we make of this technology—as business leaders and as citizens?

In a recent webinar from The Insightful Leader Live, Brian Uzzi and Dave Ferrucci offered their perspectives on how AI might change our working lives and our society. Uzzi is a professor of management and organizations at Kellogg, while Ferrucci, who started and led the IBM Watson team through its landmark Jeopardy! win in 2011, is an adjunct professor of management and organizations at Kellogg.

Uzzi and Ferrucci argue that most of us will one day be partnering with machines—and while there is much to be gained from such collaborations, guardrails will also be needed. Here are some highlights from the conversation.

Impact on the labor market

AI is likely to reshape the labor market as we know it.

Because AI models like ChatGPT excel at synthesizing vast troves of data, the jobs of knowledge workers are the most likely to be affected, including accountants, attorneys, interpreters, and journalists.

“Implications are really far-reaching,” says Uzzi. This doesn’t mean that every white-collar worker will lose their job—but it does mean they will need to start experimenting with how these tools can be used to assist them.

But in the long term, there is no job that is obviously “safe” from AI, says Ferrucci. That’s because the same kind of deep-learning principles that can pick up patterns in linguistic data can be applied to other data—meaning even auto mechanics and chefs may one day have algorithmic collaborators.

“It’s getting harder and harder to imagine that we will be working in isolation from artificial intelligence,” he says. “We will be working with artificial intelligence in some form or another.”

Human–machine partnerships

While human–machine partnerships appear to be in our future, there may be some hiccups along the way.

Uzzi is concerned that a more urgent, machine-driven pace will have some downsides. In our race to innovate more quickly, he says, “we might actually be putting ourselves on a path that ruins our best competitive advantage” over machines: our own human creativity. “There is a whole body of research in human creativity that says that speeding up your life eats away at your creativity.”

In Ferrucci’s view, this is less of an issue, as what humans actually bring to the partnership is meaning and value. Machines are perfectly capable of being creative, he says, “because with enough data, they can consider so many different alternatives, and ultimately any creation is some permutation of the data.” However, they have no connection to human values, and as such, it will be humans who decide how these creations are used to further society.

Risks to society

As AI is more widely deployed, both Ferrucci and Uzzi see a need to put guardrails or regulations in place to minimize its risks and maximize its benefits.

In particular, we will need mechanisms to ensure transparency (around how the model was trained and prompted) and accountability (over both what the model produces and any social effects that follow). Issues around ownership (if the model generates something particularly useful) will also need to be worked out.

Ferrucci is particularly concerned about AI’s ability to flood information channels with coherent, credible-sounding propaganda—something chatbots are uniquely equipped to do. In the process, he says, they might “drown out potentially all other voices.”

Neither thinks regulation will come easily, however.

Consider the earliest days of social media, which seemed to promise so much in terms of connecting individuals and disseminating ideas. But eventually, under the pressures of monetization, social media “had a completely different trajectory, and it turned into a world of incivility, misinformation—and we still can’t decide what kind of regulation we want for that,” says Uzzi.

How to prepare

In the near term, as companies consider how they might launch their own fledgling human-AI partnerships, Uzzi advises firms to conduct an audit of their internal data as a precursor to using it to train an AI system. Is it clean, accessible, and well managed? If not, it will need to be: that data will be “their secret sauce relative to their competitors.”
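
As one hypothetical illustration of what the “clean” part of such an audit might look like, the sketch below uses pandas to flag duplicate rows and missing values in a tabular dataset. The file name is an assumption, and checks like these cover only data quality, not the access and governance questions an audit would also raise.

```python
# A minimal, illustrative data-quality check using pandas. The CSV file name is
# hypothetical, and code like this covers only the "clean" part of an audit;
# accessibility, documentation, and ownership still need human review.
import pandas as pd

df = pd.read_csv("internal_records.csv")  # hypothetical internal dataset

audit = {
    "rows": len(df),
    "duplicate_rows": int(df.duplicated().sum()),
    # Share of missing values in each column, worst first.
    "missing_share_by_column": df.isna().mean().sort_values(ascending=False).round(3).to_dict(),
}
print(audit)
```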

Ferrucci points out that firms will also need to dedicate time to understanding both the technology and how it could fit into their operations. “Where can this help? Where can this make us more efficient?”

But both are bullish about the long-term prospects of artificial intelligence and in particular its ability to complement humans.

“If we can find a right way to partner with machines so that we all do better, I think that is a potential way to solve the real grand challenges” facing the species, says Uzzi, like climate change.

Ferrucci is even more enthusiastic, envisioning human ingenuity finally freed from the all-too-human constraints of only having so many hours in the day. “It’s a new age,” he says. “There is no future of the human species without AI.”

You can watch the entire webinar here.

Featured Faculty

Brian Uzzi, Richard L. Thomas Professor of Leadership and Organizational Change; Professor of Management and Organizations

David Ferrucci, Adjunct Professor of Management and Organizations

About the Writer

Jessica Love is editor in chief of Kellogg Insight.
