Leadership Apr 3, 2024

Podcast: AI Is a Tool. How Do We Want to Use It?

Generative AI is like “a hammer looking for a nail.” On this episode of The Insightful Leader: we have to decide what the nail should be.

Based on the research and insights of

Hatim Rahman


For all of the anxiety-tinged conversations we’ve had about what generative AI will do, we haven’t had a society-wide discussion about what we want it to do.

This is a mistake, says Hatim Rahman, an assistant professor of management and organizations at Kellogg.

“We should be thinking first and foremost about what outcome is the AI trying to measure, predict, or generate?” Rahman said. “Very relatedly, should we use AI to make such predictions?”

On this episode of The Insightful Leader, Rahman offers an optimistic view of where our intentions can take us.

Podcast Transcript

Laura PAVIN: We’ve been hearing a lot about AI lately and what it could mean for the future of work. From tools like ChatGPT, to Claude, to Microsoft Copilot, it feels like we’re living through this historic shift where AI is suddenly everywhere, in all the ways you’ve likely already heard about.

It’s been used to generate books, music, trailers for fake movies, the voices of real people, you name it. And it’s all disturbingly decent.

NEWS CLIP: The DJ, playing an AI-generated sample of rapper Eminem’s voice in front of a live audience, “This is the future rave sound, I’m getting awesome and [unintelligible]” …
NEWS CLIP: This summer, I did a little research project, where I asked my freshman-year professors to grade essays written by ChatGPT… and they got all A’s and B’s …

PAVIN: A few months ago, Kellogg Insight reached out to one of our faculty to help us make sense of this moment. His name is Hatim Rahman, and he’s an assistant professor of management and organizations at Kellogg. And one of the things he looks at is how advances in technology can impact jobs.

Hatim RAHMAN: How many jobs in the U.S. have been replaced by technology since 1950? It turns out that just one of the 270 jobs listed in the 1950 census has been eliminated by automation. And in case you are wondering, that job was an elevator operator.

PAVIN: Rahman studies algorithms and all of the ways that they underpin work and the decisions that we make about work. So he’s thought a lot about this latest technology revolution we’re experiencing right now: generative AI. That’s those large models like ChatGPT that take in huge amounts of data. They might be images, or tons of words, or even entire novels. And then essentially these models produce new content in a way that sounds like it was done by a human.

For example, I asked ChatGPT to rewrite the lyrics of Twinkle Twinkle Little Star as an emo song, and I’ll give you a little taste of that:


Twinkle, twinkle, little star,
Guide me through this endless dark.
Lost in pain, lost in fear,
Shedding tears, but no one’s near.

Like a diamond in the sky,
Yet I’m broken, asking why.
Underneath this facade I wear,
Lies a soul in deep despair.

Twinkle, twinkle, little star,
Fading out, so very far.
In this emo symphony,
I find solace, just in me.


Kind of beautiful, honestly.

Needless to say, a lot of people are kind of worried that generative AI is going to take away jobs and that we’re heading toward a dystopian future when it comes to AI and work.

But Professor Rahman doesn’t feel this way. In fact, he thinks that we can actually thrive in this age of AI—if we don’t mess this opportunity up. You’re listening to The Insightful Leader. I’m your host Laura Pavin, and today, I’m joined by Jess Love. She’s the editor in chief of Kellogg Insight.

Jess LOVE: Hi, Laura.

PAVIN: Hey, Jess. So a few months ago, you invited Professor Rahman to give this webinar called “How to Thrive in the Age of AI.” So, kind of, what’s his overall view? What does he think the effects of generative AI will be on work?

LOVE: I think his number one takeaway is that the future of work is … there is still going to be work! There are still going to be jobs. And this idea that automation is going to come and take all of our jobs is just not true.

Remember what he said at the top of the episode, that when you look back from the 1950s to today, the only job listed in the U.S. Census that went away because of technology was the elevator operator?

RAHMAN: So the reason I’m bringing this up is to first, head-on, address some of the fear that AI and technology are suddenly going to lead to mass unemployment. Decades of research show that this fear is unfounded.

PAVIN: I find it shocking that it only took away the elevator operators of the world. There really weren’t any other jobs that technology took away?

LOVE: Well, I think it’s fair to say that a lot of other jobs have been impacted, but they haven’t necessarily been made obsolete. And so he actually gives a really good example of the airline pilot.

RAHMAN: For decades now, the software to automate much of pilots’ responsibilities has existed. And it’s only gotten better over time, taking over more and more. Some estimates indicate that 90 to 95 percent, if not more, of a pilot’s responsibilities can be automated. Despite this technology existing for decades and improving, pilots have not disappeared. In fact, in aggregate, the number of pilots and their pay has increased for years. Why is that?

PAVIN: Okay, yeah, I am curious to know why exactly that is.

LOVE: So I think these are very specific to a given occupation. But for pilots, you know, on the one hand, you can think about pilots as having a very strong labor organization. There are pilots’ unions that can kind of help ensure that pilots get to continue manning the plane. But more broadly, Professor Rahman points out that we have pilots in the sky because we want somebody there to make sure that every single person who takes off in the plane is able to land safely. And so we’ve decided we are going to continue to train pilots. We’re going to continue to pay them, and pay them quite well, to do that role for us. And he points to this as an example of what’s really possible for any job. Just because many aspects or many tasks of a job can be automated, it doesn’t actually mean that the entire job will disappear.

PAVIN: Um, so it’s sort of like, as different technological advances have come along, we as a society realize the importance of having a conversation about what is important to us. And it seems like flying people through the air is a very high-stakes example.

LOVE: I don’t think it’s fair to think most employees are going to have quite that same level of protection. They might not have this robust union looking out specifically for their interests. And so what we need to do as a society is make sure that we use the technology to amplify our societal priorities, and that means getting a lot of people in the room to help us figure out what those priorities are in the first place.

RAHMAN: Right now, a lot of the way we talk about AI is a hammer looking for a nail. Here are large language models. How can we use them? But I don’t think that approach is going to help us thrive. Instead, we should be thinking first and foremost about, what outcome is the AI trying to measure, predict, or generate? Very relatedly, should we use AI to make such predictions? So this is where, again, we need diverse voices and experts in the room to be answering this question.

PAVIN: So he’s saying that we’ve got a solution in search of a problem that we haven’t even really defined yet.

LOVE: That’s exactly right. And I think his point is that right now, the conversation is really being driven by a lot of these technology companies that are producing these models. So of course, the way they’re thinking about it is, “Hey, here’s this amazing new hammer I have. What nail do you have? Let’s see if we can do this.” And that starts getting into this, you know, “How do we keep automating? What can we automate? What can we get rid of?” And what that gets you into is this very single-minded focus on all of the things that AI can do, as opposed to this broader conversation about, you know, what we value as a society, what we want to prioritize, and then how we can use AI to support those priorities, amplify those priorities, as opposed to coming in and undercutting them or subsuming them.

PAVIN: Mhm. So before we have that conversation, I guess, who are the stakeholders that need to be having that conversation? We’re saying society, but on a practical level, is that Congress? Is it businesses? Is it “we, the people”? Like, where should those formalized conversations be happening?

LOVE: I think Professor Rahman would say, “all of these places.” You know, some people are going to have more power than others. That’s inevitable. But even somebody without a lot of formal power could consider being part of a grassroots organization. Or, you know, fighting back when AI gets something consequential wrong. I’m thinking about that recent case against Air Canada, when its chatbot gave a traveler the wrong information about the company’s refund policy. The traveler sued the company and was largely successful with his suit.

NEWS CLIP: Canada’s largest airline has been ordered to pay compensation after its chatbot gave a customer inaccurate information…

LOVE: We expect companies to be able to speak truthfully about their own policies! And so as a society, we can decide, “Hey we’re not going to take ‘sorry, my chatbot hallucinated’ as an excuse.”

PAVIN: I feel like whenever we talk about generative AI, or AI in general, there are sort of two visions: One is this utopia where AI is going to solve all of our most pressing problems. And the other is just a lot of doom and gloom. How is Rahman thinking about this?

LOVE: That’s a great question. And I think he sees a third option. You know, you can think about it certainly as a utopia: all of our problems are solved. You can think about it as a dystopia: you know, everything is horrible and we all just serve robots their meals. But there is this third way, though, and he actually quotes from another academic here, Ruha Benjamin.

RAHMAN: Whereas utopias are the stuff of dreams and dystopias the stuff of nightmares, ustopias are what we create together when we are wide awake. Ustopias invite us into a collective imagination in which we still have tensions, but where everyone has what they need to thrive. Utopias require inequality and exclusion. Ustopias center collective well-being over gross concentrations of wealth. They’re built on an understanding that all of our struggles, from climate justice to racial justice, are interconnected, that we are interconnected.

LOVE: So as he starts to think about what it takes to create an ustopia, as opposed to a dystopia or a probably unrealistic utopia, this is where this idea of bringing as many voices into the room as possible becomes very important. It’s also where it’s important to keep in mind the things that humans are really good at, and how those are different from the things that machines are really good at. That helps us think about this a little less as a competition and more as a collaboration.

PAVIN: Yeah, and what does he say about what humans are good at versus what AI is good at?

RAHMAN: Humans excel in creativity and innovation, emotional intelligence, and adapting to emergent situations and learning. Right? So the classic example is that if you show a toddler one picture of a dog, they can generally recognize thousands of breeds of dogs, and the same goes for other animals. Whereas with a machine, you have to put in tons of examples in order for it to accurately and precisely learn those situations. And then there are moral and ethical judgments. It’s not necessarily that humans excel in this regard, but these decisions have to come from our values and priorities. Conversely, AI tends to excel in efficiency and speed, and the scale at which it can be implemented. The amount of data it can store and recall far exceeds what humans can. And then there are physical capabilities, meaning that, generally speaking, AI doesn’t get tired or bored in the same way humans do.

LOVE: So one of the ways that you could actually take these different skill sets and put them together, as opposed to having one replace the other, might be, instead of using AI to completely replace human resources for hiring, to use it to find talent where you didn’t know it existed.

RAHMAN: To me, an ustopia version is, how can AI help organizations find candidates that are systematically ignored or overlooked, either due to implicit or explicit bias, or because it’s hard for humans to go through thousands of applications, or even—for some organizations—millions of applications. It’s impossible. But can we use AI to help us find and bring to our attention candidates that otherwise would have been overlooked?

PAVIN: That is totally an example of using our values to guide the technology.

LOVE: That’s exactly right. And Professor Rahman also makes the point that as we think about automating certain tasks, we shouldn’t just be asking, you know, which tasks to automate and how, but also how we’re going to build some backstops into the system.

RAHMAN: If you’re going to use AI to automate a process, at what point should the human be in the loop? When can a human intervene so that they can override the AI system so that they can provide their own situational expertise and judgment?

LOVE: So I wanted to end this conversation with a question that does pop up pretty often when we start to talk about advancements in AI, and that is this question about whether there will ever be artificial general intelligence. You sometimes hear about it called AGI, and that’s basically this idea that a machine could be as smart as a human, have kind of human-level intelligence or more. And Professor Rahman had a pretty adamant response.

RAHMAN: No, it’s not going to happen. Um, certainly not in our lifetimes. Of course, no one can perfectly predict the future, but based on my own assessment and those of other experts, I’m fairly confident that we are not going to reach AGI. And if I’m wrong, I don’t know, I’ll treat you to dinner. I don’t know what would be fun, but you can hold me to that prediction.

PAVIN: Yeah, that is comforting to hear.

LOVE: It is comforting to hear, and it is also a call to action. I think he really does believe that, as transformative as all of this change could be, we have time to decide who these transformations will benefit.

You know, think about those pilots. They were able to adopt automation in ways that were mutually beneficial for themselves and for public safety. So we really do have time to make deliberate decisions about how we want to use AI. The question is kind of whether we can collectively get our act together.

PAVIN: Well, Jess, this has been really informative. Thank you.

LOVE: Thank you so much. I was excited to chat with you about this.

[CREDITS]

PAVIN: This episode was produced and mixed by Nancy Rosenbaum. Music was by Podington Bear. The Insightful Leader is produced and edited by Laura Pavin, Jessica Love, Susie Allen, Fred Schmalz, Maja Kos, and Blake Goble. Special thanks to Hatim Rahman. Want more episodes of The Insightful Leader? You can find us on iTunes, Spotify, or our website: insight.kellogg.northwestern.edu. We’ll be back in a couple of weeks with another episode of The Insightful Leader Podcast.

Featured Faculty

Hatim Rahman, Associate Professor of Management and Organizations
