Quick: guess how many of the 270 jobs listed in the 1950 census have since been eliminated by automation. Are you thinking a quarter? A third? The answer is just one: elevator operator.
It's a statistic that Hatim Rahman wants us to keep in mind as we ponder the future of work—and specifically, the "fear that AI is going to suddenly lead to mass unemployment." Rahman spoke about how AI will affect our careers, and our society, during a recent episode of The Insightful Leader Live webinar series.
His take? “Decades of research shows that fear is unfounded.”
Instead, he says, where we’re headed is largely up to us—though we’ll need to be proactive to ensure it isn’t chosen for us. Here are four highlights from his presentation.
Even “rapid” change happens more slowly than we think
As much as it feels like technology is advancing so rapidly that we can barely keep up, it takes much longer for those advances to fully embed themselves in society. This will be true of AI, too. “It’s going to take a long time for it to penetrate an industry, especially in ways that will affect your career,” says Rahman.
This may be cold comfort to individual illustrators, translators, and journalists who have already lost some work to generative AI. But zoom out a bit, and it’s clear that while these professions are the first to be affected by the technology, they are nowhere near obsolete. Instead, they are changing, as AI is gradually incorporated into various work streams and new infrastructure emerges to support it. “The more complex the technology, the more technical, human, and monetary resources are needed to develop, integrate, and maintain [the] technology,” says Rahman.
This means that we really do have enough time to decide, as a society, how we want to use artificial intelligence
By now you’ve surely heard the argument that AI is neither good nor bad, but simply a tool. Much will depend, then, upon how we choose to deploy it. And there is time for us to make this choice collectively and deliberately.
We can choose to use AI to replace as many workers as possible—or we can instead choose to use AI to bolster talent and identify it in underrecognized places. We can choose to let machines make the bulk of the decisions around our healthcare, education, and defense—or we can choose to keep humans at the helm, ensuring that human values and priorities rule the day.
And it’s not pollyannaish to think we’ll actually have a choice. There are already areas where we’ve opted to prioritize human involvement. Take aviation. Despite estimates that more than 90 percent of a pilot’s responsibilities can be automated, our society has still decided to put well-trained pilots, capable of flying manually, in the cockpit in case things go awry.
Automation has worked out pretty well for pilots, says Rahman. “In fact, in aggregate, the number of pilots and their pay has increased for years.”
But … we’ll need to listen to as many voices as possible when setting our priorities
Of course, pilots have more than their training: they also have strong professional organizations that can advocate for them. Not all workers are so fortunate, and it’s unlikely that everyone will have an opportunity to shape the employment decisions that affect them.
This is a real problem, says Rahman, “because without diverse voices and stakeholders, the design and implementation of AI has [reflected], and will reflect, a very narrow group of people’s interest.”
For instance, much of the current discourse around generative AI has come from tech companies themselves, eager to find profitable use cases for their products. Perhaps it is no surprise, then, that "a lot of the way we talk about AI is a hammer looking for a nail: 'Here's large language models. How can we use them?'" says Rahman. "I don't think that approach is going to help us thrive. Instead, we should be thinking first and foremost about: What outcome is the AI trying to measure, predict, or generate … and should we use AI to make such predictions? This is where we need diverse voices and experts in the room answering this question."
He advises that individuals do whatever they can to join the conversation. Even grassroots organizations composed of like-minded laypeople can be effective at pressuring companies and local governments to develop and deploy AI in ways that are mutually beneficial.
And because it is impossible to intelligently advocate for our own interests if we’re kept in the dark about the decisions that affect us, we will also need to demand transparency around how AI systems are being trained, used, and double-checked.
Let machines be machines and humans be humans
Finally, Rahman points out, artificial intelligence is a bit of a misnomer in that the technology is neither “artificial” nor “intelligent.”
AI is not "artificial" in that it is trained on gigantic quantities of human data and further fine-tuned by a small army of low-paid human workers. (It also has a very real carbon footprint.) And AI is not "intelligent" in that it still can't think in any meaningful way. Rahman showed that if you ask a model like GPT-4 "What is the fifth word in this sentence?" it will tell you a different answer every time—generally the wrong one.
Instead, AI takes human inputs and manipulates them probabilistically.
Still, it can be incredibly powerful. “AI tends to excel with efficiency, speed, and the scale at which it can be implemented,” says Rahman. “AI doesn’t get tired or bored in the same way humans do.”
Conversely, he explains, humans excel at innovation, emotional intelligence, and quick adaptation to new situations. Whereas it might take AI thousands of training cycles to learn to differentiate cats from dogs, a human toddler might make the same inferences after spotting just a few goldendoodles.
With these relative strengths and weaknesses in mind, it becomes easier to think about AI systems less as our replacements and more as our collaboration partners, capable of amplifying whichever human values and priorities we dictate.
You can watch the rest of Rahman’s webinar here.
Jessica Love is the editor in chief of Kellogg Insight.