The Insightful Leader | Sent to subscribers on March 13, 2024
Don’t run from conflict

At work, as in life, conflict is inevitable (though not much fun). Sometimes we disagree over how to handle a particular project or client. Sometimes we just don’t vibe with a colleague.

But what matters, as professor of management and organizations Leigh Thompson explained in the Financial Times, is how we handle the tension.

This week, we’ll discuss some helpful ways to address discord at work. (Tragically, none involves my preferred method of dealing with conflict: hide under a blanket and hope it goes away.) We’ll also share some interesting new research on the puzzle of how to regulate AI without stifling innovation.

Reframing conflict

When experiencing conflict at work, we tend to see it as signaling a big problem. But, Thompson pointed out, it can actually be a symptom of meaningful engagement. “Conflict is a sign of a high-performance workplace ... of people who care, people who are passionate,” she said.

But that doesn’t mean you can just avoid the discord. No matter how difficult things have gotten between you and a colleague, it’s almost always worthwhile to at least try repairing the relationship. Chances are, the other person isn’t enjoying the status quo any more than you are.

So summon your courage, and find a good time and place to have the conversation.

Framing your intention clearly at the start will help, says Thompson: “Recognize that this is an uncomfortable conversation.” She suggests starting with something like: “There’s been tension between us, and I think I have contributed to that.”

She argues that by offering to change your own behavior, you set the stage for your colleague to do the same. You can read more about navigating conflict among colleagues in the Financial Times (paywall).

Regulating the robots

If all that talk of interpersonal conflict has you longing for a workplace full of robots that will never leave dishes in the office sink, well, I don’t blame you! But the AI future has its own challenges—especially when it comes to regulation.

In fact, for as long as artificial intelligence has been in the news, governments have been struggling with the question of how to regulate it.

When ChatGPT set off a flurry of regulatory activity in the United States and the European Union, Kellogg finance professor Sergio Rebelo noted that each new framework “tended to emphasize one solution.”

But which approach works best: bans, tests, or legal liability? After all, any AI regulation needs to balance the technology’s potential risks to society (like destabilizing democracy through rampant disinformation) against its potential rewards (like increasing economic productivity).

To find this balance, and to identify which policy levers might encourage it, Rebelo and his coauthors devised a theoretical model that could represent the regulation of any hypothetical new AI algorithm, regardless of its technical details. They then tested different AI-regulation scenarios within it.

They quickly discovered that, in isolation, none of the three approaches currently in use delivers a “social optimum.” However, combining two specific kinds of regulation, safety testing and limited liability, did deliver that all-important balance of risk and benefit.

With this two-part approach, governments can mandate testing of all algorithms to evaluate their safety in controlled settings before releasing them to the public. Very novel algorithms would receive more-rigorous vetting than very familiar ones, preserving incentives for innovation without encouraging recklessness.

And introducing limited liability—with robust consequences for tools that wreak havoc—would give companies a strong incentive to self-regulate how far they push the envelope with their algorithms in the first place.
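To make the incentive logic concrete, here is a toy sketch; it is not the model from Rebelo and his coauthors’ paper. It assumes simple illustrative forms (linear private benefit, a social spillover the firm can’t capture, convex expected harm) and the numbers for the liability share and the testing cap are made up for illustration. It compares the risk level a firm would choose under no liability, full liability, and limited liability paired with a safety-testing cap.

```python
# Toy illustration only (not the authors' model): why "no liability" over-risks,
# full liability can stifle innovation (some benefits spill over to society and
# the firm can't capture them), and a limited-liability share plus a testing cap
# can land near the social optimum. All functional forms and numbers are assumed.

import numpy as np

risk = np.linspace(0.0, 1.0, 1001)   # how aggressively the firm deploys a new algorithm

private_benefit = risk               # revenue the firm captures (assumed linear)
social_benefit = 1.5 * risk          # total benefit, including spillovers (assumed)
expected_harm = risk ** 2            # expected harm to society (assumed convex)

def chosen_risk(payoff, cap=1.0):
    """Risk level the decision-maker picks, optionally under a testing cap."""
    allowed = risk <= cap
    return risk[allowed][np.argmax(payoff[allowed])]

# What society would choose: all benefits minus all harms.
r_social = chosen_risk(social_benefit - expected_harm)             # ~0.75

# No liability: the firm ignores harm and maxes out risk.
r_none = chosen_risk(private_benefit)                              # 1.00

# Full liability: the firm bears all harm but captures only part of the benefit,
# so it is overly cautious relative to the social optimum.
r_full = chosen_risk(private_benefit - expected_harm)              # ~0.50

# Limited liability (the firm internalizes a share of harm), plus a testing cap
# as a backstop for the most novel, least-understood algorithms.
liability_share = 0.67
testing_cap = 0.85
r_combined = chosen_risk(private_benefit - liability_share * expected_harm,
                         cap=testing_cap)                          # ~0.75

for label, r in [("social optimum", r_social), ("no liability", r_none),
                 ("full liability", r_full),
                 ("limited liability + testing", r_combined)]:
    print(f"{label:28s} risk = {r:.2f}")
```

In this stylized setup, the liability share pulls the firm’s chosen risk toward the social optimum, while the testing cap guards against the most reckless deployments, which is the same two-part intuition described above.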

“It’s a nuanced policy,” Rebelo says, “but it corrects the misalignment between private and social incentives.”

You can read more about the research in Kellogg Insight.

“It’s long past time that players are able to capture some of the value they’ve been able to create.”

— Professor of strategy Craig Garthwaite, in The New York Times, on unionization efforts by college athletes.