Making Employees Compete for Rewards Can Motivate Them—or It Can Backfire
Nov 2, 2017

When employees care about each other, rewarding group performance may be the better strategy.


Based on the research of Pablo Hernandez-Lagos, Dylan Minor, and Dana Sisak

In the movie Glengarry Glen Ross, Alec Baldwin’s character Blake gives a famous speech to motivate a sales team: “As you all know, first prize is a Cadillac Eldorado. Anybody wanna see second prize? Second prize is a set of steak knives. Third prize is you’re fired.”

The scheme is a classic example of relative incentives: rewards based on how an employee’s work ranks among team members, instead of absolute measures such as number of sales. These days, the best workers usually receive promotions or bonuses rather than cars or cutlery, but the premise is the same. By pitting workers against each other, the reasoning goes, relative incentives can motivate employees to work harder.

But could this strategy backfire if colleagues care about each other? After all, every additional bit of effort one person puts in reduces the chances that others on the team will win the reward. Instead of clawing for the top spot, unselfish employees might scale back their productivity to go easier on their coworkers.

“Maybe you don’t try quite as fiercely,” says Dylan Minor, an assistant professor of managerial economics and decision sciences at Kellogg.

In a series of experiments, Minor and colleagues found evidence to support that idea. They also found that a few selfish team members can change these dynamics considerably.

The results suggest that in workplaces where people are more caring—say, a company that emphasizes corporate social responsibility—relative incentive schemes may provide less motivation. But for teams composed of many selfish people, “relative incentives are still probably going to work wonderfully,” Minor says.

Incentivizing Employees

The researchers wanted their study to answer several questions. The first was whether workers’ benevolent feelings toward each other could make relative incentives less effective.

Second, they were interested in exploring the possibility of collusion under relative incentive schemes. Would everyone agree to slack off in order to reduce the amount of work needed to get rewards?

The idea is that “we’d rather not strain ourselves too greatly and then still be able to get the reward,” Minor says. Otherwise, “they’re all putting in this massive amount of effort, getting burnout, having heart attacks, not seeing their kids graduate from school, all these kinds of things—for the same Cadillac.”

Third, the researchers wanted to know what type of person—benevolent or selfish—would lead a group’s collusion. “Someone needs to step up and even suggest such a thing,” he says.

Do Relative Incentives Work to Motivate Employees?

Minor collaborated with Pablo Hernandez-Lagos at New York University Abu Dhabi and Dana Sisak at Erasmus University Rotterdam to investigate. The team recruited 147 undergraduate students at the University of California, Berkeley to participate in a series of experiments on computers.

The participants were told that during the experiment they would be using a made-up currency called “Berkeley Bucks,” where 1 Berkeley Buck was equivalent to about 1.5 cents. At the end of the experiment, the students received the cash value of their Berkeley Bucks.

First, the researchers evaluated the students’ level of selfishness.

In this part of the experiment, participants were given Berkeley Bucks nine times and could choose each time whether to share any of it with two anonymous team members. The students had no incentive to share the money other than altruism. People who always kept all the money for themselves were classified as selfish. About one-fifth of the participants fell into this category. The remaining four-fifths gave at least some money to their peers, showing care for others.
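In code terms, the classification rule amounts to something like the sketch below. This is only a minimal illustration: the function name and data shape are made up, and the article specifies only that participants who never shared anything were labeled selfish.

```python
def classify_participant(shares_given):
    """Label a participant 'selfish' if they shared nothing in all nine decisions,
    and 'caring' otherwise (the split the article describes)."""
    return "selfish" if all(amount == 0 for amount in shares_given) else "caring"

print(classify_participant([0] * 9))                       # selfish: kept everything every time
print(classify_participant([0, 2, 0, 1, 0, 0, 3, 0, 0]))   # caring: shared at least once
```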

In the second part of the experiment, the researchers tested whether these caring participants would tend to hold back and put in less effort than selfish ones so that their teammates would be rewarded.

Participants were teamed up with two different anonymous peers to play 29 rounds of a game. In every round, each person started with 12 Berkeley Bucks. Participants had to decide how many units of effort, each of which cost 1 Berkeley Buck, to contribute. Then a reward pot—always 45 Berkeley Bucks—was divided among the participants based on their relative contributions for each round. Participants also kept whatever money remained from their initial allocation of 12 Berkeley Bucks.

Though the specifics of the game were complex, the dynamics were fairly simple: since the reward pot was always the same, and payouts were relative, the entire group would benefit if every team member held back.

After all, if everyone put in the maximum effort, they were all considered average and no one got the big payout. Yet if they all put in the minimum effort, they were also considered average—but got to pocket more of their unused Berkeley Bucks, making the overall payout heftier.
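To see the arithmetic behind this, consider a rough sketch of the round payoffs. The article says only that the pot was divided based on relative contributions; a simple proportional split is assumed here for illustration, so the numbers below show the incentive rather than the study's exact payoff rule.

```python
ENDOWMENT = 12   # Berkeley Bucks each player starts a round with
POT = 45         # reward pot, assumed here to be split in proportion to effort

def round_payoffs(efforts):
    """Return each player's payoff: unused endowment plus an assumed
    proportional share of the pot."""
    total = sum(efforts)
    if total == 0:
        shares = [POT / len(efforts)] * len(efforts)  # assume an even split if no one contributes
    else:
        shares = [POT * e / total for e in efforts]
    return [ENDOWMENT - e + s for e, s in zip(efforts, shares)]

print(round_payoffs([12, 12, 12]))  # maximum effort: 15 Bucks each
print(round_payoffs([1, 1, 1]))     # minimum effort: 26 Bucks each
print(round_payoffs([1, 1, 12]))    # one defector: ~14.2, ~14.2, ~38.6
```

Under this assumed rule, equal effort always splits the pot evenly, so the team as a whole does best when everyone contributes the minimum; but a lone player who quietly contributes more captures most of the pot, which hints at why agreements to hold back can be fragile.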

Finally, in order to test for collusion, some teams were allowed to chat online during these 29 rounds, while others had no communication with teammates.

When Selfishness Leads to Collusion

The researchers found that the more caring participants did indeed tend to hold back. On average, benevolent team members contributed 15% less effort than selfish people did, making it more likely that their teammates would get the bigger payout.

The ability to chat with teammates did lead to collusion: about three-quarters of teams that could chat ended up colluding, which drove down overall team effort. For example, a player would suggest that everyone put in only 1 unit of effort, ensuring that everyone received a large payoff since they pocketed nearly all of their unused bucks.

So who were the prime colluders?

One might have expected that the caring people would suggest that everyone cooperate. But it turned out that selfish participants were nearly three times as likely as their caring colleagues to lead collusion. This pattern might have arisen because overall, selfish people tend to be more strategic, Minor speculates.

But collusion succeeded only if the team contained at most one selfish person. If two or three selfish people were involved, “it basically never worked out,” Minor says. Instead of putting in the minimum effort, as the team had agreed, one selfish person would stab teammates in the back and contribute more effort, thus grabbing more prize money.

Sometimes these participants even appeared to lie about what had happened. In chat conversations, they might say, “Whoops, sorry about that, guys” and say they had pressed the wrong button, Minor says. While it is possible that their actions were accidental, he says, “it’s a very suspicious and convenient accident.”

Benevolent people might be more likely to stick with plans for collusion because they want everyone to get a good payoff, Minor says. “That creates a little bit more glue for them to cooperate.”

Lessons for Leaders

So how can managers use this information to help improve team performance?

The results suggest that relative incentives may not be ideal in companies composed of more caring people, such as nonprofits or firms that emphasize social responsibility, Minor says. Instead, team-based rewards that allocate prizes based on the whole group’s performance might yield better results. Companies could even consider using different incentive schemes for different teams, depending on the personalities of people in each group.

In the real world, messy personal interactions also might come into play. If the person suggesting collusion is a bully, other workers might be more likely to agree to collude. But unselfish employees in a small company might not agree to game the system.

Collusion at small companies may be “a bit less likely because you actually do care about the owners,” Minor says.

Overall, the study supports Minor’s theory that benevolent people may throw a wrench into relative incentive schemes. They are less likely to want their office to be a “dog-eat-dog world,” he says, “so they’re more prone to back off on their efforts.”

Featured Faculty

Dylan Minor, member of the Department of Managerial Economics & Decision Sciences faculty until 2018

About the Writer
Roberta Kwok is a freelance science writer based near Seattle.
About the Research
Hernandez-Lagos, Pablo, Dylan Minor, and Dana Sisak. “Do People Who Care About Others Cooperate More? Experimental Evidence From Relative Incentive Pay.” Experimental Economics. doi: 10.1007/s10683-017-9512-9.

