On a Friday morning in May 2009, conservative radio host Erich “Mancow” Muller decided to prove that waterboarding was not torture—by getting waterboarded himself. A military officer present at the stunt—which took place live, on the air—noted that “the average person can take this for 14 seconds.” Muller lasted less than half that time before halting the procedure. He went on to renounce his previous position on waterboarding, declaring it “absolutely torture.”
According to Loran Nordgren, an assistant professor of management and organizations, Muller’s about-face on waterboarding was an unusually public demonstration of a phenomenon known as the “hot-cold empathy gap.” “We often need to reason about situations that are inherently emotional,” Nordgren explains, “but we do so without experiencing the emotional state that we have to reason about. Much of my research is interested in how the ‘cold’ rational self predicts how the ‘hot’ emotional self will behave. People get this wrong all the time, and getting it wrong has interesting consequences.”
Psychologists have already studied hot-cold empathy gaps in contexts ranging from addiction treatment to anger management. After observing the political controversies associated with prisoner abuses at Abu Ghraib and Guantanamo Bay—not to mention stunts like Muller’s—Nordgren and his colleagues set out to study whether the legal definition of torture might be biased by a similar empathy gap.
A Blurry Line on What Constitutes Torture
Nearly all nations legally prohibit physical torture of prisoners. But waterboarding, solitary confinement, exposure to extreme temperatures, and other “enhanced interrogation techniques” blur the distinction between torture and ethical treatment. “The legal standard for defining torture is pain severity,” Nordgren explains. “This standard requires that people can form an accurate view of how bad these experiences are. Our argument is that people are inaccurate in a systematic direction—that these techniques are probably worse than we imagine them to be.”
Muller’s waterboarding experience may have provided anecdotal support for Nordgren’s hypothesis, but performing an identical procedure on dozens of research participants would be impractical as well as unethical. So Nordgren and his colleagues conducted a quartet of studies designed to “give people a taste of the specific form of discomfort produced by a specific interrogation tactic, while also staying within ethical boundaries.”
The first study examined whether hot-cold empathy gaps influenced participants’ opinions on solitary confinement—a commonly used enhanced interrogation technique. According to Nordgren and his co-authors, “considerable evidence suggests that the pain derived from social distress shares phenomenological, neurological, and psychological correlates to physical pain.” To induce the “social pain” of exclusion in his cohort of 83 undergraduates, Nordgren used a video game designed by psychologists, called Cyberball, which simulates a ball-tossing task between the participant and two other players. (The other players are automated by the game, but are displayed to the participant as “real” players with names.) The “no-pain group” of undergraduates experienced normal gameplay, receiving tosses from the other players about one third of the time. But the “social pain group” received tosses only rarely.
“It seems like a trivial exercise,” Nordgren explains, “but people are so sensitive to issues of fairness that people who are being excluded find it to be very uncomfortable. We thought that would be about as close as we could get to simulating the social pain of solitary confinement without locking people away in the lab for a few weeks.” Indeed, after playing Cyberball, both groups were asked whether they supported or opposed solitary confinement in U.S. jails—and 63 percent of the “social pain” group opposed the practice, compared with 33 percent of the “no pain” group. The “social pain” group also rated the pain of solitary confinement as more severe than the “no pain” group did.
The experimenters’ three other studies were similarly designed to expose empathy gaps in assessing the ethicality of other interrogation techniques, such as sleep deprivation and exposure to cold temperatures. Sleep deprivation was modeled by a cohort of students with full-time jobs taking night-school classes, while cold-temperature exposure was modeled by having participants either immerse their dominant arm in a bucket of ice water or stand outside in 38-degree weather for three minutes without a jacket. In each case, participants in the “hot” emotional state consistently rated the pain associated with sleep deprivation or extreme-temperature exposure as more severe than “no-pain” participants did.
Past Experience Important
These results tended to confirm the intuitions that Nordgren and his co-authors had about how empathy gaps can bias ethical assessments. But one particular result surprised them. The third study in their quartet was designed to examine whether prior experience of a “pain condition” could help people bridge the empathy gap in accurately assessing its severity. As the authors state in their paper: “It is well documented that psychologists helped conduct advanced interrogations at Guantanamo Bay; one justification for their participation was that during previous training, they had endured the precise techniques they used on detainees.”
But the results of Nordgren’s experiment showed that experiencing a painful technique (such as exposure to cold) just ten minutes before answering a questionnaire about its severity opened up an even wider empathy gap than the others they had observed: participants in this “prior pain condition” rated its severity as lower than those who had experienced no pain at all. Undergoing an enhanced interrogation technique in the past “doesn’t seem like it fosters empathy,” Nordgren says. “If anything, it may make people more callous.”
Why might the simple passing of time make people who have personally experienced pain less able to accurately assess its severity? “They are armed with the knowledge that they made it through,” Nordgren says, “and so they tend to say, ‘It must not have been that bad’.” In other words, if “Mancow” Muller had been asked to rate the severity of his waterboarding experience several hours, days, or weeks after he had gone through it, his prior insistence that the procedure did not constitute torture may have actually been bolstered. The same could be true for the military interrogators who undergo the same procedures they utilize on detainees. “What we try to indicate here is that [prior experience] doesn’t help close the empathy gap, and we have reason to believe it probably makes it worse,” Nordgren explains. “Because now the interrogator has no more visceral memory for that experience. All they know is that they lived through it.”
So is torturing people—and then immediately surveying their assessment of the experience—the only way to accurately define what constitutes torture? Nordgren assures us that such an impractical interpretation of his research is not his intention. “The point that we make is that the legal standard for defining torture is psychologically untenable,” he says. “We’re identifying the nature of the bias created by empathy gaps, which suggests the need for a more restrictive policy when defining torture. If you’re skiing and you see a red line and it says ‘cliff,’ that doesn’t mean that you should ski as close to the line as possible without going over. It means this is an area you should steer well clear of.”