Policy | Aug 3, 2018

How Governments Can Better Defend Themselves Against Cyberattacks

The threat of retaliation can keep the peace. But that assumes you know who is attacking you.

Illustration by Yevgenia Nayberg

Based on the research of

Sandeep Baliga

Ethan Bueno de Mesquita

Alexander Wolitzky

In the early 2000s, hackers successfully infiltrated a series of secure military computer networks across the United States. Through these breaches, later dubbed “Titan Rain,” the hackers pilfered a wealth of sensitive data, including Army helicopter specs, the Air Force’s flight-planning software, and schematics for a NASA Mars orbiter.

American leaders have typically vowed swift and fierce retaliation for any attack on the United States by a foreign actor. So why was there no retaliation for this provocation?

The answer comes down to attribution. “If North Korea attacks us with nuclear weapons, we observe that it is North Korea, so we retaliate against North Korea,” explains Sandeep Baliga, a professor of managerial economics and decision sciences at Kellogg.

But in cyber warfare, attributing an attack is not so easy. While experts suspected that the Chinese government was behind Titan Rain, it was possible that it had been the work of rogue Chinese civilians, or even another nation that manipulated its digital footprints to make China appear responsible.

This uncertainty presents a dilemma. For decades, the U.S. military has relied on the threat of retaliation to deter would-be aggressors. Most famously, the doctrine of “mutually assured destruction” warded off Soviet nukes during the Cold War. But if the U.S. can no longer pinpoint and retaliate against its aggressors, then that doctrine is hard to apply, Baliga says.

Simply getting better at identifying one’s attacker, it turns out, is not enough.

In a new paper with University of Chicago’s Ethan Bueno de Mesquita and MIT’s Alexander Wolitzky, Baliga formulates a deterrence theory for the Internet age.

Using a game-theoretic model of multiplayer interaction, he finds that if any party becomes more aggressive, attacks increase across the board. The reason: When one party appears especially hostile, it tends to take the blame (and receive the retaliation) for any attack that is hard to attribute. That gives the other actors an incentive to launch more attacks, since they can avoid repercussions by letting the victim believe that the most hostile aggressor was responsible—or, more nefariously, by making it look like that aggressor was responsible.

“If somebody wants to trigger a war between us and China, then they have every reason to do a hack that looks like China did it,” Baliga says.

But the same model also offers a new strategy to deter cyberattacks: if nations get better at both detecting attacks and identifying their perpetrators, then cyber peace is more likely to prevail.

Modeling Cyber Warfare

To analyze what happens in a cyberattack, the researchers conceived of a straightforward scenario. “There’s one defender, and multiple possible attackers,” Baliga explains, “and any of the attackers can attack the defender.”

If an attacker chooses to attack the defender, they receive some payoff. “It could be something quite concrete, like if you find the plans for a stealth bomber,” Baliga says. “Or you find a bunch of credit card numbers, and you use those to do illegal trades.”

Next, the defender receives some “signal” suggesting whether they have been attacked and who is to blame. In the real world, this signal typically includes the digital footprints left in the wake of a suspected cyberattack (such as the signs suggesting, but not proving, that China was behind Titan Rain).

However, this signal is rife with ambiguity—it conveys only that the defender may have been attacked, and that a certain party is responsible, leaving plenty of room for error.

Sometimes, defenders will not even realize that they have been attacked (such as when the Iranian government failed to detect malware installed in its nuclear facilities, instead blaming malfunctions on faulty parts). The researchers call this “detection failure.” Other times, defenders believe that they have been attacked even when they have not (as perhaps was the case in 2008, when the Department of Defense suspected Russia of installing a worm that came from a U.S. soldier’s USB drive). The researchers call this a “false alarm.” And sometimes the signal will lead them to blame the wrong party for the attack. The authors call this “misidentification.”
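To make this signal structure concrete, here is a minimal sketch of how such a noisy signal might be generated. The probabilities and attacker names are invented placeholders for illustration, not the parameters of the authors’ model.

```python
import random

# Illustrative probabilities -- placeholders, not the paper's parameters.
P_DETECTION_FAILURE = 0.3   # a real attack goes unnoticed
P_FALSE_ALARM = 0.1         # the defender "sees" an attack that never happened
P_MISIDENTIFICATION = 0.2   # an attack is detected but blamed on the wrong party

ATTACKERS = ["A", "B", "C"]

def signal(true_attacker=None):
    """Return the defender's noisy signal as (attack_detected, blamed_party).

    `true_attacker` is the party that actually attacked, or None if no attack occurred.
    """
    if true_attacker is None:
        # No attack, but occasionally a false alarm points at a random party.
        if random.random() < P_FALSE_ALARM:
            return True, random.choice(ATTACKERS)
        return False, None
    # A real attack may go entirely undetected (detection failure).
    if random.random() < P_DETECTION_FAILURE:
        return False, None
    # Detected, but the blame may land on an innocent party (misidentification).
    if random.random() < P_MISIDENTIFICATION:
        return True, random.choice([a for a in ATTACKERS if a != true_attacker])
    return True, true_attacker
```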

Based on the imperfect signal, the defender must choose whether, and against whom, to retaliate.

Obviously, every attacker wants to avoid retaliation. “Maybe I’ve hacked and found the stealth bomber plans—but if I get attacked and some secret stuff of mine gets taken away or some cyber infrastructure is destroyed, then I might regret my attack altogether,” Baliga explains. “So the payoff depends on both what I find through my hack, and whether I’m retaliated against or not.”

At the same time, the defender does not want to counterattack willy-nilly. After all, if they retaliate against an innocent party, it can set off a chain reaction of back-and-forth aggression between two formerly peaceful parties.
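A stylized payoff sketch, with made-up numbers, captures the trade-offs described above: an attack can become a net loss if it draws retaliation, and the defender pays a price for striking an innocent party.

```python
# Made-up numbers, purely for illustration.
GAIN_FROM_HACK = 5              # value of what an attacker steals
RETALIATION_COST = 8            # cost to an attacker of being hit back
DAMAGE_FROM_HACK = 4            # harm the defender suffers from an attack
WRONGFUL_RETALIATION_COST = 6   # defender's cost of provoking an innocent party

def attacker_payoff(attacked: bool, retaliated_against: bool) -> int:
    payoff = GAIN_FROM_HACK if attacked else 0
    if retaliated_against:
        payoff -= RETALIATION_COST   # with these numbers, the hack becomes a net loss
    return payoff

def defender_payoff(was_attacked: bool, retaliated: bool, hit_true_attacker: bool) -> int:
    payoff = -DAMAGE_FROM_HACK if was_attacked else 0
    if retaliated and not hit_true_attacker:
        payoff -= WRONGFUL_RETALIATION_COST  # retaliating against the innocent backfires
    return payoff

print(attacker_payoff(attacked=True, retaliated_against=True))   # -3: the attack backfired
```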

The researchers translated this scenario into mathematical language. From there, they could deduce the strategies that provide the optimal result for each party, given all of the other parties’ strategies. (This is the concept of a Nash equilibrium, named for its inventor, John Nash, the subject of the movie “A Beautiful Mind.”)
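For readers who want the formal statement, the standard textbook definition (a generic one, not anything specific to this paper’s model) is that a strategy profile $\sigma^* = (\sigma_1^*, \ldots, \sigma_n^*)$ is a Nash equilibrium if no player can gain by unilaterally deviating:

$$u_i(\sigma_i^*, \sigma_{-i}^*) \;\ge\; u_i(\sigma_i, \sigma_{-i}^*) \qquad \text{for every player } i \text{ and every alternative strategy } \sigma_i.$$

In this setting, each attacker’s strategy is whether to attack, and the defender’s strategy is a rule mapping signals to retaliation decisions.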

Under which circumstances would attackers decide to attack? And how would defenders retaliate, given so much uncertainty? “Our first objective was just to provide a structure to think through the various attribution problems that might arise,” Baliga explains. This led to the taxonomy of detection failure, false alarms, and misidentification. “With that in place, we didn’t know how the analysis would go.”

Aggression Breeds More Aggression

The most important result of the model: once one potential attacker becomes more aggressive, all of the other attackers also become more aggressive. This connection between attackers’ strategies stems from the problem of attribution.

Baliga explains the logic behind this odd conclusion. If a defender—the U.S., for example—sees signals indicating that a particular party—say, Russia—has cyberattacked them, they become more likely to blame Russia for any subsequent attack. Other countries observe this and realize that they can now hack the U.S. and collect their payoffs with little risk of retaliation.

“That then means that China can hack us, or even France can hack us—anybody can hack us and we would think it’s likely Russia,” Baliga says.
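A back-of-the-envelope Bayesian calculation shows how an ambiguous attack gets pinned on whoever looks most aggressive. The numbers below are invented for illustration, and the sketch assumes exactly one attack occurred and that the attribution signal itself is uninformative, so blame is simply proportional to each party’s perceived likelihood of attacking.

```python
def blame_shares(attack_beliefs):
    """Posterior blame after a completely ambiguous attack signal.

    `attack_beliefs` maps each suspect to the defender's prior probability
    that it was the one who attacked; blame is proportional to those priors.
    """
    total = sum(attack_beliefs.values())
    return {name: round(p / total, 2) for name, p in attack_beliefs.items()}

# If Russia is believed to be far more aggressive, it absorbs most of the blame...
print(blame_shares({"Russia": 0.50, "China": 0.05, "Other": 0.05}))
# {'Russia': 0.83, 'China': 0.08, 'Other': 0.08}
# ...so everyone else faces less expected retaliation, which invites more attacks.
```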




By looking closely at the mechanics of the model, the authors also discovered what it would take to deter cyber warfare in this context. Simply getting better at identifying one’s attacker, it turned out, was not enough.

To explain why, Baliga offers another example. If the U.S. receives a weak signal of a cyberattack, as well as a weak signal that it was committed by Russia, they may choose to retaliate against Russia. But after U.S. identification abilities improve, that same weak evidence of there having been an attack now seems less convincing than it did before—after all, if it was really a Russian attack, the new, more sophisticated intelligence would have picked it up, or so the thinking goes. So, it is now more likely that the weak signal is a false alarm, and the U.S. may choose not to retaliate after a weak signal that previously would have triggered an aggressive response.
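The reasoning in that example is just Bayes’ rule. A small numeric sketch, with invented probabilities, shows how better identification can make a weak signal less convincing:

```python
def p_attack_given_weak_signal(p_attack, p_weak_if_attack, p_weak_if_no_attack):
    """Posterior probability of a real attack after seeing only a weak signal."""
    numerator = p_weak_if_attack * p_attack
    denominator = numerator + p_weak_if_no_attack * (1 - p_attack)
    return round(numerator / denominator, 2)

# Invented numbers. Before: real attacks often leave only weak traces.
print(p_attack_given_weak_signal(0.3, 0.5, 0.2))   # 0.52 -- above, say, a 50% bar for retaliating

# After identification improves, real attacks usually leave strong traces,
# so a merely weak signal now looks more like a false alarm.
print(p_attack_given_weak_signal(0.3, 0.2, 0.2))   # 0.3 -- below the bar, so no retaliation
```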

“If I’m the defender and I’m retaliating less aggressively after some signals and more aggressively after others, it is not clear how the net effect goes. It could turn out that I retaliate less on average after my identification technology improves,” Baliga says.

That makes everyone more aggressive, since other countries now see an opportunity to attack with fewer consequences, he explains. So a policy of reducing misidentification alone can backfire.

The model suggests a better way to improve deterrence: become more proficient at identifying attackers as well as detecting attacks. By correctly pinpointing the perpetrator without increasing the chance of false alarms, a defender is more likely to retaliate against an attacker—and that increased threat of retaliation leads to peace.

“Cyberwarfare Is Inherently Multilateral”

Baliga believes that his findings have implications for military strategy, suggesting that militaries worldwide should not only invest in better detection and attribution capabilities, but also abandon the simplistic doctrine of mutually assured destruction. While the threat of retaliation is necessary to maintain peace, vowing fiery retribution for any sign of a foreign attack does not work when we have imperfect information about those attacks.

“The policy world wants to be super macho,” he says. “They want to say, ‘If we detect a signal that China attacked, let’s retaliate with overwhelming force.’ We tell that to all our cyber-military commanders. But that’s going to backfire in many ways.”

The paper indicates that such a policy could be exploited, for instance, by someone seeking to foment a conflict between the U.S. and Russia at a time when distrust of Russia is high.

Rather than maintaining a blanket policy for how to react to cyberattacks, Baliga implores leaders to first evaluate what they know for certain about an attack. “If the standard of proof is satisfied, yeah, then you should react super aggressively,” he says. “But when there’s a lot of noise, you might actually want to back off because others may exploit your policy by hiding behind misidentification.”

More than anything, Baliga hopes that his research forces policymakers to acknowledge that Cold War–era thinking, in which deterrence involved just one attacker and one defender, no longer holds. As the model demonstrates, small nations and even rogue civilians now have the power to spark global conflict between superpowers.

“It’s not bilateral anymore because of the attribution problem,” he says. “The main new thing everybody has to think through, and which we provide the first step for, is that cyber warfare is inherently multilateral.”

Featured Faculty

Sandeep Baliga, John L. and Helen Kellogg Professor of Managerial Economics & Decision Sciences

About the Writer
Jake J. Smith is a freelance writer and radio producer in Chicago.
About the Research
Baliga, Sandeep, Ethan Bueno de Mesquita, and Alexander Wolitzky. 2018. "Deterrence with Imperfect Attribution." Working paper.

