How Companies Can Do Data Privacy Better
Policy Oct 4, 2021


Not all efforts are costly, and being known for strong protections could give firms a competitive advantage.

Illustration by Yevgenia Nayberg: a computer surrounded by chains and padlocks.

Based on the research of Itay P. Fainmesser, Andrea Galeotti, Ruslan Momot, Yanzhe (Murray) Lei, and Sentao Miao

As consumers in today’s digital world, we’re used to giving away huge amounts of personal data. We enter our age and credit card number when we register for an online service; we allow companies to track what we click on and buy; we often broadcast our geographical location.

In theory, much of this information is intended to help firms provide better, more personalized service. But as customers become increasingly aware of the risks of their information being stolen by hackers, misused, or sold to third parties, they’re looking for stronger privacy protections, says Ruslan Momot, a visiting assistant professor of operations at Kellogg.

So how should regulators and firms approach preserving privacy?

In two papers, Momot and colleagues examine the issue. They find that it’s not enough for policymakers to choose between requiring safeguards against breaches or restricting the amount of data that companies can gather. “You need to regulate two sides of the company’s data strategy, both data protection and data collection,” says Momot, who is on leave from his position as assistant professor of operations management at HEC Paris.

The good news: not all privacy measures are costly to the bottom line. One protective measure, which involves adding “noise” to the output of personalization algorithms, is unlikely to substantially cut company revenue in many circumstances.

“Preserving privacy is not that costly,” he says.

Momot argues that firms should not resist pressure to better protect their customers’ data. In fact, beefing up privacy measures could not only help companies comply with new privacy laws and regulations but could also bring in more business.

“Companies should actually embrace this because this might become a competitive edge,” he says.

Growing Concerns

Amazon’s product recommendations, based on the data it’s collected about us, are often useful. And, of course, Uber wouldn’t work if we didn’t share our location.

But a stream of news about hackers breaching companies’ databases, as well as growing uncertainty around how information is used and whether it is being shared with third parties, has put customers on edge.

Momot points to a couple of indicators that customers are becoming more guarded. A Pew Research Center survey conducted in 2019 found that 81 percent of participants felt the risks of companies’ data collection outweighed the benefits, and 52 percent had recently decided to avoid a product or service due to concerns about giving away personal information. This year, Apple started requiring apps to ask users for permission to track their activity and share those data with other apps. Among U.S. users who made a choice during the first three weeks after this new feature became available, 94 percent elected not to allow tracking, according to one industry analysis.

The Downside of Network Effects

So what can regulators do to respond to this growing anxiety?

In one study, Momot and his collaborators, Itay Fainmesser at Johns Hopkins Carey Business School and Andrea Galeotti at London Business School, began by investigating how businesses might, or might not, be incentivized to invest in data privacy.

The team developed a mathematical model of the parties involved in the data market. This included a company, its users, and so-called “adversaries” who wanted to access consumers’ data for harmful purposes. This could include hackers and criminals, as well as entities like governments—basically anyone whose possession of data could make users uncomfortable.

The model predicted that as the firm began gathering data, user activity increased: customers were benefiting from more personalized service. At that stage, the size of the company’s database was small, so it didn’t hold much allure for adversaries. But as the company amassed more and more information, the data trove became more attractive to hackers and other third parties. Privacy risks started to outweigh benefits, and user activity dropped.

“As users become more and more aware, they start to choose companies based on whether the companies are preserving privacy.”

— Ruslan Momot

One key point, Momot says, is that privacy risk, at its core, turns out to be driven by negative network effects.

“Network effects” refers to the idea that a user’s decision to participate in an activity on a platform depends partly on how many other people are using it. Companies such as Facebook have relied heavily on this phenomenon. If a person is the only one in their social circle on the site, it’s not particularly useful; but as more people sign up, the service becomes more beneficial to each person.

Privacy risks, however, are driven by negative network effects. The larger the number of users, the bigger the company’s database, and the more lucrative a target it becomes for adversaries to attack.

Network effects “brought these companies a big chunk of business,” Momot says. But in the realm of data privacy, “they are working in the opposite way.”

Internalizing Risks

The team then used the model to explore the types of regulations that would effectively protect consumers.

First, they examined a hypothetical scenario where policymakers set requirements for data protection but didn’t limit data collection. As one might expect, companies gathered more personal information than they needed. Conversely, if regulators restricted data collection but ignored data protection, firms didn’t guard customers’ data strongly enough.

The problem: a data leak simply wouldn’t affect a company as much as it affected customers. “Companies don’t internalize the privacy risks that the consumers are facing,” Momot says. While a firm might lose some users after a data breach, many large companies are monopolies that enjoy positive network effects. If most of a customer’s friends remain on Facebook, that person can’t get the same benefits by moving to another social-media site.

The team concluded that policymakers must regulate both data protection and collection. Protection might be required in the form of certain encryption techniques or antivirus software.

Collection could be restricted in a couple of ways. Regulators could impose liability fines on companies whose data were leaked, with the amount reflecting how much users were harmed. Or policymakers could tax data collection, thus discouraging firms from gathering personal information indiscriminately.

Personalized Services

Of course, these measures impose a range of costs on businesses—but another study suggests that such costs are not always onerous.

In this study, Momot collaborated with Yanzhe (Murray) Lei at Queen’s University and Sentao Miao at McGill University to explore how a particular privacy measure would hit a company’s bottom line.

The researchers focused on firms that provide personalized service to users, based on how other users have behaved in the past. For instance, the company might store customers’ demographic information and purchasing behavior in a database and run algorithms to predict the products that similar people would want or the prices that people with similar backgrounds are likely to pay.
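
To make that setup concrete, here is a minimal sketch of this kind of personalization. The nearest-neighbors approach, attribute names, and numbers are illustrative assumptions, not the specific algorithms studied in the research:

```python
# Hypothetical sketch: setting a personalized price from the records of similar past customers.
# The attributes, data, and nearest-neighbors logic are illustrative assumptions only.
from statistics import mean

# Each record: (age, income in thousands, amount the customer paid)
past_customers = [
    (34, 60, 310), (36, 65, 330), (52, 90, 410),
    (29, 45, 260), (40, 70, 350), (55, 95, 430),
]

def personalized_price(age, income, k=3):
    """Average what the k most similar past customers paid."""
    by_similarity = sorted(
        past_customers,
        key=lambda r: abs(r[0] - age) + abs(r[1] - income),  # crude similarity score
    )
    return mean(r[2] for r in by_similarity[:k])

print(personalized_price(age=35, income=62))  # a price tailored to this profile
```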

The problem is that this strategy puts users’ personal information at risk—even if hackers don’t directly breach the database.

For example, hackers might register thousands of fake users, entering slightly varying demographic details for each one. They could then monitor how the offered product choices or prices change if one piece of a user’s profile, such as gender, is altered—essentially giving them a window into how the algorithm works. If hackers then get access to the algorithms’ output for real users, they can reverse-engineer that information to figure out each person’s characteristics.
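
A rough sketch of that probing strategy is below, assuming for illustration a deterministic pricing algorithm that the attacker can only query; the quote_price function and its attributes are hypothetical stand-ins, not the systems analyzed in the paper:

```python
# Hypothetical sketch of the probing attack: fake profiles that differ in a single attribute
# reveal how the pricing algorithm responds to that attribute, so an observed output can be
# mapped back to a real user's characteristics.

def quote_price(age, is_new_customer):
    # Stand-in for the firm's black-box algorithm; the attacker can only call it.
    return 300 + 2 * age + (25 if is_new_customer else 0)

# Step 1: register fake profiles that vary one attribute at a time and record the outputs.
probe = {
    (age, flag): quote_price(age, flag)
    for age in range(18, 80)
    for flag in (True, False)
}

# Step 2: given a price quoted to a real user, look up which profiles would produce it.
def infer_profiles(leaked_price):
    return [profile for profile, price in probe.items() if price == leaked_price]

print(infer_profiles(397))  # narrows the real user down to a few candidate profiles
```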

So the team explored how firms could keep providing personalized services without compromising users’ data privacy. They chose to use a common privacy standard called differential privacy, which guarantees that a system’s output, such as a product recommendation, does not depend substantially on the data of any individual customer. (Companies such as Apple, Google, and Microsoft use this standard, Momot explained.)

The researchers’ strategy involved adding some “noise” to data that hackers might obtain.

In one variation of this technique, companies would add noise to the output of personalization algorithms. Let’s say that a health insurance company determined that the ideal monthly premium for a user’s policy was $326. The firm would then perform the digital equivalent of flipping a coin; if it landed heads, the software would add a small pre-calculated amount, such as $1, to the price. Similarly, a shopping website might present a slightly different product or assortment of products to a customer than the optimal one—for instance, brown instead of black shoes.
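
Here is a minimal sketch of that coin-flip idea. The mechanisms analyzed in the research are more elaborate and calibrated to a formal differential-privacy guarantee; this only illustrates perturbing the algorithm’s output before it reaches the user:

```python
# Illustrative sketch of the coin-flip example above, not the paper's actual mechanism:
# randomly nudge the algorithm's output before showing it to the user.
import random

def noisy_price(optimal_price, bump=1.0):
    """Flip a digital coin; on heads, add a small pre-calculated amount to the price."""
    if random.random() < 0.5:
        return optimal_price + bump
    return optimal_price

print(noisy_price(326.0))  # shows either 327.0 or the original 326.0
```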

The downside, of course, is that the companies are deliberately deviating from their optimal decisions, such as the price to offer or a product to recommend, which may reduce the chances of customers buying a product, and consequently reduce company revenue.

But when the researchers implemented this privacy-protection approach in a mathematical model, they found that such deviations, if done right (using the algorithms the team developed), do not cut firms’ profits much, as long as the company has a large database of past user behavior. “This reduction in revenue is not that large,” Momot says. Having extensive records allowed the personalization algorithm to make reasonably accurate predictions and, consequently, decisions. So fudging the results a bit didn’t have a dramatic effect on profits.

To test the idea on real-world data, the team calibrated their model on a dataset of about 208,000 auto-loan applications and about 45,000 resulting loans from 2002 to 2004. They found that if the company had 1,000 data points about past users, it reached 80 percent of its maximum possible profit when no privacy protections were in place. (Reaching 100 percent profit would require an algorithm that could perfectly predict consumer behavior.) When noise was added to its algorithms’ output, that figure was 76 percent. And the difference shrank as the database grew. If the firm had 6,000 data points, the profit gap was 2 percentage points.

Prioritizing Privacy

Understanding privacy issues is complex. While some companies, such as Skyflow, already help firms with data privacy and compliance, Momot envisions that more will soon emerge to give companies a set of tools to better handle their users’ data.

Overall, user-privacy protection is not an issue that companies should avoid. Even if regulations don’t require firms to step up consumer protections, doing so may give companies an advantage over competitors with more lax protocols.

“As users become more and more aware, they start to choose companies based on whether the companies are preserving privacy,” Momot says.

Some companies may put data privacy on the back burner and hope it doesn’t become a major issue. But that mindset needs to shift, he says.

“Along with maximizing revenues and profits, this should be one of the first priorities,” Momot says.

Featured Faculty

Ruslan Momot was a Visiting Assistant Professor of Operations from 2021 to 2022

About the Writer

Roberta Kwok is a freelance science writer based in Kirkland, Washington.
