We’re at a Data Privacy Crossroads. Where Do We Go From Here?
Policy · Marketing · Jul 1, 2019


What individuals, regulators, and companies need to consider as we live more of our lives online.

Illustration by Lisa Röper

Based on insights from Jennifer Cutler and Samuel Goldberg

These days, most of us understand that when we visit a website, download an app, or post on social media, we are sharing data about ourselves with companies. We also understand that the consequences of such sharing will only become more profound as we live more of our lives online. So it is perhaps unsurprising that companies are facing new pressures to protect our digital privacy.

And it’s about time. According to Jennifer Cutler, data sharing is not inherently bad—but it is inherently risky.

“There are plenty of benefits to collecting this data and analyzing it,” says Cutler, an associate professor of marketing at the Kellogg School. “For example, researchers can try to use it to make the world a better place. But the risks are also enormous, because the same information can be used nefariously.” And even when studied with the best of intentions, this data can produce unintended negative side effects.

According to Cutler and Kellogg PhD student Sam Goldberg, individuals, regulators, and companies all need to understand the privacy implications of an increasingly data-driven world.

Challenges Facing Individual Privacy

Data breaches and egregious cases of data misuse regularly provoke an outcry from consumers. Still, it is curious just how much misuse we actually tolerate.

One reason we don’t see more consumer demand for privacy protections is that consumers are often unaware of their role as “participants” in both formal and informal studies involving giant data sets.

When signing up for a free service, or even applying for a lower interest rate on a credit card, “most people have no idea what they’ve agreed to, or what can be done with their information,” Cutler says. “There might be a general sense of unease, but not a concrete understanding of how their data is being used, and how these uses could potentially harm them.”

There have been some intriguing attempts to quantify the extent to which consumers value their privacy; the challenge is that consumers often lack the information they need to assess the risk in a meaningful way and thus have no reliable means of attaching a specific value to their privacy. If they knew the likelihood of harm to their credit score, reputation, access to health or life insurance, or even physical safety, they might value their personal data differently.

Moreover, most people have little choice but to accept the standard onerous trade-off that comes with adopting a new technology, because the costs of opting out altogether are often too great. “People use the apps they need for work and to stay on pace with life—they don’t read through a hundred-page terms-of-service agreement to find the one clause that’s unacceptable,” says Cutler.


Consumers also badly underestimate the amount of information that can be gleaned from even the most pedestrian online behaviors. One of Cutler’s research areas is making predictions based on people’s public Twitter and Facebook posts. While we might expect that these seemingly innocuous actions are not private per se, we might not consider the extent to which researchers are mining them.

“Even really innocuous things such as ‘liking’ a supermarket’s page can eventually build a very predictive profile of a person, including basic demographics, but also more sensitive things like someone’s political leanings, religious preferences, and health conditions,” Cutler says. “It’s hard to determine the cost of that.”
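As a rough illustration of how this kind of profiling works, the sketch below fits a simple classifier to a synthetic matrix of page “likes.” It is not Cutler’s actual model; the data, the page indices, and the “political leaning” label are all invented. But the mechanics are the same: many individually weak signals, aggregated, become a surprisingly accurate prediction of something sensitive.

```python
# Minimal sketch: predicting a sensitive attribute from page "likes".
# All data here is synthetic; in practice the like-matrix would come from
# public social-media activity and labels from a surveyed subsample.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_users, n_pages = 500, 40

# Binary matrix: likes[i, j] = 1 if user i liked page j.
likes = rng.integers(0, 2, size=(n_users, n_pages))

# Synthetic "ground truth": a few pages are weakly correlated with the
# sensitive attribute we want to predict (say, political leaning).
signal_pages = [3, 7, 19]
logits = likes[:, signal_pages].sum(axis=1) - 1.5 + rng.normal(0, 1, n_users)
leaning = (logits > 0).astype(int)

# Fit a simple classifier on half the users, evaluate on the rest.
model = LogisticRegression(max_iter=1000)
model.fit(likes[:250], leaning[:250])
print("held-out accuracy:", round(model.score(likes[250:], leaning[250:]), 2))

# Even this toy model recovers which innocuous pages carry the signal.
top = np.argsort(np.abs(model.coef_[0]))[-3:]
print("most predictive pages:", sorted(top.tolist()))
```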

Finally, data collection is cumulative, and not always in ways that are easy to explain, track, or predict.

“A consumer might not care that a specific small app can access her iPhone photos,” Cutler says. “But when she realizes that companies are merging together, or merging their data, suddenly there’s this data beast that knows everything about you. That consumer might want to exercise more control over her data set.”

What Regulators Are Doing

Individual control is exactly what the EU is hoping to provide with the General Data Protection Regulation (GDPR), which has been pitched as the gold standard of data-protection regulation.

The GDPR, which went into effect in May 2018, is the most comprehensive policy attempt to establish rules for how companies process and handle data. “The idea behind the regulation is to set the standard for how to treat data gathering and usage,” says Sam Goldberg, a Kellogg PhD student focused on privacy issues. “And it does so within a framework that requires individual consent.”

Among several new rights and responsibilities in the GDPR for both users and firms—including the “right to be forgotten,” new rules for the reporting of incidents, and data-security requirements—is the shift from an “opt-out” to an “opt-in” default privacy setting. This new setting means that companies’ default state is not to track visitors. The GDPR additionally specifies that companies cannot deny service to those who opt out of having their data collected.

“So technically, a news publisher can’t say, ‘you have to opt in before you can read any articles,’” Goldberg says.
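A hypothetical snippet can make the opt-in default concrete. In the sketch below (the Visitor class and cookie names are invented, not part of any real framework or of the GDPR text itself), tracking cookies are set only after explicit consent, and the content is served either way.

```python
# Hypothetical sketch of an "opt-in by default" consent check.
from dataclasses import dataclass

@dataclass
class Visitor:
    consent_given: bool = False  # opt-in default: no consent assumed

def handle_request(visitor: Visitor) -> dict:
    response = {"content": "article text", "tracking_cookies": []}
    # Analytics cookies are set only after an explicit opt-in...
    if visitor.consent_given:
        response["tracking_cookies"].append("analytics_id=abc123")
    # ...but the article is served either way: access to the service
    # cannot be conditioned on agreeing to be tracked.
    return response

print(handle_request(Visitor()))                    # no tracking by default
print(handle_request(Visitor(consent_given=True)))  # tracking only after opt-in
```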

While overall the GDPR’s changes make significant, clear advances in firm reporting and structure, there are potential loopholes in this regulation. Companies are allowed to collect data for “necessary processing”—for instance, they can use cookies if they’re essential for running the website—and also for reasons of “legitimate interest,” an admittedly vague and broad stipulation which might undermine individual consent.

This raises questions about just how effective the GDPR will actually be in curbing data-collection abuses.

“‘Legitimate interest’ is potentially broad,” Goldberg says. “While the GDPR is clearly written, the way regulators might interpret compliance may not yet be clear, so companies are still experimenting and testing the legal boundaries.”

What about here in the U.S.?

Experts say that robust legal frameworks like the GDPR are unlikely to pass in the U.S., but they may not have to. For companies like Google and Facebook, maintaining country-specific privacy policies as they operate globally might be too cumbersome and risky, not to mention expensive, to make sense. Adhering to the most stringent of these regulations may become the norm for global tech companies, much as airlines have adapted to the patchwork of national safety standards imposed upon them.

Privacy and the Structure of Our Society

However, providing individuals with more control over their own data may only go so far in alleviating the larger-scale, longer-term effects of having so much data collected about all of us.

It’s clear, for example, that large-scale data collection and analysis could have profound effects on the structure of our society, as predictive analytics can reinforce preexisting biases. If a bank uses social media to predict the likelihood of someone defaulting on a loan, or a police department relies on predictive models to determine whom it profiles, any racial, gender, or class prejudice found in the data can be reflected and potentially magnified by these models to yield discriminatory results. Correcting this isn’t always straightforward, and will require reflection on the part of the modelers.
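A toy example shows the mechanism. In the synthetic lending scenario below (invented for illustration, not drawn from the article or any real dataset), historical approvals were biased against one group at every income level. A model trained on those decisions, using only ostensibly neutral features such as income and a zip-code proxy, reproduces the disparity even though group membership is never given to it.

```python
# Minimal sketch: historical bias in training labels resurfaces in a
# predictive model, even when the protected attribute is excluded.
# All data is synthetic and the scenario is invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 5000

group = rng.integers(0, 2, n)                                     # protected attribute (not a model input)
zip_proxy = (group + rng.normal(0, 0.3, n) > 0.5).astype(float)   # "neutral" feature correlated with group
income = rng.normal(50, 10, n)

# Historical decisions: partly income-based, but group 1 was approved
# less often at every income level -- the bias we want to expose.
p_approve = 1 / (1 + np.exp(-(income - 50) / 10)) * np.where(group == 1, 0.6, 1.0)
approved = (rng.random(n) < p_approve).astype(int)

# Train only on income and the zip-code proxy.
X = np.column_stack([income, zip_proxy])
model = LogisticRegression(max_iter=1000).fit(X, approved)

# The model's predictions mirror the historical disparity between groups.
pred = model.predict(X)
for g in (0, 1):
    print(f"group {g}: predicted approval rate = {pred[group == g].mean():.2f}")
```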

It’s also important to keep in mind what researchers call “second-order effects,” or the repercussions that result from the consequences of an action. Using social-networking data for alternative credit scoring, for instance, does more than just help or hurt lenders and loan applicants—it may alter the very nature of credit access and how people use social-networking sites. Such shifting incentives may, for example, change the way individuals connect and disseminate information on social media. Similarly, using existing data to determine the likelihood of someone committing a crime may reinforce the racial and class disparities in our criminal justice system.

To Cutler, this suggests that additional data protections may be necessary, beyond simply asking individuals to opt in.

Given how rapidly this ecosystem has evolved—consider the rise of facial-recognition software or payment options connected to social media—even full transparency on the part of companies is unlikely to resolve the deeper uncertainty around how this escalation of data collection and usage will play out.

“We’re in uncharted territory,” says Cutler. “We are going to need interdisciplinary efforts by researchers in many fields to investigate how to balance innovation with consumer and societal protections as data availability increases.”

About the Writer
Drew Calvert is a writer based in Los Angeles.