Podcast: Platforms Are Experimenting on Their Users … a Lot. Is That Okay?

Organizations Aug 14, 2023

On this episode of The Insightful Leader: Opaque algorithms on platforms like LinkedIn, Uber, and TaskRabbit have more power than ever. It’s starting to impact livelihoods.

Based on the research of Hatim Rahman


When a study showed that LinkedIn had quietly toyed with a key networking feature for more than 20 million users, people were upset—plenty of professionals rely on LinkedIn for work opportunities.

But LinkedIn is far from the only site that does this kind of tinkering. Most tech companies use A/B testing to try out new products and features.

Hatim Rahman, an assistant professor of management and organizations at Kellogg, wanted to know how this rampant experimentation was affecting people who used these sites for their livelihood.

On this episode of The Insightful Leader: how a culture of experimentation stands to change workers—and society at large.

Editor’s note: To take our listener survey, visit kell.gg/podsurvey.


Podcast Transcript

Laura PAVIN: Hey there, it’s Laura Pavin. Before the episode starts, I have a request for you. The request is this: Could you fill out our quick listener survey? We want to know more about you and what you think of The Insightful Leader podcast. So fill it out, give us your email, and you will be entered for a chance to nab a prize. You can find the survey link in the show notes for this episode … or by going to kell.gg/podsurvey. Again, that’s kell.gg/podsurvey. Okay! Back to the show.

...

PAVIN: Hey, it’s Laura Pavin. Last fall, The New York Times reported that the online platform LinkedIn had run experiments on more than 20 million users over a five-year period. But according to the Times, LinkedIn did not tell its users about the experiments while they were running.

As you probably know, LinkedIn is a platform specifically designed to help people network, market themselves, and strengthen their career opportunities. Among other services, LinkedIn suggests new contacts to its users through a feature called “People You May Know.”

The experiment altered the algorithm behind the “People You May Know” feature for different users: some were shown people representing closer relations—friends, or friends of friends—while others were shown more distant potential connections.

The New York Times quoted an ethicist who said: “The findings suggest that some users had better access to job opportunities or a meaningful difference in access to job opportunities.” The article in the Times received over a hundred comments, many expressing anger or distrust at the idea LinkedIn was hiding these experiments from its users:

Voice Actor 1: They do not know what harm may have occurred because they (1) do not care, and (2) have no way to assess it since they did not plan to do so. This should result in a fine but won’t.

Voice Actor 2: Ultimately, we learn once again that participation in social media is risky to one’s well-being. It might be good for us, or they might be messing with us.

Voice Actor 3: I remember getting emails purportedly from colleagues but actually from LinkedIn bots. That’s when I decided it was run by charlatans.

PAVIN: Now, LinkedIn pointed out that it had disclosed that there would be experiments in its user agreement—the fine print, so to speak. And it’s not clear that the experiments harmed any of the platform’s users. But learning about them through a news article was clearly jarring and frustrating for some users.

And researchers tell us LinkedIn is not alone. Experiments, like the ones LinkedIn was running, are extremely common among online platforms today. And they’re not always bad.

Hatim RAHMAN: There’s nothing inherently wrong with experiments or algorithms but how they are used, how they are implemented, and the professional implications for people subject to them is where my work raises interest.

PAVIN: That’s Dr. Hatim Rahman, a professor of Management and Organizations at the Kellogg School. He has been studying the experiments apps and web-based companies conduct on their users, and he says that sometimes, those experiments are done without transparency and in ways that can affect the users’ ability to earn income or network through platforms.

RAHMAN: That begins to impact livelihoods; that impacts emotions and wellbeing. You know, these experiments have huge consequences on the ability for people to get jobs and earn income.

PAVIN: Rahman recently completed a study on platform experiments, and we’ll talk more about what he learned. But first, I want you to imagine a fictitious company.

Let’s say that I started a new online company. It’s an app. Let’s call it CUTZ. With a Z. And when you decide you need a haircut, it will send somebody to your house to cut your hair.

You can choose the haircut you want from a menu, enter some times you’re available, and the app will pick a haircutter for you.

They’ll come to your house, give you the haircut, and then you pay them through the app, just like a ride share. You can tip, leave a rating, a review, and they can review you as a customer.

RAHMAN: Yeah…

PAVIN: I described my fictitious business, CUTZ, to Rahman and asked him what kinds of experiments I might want to run.

RAHMAN: You might be interested in how experimenting with people’s availability, their stated availability to take on projects, impacts the matching and success.

PAVIN: So CUTZ, my app, might experiment with showing people only haircutters who can make ALL of their preferred times work, instead of just one or two. Which means some haircutters will start losing out on opportunities to work.

RAHMAN: But if this is experimented in a way that people who are providing haircuts don’t know about, they aren’t necessarily aware of why their demand is going down. They may begin to think that, oh, it’s because I’m not logging onto the platform enough, or it’s because, you know, my profile picture isn’t good enough.

PAVIN: So, let me just pause here and say again, CUTZ is a fictitious company. But there are companies out there with very similar business models. And this scenario where the company is playing with search results in ways that can affect a user’s ability to earn revenue through the app … that’s not fictitious. That happens.

Rahman collected fifteen years of data from a platform that connects workers with prospective business clients, not unlike CUTZ. He joined the platform himself and conducted interviews with some of the gig workers and the clients who hired them.

And he found that when the app starts experimenting with its search-results algorithms, it can mystify and frustrate the workers. They try to reverse-engineer the platform. They log onto the platform more. They spend money on graphic designers or hire somebody to take professional profile pictures. But it doesn’t seem to help.

And then, Rahman says, users start sharing information with other users to try to understand why they’re not getting the results they want from the app.

RAHMAN: But as people begin to realize that you are that, you know, through word of mouth, or through other things, people begin to speculate that the platform is running an experiment, and they don’t reveal why or how the results are, who’s enrolled or who’s not enrolled. You know, somebody might not actually be involved in the experiment, but they hear through the grapevine or other things like that. If they’re unable to confirm it, it leads to behaviors that are not beneficial for the platform or anybody involved.

PAVIN: And it erodes trust.

With my fictitious company, CUTZ, I’d market it by saying what many platforms say: you’re gonna be your own boss. Make your own hours. Work when you’re available.

RAHMAN: You have control over your wages; you have control over how many projects you take. But you begin to realize that that’s not entirely true.

PAVIN: Rahman says many platforms are running experiments on their users. Networking platforms like Indeed and LinkedIn, and gig-based apps like TaskRabbit, Uber, Lyft, DoorDash, and Grubhub: they all run experiments, with varying degrees of transparency.

And one effect of those experiments? Once the users realize they’re happening, they stop trusting that they’ll be able to successfully do the things the company promised them: make money driving people around, find a new job, make their own hours. Sure, some people find success through the platform, but it’s less and less clear what those people are doing that works. As Rahman puts it:

RAHMAN: “The rules and criteria for success are now embedded in opaque algorithms.”

PAVIN: Opaque algorithms. The rules for success, i.e., the ways to actually make money, or network, are a mystery to the users. It’s very frustrating.

And, Rahman has found, users express that frustration. Sometimes they post on public forums. They might seek out competitors; think of how rideshare and delivery-app drivers switch platforms. They might find ways of subverting the platform, like asking people to pay in cash or giving their phone numbers to customers so they don’t have to rely on the app. But Rahman says most users would rather just continue using the platform … if they could only trust that it will deliver what it promises.

RAHMAN: For a lot of people in this work, it’s not that they want to, you know, stop using the platform, or people who are on Uber or Lyft, they recognize there are benefits to the platform, right? And so they want to try to maintain those benefits of the flexibility of work, the ability to be on this platform that I studied, gain access to organizations and clients all over the world. But I think that the role for us or for myself, as academics and as policymakers, is to try to push towards more mutually beneficial outcomes, rather than, you know, eliminate the entire platform or eliminate experimentation, which is very useful.

PAVIN: Especially when the results of the experiment benefit the users of the app.

Rahman says there are a couple of solutions that would allow these companies to keep running experiments without alienating their users.

One is informed consent: letting users know they will be subject to experiments, not in the fine print but in more visible ways, and giving them as much information as possible without influencing the experiment.

And here’s a big one: sharing the results of what they learned from the experiment. Going back to CUTZ: Let’s say that we learn that most people want a haircut in the afternoon, between 1pm and 4pm. We should tell that to the haircutters using the app. Yes, you can set your own hours, but if possible, try to be free between 1 and 4, at least a few days a week.

Rahman has found some companies do share these results, and when they share them in a transparent and easy-to-understand way, the platform users are appreciative.

Rahman acknowledges that these companies are unlikely to regulate themselves without some external pressure. That could come from the government, say a regulatory body that oversees rideshare apps or social media.

PAVIN: Or maybe the private sector could get involved.

RAHMAN: A third-party system that rates companies and organizations on how well they implement experimentation, right? So that could be an interim way. That’s a business opportunity for people that are looking for it, and a way to compel organizations to adhere to more-ethical experimentation without necessarily having to wait for regulation to catch up.

PAVIN: Is there an example of this kind of business activity we can point to, to show us there’s hope?

RAHMAN: We’ve seen that, right? Like, think about, like, Consumer Reports, the Better Business Bureau, or LEED certification for buildings, right? We’ve seen, again, that third-party certifications may initially be hard to catch on. But over time, businesses pay a lot of attention to them, even though, you know, again, there’s nothing compelling them to participate.

PAVIN: Right.

PAVIN: So far, Rahman says he hasn’t seen any third parties regulating or serving as watchdogs on these experiments. And the platforms aren’t policing themselves.

RAHMAN: Then, why we haven’t necessarily seen this happen is because the things that I mentioned, they introduced friction into the process. And this is what I would call kind of beneficial friction, right? Where the slowing down in the long term does create better outcomes, I think, for organizations, for people involved. I think that introducing this friction of informed consent and debriefing now, in the long term, beneficial outcomes are more trust of people in organizations, right? Right now, you know, people say there is a kind of lower trust in social-media organizations, because we saw this whole misinformation occur. And part of that that isn’t necessarily discussed more is that they were running experiments, right, to see what created more engagement.

PAVIN: Rahman points out that, in the past, there have been famous examples of experiments that went awry, harming people and leading to scandal. One famous one: the Tuskegee syphilis experiment. That’s when scientists withheld treatment from nearly 400 Black men with syphilis in order to study the long-term effects of the disease. The resulting scandal led to laws and ethics rules governing how scientists conduct medical studies on human subjects.

RAHMAN: The hope is that we don’t have the equivalent of a Tuskegee or Stanford Prison Experiment or misinformation in other contexts, whether it’s in labor platforms or in government or other work, from embracing experimentation without informed consent and debriefing. The hope is we don’t have one of those moments.

PAVIN: Drug companies, academic researchers, scientists all have rules designed to balance protecting the subjects of experimentation with the benefits of research. And Rahman does see some reason for hope that online platforms will realize they need some best practices—before we have a scandal.

RAHMAN: There isn’t systematic change that I’ve observed yet. But there is a recognition that you need to kind of create more long-lasting relationships with people on the platform, even though you don’t necessarily have to. Because if a competitor comes in, or if regulation comes down, hopefully those relationships sustain themselves, when there are threats to their business model.

PAVIN: And then you’ll have more loyal users when a competitor comes along, because people will want to go to the platform that treats them better.

RAHMAN: That’s right.

[CREDITS]

PAVIN: This episode of The Insightful Leader was written by Jesse Dukes. It was produced and edited by Laura Pavin, Jessica Love, Susie Allen, Fred Schmalz, Maja Kos, and Blake Goble. It was mixed by Andrew Meriwether. Special thanks to Hatim Rahman. Want more episodes of The Insightful Leader? You can find us on iTunes, Spotify, or our website: insight.kellogg.northwestern.edu. We’ll be back in a couple of weeks with another episode of The Insightful Leader Podcast.

Featured Faculty

Hatim Rahman, Associate Professor of Management and Organizations
