Organizations | Aug 5, 2024

How Algorithms Keep Workers Under Their Control

More than ever, even highly skilled workers find themselves being evaluated, rewarded, and punished by opaque algorithms. A new book, Inside the Invisible Cage, investigates.

[Illustration: composite image of employee productivity monitoring. Credit: Riley Mann]

Summary: In this excerpt from the book Inside the Invisible Cage: How Algorithms Control Workers, author Hatim Rahman describes how algorithms are used to evaluate workers’ performance. This often happens with little transparency, leaving workers guessing how they are being rated. The opaque nature of the algorithmic rating systems used by platforms such as TalentFinder—and the increasing reliance on them by companies hiring workers—enable platforms to control high-skilled workers within an “invisible cage”: an environment in which organizations embed rules and guidelines for how workers should behave in algorithms that shift without providing notice, explanation, or recourse for workers.

What are the implications of companies’ increasing reliance on machines to monitor and evaluate the performance of their workforce?

In this excerpt from the introduction of the new book Inside the Invisible Cage: How Algorithms Control Workers, author Hatim Rahman, an assistant professor of management and organizations at the Kellogg School, describes how algorithms are used to evaluate workers’ performance. This often happens with little transparency, leaving workers guessing how they are being rated.

The opaque nature of the algorithmic rating systems used by platforms such as TalentFinder—and the increasing reliance on them by companies hiring workers—enable platforms to control high-skilled workers within an “invisible cage”: an environment in which organizations embed the rules and guidelines for how workers should behave in opaque algorithms that shift without providing notice, explanation, or recourse for workers. This signals a profound shift in the way markets and organizations try to categorize and ultimately control people.

Tyra nervously refreshed her browser after completing her latest project on TalentFinder and received the following feedback from her manager: “She was fast, accurate, and easy to work with.” It was a succinct, positive review. But now Tyra had to wait. She clicked “refresh” again, waiting to see when and how TalentFinder’s algorithm would update her rating evaluation score. So much hinged on the algorithm’s verdict: receiving a higher wage, being noticed by more prestigious clients, and gaining visibility in search results, for starters.

The problem, however, was that Tyra had no way of knowing how the algorithm controlling her visibility and success on TalentFinder behaved. She had virtually no access to what the algorithm’s criteria were, how its criteria were weighted, or even when the algorithm would update her score. After refreshing her page for the tenth time to no avail, Tyra closed the window and ruminated on the unknowable algorithm controlling her fortunes. Frustrated, she turned to express her predicament in the best way she knew, by writing a poem:

The Algorithm,
None can explain;
To attempt to decipher,
Is an effort in vain.

It’s up, or it’s down,
With no reason in sight;
Accept or do not,
You won’t win the fight.

So work and work,
Leave the mysteries be;
To ponder the Algorithm,
Is a path to misery.

Tyra’s poem was not an exaggeration. Experienced workers, new workers, workers with high and low rating scores, workers located in different countries—all reported similarly befuddling experiences with TalentFinder’s algorithm. Sometimes the algorithm increased their rating evaluation score, sometimes it decreased it, and sometimes it did nothing at all. These outcomes had significant ramifications for workers’ ability to find work on TalentFinder, but deciphering how the algorithm arrived at its decision was maddeningly impossible—it was, as Tyra put it, “a path to misery.”
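To make Tyra’s predicament concrete, consider a toy model of such a rating system. The Python sketch below is entirely hypothetical: the criteria, weights, and update rule are invented for illustration, since TalentFinder never disclosed its actual algorithm, which is precisely the point. It shows why a worker who sees only the output score cannot reverse-engineer the hidden criteria, their weights, or the update timing.

import random

# Hypothetical criteria and weights. On a real platform these are
# undisclosed to workers and may change without notice.
HIDDEN_WEIGHTS = {
    "client_review": 0.5,
    "on_time_delivery": 0.3,
    "response_speed": 0.2,
}

def update_score(score: float, signals: dict) -> float:
    """Apply a hidden, weighted adjustment to a worker's rating."""
    return score + sum(HIDDEN_WEIGHTS[k] * v for k, v in signals.items())

score = 92.0
signals = {"client_review": 1.0, "on_time_delivery": 1.0, "response_speed": -0.5}

# Even the update's timing is opaque: refreshing the page may show a
# higher score, a lower one, or no change at all.
if random.random() < 0.5:
    score = update_score(score, signals)
print(score)  # the only thing the worker ever sees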

Inside the Invisible Cage examines how organizations’ use of algorithms is reconfiguring our understanding of control for Tyra and millions of other high-skilled workers who use online labor market platforms (e.g., Upwork, TopCoder, Gigster) to find their work. An explosion of online labor market platforms has transformed the nature of work in the past two decades. In 2021, over forty million people used an online labor platform to find work in the United States. To put that number in context, according to recent estimates, retail sales, the occupation with the most workers in the United States, had 3.6 million workers. In fact, if the five largest occupations in the United States were combined, they still would not equal the number of people who use online labor platforms to find work.

The issue is not just that lots of people use online labor platforms to find work; it is that these platforms have transformed how organizations and workers find and work with each other. The goal of these platforms is to create an Amazon for labor: a platform that provides organizations and individual clients with instant access to top talent from around the world with the click of a button. Organizations, for example, can use online labor platforms to hire high-skilled workers, such as software engineers, graphic designers, data scientists, engineers, architects, and even lawyers and doctors, from around the world to complete primarily unstructured, knowledge-intensive projects. The appeal of online labor market platforms increased with the dramatic rise of remote work precipitated by the COVID-19 pandemic.

Algorithms power the growth of online labor market platforms. Millions of participants are registered on platforms, and it would be impossible for a platform to individually match each job opportunity with workers whose skills, wages, and scheduling fit the position. Instead, these platforms use algorithms to match organizations and clients with workers, much as YouTube and Netflix use algorithms to match viewers’ interests with video content. Yet, I argue that platforms use these algorithms to do much more than match jobs with workers.
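As a rough illustration of this matching role, here is a minimal sketch in Python. It is not any platform’s actual method; it simply scores workers against a job posting by skill overlap and platform rating, the way a recommender might order a client’s search results.

def match_score(job_skills: set, worker_skills: set, rating: float) -> float:
    """Blend skill overlap with a platform rating into one match score."""
    overlap = len(job_skills & worker_skills) / max(len(job_skills), 1)
    return 0.7 * overlap + 0.3 * (rating / 100)  # illustrative weighting

workers = {
    "tyra": ({"python", "data analysis", "writing"}, 92.0),
    "sam": ({"graphic design", "writing"}, 88.0),
}
job = {"python", "data analysis"}

# Rank workers the way a client's search results might be ordered.
ranked = sorted(workers, key=lambda name: match_score(job, *workers[name]), reverse=True)
print(ranked)  # ['tyra', 'sam']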


This book argues that algorithms enable platforms to control high-skilled workers within an “invisible cage”: an environment in which organizations embed the rules and guidelines for how workers should behave in opaque algorithms that shift without providing notice, explanation, or recourse for workers. The invisible cage provides platform organizations with predictability because they can use algorithms to more efficiently collect data and monitor, evaluate, and categorize which workers are rewarded and sanctioned on a global scale. At the same time, the opaque, dynamic algorithms in the invisible cage make life more unpredictable for workers because they do not know which actions will be rewarded or punished. I show that workers remain enmeshed in the invisible cage because the platform organization’s algorithms control a worker’s ability to get jobs within and outside the platform. As a result, workers like Tyra largely see their attempts to comply with the platform’s opaque algorithms as the only option they have, even though they can theoretically leave the platform at any time. The invisible cage concept thus reflects how workers must contend with an ever-changing and opaque set of algorithms that control their job opportunities and success within and between labor markets.
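One way to see why this cage is invisible: if the platform silently re-weights its criteria, identical behavior produces different outcomes. The numbers in the sketch below are invented, but they capture the dynamic the book describes, with no notice, explanation, or recourse attached to the change.

# Identical worker behavior, scored before and after an unannounced
# re-weighting of criteria (all values hypothetical).
behavior = {"client_review": 1.0, "response_speed": 0.2, "hours_online": 0.1}

weights_v1 = {"client_review": 0.8, "response_speed": 0.1, "hours_online": 0.1}
weights_v2 = {"client_review": 0.3, "response_speed": 0.2, "hours_online": 0.5}

def score(behavior: dict, weights: dict) -> float:
    return sum(weights[k] * behavior[k] for k in behavior)

print(round(score(behavior, weights_v1), 2))  # 0.83 -- rewarded under the old weights
print(round(score(behavior, weights_v2), 2))  # 0.39 -- punished under the new ones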

Platforms maintain the invisible cage by leveraging the weak institutional oversight and regulations that govern their activities to cultivate and harness power and information asymmetries through concealed data collection, processing, and experimentation. These asymmetries have significant implications for platform organizations and workers. A major finding of this book is that algorithms can prove especially disruptive to the way workers find and complete work; this is especially the case for workers with college and advanced degrees—precisely those workers who have long been thought to be immune to technological disruption. This argument is primarily derived from six years of ethnographic data collection conducted on one of the world’s largest online labor market platforms for high-skilled work, TalentFinder (a pseudonym).

More broadly, the invisible cage signals a profound shift in the way markets and organizations try to categorize and ultimately control people. Previously, markets and organizations classified people into categories based on group-level characteristics such as education, gender, location, and age (e.g., females with an engineering degree in their 20s living in Chicago). However, my analysis shows that organizations can use algorithms to categorize people based on more granular, individual-level data in an attempt to “know” people in the invisible cage better than they know themselves. I show that the act of defining what algorithms should “know” about workers is an organizational decision that reveals what an organization prioritizes and what it wishes to (de)value. Whereas high-skilled workers traditionally had some degree of control over how they were evaluated and ranked, in the invisible cage organizations use algorithms to transfer this control to themselves, all the while removing workers’ ability to influence or contest the consequences of this transfer of control. In particular, organizations collect people’s data, dynamically classifying them using various algorithmic ratings, rankings, and categories. People cannot verify what data are collected and may struggle to understand how or why an algorithm categorizes them in a given way. Unlike previous forms of control used in bureaucratic organizations or market settings, the invisible cage is ubiquitous yet opaque and shifting, making it difficult for workers to break free from it.
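The contrast between group-level and individual-level categorization can be sketched in data terms. Everything below is hypothetical; the point is the difference in granularity, and that the second kind of profile is assembled from behavioral traces workers cannot verify or contest.

# Group-level classification: a handful of coarse demographic buckets.
group_profile = {"gender": "female", "degree": "engineering",
                 "age_band": "20s", "city": "Chicago"}

# Individual-level algorithmic profile: granular behavioral data,
# collected continuously and invisible to the worker.
individual_profile = {
    "avg_reply_minutes": 14.2,
    "login_hours_per_day": 6.8,
    "bid_acceptance_rate": 0.31,
    "revision_requests_per_job": 0.8,
}

def categorize(profile: dict) -> str:
    """Assign a visibility tier from behavioral data (invented rule)."""
    if profile["bid_acceptance_rate"] > 0.3 and profile["avg_reply_minutes"] < 15:
        return "top-rated"
    return "standard"

print(categorize(individual_profile))  # assigned by criteria the worker never sees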

By examining the implications of using algorithms to control high-skilled work, this book moves beyond existing scholarship, which has mostly focused on how platforms’ algorithms facilitate lower-paying work (e.g., Uber, Instacart, and Amazon Mechanical Turk). In lower-paying contexts, organizations use algorithms to nudge workers towards standardized behavior, revealing an enhanced form of Taylorism. In contrast, in the invisible cage the platform organization does not want workers to think about the algorithm or which behaviors are desired; rather, it encourages workers to behave “naturally” so that it can “objectively” categorize, rank, and recommend workers depending on their actions. In choosing which information is objective, measured, and valued, the organization’s algorithm reifies certain worker characteristics and choices while stripping out the complexity and unpredictability inherent in high-skilled work. Thus, as organizations increasingly use algorithms to make consequential decisions for people inside the invisible cage (such as deciding who can rent and buy property, who goes to jail, and whom to hire), this form of control increasingly determines our opportunities without allowing us to understand or respond to the factors that govern our success.

Featured Faculty: Hatim Rahman, Assistant Professor of Management and Organizations
