Feb 26, 2016

Studying Customer Behavior? Use These Sample-Size Calculators

By Kellogg Insight | Based on insights from Blakeley McShane and Ulf Böckenholt

Say you want to run an A/B test to compare the effectiveness of two drugs or two advertising campaigns. One of the first things you will have to determine is how big your study needs to be. What is the right sample size?

“The basic idea is if you have too few subjects—if your sample size is too small—statistically you won’t be able to detect anything. You’ve wasted resources by running the study in the first place because it never had any chance to find anything,” says Blakeley McShane, an associate professor of marketing at the Kellogg School. “On the other hand, let’s say you’ve got millions of subjects. Well, then you’ve probably wasted resources because you could have learned the answer to your question with many fewer subjects.”

So how can you determine an appropriate sample size? That depends on how big an effect you expect to see. If one drug or campaign is vastly more effective than the other, you may need only about twenty people to detect the difference. But if the differences you are testing are small—like two ads with only minor differences in font size or spacing—then it may take hundreds of participants for a convincing effect to emerge.
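
For a rough sense of how strongly the assumed effect size drives the answer, here is a minimal sketch of the standard fixed-effect-size calculation for comparing two groups. The 5 percent significance level, 80 percent power, and the three effect sizes are illustrative assumptions, not figures from the research.

```python
# A minimal sketch of the classical sample-size calculation for a two-group
# comparison, using the normal-approximation formula
#     n per group = 2 * ((z_{1-alpha/2} + z_{1-beta}) / d)^2
# where d is the assumed standardized effect size.
# The effect sizes below are illustrative assumptions, not values from the study.
from scipy.stats import norm

def n_per_group(effect_size, alpha=0.05, power=0.80):
    z_alpha = norm.ppf(1 - alpha / 2)   # critical value for a two-sided test
    z_beta = norm.ppf(power)            # quantile needed to reach the target power
    return 2 * ((z_alpha + z_beta) / effect_size) ** 2

for d in [1.0, 0.5, 0.1]:               # large, medium, and small assumed effects
    print(f"d = {d}: about {n_per_group(d):.0f} subjects per group")
```

Under these illustrative settings, the large effect calls for about sixteen subjects per group, while the small one calls for more than fifteen hundred.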

Standard statistical techniques for determining sample size require researchers to know or assume a value for the effect size before running the study. But this, says McShane, can be problematic. “If I knew that this ad were more effective than that ad—let alone exactly how much more effective—then I wouldn’t need to do the study in the first place,” he says.

In practice, people estimate the effect size by looking at the results from any previous, related studies that happen to be available. However, these studies can yield only an approximation of the effect size (because they too had finite sample sizes).

Thus, in recent research with Ulf Böckenholt, a Kellogg School marketing professor, McShane developed a way to calculate a sample size that also takes into account the uncertainty involved in this approximation. “The kind of information we use to quantify that uncertainty is the same idea as a margin of error in a poll,” he explains. The new technique represents a particular improvement when effect sizes are small—as is often the case for many online A/B tests, such as tweaks to a search algorithm.
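
The toy sketch below is not the authors' procedure; it only illustrates the general idea of folding a "margin of error" into the planning step. Instead of plugging a single assumed effect size into the power calculation, it averages power over plausible effect sizes drawn from the uncertainty around a prior estimate (a simple expected-power calculation). The prior estimate (d = 0.3) and its standard error (0.1) are made-up numbers for illustration.

```python
# Illustrative sketch only (not the McShane-Böckenholt procedure): average
# power over plausible effect sizes drawn from the uncertainty around a prior
# estimate, then take the smallest n per group whose average power hits the
# target. d_hat = 0.3 and se = 0.1 are made-up numbers for illustration.
import numpy as np
from scipy.stats import norm

def power_at(n_per_group, d, alpha=0.05):
    """Normal-approximation power of a two-sided, two-sample test."""
    z_alpha = norm.ppf(1 - alpha / 2)
    ncp = d * np.sqrt(n_per_group / 2)         # noncentrality of the test statistic
    return norm.cdf(ncp - z_alpha) + norm.cdf(-ncp - z_alpha)

def n_for_average_power(d_hat, se, target=0.80, draws=20_000, seed=0):
    d_plausible = np.random.default_rng(seed).normal(d_hat, se, draws)
    n = 2
    while power_at(n, d_plausible).mean() < target:  # average power over the draws
        n += 1
    return n

print("plug-in estimate (no uncertainty):", n_for_average_power(0.3, se=0.0))
print("estimate with margin of error:    ", n_for_average_power(0.3, se=0.1))
```

In this toy example, allowing for the margin of error raises the required sample size relative to simply plugging in the point estimate.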

Researchers are welcome to determine their own sample size using this calculator.

McShane and Böckenholt have also built a second calculator. This one helps researchers deal with another troublesome assumption—that if you run an experiment multiple times, every one of those experiments studies the same underlying effect size.

“The problem with this is that no two studies in behavioral research are ever exactly the same,” says McShane. “Maybe you designed the study somewhat differently; maybe you’re testing it on a different population of subjects. These can all lead to differences in the effect size under study and thus the sample size required.” Interested parties can find this other calculator here.
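
As a toy continuation of the sketch above (again, not the authors' method), one simple way to represent this study-to-study variation is to let the effect size in the new study drift around the prior estimate by an extra between-study standard deviation, here an assumed tau = 0.1. That widens the spread of plausible effect sizes and pushes the required sample size up further. The snippet reuses n_for_average_power from the previous sketch.

```python
# Toy continuation of the previous sketch (not the authors' method): add an
# assumed between-study standard deviation tau on top of the estimation error
# se, so the total spread of plausible effect sizes is sqrt(se**2 + tau**2).
# tau = 0.1 is a made-up value; n_for_average_power is defined above.
se = 0.1
tau = 0.1
total_sd = (se**2 + tau**2) ** 0.5
print("with between-study variation:", n_for_average_power(0.3, se=total_sd))
```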

The researchers created these tools with their fellow academics in mind. But McShane says that anyone who runs studies that use human participants might find them useful.

“There’s variability out there in the world,” he says. “To get reliable results, we want to account for as many of those sources of variability as we possibly can. This includes ones we understand and can measure and control for. However, it also includes those we don’t understand but can still quantify.”
