Featured Faculty
Andrew Dillon
Clinical Associate Professor of Development Economics; Director of Research Methods Cluster in the Global Poverty Research Lab
A wide variety of organizations incorporate social impact into their mission. This ranges from impact investment funds looking to finance initiatives, to companies or foundations hoping to invest in their communities, to nonprofits that want to better understand their own impact—and identify ways to improve.
But despite well-designed programs and dedicated staff, few organizations actually measure their impact. So how do they know what works and why? Could investments or programs be designed to be more effective? What could organizations do to get more bang for their buck?
Andrew Dillon, a development economist and clinical associate professor at Kellogg, knows a thing or two about measuring social impact. That’s because he helps the researchers at the Global Poverty Research Lab design studies that measure the impact of various poverty interventions around the world.
Dillon offers several suggestions for organizations that want to better understand and grow their impact on the communities they serve.
“Impact measurement is important not only to quantify the return on a social investment,” says Dillon, “but it’s also important for cost efficiency and delivering impact at the lowest cost per beneficiary.”
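To make the cost-efficiency point concrete, consider a back-of-the-envelope comparison. The sketch below is purely illustrative; the program names and dollar figures are invented, not drawn from Dillon’s work.

```python
# Hypothetical comparison: two programs with the same budget and the
# same outcome per participant, but different reach. All figures invented.

def cost_per_beneficiary(total_cost: float, beneficiaries: int) -> float:
    """Total program cost divided by the number of people served."""
    return total_cost / beneficiaries

program_a = cost_per_beneficiary(total_cost=500_000, beneficiaries=2_000)
program_b = cost_per_beneficiary(total_cost=500_000, beneficiaries=5_000)

print(f"Program A: ${program_a:,.0f} per beneficiary")  # $250
print(f"Program B: ${program_b:,.0f} per beneficiary")  # $100
```

If the two programs deliver the same outcome per participant, the second delivers it at less than half the cost per beneficiary, which is exactly the kind of comparison impact measurement makes possible.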
Whether you are attempting to have an impact halfway around the world or in your own backyard, you first need to understand the outcomes you are hoping to affect and how you plan to affect them.
The latter, says Dillon, is often neglected. “It’s not just about choosing metrics or choosing indicators or choosing some sort of variable to measure. Those things are important,” he says, but the real key “is to link whatever the investment or program is to a ‘theory of change’ as to how that investment or program is actually going to change the lives of beneficiaries.”
In other words, you need to not only understand the final outcome but also how you get there: By which mechanism will your program or investment change someone’s life? What does that pathway look like?
Take, for instance, a mentorship program that connects teenagers to working professionals, with the relatively concrete goal of helping those teens achieve economic success. In order to design an effective program, you will first need to consider the mechanism by which your program helps teens. Will spending time with a mentor help them feel more informed and empowered about their career opportunities, leading them to pursue more lucrative choices? Will the benefit come from access to the mentor’s network? Or will it come from keeping teens more focused on school—or less engaged in antisocial activities?
Each of these theories is plausible but would lead to drastically different offerings. A theory of change geared around empowerment or access might include a job-shadowing component, for instance, while one geared toward staying out of trouble might be accompanied by safe after-school programming.
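One lightweight way to force this kind of thinking is to write each candidate theory of change down in a structured form before measuring anything. Here is a minimal sketch in Python; the three mechanisms come from the mentorship example above, while the indicator lists are hypothetical stand-ins for whatever your organization would actually track.

```python
from dataclasses import dataclass

@dataclass
class TheoryOfChange:
    """Links a program to the mechanism through which it is
    hypothesized to improve beneficiaries' lives."""
    program: str
    mechanism: str         # HOW the program is supposed to work
    outcome: str           # the final result you hope to affect
    indicators: list[str]  # what you would measure along the pathway

# Three plausible, competing theories for the same mentorship program.
candidates = [
    TheoryOfChange(
        program="teen mentorship",
        mechanism="empowerment: teens feel informed about career options",
        outcome="economic success",
        indicators=["career-aspiration surveys", "enrollment in training"],
    ),
    TheoryOfChange(
        program="teen mentorship",
        mechanism="access: teens tap into the mentor's professional network",
        outcome="economic success",
        indicators=["referrals received", "internships obtained"],
    ),
    TheoryOfChange(
        program="teen mentorship",
        mechanism="focus: teens stay engaged in school and out of trouble",
        outcome="economic success",
        indicators=["school attendance", "disciplinary incidents"],
    ),
]

for theory in candidates:
    print(f"{theory.mechanism}\n  measure: {', '.join(theory.indicators)}")
```

Each theory points to different program features and, crucially, to different things worth measuring.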
“Because mentorship programs can be designed in such different ways, it’s really important to have an impact-measurement framework,” says Dillon.
Once you understand your theory of change, it is time to determine whether that theory holds up. Does that mentorship program actually empower teens to consider a more ambitious career path?
Whenever possible, Dillon recommends conducting a randomized controlled trial: that is, a study of your program’s effectiveness that compares a treatment group, which participates in your program, against a control group that does not. Assigning participants to the two groups at random ensures they start out as similar as possible. If you don’t have the know-how to conduct your own trial, consider hiring an impact consultant.
A trial like this will help you determine whether any impact on your desired outcome is actually caused by your program, as opposed to some other factor. For instance, if particularly career-minded teens self-select into your mentorship program at higher rates than less career-minded ones, then it will look as though your program is wildly successful, even if it isn’t doing anything at all.
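A small simulation makes the self-selection problem concrete. The sketch below uses invented numbers and, by construction, a program with zero true effect; it illustrates the statistical point rather than any real study.

```python
import random

random.seed(42)

# 10,000 teens, each with an underlying "career-mindedness" score.
# By construction the program has ZERO true effect: outcomes depend
# only on pre-existing motivation, plus noise.
teens = [random.gauss(50, 10) for _ in range(10_000)]

def outcome(motivation: float) -> float:
    return motivation + random.gauss(0, 5)

def mean(xs):
    return sum(xs) / len(xs)

# Scenario 1: self-selection. More career-minded teens opt in more often.
opted_in = [random.random() < m / 100 for m in teens]
mentored = [outcome(m) for m, s in zip(teens, opted_in) if s]
others = [outcome(m) for m, s in zip(teens, opted_in) if not s]
print(f"Self-selected 'impact': {mean(mentored) - mean(others):+.2f}")  # clearly positive

# Scenario 2: random assignment. A coin flip decides who is mentored,
# so the two groups are similar on average and the bias disappears.
coin = [random.random() < 0.5 for _ in teens]
treatment = [outcome(m) for m, c in zip(teens, coin) if c]
control = [outcome(m) for m, c in zip(teens, coin) if not c]
print(f"Randomized 'impact':    {mean(treatment) - mean(control):+.2f}")  # near zero
```

The first comparison reports a sizable “effect” that comes entirely from who chose to enroll; the randomized comparison correctly reports an effect near zero.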
The real key “is to link whatever the investment or program is to a ‘theory of change’ as to how that investment or program is actually going to change the lives of beneficiaries.”
— Andrew Dillon
With a well-designed experiment, you can even learn something unexpected. Dillon points to a recent study on the effects of workplace health interventions on worker productivity.
“It was known at the time that treating workers in physical occupations for malaria would reduce absenteeism and improve the on-the-job productivity of workers who were sick, but it wasn’t anticipated that the health intervention could help even those workers who were not sick with malaria at the time,” says Dillon. But when researchers conducted a large, randomized controlled trial, that’s exactly what they found.
“Ensuring testing and treatment for all workers allowed well workers to increase their effort and productivity on the job because they weren’t worried about becoming sick from the physical nature of their work. Because they knew they were not sick, they could increase their effort,” says Dillon.
Before the study, researchers’ theory of change did not account for how these interventions would affect well workers. The experiment allowed them to update their theory of change—and make a solid case for further investment.
Not all organizations are in a position to conduct experiments (or even to hire someone to conduct one on their behalf). So the next best step is to look at whether there are any existing studies that validate your theory of change. You may not have the resources to conduct an experiment on your mentoring program’s effectiveness, for instance, but at a minimum you should determine that there is evidence that the mechanism you’ve selected—empowerment, access, staying out of trouble—is causally linked to the goals you’ve laid out.
Dillon points to the work of organizations like the Global Poverty Research Lab and its partner organization, Innovations for Poverty Action, which conduct large-scale randomized controlled trials in a range of communities in order to determine exactly which interventions are effective at achieving which outcomes through which mechanisms.
Moreover, large foundations, donors, and even impact-investment funds regularly assemble large evidence reviews—a trend that Dillon says is increasing.
“Having studies that are well-designed, that people can point to so that they can clearly understand how program-design features are linked to impact, is greatly useful and is a great public good, even if all parts of that study don’t completely replicate in other communities,” says Dillon. “It allows us to validate our theories of change.”
Dillon acknowledges that historically, impact evaluation has had a bad rap.
“A lot of evaluations in the past were from external consultants, who would come in and just assess program models or assess efficacy of the organization, based on their own experience or their experience in other types of social-impact spaces,” says Dillon. “You don’t want to necessarily let someone external to your organization come in and just criticize you.”
But by collecting data—data linked to a specific theory of change—evaluations can be a very positive experience, whether your analysis is conducted internally or by a third party.
“In organizations that have a learning culture, and that want to learn how to improve, impact evaluations can be very well received, because program designers and program staff have lots of innovative ideas about how to create impact and why things work,” says Dillon.
Even if you learn the disappointing news that your intervention is not particularly effective, you can now try to determine what else you can do. If your nonprofit provides laptops to children to help them learn, for instance, but a lack of electricity or access to high-quality teachers is preventing learning from actually taking place, then you now have a good starting point for designing new interventions and a new trial.
“Those ideas can actually be tested and thought about and tried and refined. And that can be really exciting,” says Dillon.
Jessica Love is editor in chief of Kellogg Insight.