Careers Nov 2, 2016

Why a Scientist’s Big Break May Be Just Around the Corner

Researchers, have hope: your most successful paper can occur at any point in your career.

Successful scientists hope their next paper will be a hit. Illustration by Lisa Röper.

Based on the research of Roberta Sinatra, Dashun Wang, Pierre Deville, Chaoming Song, and Albert-László Barabási

Conventional wisdom holds that a scientist’s best work is usually published mid-career, in the sweet spot after they have learned the ropes, but before administrative duties or thoughts of retirement encroach upon research. So is an aging academic with an underwhelming research career a lost cause?

That was a motivating question behind a recent study by Kellogg’s Dashun Wang. “Sometimes when I give talks, I say this is ‘the hope project,’” says Wang, an associate professor of management and organizations. It is hopeful because Wang and colleagues find that a scientist’s most-cited paper is equally likely to pop up at any point in her career.

“It may occur in your first work, or it may be the last work that you publish,” Wang says. “This was a very surprising finding.”

There is more than just researcher ego at stake. The success of scientific research has major implications for both individual scientists and the universities that employ them, since weighty matters of tenure and research funding depend on a scientist’s ability to make a splash in their field.

Discovering Random Impact

The paper—coauthored with Roberta Sinatra of Central European University, Pierre Deville of Swan Insights, Chaoming Song of University of Miami, and Albert-László Barabási of Northeastern University—is a serious contribution to what Wang calls “the science of science.” This is a rapidly expanding field that seeks to turn the microscope back on the scientific world itself to answer fundamental questions about how research is produced.

The team used the research databases Google Scholar and Web of Science to compile a list of more than 10,000 scientists who had published for at least 20 years in the disciplines of biology, chemistry, cognitive science, ecology, economics, and neuroscience.


Success in the world of academic publishing is often equated with how often a paper is cited by other academics. So the researchers pinpointed the most-cited paper for each scientist and looked carefully at the papers preceding and following that big hit.

That is where they noticed something surprising: A typical scientist’s publications did not tend to ramp up in citation counts leading up to the big hit—nor did papers published after the big hit receive a citation boost. In the aggregate, the trend line for citations before and after the most-cited paper lay completely flat.
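As a rough illustration of that before-and-after check, here is a minimal Python sketch using made-up citation counts; the helper function and the toy career below are invented for illustration, not drawn from the study’s data.

```python
def citations_around_hit(citations):
    """Split one career (citation counts in publication order) around its most-cited paper."""
    hit = max(range(len(citations)), key=lambda i: citations[i])
    return citations[:hit], citations[hit + 1:]

# Toy career: no ramp-up before the hit, no boost after it.
career = [4, 11, 2, 220, 6, 9, 3]
before, after = citations_around_hit(career)
print("mean before hit:", sum(before) / len(before))
print("mean after hit: ", sum(after) / len(after))
```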

So flat, in fact, that Wang’s team wanted to know if there was any pattern at all. Was the timing of success entirely random?

“We said, ‘OK, within a career, what if we just shuffle all the work you published—as if we’re oblivious to which one gets published first and which one gets published second?’” Wang says. They ran a simulation that randomized the order in which each scientist produced their papers.

The simulation, it turned out, was indistinguishable from the real-world data.

To make sure this was not a fluke, Wang’s team cut the data into different segments—looking only at scientists from a particular decade, for example, or in a particular discipline. Every time they ran the simulation, the same result held.
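To make the shuffle test concrete, here is a minimal Python sketch with toy citation data; the function names and numbers are invented for illustration of the idea, not taken from the paper.

```python
import random

def hit_position(citations):
    """Normalized position (0 = first paper, 1 = last) of the most-cited paper."""
    best = max(range(len(citations)), key=lambda i: citations[i])
    return best / (len(citations) - 1)

def shuffled_hit_positions(careers, rng):
    """Shuffle each career's papers, erasing publication order, then recompute hit positions."""
    positions = []
    for citations in careers:
        copy = list(citations)
        rng.shuffle(copy)
        positions.append(hit_position(copy))
    return positions

# Toy data: each inner list is one scientist's papers, in publication order.
careers = [[3, 12, 150, 8, 40], [55, 2, 9, 1], [4, 4, 300, 7, 7, 21]]

rng = random.Random(0)
print("real hit positions:    ", [hit_position(c) for c in careers])
print("shuffled hit positions:", shuffled_hit_positions(careers, rng))
# In the study, the real and shuffled distributions were statistically indistinguishable.
```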

The timing of a big scientific hit, it seemed, was truly—and unexpectedly—random. For the newly minted Ph.D. and the long-tenured professor alike, this meant a big break could be just around the corner.

Model Behavior

The research also presented an opportunity to study more than just the greatest hits. Since they now understood that, across an individual career, impact was occurring randomly, the researchers could try to predict how citations would accumulate over an entire scientific career. Specifically, they wanted to understand why some scientists were more successful than others. Was success simply a matter of increased productivity—with more publications upping the chances for a breakaway hit? Or was some other factor at play?

From the initial analysis, Wang’s team created a single list that pooled the citation counts received by every paper published by the scientists in the sample. That distribution contained many low and medium citation counts, typical of the average paper, along with a few very high counts from the occasional hit.

To predict how a career would unfold, the team built a model that drew repeatedly from the distribution. “You just randomly pick a number every time someone publishes a paper,” explains Wang. By stringing those random draws together, they could approximate an individual’s entire career.
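A minimal Python sketch of that pooled-draw idea might look like the following; the pooled numbers and the paper count are invented for illustration.

```python
import random

def simulate_career(pooled_citations, n_papers, rng):
    """Assign each paper a citation count drawn at random from the pooled distribution."""
    return [rng.choice(pooled_citations) for _ in range(n_papers)]

# Every citation count from every paper in the sample, pooled into one list (toy numbers).
pooled = [1, 2, 2, 3, 5, 8, 9, 12, 40, 55, 150, 300]

rng = random.Random(42)
career = simulate_career(pooled, n_papers=30, rng=rng)
print("average citations per paper:", sum(career) / len(career))
print("biggest hit:                ", max(career))
```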

But when the team first ran this model, the output did not quite match the real-world data. While the model predicted that a scientist’s average citation count increased as they produced more papers (and upped their odds of getting a hit), the real data showed an increase steeper than predicted. The model also failed to capture the fact that scientists whose hit papers had been particularly big hits tended to produce higher-impact papers throughout their careers.

In other words, each scientist was indeed drawing randomly from a distribution—but they were not drawing from the same distribution.

This suggested there was some intrinsic quality that allowed certain scientists to produce more citable work than their peers. To account for that quality, the team added another parameter—which they called “Q”—to their model.

When they ran the model again, accounting for Q, its output matched the real-world data almost perfectly.
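One common reading of the Q model, consistent with the description that follows, is multiplicative: each paper’s impact is the scientist’s Q times a randomly drawn “luck” factor. A minimal Python sketch of that idea, with made-up numbers and invented function names, is below.

```python
import random

def simulate_career_with_q(q, luck_pool, n_papers, rng):
    """Each paper's impact = the scientist's Q times a randomly drawn 'luck' factor."""
    return [q * rng.choice(luck_pool) for _ in range(n_papers)]

luck_pool = [0.5, 1, 1, 2, 3, 5, 8, 20]   # one shared pool of random draws (illustrative)
rng = random.Random(7)

low_q = simulate_career_with_q(q=1.0, luck_pool=luck_pool, n_papers=30, rng=rng)
high_q = simulate_career_with_q(q=4.0, luck_pool=luck_pool, n_papers=30, rng=rng)

print("low-Q biggest hit: ", max(low_q))
print("high-Q biggest hit:", max(high_q))
# Both scientists face the same pool of luck; the higher Q multiplies every draw upward.
```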

Making Sense of Q

A high Q score does not make someone a better researcher, necessarily. It just means they are more adept at turning a research topic into an attention-grabbing publication, Wang says. “A high-Q scientist can draw from the same knowledge pool as his peers, but multiply it into a much higher-citation paper.”

Furthermore, the Q parameter captures consistency over an entire career—so even a high-Q scientist will strike out occasionally, drawing a low number from the distribution of possible impacts. “But with time, as you draw more and more, as long as you have a high Q, most of the work you do will have high citations,” says Wang.

The more that Wang and coauthors looked into Q, the more important it appeared to be. Q scores predicted which scientists would win major prizes, including the Nobel, better than any other factor. And Q values calculated at various stages of a scientist’s career were remarkably stable over time, suggesting the parameter was more than just a proxy for luck.

“If we know someone’s Q parameter earlier in the career, we’ll have a much better understanding of what will happen going forward,” Wang says.

The existence of Q raises some critical questions. Surely researchers will want to know if a scientist can cultivate a higher Q score—and if so, how. Wang has already begun research on this question.

He is also curious to see if his results hold beyond the realm of academia, and how they might help organizations or countries predict and nurture talent. “So many decisions are based on this ability to foresee a superstar,” he says.

Of course, Q’s predictive power may have a dark side. For a low-Q scientist, even the biggest hit of their career is likely doomed to a relatively modest number of citations.

Wang, however, prefers a more optimistic interpretation of his results. (This is his hope project, after all.)

No matter how disappointing your past work, he says, the random order of impact means your brightest days may be ahead of you yet. “As long as you publish, you’re drawing from a distribution,” he says. “And that means there is hope.”

Featured Faculty

Dashun Wang, Professor of Management & Organizations; Professor of Industrial Engineering & Management Sciences (Courtesy); Director, Center for Science of Science and Innovation (CSSI); Co-Director, Ryan Institute on Complexity

About the Writer
Jake J. Smith is a freelance writer and radio producer in Chicago.
About the Research

Sinatra, Roberta, Dashun Wang, Pierre Deville, Chaoming Song, and Albert-László Barabási. 2016. “Quantifying the Evolution of Individual Scientific Impact.” Science. Vol. 354, Issue 6312.

