What Matters Most on the Road to Scientific Success?

From publishing in prestigious journals to winning Nobel Prizes, Fields Medals, and other scientific honors, recent research suggests that the most coveted indicators of success tend to circulate within narrow networks: the students and postdoctoral researchers most likely to achieve them are trained by scientists who have done so themselves.

That was the conclusion of a study published in an issue of the Proceedings of the National Academy of Sciences (PNAS) last December, which showed that while the total number of scientific prizes available has grown, only a small group of elite scientists are winning more than one. “The increasing concentration,” the authors wrote, “would seem to work at cross-purposes with the inclusiveness and equality ethos of science.”

Similarly, publishing in prestigious, interdisciplinary journals like Nature and Science is a skill that seems to get passed from a principal investigator to their students. According to another study published in the same issue of PNAS, younger researchers who are “chaperoned” into these journals by an established senior author have a greater chance of publishing in them again once they become principal investigators themselves. Although some researchers are able to get published as senior authors without having been chaperoned, the study found, the proportion of papers with this type of senior author has decreased over time.

These sorts of insights into the complex machinery of scientific career-making — surely troubling for any budding researcher outside these networks of influence — have only really become available in the era of big data. And while it is still impossible to positively link cause and effect in the making — or breaking — of scientific careers, a recent surge in the amount of data drawing links between academic publications, faculty hirings, mentorships, and prizes is helping data scientists understand the skills and strategies — as well as the invisible structural factors — that can shape an individual researcher’s trajectory.

Reflecting on the observation that prestigious prizes and publications run in scientific lineages, Stephen David, a neuroscientist at Oregon Health and Science University in Portland, asked: “What is it that you actually learn from a mentor? What is the transfer of knowledge that happens, or the training, that’s really actually influencing your work?”

To start to get at these and other questions, he and several collaborators analyzed a database of nearly 19,000 researchers, focusing on sets of students, graduate mentors, and postdoctoral mentors. Measuring success by whether a student went on to an independent research position and by how many students they went on to train themselves, the researchers found that postdoctoral training appeared to have a stronger influence than graduate training on future success. They also found that scientists were more likely to succeed if they trained with graduate and postdoctoral mentors with disparate expertise that they could incorporate into their own work.

David said he suspects that building connections that had not previously existed might be key to success. “There’s an intellectual space that hasn’t really been occupied before,” he said. “And if you can draw on two different areas of expertise and take something that’s kind of unique to each of them and bring them together into a problem of your own, then you can stake out some territory that hasn’t been explored before.”

Structural forces created by uneven opportunities may also play a role in scientific success.

In a 2015 study published in the journal Science Advances, computer scientists Aaron Clauset and Daniel Larremore, both at the University of Colorado Boulder, along with scientist and writer Samuel Arbesman, found that a few elite graduate programs in computer science, business, and history disproportionately contributed to the next generation of assistant professors. Specifically, the researchers found that only 25 percent of institutions produce 71 to 86 percent of all tenure-track faculty across the three disciplines.

In a more recent study, published in PNAS on April 29, Clauset and Larremore, with Clauset’s postdoc Sam Way and graduate student Allison Morgan, used a dataset of computer science faculty hiring networks that they had previously created to try to better understand how much a scientist’s productivity is driven by the prestige of the environment where they trained versus the prestige of the environment where they now work as faculty.

“Your training environment teaches you how to do research and shapes the network of scientists you are likely to collaborate with,” Way explained. “On the other hand, where you are now determines your access to resources and students.”

Using a matched-pair design, the researchers searched their dataset for “academic twins”: computer science researchers in a particular subfield who graduated from similarly prestigious institutions within a couple of years of each other, with a similar number of publications, but where one went on to become a faculty member at a prestigious institution and the other landed a less prestigious placement. The question: What happens to their career paths, especially in the critical pre-tenure years?
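The matching idea is simple enough to sketch in code. Below is a minimal, hypothetical version in Python; the field names, prestige ranks, and thresholds are illustrative assumptions, not the study’s actual criteria or data.

```python
from dataclasses import dataclass
from itertools import combinations

@dataclass
class Researcher:
    name: str
    subfield: str
    phd_rank: int    # prestige rank of doctoral institution (1 = most prestigious)
    grad_year: int   # year of Ph.D.
    n_pubs: int      # publications at time of hire
    job_rank: int    # prestige rank of hiring institution

def is_twin(a, b, phd_tol=5, year_tol=2, pub_tol=2, job_gap=20):
    # "Academic twins": same subfield, similar training prestige, close
    # graduation years and publication counts -- but hired by institutions
    # of very different prestige.
    return (a.subfield == b.subfield
            and abs(a.phd_rank - b.phd_rank) <= phd_tol
            and abs(a.grad_year - b.grad_year) <= year_tol
            and abs(a.n_pubs - b.n_pubs) <= pub_tol
            and abs(a.job_rank - b.job_rank) >= job_gap)

def find_twins(researchers):
    # Return all matched pairs whose later career outcomes can then be compared.
    return [(a, b) for a, b in combinations(researchers, 2) if is_twin(a, b)]
```

Each matched pair holds training background roughly constant, so any later divergence in output points toward the work environment rather than the pedigree.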

“What we find is that people who trained at the same place and then go on to different institutions have different trajectories,” Larremore said. “So the environment that they get jobs in does seem to predict how they’re going to do both in terms of citations and visibility, as well as the total number of papers they publish.”

The researchers also did the matching the other way around, comparing pairs of faculty members who landed jobs at the same university but earned their doctorates at institutions of different prestige. They found no statistical difference in the productivity of these faculty members.

“The prestige of your doctorate does matter insofar as it helps you get a more prestigious job,” Larremore explained. “But what we’ve found is, once you’re in the door to a faculty job, the training then doesn’t matter.” In other words, once in the same department, the productivity of faculty members who trained at more prestigious universities was indistinguishable from that of their colleagues who trained at less prestigious universities.

But for faculty working at more prestigious universities, productivity tended to be higher. Analyzing the dataset, the researchers concluded that this most likely comes down to having a supportive work environment — including access to resources, support staff, and excellent students.

“There’s a whole literature about the role of mentors on the success of their mentees,” Clauset said. In this context, however, he added, “if you think about partnering with students as a benefit to the advisors, that inverts that storyline completely. It may be that my prominence and productivity is really completely driven by the fact that I worked with great students, and that’s what allows me to have my name on a series of great papers.”

This is somewhat of an open secret among academics, Clauset and Larremore said, though more research is needed to understand how student preparedness drives the success of their mentors (rather than vice versa), and how student preparedness varies across universities.

One of the limitations of any matched-pair study, Clauset said — including their own — is that it is impossible to control for variables that aren’t in the dataset. For instance, two scientists who look exactly the same on paper might differ in hard-to-quantify ways: one could give a better job talk, be more charismatic, or have stronger letters of recommendation. The datasets also lack information about some demographic variables, such as race and sexual orientation.

“There may be hidden features which are correlated with the outcomes we observe,” Morgan, the graduate student, explained in an email. Notwithstanding these limitations, a better understanding of the factors that drive productivity, like those highlighted here, might enable evaluators to create better null models, that is, expectations against which to evaluate faculty performance given their access to a particular level of resources. That would be a fairer way to evaluate faculty than looking at simple metrics of productivity, and it would even allow comparisons of faculty productivity across institutions of different size and prestige.
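To make the idea concrete, here is a toy sketch of such a null model: purely hypothetical numbers and a deliberately simple linear fit, using institutional prestige rank as a stand-in for access to resources. A real analysis would be far richer.

```python
import numpy as np

def fit_null_model(institution_rank, papers_per_year):
    # Fit a simple linear expectation for productivity as a function of
    # institutional prestige rank (a crude proxy for access to resources).
    slope, intercept = np.polyfit(institution_rank, papers_per_year, 1)
    return lambda rank: slope * rank + intercept

# Hypothetical data: prestige rank of each faculty member's institution
# (1 = most prestigious) and their observed papers per year.
ranks = np.array([1, 5, 10, 20, 40, 60, 80, 100])
papers = np.array([6.0, 5.5, 5.0, 4.2, 3.5, 3.0, 2.6, 2.2])
expected = fit_null_model(ranks, papers)

# Judge an individual against the expectation for their environment,
# not against a raw publication count.
rank, observed = 60, 3.4
print(f"expected at rank {rank}: {expected(rank):.2f} papers/year")
print(f"relative performance: {observed - expected(rank):+.2f} papers/year")
```

The point of the comparison is that a researcher slightly exceeding the expectation for a mid-ranked department may be doing better, relative to their resources, than a colleague with higher raw output at a top-ranked one.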

All of this matters, Clauset said, because, “when we’re thinking about productivity or prominence and we come at it from the meritocratic perspective, then we have certain ideas and preconceptions about what policies would be just.” But if we accept that there might be inequalities of opportunity, then we have to think about how to fix them. Practices like double-blind review, for instance, conceal signifiers of prestige and are known to produce more equitable outcomes.

“You don’t want to squeeze out the hard work and talent and skill and learning parts of the system, but instead you want to eliminate structural inequalities if possible,” Clauset said. “But if people don’t even recognize that those things exist, it’s a hard conversation to have.”

Viviane Callier is a science writer whose work has appeared in Science, Nature, Scientific American, Quanta, Wired.com, Smithsonian.com, and The Atlantic, among other publications. She lives in San Antonio, where her husband is an active-duty Air Force physician.