Opinion: STEM Graduate Programs Should Embrace Failure

My Ph.D. research culminated in a widely publicized paper. I couldn’t have done it without hitting some dead ends.


A few months ago, the culmination of my Ph.D. research was published in Science, the journal of any STEM graduate student’s dreams. My colleagues and I reported on a potential kill switch we had found to destroy “forever chemicals,” or PFAS, a class of man-made pollutants that are notoriously difficult to degrade and come with health risks.

The finding was significant, garnering downloads, social media mentions, and — through a giant stroke of luck — a ton of attention from the press. I got texts from friends saying their mother-in-law had sent them a piece about my research. My high school prom date even messaged me when he saw my findings on the BBC’s front page.

In my tweet thread about the paper, I described it as “my entire Ph.D.,” but that was a bit misleading. My dissertation also contained two years of prior research, which were filled with inconsistent results and, by my count, eight failed projects.

When cleaning out my lab bench after grad school, I found a three-inch-thick stack of lab notes. But I couldn’t bring myself to throw it out — at least, not right away. That stack of paper represented the many long nights I spent in the lab, the times I repeated experiments to figure out what was going on, and, most importantly, every single thing I tried that didn’t work.

In a tweet responding to my Science paper, Ian Cousins, an environmental chemistry professor at Stockholm University, pointed out that departments like his expect a minimum of four papers in a Ph.D. thesis. Friends of mine have told me their advisers expect similar results. While not every doctoral program has a formal quota, many departments do. But as Cousins continued in his tweet, “Why not quality over quantity?”

Academia infamously stresses “publish or perish” above all else, and graduate school is often where that saying gets cemented into place. But this goes against the very notion of what Ph.D. programs should be.

The National Academies of Sciences, Engineering, and Medicine’s 2018 report “Graduate STEM Education for the 21st Century” says that a doctoral education should spark curiosity, as well as teach students to identify and work through complicated problems. An unpublishable research project can fulfill these objectives as well as a publishable one. Why, then, do we value only the number of publishable projects a Ph.D. student has, and not also the so-called failed projects that led them there?

This hasn’t always been the case. Much of the publish-or-perish mindset is a relatively recent phenomenon. Several studies show that, in the last 50 years, the share of Ph.D. students who publish research before graduating has risen enormously — in biology, for example, from less than 10 percent to upwards of 85 percent in recent years. Even as far back as 1957, the American Chemical Society’s Committee on Professional Training encouraged students to publish results when possible, but cautioned that metrics such as the number of papers published should not be seen “as an index of productivity in research.”


How did we get here? Though it feels like we’ve accidentally stumbled into a world that increasingly values publishing, there are structural factors driving this paradigm. One 2021 review suggested this could be traced back to 1950, when the National Science Foundation instituted a single-project funding mechanism that provided faculty with support to sponsor more students but came with a tradeoff: Those projects demanded results on a deadline. “Many viewed this as very controversial as they saw the student no longer being a student but an employee,” the review said.

The 2018 National Academies report pointed to current incentives for promotion and tenure, including publication rate and grant funding, as having “adverse effects” on graduate education. The authors even recommended that universities should eliminate program stipulations “that may be adding time to degree without providing enough additional value to students, such as a first-author publication requirement.”

Graduate students seem to feel this tension, as graduate worker unionization efforts spring up at universities across the country. Why should graduate researchers labor for journal impact factors and citation metrics that benefit their advisers’ careers more than their own? And with an increasing number of Ph.D. graduates taking positions outside of academia, a long list of prestigious publications matters less than being able to think critically, learn new technical information quickly, communicate well with others, and solve problems.

I’m not disparaging research articles. Going through the peer-review process helps researchers better understand how to frame their work. Learning how to write up methods and results is a valuable skill — one that’s necessary for others to replicate work and expand scientific knowledge. But, as any good scientist knows, a measurement is only good if it’s a valid proxy for what you want to measure. Likewise, publication quantity isn’t a valid proxy for whether a Ph.D. student understands how to make scientific judgments.


My graduate adviser recently admitted that he would have said I was “decidedly unlucky” for the first half of my graduate career, stumbling into projects that had disappointing or inexplicable results. The new method I’d developed for making PFAS-adsorbing polymers worked — until it didn’t. Other approaches to degrading micropollutants with light and oxygen showed some promise, and then none. I was lucky, though, to have a mentor and advocate who recognized that failed projects are just as valuable as successful ones, if not more so.

During the unlucky years, I felt like I was peering through the tangled branches of the forest, hoping the green pastures of a good result were just beyond the horizon. I realize in retrospect that my early stumbles taught me how to conduct research in a way that getting the right result the first time could not.

Those early experiments were haphazard — basically the scientific equivalent of throwing a dart with a blindfold on. In searching for the right conditions to form a particular chemical bond, for example, I randomly changed variables such as concentration and type of base, hoping to magically stumble upon a combination that worked.

But when I had to present my work, I couldn’t explain why I’d chosen the methods I did. An older grad student suggested I record more measurements at different timepoints so I could understand what was happening in the middle of the reaction, rather than just at the end. Thinking that would take too much time and effort, I dismissed his advice. I learned the hard way that he was right. Without those middle timepoints, I couldn’t figure out why my reactions weren’t succeeding, and I got stuck going in circles.

Eventually, learning to better track my experiments let me diagnose those dead-end projects faster. Each time I brought a set of bad results to my adviser, it hurt less to kill the project because I had learned how to fail quickly and unambiguously. And when I got to the project that eventually formed the Science paper, my timepoint-tracking habit allowed me to notice curious phenomena that I would otherwise have missed.

By then, I had also learned to set up experiments that ruled out alternative explanations from the start, and to choose analysis methods that could actually answer the questions I was asking. In other words, I needed time — and failures — to figure out the right strategies.

Given the diminishing number of graduate students who aspire to become postdocs and, later, academics, graduate schools face a great reckoning. I hold out hope that as more research advisers and departments modernize and adjust to these changes, they will remember that Ph.D. programs are about building good researchers, not simply bodies of research.

When new researchers have the space to learn from failures and the time to answer big questions rather than chase incremental progress in the blind pursuit of publication, science gains richer information and better researchers. Any project that lets students learn how to ask “how?” and “why?” and “why not X?” and “what about Y?” contains training value for graduate students, even ones who are “decidedly unlucky.”


Brittany Trang is a Sharon Begley Science Reporting Fellow at STAT News. She earned her Ph.D. in chemistry at Northwestern University in 2022.