For years, experts have debated how best to oversee the small subset of research in which scientists manipulate certain pathogens to make them more deadly or more transmissible. Such research has historically been funded by the National Institutes of Health, on the grounds that it might improve pandemic prediction, prevention, and treatment. Yet the NIH leaders who championed the work have also acknowledged that, absent stringent oversight, the research itself could spark a pandemic.
To receive federal funding, such high-risk work — sometimes called “gain of function” — must be evaluated by a government committee. That committee follows a framework that was influenced by guidance crafted, in part, by Gerald Epstein, a biosecurity expert who worked in the White House Office of Science and Technology Policy from 2016 to 2018.
The risks of this research “do not disappear merely because no government funds are involved,” wrote Epstein in an article published last year in the Bulletin of the Atomic Scientists. Epstein isn’t sure if this type of work is happening in the private sector, but if it is, it is largely unregulated. Epstein would like to see oversight broadened, possibly through new legislation, expansion of certain government regulations, or increased scrutiny from companies that provide liability insurance.
There is some precedent for this, Epstein said in a recent interview with Undark. Federal legislation passed in the 1990s and 2000s created the Select Agent Program, which regulates possession and use of certain dangerous microbes, including anthrax.
Epstein is currently a contributing scholar at the Johns Hopkins Center for Health Security. Our interview was conducted over Zoom, with follow-ups via email, and has been edited for length and clarity.
UD: In your article, you write that the government has historically been wary of using legislation to control or oversee scientific research. Why is that?
GE: Legislation is a pretty blunt instrument. If one wants to have controls over a very rapidly evolving field, it’s important to have some awareness of exactly where the field is going and what the adverse implications of a regulation that isn’t finely tuned could be. It takes technical expertise to know how one would want to regulate technical activity. That technical expertise is typically not resident in Congress. In the past, it’s been delegated to [federal] agencies.
The second reason is even simpler. Anything you put in place with legislation can only be changed with legislation. So if you put a regime in place with a law, then the only way to fix it — if it ends up being shown to be unnecessary, or overly harmful, or not quite tuned correctly — is with later legislation. And legislation is so hard to pass now. Even if one finally got something in place, one can never count on getting new legislation in place later to amend it.
UD: There have been a few times when Congress did pass legislation to guide scientific research. Specifically, there are laws that regulate use of human subjects, research animals, and select agents. Was there something about these particular aspects of research that made them good candidates for legislation?
GE: Part of this is, I think, distinguishing an ethical issue or a policy issue, which is something where Congress would want to step in and say: “This is not a technical issue. This is one where we’re asserting our policy values.” That might be an area where legislation rather than regulation would be appropriate. I think passing legislation to oversee research is a high bar, and human experimentation passes that bar. We’ve seen travesties in the past.
The point I made in my article is that research going on outside the federal government can only be addressed with legislation. It’s the only way to govern research that’s not federally funded.
UD: Currently, virologists must go through a special safety review before they can receive government funding for gain-of-function research of concern. My understanding is that you helped design this review. If that’s accurate, can you talk about what considerations went into its design?
GE: In 2014, there were some very widely publicized biosafety lapses at some of the premier United States laboratories. The government said, “We’ve got to take a look at biosafety.”
The government issued a funding pause [on selected gain-of-function research]. The U.S. government said “research which we’re now conducting on these types of viruses — which has the effect of enhancing a potentially pandemic virus — we’re going to put that on pause for a while until the U.S. government develops a policy that we can then put research proposals through.”
The government asked the NSABB [National Science Advisory Board for Biosecurity] to make recommendations. They had the National Academy of Sciences conduct two symposia on that topic, and they asked a contractor, Gryphon Scientific, to do a very extensive risk-benefit analysis of this type of research. And all of that came together and was handed back to the government when I was at [the Office of Science and Technology Policy].
What I largely used was the NSABB’s recommendations. The part I did take verbatim was the definition of what type of research should fall into this category, because that was a technical question that we really wanted the scientific community to opine on. Does it capture everything you’re worried about? And is it narrow enough that it doesn’t unduly constrain researchers?
UD: Do you think that the policy has been successful?
GE: I was surprised by one thing. There have been very few research proposals that have been subjected to it. We recognized that this separate level of review is going to take time and attention, which is merited because of the potential consequences. But we also recognized that giving that kind of scrutiny to every research proposal that comes along that doesn’t pose that same kind of risk would just strangle science.
It turned out that between 2017 and now, 2024 — seven years — four projects have been reviewed under it. That really surprised me. It’s much lower than I thought.
As another data point: A research paper came out just a few months ago, consisting of a survey of biosafety officers at research institutions. One of the questions these biosafety officers were asked was, “Does your institution conduct this type of research — research which would need this type of review?” A significant fraction — 10 to 20 percent — said “yes.”
So on the one hand, four projects have gone through the review. On the other hand, there are a lot more research institutions saying that they do that type of research.
It means one of two things. Either the biosafety officers didn’t know exactly the boundaries of what the policy covered. And that might be right: maybe this isn’t a very common form of research, and if you are only vaguely aware of the policy, you might think that you do a lot of stuff that doesn’t technically count. Or it might mean a lot of this research is going ahead, and people don’t know that it needs to be reviewed.
I don’t know which of the two it is. I think it’s pretty important to figure that out.
UD: I’ve spoken with some researchers who have gone through the process recently and found it to be, I think, excessively slow and opaque. Do you have thoughts on that?
GE: I have not been inside of it, so I don’t know exactly what’s involved. I can imagine there is a fair amount of, as you said, opacity and slowness to it. I think the counter is that these are very highly consequential experiments. And as long as it’s focused on the ones that really matter, the ones that are enhancing pathogens that really do have pandemic potential, that’s appropriate. One should spend one’s time looking carefully at that.
I’m sympathetic to the idea that the review process is cumbersome, and if it’s inefficient and can be improved to get the same review in less time and with less burden, then I’m all for that.
UD: Can you talk about the possible role of liability insurance in influencing protocols surrounding gain-of-function research of concern?
GE: It’s just a general principle of law that if you do harm to someone, they have a basis to ask for compensation. Somebody smashes your car, they or their insurance company needs to make you whole and repair your car. It seems that if research activities at an institution lead to significant harm offsite, then that institution is exposed to liability, in which case it either has to have insurance coverage or, if it’s one of those institutions that are self-insured, it has to be prepared to come up with that compensation itself.
Maybe the answer is, no judge would ever find you guilty, or find you liable for that, because you did everything you could, or you applied the accepted standards of care, or there was no way to anticipate it. There are a number of defenses, but “Who could ever have anticipated that a release of a pandemic agent might trigger a pandemic?” is not one of them — that’s not something one should be surprised by. There clearly is a risk of that.
UD: Is there anything else that you’d like to add?
GE: If I have a barrel of anthrax in my basement, and I don’t have permission from the government to have it, I have committed a crime. I would like the same thing to be true for people in the private sector who are playing around with pandemic pathogen research. I want anybody doing that without appropriate oversight to be breaking a law. Right now, there’s no law they are breaking.
One of the most important results of something being a crime is that law enforcement is allowed to investigate. Without the select agent regulations, my neighbor may have a barrel of anthrax in their basement. I go, “Well, I can’t imagine any legitimate reason why you would want to do that.” I call the police. “Can you please check that person out? Why do they have a barrel of anthrax in the basement?” And the answer is, “I’m sorry, sir, that’s not a crime.”
So it’s important to have policies, and in this case criminal regulations, on the books to allow law enforcement even to investigate this stuff, let alone prosecute anybody. And I think the ability to investigate sketchy activity is pretty important when we’re dealing with consequential activities.
I would like to see some amount of governance on privately funded work totally independently of whether a pharmaceutical firm or a biotech firm would ever want to go down that road. I want there to be a law that if somebody’s at risk of breaking it, I can actually call the police.
UPDATE: A previous version of this piece misspelled the name of a company. It’s Gryphon Scientific, not Griffin Scientific.