When livestock began to mysteriously perish on a farm in Parkersburg, West Virginia, during the 1990s, it took a vexed farmer, a relentless corporate attorney, and serendipity to figure out that the deaths were linked to the disposal of a toxic chemical from a nearby DuPont facility. The chemical, perfluorooctanoic acid (PFOA), otherwise known as C8, had been used widely in consumer and industrial products dating back to at least the 1950s. But its toxicity went undetected by the normal channels of corporate inspection and government regulation.
Today, PFOA and other per- and polyfluoroalkyl substances (PFASs) — commonly referred to as “forever chemicals” due to their persistence in the environment — are at the center of a global contamination crisis. But they are only the tip of the iceberg: They represent just a fraction of the more than 40,000 largely unanalyzed chemicals sold in the U.S., a number that is sure to keep growing.
However, despite the 100 or so pages of statutory text that seem to suggest otherwise, U.S. law barely engages in any meaningful regulation of these chemicals. How could that be?
The underlying legal design of the chemical regulatory program — its basic architecture — deserves the brunt of the blame. Under existing law, chemical manufacturers have no legal obligation to test or assess the toxicity of their own chemicals. That burden rests primarily with the underfunded and understaffed Environmental Protection Agency (EPA), which thus serves as Americans’ main line of defense against toxic chemicals. EPA regulators, not manufacturers, are responsible for evaluating the entire universe of chemicals manufactured in the U.S. They are tasked with digging through the scientific literature on each of the 40,000-odd chemicals produced in the U.S. and guesstimating which ones seem dangerous enough to prioritize for further testing and analysis. The regulators must then order the relevant testing and, ultimately, conduct risk assessments based on the limited toxicity and exposure data available to determine whether additional restrictions are needed. To top it off, a manufacturer can submit unlimited comments and ultimately sue the EPA if it can argue that the agency “arbitrarily” ignored or misused relevant research in the course of an assessment.
It is not surprising, then, that the EPA has banned only a handful of chemicals in over 40 years.
As if the agency’s burden wasn’t heavy enough, chemical manufacturers commonly use legal loopholes to make the regulatory job even harder. For instance, manufacturers have classified more than 10,000 chemicals as “trade secrets” — a designation that shields a chemical’s identity from public view and prevents agency staff who lack special legal clearance from viewing the files associated with it. Since a large number of these trade secret claims turned out to be unjustified, Congress in 2016 required the EPA to systematically review the legitimacy of at least a subset of chemicals classified as trade secrets. However, more than 2,000 trade-secret chemicals remain in the EPA’s queue awaiting review. In the meantime, many government and academic scientists cannot even learn the names of these industrial chemicals, much less test them for safety.
The permissive regulatory framework also allows manufacturers to throw regulators off the scent by commissioning and funding their own research. Companies can hire hand-picked scientists to conduct toxicity studies, while contractually reserving the right to control the design and interpretation of the experiment and to dictate whether the findings are made public. Since most of this research is unpublished, it is not encumbered by the conflict-of-interest standards imposed by scientific journals.
Some manufacturers also hire compliant scientists to critique independent studies in misleading and dishonest ways. As long as industry players steer clear of committing actual fraud, there are no penalties for this kind of gamesmanship; it is up to the underfunded agencies to figure out which studies have been manipulated by industry and which haven’t.
As a result of this upside-down approach to chemical regulation, we can’t know how many highly toxic chemicals are being sold on the market unnecessarily, but clearly some are. In one distressing case, a commonly used asphalt sealant turned out to be 1,000 times more toxic than alternative sealants, but no cheaper or more effective than competing brands. Much like with PFOA, the sealant’s toxicity was discovered accidentally, in this case by a group of wildlife biologists in Texas who were trying to figure out why an endangered salamander was dying at unprecedented rates.
Of course, the public is not totally without recourse. If a toxic chemical is discovered to have caused bodily harm or destroyed property, individual victims can file tort claims for their damages. However, this remedy is far from adequate. The claimant must often endure a costly, time-consuming legal process, with no guarantee of victory. And the judgments can only be retrospective: A claim can’t be filed until after the chemical has already caused harm. There’s no way to force stubborn chemical manufacturers to analyze or test their own potentially dangerous chemicals before the harm occurs.
Even after a chemical is outed as dangerous, manufacturers can start the whole process over again by phasing out the known toxic chemical and replacing it with a similar but largely untested substitute. Witness the concerns about the substitutes for the plastic additive bisphenol A (BPA), as well as the many chemicals now being used as substitutes for PFOA.
Fortunately, it is not that hard to imagine a more effective way to regulate potentially toxic chemicals. We simply need to shift the onus for research and analysis from regulators to manufacturers. Under this new regulatory regime, a manufacturer that wants to keep its product on the market would be required to rigorously assess the product’s safety, as well as how it stacks up to alternatives on the market. To prevent gamesmanship, the company would have to complete these assessments under strict deadlines. A series of mandatory assessment guidelines would help ensure that manufacturers’ analyses are scientifically reliable. For example, toxicity data could be analyzed using standardized computer models developed by expert panels, and third-party, certified contractors could be used to ensure accurate data input. Manufacturers would also be required to communicate their assessments comprehensibly and honestly. Rigorous enforcement of these requirements — with harsh sanctions for fraud and data doctoring — would help protect against industry cheating.
In this reformed, information-rich world, U.S. manufacturers might finally begin to compete with one another to produce the safest and most effective chemicals. Rival manufacturers might even become part of the enforcement arsenal, by scrutinizing competitors’ assessments and calling attention to those that cut corners or exaggerate safety.
Once we cut through the incomprehensible statutes, data manipulation, and lawsuits filled with legalese, the path to safer chemicals is hiding in plain sight. The answers are out there. All that’s left is for politicians to get on board and adopt a smarter, safer regulatory framework that can help keep catastrophes like the West Virginia poisonings from happening again.
Wendy Wagner is a law professor at the University of Texas School of Law and a Member Scholar at the Center for Progressive Reform. She just published “Incomprehensible!” (with Will Walker), a book that explores approaches to chemical regulation and other regulatory programs.
Will Walker is a freelance writer. He is currently enrolled in the J.D. program at Harvard Law School.
Archived comments are below.
Hmmm: Rigorous guidelines. Harsh penalties. But we were just told that the regulatory process is underfunded. We were just told that the moneyed manufacturers (not even mentioning their industry organizations) are able to game the system, using loopholes and lawyers. Start by funding and staffing the EPA adequately, including sufficient legal muscle, and eliminating the loopholes. If that doesn’t work, only then might it make sense to talk about replacing the present system with a manufacturer-based one. Because then at least we’ll have some hope of regulating it.
Your ideal regulatory system amounts to a description of the EU’s REACh: industry must provide a pre-defined, comprehensive data set. Yet REACh has failed spectacularly: two audits by NGOs show that ~80% of published toxicity findings are not being included (despite the clear mandate to evaluate ‘all available data’, AAD). Since 3 of the 40 high-volume REACh registrations I audited contained not a single published toxicity finding (not counting acute toxicity), and the sampling was quasi-random, it’s safe to assume there are hundreds of high-volume registrations that include zero published findings (when each chemical has from dozens to thousands available). Very similar numbers are found in the EU’s pesticide authorization law, which has the same AAD mandate.
What’s the point of assessing hazard based on industry’s data? The tragedy is that no REACh stakeholder believes that this is a problem, and no pesticide NGO can be bothered to complain about the AAD mandate being ignored.
It’s true that many petrochemicals are grossly understudied by academia, so the testing of new and existing chemicals should be handed to academia, with industry paying the national treasuries for the work. But ignoring this other problem is insane: at least 8,000 published findings of toxicity at low doses in vertebrates already exist (my estimate), and their number is accelerating.