How We Think About Variation Is at the Heart of Our Scientific Literacy Crisis
In a recent essay, Martin Rees, an astrophysicist and retired University of Cambridge professor, says he is certain that, for good or ill, we are approaching the limits of human knowledge, a point at which computers could one day overtake us. The idea certainly taps into deep-seated fears about artificial intelligence, but I’d argue that we shouldn’t worry about computers outsmarting us, or even about the real (or imagined) limits of human knowledge.
What we need to worry about is wasting the knowledge we already have.
Science, be it in the form of cancer biology or climatology or statistics, can help provide perspective and potential solutions for many of our most pressing problems. We know that smoking increases our risk of developing cancer. We know that overall the world has become warmer during the past 150 years. We know that the congressional districts in Wisconsin are politically gerrymandered. But these are general phenomena, found by averaging, and there will always be individual data points which deviate. Someone may smoke for 60 years and never get cancer. Sometimes, despite global warming, it will be cold and snow. Occasionally, a Democrat gets elected in a deeply conservative district. How we deal with such aberrations, or more generally, how we think about variation, is at the heart of the scientific literacy crisis.
Cognitive and behavioral scientists have found that, when confronted with data, our brains are not great at interpreting patterns with an eye to variation. We over-interpret small datasets and are swayed by the most recent information we’ve consumed. We are baskets of bias. And we really don’t know how to balance error and uncertainty. Consider this: FiveThirtyEight’s final predictions for the 2016 presidential election gave Hillary Clinton a 71 percent chance of winning — the lowest odds among the major poll aggregators, but nonetheless an edge that many Democrats considered money in the bank. What that figure actually meant was that, taking polling error into account, Donald Trump could still win 29 out of every 100 (simulated) elections held, including the only one that mattered.
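To make that figure concrete, here is a minimal simulation sketch in Python; the 0.71 probability is the forecast cited above, while the simulation count and everything else are illustrative assumptions, not anything FiveThirtyEight published.

    import random

    # Interpreting a 71 percent forecast: Clinton wins most simulated
    # elections, but far from all of them.
    random.seed(0)  # arbitrary seed, for reproducibility
    n_simulations = 100_000
    clinton_win_probability = 0.71  # FiveThirtyEight's final 2016 figure, cited above

    trump_wins = sum(random.random() > clinton_win_probability for _ in range(n_simulations))
    print(f"Trump wins {trump_wins / n_simulations:.0%} of simulated elections")  # roughly 29%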
Overall, American students consistently rank below their peers in other advanced industrial nations in math and science achievement, while American adults have fairly poor knowledge of some basic scientific concepts. Americans also have among the lowest numeracy scores in the developed world, meaning they have a limited ability to reason with numbers. Refocused energies and revamped classroom curricula could help address this situation (replacing high school calculus with statistics is a good place to start). But some fault may also lie with our culture writ large, particularly as manifested in government priorities.
Looking at the budget, it would be easy to deduce that America doesn’t really care about scientific research beyond engineering. Basic research — where there is no specific immediate utility — always gets short shrift in apportionment, even though its discoveries will serve as the foundation of future applied work. Government funding for science overall has barely kept up with inflation, and its share of the total budget has decreased by more than 50 percent since its peak in the 1960s. Basic research typically receives only a quarter of that federal R&D budget. This pattern seems likely only to worsen in the next budget. We may undervalue basic research because its payoffs are much harder to identify (or collect) than those of development work, but arguably only governments can afford not to treat science as a business proposition. Only governments can use long-term foresight to set priorities, to worry less about the end goal and more about the process.
In many ways, government research investments have spoiled us. And they have misled us. All of the major “moonshot” initiatives of the 20th century were endeavors of technology: the actual moonshot of the Apollo program, the Manhattan Project, and even the Human Genome Project, which was mostly a question of building the necessary DNA sequencers. What these have trained us to want of our science is the clarity of engineering: the satisfaction of a job well done that translates into a job done in perpetuity. You can go on to improve the methodology further, but there is no uncertainty about whether you did it in the first place. (Well, at least for the latter two projects.)
According to a former NASA employee, for any given Mars mission, NASA builds three rovers: one for practicing and troubleshooting the design, one to test, and one to send to Mars. In contrast, to find genes associated with a 3.5-fold greater chance of disease, one recent study of breast cancer risk tested 256,123 women. The scale of research necessary to find such associations is massive because of variation. A Mars rover is a Mars rover is a Mars rover, it seems. But one cancer patient — with her specific lifestyle, her history, and her ancestry — is unique. Solving her problems, let alone curing cancer altogether as in Joe Biden’s moonshot, is that much more difficult.
Rees deems problems like preventing cancer or curing the common cold too complex for human brains to solve. Some think we should leave them for Big Data and machine learning to tackle. But computers will not cure the problem of variation. They cannot get rid of the uncertainty that comes with living in a complex world full of historical contingency. So we are left again to rely on the power of our brains. The important question is not whether scientific problems are really too complex to solve, but whether they are really too complex to think about. Right now, every day, individuals and society as a whole misunderstand and undervalue what we have already learned from science. But it need not be this way. In other words, it is not a question of whether there is a limit to scientific understanding, but whether we are limiting ourselves in our scientific understanding.
In the short term, we can work to fix the deficit of statistical and scientific literacy in positions of decision-making. Currently there is one physicist in Congress, one mathematician, and a smattering of doctors. (Oh, and a one-time astronaut.) But tackling problems of health care, climate change, and inequality requires scientific competence. Facts can come from reports and testimony, but we need legislators who know how to analyze the data behind them. The skills developed over a lifetime of research and scientific analysis would be strengths in government, and scientific expertise will only increase in relevance as this century progresses. It is worth noting that before they were politicians, Margaret Thatcher was a chemist and Angela Merkel a physicist. My recommendation does not apply only to elected officials. Certainly, judges and juries would benefit from a better understanding of math and statistics.
More broadly, we need to train ourselves to appreciate the science we have. There are problems we may not be able to solve, but solving them is not the only value of science. It can help us identify those problems and describe their risks. It can give us probabilities with which to plan for the future and make more informed guesses. It can give us practice in thinking about and dealing with uncertainty, which arises in our lives not only from big phenomena like weather systems but also from the everyday unpredictability of human behavior. Surely this is worth spending money on.
JFK did not give speeches about finding genes that make you 3.5 times more likely to die. Even writing that sentence is awkward, and most people do not know how to think about a 3.5-fold increase in disease risk. (For reference: if the risk were infinitesimal to begin with, multiplying it by 3.5 still leaves it very small. Ask for the absolute risk, not the relative change.) But these are genes worth researching. Finding signal amongst noise is perhaps the most complex problem science must tackle. And this is why we must do it: because it is hard. Building robots that will take over the planet is hard, too, but appreciating variation is harder.
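For readers who want the arithmetic spelled out, here is a short Python sketch of the relative-versus-absolute distinction; the 0.1 percent baseline is purely hypothetical, and only the 3.5-fold multiplier comes from the study mentioned above.

    # Hypothetical numbers: only the 3.5-fold multiplier appears in the essay.
    baseline_risk = 0.001        # assume a 0.1 percent baseline lifetime risk
    relative_increase = 3.5      # "3.5 times more likely"

    elevated_risk = baseline_risk * relative_increase
    print(f"Baseline risk:     {baseline_risk:.2%}")                  # 0.10%
    print(f"Elevated risk:     {elevated_risk:.2%}")                  # 0.35%
    print(f"Absolute increase: {elevated_risk - baseline_risk:.2%}")  # 0.25 percentage points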
Aspen Reese is a Junior Fellow at the Harvard Society of Fellows, where she studies microbial ecology. She has written previously for Scientific American.
Archived comments are below.
Why replace high school calc with stats? Why not do both? Two different kinds of reasoning, and one can feed the other. Both are needed.
I am a citizen of the country that scuttled the SSC, considers me to be an economic parasite, and doesn’t care if I live or die.
I am a physicist, with a near perfect attendance record for 60 years in the classroom, and soon may face the reality of once again not having access to health care.
In an otherwise sane society I would be one of many active participants in solving our social ills. In the “reality” of this society I am considered irrelevant at best and downright anti- on the worst of days.
This is not about me … This missive is about the collapse of sanity in this country …
RIP
The estimate is that 96% of members of the National Academy of Sciences are atheists. In the U.S. an atheist has a smaller chance of electoral success than a Communist. For scientists to become legislators the idiot voting public would have to change, or the scientists would have to lie and become “men of faith”. They would then be in good company in Congress where most members are liars.