Another mechanism is also needed, in this era of digital publication.
Non-results should be reported: “I tested for X correlating with Y and found nothing of interest.” This isn’t exciting work. It’s the apprentice painter filling in sky and meadow to provide background for the master’s “Venus on a Giant Clam.” But it needs to be done, and it provides much-needed grist for the mill of meta-studies.
As a (fairly) recent graduate of a Bachelor of Science program, with a few years of experience in research, I have been utterly surprised by how much I need and use statistics in research and how little instruction I received during my undergraduate studies. I didn’t take my first statistics course until two years after I graduated from college, at a local community college. Not requiring at least a background in statistics is a failure on the part of higher-education institutions in science and perpetuates the primary issue addressed in this article.
As this article also mentions, the way incentives are structured around publication needs work. “Insignificant” results, or results the researchers were not hoping for, should most certainly be published. While such work may not attract the attention or additional funding the PI was looking for, it contributes to the larger body of knowledge and can direct more streamlined and efficient research for generations of scientists to come. That in itself is incredibly valuable.
The widespread availability of statistical software packages such as SPSS allows non-statisticians to present data in ways that betray their minimal statistical expertise.
With the click of a dropdown box, statistical tests may be run, and cited, that would never have been performed if manual calculation were required.
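To make the contrast concrete, here is a minimal sketch (not from the comment above; the data are invented for illustration) of what a one-click test actually involves under the hood. A package like SPSS, or a single call such as SciPy's `scipy.stats.ttest_ind`, hides all of this arithmetic:

```python
# Hypothetical example: computing Welch's two-sample t statistic
# "by hand" from its definition, using only the standard library.
# A stats package produces this (plus a p-value) in one click.
from math import sqrt
from statistics import mean, variance

# Made-up measurements for two groups (illustration only)
group_a = [5.1, 4.9, 6.0, 5.5, 5.8, 5.2]
group_b = [4.8, 5.0, 4.7, 5.3, 4.9, 5.1]

na, nb = len(group_a), len(group_b)
va, vb = variance(group_a), variance(group_b)  # sample variances

# Welch's t: difference of means over the combined standard error
t = (mean(group_a) - mean(group_b)) / sqrt(va / na + vb / nb)
print(round(t, 3))
```

The point is not that the formula is deep, but that when the machinery is invisible, it is easy to run (and cite) a test without knowing whether its assumptions hold for the data at hand.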