Hindsight is always 20/20, especially in science. Given what we know now, it seems absurd that people once believed the world was flat. The realm of toxicology is filled with similar stories (see the "pregnancy-boosting" drug DES and the super-insecticide DDT). In mid-twentieth-century America, the realization that chemicals like DES and DDT were harmful fueled rising concern about the risks chemicals pose to human and environmental health. Consequently, in an effort to better monitor and understand which chemicals may be harmful to humans and the environment, Congress enacted the Toxic Substances Control Act in 1976, authorizing the US Environmental Protection Agency (EPA) to inventory and regulate chemicals sold or manufactured in the US. As of 2014, some 84,000 chemicals were in use in the United States (EPA TSCA).
As scientists, many of us have read a paper, been inspired by its glamorous data, carefully followed the methods section to replicate the results in our own hands, and failed to validate the original findings. I have often attributed these failures to my own inexperience and naiveté as a young scientist, but over the past several years the irreproducibility of published data has emerged as a widespread problem. This lack of reproducibility could be perceived as a manifestation of poor experimental design and faulty interpretation of results by researchers. That seems counterintuitive, however, given how much of a scientist's reputation rests on the quality of his or her publication record.
Just how rampant is the reproducibility problem?
A 2012 study led by C. Glenn Begley (then head of cancer research at Amgen, Inc.) probed the boundaries of reproducibility in the cancer literature by investigating 53 landmark publications from reputable labs and high-impact journals. Despite closely following the methods sections of those publications, and even consulting with the authors and sharing reagents, Begley and colleagues could not reproduce the data in 47 of the 53 publications; only 6 held up under scrutiny. A similar study at Bayer HealthCare in Germany replicated only 25% of the publications examined. Nor do these reproducibility issues plague only the clinical sciences: the field of psychology recently came under scrutiny through an effort called the 'Reproducibility Project: Psychology,' in which only 39 of 100 published studies could be reproduced by independent researchers. These findings are at once shocking, depressing, and infuriating, especially considering that preclinical publications spawn countless secondary publications, which in turn may lead to expensive clinical trials that inevitably fail. Unfortunately, the growing number of flawed publications has contributed to a precipitous decline in the public's trust in science and medicine.