
Replicability of Science

Updated: Sep 29, 2023

Written by Selma Music

Edited by Christopher Orzech


Amy Cuddy is a social psychologist best known for her work on body language and her 2010 study on “power poses.” The study found that participants who adopted certain expansive positions, such as sitting with their legs astride or propped up on a desk, reported stronger feelings of empowerment than participants in other positions. The participants' testosterone levels went up and their cortisol levels went down when they held these poses. Dr. Cuddy went on to publish in the academic journal Psychological Science, drew mainstream attention from CNN and Oprah magazine, and gave a TED Talk that became the second-most-viewed in the organization's history. The fame only lasted so long, however, as Dr. Cuddy’s research came under heavy scrutiny from her fellow academics.


A reform movement has been sweeping through the field of psychology since 2011. Calling into question numerous published studies, social psychologists have been adopting new statistical methods for analyzing replicability. Dr. Cuddy has been accused of attracting attention and fame with an “excess of confidence in fragile results, [prizing] her platform over scientific certainty.” However, many of Dr. Cuddy’s colleagues, even those critical of her work, have said that the attacks on her are excessive and personal. Still, the critiques of Dr. Cuddy reflect the toll it takes on scientists to acknowledge that their work may not be as scientifically sound as they believed.


The field of social psychology began developing at the turn of the 20th century and has flourished ever since. Researchers have tried to design better experiments by carefully controlling their methods, increasing the empiricism of their findings. A long-standing issue that is receiving more attention is subjectivity. When researchers review data, they use their judgment to analyze what is mostly qualitative data and to decide which data to retain and which to remove. Data might be excluded for being “unusual,” or subjects dropped because of an “experimental glitch”; most often, these decisions conveniently strengthened the results. This practice can produce false positives: results that appear significant but stem from biased exclusion of data rather than from a real effect.

P-hacking, a related phenomenon, is the deliberate manipulation of data or analyses until the results appear statistically significant. The probability value (P-value) is the standard measure of statistical significance: a value of 0.05 or lower is taken to mean there is a 5% or lower chance of observing results at least as extreme purely by coincidence, assuming there is no real effect. In other words, the lower the P-value, the stronger the evidence against the null hypothesis. The conventional threshold for statistical significance, and often for publication, is a P-value below 0.05. By removing data that does not agree with the majority of findings, researchers can push their P-values below that threshold, which may then attract a scientific journal to publish the study. Results are heavily skewed by researchers who cut corners with these practices, and their studies mislead the rest of the scientific community until it becomes clear that the results cannot be replicated.
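
To make the mechanics concrete, the sketch below is a purely illustrative simulation, not drawn from any study discussed here. It generates data with no real underlying effect and shows how repeatedly discarding “inconvenient” observations can drag a P-value below the 0.05 threshold. It assumes the numpy and scipy Python libraries are available.

```python
# Illustrative sketch of p-hacking by selective exclusion (assumed example,
# not taken from the article or any cited study).
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=1)

# Simulate a null effect: 30 measurements whose true mean is zero,
# so any "statistically significant" result here is a false positive.
data = rng.normal(loc=0.0, scale=1.0, size=30)

t_stat, p_value = stats.ttest_1samp(data, popmean=0.0)
print(f"P-value with all data kept: {p_value:.3f}")

# "P-hacking" by exclusion: repeatedly drop the observation that most
# weakens the apparent effect (the smallest value) and re-test, stopping
# once p < 0.05 or too few points remain.
kept = np.sort(data)
while len(kept) > 5:
    t_stat, p_value = stats.ttest_1samp(kept, popmean=0.0)
    if p_value < 0.05:
        break
    kept = kept[1:]  # discard the most "inconvenient" data point

print(f"P-value after dropping {len(data) - len(kept)} points: {p_value:.3f}")
```

Because the simulated data contain no real effect, any P-value that ends up below 0.05 after trimming is exactly the kind of false positive described above.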


In a case study on reproducibility by Torsten Hothorn and Friedrich Leisch, the researchers surveyed 100 papers recently published in the peer-reviewed academic journal Bioinformatics. Their main finding was a culture among authors of making the data from simulation studies widely available while rarely providing the source code behind them. Out of 56 papers published in a particular volume of the journal, only 8 offered computer code (Hothorn & Leisch, 2011).


Many journals have established a “data must be published” policy. Making data and methods more accessible allows for transparency and gives other researchers the opportunity to reproduce a given experiment, ideally solidifying the conclusions of the original study's authors. If a study were to present results without a preceding methodology, any conclusions drawn would be heavily scrutinized. Similarly, if the code used to perform a simulation experiment is left out of the published study, the findings lack a level of empiricism that is standard in scientific research. Moreover, what qualifies as reproducible research? Is it enough for a study to be reproduced a few times before publication? Should it be reproduced by the whole community over a period of time? There is also the problem of code failing over time as computing technology evolves. In any of these cases, future generations of scientists may question how many of our supposed findings came to be published at all. Whether for the current generation or for future ones, reproducibility and open access to data and methodology are crucial for science.


References:


Dominus, S. (2017, October 18). When the Revolution Came for Amy Cuddy. The New York Times. https://www.nytimes.com/2017/10/18/magazine/when-the-revolution-came-for-amy-cuddy.html?ref=oembed


Hothorn, T., & Leisch, F. (2011). Case studies in reproducibility. Briefings in Bioinformatics, 12(3), 288–300. https://doi.org/10.1093/bib/bbq084
