In science, the results of experiments are peer-reviewed before they are accepted and published for a wider scientific audience, where they may serve as a source of information for other scientists or as the basis for further research in the field.
However, when another scientist attempts to recreate the experiment and cannot get the same result, it raises some concern about the validity of the original experiment.
Recently, a major effort to reproduce published psychology experiments has prompted lively discussion in scientific circles, because fewer than half of the "verification" experiments found results similar to those of the initial studies.
Michael Barnett-Cowan, PhD, assistant professor of neuroscience in the Department of Kinesiology at the University of Waterloo in Ontario.
Published this month in the journal Science, the Reproducibility Project: Psychology ("Estimating the reproducibility of psychological science") found that only 35 of 100 attempted replications produced the same findings as the original studies. This effort to recreate previous psychological research began in 2011.
The Science article points out that the status or authority of the researcher or agency is not what matters; rather, it is the ability of others to replicate the experiment and achieve the same results that gives findings credence.
Michael Barnett-Cowan, who conducted one of the replications in this study (and whose attempt did replicate the original results), says that when a scientist's attempt to replicate a previous experiment produces different results, the usual practice is to communicate with the original research team to find out what may have been done differently and to resolve the discrepancies.
He notes, however, that psychology deals with the minds and emotions of individuals and is therefore unlike more concrete scientific fields such as chemistry or mathematics. Because of that, he says, it is not necessarily unusual to find somewhat different results when the same experiment is conducted elsewhere with different subjects. He also says the inability to replicate the results of many of these experiments should not lead to the automatic conclusion that the original research was wrong.
He does say, however, that the results of this major effort reported in Science are an important wake-up call to both scientists and scientific journals.
He notes that science advances on the work of previous research, and if that research cannot be replicated, then later work building on it must also be called into question. "While reproducing all scientific experiments is not feasible, sciences such as psychology need to occasionally take stock and question previously published results," he says, adding, "Error correction is central to science moving forward in the pursuit of new knowledge and innovation."
Professor Barnett-Cowan also expresses the concern of many scientists that there is little incentive to conduct “verification” research of previous studies, and that science journals generally are not interested in publishing research where replication of a previous experiment has failed, a feeling echoed in the Science Magazine article.
"Reproducibility is not well understood because the incentives for individual scientists prioritize novelty over replication. Innovation is the engine of discovery and is vital for a productive, effective scientific enterprise. However, innovative ideas become old news fast. Journal reviewers and editors may dismiss a new test of a published idea as unoriginal. The claim that 'we already know this' belies the uncertainty of scientific evidence. Innovation points out paths that are possible; replication points out paths that are likely; progress relies on both." — Science
He also points out that there is almost always some variability in results, and that the fact that only 35 of 100 psychological studies produced results similar to the originals is not a cause for great alarm.
He says this large-scale study does, however, highlight the need for greater diligence in all scientific experiments, and for greater sharing of methodology and data.
Brian Nosek, a psychologist at the University of Virginia in Charlottesville who led the mass replication effort, is encouraged by comments such as that of Joshua Correll, a psychologist at the University of Colorado, Boulder, whose own experiment was among those whose results could not be replicated.
Speaking about this "verification" effort, Professor Correll said: "This is how science works. How else will we converge on the truth? Really, the surprising thing is that this kind of systematic attempt at replication is not more common."