In August, the journal Science published the results of an ambitious initiative called the Reproducibility Project, a collaborative effort coordinated by the nonprofit Center
for Open Science. Participants attempted to replicate 100
experimental and correlational psychology studies that had
been published in three prominent psychology journals.
The results — widely reported in the media — were sobering. Just 39 of the studies were successfully replicated (Science,
2015). Psychology, it seemed, had a credibility problem.
But in many ways, the media missed the point. Brian
Nosek, PhD, a psychologist at the University of Virginia and
executive director of the Center for Open Science, and his
colleagues chose to focus on psychology because they are
psychologists — not because there’s something fishy going
on in the field. Reproducibility is a concern throughout
science, he says.
In this study, teams of psychologists were asked to
attempt to replicate studies that had been published in
2008 in three journals: Psychological Science, the Journal
of Personality and Social Psychology, and the Journal of
Experimental Psychology: Learning, Memory, and Cognition.
The researchers attempted to replicate the conditions of the
original experiments as closely as possible. To that end, the
authors of the original studies reviewed the materials and
methods of the replication studies.
Despite the care taken to reproduce the experiments
faithfully, more than half of the studies failed to replicate.
When effects did replicate, they tended to be smaller
than those reported in the original studies. Correlational
tests also showed that original studies with lower p values
or larger effect sizes were more likely to replicate.
Nosek and his co-authors attribute the reproducibility
problem, in part, to a combination of publication bias
and low-power research designs. Journals favor flashy,
positive results, making studies with larger-than-life
effect sizes more likely to be published.
That’s true throughout science. “Incentives for
achievement are similar across disciplines,” Nosek says.
“Publication is essential, and positive, novel, tidy results
increase the likelihood of getting published everywhere.”
Howard Kurtzman, PhD, APA’s acting executive
director for science, praised the study. “It’s an excellent
example of how scientists can study science itself,” he says.
“The outcomes point to the need for reforms in research,
review and publication practices.” This fall, he adds, APA’s
governance groups will be discussing steps the association
can take to enhance reproducibility.
Efforts to make scientific data and methods more
transparent will only help as scientists search for deeper
understanding. “Reproducibility is important, hard and
improvable,” says Nosek. “We can nudge the incentives
driving our behavior so that researchers are rewarded for
more transparent and reproducible research.”
— Kirsten Weir
The Center for Open Science created the Open Science
Framework, software that helps researchers store and share
their data and other study materials in a systematic way.
But to fully address these challenges, VandenBos says,
the logistics of data sharing will have to be taught to researchers
in training. “It’s during graduate school that many of the values
and standard operating procedures are put in place. They are
the foundation that guides a researcher for the rest of his or her
career,” he says.
And then there’s the question of access. How long should
primary investigators have exclusive access to data before
those data are made public? If researchers don’t have adequate
time to make full use of their data before others can access
them, it may be a disincentive to doing the research at all.
“There’s been a suggestion that the data should become
public after one year,” Ross says. “For academics who also have
to teach and have other responsibilities, it would make life
impossible for them to get the data analyzed and published
within a year. It may lead to a new type of researcher who
simply cherry-picks other people’s data to publish.”
Still, he believes that with care, the benefits of transparency
can outweigh the growing pains. “It’s good science,” says Ross.
Jennings agrees, and also hopes that the research community
will come together to wrestle with the ethical concerns
thoughtfully and deliberately. “Let’s step back and think about
how to do this well, rather than cleaning up the mistakes after
we’ve made them,” she says.
Finding the sweet spot would make science more
transparent without destroying innovation or creating huge
bureaucratic hurdles, Nosek adds. “The real responsibility now
is to make sure any changes do not destroy all the good things
that are happening in science, but complement them.”