Sci + Society

#186: The Replication Crisis with Dr. Rolf Zwaan

June 23, 2017

“Open, transparent, reproducible science is stronger science.”

Openness and transparency are among the fundamental values of science.  The most reputable scientific communication is assessable, i.e. communication that “allows those who follow it to not only understand what is claimed, but also to assess the reasoning and evidence behind the claim.”  Sadly, despite near-universal lip service to this tenet, low rates of data sharing remain the norm across many scientific disciplines.

In Episode #186, Jesse speaks with Dr. Rolf Zwaan, a professor of Biological and Cognitive Psychology at Erasmus University Rotterdam, Netherlands.  Dr. Zwaan is one of the leading voices addressing this lack of transparency — and specifically, exploring ways in which social psychology can take on the Replication Crisis and improve the reliability of previous research.

Promoting practices of scientific transparency, as well as good data collection and analysis, is key to accelerating further scientific progress.  And to maintaining the public’s trust in what science has to say about the world.

So what exactly happened to science?

The idealized form of the Scientific Process™ involves posing a question, setting up an objective test, going through the steps to identify an answer, and then repeating.  So why is science rarely practiced according to this recipe?  Oftentimes, modern scientists are forced (or believe they are forced) to prioritize their careers and reputations over pursuing the most important questions and uncovering meaningful truths.  There are several reasons for this — not mutually exclusive and many of them interrelated — all of which contribute to today’s less-than-ideal research and publication environment.

Receiving and sustaining sufficient funds to conduct research is among the biggest factors.  In the US, researchers often cannot rely on university funds alone to finance their research.  To cover the shortfall, they must seek outside grants.  Acquiring such grants is increasingly competitive, with the plain truth that there aren’t enough funds to meet the needs of all would-be researchers.

Many grants expire after only a couple years, preventing scientists from conducting longer-term studies.  This is problematic because many of the greatest discoveries (historically speaking, anyway) have taken decades to uncover.  In the modern research environment, if researchers hope to run long-term studies, they must regularly reapply for grants — a task that takes up significant time and siphons off the time and mental energy that might otherwise be used for actual research.

Another problem involves private-industry funding.  When university and government funds have been exhausted, many researchers turn to industry or interest-group financing.  The problem is that privately funded research has a tendency to yield conclusions that support the funders’ agendas, creating conflicts of interest that are as unavoidable as they are obvious.  We see this regularly in nutritional studies, often funded by the food industry — and tobacco-industry-funded cigarette studies remain a cultural icon of untrustworthiness.

Often, the need for research funds yields a vicious cycle.  Scientists with more publications to their credit are more likely to receive funding.  Scholarly journals are more likely to publish studies in which statistically significant results were obtained (publication bias).  This puts pressure on researchers to choose easy, “broad side of a barn” topics that are sure to yield significant conclusions — or to cherry-pick interesting anomalies from large data sets, anomalies unlikely to survive replicative scrutiny by future researchers.
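The cherry-picking problem described above is the familiar multiple-comparisons trap: run enough analyses on noise and some will look meaningful.  A minimal sketch (standard-library Python only; the 100-study setup, group sizes, and z-test are made up for illustration, not drawn from any real replication project) shows that even pure noise produces a steady trickle of “significant” results at the conventional p < 0.05 threshold:

```python
import random

random.seed(42)

def fake_study(n=30):
    """Simulate one 'study' of a null effect: two groups of n samples
    drawn from the SAME distribution, so any 'significant' difference
    between them is a false positive."""
    a = [random.gauss(0, 1) for _ in range(n)]
    b = [random.gauss(0, 1) for _ in range(n)]
    mean_a = sum(a) / n
    mean_b = sum(b) / n
    # Standard error of the difference in means (each group has variance 1
    # by construction, so this is a simple z-test with known variance).
    se = (2 / n) ** 0.5
    z = (mean_a - mean_b) / se
    return abs(z) > 1.96  # "significant" at the conventional p < 0.05

false_positives = sum(fake_study() for _ in range(100))
print(f"{false_positives} of 100 null studies look 'significant'")
```

Roughly 5% of null studies clear the significance bar by chance alone; publication bias then ensures those are disproportionately the ones that get written up.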

How can we support open scientific research practices?

How can one support increased openness and reproducibility in scholarly research?  The Center for Open Science (COS) is a great place to start!  Their website provides the free Open Science Framework to help researchers manage and archive their research, a network of “open science” communities, and research platforms that support the study of scientific practices themselves.

Show Notes
  • 00:00:29

    Episode Introduction: The Replication Crisis

  • 00:01:52

    This Week in Neuroscience: Oxytocin and the Freeze Response

  • 00:06:15

    Smart Drug Smarts News + Updates: The New SDS Website

  • 00:08:02

    Interview Lead-in: The Replication Crisis Leading To Manufactured Data

  • 00:10:09

    Interview Begins with Dr. Rolf Zwaan

  • 00:12:08

    The Findings of How Many Findings are Actually Replicable

  • 00:14:14

    Is Replicating an Experiment Easy to Do?

  • 00:16:16

    How Many Studies Have Significant Findings? And How Many... Don't?

  • 00:17:41

    The Current Movement To Address The Replication Crisis

  • 00:20:24

    Arguments Against The Pre-Registration Solution

  • 00:22:37

    The Pushback Against Replication and Pre-Registration

  • 00:24:20

    Possibility of Replication Movement Leading to Meta-Researchers

  • 00:27:07

    Retort to the Statement, "Science Can't Be Replicated, So I'll Just Believe What I Want"

  • 00:30:00

    The Stroop Effect's Replicability and How It Translates To Generalizing Effects

  • 00:33:32

    Does This Problem Exist in Other Sciences?

  • 00:34:38

    Interview Wrap: How Can Science Best Be Done, and What Are Its Ramifications?

  • 00:35:43

    Ruthless Listener Retention Gimmick: How Smiling Can Work Against You

  • 00:38:08

    Episode Wrap-Up

One comment

  1. ben says:

    It’s unfortunate when science experiments are poorly formed, or there is unintentional bias, but when it’s deliberate, that should classify as fraud and those responsible held accountable.

    David Diamond was recently on Stem Talk, where he talks about how the (pharma-industry-backed) studies showing positive results for statins are always reported in terms of relative numbers instead of absolute numbers. This small distinction gives the misleading impression that the drugs are much more effective than they really are.
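The relative-versus-absolute distinction the commenter raises is easy to see with a quick calculation.  The figures below are purely illustrative (not taken from any actual statin trial): suppose 2 of 100 untreated patients have a heart attack versus 1 of 100 treated patients.

```python
# Hypothetical trial numbers -- illustrative only, not real statin data:
# 2 of 100 control patients have a heart attack vs. 1 of 100 treated.
control_rate = 2 / 100
treated_rate = 1 / 100

relative_risk_reduction = (control_rate - treated_rate) / control_rate
absolute_risk_reduction = control_rate - treated_rate

print(f"Relative risk reduction: {relative_risk_reduction:.0%}")      # 50%
print(f"Absolute risk reduction: {absolute_risk_reduction:.0%}")      # 1%
print(f"Number needed to treat: {1 / absolute_risk_reduction:.0f}")   # 100
```

A “50% risk reduction” headline and a “1 patient in 100 helped” summary describe exactly the same data; which one gets reported shapes how effective the treatment sounds.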

    I like the idea of submitting research to journals on how the study will be conducted, with the accepted/rejected criteria based solely on how well the study is formed, regardless of the actual findings.

    Brian Nosek (Center for Open Science) was on the ‘You Are Not So Smart’ podcast (episode 100), and also talks about this. He was also on Episode 677 of the great NPR Planet Money podcast.
