Half Empty

Writing in PLoS One, Mobley et al. discuss a worrying trend in data reproducibility. Following a survey of MD Anderson Cancer Center faculty, the authors conclude:

​"50% of respondents had experienced at least one episode of the inability to reproduce published data; many who pursued this issue with the original authors were never able to identify the reason for the lack of reproducibility; some were even met with a less than ‘‘collegial’’ interaction."

I'm surprised it's only 50%. Presumably the other half are far too busy doing 'pioneering research'.

It's a very small study, but the results will feel familiar to many researchers. It can be hard enough to reproduce an old method from one's own lab book, let alone an entirely new finding from a group on the other side of the world. Small-scale (i.e. personal) technical infidelity is frustratingly commonplace; it's why researchers repeat their experiments. What's more worrying is the potential source of large-scale (i.e. post-publication) infidelity:

"Almost one third of all trainees felt pressure to prove a mentor’s hypothesis even when data did not support it."

Forcing data to fit a hypothesis is worse than an egregious waste of time. It reverses the axiom that hypotheses are driven by data, and it is the literal opposite of the scientific method. It's fraud. Conclusions follow data, not the other way around.

So what happens when the data doesn't fit the hypothesis? Bayesian inference dictates that we re-calculate our confidence in the hypothesis in light of the new data. Great in theory, but what if this happens two months before the end of a big grant and your future career depends on 'big' publications? This hasn't happened to me yet, but the thought of it terrifies me. The current funding/employment infrastructure does not reward researchers who spend four years discovering that their grant proposal was misguided. Failure is defined by a lack of publications, not by the practice of bad science.
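For the record, the update rule in question is just Bayes' theorem (my gloss, not anything from the paper), where H stands for the hypothesis and D for the new data:

$$P(H \mid D) = \frac{P(D \mid H)\, P(H)}{P(D)}$$

If the data were unlikely under the hypothesis, P(D | H) is small, and the posterior P(H | D) shrinks to match, however inconvenient the timing.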

When bad science can mean improved career prospects, we might fairly say the system is broken.

I'm starting to wonder if having a hypothesis is too much of a career burden. Preconceived ideas fuel confirmation bias and hamper attempts to refute those ideas. A safer option may just be to consistently ask interesting questions.

An interesting question will always have an interesting answer — whichever way the data goes.