A Different Law

Following a PBC-OBE link-bait title, Sylvia McLain reminds us that the inductive methodology underpinning technical modernity should be appreciated:

“Just like great works of art, basic scientific research should be thought of as an essential component of a modern society.”

I’m not convinced this point needs making. Do the public widely hold art above science in 2013? I can understand cultural bigots supporting science over art - and maybe as a scientist I’m too close to the metal here - but do science luddites even exist anymore?

Similar to Tom Chivers a fortnight ago, McLain supports the idea that reciting scientific facts (e.g. the second law of thermodynamics) contributes towards a scientifically literate society. Ironically, when referencing the importance of modern technology, McLain subsequently acknowledges that technical progress itself has made fact-based discussion redundant:

“I do at times miss those slightly drunken debates about things like, "No, Tom Cruise wasn't in Dances with Wolves, it was that guy – you know, the guy that was in Bull Durham." We can't have those anymore, because someone inevitably has an iPhone or a Blackberry and goes to look it up – end of debate.”

The same redundancy is true for scientific ‘facts’. 

Knowing the second law of thermodynamics does not make one ‘scientifically literate’. A homeopath with access to Wikipedia can look up the second law. Literacy of the scientific method is what’s missing.

Honest Methods

The ‘Materials and Methods’ section of an interesting paper has the potential to be either extremely valuable or completely infuriating. Comprehensive, detailed protocols provide invaluable technical information and can save future researchers literally months of work. Conversely, brief, low-resolution methods (often meta-referencing old papers) inhibit the understanding of reported results and can seduce future researchers into experiments that will never work. 

However, even the best published protocols have their nuances.

For example, most researchers have experienced the ‘secret trick’ phenomenon, whereby an additional piece of technical know-how - not reported in the published protocol - is required for a method to work correctly. (When quizzing an author on their protocol, for instance, a formal answer might also include the footnote: “Ahhh yes, well, also, if you just indiscriminately whack X a few times we’ve noticed that helps Y catalyse Z”.) Such unpublished nuances exist in every lab I’ve ever worked in, and their absence from the formal literature is not deliberately disingenuous. However, their existence provides an insight into how technical protocols develop: often by trial and error, limited by the reagents and time available to their authors.

Although detailing methodological serendipity does not appear to be welcomed by peer-reviewed journals, a recent Twitter meme entitled #overlyhonestmethods provides researchers with a new medium to ‘publish’ the sincere origins of their protocols.

Beckie Port curates a few favourites.

The meme is predictably tongue-in-cheek, but its thriving success perfectly illustrates the widespread bias, chaos and unpredictability of research methodology.

Seduced By Confusion

Carl Bialik summarises a recent paper describing how people are easily seduced by mathematics. The study suggests that when readers encounter data they don’t technically understand (e.g. a complex equation), this actually increases their confidence in the overall conclusion of the data. Interestingly, this observation was made not among lay readers, but among professional scientists:

“Research has shown that even those who should be especially clear-sighted about numbers—scientific researchers, for example, and those who review their work for publication—are often uncomfortable with, and credulous about, mathematical material”.

Empirical but unsurprising. And the phenomenon doesn’t stop with mathematics. I’m not sure why, but any data that’s beyond one’s technical knowledge always looks more impressive. There is something naively seductive about a bonkers multicoloured-multivariate-multidimensional vector graphic induced from a technique you've never heard of.

As students, my peers and I often hypothesised that in order to get a paper into Nature/Science you had to have at least one figure that no one understood.

The Method

I’ve noticed something recently: when asked, most people cannot define science.

Answers often describe the products of engineering and 'technology' - frequently citing everyday items a user trusts, but does not mechanistically understand (e.g. computers, medicines etc.). In this sense, 'science' provides a contemporary noun for magic: a talismanic utility delegated to technology companies and boffins.

Science is rarely described as method. 

There is an extremely common confusion between science as a process and the products of scientific discovery. Contemporary scientific TV programmes do little to alleviate this issue. Many shows focus on the conclusions of scientific inquiry, with little or no narrative on how those conclusions were induced. Audiences are often told to accept something extraordinary with little or no explanation.

Carl Sagan famously said: “Extraordinary claims require extraordinary evidence”. He was discussing the dubious empiricism of extra-terrestrial visitors, but I would argue that most scientific discoveries reported in popular media are extraordinary to everyday human experience. With such regular fantastical pronouncements, now more than ever, extraordinary claims require extraordinary explanation of the evidence. 

Sagan was the mainstream master of this. Not because he had access to more or better evidence than anyone else, but because he was given a sufficient canvas to describe the empirical narrative underlying scientific claims. Longer-form media (such as Brian Cox’s Wonders series, the occasional Horizon episode and full-length newspaper articles) do better, but typically still elevate scientific conclusions over the method used to induce these conclusions. 

So why is the method not routinely explained? Extraordinary explanation of the evidence is (nearly) always found in peer-reviewed scientific articles. However, these documents are composed in the esoteric lexicon of their niche and often hidden behind paywalls. Such language and accessibility barriers ensure peer-reviewed papers only educate the educated. Access to peer-reviewed journals is improving (see Wellcome, MRC and BBSRC), but from personal experience the primary obstacle is not the availability of the evidence, but the language used to describe the evidence. Most experimental evidence is highly technical and it is extremely difficult to explain something esoteric to a non-expert without using esoteric language. Existing approaches to overcome this include 'dumbing down' the explanation (i.e. removing detail) and metaphorical translation of methodology into more general language. I prefer a bias towards the latter, as this treats the audience with a little more respect - but there is still a real danger of patronisation with excessive metaphor. When balanced just right, good communicators (such as Sagan and more recently Ben Goldacre) can clearly explain the methodology underlying their claims.

Unfortunately, many science communicators put conclusion before method, resulting in factual media, not science media.

This all tickles me because the scientific method is extremely simple. 

All claims are induced from evidence. These claims are always falsifiable, and it is the job of future scientists to test that falsifiability. If something can’t be proved wrong, it’s temporarily ‘right’. Richard Feynman nailed it nearly 50 years ago.
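
A toy sketch of that loop, in Python, with a claim and observations invented purely for illustration (nothing here comes from the post itself):

# A claim stands only until an observation contradicts it.
# The claim and the numbers are made up for the example.

def claim_survives(claim, observations):
    """A claim is provisionally 'right' only while nothing falsifies it."""
    for observation in observations:
        if not claim(observation):
            return False  # one contradicting observation is enough
    return True  # not proved wrong (yet)

# Invented claim: "every measured value is below 100"
claim = lambda value: value < 100

print(claim_survives(claim, [12, 47, 88]))   # True  - provisionally accepted
print(claim_survives(claim, [12, 47, 103]))  # False - falsified; time to revise the claim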

Maybe if we can overcome the language barrier, more people will appreciate that science isn’t an iPad; it’s a method for evidence-based decision-making.

Trivial Pursuit

Tom Chivers lamenting scientific illiteracy in The Telegraph:

"Why is it acceptable, in certain educated circles, to cheerfully express total ignorance of the largest, and most important, domain of human knowledge?"

I can’t say I’ve seen much of this myself. None of my family or close friends are scientists - yet they always treat science with substantial intellectual respect. 

For me, the focus on 'scientific facts' as conversational currency completely misses the point. I’m more concerned with illiteracy in the scientific method as a process - not the failure of people to cite scientific facts at dinner parties.

Retraction Watch

Nothing is infallible. 

Axioms induced from genuine empiricism often turn out to be incorrect. To address this, a core value of science is its unequivocal capacity for correction: rationally adjusting one’s opinion in the light of new evidence (often labelled Bayesian reasoning).
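
As a rough sketch of that Bayesian adjustment (again in Python, with probabilities invented purely for illustration):

# Revise belief in a claim as new evidence arrives, via Bayes' theorem.
# All numbers below are made up for the example.

def update(prior, p_evidence_if_true, p_evidence_if_false):
    """Return P(claim | evidence) given the prior and the two likelihoods."""
    numerator = p_evidence_if_true * prior
    return numerator / (numerator + p_evidence_if_false * (1 - prior))

belief = 0.5  # start undecided about the claim
for likelihood_if_true, likelihood_if_false in [(0.9, 0.2), (0.8, 0.3)]:
    belief = update(belief, likelihood_if_true, likelihood_if_false)

print(round(belief, 3))  # roughly 0.92: belief rises as consistent evidence accumulates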

Unfortunately, some 'scientists' deliberately falsify data. This is a small but very damaging problem. It can severely mislead subsequent research and ruin careers. However, just as old scientific paradigms are retracted (to make way for new, more accurate ones), previously approved peer-reviewed scientific articles can be retracted from the literature should their authors be found to have committed fraud.

Retraction Watch covers this dirty but essential process and provides an excellent window into the plasticity of contemporary science.