Ideas are cheap – empiricism is expensive. At some point, someone needs to pay for science.
There are vastly more researchers than money for research. So that raises the question: who gets the money? Surely the people who stand the highest chance of doing the best future work? It should be a meritocracy. But from the writhing pile of the overeducated, how do funding bodies decide who is worthy of their money?
They need accurate proxies for future success. They need predictive metrics.
Educational achievement is a traditional predictive metric. If someone was smart in the past, they will be smart in the future. Unfortunately, respected institutions produce so many graduates that alumni often share a common, indistinct qualification. We need a higher-resolution metric for future success. Something that enriches the genuinely-elite signal from the commonly-prestigious noise. We need personal metrics.
‘Narratively complete’ research projects are communicated by their authors as scientific papers. In a scientific utopia, peer-reviewed papers act purely to communicate findings between researchers. However, in the pragmatic, funding-constrained environment of contemporary science, publications are now used as personal metrics for success.
It makes sense: judge scientists by the narrative product of science.
As with the university system, a hierarchy of locations exists to host each publication. There are ‘luxury’ journals (such as Nature, Science and Cell) and more humble alternatives. Journal prestige traditionally comes from a single metric: Impact Factor (IF).
IF is the number of citations a journal’s recent articles receive in a year, divided by the number of articles the journal published over the preceding two years. To game the IF system, savvy journals aim to publish highly cited articles and dismiss manuscripts that might not be cited as much. Despite being driven by subjective editorial policies, IF has become synonymous with scientific prestige. Both for luxury journals themselves and the authors who achieve a luxury byline.
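The arithmetic itself is trivial – which is part of the problem. A minimal sketch, using made-up numbers for a hypothetical journal:

```python
def impact_factor(citations_this_year, articles_prev_two_years):
    """Two-year Impact Factor: citations received this year to items
    published in the previous two years, divided by the number of
    citable items published in those two years."""
    if articles_prev_two_years == 0:
        raise ValueError("journal published no citable items")
    return citations_this_year / articles_prev_two_years

# Hypothetical journal: 200 articles over two years, 8,400 citations this year.
print(impact_factor(8400, 200))  # 42.0
```

Note the lever an editor controls: every rejected manuscript that might drag the average down raises the ratio. The numerator and denominator are journal-level aggregates – nothing here measures any individual paper, let alone any individual scientist.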
If we’re looking for predictive proxies of an individual’s performance, what does a high-IF publication tell us?
A ‘normal’ peer-reviewed publication indicates a researcher has narratively completed a research project. A ‘luxury’ peer-reviewed publication indicates a researcher has narratively completed a research project that IF-savvy editorial staff guess will be highly cited. The difference is not the quality of the science – but the subjective citability of the article.
This is important because luxury publications are now required for career progression. Both with grant allocation and faculty hiring. Funding bodies and host institutions use luxury journal publications as precocious proxies for future impact.
I see several problems with this approach. Firstly, whether an article gets sent for external peer review at a luxury journal (the first barrier to publication) is a decision often made by journalists – not active scientists. Their incentive is to keep IF high – not to publish the highest quality science. Secondly, for many funding bodies/institutes, one paper in a high-IF journal seems to be enough. You’ve got a single Nature/Science/Cell paper? Then you’ve produced something an IF-savvy editorial board deems citable. Climb over your peers. You’re in the club.
However, there is a difference between a paper in a high-impact journal and a high-impact paper. The real problem with lauding papers in luxury journals is that it presumes the former guarantees the latter. It’s an indirect proxy used as a direct predictive metric.
One big paper is not a pattern of success. It’s a huge achievement for any scientist – and one day I’d love to grace such a podium (mainly for the wide exposure) – but it’s a single datapoint. Probably even an outlier. Big papers often have a ‘right-place, right-time’ whiff to them and are influenced by transient fashion. Again there is nothing wrong with this at a personal level. Being fashionable in the right-place, at the right-time is commendable.
Extrapolating from a single, anecdotal datapoint is less admirable. It's an indirect proxy. And if scientists get angry about anything, it’s extrapolation from subjective data. Recently, there’s been a growing rebellion.
From the comfort of his pre-Nobel vantage, Randy Schekman boldly announced:
“Journals like Nature, Cell and Science are damaging science.”
“I have now committed my lab to avoiding luxury journals, and I encourage others to do likewise.”
Easy to say from your Stockholm hotel – having already reaped the rewards of publishing in luxury journals. Still, Schekman’s piece caused a stir because it specifically claimed that in science:
“The biggest rewards often follow the flashiest work, not the best.”
Of course, it depends on how you define ‘best’. But Schekman argues that by using luxury publications as a prestige metric, the scientific community has outsourced quality-definition to journalists. And journalists don’t choose the best science:
“A paper can become highly cited because it is good science – or because it is eye-catching, provocative or wrong. Luxury-journal editors know this, so they accept papers that will make waves because they explore sexy subjects or make challenging claims. This influences the science that scientists do. It builds bubbles in fashionable fields where researchers can make the bold claims these journals want, while discouraging other important work, such as replication studies.”
To this end Schekman suggests we – as a scientific community – need to break the hold luxury journal editorial committees have over us:
“There is a better way, through the new breed of open-access journals that are free for anybody to read, and have no expensive subscriptions to promote. Born on the web, they can accept all papers that meet quality standards, with no artificial caps. Many are edited by working scientists.”
By only sending our work to open-access journals, Schekman believes we can circumvent luxury journal editorial biases and simply publish by merit. Perpetual open-access publishing is easy to propose whilst your Nobel Prize is being put in its display box. I’m not convinced junior researchers can risk this behaviour just yet. In the current climate it would be extremely dangerous for a junior researcher to send all their work to PLoS One and eLife.
But it's an interesting idea. For it to work, the scientific community also needs to shift focus from ‘single luxury papers’ towards ‘multiple high-impact papers’. Recently, there’s been an encouraging trend towards direct quantification for individual scientists. For example, Google Scholar curates all citations from an individual’s publication output. Here’s my page. I like the multitude of direct, personal metrics. Take the h-index: “the largest number h such that h publications have at least h citations”. It’s a broad measure of an individual’s citability. No weight is given to where a paper is published. Only citations. It’s not predicted impact from a single datapoint. It’s actual impact from several datapoints.
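The definition translates directly into code. A minimal sketch, with invented citation counts for illustration:

```python
def h_index(citations):
    """h-index: the largest h such that h publications each have
    at least h citations (where a paper is published is irrelevant)."""
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:   # the paper at this rank still clears the bar
            h = rank
        else:
            break
    return h

# Two hypothetical researchers with five papers each:
print(h_index([10, 8, 5, 4, 3]))  # 4 – steady citability across papers
print(h_index([90, 2, 1, 0, 0]))  # 2 – one outlier paper barely moves h
```

The second example is the point: a single highly cited paper cannot inflate the h-index on its own, which is exactly the ‘pattern of success over single datapoint’ property argued for above.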
So if we combine consistent open-access publishing (as proposed by Schekman) with personal output metrics (such as h-index), scientists could achieve an independent objective meritocracy. We're not there yet – but I hear Schekman-like rumination from more and more people.
I just hope institutes and funding bodies have an ear to this veering zeitgeist.
When those guarding the purse-strings can resolve between prestige and quality – researchers can simply aspire to quality.