
Indicators of scientific excellence - where are they?

The discussion about reputation-metrics in science is dragging on. By now everybody knows the standard indicators (publications, impact-factor, citations, ...), everybody uses them, everybody criticises them - and everybody ignores them when necessary. It has become a ritual to do metrics-bashing (while boasting about one's own Hirsch-factor). Something has to happen.
Now. 
(It won't.)
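(For reference: the Hirsch-factor mentioned above - the h-index - is the kind of 'excellence by Excel' number in question. It is the largest h such that at least h of a scientist's papers have at least h citations each. A minimal sketch, with made-up citation counts:)

```python
def h_index(citations):
    """Largest h such that at least h papers have >= h citations."""
    ranked = sorted(citations, reverse=True)  # most-cited first
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank  # the first `rank` papers all have >= rank citations
        else:
            break
    return h

# Hypothetical publication record: five papers with these citation counts.
print(h_index([10, 8, 5, 4, 3]))  # prints 4
```

(Note how little the number says: the one paper with 10 citations and the one with 4 weigh the same.)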
While researching new metrics can earn you a living, the output, quite frankly, can bore you to tears. The same folks who were unable to show how scientific excellence maps onto numbers now open the floodgates. They spread their concept of 'excellence by Excel' from research to knowledge-transfer to impact on society - expanding the food-chain to be tagged.
Get real! 
What societal impact does a scientific result have? The discovery of superconductivity? Research on the linguistics of micro-languages? Any result: societal impact? Good luck!

The science-community is feeling the grip of the bureaucrats while science-funding follows the mirage of 'efficiency'. It looks as if everybody is fooled into submission. You know the line: 'I believe it is crap, but since everybody is doing it, so should we' - heard from scientists and bureaucrats alike. So they all play those 'boredgames'.
The science-bureaucrats are the ones who need computable numbers to rank, judge, praise or dismiss science and scientists - because, it seems, they so deeply mistrust the very concept of science and the peer-review-system. How could they understand the predominant working principle of curiosity-driven self-exploitation that powers any real scientist?
Since many of them can't distinguish potatoes from horse-droppings, they need the science-landscape mapped to a score-sheet to create their impressive set of poo-charts - umm, pie-charts.

This age-old approach to reputation-metrics looks so impressively objective. But make no mistake: no matter what numbers they compile, the best ones are the fallout of a peer's opinion:
Publications? - Referees have seen the paper and commented on it.
Citations? - Scientists quote what they have learned to be important and trustworthy.
PhD-theses? - A number of scientists were involved over years.

Reputation-metrics as we know them - the compilation of indicators - are nothing but the condensate of the peer-review that scientists justifiably rely on and that bureaucrats are so scared of. 'Objectivity' is a sweet deception, and honesty about that would be a good start.
