
Invention, innovation and carrier pigeons

We live with the bold categorization of research as being either 'fundamental' or 'applied'. The emphasis in funding - and broadly in the public understanding - is on the supposedly more valuable *applied research*.
Scientists engaged in fundamental research, on the other hand, are widely seen as geeks, as nerds in the ivory towers of academia, kind of wasting taxpayers' money on their personal entertainment, dabbling with expensive machines, finding faster-than-light neutrinos and dismissing them again...
At the same time, innovation is imperative. So innovate we do. All the time.
But seriously, what kind of innovation can we expect when we are asked to optimize the rubber of a tire, or are coerced into developing a better mp3? What can we expect if somebody pays us to research how to make cars more fuel-efficient? Certainly there would be some neat progress. Some nifty inventions. But innovation?
Let's look back. What would we have gotten if, 30 years ago, we had researched how to make light bulbs smaller and less energy-hungry? We might have gotten smaller light bulbs, with some trickery inside. Would we have light-emitting diodes today? Probably not.
Anybody remember those vacuum tubes in old receivers? Where would applied research on them have led? To the invention of the transistor?
Never!
Would we have discovered and understood X-rays by looking for methods to study bones in a living organism in the early 1900s? Or take electromagnetic waves. Are you sure we would have encountered them in our quest to invent some type of long-distance communication 100 years ago? I bet carrier pigeons would have won the race.
To put it the other way around: which fundamental discovery concerning the laws of nature did *not* result in a multi-gazillion-dollar market? (Yes, there are a few - but you get the point.) The danger of doing research in a system that demands predictability of results, clear and controllable project plans with well-defined milestones, and a written concept for IP and technology transfer at the end is that you mostly get exactly that: predictable results.
Don't get me wrong: those results can be great, helpful, important. But the request to do only the predictable will yield the predictable - at best. Stubbornly application-driven research yields incremental inventions and a predictable but very limited return on investment.
The really disruptive stuff cannot be planned - by definition. If you aim for real invention, you have to look for a deep understanding of nature - and then get somebody inspired to create an application from it. Just as industry is told to be 'less nice' if it wants to be innovative, science should withstand the pressure to serve as an incremental problem-solving machine.
The return on investment will be much bigger.
