Invention, innovation and carrier pigeons

We live with the bold categorization of research as either 'fundamental' or 'applied'. The emphasis in funding - and broadly in public understanding - is on the supposedly more valuable *applied research*.
Scientists engaged in fundamental research, on the other hand, are widely seen as geeks, as nerds in the ivory towers of academia, wasting taxpayers' money on their personal entertainment, dabbling with expensive machines, finding ultrafast neutrinos and dismissing them again...
At the same time innovation is imperative. So innovate we do. All the time.
But seriously, what kind of innovation can we expect when we are asked to optimize the rubber of a tire, or coerced to develop a better MP3? What can we expect if somebody funds our research to make cars more fuel-efficient? Certainly there would be some neat progress. Some nifty inventions. But innovation?
Let's look back. What would we have gotten if, 30 years ago, we had researched how to make light bulbs smaller and less energy-hungry? We might have gotten smaller light bulbs, with some trickery inside. Would we have light-emitting diodes today? Probably not.
Anybody remember those vacuum tubes in old receivers? Where would applied research on them have led? To the invention of the transistor?
Never!
Would we have discovered and understood X-rays by looking for methods to study the bones of a living organism in the early 1900s? Or take electromagnetic waves. Are you sure we would have encountered them in a quest to invent some type of long-distance communication 100 years ago? I bet carrier pigeons would have won the race.
To put it the other way around: which fundamental discovery concerning the laws of nature did *not* result in a multi-gazillion-dollar market? (Yes, there are a few - but you get the point.) The danger of doing research in a system that demands predictability of results, as well as clear, controllable project plans with well-defined milestones and a written concept for IP and technology transfer at the end, is that you mostly get exactly that: predictable results.
Don't get me wrong, those results can be great, helpful, important. But the request to do only the predictable will yield the predictable - at best. Stubborn application-driven research yields incremental inventions and a predictable but very limited return on investment.
The really disruptive stuff cannot be planned - by definition. If you aim for real invention, you have to look for a deep understanding of nature - and then get somebody inspired to create an application from it. Just as industry is told it must be 'less nice' if it wishes to be innovative, science should withstand the pressure to serve as an incremental problem-solving machine.
The return on investment will be much bigger.
