
Artificial Intelligence Revisited

On June 22, 2010, David Gelernter presented his thoughts on Artificial Intelligence - the capability of computers to show intelligent behaviour - in a talk given at the invitation of The American Academy and the FAZ in Berlin.
The title, "Dream Logic, Software Minds, and the Poetry of Human Thought", hinted at what to expect: he delved deep into his rather personal understanding of intelligence and consciousness.
Gelernter attempted a definition of 'thinking' (as opposed to the simulation of thinking) through deep introspection and analysis of his own thought processes. The result was a rather romantic, very anthropocentric praise of creativity, dreaming and intuition - something tightly connected to feelings, emotion and unpredictability, a collection of elements a computer arguably does not have. A thinking computer, he inferred, should 'know' or 'feel' that it is thinking - thereby tying thinking to consciousness.
But is this the right approach?
David Gelernter rejects anything that smells like solipsism. "If I see an animal with a head and eyes, I simply assume that what is going on in my head is also going on in its head", he states in an interview with Berlin's "Der Tagesspiegel". His proof: common sense. While this may satisfy a contemporary proponent of romantic universal poetry, we in fact lack an ultimate test for consciousness and always end up with cozy attributes like feelings, emotions and awareness.
(see also: Der Tagesspiegel "Selbstbewußtsein ist ein Fluch", 27.6.2010)

Comments

Anonymous said…
Common sense, of course, can never serve as a proof. The essence of every proof is that it obeys some kind of formalization, i.e. some agreed-upon practice of assurance. However, it is a matter of experience that an argument which is totally counter-intuitive and contrary to common sense is usually flawed.

Anyway, as to the subject of artificial consciousness, I believe that any machine to which we would concede a kind of consciousness would at the same time stop being a machine.

Of course, intelligence is intimately linked to meaning, and meaning in turn is linked to many things, e.g. an action-oriented context and social interaction. But consciousness comes before intelligence; how would we test for consciousness? I propose that the most basic prerequisite for consciousness is twofold: (1) the "thing" in question shows behaviour, i.e. lets us assume that it has its own purposes, and (2) this behaviour is to a certain degree unpredictable and recursive (i.e. the thing acts diversely; it "learns" and "evolves").

Of course, these concepts leave a ginormous space for interpretation. But that is in the nature of the subject. Intelligence is to a large degree a normative, not just a descriptive, attribute.
