
Artificial Intelligence Revisited

On June 22, 2010, David Gelernter presented his thoughts on Artificial Intelligence - the capability of computers to show intelligent behaviour - in a talk given at the invitation of The American Academy and the FAZ in Berlin.
The title, "Dream Logic, Software Minds, and the Poetry of Human Thought", hinted at what to expect. He went deep into his rather personal understanding of intelligence and consciousness.
Gelernter attempted a definition of 'thinking' (as opposed to the simulation of thinking) through deep introspection and analysis of his own thought processes. The result was a rather romantic, very anthropocentric praise of creativity, dreaming and intuition - something tightly connected to feelings, emotion and unpredictability, a collection of elements a computer arguably does not have. A thinking computer, he inferred, should 'know' or 'feel' that it is thinking - thereby connecting thinking to consciousness.
But is this the right approach?
David Gelernter rejects anything that smells like solipsism. "If I see an animal with a head and eyes, I simply assume that what is going on in my head is also going on in its head", he states in an interview with Berlin's "Der Tagesspiegel". His proof: common sense. While this might satisfy a contemporary proponent of a romantic universal poetry, we actually lack an ultimate test for consciousness and always end up with cozy attributes like feelings, emotions and awareness.
(see also: Der Tagesspiegel "Selbstbewußtsein ist ein Fluch", 27.6.2010)

Comments

Anonymous said…
Common sense, of course, can never serve as a proof. The essence of every proof is that it obeys some kind of formalization, i.e. some kind of agreed-upon practice of assurance. However, it is a matter of experience that an argument which is totally counter-intuitive and contrary to common sense is usually flawed.

Anyway, as to the subject of artificial consciousness, I believe that any machine to which we would concede a kind of consciousness would at the same time stop being a machine.

Of course, intelligence is intimately linked to meaning, and meaning in turn is linked to many things, e.g. an action-oriented context and social interaction. But consciousness comes before intelligence; how would we test for consciousness? I propose that the most basic prerequisite for consciousness is twofold: (1) the "thing" in question shows behaviour, i.e. lets us assume that it has its own purposes, and (2) this behaviour is to a certain degree unpredictable and recursive (i.e. the thing acts diversely; it "learns" and "evolves").

Of course, these concepts leave a ginormous space for interpretation. But that is in the nature of the subject. Intelligence is to a large degree a normative and not just a descriptive attribute.
