Jul 5, 2010

Consciousness - an emergent property

A stone dropped into a pond creates a perfectly circular wavefront that propagates radially away from the point of impact. The speed of propagation, the amplitude and the wavelength can easily be modelled by a wave function; a computer can simulate the wave pattern on the water and display a virtual lake with breathtaking fidelity. But it remains a simulation. The lake does not solve a wave equation in order to show a wave pattern. The propagation of a water wave is a consequence of inherent properties of the water itself. The description by a wave equation, however accurate it might be, is a model of the real thing, a simulation, not even an imitation. These are two completely different, and not at all comparable, paths to the image of a water wave.
The simulated wave shows the same imagery as the real one: the propagation looks identical, the optical reflections are perfectly similar, and it may even be possible to predict some wave behaviour.
But the simulation lacks wetness.
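To make the contrast concrete: the wave-function description mentioned above can be as simple as a few lines of code. The sketch below is purely illustrative; the function name, the damping factor and all parameters are assumptions chosen for the example, not taken from any particular physics model. It computes the height of an outgoing circular ripple, which is exactly the kind of description a renderer would sample to paint the virtual lake.

```python
import math

def ripple_height(x, y, t, speed=1.0, wavelength=0.5, amplitude=1.0):
    """Height of a circular ripple at point (x, y), time t after a
    drop at the origin: a damped outgoing cosine wavefront."""
    r = math.hypot(x, y)                  # distance from the impact point
    k = 2 * math.pi / wavelength          # wave number
    omega = k * speed                     # angular frequency
    if r > speed * t:                     # the front has not arrived yet
        return 0.0
    # cosine wave travelling outwards, decaying with distance
    return amplitude * math.cos(k * r - omega * t) * math.exp(-0.5 * r)
```

Sampling this function over a grid yields a convincing image of a pond, yet nothing in it is wet; it is a description of appearances, which is the point of the paragraph above.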
The simulation of intelligent behaviour may imitate decision processes quite well, even the all-too-human fuzziness (some might remember the 'humanize' button on a sequencer of the eighties: a button adding some imperfection to the timing). But no matter how good this form of "Artificial Intelligence" is, it remains a simulation of intelligence. The processes leading to intelligence and to artificial intelligence are inherently different. And so we will mainly find the appearances we actively simulate; the simulation still lacks consciousness.
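The 'humanize' button mentioned above is a nice miniature of the whole argument: human imperfection reproduced by an explicitly programmed trick. A minimal sketch of such a function might look as follows; the name, the default jitter of 10 ms and the interface are assumptions for illustration, not a description of any actual sequencer.

```python
import random

def humanize(note_times_ms, jitter_ms=10.0, seed=None):
    """Shift each quantized note time by a small random offset,
    mimicking the 'humanize' button on 1980s hardware sequencers."""
    rng = random.Random(seed)
    return [t + rng.uniform(-jitter_ms, jitter_ms) for t in note_times_ms]
```

The output sounds less mechanical, but the "imperfection" is itself computed: the appearance of humanity is simulated, which is precisely the distinction the text draws.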

Jul 2, 2010

Artificial Intelligence Revisited

On June 22, 2010, David Gelernter presented his thoughts on Artificial Intelligence, the capability of computers to show intelligent behaviour, in a talk given at the invitation of the American Academy and the FAZ in Berlin.
The title "Dream Logic, Software Minds, and the Poetry of Human Thought" gave a hint at what to expect. He went deep into his rather personal understanding of intelligence and consciousness.
Gelernter attempted a definition of 'thinking' (as opposed to the simulation of thinking) by deep introspection and analysis of his own thought processes. The result was a rather romantic, very anthropocentric praise of creativity, dreaming and intuition, something tightly connected to feelings, emotion and unpredictability: a collection of elements a computer arguably does not have. A thinking computer, he inferred, should 'know' or 'feel' that it is thinking, thereby connecting thinking to consciousness.
But is this the right approach?
David Gelernter rejects anything that smells like solipsism. "If I see an animal with a head and eyes, I simply assume that what is going on in my head is also going on in its head," he states in an interview with Berlin's "Der Tagesspiegel". His proof: common sense. Although this might be satisfactory for a contemporary proponent of a romantic universal poetry, we actually lack an ultimate test for consciousness and always end up with cozy attributes like feelings, emotions and awareness.
(see also: Der Tagesspiegel, "Selbstbewußtsein ist ein Fluch" ("Self-awareness is a curse"), June 27, 2010)