
Artificial Intelligence Revisited

On June 22, 2010, David Gelernter presented his thoughts on Artificial Intelligence (the capability of computers to show intelligent behaviour) in a talk given at the invitation of The American Academy and the FAZ in Berlin.
The title "Dream Logic, Software Minds, and the Poetry of Human Thought" gave a hint at what to expect. He went deep into his rather personal understanding of intelligence and consciousness.
Gelernter attempted a definition of 'thinking' (as opposed to the simulation of thinking) through deep introspection and analysis of his own thought processes. The result was a rather romantic, very anthropocentric praise of creativity, dreaming and intuition: something tightly connected to feelings, emotion and unpredictability, a collection of elements a computer arguably does not have. A thinking computer, he inferred, should 'know' or 'feel' that it is thinking, thereby connecting thinking to consciousness.
But is this the right approach?
David Gelernter rejects anything that smells of solipsism. "If I see an animal with a head and eyes, I simply assume that what is going on in my head is also going on in its head," he states in an interview with Berlin's "Der Tagesspiegel". His proof: common sense. While this might satisfy a contemporary proponent of a romantic universal poetry, we actually do lack the ultimate test for consciousness and always end up with cozy attributes like feelings, emotions, awareness.
(see also: Der Tagesspiegel, "Selbstbewußtsein ist ein Fluch", 27.6.2010)

Comments

Anonymous said…
Common sense, of course, can never serve as a proof. The essence of every proof is that it obeys some kind of formalization, i.e. some kind of agreed-upon practice of assurance. However, it is a matter of experience that an argument which is totally counter-intuitive and contrary to common sense is usually flawed.

Anyway, as to the subject of artificial consciousness, I believe that any machine to which we would concede a kind of consciousness would at the same time stop being a machine.

Of course, intelligence is intimately linked to meaning, and meaning in turn is linked to many things, e.g. an action-oriented context and social interaction. But consciousness comes before intelligence; how would we test for consciousness? I propose that the most basic prerequisite for consciousness is twofold: (1) the "thing" in question shows behaviour, i.e. lets us assume that it has its own purposes, and (2) this behaviour is to a certain degree unpredictable and recursive (i.e. the thing acts diversely; it "learns" and "evolves").

Of course, these concepts leave an enormous space for interpretation. But that is in the nature of the subject. Intelligence is to a large degree a normative, not just a descriptive, attribute.
