
Artificial Intelligence Revisited

On June 22, 2010, David Gelernter presented his thoughts on Artificial Intelligence - the capability of computers to show intelligent behaviour - in a talk given at the invitation of The American Academy and the FAZ in Berlin.
The title, "Dream Logic, Software Minds, and the Poetry of Human Thought", gave a hint of what to expect. He went deep into his rather personal understanding of intelligence and consciousness.
Gelernter attempted a definition of 'thinking' (as opposed to the simulation of thinking) through deep introspection and analysis of his own thought processes. The result was a rather romantic, very anthropocentric praise of creativity, dreaming and intuition - something tightly connected to feelings, emotion and unpredictability, a collection of elements a computer arguably does not have. A thinking computer, he inferred, should 'know' or 'feel' that it is thinking - thereby connecting thinking to consciousness.
But is this the right approach?
David Gelernter rejects anything that smells like solipsism. "If I see an animal with a head and eyes, I simply assume that what is going on in my head is also going on in its head", he states in an interview with Berlin's "Der Tagesspiegel". His proof: common sense. Although this might satisfy a contemporary proponent of romantic universal poetry, we actually do lack an ultimate test for consciousness and always end up with cozy attributes like feelings, emotions and awareness.
(see also: Der Tagesspiegel, "Selbstbewußtsein ist ein Fluch" ["Self-awareness is a curse"], 27.6.2010)

Comments

Anonymous said…
Common sense, of course, can never serve as a proof. The essence of every proof is that it obeys some kind of formalization, i.e. some kind of agreed-upon practice of assurance. However, it is a matter of experience that an argument which is totally counter-intuitive and contrary to common sense is usually flawed.

Anyway, as to the subject of artificial consciousness, I believe that any machine to which we would concede a kind of consciousness would at the same time stop being a machine.

Of course, intelligence is intimately linked to meaning, and meaning in turn is linked to many things, e.g. an action-oriented context and social interaction. But consciousness comes before intelligence; how would we test for consciousness? I propose that the most basic prerequisite for consciousness is twofold: (1) the "thing" in question shows behaviour, i.e. lets us assume that it has its own purposes, and (2) this behaviour is to a certain degree unpredictable and recursive (i.e. the thing acts diversely; it "learns" and "evolves").

Of course, these concepts leave a ginormous space for interpretation. But that's in the nature of the subject. Intelligence is to a large degree a normative and not just a descriptive attribute.
