Saturday, February 7, 2009

Sarah Connor: "Look... I am not stupid, you know. They cannot make things like that yet."


This weekend, I finally finished reading James Trefil's book entitled Are We Unique? This is a very accessible, introductory text for anyone interested in the study of human consciousness. You can even find it in the Frazar library (BF 444 T74 1997). Trefil is the Robinson Professor of Physics at George Mason University and has appeared on NPR several times over the years. He also has a mustache like Don Mattingly.

As a scientist, Trefil chooses consciousness and intelligence as the defining characteristics that make human beings distinct from both animals and computers. The great majority of the book is dedicated to the question of whether computers can come anywhere close to consciousness or artificial intelligence. Needless to say, Searle and the Chinese Room argument are discussed extensively. Trefil ends up contextualizing his own theory of consciousness within a larger general theory of complexity, defining consciousness as an emergent property of neuronal complexity. Like a philosopher, he ultimately leaves the question of artificial intelligence open (though his tone is more than skeptical).

After I finished reading the book, I couldn't help but wonder why human beings are so threatened by the idea of artificial intelligence. What is so sacred about intelligence as opposed to any other ability that human beings have? We have built machines that can "run" faster (cars) and are stronger (forklifts) than human beings, yet we don't feel threatened by these machines. But when Deep Blue beats Kasparov in a chess match, we start to question our own uniqueness.

Is it evolution anxiety? Do human beings simply fear becoming a placeholder between animals and intelligent machines? Supposing we could create a conscious machine, would human beings cease to be unique? After all, Australian orchids are not conscious entities but are still rather unique.

Or is it just a control issue rather than an issue of uniqueness? James Cameron seems to think so. We build the machines, and the machines eventually become more powerful and kill us (see The Terminator, Terminator 2: Judgment Day, Terminator 3: Rise of the Machines, and the forthcoming Terminator Salvation).


What do you think?

~guybrarian

7 comments:

Anonymous said...

Firstly, the so-called philosophical issues seem to me to be a cover for religious or spiritual questions. Now, if you are not religious or spiritual, then any argument based on such reasoning is going to fail to be convincing. The Chinese Room argument, and indeed all arguments I know of against the philosophical possibility of AI, are of a spiritual nature.

This of course does not say anything about whether we are capable of creating AI in practice. On that matter there is considerable controversy. There are those, such as Steven Pinker, who think we will not be able to do it, and there are those, such as Ray Kurzweil, who think we are only two decades away from doing it.

My own opinion is that it will very likely occur this century, and I have a bet that the first functioning human-level AI will have been created before 2047.

Matthew Butkus said...

I'm not so sure that the Chinese Room argument is a question of spirituality. I do a variation of it in my introduction to philosophy class, and the argument is valid - the ability to work effectively within a symbolic system does not necessarily produce conscious awareness or understanding. So long as one can follow the logical operators, it is entirely possible to produce the appearance of comprehension.
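
To make the point concrete, here is a minimal sketch (my own toy example, not from Searle or Trefil; the symbols and replies are hypothetical) of a program that follows a symbol-matching rulebook and produces fluent-looking replies without anything we would call understanding:

    # A toy "Chinese Room": a rulebook maps input symbols to output symbols.
    # The program follows the rules perfectly, yet understands nothing.

    RULEBOOK = {
        "你好吗": "我很好，谢谢",  # "How are you?" -> "I'm fine, thanks"
        "你叫什么名字": "我叫小房间",  # "What's your name?" -> "My name is Little Room"
    }

    def chinese_room(symbols: str) -> str:
        """Look the input up in the rulebook and return the paired output."""
        return RULEBOOK.get(symbols, "对不起，我不明白")  # "Sorry, I don't understand"

    print(chinese_room("你好吗"))  # fluent-looking output, zero comprehension

To the person passing notes under the door, the room appears to "speak Chinese"; inside, it is nothing but lookup.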

Now, I don't mean to reveal my inner geek more than I have to, but this fear of replacement directly parallels a number of "worst case scenarios" in popular culture. Terminator movies aside, this is a recurring debate within the new Battlestar Galactica episodes - machines that do not know they are machines (the "Final Five").

I think it's the same fear that we see in any number of other contexts - for instance, a compelling argument can be made that the same criteria of consciousness and intelligence apply to a number of higher species, such as primates and dolphins (species exhibiting self-awareness, language, and tool use), making the "consciousness" and "intelligence" criteria differences of degree, not kind.

I strongly suspect that the fear is due to the realization that we're less special than we think we are in the grand scheme of things. As for plausibility, given the rate at which processing power is increasing, it wouldn't surprise me if we see something comparable to human intelligence within two decades.

Josh said...

I suppose I share Trefil's skepticism concerning AI. To date, no machine has been able to pass the Turing Test (a machine passes if, communicating by text alone and without seeing what is on the other side of the wall, you cannot tell whether you are talking with a machine or a human).
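
Just to spell the setup out, a bare-bones sketch (with made-up respondents, not any actual test that has been run): the judge exchanges text with two hidden parties and has to guess which one is the machine.

    import random

    # A bare-bones imitation game. The judge sees only text, never the respondents.
    # human_reply and machine_reply are hypothetical stand-ins for the two players.

    def human_reply(prompt: str) -> str:
        return input(f"(hidden human) {prompt}\n> ")

    def machine_reply(prompt: str) -> str:
        return "That's an interesting question."  # a canned, unconvincing bot

    def imitation_game(prompts):
        players = [human_reply, machine_reply]
        random.shuffle(players)  # hide which label is the machine
        respondents = dict(zip("AB", players))
        for prompt in prompts:
            for label in "AB":
                print(f"{label}: {respondents[label](prompt)}")
        guess = input("Which respondent is the machine, A or B? ").strip().upper()
        print("Correct!" if respondents.get(guess) is machine_reply else "Fooled.")

    imitation_game(["What did you dream about last night?"])

A machine "passes" only when judges can't reliably tell the difference.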

Moreover, I think that intuition or the "a-ha" moment that humans experience is impossible to program. Where does this moment of discovery come from? Is it purely chemical? Can it be replicated with transistors? I have my doubts.

Barnaby - I think that our insecurity may stem from a religious or spiritual issue. However, I don't think Searle made any claims about the religious nature of his Chinese Room.

At the same time, a religious or spiritual argument might claim a "soul" as what makes humans unique - something which can be neither created nor destroyed (except by God). This won't hold much weight with the scientific mind, but that does not necessarily make it incorrect.

Matthew Butkus said...

Regarding the "a-ha" moment: maybe, maybe not. While nothing has passed the Turing Test *yet*, that does not mean it cannot happen.

The nature of consciousness is still in question, but the evidence to date suggests it may be something epiphenomenal to sufficient complexity (i.e., once you hit a sufficient level of neural complexity, self-awareness is produced by the system, but isn't localized to any particular part of it). We've seen other animals - dolphins and primates - display behavior similar in nature to ours (self-concept and recognition, tool use, etc.), so human-level processing is clearly not the minimal degree of organization necessary for consciousness.

So what's the barrier to sufficient electronic complexity? If you have a comparable system that is capable of inductive and deductive reasoning (like us) that's made out of silicon and copper instead of carbon compounds, what would prevent similar awareness and "a-ha" moments? The video game systems we have now can adapt to individual player styles, and that's just what we use for entertainment. As technology improves (e.g., as we move towards quantum computing), why isn't it reasonable to suppose we'll see this kind of awareness develop?
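
Even a few lines of code can show the kind of adaptation I mean (a toy example of my own, not taken from any actual game): an opponent that counts which move the player favors and plays the counter to it.

    from collections import Counter

    # A toy adaptive opponent for rock-paper-scissors: it tracks the player's
    # history and counters whatever move the player has favored so far.

    BEATS = {"rock": "paper", "paper": "scissors", "scissors": "rock"}

    class AdaptiveOpponent:
        def __init__(self):
            self.history = Counter()

        def choose(self) -> str:
            if not self.history:
                return "rock"  # no data yet; open with anything
            favorite = self.history.most_common(1)[0][0]
            return BEATS[favorite]  # counter the player's favorite move

        def observe(self, player_move: str) -> None:
            self.history[player_move] += 1

    # A player who leans on "scissors" soon finds the opponent throwing "rock".
    opponent = AdaptiveOpponent()
    for move in ["scissors", "scissors", "paper", "scissors"]:
        print(f"player: {move:8s}  opponent: {opponent.choose()}")
        opponent.observe(move)

Trivial, obviously - but scale that sort of adaptation up through enough layers of complexity and the question above stops looking rhetorical.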

Anonymous said...

Talking with an ape who has been taught sign language is virtually identical to talking with a deaf child. To think human "consciousness" is somehow unique, and is what defines us as human, either jettisons certain humans or must include non-humans.

In other words, you're simply mistaken about what this entails.

Josh said...

I suppose the barrier I see to AI comes from the natural limits of logical systems. Gödel's incompleteness theorem showed that there are problems that cannot be solved by any set of rules or procedures. We can't have a set of all sets, for example - we would create an infinite loop, constantly including the set we keep creating. (Hanno, enter argument here - I never finished Gödel, Escher, Bach.)

This establishes a limit on systems that require a set of deductive or inductive rules to function. I think the problem facing AI comes from this dilemma. Electronic complexity, as a programmable set of rules and procedures, has a limit that the organic elements of the human mind don't.

I think our "a-ha" moments stem directly from overcoming Gödel's theorem in our weird, organically neuronal way. Why does a strange dream often provide the key to a problem that had been eluding us for weeks?

Honestly, I do not know.

Maybe Jung is right: dreams are a way of communicating and acquainting ourselves with the unconscious. Maybe the unconscious offers solutions to problems that are posed during waking life. A conscious machine that couldn't dream could never be whole, according to Jung. Sad. Poor Cylons.

C.E. - I actually think our unconscious is what makes humans unique. We can teach apes and machines language, but we can't teach them to dream.

Anonymous said...

To Matthew Butkus: The Chinese Room argument shows nothing. It's just an intuition pump. The systems reply is a knock-down response, and the standard reply to it depends on the false assumption that consciousness is indivisible. I'm not happy to hear that you were taught the Chinese Room argument is valid!

To Josh: The Chinese Room argument does not reference spirituality. However, the argument does pump a spiritual intuition (that there exists a soul).

Regarding Gödel's incompleteness theorem, your usage of it is a common misapplication. The main problem with your argument is the assumption that unaided human reason is capable of deciding all (or even some interesting classes of) mathematical problems. Computers are certainly limited, but Homo sapiens is almost certainly limited too. Having worked in hypercomputation, I can confidently tell you that no demonstrated physical process goes beyond Turing computation (plus a little randomness), and hence the human brain, operating as it does within the laws of physics, is limited in precisely the same way as a program running on a Turing machine. Adding randomness does not help you get past incompleteness results!
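
If it helps, the standard way to see the limit (a sketch of the classic diagonal argument, not anything specific to hypercomputation) is that a perfect halting checker would contradict itself:

    # Assume, for contradiction, a perfect oracle that decides halting.
    # halts() is hypothetical - no correct, total implementation can exist.

    def halts(program, argument) -> bool:
        """Pretend this returns True iff program(argument) eventually halts."""
        raise NotImplementedError("no such oracle can be written")

    def troublemaker(program):
        # Do the opposite of whatever the oracle predicts about running
        # the program on its own source.
        if halts(program, program):
            while True:  # oracle said "halts" -> loop forever
                pass
        return "halted"  # oracle said "loops" -> halt immediately

    # troublemaker(troublemaker) contradicts the oracle either way, so the
    # assumed halts() cannot exist. Every rule-following system, silicon or
    # carbon, inherits this kind of limit.

Gödel's and Turing's results cut against every formal system, including whatever formal description covers a physical brain; they do not single out computers.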

Regarding apes dreaming... experiments have shown that a rodent species (I think it was mice) dreams. The experiment picked up (I think during REM sleep) recaps of neural activity that had been recorded earlier in the day. Apes probably dream, and I suspect dreaming will be key to creating functioning AI.