Thursday, February 14, 2008

Will Spiritual Robots Replace Humanity by 2100?


In April 2000, Douglas Hofstadter (author of Gödel, Escher, Bach) organized a conference at Stanford University to discuss the question: “Will Spiritual Robots Replace Humanity by 2100?” Among the participants were Bill Joy, Ray Kurzweil, Hans Moravec, John Holland, and Kevin Kelly.


Kevin Kelly chose to answer the question by examining each word in the question, starting at the end.

2100:

When thinking in the long term, especially about technology, I find it very helpful to think in terms of human generations. As a rough estimate I reckon 25 years per generation. Civilization began about 10,000 years ago (the oldest city, Jericho, was born in 8000 BC), which makes the civilization now present in Jericho and the rest of the world about 400 generations old. That's 400 reproductive cycles of mother to daughter. Four hundred generations of civilized humans is not very long. We could almost memorize the names of all 400 cycles if we had nothing much else to do. After 400 generations we are different people than when we began. We had the idea of automatons and robots only maybe 8 generations ago, and made the first electronic computers 2 generations ago. The entire World Wide Web is less than 2,000 days old! The year 2100 is only four generations away, assuming the same generation length. If we morph into robots in 2100, civilized humans will have lasted only about 400 generations. That would be the shortest lifespan of a species in the history of life.
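Kelly's back-of-the-envelope arithmetic can be checked in a few lines. This is only a sketch of his stated assumptions (25 years per generation, civilization beginning roughly 10,000 years ago, and the essay delivered in 2000):

```python
# Kelly's generation arithmetic, using his stated assumptions.
YEARS_PER_GENERATION = 25        # his rough estimate per mother-to-daughter cycle

civilization_age_years = 10_000  # Jericho founded ~8000 BC
generations_so_far = civilization_age_years // YEARS_PER_GENERATION

essay_year = 2000                # the Stanford conference
generations_to_2100 = (2100 - essay_year) // YEARS_PER_GENERATION

print(generations_so_far)    # 400 generations of civilized humans so far
print(generations_to_2100)   # only 4 more generations until 2100
```

The point of the arithmetic is the asymmetry: 400 generations behind us, only 4 ahead before the year in question.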

Humanity:

The central question, the central issue, of this coming century is not “what is an AI?” but “what is a human?” What are humans good for? I forecast that variants of the question “What is a human?” will be a recurring headline in USA Today-like newspapers in this coming century. Movies, novels, conferences, and websites will all grapple with this central question of “Who are we? What is humanity?” Fed by a prosperous long boom, where anything is possible but nothing is certain, we’ll have more questions about our identity than answers. Who are we? What does it mean to be a male, or a female, a father, an American, or a human being? The next century can be described as a massive, global-scale, 100-year identity crisis. By 2100, people will be amazed that we humans back here now thought we knew what humans were.

Replace:

Replacement is a very rare event in nature. The reason we have 2 million species now is that most new species don’t replace old species; rather, they interweave with existing organisms, infill between niches, and build upon the success of other species. It is much easier to invent a new niche than it is to displace an occupied one. Most extinctions of species are not caused by usurpers, but by other factors, like climate change, comets, or self-inflicted troubles. Replacement or obsolescence of the human species seems unlikely. Given that we don’t know what humans are, our roles are likely to change; we are far more likely to redefine ourselves than to disappear.

Robots:

In general, I like Hans Moravec’s formulation that these are our children. How does one raise children? You train them for the inevitable letting go. If our children never left our control, we’d not only be disappointed, we’d be cruel. To be innovative, imaginative, creative, and free, the child needs to be out of the control of its maker. So it will be with our mind’s children, the robots. Is there a parent of a teenager who is not concerned, who does not have a bit of worry? It took us a long time to realize that the power of a technology is proportional to its inherent out-of-controlness, its inherent ability to surprise and be generative. In fact, unless we can worry about a technology, it is not revolutionary enough. Powerful technology demands responsibility. With the generative power of robots, we need heavy-duty responsibility. We should be aiming to train our robotic children to be good citizens. That means instilling values in them so they can make responsible decisions when we let them go.

Spiritual:


What is the most spiritual event we could imagine? A verifiable contact with an ET would rock the foundations of established religions. It would rekindle the question of God no matter what ET’s answers were. I think the movie *Contact* is the only movie where a theologian is a star. But we don’t have to wait for SETI to contact ET. We will do it by making ET; that is, by making a robot. In this way ET goes by another name: AI. People worried about AI being an artificial human are way off. AIs will be closer to artificial aliens. Your calculator is already smarter in arithmetic than any person in this room. Why aren’t we threatened by it? Because it is “other.” A different kind of intelligence. One superior to us, but one we aren’t particularly envious of. Most of the minds we make, including the smartest AI, will be “other.” Even in the possibility space of types of conscious minds, there are 2 million possible species of intelligence other than the one type we know (humans) -- each one of them as unique and different as a calculator and a dolphin. There is no reason to make a clone of human intelligence, because making the traditional version is so easy. Our endeavor in the coming centuries is to use all minds so far (artificial and natural) to make all possible new minds. Meeting these minds, I think, will be the most spiritual thing we can imagine right now.

Will:

I think technology has its own agenda. The question I am asking myself is: what does technology want? If technology is a child, a teenager even, it would really help to know what teenagers want, in general. What are the innate urges, the inherent biases, the internal drives of this system we call technology? Once we know what technology wants, we don’t have to surrender to all of those wants, any more than you surrender to any and all adolescent urges; but you can’t buck them all either. WILL these things technology wants happen? I believe they want to happen. What we know of technology is that it wants to get smaller (Moore’s Law), it wants to get faster (Kurzweil’s Law), and my guess is that technology wants to do whatever humans do (Kelly’s Law). We humans find tremendous value in other creatures, and increasingly in other minds. I see no reason why robots would not find humans just as valuable. Will robots be able, or even want, to do all the things that humans do? No. We’ll make them mostly to do what we don’t want to do. And what then do we humans do? For the first time, robots will give us the power to say: anything we want to do.



Kevin Kelly is Senior Maverick at Wired magazine. He helped launch Wired in 1993, and served as its Executive Editor until January 1999. He is currently editor and publisher of the Cool Tools website, which gets 1 million visitors per month. From 1984-1990 Kelly was publisher and editor of the Whole Earth Review, a journal of unorthodox technical news. He co-founded the ongoing Hackers' Conference, and was involved with the launch of the WELL, a pioneering online service started in 1985. He authored the best-selling New Rules for the New Economy and the classic book on decentralized emergent systems, Out of Control.



reprinted from Kevin Kelly's website (http://www.kk.org/)

2 comments:

Anonymous said...

Thanks for posting this. It's a topic I'm interested in because it scares the crap out of me (from scientific, theological, and philosophical viewpoints). But the religious & faith-based part of me refutes this idea and scoffs at it, in all honesty. Or maybe it's because I've been tainted by that horrible I, Robot movie starring Will Smith...
Anyways, I'm torn when it comes to coming up with an answer to this question. I don't think they will take over, but I do feel they will be a part of our lives in many ways. Check out what they are doing in Japan, for example! I'm fascinated by Hiroshi Ishiguro's work: http://www.ed.ams.eng.osaka-u.ac.jp/
Check this video out...he made a robot that looks like him!!!
http://www.youtube.com/watch?v=RksP_gAqSh0
And Waseda University has some pretty interesting robotics projects as well: http://www.takanishi.mech.waseda.ac.jp/index.htm

Romach said...

At the Monday, February 18 edition of the Philosophy club meeting, Dr. Furman posited his deeply entrenched feelings about there not being free will. While the arguments were engaging, one thing seemed to be amiss. What ever happened to Hume's argument that one cannot be absolutely certain what will happen despite previous patterns of behavior? The sun has risen for millennia, but it begs the question to say that it will rise tomorrow. We just don't know what will happen until it happens. Right? Okay. Well, Todd argued that behavior in robots, and apparently also in humans, results from input that regulates behavior. That sounds awfully causal. If the best we can say is that there is some probability of behavior resulting from input, then shouldn't the actual output be a result of some statistical anomaly? Now, if we can say that there is some probabilistic freedom between what we predict and what actually occurs, then it would seem that the behavior that results somewhere in that chasm would have to be free.