
There is no question that technology is advancing at a breakneck, never-before-seen pace. The historical course of technology's advance is perhaps best exemplified by today's Web, which has changed everything from how the economy runs, to how we find knowledge, to our social lives, to our privacy. Any sane person is aware of this. The signs of exponential growth in all kinds of computing power and information sharing are well described by Kurzweil's extrapolation of Moore's Law: the Law of Accelerating Returns. Despite social constraints, fluctuating economies, and despite our only having recently discovered these long-term trends, they seem to persist against all odds. This has led Kurzweil (and a growing list of others) to conclude that at some point our created technologies are going to surpass us in intelligence and start creating greater intelligences, and then who knows what? With all the facts at hand, the idea of this "technological singularity" holds some weight, and it is worth considering the possibility of it happening, along with its ramifications. It is also worth considering some other points of view, including the idea that these ramped-up exponential trends may have different results, results we may already be experiencing day by day. This is a more subtle version of the 'Singularity,' but it, too, holds weight and in my view is worth considering.
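To make the "exponential growth" claim concrete, here is a minimal back-of-the-envelope sketch in Python (my own illustration, not Kurzweil's actual model), extrapolating Moore's Law from a commonly cited starting point:

```python
# Toy extrapolation of Moore's Law: transistor counts doubling
# roughly every two years. The starting point (Intel 4004, 1971,
# ~2,300 transistors) is a commonly cited figure; the doubling
# period is an approximation, not an exact law.

BASE_YEAR = 1971
BASE_TRANSISTORS = 2_300
DOUBLING_PERIOD_YEARS = 2

def transistors(year: int) -> float:
    """Projected transistor count under a strict doubling model."""
    return BASE_TRANSISTORS * 2 ** ((year - BASE_YEAR) / DOUBLING_PERIOD_YEARS)

for year in (1971, 1991, 2011):
    print(f"{year}: ~{transistors(year):,.0f} transistors")
```

Running this projects roughly 2.4 billion transistors by 2011, which is in the right ballpark for chips of that era; the striking part is that forty years of a simple doubling rule spans six orders of magnitude.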

SINGULARITY: DEFINED
Before I begin a discussion of the Technological Singularity, it is worth taking the time to define what the term actually describes. The Technological Singularity is considered inevitable by many and ridiculous by others. Some have even proposed that the Singularity is already behind us. Maybe so, but that depends on your definition of the Singularity. I don't buy it.

Singularity

One of the pioneers of the idea of a technological singularity, Vernor Vinge, wrote in 1993 that the singularity is “change comparable to the rise of human life on Earth. The precise cause of this change is the imminent creation by technology of entities with greater than human intelligence.” Basically, that means the point when machines surpass human intelligence, a.k.a. the creation of “Superintelligence,” which goes on to create other superintelligences, essentially shutting humans out of the equation.

Before Vinge, I. J. Good wrote:

“Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an ‘intelligence explosion,’ and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control.”

Then again, Good expected an ultraintelligent machine to be built sometime within the twentieth century.
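Good's argument lends itself to a toy recurrence (my own sketch, not anything from Good's paper): if each machine designs a successor whose capability is its own scaled by some feedback factor, the outcome hinges entirely on whether that factor stays above one.

```python
# Toy model of Good's "intelligence explosion" (my own illustration,
# not a formalism from Good or Kurzweil). Each machine designs a
# successor whose intelligence is its own times a feedback factor:
# above 1, the series explodes; below 1, it fizzles out.

def generations(start: float, factor: float, steps: int) -> list[float]:
    levels = [start]
    for _ in range(steps):
        levels.append(levels[-1] * factor)
    return levels

print(generations(1.0, 1.5, 10))   # explosive: ~57x after 10 generations
print(generations(1.0, 0.9, 10))   # fizzles: decays toward zero
```

The whole debate, in a sense, is over which side of that threshold the first super-human intelligence would land on, and whether a single constant factor is even a sane way to model it.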

I like the summation a friend of mine makes: “I can’t call it intelligence for something that is nothing without us. If we left tomorrow, there would be a ton of idle computers, dead zoos and dead domesticated animals.” I agree: only if we humans were wiped off the face of the earth and robots and machines kept operating everything without us, to their own ends, could we call them super-human, or at least “human-like,” intelligences.

If we go by any of these definitions, the Technological Singularity has indeed not yet happened.

But suppose the Singularity happens as predicted, and a super-human intelligence is created. Why would it then be automatically compelled to create other intelligences? Honestly, how can we even know that? An intelligence greater than our own will, by definition, operate by a different set of rules, and more than likely have a different agenda altogether. Peter on consciousentities.com enters the Singularity conversation with a number of thought-provoking criticisms along these same lines. I believe it may be a stretch to think that the simple creation of a super-human intelligence will result in an intelligence explosion that kicks us to the curb, and there are others who agree with me on this. Maybe it will be more like a super-Google that is just really good at finding answers we didn’t know existed, and finding them in unprecedented ways we humans simply couldn’t comprehend. On the flip side of the coin, there are plenty of horror scenarios we can already imagine: Terminator, Eagle Eye, I, Robot, and so on. But of course we humans always like a good end-of-the-world story.

What would motivate superintelligences to run things the way we do after we are gone? Common sense tells me that for a superintelligence to be motivated to run things the way humans would, it would have to have the same human-like tendencies, built on human-like hardware as well as software, and I don’t see that happening.

As Kevin Kelly eloquently states:

The one kind of mind I doubt we’ll make many of is an artificial mind just like a human. The only way to reconstruct a viable human species of mind is to use tissue and cells–and why bother when making human babies is so easy?

REAL VS. ARTIFICIAL INTELLIGENCE

