There is no question technology is advancing at a breakneck, never-before-seen pace. The historical course of technology's advance is perhaps best exemplified by today's Web, which has changed everything from how the economy runs, to how we find knowledge, to our social lives, to our privacy. Any sane person is aware of this. The signs of exponential growth in all kinds of computing power and information sharing are well described by Kurzweil's extrapolation of Moore's Law: the Law of Accelerating Returns. Despite social constraints, despite fluctuating economies, and despite our having only recently noticed these long-running trends, they seem to persist against all odds. This has led Kurzweil (and a growing list of others) to conclude that at some point our created technologies are going to surpass us in intelligence and start creating greater intelligences, and then who knows what? With all the facts at hand, the idea of this "technological singularity" holds some weight, and it is worth considering the possibility of it happening, and its ramifications. It is also worth considering some other points of view, including the idea that these ramped-up exponential trends may produce different results, results we may already be experiencing day by day. This is a more subtle version of the 'Singularity,' but it, too, holds weight and in my view is worth considering.
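To put a rough number on the shape of that curve, here is a back-of-the-envelope sketch in Python. The 1971 baseline of roughly 2,300 transistors (the Intel 4004) and the two-year doubling period are the usual illustrative assumptions, not precise historical figures:

```python
# Back-of-the-envelope Moore's Law projection: the count doubles
# every `doubling_years`. Baseline and doubling period are
# illustrative assumptions, not exact historical data.

def projected_count(year, base_year=1971, base_count=2300, doubling_years=2):
    return base_count * 2 ** ((year - base_year) / doubling_years)

for year in (1971, 1981, 1991, 2001, 2011):
    print(year, f"{projected_count(year):,.0f}")
# 1971 -> 2,300 ... 2011 -> ~2.4 billion: a millionfold in forty years.
```

Forty years of steady doubling turns thousands into billions, which is Kurzweil's "accelerating returns" point in miniature.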

SINGULARITY: DEFINED
Before I begin a discussion of the Technological Singularity, it is worth taking time to define what this term actually describes. The Technological Singularity is considered inevitable by many and ridiculous by others. Some have even proposed that the Singularity is already behind us. Maybe so, but that depends on your definition of the Singularity. I don't buy it.


One of the pioneers of the idea of a technological singularity, Vernor Vinge, wrote in 1993 that the singularity is "change comparable to the rise of human life on Earth. The precise cause of this change is the imminent creation by technology of entities with greater than human intelligence." Basically, that means the point when machines surpass human intelligence: the creation of a "superintelligence," which goes on to create other superintelligences, essentially shutting humans out of the equation.

Before Vinge, I. J. Good wrote:
"Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an 'intelligence explosion,' and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control."

Then again, Good expected a superintelligence to be created some time in the 20th century.

I like the summation my friend makes: "I can't call it intelligence for something that is nothing without us. We left tomorrow and there would be a ton of idle computers, dead zoos and dead domesticated animals." I agree with that: only if we humans were wiped off the face of the earth and robots and machines kept operating everything without us, and to their own ends, could we call them super-human, or at least "human-like," intelligences.

If we go by any of these definitions, the Technological Singularity has indeed not yet happened.

But perhaps the Singularity happens as predicted. Perhaps a super-human intelligence is created. Why would it then be automatically compelled to create other intelligences? Honestly, how can we even know that? An intelligence greater than our own will by definition play by a different set of rules, and more than likely have a different agenda altogether. Peter on consciousentities.com enters the Singularity conversation with a number of thought-provoking criticisms along these same lines. I believe it may be a stretch to think that the simple creation of a super-human intelligence will result in an intelligence explosion that kicks us to the curb, and there are others who agree with me on this. Maybe it will be more like a super-Google that is just really good at finding answers we didn't know existed, in unprecedented ways we humans simply couldn't comprehend. On the flip side of the coin, there are plenty of horror scenarios we can already imagine: Terminator, Eagle Eye, I, Robot, and so on--but of course we humans always like a good end-of-the-world story.

What would motivate the superintelligences to run things the way we do after we are gone? Common sense tells me that for a superintelligence to be motivated to run things the way humans would, it would have to have the same human-like tendencies, built on human-like hardware as well as software--and I don't see that happening.

As Kevin Kelly eloquently states:
The one kind of mind I doubt we'll make many of is an artificial mind just like a human. The only way to reconstruct a viable human species of mind is to use tissue and cells--and why bother when making human babies is so easy?

REAL VS. ARTIFICIAL INTELLIGENCE
Let's take the definition game a step closer to its core: what is AI--or intelligence, for that matter? Our understanding of how to go about creating AI has changed as we have redefined our understanding of our own intelligence. And it almost seems that the more we try to define intelligence, the more it evades definition.

Considering how flawed our definitions of our own intelligence are, our definition of "artificial" intelligence likewise remains fluid as technology marches onward toward sentient machines. This has resulted in the "AI Effect."

The 'AI Effect,' simply stated, says: "AI is whatever hasn't been done yet." Much of what has already been created would have counted as AI by anyone's definition fifty years ago. We just don't call it AI, because we accept these things as normal once they are integrated into our daily lives, and because our definitions of intelligence and artificial intelligence keep changing. Other factors keeping us from calling these things true AI may include the type of change--gradual, incremental improvements that have built up to what we have today, as opposed to obvious revolutionary leaps--and it may simply be human nature, or fear, that keeps us from believing anything close to AI has actually been created already.

From a Wired article: “If you told somebody in 1978, ‘You’re going to have this machine, and you’ll be able to type a few words and instantly get all of the world’s knowledge on that topic,’ they would probably consider that to be AI,” Google cofounder Larry Page says. “That seems routine now, but it’s a really big deal.” These kinds of things happen all the time as technology marches forward. Humans are learning from computers how to play better chess now, not the other way around.

Kevin Kelly blames our chauvinistic nature for this:
We are blind to this massive eruption of minds into [technology] because humans have a chauvinistic bias against any kind of intelligence that does not precisely mirror our own. Unless an artificial mind behaves exactly like a human one, we don't count it as intelligent. Sometimes we dismiss it by calling it "machine learning." So while we weren't watching, billions of tiny, insectlike artificial minds spawned deep into [technology], doing invisible, low-profile chores like reliably detecting credit-card fraud or filtering e-mail spam or reading text from documents. These proliferating microminds run speech recognition on the phone, assist in crucial medical diagnosis, and guide automatic gearshifts and brakes in cars. A few experimental minds can even drive a car autonomously for a hundred miles.
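To make one of those "insectlike minds" concrete, here is a toy sketch of the statistical idea behind a spam filter: a naive Bayes word-count classifier. Real filters are vastly more elaborate; the tiny training set and crude smoothing here are purely illustrative:

```python
# A toy "micromind": naive Bayes spam filtering by word counts.
# Illustrative only -- production filters use far richer features.

from collections import Counter
import math

spam = ["win cash now", "cheap pills now", "win a prize"]
ham  = ["lunch at noon", "see you at the meeting", "project notes attached"]

def train(docs):
    counts = Counter(w for d in docs for w in d.split())
    total = sum(counts.values())
    # +1 (Laplace) smoothing so unseen words don't zero out the score
    return lambda w: (counts[w] + 1) / (total + 1)

p_spam, p_ham = train(spam), train(ham)

def looks_like_spam(msg):
    score = sum(math.log(p_spam(w)) - math.log(p_ham(w)) for w in msg.split())
    return score > 0

print(looks_like_spam("win cash prize"))   # True
print(looks_like_spam("meeting at noon"))  # False
```

Nobody would call this scrap of arithmetic "intelligent," and yet scaled-up cousins of it quietly do useful work for us every day--Kelly's point exactly.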

In Gödel, Escher, Bach, Hofstadter includes among his speculations about AI an answer to the question "How would we even recognize AI?":
It is almost impossible to imagine that the "body" in which an AI program is housed would not affect it deeply. So unless it had an amazingly faithful replica of a human body--and why should it?--it would probably have enormously different perspectives on what is important, what is interesting, etc. ... My guess is that any AI program would, if comprehensible to us, seem pretty alien. For that reason, we will have a very hard time deciding when and if we really are dealing with an AI program, or just a "weird" program. ... Probably no one will ever understand the mysteries of intelligence and consciousness in an intuitive way. Each of us can understand people, and that is probably about as close as you can come.

In some respects, we have already created "superintelligences"--technologies that can easily outperform us at certain things, like math and other specialized tasks. In some ways these machines are smarter than us, but they cannot live without us. They are capable of mass calculation, but not of emotion--not love, or hate, or happiness. If we're honest with ourselves, those are what we're really looking for, and trying to produce.


Emotional intelligence, while ubiquitous among humans, is much more difficult to attain for the same machines that easily outdo us in tasks we find downright tedious and boring. Getting machines to understand emotion may be as much a problem of hardware as of software, since the two layers are tightly intertwined--in living biological beings as in computers. Intelligence as we understand it emerges from these tightly interwoven, complex layers.

If we do achieve this emotional intelligence, have we then reached the Singularity? The more I think about it, the less sure I am that there will be a definable 'moment' of reaching the Singularity. Technology keeps overtaking menial human tasks--tasks that once would have required what we considered intelligence to perform, such as beating us at chess or, ahem, Jeopardy!... Then again, if a piece of technology can't be considered intelligent as long as it is nothing without us, then all these advances so far are no more than tools that supplement our own intelligence. After all, if all humans were suddenly wiped off the face of the earth, what good would Deep Blue, Watson, or Google Translate be? For that matter, why would we even want to create human-like AI? Everything we've created thus far has been decidedly nonhuman, yet these tools have profoundly impacted our lives.

IMMERSED IN THE PUZZLE
So machines are now capable of teaching humans, while all technology still remains dependent upon its human creators. As the Wired article puts it, "In short, we are engaged in a permanent dance with machines, locked in an increasingly dependent embrace." This coevolution between humans and our technology is highlighted in two of Kevin Kelly's books, Out of Control and the recently released What Technology Wants. He makes many valid points about the history and likely future of technology, and about our intimate relationship with it as humans.

Instead of the antiquated top-down approach of creating a smart robot, Kelly sees intelligence as more likely to emerge from the Web and its infrastructure, the Internet. Time and again he points to examples in nature and in machines where intelligence emerges from the hive mind of the whole--something that can only be achieved from the bottom up. Something different, more sentient, more powerful, emerges from the combination of myriad dumb parts, which is exactly what the Web is. Furthermore, the Web is growing at an unprecedented pace, and it is easily becoming the largest database of information ever to exist, at least in our corner of the universe.
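As a minimal illustration of that bottom-up point, consider Conway's Game of Life: every cell obeys one dumb local rule, yet coherent structures such as gliders emerge from the whole. (Kelly's argument is about the Web, not cellular automata; this is just a sketch of emergence from myriad dumb parts.)

```python
# Conway's Game of Life: complex, mobile patterns emerge from a
# single local rule applied to every cell. Sketch for illustration.

import itertools

def step(live):
    """Advance one generation; `live` is a set of (x, y) cells."""
    def neighbors(c):
        return {(c[0] + dx, c[1] + dy)
                for dx, dy in itertools.product((-1, 0, 1), repeat=2)
                if (dx, dy) != (0, 0)}
    candidates = live | set().union(*map(neighbors, live))
    # Birth with exactly 3 live neighbors; survival with 2 or 3.
    return {c for c in candidates
            if len(neighbors(c) & live) == 3
            or (c in live and len(neighbors(c) & live) == 2)}

glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
for _ in range(4):
    glider = step(glider)
print(sorted(glider))  # the same glider, shifted one cell diagonally
```

No single cell "knows" how to glide; the motion exists only at the level of the whole, which is the hive-mind intuition in its simplest form.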

Besides the Web, he points to advances in the GRIN technologies--Genetics, Robotics, Information, and Nanotechnology--as the forefront of technology's march toward achieving its "wants": increasing efficiency, opportunity, emergence, complexity, diversity, specialization, ubiquity, freedom, mutualism, beauty, sentience, structure, and evolvability.

The language recognition of a Watson 5.0, the data set of the Web, the yet-to-be-created emotionally intelligent robot... Add all these facets together, and that super-intelligence becomes something more than we can imagine; but even before these components are combined, each is capable of becoming 'intelligent' in its own specific domain, whether or not we humans admit it. Deep Blue is already "chess intelligent," calculators are "math intelligent," gene sequencers are "DNA intelligent"--all beyond human capabilities, to name just a few examples. Technology is our tool-set, and these tools continue to become smarter; many have already surpassed our abilities. Combine all these fields of research, supplement them with the immense data set of the Web, and then, maybe, we'll have our super-intelligence--but it will likely be unrecognizable to us. Because we are, after all, only human.

All things considered, I think "Singularity" may be a poor choice of word for an event that may not be singular at all. In fact, it may already be in progress, one event at a time, on many different fronts, incomprehensible to us at present. Maybe 100 years from now we'll look back and say, "Wow, how could we have missed the moment when the Web started outsmarting us?" There may be a pinpoint in time when a super-intelligent organism overtakes our intelligence and immediately starts spitting out ever more complex, intelligent machines we are incapable of understanding. But maybe the "Technological Singularity" is more like a giant puzzle being slowly assembled from several directions--and even after it is all together, maybe no one will have a full view of it. Intelligent machines already outperform us in many areas, and other tasks, such as understanding emotion and navigating obstacles, will likely be conquered in the future as well. However, because of how tightly we are intertwined with our creations in this coevolutionary dance, I think machines will remain dependent on humans for advances, just as we now reap the benefits of our creations in technology. In other words, even if a super-human intelligence does attain human-like qualities such as love and hate, I think it would be ill-advised to just wipe us out. Call it naive optimism if you must, but I feel this scenario is as likely, if not more so, than the prophesied human-machine war. In the meantime, let's just all sit back and enjoy the show.

Important References:

Good, I. J. (1965). "Speculations Concerning the First Ultraintelligent Machine." Advances in Computers, Vol. 6.

Hofstadter, D. (1999). Gödel, Escher, Bach: An Eternal Golden Braid (20th Anniversary Ed.). Basic Books. Quote: p. 680.

Kelly, K. (1994). Out of Control: The New Biology of Machines, Social Systems, and the Economic World. Basic Books.

Kelly, K. (2010). What Technology Wants. Viking Adult. Quotes: pp. 329, 332.

Kurzweil, R. (2001, Mar. 7). "The Law of Accelerating Returns." KurzweilAI.net. Retrieved from http://www.kurzweilai.net/the-law-of-accelerating-returns

Levy, S. (2010, Dec. 27). "The AI Revolution Is On." Wired Magazine. Retrieved from http://www.wired.com/magazine/2010/12/ff_ai_essay_airevolution/

Peter. (2010, Oct. 2). "The Singularity." Conscious Entities. Retrieved from http://www.consciousentities.com/?p=620

Vinge, V. (1993). "The Coming Technological Singularity: How to Survive in the Post-Human Era." Retrieved from http://mindstalk.net/vinge/vinge-sing.html

Wikipedia. (n.d.). "AI Effect." Retrieved from http://en.wikipedia.org/wiki/AI_effect

