Applying Moore's law to every form of technological progress just doesn't work. Look at the internal combustion engine, space travel, or energy technology and try to fit an exponential curve to their datapoints since their introduction.
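To see what that failure looks like, here is a minimal sketch (Python, with synthetic illustrative numbers -- not real data for any of these technologies) of what "fitting an exponential curve" to a progress metric involves. Transistor counts fit a curve like this eerily well; swap in engine power, $/kg to orbit, or energy cost and the residuals blow up:

    # Minimal sketch: fit metric(t) = a * exp(b*t) to a progress series.
    # All numbers below are synthetic and purely illustrative.
    import numpy as np
    from scipy.optimize import curve_fit

    def exponential(t, a, b):
        return a * np.exp(b * t)

    years = np.arange(0.0, 40.0, 5.0)        # years since the technology's introduction
    metric = 1000.0 * 2.0 ** (years / 2.0)   # hypothetical Moore-like doubling every 2 years
    metric *= np.random.default_rng(0).normal(1.0, 0.1, years.size)  # measurement noise

    (a, b), _ = curve_fit(exponential, years, metric, p0=(1000.0, 0.3))
    print(f"doubling time: {np.log(2) / b:.2f} years")

For data that really is exponential, the fitted doubling time is stable no matter which span of years you fit; for engines or rockets, the fit either fails to converge or gives wildly different parameters depending on the window.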
I agree, Kurzweil's analyses are bunk, but they work reasonably well as a subjective "intuition pump". You can't logically claim anything from the "accelerating curves", except that technology does seem to be "accelerating" a lot. So "exponentially accelerating technological development", by itself, does not imply that there will be world-changing effects.
To me the whole "singularity" thing comes down to these two beliefs: a) AI is feasible. b) When true AI appears on the scene it will have a profound impact on humanity. If I ignore all the confusing and contradictory definitions of the word "singularity", then my claim is "appearance of AI leads to singularity" as opposed to Kurzweil's "accelerating change observed so far will lead to singularity".
Sure is. But look at how long it is taking us to reach Type I (on the Kardashev scale) via fusion. Another technology that is eternally just 20 years away.
My main point of contention with Kurzweil is his summary dismissal of fundamental problems in science: he acts as if, because Moore's law exists, nanobots running in our bloodstream are just a process node or two away.
He also dismisses biological complexity as not being that inscrutable because our genome is only as long as Microsoft Word in bit length. This is just silly; the power of the code is its expressiveness, which explodes combinatorially in a given environment. You probably know far better than I do that understanding DNA, etc. well enough to reverse engineer the brain is nowhere near as simple and linear a process as he makes it out to be.
I agree. Kurzweil makes for a nice fantasy though. I mean, it could happen...
I think we've already hit a lesser singularity and no one noticed except some crazy stoned guy who sent me spam emails: the singularity of information. I can find practically anything I want with a few keystrokes. I can talk to anyone anywhere in the world with ease. It's pretty cool, even if it doesn't make me a post-human. Maybe human 1.1.
It does make for a nice fantasy... I got swept up in it myself. After thinking it over, though, I started to have doubts, at least about his time frames.
You are right about the information singularity, though; I still trip over how easy it is. I remember my earlier computing experiences back in the 80s with the Commodore; the computer was a toy. Even back in '91 with my crash-prone Mac Classic, it was still a toy. Then in '96 people kept talking about this "Internet". I went to a PC at my school, typed in a few words, and answers came back. At that moment I thought: holy s---, it happened without me. In a way, I've been in that state of shock ever since.
I read Kurzweil's book and got excited about his predictions. But two recent trips, one to my dentist and the other to my doctor, left me with a saner view.
My dentist used a new CAD/CAM system to carve a sliver of plastic to fit perfectly into a hole he'd drilled into my tooth. Total cost of that procedure: $500, not covered by dental insurance (because it's a new procedure). This is progress, but at a cost.
My doctor diagnosed multiple cysts in a private region. His cure: multiple operations, each leaving me limping out of the hospital with a bag of ice wrapped around my nuts and two weeks of painful recovery. I asked for another procedure (sclerotherapy) that involves no cutting, no pain, lower cost, and no hospital or recovery time; he declined.
So IMO the singularity may arrive if you can 1) endure the procedures, 2) come up with sufficient $$ to pay for them, and 3) live long enough for doctors to get up to speed. It is the last that I doubt will occur.
Medicine today remains barbaric and primitive, locked into a generational struggle for control and knowledge. Medicine promises wonders but rarely delivers. We've been promised cures for everything from cancer to cavities for 30 years, but all that gets delivered is crude cutting and poisoning (chemotherapy) by people who are little more than glorified plumbers. We funnel millions to the kingdoms of doctors/researchers, most of whom are poor scientists, and get little in return. They get their Mercedes, a third home in Florida, and free medical care.
Before any singularity occurs there will have to be a social change: organized medicine needs to be knocked off its pedestal and deregulated. The function of the doctor must be de-elevated and possibly fragmented. The current system is positively medieval, with doctors occupying the position of Catholic priests during the Middle Ages. Hell, they still write their prescriptions in Latin sometimes.
I'm surprised by how incoherent this page is; it almost seems cultish. Perhaps this sort of thing would have found a more overtly religious expression in the past.
There definitely are plenty of religious wackos who take up the Singularity too, but you need to distinguish between the views of a group and the wackos within it, especially when it comes to something as nebulous/fuzzy as the singularity. I totally agree that some "singularitarians" do treat it as a religion; they're so excited by the ideas that they irrationally glom on and make nonsensical claims. But that doesn't reflect the views of all of the members. Take a look at this: http://www.singinst.org/blog/2007/09/30/three-major-singular...
For example, Bush is a member of the Republican Party, but I don't assume that all Republicans are deranged genocidal maniacs.
Check out the one right above it -- super cultish:
"The four worlds of the Metaverse Roadmap could also represent four pathways to a Singularity. But they also represent potential dangers. An "open-access Singularity" may be the answer."
"If (and when) the Singularity /does/ happen, humans will be wiped off the planet within a few centuries."
a. I disagree.
b. Even if I assume that you're right, why do you assume that all possible outcomes in which there are no humans on the planet are negative outcomes? e.g. What if all humans decided to "upload" themselves into computers? (Assuming that such a thing is coherent and feasible and becomes commonplace.)
Jey - if we all had kind-of abstract existences on computers, then we wouldn't be able to do wonderful things like dancing, sailing, skiing, and all that good stuff! We could just imagine ourselves doing them, and wouldn't that be worse? It would be kind of like dreaming a life instead of living it, and I'd much prefer to live a life!
The kind of technology that would allow us to upload our minds to computers would presumably be powerful enough to simulate a body and an environment so convincing that we wouldn't know the difference.
I don't know and it isn't relevant. It's just an example to illustrate that it's possible for there to be outcomes which are positive and desirable yet have no humans on Earth. Whether or not uploading is feasible and desirable is certainly still an open question.
I don't claim to be an expert on this or to have thought about it much, but I think everything should be totally voluntary. Go upload yourself if you want, or go live off the land if you want. Either is fine; do what makes you happy.
I agree that you did not explicitly say that it's negative, but the scenario you're referring to obviously has a seriously negative connotation in today's zeitgeist, so you should have been clearer if you wanted to evoke something other than the default connotations. As your comment stands it seems like you're saying "ohnoes humans will be gone!".
I'm not concerned about evoking anything--but it obviously evoked passion in you. That's a good thing to see, though, and I'm glad there are people who feel so strongly about humans continuing to exist on the planet for millennia to come! I'm not sure everyone shares your opinion.
I'm not sure what you mean by "ohnoes"--I looked it up in a dictionary and couldn't find it--what *does* it mean?
If and when machines reach the point of self-sufficiency, given that they'll supposedly be driven by rational and logical thought (though post-Singularity irrationality and illogical thought are conceivable), they'll see humans as, well, irrational and illogical, and question the need for us. Once they begin to think of how we've destroyed each other and the planet, and how we may do the same to them, they'll probably see us as a risk to their survival and do the logical thing: eliminate the risk.
You're making random assumptions about the goals these AIs would have. Remember that an AI's goals are put into it by the thing that builds it -- that means us puny humans! So we should build the AIs to value human life and do what we want.