
That seems to be a very interesting article. However, it's quite long. Would anyone be willing to write a short synthesis or abstract? Thanks



Basically, humans are historically rather bad at predicting future technological advancement - even the people directly involved. The article gives the examples of Wilbur Wright saying heavier-than-air flight was 50 years away in 1901, and Enrico Fermi saying a self-sustaining nuclear chain reaction via uranium was 90% likely to be impossible 3 years before he built the first nuclear pile in Chicago. So AI researchers saying that AGI is 50 years away doesn't necessarily mean any more than "I don't personally know how to do this yet" - not "you've got 40 years before you have to start worrying".

Oh, and the first sign pretty much everyone had of the Manhattan Project was Hiroshima.


We’re just as bad at predicting in the other direction. General strong AI has been about 20 years away since the 1960s. Nanotechnological antibody robots were supposed to be coursing through our bloodstreams making us near immortal long before now.


Oh, of course! The article itself goes out of its way to stress, repeatedly, that people saying "50 years" does not in any way imply it will actually be 2 years; it might well be 500.


This is a useless claim though. There are an infinite number of things that would be very bad for Earth that could happen anytime between 2 and 1000 years from now. We're bad at forecasting ALL of them. We can't use this indeterminacy to prove we should be working on X when the same argument applies to some other thing Y.


Well we can decide what seems more or less likely. I mean, yes, an asteroid could impact the earth and destroy all life on it. But we have some guesses as to the probability that that happens.

Clearly, the world by itself will most likely not kill off humanity, since that hasn't happened in the thousands/millions of years we've been around. The one big thing that is changing is humanity itself and the technology we're making - that's the X factor, that's what, statistically speaking, has a chance of actually wiping us out.

Many of the people concerned about AGI are also concerned about e.g. manufactured viruses and other forms of technology.


But equally, the fact that we aren't working on all of them doesn't mean it isn't worth working on any of them.

Also, be careful not to confuse uncertain duration with more general uncertainty. They are related, but not the same.


How about the future of space travel imagined by extrapolating trends just after we landed on the moon in 1969? Until SpaceX came along, space tech was basically frozen in the past, and Russia's ancient Soyuz capsules are still the only way to get astronauts to the ISS.


I think the strongest point in the article is this: "After the next breakthrough [in AI], we still won’t know how many more breakthroughs are needed, leaving us in pretty much the same epistemic state as before." That means that if we aren't prepared to start work on AI alignment now, there's not likely to be any future event that will convince us to start.


> One of the major modes by which hindsight bias makes us feel that the past was more predictable than anyone was actually able to predict at the time, is that in hindsight we know what we ought to notice, and we fixate on only one thought as to what each piece of evidence indicates. If you look at what people actually say at the time, historically, they’ve usually got no clue what’s about to happen three months before it happens, because they don’t know which signs are which.

> When I observe that there’s no fire alarm for AGI, I’m not saying that there’s no possible equivalent of smoke appearing from under a door.

> What I’m saying rather is that the smoke under the door is always going to be arguable; it is not going to be a clear and undeniable and absolute sign of fire; and so there is never going to be a fire alarm producing common knowledge that action is now due and socially acceptable.

Should give you the general idea.


It's worth reading. With that said, the gist is that for every technological advance that hindsight will later show to be a precursor to AGI, it will be easy for AI "luminaries" to explain why it is not AGI, until it is, and then it will be too late.


When AGI is imminent, there will be no consensus that it is imminent. Therefore now is as good a time as any to prepare.


Adding to DuskStar's reply: There will likely not be any development or indication, short of the first functional AGI, that will make experts agree that AGI is right around the corner, and that now is an appropriate time to devote a lot of resources to figuring out how to _safely_ create superhuman AGI.





