
That's the kind of fanciful, futuristic pseudo-risk that takes over the discussion and crowds out the risks that actually exist today.


Calling it "fanciful" is a prime example of the greatest shortcoming of the human race: our inability to understand the exponential function.

In any case, the open letter addresses both types of risk. The insistence by many people that society somehow can't think about more than one thing at a time has never made sense to me.


Can you demonstrate exponential progress in AI?

One notes that in roughly 6000 years of recorded history, humans have not made themselves any more intelligent.


Two notes that AI is clearly way more intelligent than it was even three years ago, and that GPU hardware alone is advancing exponentially, with algorithmic advances and ever-larger GPU farms stacked on top of that.
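
For a sense of what sustained exponential hardware growth implies, here's a minimal sketch; the 2-year doubling time is an illustrative assumption, not a measured figure for GPUs:

    # A minimal sketch of compound growth; the doubling time is an
    # illustrative assumption, not a measured figure for GPU hardware.
    doubling_time_years = 2.0
    for years in (2, 4, 6, 8, 10):
        multiplier = 2 ** (years / doubling_time_years)
        print(f"after {years:2d} years: {multiplier:4.0f}x the compute")

Under that assumption you get 32x the compute in a decade, before counting algorithmic gains or bigger farms.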


"Exponentials" in nature almost always turn out to be be sigmoid functions when looked at over their full range. Intelligence in particular seems very very likely to be a sigmoid function, since scaling tends to produce diminishing returns as inter-node communication needs increase, just like we saw with CPU parallelism.


Sure, but we have an existence proof for human-level intelligence, and there's no particular reason to believe humans are at the top of what's possible.


There's no particular reason to believe anything about intelligence far greater than a human's either.

And there's absolutely no reason to imagine that current AI tech, which requires more training data than the whole of humanity has ever consumed just to learn tasks that humans acquire in 5-10 years, has any chance of reaching significant improvements in general intelligence (which would certainly require on-the-fly training to adapt to new information).
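
For scale, here's a back-of-envelope sketch of the per-person version of that comparison; every number is an assumed round figure for illustration, not a measurement:

    # Back-of-envelope scale comparison. All figures are assumed round
    # numbers: the token count reflects commonly reported corpus sizes for
    # recent large models, the exposure figure is a rough guess at daily
    # language input for one person.
    llm_training_tokens = 1.5e13   # assumed: ~15T tokens of pretraining data
    words_per_day = 2e4            # assumed: daily language exposure, one person
    years_to_acquire = 10          # the 5-10 year window mentioned above

    human_words = words_per_day * 365 * years_to_acquire
    print(f"one human's exposure: ~{human_words:.1e} words")
    print(f"LLM pretraining:      ~{llm_training_tokens:.1e} tokens")
    print(f"ratio:                ~{llm_training_tokens / human_words:.0e}x")

Even with these rough numbers the gap comes out around five orders of magnitude per person, which is the data-efficiency point being made.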



