
Man's scale is Earth.

You know, I think you have no evidence for any of your claims of "impossibility" either. And I'd argue there's a ton of counterevidence where man, completely ignoring how impossible that's supposed to be, effects change on a global scale.

You're comparing two dissimilar things. On the one hand, slowing it down (which, contrary to your claim that I'm moving the goalposts, is at sufficient investment effectively equal to stopping it); on the other, "contributing" to safe co-existence, which is trivially achieved by doing literally anything. I'm telling you that if we merely "contribute" to safe co-existence, we all die. The standard, and it really is the standard in any other field, is proving safe coexistence to a reasonable standard. Which should hopefully make clear where the difficulty lies: we have nothing. Even with all the interpretability research, and I'm not slagging interpretability, this field is in its absolute infancy.

"It can't be prevented" simply erases the most important distinction: if we get ASI tomorrow, we're in a fundamentally different position than if we get ASI in 50 years after a heroic global effort to work out safety, interpretability, guidance and morality.





> I'm telling you that if we merely "contribute" to safe co-existence, we all die.

I hear you. I believe you are wrong.

> it really is the standard in any other field, is proving safe coexistence to a reasonable standard

No, it isn't. It often becomes the standard after the fact, but pretty much no invention of man went through a committee first. Can you provide some counter-examples? Did the Wright brothers prove flight was safe before they got on the first plane? Did the inventors of CRISPR technology "prove" it is safe? Or human cloning? Or nuclear fission? Your very argument rests on the mistakes humans made in the past and the outsized consequences of making the same kinds of mistakes with AI. Your argument must be: we have to do things differently this time with AI because the stakes are higher.

These are old and boring arguments. I've been watching the LessWrong space since it was Overcoming Bias (and frankly, from before). I've heard all of the arguments they have to make.

But this discussion was about inevitability and how to respond to it. The person I replied to suggested that it was a mistake to see the future as something that happens to us. It was a call to agency. I was pointing out that not all agency is equal, and that hubris can lead us to actions that are not productive.

It is also the case that fear, just like hubris, can lead us to actions that aren't productive. But perhaps we should just move on from this discussion.


> prove flight was safe

Flight did not have potentially uncontrollable consequences.

> human cloning

No uncontrollable consequences.

> Nuclear fission

To a reasonable standard, yes! I remind you that there was a concern about atmospheric ignition that was reasonably disproven before the first test.

> CRISPR technology

Tbh they should have, and I fully advocate this standard for any sort of live genomic research as well.

Also, just fwiw: I am not scared of AI. I'm not even particularly scared of dying in a global Armageddon (as the song says, "we will all go together when we go", and tbh that's genuinely a relief). I just think, fairly dispassionately, that it's going to happen. You can't explain away our disagreement with "my opponents are just emotionally affected."

> Your argument must be: we have to do things differently this time with AI because the stakes are higher.

I don't understand what you're saying here. That is in fact my argument. My whole point is that this isn't beyond our means: we have to do it, and we're capable of doing it, so we should do it.



