
There's a lot to potentially unpack here, but idk, the idea that whether humanity enters hell (extermination) or heaven (brain uploading; a cure for aging) hinges on whether or not we listen to AI safety researchers for a few months makes me question whether it's really worth unpacking.





Maybe people should just not listen to AI safety researchers for a few months? Maybe they are qualified to talk about inference and model weights and natural language processing, but not particularly knowledgeable about economics, biology, psychology, or… pretty much every other field of study?

The hubris is strong with some people, and a certain oligarch with a god complex is acting out where that can lead right now.


It's charitable of you to think that they might be qualified to talk about inference and model weights and such. They are AI safety researchers, not AI researchers. Basically a bunch of doom bloggers jerking each other in a circle, a few of whom were tolerated at one of the major labs for a few years to do their jerking on company time.

If we don't do it, someone else will.

That's obviously not true. Before OpenAI blew the field open, multiple labs -- e.g. Google -- were intentionally holding back their research from the public eye because they thought the world was not ready. Investors were not pouring billions into capabilities. China did not particularly care to focus on this one research area, among many, that the US is still solidly ahead in.

The only reason timelines are as short as they are is that people at OpenAI, and thereafter Anthropic, decided that "they had no choice". They had a choice, and they took the one that has chopped, at the very least, years off the time we would otherwise have had to handle all of this. I can barely begin to describe the magnitude of the crime they have committed -- and so I suggest you consider that before propagating the same destructive lies that led us here in the first place.


The simplicity of the statement "If we don't do it, someone else will," and of the thinking behind it, means that eventually someone will do just that unless prevented by some regulatory function.

Simply put, with the ever-increasing hardware speeds we were churning out for other purposes, this day would have come sooner rather than later. We're really only talking about a year or two.


But each time, it doesn't have to happen yet. And when you're talking about the potential deaths of millions, or billions, why be the one who plants the seed of destruction in your own home country? Why not give human brotherhood a chance? People have held back, and still do. You notice the times they don't and the few who don't -- you forget the many, many more who refrain from doing what's wrong.

"We have to nuke the Russians, if we don't do it first, they will"

"We have to clone humans, if we don't do it, someone else will"

"We have to annex Antarctica, if we don't do it, someone else will"


Cloning? Bioweapons? Ever larger nuclear stockpiles? The world has collectively agreed not to do something more than once. AI would be easier to control than any of the above. GPUs can't be dug out of the ground.

Which? Exterminate humanity or cure aging?

Yes

The thing whose outcome can go either way.

I honestly can't tell what you're trying to say here. I'd argue there are some pretty significant barriers to each.

I’m okay if someone else unpacks it.


