
I’m not an AI expert but as I see it:

1. LLMs are already doing much more complex and useful things than most people thought possible even in the foreseeable future.

2. They are also showing emergent behaviors that their own creators can neither explain nor really control.

3. People and corporations and governments everywhere are trying whatever they can think of to accelerate this.

4. Therefore it makes sense to worry about newly powerful systems with scary emergent behaviors precisely because we do not know the mechanism.

Maybe it’s all an overreaction and ChatGPT 5 will be the end of the line, but I doubt it. There’s just too much disruption, profit, and havoc possible; humans will find a way to make it better/worse.




I follow, but that looks like a weak (presumptive) inductive argument to me. Could Hinton really be convinced by an argument like that? I would have expected something more technically specific.



