
Let's achieve AGI first before making predictions.

When AGI is achieved, it will be able to make the predictions. And everything else. And off we humans go to the reserve.

Only if it can do it in an affordable fashion.

If you need a supercomputer to run your AGI, then it's probably not worth it for any task a human can do, because humans happen to be much cheaper than supercomputers.

Also, it's not clear that AGI necessarily means better than existing AIs: a three-year-old child does have general intelligence, but it's far less helpful than even a sub-billion-parameter LLM for any task.

A child learns from experience, something still missing in LLMs.

Yep, but it won't deserve to be called AGI until it can learn too.

That doesn't seem sensible. We already have general intelligences. Let's infer possible outcomes before rushing in headfirst.

People can't even begin to quantify how an "AGI" fits into the world, or define it consistently. How do you expect to prepare for something you can't even hypothesize about? This is why people keep telling you that talking about it is ultimately meaningless. Leave this stuff on r/singularity; here, people are talking about foreseeable productivity.

A general problem-to-action mapper. We have those in biological form with varying degrees of generality. We can use those to infer how synthetic ones will behave.

You keep reiterating this as if the people actually researching this agree with the bar you've set. These qualified people can't even agree amongst themselves.

Also, that second point is about the most unhelpful one I've ever seen: "we just need to look at us, we're the real GI, we're proof AGI can exist." What are you even talking about? You don't think people have taken that approach before? We're not even close to figuring out all the nuts and bolts that go into /natural/ general intelligence. What makes you think it's easier here?

It doesn't get easier. Nobody predicted LLMs. Intelligence is much easier to grow than it is to understand. This is obvious.


