
This is my exact takeaway too, and I'm always surprised it doesn't get mentioned more often. If AI is truly groundbreaking, then shouldn't AI be able to re-implement itself? Which, to me, would imply that every AI company isn't just full of software devs cannibalizing themselves; the companies themselves are doing the same.


This is my watershed for true AGI. It should be able to create a smarter version of itself.

Last I checked, feeding the output of an LLM back into its training data leads to a progressively worse LLM. (Note I'm not talking about distillation, which trades accuracy for a smaller model. I'm referring to a model with an equal or greater number of parameters.)
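
A rough toy sketch of that degradation loop (my own illustration, not an actual LLM experiment: a Gaussian repeatedly fitted to its own samples, with made-up sample sizes and generation counts):

  # Each generation trains only on the previous generation's output,
  # so estimation error compounds and the fitted variance collapses
  # toward zero -- a toy analogue of the self-training failure above.
  import numpy as np

  rng = np.random.default_rng(0)
  data = rng.normal(loc=0.0, scale=1.0, size=20)   # generation 0: "real" data

  for gen in range(1, 501):
      mu, sigma = data.mean(), data.std()          # "train" on current data
      data = rng.normal(loc=mu, scale=sigma, size=20)  # next gen sees only model output
      if gen % 100 == 0:
          print(f"gen {gen:3d}: mu={mu:+.4f} sigma={sigma:.6f}")

Run it and sigma shrinks by orders of magnitude over the generations, even though nothing "wrong" happens at any single step.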


If the LLM is given the code for its training and is able to improve that, does that count? Because it seems like a safe bet that we're already there; the only problem is the latency of training runs.



