
This is a dumb critique. A thin wrapper that runs new samples of training data through the model and updates the weights is something already done in many situations. And a sophisticated RAG system incorporates new information even when the weights themselves aren't updated, effectively giving the model "new memory".
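The RAG point is easy to see concretely: facts added after training enter the prompt at inference time, with no weight update. A minimal sketch below, using a toy bag-of-words similarity in place of a real embedding model (the function names and example documents are illustrative, not from any particular library):

```python
import math
import re
from collections import Counter

def embed(text):
    # Toy bag-of-words "embedding"; real RAG systems use learned embeddings.
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine(a, b):
    # Cosine similarity between two sparse term-count vectors.
    dot = sum(count * b[term] for term, count in a.items())
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, docs, k=1):
    # Rank stored documents by similarity to the query, return the top k.
    q = embed(query)
    ranked = sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

def build_prompt(query, docs):
    # New facts reach the model via the prompt; its weights stay frozen.
    context = "\n".join(retrieve(query, docs, k=2))
    return f"Context:\n{context}\n\nQuestion: {query}"

# "New memory": documents added long after training, no weight update needed.
docs = [
    "The v2 API was released in March and deprecates the /legacy endpoint.",
    "The office coffee machine is on the third floor.",
]
print(build_prompt("When was the v2 API released?", docs))
```

The model never sees these documents during training; retrieval simply injects the relevant one into the context window at query time, which is what makes the system effectively non-static.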

LLMs have problems; in practice, being "static" ain't one of them.


