This is a dumb critique. A thin wrapper that runs new samples of training data through the model and updates the weights is already done in plenty of settings. And a sophisticated RAG system incorporates new information even when the weights themselves aren't updated, effectively giving the model "new memory".
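To make the "new memory without new weights" point concrete, here's a toy sketch (not any particular library's API): the model's weights stay frozen, but the prompt gets augmented with documents ingested after the training cutoff. `call_llm`, `TinyRAG`, and the bag-of-words retrieval are all placeholders I made up for illustration; a real system would use a proper embedding model and vector store.

```python
# Minimal sketch: "memory" lives in a retrieval store, not in the weights.
from collections import Counter
import math

def call_llm(prompt: str) -> str:
    # Placeholder: swap in any frozen-weights model call here.
    return f"[model answer conditioned on]:\n{prompt}"

def bow(text: str) -> Counter:
    # Crude bag-of-words vector; real systems use dense embeddings.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class TinyRAG:
    def __init__(self) -> None:
        self.docs: list[str] = []      # new info accumulates here, weights untouched

    def ingest(self, doc: str) -> None:
        self.docs.append(doc)          # can happen at any time, no retraining

    def answer(self, question: str, k: int = 2) -> str:
        q = bow(question)
        ranked = sorted(self.docs, key=lambda d: cosine(q, bow(d)), reverse=True)
        context = "\n".join(ranked[:k])
        return call_llm(f"Context:\n{context}\n\nQuestion: {question}")

if __name__ == "__main__":
    rag = TinyRAG()
    rag.ingest("The 2025 release added streaming support.")  # newer than any training cutoff
    rag.ingest("Old docs: batch mode only.")
    print(rag.answer("Does it support streaming?"))
```

The point being: the "static" weights never change in that loop, yet the answers track whatever was ingested five seconds ago.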
LLMs have problems, but in practice being "static" ain't one of them.