I don't know how you solve the "training data and tooling prompts bias LLM responses towards old frameworks" part of this, but once a new (post-cutoff) framework has been surfaced, LLMs seem quite capable of adapting using in-context learning.
New framework developers need to make sure their documentation is adequate for a model to use it when the docs are injected into the context.
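As a minimal sketch of what that injection might look like (the framework name, doc snippet, and function names here are all hypothetical placeholders, not any real tool's API), one could simply prepend the docs to the prompt and tell the model to prefer them over memorized patterns:

```python
# Sketch: prepend a post-cutoff framework's docs to the prompt so the
# model can adapt via in-context learning. All names are illustrative.
def build_prompt(framework_docs: str, user_request: str) -> str:
    return (
        "You are assisting with a framework released after your training "
        "cutoff. Rely on the documentation below, not on memorized "
        "patterns from older frameworks.\n\n"
        "--- DOCUMENTATION ---\n"
        f"{framework_docs}\n"
        "--- END DOCUMENTATION ---\n\n"
        f"Task: {user_request}"
    )

prompt = build_prompt(
    framework_docs="new_fw.mount(app, route='/') attaches a component.",
    user_request="Mount a HelloWorld component at the site root.",
)
```

The point is less the plumbing than the documentation itself: if the injected docs are terse, example-rich, and self-contained, the model has everything it needs in context.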