It's not augmenting anything at the level of humanity. It might give people access to skills they don't possess, but I don't see it coming up with new styles.
Why not? If you can make a functional LLM, it's a fairly small step to an LCM (Large Culture Model) and an LEM (Large Emotion Model) as submodules in an LBM (Large Behavioural Model).
The only difference is the tokens are rather more abstract. But there's really nothing special about novelty.
If you have a model of human psychology and culture, there isn't even anything special about cultural novelty fine-tuned to tickle various social, intellectual, and emotional receptors.
Training data is the main thing. We have lots and lots of text, and text has the special property that a sequence of it contains a lot of information about what comes next, and it is easy for a user to create. This is a rather particular circumstance: the combination of so much freely available data and there being a lot of utility in a purely auto-regressive model. It is difficult to think of other modalities in a similar position.
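To make concrete what "purely auto-regressive" means here, a toy sketch (a character-level bigram counter in Python, nothing like a real LLM, and not any particular library's API): plain text already carries its own training signal, because every position is labelled by whatever token comes next.

```python
from collections import Counter, defaultdict
import random

text = "the cat sat on the mat. the cat sat on the hat."

# Count, for each character, which character tends to follow it.
following = defaultdict(Counter)
for prev, nxt in zip(text, text[1:]):
    following[prev][nxt] += 1

def sample_next(prev: str) -> str:
    """Sample the next character in proportion to how often it followed `prev`."""
    chars, weights = zip(*following[prev].items())
    return random.choices(chars, weights=weights)[0]

# Generate a continuation: every step conditions only on the preceding context,
# which is exactly the property that makes raw text such convenient training data.
out = "t"
for _ in range(40):
    out += sample_next(out[-1])
print(out)
```

The same objective scales from this bigram toy up to an LLM; the hard-to-replicate part is the sheer volume of text where "predict what comes next" is both cheap to supervise and genuinely useful.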
In everything you described there, you are talking about anything but humanity. You described hypothetical artifacts that, if successful, would be vehicles of a synthetic species that could imitate human behavior. Again, nothing to do with humanity (unless you have bought into some idea that sees humanity as dinosaurs headed for extinction and transhumanism as the new reality).