I am not sure how someone would change your mind beyond Anthropic's excellent interpretability research. It shows clearly that there are features in the model which reflect entities and concepts, across different modalities and languages, and that related features sit geometrically near each other. That's about as latent space-y as it gets.
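(To make "geometrically near" concrete, here's a toy sketch of how that nearness is usually measured, i.e. cosine similarity between feature directions. The vectors below are made up for illustration, not actual model features.)

    import numpy as np

    def cosine(a, b):
        # Cosine similarity: ~1.0 means same direction, ~0.0 means unrelated.
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    # Toy stand-ins for learned feature directions. Real features live in a
    # very high-dimensional space; these 3-d vectors are purely illustrative.
    golden_gate_en = np.array([0.90, 0.10, 0.30])   # concept seen in English text
    golden_gate_fr = np.array([0.85, 0.15, 0.35])   # same concept, French text
    unrelated      = np.array([0.10, 0.90, -0.20])

    print(cosine(golden_gate_en, golden_gate_fr))   # ~0.99: geometrically near
    print(cosine(golden_gate_en, unrelated))        # ~0.14: far apart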
So I'll ask you, what evidence could convince you otherwise?
Good question - I guess if the interpretability folk went looking for this sort of additive/accumulative representation and couldn't find it, that'd be fairly conclusive.
These models are obviously forming their own embedding-space representations of what they learn about grammar and semantics, and latent space-y representations seem like the natural fit there, since swapping in something closely related changes the meaning of a sentence less than swapping in something more distant - as in the toy sketch below.
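(Toy illustration of that point, under the over-simplified assumption that a sentence representation is just the mean of its word vectors; real models do something far richer, and the word vectors here are invented.)

    import numpy as np

    def sentence_vec(word_vecs):
        # Crude sentence representation: the mean of the word vectors.
        # Real models do something far richer; this just shows the geometry.
        return np.mean(word_vecs, axis=0)

    # Made-up word embeddings.
    cat     = np.array([1.00, 0.20, 0.10])
    kitten  = np.array([0.95, 0.25, 0.15])   # close neighbour of "cat"
    tractor = np.array([-0.30, 0.90, 0.80])  # far from "cat"
    sat     = np.array([0.10, 1.00, 0.00])
    mat     = np.array([0.20, 0.10, 1.00])

    base      = sentence_vec([cat, sat, mat])       # original sentence
    near_swap = sentence_vec([kitten, sat, mat])    # swap in the nearby word
    far_swap  = sentence_vec([tractor, sat, mat])   # swap in the distant word

    print(np.linalg.norm(base - near_swap))   # ~0.03: meaning barely moves
    print(np.linalg.norm(base - far_swap))    # ~0.54: meaning moves a lot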
But ... that's not to say that each embedding as a whole isn't accumulative - it may just be that they're accumulations of latent space-y things (latent sub-spaces). It's a bit odd if Anthropic haven't directly addressed this, but if they have I'm not aware of it.
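For what it's worth, here's roughly what I mean by an "accumulation of latent sub-spaces": a weighted sum of feature directions, in the spirit of sparse-autoencoder-style decompositions. The feature dictionary and weights below are made up, purely for illustration.

    import numpy as np

    # Hypothetical dictionary of feature directions (the latent "sub-spaces").
    # In sparse-autoencoder work these are learned; here they're invented.
    features = {
        "is_animal":   np.array([1.0, 0.0, 0.0, 0.0]),
        "is_domestic": np.array([0.0, 1.0, 0.0, 0.0]),
        "is_small":    np.array([0.0, 0.0, 1.0, 0.0]),
    }

    def accumulate(activations):
        # An "accumulative" embedding: a weighted sum of feature directions.
        vec = np.zeros(4)
        for name, weight in activations.items():
            vec += weight * features[name]
        return vec

    # "kitten" built up as an accumulation of the features it activates.
    kitten = accumulate({"is_animal": 1.0, "is_domestic": 0.8, "is_small": 0.9})
    print(kitten)   # [1.  0.8 0.9 0. ]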