Hacker News

Good question - I guess if the interpretability folks went looking for these sorts of additive/accumulative representations and couldn't find them, that would be fairly conclusive.

These models are obviously forming their own embedding-space representations of what they learn about grammar and semantics, and latent-space representations seem well suited to that, since closely related things change the meaning of a sentence less than distantly related ones do.

But that's not to say that each embedding as a whole isn't accumulative - it may just be that embeddings are accumulations of latent-space components (latent sub-spaces). It would be a bit odd if Anthropic hadn't directly addressed this, but if they have, I'm not aware of it.
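To make the "accumulation of latent sub-spaces" idea concrete, here's a minimal sketch of one way it could look - this is my own illustration with made-up dimensions and names, not anything from Anthropic's work: an embedding built as a sparse weighted sum of directions drawn from an overcomplete feature dictionary, then probed by projecting back onto that dictionary to see which features were accumulated.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: an embedding of dimension d accumulated from a
# dictionary of k "feature" directions, with k > d (superposition).
d, k = 256, 1024
features = rng.normal(size=(k, d))
features /= np.linalg.norm(features, axis=1, keepdims=True)

# A sparse set of active features sums into a single embedding vector.
active = {3: 2.0, 17: 1.2, 42: -0.9}  # feature index -> weight
embedding = sum(w * features[i] for i, w in active.items())

# Interpretability-style probe: project the embedding back onto every
# feature direction. The active features should dominate, approximately,
# since random high-dimensional directions are nearly orthogonal.
scores = features @ embedding
top = np.argsort(-np.abs(scores))[:len(active)]
print(sorted(int(i) for i in top))  # indices of strongest features
```

The point of the sketch is that "accumulative" and "latent space-y" aren't in tension: the whole embedding is a sum, but each summand lives along a learned direction in latent space, and recovery of the parts is only approximate because the dictionary is overcomplete rather than orthogonal.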


