
No one publicly pushes any techniques very far except Meta, and it's true they continue to train dense models for whatever reason.

The transformer was an entirely new architecture, a very different kind of step change than this.

Edit: and Alibaba.



They likely continue to train dense models because they are far easier to fine-tune, and that is a huge use case for the Llama models.


It probably also has to do with their internal infra. If it were just about dense models being easier for the OSS community to use & build on, they should probably be training MoEs and then distilling to dense.
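To make the "train MoEs and then distill to dense" idea concrete, here is a minimal sketch of MoE-to-dense distillation in plain PyTorch. The `teacher` (an MoE) and `student` (a dense model) are hypothetical stand-ins for whatever modules you'd actually use; only the loss wiring is the point, not any particular lab's recipe.

  import torch
  import torch.nn.functional as F

  def distill_step(teacher, student, input_ids, optimizer, temperature=2.0):
      """One distillation step: push the dense student's token distribution
      toward the frozen MoE teacher's via temperature-scaled KL divergence."""
      with torch.no_grad():
          teacher_logits = teacher(input_ids)   # (batch, seq, vocab)
      student_logits = student(input_ids)       # (batch, seq, vocab)

      t = temperature
      loss = F.kl_div(
          F.log_softmax(student_logits / t, dim=-1),
          F.softmax(teacher_logits / t, dim=-1),
          reduction="batchmean",
      ) * (t * t)  # standard temperature correction so gradients don't vanish

      optimizer.zero_grad()
      loss.backward()
      optimizer.step()
      return loss.item()

The appeal of this route is that the released artifact is a plain dense checkpoint, so downstream users never have to deal with routers or expert parallelism when fine-tuning.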



