Hacker News

i think maybe the LLM team at facebook was bummed out because twitter bullied them on their last public release (and they didn't ignore it), and this time they decided to sit down and nerd flex by doing some undeniably excellent performance work that reduces resource requirements by 10x and limits itself only to publicly available training data.

maybe they care about moats and elon muskcrosoft's closedai or whatever, but i kinda doubt it. again, it feels more like a nerd flex, probably meant to raise morale internally and to push the field as a whole in a good direction by cutting resource requirements.

excellent paper! easy on the eyes and i really like the angle.
