Hacker News

> But it will be both slower and lower quality than anything OpenAI currently offers.

It will definitely not be slower. Local inference with a 7B model on a 3090/4090 will outpace GPT-3.5-turbo and smoke GPT-4-turbo.
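If you want to check this yourself rather than take the claim on faith, the decode throughput of any local runtime is easy to measure. Here's a minimal sketch: `tokens_per_second` is a hypothetical helper name, and `generate` stands in for whatever streaming API your runtime exposes (e.g. a llama.cpp wrapper yielding tokens one at a time).

```python
import time

def tokens_per_second(generate, n_tokens):
    """Measure decode throughput of a streaming token generator.

    `generate` is any callable that yields tokens one at a time when
    asked for `n_tokens` tokens. In practice you would wrap your local
    runtime's streaming call here (hypothetical wiring; adapt to your
    backend).
    """
    start = time.perf_counter()
    count = 0
    for _ in generate(n_tokens):
        count += 1
    elapsed = time.perf_counter() - start
    return count / elapsed
```

Run it against the local model and against the API (counting streamed tokens the same way) and you have an apples-to-apples number instead of an impression.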


