
It is weird, but GPT-3 is worse than much smaller LLaMA models, so I doubt it would see much use anyway.


How do you measure this? Pointers to papers would be very helpful.


The LLaMA paper had a bunch of comparisons.


Aren't the LLaMA weights leaked though? Did Facebook ever open up its license?


Doesn’t matter if you only use it yourself. No one will know.


Are you referring to DaVinci or ChatGPT-3.5?


DaVinci

