Hacker News

If you're finetuning your own model, the closed models being "incredibly higher quality" is probably less relevant.


That's how we all want it to work, but the reality today is that GPT-4 is better at almost any task than a fine-tuned version of any other model.

It's somewhat rare to have both a task and a good-enough dataset such that you can fine-tune another model to come close to GPT-4's quality on that task.
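To make "good enough dataset" concrete: a minimal sketch of assembling a small supervised fine-tuning dataset in the JSONL chat format used by OpenAI's fine-tuning API (the `messages`/`role`/`content` field names come from that format; the "AcmeDB" task and file name are hypothetical, and other providers use different schemas):

```python
import json

# Hypothetical task: a support assistant for a fictional product, "AcmeDB".
# Each training example is one JSON object per line, containing a full chat.
examples = [
    {
        "messages": [
            {"role": "system", "content": "You are a support assistant for AcmeDB."},
            {"role": "user", "content": "How do I reset a replica?"},
            {"role": "assistant", "content": "Stop the replica, then re-run the sync job from the primary."},
        ]
    },
    # ...in practice you would want hundreds to thousands of such examples,
    # covering the real distribution of questions your users ask.
]

with open("train.jsonl", "w") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")

# Basic validation pass: every line must parse as a JSON object whose
# "messages" value is a non-empty list of role/content dicts.
with open("train.jsonl") as f:
    for line in f:
        record = json.loads(line)
        assert isinstance(record["messages"], list) and record["messages"]
        for msg in record["messages"]:
            assert "role" in msg and "content" in msg
```

The dataset quality bar (coverage, label consistency, enough volume) is usually the hard part, not the file format.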


GPT-4 is still heavily censored and will simply refuse to talk about many "problematic" things. How is that better than a completely uncensored model?


Depends what you’re using it for. For many use cases, the censorship is irrelevant.


Finetuning a better model still yields better results than finetuning a worse model.



