I feel the gains from fine-tuning are often short-lived, and it can end up overwriting what the model has already learned (catastrophic forgetting), leaving it less capable overall. I lean more towards adaptive methods, prompt optimization, and other lightweight ways of handling tasks, which feel more practical and resource-efficient than fine-tuning by default. We should focus on getting the most out of existing models without damaging their current capabilities, rather than reaching for fine-tuning first.
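
To make the "adaptive methods" point concrete: LoRA-style adapters are one common way to adapt a model while leaving the base weights frozen, so the original capabilities aren't overwritten. A minimal sketch using the Hugging Face peft library (the model name and hyperparameters here are illustrative, not a recommendation):

    from transformers import AutoModelForCausalLM
    from peft import LoraConfig, TaskType, get_peft_model

    # Load a base model; its weights stay frozen during adaptation.
    base = AutoModelForCausalLM.from_pretrained("gpt2")

    # Attach small trainable low-rank adapters to the attention layers.
    config = LoraConfig(
        task_type=TaskType.CAUSAL_LM,
        r=8,                        # adapter rank
        lora_alpha=16,
        lora_dropout=0.05,
        target_modules=["c_attn"],  # GPT-2's fused attention projection
    )
    model = get_peft_model(base, config)

    # Only the adapter parameters (a small fraction of the total) train,
    # so the base model's existing knowledge stays intact.
    model.print_trainable_parameters()

Adapters like this can also be swapped per task or removed entirely, which is exactly the "don't damage what's already there" property I'm arguing for.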

