
Even GPT-4 can only handle a few pages of text as a prompt for examples. In most cases you'd want to fine tune.


I guess I’m just curious what the killer use-cases are for fine tuning. For example it seems like overkill to fine tune a Shakespeare model, because you can just say “write like Shakespeare” and it already knows what you want.


I guess you'd want to fine tune for content the model hasn't already seen in its training data


To my understanding there are 4 levels at which you can add information:

1. train a model

2. fine tune a model

3. create embeddings for a model

4. use few shot prompt examples at inference time

Going down the list, each option needs fewer resources, but generally also gives lower quality.
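Level 4 is the cheapest: you just paste a handful of examples into the prompt, nothing gets trained. A minimal sketch with the openai Python client (v0.x style; the model name is only an illustration):

    import openai  # assumes openai.api_key is set (or OPENAI_API_KEY env var)

    # Few-shot examples go directly into the prompt; no training happens.
    prompt = (
        "Classify the sentiment of each review.\n"
        "Review: The food was great. Sentiment: positive\n"
        "Review: Terrible service. Sentiment: negative\n"
        "Review: Friendly staff, but slow. Sentiment:"
    )

    resp = openai.Completion.create(
        model="text-davinci-003",  # illustrative model name
        prompt=prompt,
        max_tokens=3,
        temperature=0,
    )
    print(resp["choices"][0]["text"].strip())

The catch is the one mentioned upthread: all of the examples have to fit into the context window.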

As a concrete example of (3): the GPT-3 API (not yet the GPT-4 API) has an embeddings endpoint. You embed your own content, e.g. your source code documentation, look up the most relevant chunks for each query, and put them into the prompt. Then GPT-3 "knows" your source code doc and answers specifically with that in mind.
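A rough sketch of that retrieval step, assuming the v0.x openai Python client (the model names, the embed helper and the toy doc chunks are just illustrative):

    import numpy as np
    import openai  # assumes openai.api_key is set (or OPENAI_API_KEY env var)

    doc_chunks = [
        "logger.configure(level=...) sets the log level for the whole app.",
        "client.connect(url, timeout=...) opens a connection to the server.",
    ]

    def embed(texts):
        # Returns one embedding vector per input text.
        resp = openai.Embedding.create(model="text-embedding-ada-002", input=texts)
        return np.array([d["embedding"] for d in resp["data"]])

    doc_vecs = embed(doc_chunks)

    question = "How do I change the log level?"
    q_vec = embed([question])[0]

    # ada-002 embeddings come back normalized, so a dot product is a cosine similarity.
    best_chunk = doc_chunks[int(np.argmax(doc_vecs @ q_vec))]

    answer = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": "Answer using only the provided documentation."},
            {"role": "user", "content": f"Documentation:\n{best_chunk}\n\nQuestion: {question}"},
        ],
    )
    print(answer["choices"][0]["message"]["content"])

For a real code base you'd precompute and store the vectors (a vector DB or even just a file) instead of re-embedding everything on each query.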


Where in the API docs is this described?



