I guess I’m just curious what the killer use cases are for fine-tuning. For example, it seems like overkill to fine-tune a Shakespeare model, because you can just say “write like Shakespeare” and it already knows what you want.
To my understanding there are four levels of adding information:
1. train a model
2. fine-tune a model
3. create embeddings of your own data and retrieve from them at query time
4. use few-shot prompt examples at inference time (see the sketch after this list)
These have decreasing resource needs, but also decreasing quality.
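For (4), a few-shot prompt is nothing more than a handful of input/output examples pasted into the prompt at inference time. A minimal sketch in Python (the task and examples are made up, and the completion call assumes the older openai 0.x client):

    # A few-shot prompt is just worked examples placed ahead of the real query.
    # Nothing is trained or fine-tuned; the model picks up the pattern in context.
    import openai

    few_shot_prompt = (
        "Rewrite each sentence in Shakespearean English.\n\n"
        "Sentence: The meeting is cancelled.\n"
        "Shakespearean: Our assembly is hereby undone.\n\n"
        "Sentence: I am very hungry.\n"
        "Shakespearean: A mighty hunger doth gnaw upon me.\n\n"
        "Sentence: The server crashed again.\n"
        "Shakespearean:"
    )

    out = openai.Completion.create(
        model="text-davinci-003", prompt=few_shot_prompt, max_tokens=60
    )
    print(out["choices"][0]["text"].strip())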
For example, the GPT-3-era OpenAI API (not yet the GPT-4 API) has an embeddings endpoint: you embed your own data, for example your own source code documentation, find the chunks most similar to a question, and include them in the prompt. Then you can query GPT-3 and it "knows" your source code doc and answers specifically with that in mind.
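Roughly, that flow looks like this (a sketch assuming the older openai 0.x Python client; the doc chunks are placeholders, the model names are just examples, and a real setup would store the vectors rather than recomputing them every run):

    # Embeddings-based retrieval: embed your doc chunks once, find the chunk most
    # similar to the question, and include it in the prompt sent to GPT-3.
    import numpy as np
    import openai

    docs = ["<doc chunk 1>", "<doc chunk 2>"]  # your source code docs, pre-chunked

    def embed(text):
        resp = openai.Embedding.create(model="text-embedding-ada-002", input=text)
        return np.array(resp["data"][0]["embedding"])

    doc_vectors = [embed(d) for d in docs]  # in practice, store these somewhere

    def answer(question):
        q = embed(question)
        # cosine similarity against every chunk; keep only the best match for brevity
        sims = [q @ v / (np.linalg.norm(q) * np.linalg.norm(v)) for v in doc_vectors]
        context = docs[int(np.argmax(sims))]
        prompt = (
            "Answer using the documentation below.\n\n"
            "Documentation:\n" + context + "\n\n"
            "Question: " + question + "\nAnswer:"
        )
        out = openai.Completion.create(
            model="text-davinci-003", prompt=prompt, max_tokens=200
        )
        return out["choices"][0]["text"].strip()

The model itself never changes here; the "knowledge" lives in the prompt, which is why this sits so much lower on the resource scale than fine-tuning.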