Hacker News

Apologies if this is so unrelated as to be off-topic, but I'm new to this and so my mental model is incomplete at best and completely wrong at worst. My question is:

How would one create a "domain expert" version of this? The idea would be to feed the model a bunch of specialized, domain-specific content, and then use an app like this as the UX for that.




You could either try it with a longer system prompt, or wait until OpenAI releases a fine-tune API for the gpt-3.5-turbo model. System prompts aren't designed to be very long, so fine-tuning is probably what you're looking for; the catch is that it's currently only offered for the older models, so anything you fine-tune today would already be a step behind.

I guess you could also tack on an extra layer before the actual API call: your own system that pulls key bits of info from a more specific data set and includes them in the prompt. But given OpenAI's current pace of releases, it might be a safe bet to wait a couple of weeks until they update the fine-tune API.
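A minimal sketch of that "extra layer" idea: before calling the API, rank a small domain corpus against the question and prepend the best matches to the prompt. Everything here (the corpus, the word-overlap scoring, the prompt format) is made up for illustration; the real API call is left out.

```python
import re

# Toy domain corpus; a real system would load this from your own documents.
DOCS = [
    "The Foo protocol retries failed requests up to 3 times.",
    "Bar accounts are billed monthly on the first of the month.",
    "Foo timeouts default to 30 seconds.",
]

def tokens(text):
    """Lowercase word set, punctuation stripped."""
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def retrieve(question, docs, k=2):
    """Rank docs by naive word overlap with the question; keep the top k."""
    q = tokens(question)
    return sorted(docs, key=lambda d: -len(q & tokens(d)))[:k]

def build_prompt(question, docs):
    """Inject the retrieved snippets ahead of the user's question."""
    context = "\n".join(retrieve(question, docs))
    return (
        "Answer using only the context below.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}"
    )

prompt = build_prompt("What is the Foo timeout?", DOCS)
print(prompt)
```

The augmented string is what you'd then send as the prompt; swapping the word-overlap scoring for an embedding-based similarity search is the usual upgrade.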


I just did this exact thing, it's very easy.

https://dev.to/dhanushreddy29/fine-tune-gpt-3-on-custom-data...


This has nothing to do with fine-tuning in the technical sense. What you're actually doing is vector search, and injecting the results into the prompt you send to GPT.

It's a good approach, but calling it "fine-tuning" is confusing, given that OpenAI has an actual fine-tuning process that works in a very different way.
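For concreteness, the vector-search step described above can be sketched like this: embed the documents once, embed the query at ask time, and take the nearest document by cosine similarity. The 3-d vectors here are invented toy values; a real system would get embeddings from a model (e.g. an embeddings API) and typically store them in a vector database.

```python
import math

# Toy document store: text mapped to a (made-up) embedding vector.
docs = {
    "Widget warranty lasts 12 months.":     [0.9, 0.1, 0.0],
    "Widgets ship within 5 business days.": [0.1, 0.9, 0.1],
    "Our office is closed on Sundays.":     [0.0, 0.1, 0.9],
}

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def nearest(query_vec, docs, k=1):
    """Return the k document texts whose embeddings are closest to the query."""
    return sorted(docs, key=lambda d: -cosine(query_vec, docs[d]))[:k]

# Pretend embedding of "How long is the warranty?"
query_vec = [0.8, 0.2, 0.1]
print(nearest(query_vec, docs))
```

The retrieved text is then pasted into the prompt, which is why no model weights change; that's the distinction the parent comment is drawing against real fine-tuning, where the model itself is retrained.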



