Yeah, we originally thought GPT could accept a large domain-specific training set (e.g. feeding in a user's SQL schema), but it's not there yet. A PM at OpenAI said it shouldn't be far off, though. When that's possible, the SQL generated should be much better than Google's.
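For anyone curious, here's a minimal sketch of the interim prompt-stuffing approach (not our actual pipeline; the schema, question, prompt template, and engine choice below are all illustrative), using the GPT-3 Completion API as it exists today:

    import openai

    openai.api_key = "sk-..."  # your OpenAI API key

    # Made-up schema; in practice this would be pulled from the user's database.
    schema = """
    CREATE TABLE users (id INT, name TEXT, signup_date DATE);
    CREATE TABLE orders (id INT, user_id INT, total NUMERIC, created_at DATE);
    """

    question = "Total order revenue per user in 2020"

    prompt = (
        "### Postgres tables, with their columns:\n"
        + schema
        + "\n### A query to answer: " + question + "\n"
        + "SELECT"
    )

    response = openai.Completion.create(
        engine="davinci",   # base GPT-3 engine; no fine-tuning involved
        prompt=prompt,
        max_tokens=150,
        temperature=0,      # keep the generated SQL deterministic
        stop=[";", "#"],    # stop at the end of the statement
    )

    print("SELECT" + response["choices"][0]["text"])

The point is that everything has to fit in a single prompt rather than being trained into the model, which is exactly the limitation we ran into.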



> but it's not there yet. A PM at OpenAI said it shouldn't be far off, though.

Does this mean that the model is still being improved? Or just that your access to it will somehow become better? Either way, I'm curious what that entails.


> Does this mean that the model is still being improved?

Yes.

> Or just that your access to it will somehow become better?

Yes.

Increasing the amount of training data we can send would improve our results, and that's what OpenAI mentioned they're working on.
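To make that concrete, here's a toy illustration (not our production code; the 2048-token limit is GPT-3's current context window, and the 4-characters-per-token estimate is just a rough rule of thumb) of why a large schema has to be trimmed before it can be sent:

    # Toy illustration: a fixed prompt budget forces you to drop schema.
    MAX_PROMPT_TOKENS = 2048       # GPT-3 context window at the moment
    RESERVED_FOR_COMPLETION = 150  # leave room for the generated SQL

    def rough_token_count(text):
        # Crude rule of thumb: ~4 characters per token for English/SQL.
        return len(text) // 4

    def fit_schema(table_ddls, question):
        # Keep whole CREATE TABLE statements until the budget runs out.
        budget = (MAX_PROMPT_TOKENS - RESERVED_FOR_COMPLETION
                  - rough_token_count(question))
        kept = []
        for ddl in table_ddls:
            cost = rough_token_count(ddl)
            if cost > budget:
                break
            kept.append(ddl)
            budget -= cost
        return "\n".join(kept)

If that budget grows, we can send more of the schema (or more examples) and the results should improve accordingly.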


The OpenAI GPT-3 API access policy says you grant them the right to use anything you feed the API to improve it, so I assume they're doing some kind of continuous retraining.



