
There is some evidence that the OpenAI GPT-3 API has a human in the loop for bad examples. They may also have a number of filters to exclude certain words or patterns, plus other rules.

The challenge with such rule-based and human-in-the-loop systems is that the long tail of these problems is huge, and fat - meaning you generally can't build a product that works without full generalization. That it took ~1.5 years to open up the GPT-3 API inclines me to think they've run into exactly these problems. We're also not seeing the long-predicted swarm of GPT-enabled content, despite the API being open for ~10 months.



There’s no way they have a human in the loop. The model spits out tokens one at a time. You can see that with the stream flag set to true. The latency doesn’t allow for human intervention.
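
You can watch this yourself. Roughly what it looks like with the Python client of that era (a minimal sketch - the API key, engine name, and prompt are placeholders I've assumed):

    import openai

    openai.api_key = "sk-..."  # placeholder

    # With stream=True the API returns an iterator of events, each
    # carrying the next token(s) as the model samples them.
    stream = openai.Completion.create(
        engine="davinci",          # assumed engine name
        prompt="Once upon a time",
        max_tokens=40,
        stream=True,
    )

    for event in stream:
        # Events arrive tens of milliseconds apart - far too fast for
        # a human to be vetting tokens in between.
        print(event["choices"][0]["text"], end="", flush=True)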

They do have API parameters for tweaking repetitiveness. That might be what you’re talking about - but it’s fair to call the model and an external repetition filter part of the same product.
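
Concretely, those knobs are the frequency and presence penalties. A minimal sketch (values picked arbitrarily):

    import openai

    resp = openai.Completion.create(
        engine="davinci",        # assumed engine name
        prompt="List some uses for a brick:",
        max_tokens=60,
        # Positive values penalize tokens that have already appeared,
        # making verbatim repetition less likely.
        frequency_penalty=0.8,   # scales with how often a token has appeared
        presence_penalty=0.4,    # flat penalty once a token has appeared at all
    )
    print(resp["choices"][0]["text"])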

As for word filters - no. If they did, they wouldn't be sending back explicit content, but they do. If you have a gpt-3 product, you're obligated to run each result through their content filter to screen out anything nsfw.
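
The filter is itself just another model call - a classifier that labels text 0 (safe), 1 (sensitive), or 2 (unsafe). A sketch from memory of the docs of that era (engine name and prompt format may differ):

    import openai

    def classify(text):
        resp = openai.Completion.create(
            engine="content-filter-alpha",
            prompt="<|endoftext|>" + text + "\n--\nLabel:",
            temperature=0,
            max_tokens=1,
            top_p=0,
        )
        return resp["choices"][0]["text"]  # "0", "1", or "2"

    completion = "...whatever the model generated..."
    if classify(completion) == "2":
        completion = "[filtered]"  # don't show unsafe output to users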

We don’t see a ton of gpt-3 enabled content because writing good gpt-3 prompts is hard. You’re trying to learn how this black box works with almost no examples to go off of. I worked for a gpt-3 startup and we put someone on prompt writing full time to get the most out of it. Most startups wouldn’t think to do that and won’t want to.
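
To give a flavor of what "prompt writing" means in practice: a bare instruction and a few-shot framing of the same task can produce wildly different quality, and finding the right framing takes real iteration. A hypothetical example:

    # A few-shot prompt: show the model the pattern before asking for output.
    FEW_SHOT = """Rewrite each sentence to be more formal.

    Casual: gonna need that report asap
    Formal: I will need that report as soon as possible.

    Casual: can't make the mtg, something came up
    Formal: I am unable to attend the meeting due to an unexpected conflict.

    Casual: {input}
    Formal:"""

    prompt = FEW_SHOT.format(input="thx for the heads up, will fix it tmrw")
    # Send `prompt` to the completions endpoint, stopping at the first newline.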



