OpenAI launched an App Store in Nov 2023. A 23-month turnaround from major feature launch, to deprecation, to relaunch is a commitment to product longevity that'd put Google to shame.
I found it genuinely impressive how useless their "GPTs" were.
Of course, part of it was that the out-of-the-box models became so competent that there was no need for a customized model, especially when customization boiled down to little more than a custom system prompt and some hidden instructions. I get the impression that's the same reason their fine-tuning services never took off: it was easier to just load the necessary information into the context window of a standard instance.
Edit: In all fairness, this was before most tool use, connectors, or MCP. I'm at least open to the idea that those might allow for a reasonable value-add, but I'm still skeptical.
> I get the impression that's the same reason their fine-tuning services never took off either
Also, very few workloads you'd want to use AI for are prime cases for fine-tuning. We had some cases where we used fine-tuning because the work was repetitive enough that FT provided benefits in speed and accuracy, but it was a very limited set of workloads.
Very typical e-commerce use cases processing scraped content: product categorization, review sentiment, etc., where the scope is very limited. We would process tens of thousands of these, so faster inference with a cheaper fine-tuned model was advantageous.
Disclaimer: this was in the 3.5 Turbo "era", so models like `nano` now might be cheap enough, good enough, and fast enough to do this even without FT.
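For tasks that narrow, the data prep was the easy part: each scraped item plus its known label becomes one chat-format training example. A minimal sketch of what that looks like (the category names, system prompt, and file name here are made up, not our actual setup):

```python
import json

# Hypothetical (product text, category) pairs; a real training set
# for this kind of task would be thousands of scraped items.
SYSTEM = "Classify the product into exactly one category."
examples = [
    ("Stainless steel chef's knife, 8 inch", "Kitchen"),
    ("Wireless over-ear headphones", "Electronics"),
    ("Organic cotton crew-neck t-shirt", "Apparel"),
]

def to_jsonl(rows):
    """Serialize (text, label) pairs into the chat-format JSONL
    that OpenAI's fine-tuning endpoint accepts: one JSON object
    per line, each holding a system/user/assistant message triple."""
    lines = []
    for text, label in rows:
        record = {
            "messages": [
                {"role": "system", "content": SYSTEM},
                {"role": "user", "content": text},
                {"role": "assistant", "content": label},
            ]
        }
        lines.append(json.dumps(record))
    return "\n".join(lines)

with open("train.jsonl", "w") as f:
    f.write(to_jsonl(examples))
```

You'd then upload `train.jsonl` and kick off a fine-tuning job; at inference time the fine-tuned model emits just the bare category, which is exactly what makes it faster and cheaper per item than prompting a general model with instructions every call.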