Hacker News

Provided we can keep riding this hype wave for a while, I think the logical long-term outcome is that most teams will have an in-house or alternative LLM they can use as a temporary backup.

Right now everyone is scrambling just to get basic products out using LLMs, but as people get more breathing room I can't imagine most teams not having a non-OpenAI LLM that they use to run experiments on.

At the end of the day, OpenAI is just an API, so it's not an incredibly difficult piece of infrastructure to have a backup for.



> At the end of the day, OpenAI is just an API, so it's not an incredibly difficult piece of infrastructure to have a backup for.

The API is easy to reproduce, the functionality of the engines behind it less so.

Yes, you can compatibly implement the APIs presented by OpenAI with open-source models hosted elsewhere (including some from OpenAI). And for some applications that can produce tolerable results. But LLMs (and multimodal toolchains centered on an LLM) haven't been commoditized to the point of being easy and mostly functionally acceptable substitutes to the degree that, say, RDBMS engines are.
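To illustrate why the API surface itself is the easy part: the chat-completions interface is just JSON over HTTP, so pointing the same request at a different base URL is trivial. This is a minimal sketch; the local server URL and model name are hypothetical, and matching GPT-4's output quality is the part no amount of URL-swapping solves.

```python
import json

# The official OpenAI endpoint vs. a hypothetical self-hosted,
# OpenAI-compatible server (e.g. one serving an open model).
OPENAI_URL = "https://api.openai.com/v1/chat/completions"
SELF_HOSTED_URL = "http://localhost:8000/v1/chat/completions"  # assumption: local server

def build_chat_request(base_url: str, model: str, prompt: str) -> tuple[str, str]:
    """Return (url, json_body) for an OpenAI-compatible chat completion call."""
    body = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return base_url, json.dumps(body)

# The same payload shape works against either endpoint; only the
# URL and model name change.
url, body = build_chat_request(SELF_HOSTED_URL, "llama-2-70b-chat", "Summarise this ticket.")
```

The request is built but not sent here; in practice you'd POST it with any HTTP client and an auth header.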


I neither agree nor disagree, but could you clarify which parts are hype to you?

Self-hosting, though, is useful internally if for no other reason than having some amount of fallback architecture.

Binding directly to only one API is an oversight that can become an architectural-debt issue. I'm spending some fun time learning about API proxies and gateways.
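The fallback idea can be sketched without any gateway product at all: wrap two completion functions so the backup is only called when the primary raises. The provider functions below are stand-ins for illustration; real ones would make HTTP calls.

```python
from typing import Callable

def with_fallback(primary: Callable[[str], str],
                  backup: Callable[[str], str]) -> Callable[[str], str]:
    """Return a completion function that falls back to `backup` if `primary` fails."""
    def complete(prompt: str) -> str:
        try:
            return primary(prompt)
        except Exception:
            # In production you'd log the failure and narrow the exception types.
            return backup(prompt)
    return complete

# Hypothetical providers: one simulating an outage, one a self-hosted model.
def flaky_openai(prompt: str) -> str:
    raise RuntimeError("503 Service Unavailable")

def local_model(prompt: str) -> str:
    return "local: " + prompt

complete = with_fallback(flaky_openai, local_model)
result = complete("hello")  # primary raises, so the local model answers
```

A dedicated API gateway does the same thing at the network layer, plus routing, rate limiting, and key management, so application code never binds to one vendor's endpoint.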


Except that it is currently impossible to replace GPT-4 with an open model.


Depends on the use case: if your product does text summarisation, copywriting, or translation, you can swap to many alternatives when OpenAI goes down and your users may not even notice.



