sorry to hear that, totally understand feeling burnt out
ooc - do you think there's anything we could do to change that? that is one of the biggest things we are wrestling with (aside from completely distancing ourselves from the LangChain project).
My advice is to focus less on the “chaining” aspect and more on the “provider agnostic” part. That’s the real reason people use something other than the native SDK of an LLM provider - they want to be able to swap out LLMs. That’s a well-defined problem that you can solve with a straightforward library. There’s still a lot of hidden work, because you need to nail the “least common denominator” of the interfaces while retaining the specialized behavior of each provider. But it’s not a leaky abstraction.
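To make that concrete, here is a minimal sketch of what that least-common-denominator interface could look like. The wrapper names (`LLMClient`, `complete`, etc.) are hypothetical; the only real APIs assumed are the official `openai` (1.x) and `anthropic` Python SDKs.

```python
# Minimal sketch of a provider-agnostic wrapper. Wrapper names are hypothetical;
# only the official `openai` (1.x) and `anthropic` SDK calls are real.
from abc import ABC, abstractmethod


class LLMClient(ABC):
    """Least-common-denominator interface: one prompt in, one string out."""

    @abstractmethod
    def complete(self, prompt: str, **provider_kwargs) -> str:
        ...


class OpenAIClient(LLMClient):
    def __init__(self, model: str = "gpt-4o-mini"):
        from openai import OpenAI
        self.client = OpenAI()
        self.model = model

    def complete(self, prompt: str, **provider_kwargs) -> str:
        # provider_kwargs lets callers reach provider-specific features
        # without polluting the shared interface.
        resp = self.client.chat.completions.create(
            model=self.model,
            messages=[{"role": "user", "content": prompt}],
            **provider_kwargs,
        )
        return resp.choices[0].message.content


class AnthropicClient(LLMClient):
    def __init__(self, model: str = "claude-3-5-sonnet-20240620"):
        import anthropic
        self.client = anthropic.Anthropic()
        self.model = model

    def complete(self, prompt: str, **provider_kwargs) -> str:
        resp = self.client.messages.create(
            model=self.model,
            max_tokens=provider_kwargs.pop("max_tokens", 1024),
            messages=[{"role": "user", "content": prompt}],
            **provider_kwargs,
        )
        return resp.content[0].text


# Swapping providers becomes a one-line change:
llm: LLMClient = OpenAIClient()
print(llm.complete("Summarize the tradeoffs of provider-agnostic wrappers."))
```

The hidden work is everything this sketch skips: streaming, tool calls, images, token accounting - the places where providers genuinely differ.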
The “chaining” part is a huge problem space where the proper solution looks different in every context. It’s all the problems of templating engines, ETL scripts, and workflow orchestration combined. (Actually, I’ve had a pet idea for a while of implementing a custom React renderer for “JSX for LLMs”.) Stay away from that.
My other advice would be to build a lot of these small libraries… take advantage of your resources to iterate quickly on different ideas and see which ones stick. Then go deep on those. What you’re doing now is doubling down on your first success, even though it might not be the best solution to the problem (or it might be a solution looking for a problem).
> My advice is to focus less on the “chaining” aspect and more on the “provider agnostic” part
a lot of our effort recently has been going into standardizing model wrappers, including for tool calling, images, etc. this will continue to be a huge focus.
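concretely, the goal is that something like this works the same way across providers (rough sketch - exact package names, model strings, and method signatures vary by version):

```python
# Sketch of the standardized chat-model interface. The packages are real
# (langchain-openai, langchain-anthropic), but model strings and signatures
# vary by version.
from langchain_openai import ChatOpenAI
from langchain_anthropic import ChatAnthropic


def get_weather(city: str) -> str:
    """Return the current weather for a city."""
    return f"It is sunny in {city}."


models = [
    ChatOpenAI(model="gpt-4o-mini"),
    ChatAnthropic(model="claude-3-5-sonnet-20240620"),
]

for model in models:
    # Same .bind_tools() / .invoke() interface regardless of provider;
    # tool calls come back in a normalized .tool_calls list.
    model_with_tools = model.bind_tools([get_weather])
    msg = model_with_tools.invoke("What's the weather in Paris?")
    print(msg.tool_calls)
```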
> My other advice would be to build a lot of these small libraries… take advantage of your resources to iterate quickly on different ideas and see which ones stick. Then go deep on those. What you’re doing now is doubling down on your first success, even though it might not be the best solution to the problem (or it might be a solution looking for a problem).
I would actually argue we have done this (to some extent). we've invested a lot in LangSmith (about half our team), making it usable with or without LangChain. Likewise, we're investing more and more in LangGraph, also usable with or without LangChain (that is in the orchestration space, which you're separately not bullish on, but for us that was a separate bet from LangChain orchestration).
Separating into smaller libraries is a smart move. And yeah, like you said, I might be bearish on the orchestration space, but at least you can insulate it from the rest of your projects.
Best of luck to you. I don’t agree with the disparaging tone of the comments here. You executed quickly and that’s the hardest part. I wouldn’t bet against you, as long as you can keep iterating at the same pace that got you over the initial hurdles.
Your funding gives you the competitive advantage of “elbow grease,” which is significant when tackling problems like N-to-M ETL pipelines. But don’t get stuck solving every new corner case of these problems. Look for opportunities to be nimble, and cast a wide net so you can find them.
I agree. Adopting a more modular approach is a great idea. Coming from the Java ecosystem, I still miss having something like the Spring framework in Python. I believe Spring remains an example of excellent framework design. Let me explain what I mean.
Using Spring requires adopting Spring IoC, but beyond that, everything is modular. You can choose to use only the abstractions you need, such as ORM, messaging, caching, and so on. At its core, Spring IoC is used to loosely integrate these components. Later on, they introduced Spring Boot and Spring Cloud, which are distributions of various Spring modules, offering an opinionated application programming model that simplifies getting started.
This strategy allows users the flexibility to selectively use the components they need while also providing an opinionated programming model that saves time and effort when starting a new project.
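To sketch what that could look like translated to the Python/LLM world (purely illustrative - every name below is hypothetical, not an existing API):

```python
# Illustrative sketch of the "modular core + opinionated starter" pattern.
# PromptStore, Pipeline, and quickstart are hypothetical names.
from dataclasses import dataclass
from typing import Callable


@dataclass
class PromptStore:
    """Standalone module: named prompt templates."""
    templates: dict[str, str]

    def render(self, name: str, **kwargs) -> str:
        return self.templates[name].format(**kwargs)


@dataclass
class Pipeline:
    """Thin IoC-style glue: depends only on components passed in."""
    prompts: PromptStore
    call_llm: Callable[[str], str]

    def run(self, template: str, **kwargs) -> str:
        return self.call_llm(self.prompts.render(template, **kwargs))


def quickstart() -> Pipeline:
    """Opinionated 'starter' that wires up sensible defaults,
    analogous to what Spring Boot does for Spring modules."""
    prompts = PromptStore(templates={"summarize": "Summarize this: {text}"})
    return Pipeline(prompts=prompts, call_llm=lambda p: f"(LLM output for: {p!r})")


# Users can take the whole starter, or just the one module they need.
print(quickstart().run("summarize", text="modular frameworks"))
```

The point is the shape, not the code: each module stands on its own, and the opinionated entry point is just one optional way of wiring them together.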
I'm not sure. My suspicion is that the fundamental issue with frameworks like LangChain is that the problem domain they are attempting to solve is a proper subset of the problems that LLMs also solve.
Good code abstractions make code more tractable, tending towards natural language as they get better. But LLMs are already at the natural language level. How can you usefully abstract that further?
I think there are plenty of LLM utilities to be made: libraries for calling models, setting parameters, templating prompts, etc. But I think anything that ultimately hides prompts behind code will create more friction than not.
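For example, a utility in that spirit might be nothing more than this (hypothetical sketch):

```python
# Hypothetical example of a "thin utility": the prompt stays a plain,
# readable string in the caller's code rather than being hidden inside
# framework classes.
SUMMARIZE_PROMPT = """You are a careful editor.
Summarize the following text in {max_sentences} sentences:

{text}
"""


def render(prompt: str, **kwargs) -> str:
    """All this utility does is fill in placeholders; nothing is hidden."""
    return prompt.format(**kwargs)


message = render(SUMMARIZE_PROMPT, max_sentences=2, text="...")
# `message` is then passed to whatever model-calling library you prefer.
```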