Inngest is very cool, but if you're interested in the topic and absolutely want to self-host, two other high-performance durable execution engines that enable all of the above, and that are open-source, mature, and self-hostable, are Temporal and Windmill. Both can use PostgreSQL as the queue (Temporal can use Cassandra too); the rest can be done by leveraging PostgreSQL's transactional properties, such as atomic counters for the concurrency keys (https://github.com/windmill-labs/windmill/blob/main/backend/...) or re-queuing jobs that haven't progressed when they should have (most likely because the worker crashed).
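To make those two primitives concrete, here's a minimal sketch against PostgreSQL. The schema, names, and intervals are made up for illustration; this is not Windmill's actual code:

    // Hypothetical tables:
    //   jobs(id bigserial, payload jsonb, state text, locked_until timestamptz)
    //   concurrency(key text primary key, running int, max_running int)
    import { Pool } from "pg";

    const pool = new Pool({ connectionString: process.env.DATABASE_URL });

    // Atomic counter for a concurrency key: the UPDATE only succeeds while the
    // counter is below its limit, so two workers can never win the same slot.
    // (Releasing the slot on completion is omitted here.)
    async function tryAcquireSlot(key: string): Promise<boolean> {
      const res = await pool.query(
        `UPDATE concurrency SET running = running + 1
         WHERE key = $1 AND running < max_running`,
        [key]
      );
      return res.rowCount === 1;
    }

    // Claim one queued job: FOR UPDATE SKIP LOCKED lets many workers poll the
    // same table without blocking each other or double-claiming a row.
    async function claimJob() {
      const { rows } = await pool.query(`
        UPDATE jobs
        SET state = 'running', locked_until = now() + interval '5 minutes'
        WHERE id = (
          SELECT id FROM jobs WHERE state = 'queued'
          ORDER BY id FOR UPDATE SKIP LOCKED LIMIT 1
        )
        RETURNING id, payload`);
      return rows[0]; // undefined when the queue is empty
    }

    // Re-queue jobs that haven't progressed when they should have
    // (most likely the worker crashed before finishing or extending the lock).
    async function requeueStale() {
      await pool.query(
        `UPDATE jobs SET state = 'queued'
         WHERE state = 'running' AND locked_until < now()`
      );
    }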
There are very cool things you can do today, without too much complexity, using the primitives modern databases are capable of. Should one rebuild this for their startup? No. But if you were to extract the very core of Windmill's durable execution engine, for instance, it would actually be surprisingly reasonable in scope, given that PostgreSQL does the heavy lifting. I strongly believe the benefits of our platforms come mostly from being standardized, opinionated, and working out of the box in a way that makes everything fit together, rather than from the overall engineering complexity.
Having built my own queue-based data processing system and worked through a lot of pain in the past, I will say that I'm a huge fan of Inngest and what they are doing.
As for the article, I think the main point being driven home here is:
"building a system with queues requires much more than just the queue itself"
I would imagine almost anyone who has built a production-grade, queue-based message processing system would agree.
For the majority of software being developed, I would say the investment to build all of that yourself just doesn't make sense. Obviously there will be exceptions, but Inngest gives you incredible power behind a very simple layer of abstraction.
We keep reaching for queues because they work fine most of the time. This article would have been less click-bait-y and possibly more persuasive if, instead of “QuEuEs R oVeR”, it had simply explored some advanced scenarios that are tricky to solve and shown how your product makes solving them worth another line item on our monthly bill.
But you also spend a lot of cycles building and maintaining the ancillary features that make queues powerful. Early- to mid-stage companies especially need to focus more on business logic and less on reinventing wheels.
If your early- to mid-stage company needs queues at that depth, then what is in them is vital to your business.
You had better know what is going on with mission-critical data at every step of its journey. Or the data isn't that important, and you should write something lighter than this, because it can be lossy...
Queues and workflow engines aren’t the right solution for everything, but they work well for a lot of stuff: a signup flow that integrates with multiple third parties, a drip campaign, or notifications.
Those are things many small, all-developer teams need. They don’t want to hire an infra-minded person just for queues and workflows.
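For flavor, here's roughly what a signup flow looks like in Inngest's TypeScript step API. The event name and the crm/mailer clients are made up for the sketch:

    import { Inngest } from "inngest";

    // Stand-in third-party clients, just for illustration:
    declare const crm: { createContact(email: string): Promise<void> };
    declare const mailer: { send(email: string, template: string): Promise<void> };

    const inngest = new Inngest({ id: "my-app" });

    // Each step.run() is retried and memoized on its own; a crash between
    // steps resumes from the last completed step instead of replaying the flow.
    export const signupFlow = inngest.createFunction(
      { id: "signup-flow" },
      { event: "app/user.signup" }, // hypothetical event name
      async ({ event, step }) => {
        await step.run("create-crm-contact", () =>
          crm.createContact(event.data.email)
        );
        await step.run("send-welcome-email", () =>
          mailer.send(event.data.email, "welcome")
        );
        await step.sleep("drip-delay", "3d"); // durable sleep, survives restarts
        await step.run("send-tips-email", () =>
          mailer.send(event.data.email, "tips")
        );
      }
    );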
> They offer reliability through guaranteed delivery, persistence, and dead letter queues, so developers know they aren't sending workloads into a black hole.
I disagree with this reason to use queues. If this is the only reason for using SQS or RabbitMQ or similar, perhaps the application is over-engineered.
If you want reliability, and that alone, use a transaction-based system.
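For example, commit the state change and the record of follow-up work in one PostgreSQL transaction. An outbox-style sketch; table and column names are made up:

    import { Pool } from "pg";

    const pool = new Pool({ connectionString: process.env.DATABASE_URL });

    // The order and the pending email commit or roll back together, so nothing
    // is ever "sent into a black hole" without a broker in the picture.
    async function placeOrder(userId: number, items: string[]) {
      const client = await pool.connect();
      try {
        await client.query("BEGIN");
        const { rows } = await client.query(
          "INSERT INTO orders (user_id, items) VALUES ($1, $2) RETURNING id",
          [userId, items]
        );
        // Same transaction: a worker later drains this outbox table.
        await client.query(
          "INSERT INTO outbox (kind, payload) VALUES ('order-email', $1)",
          [JSON.stringify({ orderId: rows[0].id })]
        );
        await client.query("COMMIT");
      } catch (err) {
        await client.query("ROLLBACK");
        throw err;
      } finally {
        client.release();
      }
    }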
This press-release-style content marketing appears hung up on some mythical, perfect ESB system containing kitchen-sink cross-cutting concerns.
There are many tools in the backend-infrastructure toolbox: NoSQL stores (memcached/Redis/KeyDB), distributed lock managers (ZooKeeper), Kafka, RabbitMQ, ejabberd, ZeroMQ, nng. Some scale better than others, and some are more atomic or durable than others. OLTP and infrastructure orchestration will have different needs. Sometimes, cross-cutting concerns can be added by gating the sender, receiver, or both with "controller"-like middleware proxies or modifications.
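As a toy example of that gating, here's what wrapping a sender with middleware might look like. The Send type and wrappers are invented for illustration:

    type Send = (topic: string, msg: unknown) => Promise<void>;

    // Retry with exponential backoff, bolted onto any sender after the fact.
    const withRetry = (send: Send, attempts = 3): Send =>
      async (topic, msg) => {
        for (let i = 1; ; i++) {
          try {
            return await send(topic, msg);
          } catch (err) {
            if (i >= attempts) throw err;
            await new Promise((r) => setTimeout(r, 100 * 2 ** i));
          }
        }
      };

    // Timing metrics, same trick.
    const withMetrics = (send: Send): Send =>
      async (topic, msg) => {
        const start = Date.now();
        await send(topic, msg);
        console.log(`publish ${topic} took ${Date.now() - start}ms`);
      };

    // Compose without touching the broker: retries inside, metrics outside.
    // const send = withMetrics(withRetry(rawSend));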
That was a lot of reading to discover they’ve built a workflow engine. They don’t want to call it a workflow engine because there’s a million of those, so they came up with a new name for a workflow engine.
Spot on. This is what Durable Functions does on Azure and it's brilliant for implementing complicated business processes and handling multiple events in one flow, in a resilient way where the logic is easy to follow.
One catch is that you're going to have to version your code if your workflows/orchestrations run for days, or if there's never a window with no workflows running. And there's no built-in support for this, so expect to duplicate your entire workflow for each new version, so the old ones can run to completion on the old code.
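The pattern ends up looking something like this side-by-side routing. A sketch of the idea only, not the actual Durable Functions API; all names are made up:

    type Orchestrator = (input: unknown) => Promise<void>;

    // v1 is frozen so in-flight instances replay the code they started on;
    // v2 is where all new instances begin.
    const orderFlowV1: Orchestrator = async (input) => { /* old steps, verbatim */ };
    const orderFlowV2: Orchestrator = async (input) => { /* new steps */ };

    const registry: Record<string, Orchestrator> = {
      "order-flow@v1": orderFlowV1,
      "order-flow@v2": orderFlowV2,
    };
    const CURRENT = "order-flow@v2";

    // The version is pinned when an instance starts and stored with it...
    function start(store: Map<string, string>, instanceId: string, input: unknown) {
      store.set(instanceId, CURRENT);
      return registry[CURRENT](input);
    }

    // ...and every later replay dispatches to that pinned version, so a
    // days-long v1 run never executes v2 code halfway through.
    function resume(store: Map<string, string>, instanceId: string, input: unknown) {
      return registry[store.get(instanceId) ?? CURRENT](input);
    }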