When I used PG for my job queue, I was already using Kafka to handle immediate job entries; the only reason I needed PG was for jobs scheduled at a future time.
I had no difficulty with long-running jobs because servicing jobs out of PG was simply a matter of pushing them onto the Kafka queue for immediate uptake there.
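A minimal sketch of that handoff, assuming a hypothetical `scheduled_jobs(id, payload, run_at)` table and a Kafka topic named `jobs` (the table, column, and topic names are illustrative, not from the comment above):

```python
# Sketch: drain due jobs from Postgres and push them onto Kafka for immediate uptake.
# Assumes psycopg2 and kafka-python; all table/topic names are placeholders.
import json
import psycopg2
from kafka import KafkaProducer

producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)
conn = psycopg2.connect("dbname=app")

def drain_due_jobs():
    with conn:  # one transaction per batch; rolls back if anything below raises
        with conn.cursor() as cur:
            # SKIP LOCKED lets several pollers run concurrently without double-claiming rows.
            cur.execute(
                """
                DELETE FROM scheduled_jobs
                WHERE id IN (
                    SELECT id FROM scheduled_jobs
                    WHERE run_at <= now()
                    ORDER BY run_at
                    LIMIT 100
                    FOR UPDATE SKIP LOCKED
                )
                RETURNING id, payload
                """
            )
            for job_id, payload in cur.fetchall():
                producer.send("jobs", {"id": job_id, "payload": payload})
            # Flush before the transaction commits: if Kafka rejects the batch,
            # the deletes roll back and the jobs are retried (at-least-once delivery).
            producer.flush()
```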
Exactly. It is easy to achieve with a dedicated queuing system. You could just as easily get much higher throughput by passing a jobId as a message to Rabbit and having the worker pull the required data out of the DB. That's assuming you don't want to just pass in a serialized object. All this talk of it not being a durable system is just wrong: Rabbit has strong durability guarantees across a cluster with queue mirroring, confirmations, and acknowledgements.
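A rough sketch of that jobId-only pattern, assuming the pika client, a durable queue named `jobs`, and placeholder `fetch_job`/`run_job` helpers on the worker side (all names are illustrative):

```python
# Sketch: publish only the jobId to RabbitMQ; the worker fetches the full data from the DB.
import pika

def fetch_job(job_id):
    ...  # placeholder: SELECT the job's data out of the database

def run_job(job):
    ...  # placeholder: do the actual work

conn = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = conn.channel()
channel.queue_declare(queue="jobs", durable=True)   # queue survives broker restarts
channel.confirm_delivery()                          # publisher confirms

def enqueue(job_id: int) -> None:
    channel.basic_publish(
        exchange="",
        routing_key="jobs",
        body=str(job_id),
        properties=pika.BasicProperties(delivery_mode=2),  # persist the message
    )

def on_message(ch, method, properties, body):
    run_job(fetch_job(int(body)))
    ch.basic_ack(delivery_tag=method.delivery_tag)  # ack only after the job succeeds

channel.basic_consume(queue="jobs", on_message_callback=on_message)
```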
Rabbit and most others do, but only once the message reaches the queue itself. The problem that using a db as the queue gets around is guaranteeing it earlier: your changes to the database and the queued messages are either all committed or none are, in a single transaction. This is hard to impossible with queues outside the db. Obviously it's a trade-off: it's slower, and it means more work and more connections to your db. But you don't have to code around messages that were never sent, or messages sent from a transaction that was later rolled back.
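A minimal sketch of that single-transaction enqueue in Postgres, assuming psycopg2 and hypothetical `orders` and `jobs` tables (names are illustrative):

```python
# Sketch: the business write and the job enqueue commit, or roll back, together.
import json
import psycopg2

conn = psycopg2.connect("dbname=app")

def place_order_and_enqueue(customer_id: int) -> None:
    with conn:  # commits on success, rolls back on any exception
        with conn.cursor() as cur:
            cur.execute(
                "INSERT INTO orders (customer_id) VALUES (%s) RETURNING id",
                (customer_id,),
            )
            order_id = cur.fetchone()[0]
            # The job row rides in the same transaction: if the order insert fails,
            # no job ever becomes visible to workers; if it commits, the job is guaranteed.
            cur.execute(
                "INSERT INTO jobs (kind, payload) VALUES (%s, %s)",
                ("send_confirmation", json.dumps({"order_id": order_id})),
            )
```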
I have seen benchmarks reaching millions of messages per second.