
No, because VACUUM can't kill those dead tuples while the transaction is still running. Think of it this way...

When you open a transaction, you get a guarantee that you can still see rows that existed at the moment the transaction started. Your job queue is chugging along and processes, let's say, 1,000 jobs per minute. Processing a job involves deleting its row from the queue, but since you have a transaction running, Postgres only marks the row as deleted and keeps it around in case that transaction wants to read it at some point.

Each time you need to claim a job, Postgres has to lock a row, and the way this mechanism works involves iterating over rows until it finds one it can actually lock. If your transaction has been running for 30 minutes, each poll has to step over roughly 30k dead rows (deleted, but kept around for the sake of the open transaction). Slower locking degrades the overall throughput of the job queue, so jobs get added faster than they're processed, which further exacerbates the problem.
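For context, a typical poll in this kind of setup looks something like the sketch below (the jobs table and column names are assumptions, not something from the thread); the SKIP LOCKED scan is exactly the part that has to wade through the dead tuples:

  -- minimal sketch of a worker claiming one job, assuming a jobs(id, payload) table
  BEGIN;

  SELECT id, payload
  FROM jobs
  ORDER BY id
  LIMIT 1
  FOR UPDATE SKIP LOCKED;   -- skips rows other workers hold; dead-but-unvacuumed
                            -- rows still have to be visited and rejected one by one

  -- ... run the job in application code ...

  DELETE FROM jobs WHERE id = $1;   -- only marks the tuple dead; VACUUM can't
                                    -- reclaim it while an older snapshot may still need it
  COMMIT;

You can watch the backlog build up with SELECT n_dead_tup FROM pg_stat_user_tables WHERE relname = 'jobs';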




I wonder if this could be solved by moving in-progress jobs to a separate table...

Guess you lose job atomicity that way.
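The claim itself can still be a single atomic statement via a data-modifying CTE (a sketch, with made-up table names jobs and jobs_in_progress):

  -- atomically move one claimable job into a separate in-progress table
  WITH claimed AS (
    DELETE FROM jobs
    WHERE id = (
      SELECT id FROM jobs
      ORDER BY id
      LIMIT 1
      FOR UPDATE SKIP LOCKED
    )
    RETURNING id, payload
  )
  INSERT INTO jobs_in_progress (id, payload, claimed_at)
  SELECT id, payload, now()
  FROM claimed
  RETURNING id, payload;

But the actual processing now happens outside the transaction that claimed the job, so a worker that crashes mid-job leaves the row stranded in jobs_in_progress and you need a timeout/sweeper to requeue it. That's the atomicity you give up.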



