No, because VACUUM can't remove those dead tuples while the transaction is still running. Think of it this way...
When you open a transaction, Postgres has to guarantee that you can still see the rows that existed at the moment the transaction started. Your job queue is chugging along and processes, let's say, 1,000 jobs per minute. Processing a job involves deleting its row from the queue, but while your transaction is running, Postgres only marks the row as deleted and keeps it around in case that transaction wants to read it at some point. Each time a worker picks up a job, Postgres needs to lock a row, and the way that works is by scanning rows until it finds one it can actually lock. If your transaction runs for 30 minutes, each fetch has to step over roughly 30,000 dead rows (deleted, but still kept around for the sake of that transaction). Slower lock acquisition degrades the overall throughput of the job queue, which means jobs get added faster than they're processed, which further exacerbates the problem.
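To make the mechanism concrete, here's a rough sketch of the usual fetch-and-delete pattern (the table and column names `jobs`, `id`, `run_at` are made up, but `FOR UPDATE SKIP LOCKED` is the standard way to grab a job in Postgres):

```sql
-- Grab one available job; the scan still has to step over every dead row
-- version kept alive for the long-running transaction before it finds a
-- live, lockable row.
BEGIN;

SELECT id
FROM jobs
WHERE run_at <= now()
ORDER BY run_at
LIMIT 1
FOR UPDATE SKIP LOCKED;

-- ... process the job ...

-- With an old snapshot open elsewhere, this DELETE can only mark the row
-- dead; VACUUM can't reclaim it until that snapshot goes away.
DELETE FROM jobs WHERE id = 42;
COMMIT;
```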
It naturally grows due to long-running transactions, because the vacuum process cannot clean up dead tuples that an open transaction can still "see". More aggressive vacuum settings might help the table catch up faster, but they can't remove tuples that such a transaction still has a right to read.
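For what it's worth, a sketch of what "more aggressive" could look like per table (the table name is hypothetical; the settings are real storage parameters, and they only pay off once the blocking transaction has ended):

```sql
ALTER TABLE jobs SET (
  autovacuum_vacuum_scale_factor = 0.0,   -- don't wait for a % of the table...
  autovacuum_vacuum_threshold    = 1000,  -- ...trigger after ~1000 dead rows
  autovacuum_vacuum_cost_delay   = 0      -- let vacuum run without throttling
);
```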