You explain why I don't need them, but for a very different use case than the one I'm referring to.
My point is I want a new primitive -- a pub/sub sequence number -- to avoid having Kafka around at all.
After all, what Kafka does is "only" to generate and store such a sequence number (it orders events on a partition; the sequence number I'm talking about is the same thing, just in a different storage format). So you do need it in the setup you describe too; you just let Kafka generate it instead of having it in SQL.
Assuming your workload is fine with a single DB, the only thing Kafka really gives you is the assignment of such a post-commit row sequence number (plus the APIs/libraries built on top of it).
This is the mechanism used to implement pub/sub: every consumer tracks what sequence number it has read up to (and Kafka guarantees that sequence numbers are increasing).
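A minimal sketch of that consumer side over a plain SQL table, to make it concrete. The events and consumer_cursors tables, the handle() function, and the polling interval are all made up for illustration; this is the general pattern, not any particular library's API:

```python
import time
import psycopg2

def handle(payload) -> None:
    print("got event:", payload)  # stand-in for real processing

def consume(dsn: str, consumer: str) -> None:
    conn = psycopg2.connect(dsn)
    while True:
        with conn, conn.cursor() as cur:  # one transaction per batch
            # Where did this consumer get to last time?
            cur.execute(
                "SELECT last_seq FROM consumer_cursors WHERE consumer = %s",
                (consumer,),
            )
            row = cur.fetchone()
            last_seq = row[0] if row else 0

            # Read strictly past the cursor, in sequence-number order.
            cur.execute(
                "SELECT seq, payload FROM events"
                " WHERE seq > %s ORDER BY seq LIMIT 100",
                (last_seq,),
            )
            for seq, payload in cur.fetchall():
                handle(payload)
                last_seq = seq

            # Persist the cursor in the same transaction as the reads
            # (assumes a unique constraint on consumer_cursors.consumer).
            cur.execute(
                "INSERT INTO consumer_cursors (consumer, last_seq)"
                " VALUES (%s, %s)"
                " ON CONFLICT (consumer) DO UPDATE SET last_seq = EXCLUDED.last_seq",
                (consumer, last_seq),
            )
        time.sleep(1)  # naive polling; LISTEN/NOTIFY could replace this
```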
That is what the mssql-changefeed library linked above is about: assigning those event log sequence numbers in the DB instead, and not using any event brokers (or outboxes) at all.
For Postgres I would likely then consume the WAL and write sequence numbers to another table based on it...
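Roughly along these lines, using psycopg2's logical replication support. The 'changefeed' slot name, the event_seq table, and the in-memory counter are all assumptions; a real version would recover the counter from the table on restart and filter the BEGIN/COMMIT framing messages that test_decoding emits:

```python
import psycopg2
from psycopg2.extras import LogicalReplicationConnection

DSN = "dbname=app"  # assumed connection string

# Assumes a slot was created beforehand, e.g.:
#   SELECT pg_create_logical_replication_slot('changefeed', 'test_decoding');
seq_conn = psycopg2.connect(DSN)
repl_conn = psycopg2.connect(DSN, connection_factory=LogicalReplicationConnection)

counter = 0  # in production, restart from MAX(seq) in event_seq instead

def assign_seq(msg):
    """Assign the next sequence number to each decoded WAL message."""
    global counter
    counter += 1
    with seq_conn, seq_conn.cursor() as cur:
        cur.execute(
            "INSERT INTO event_seq (seq, wal_lsn, change) VALUES (%s, %s, %s)",
            (counter, str(msg.data_start), msg.payload),
        )
    # Acknowledge, so Postgres can recycle WAL up to this point.
    msg.cursor.send_feedback(flush_lsn=msg.data_start)

cur = repl_conn.cursor()
cur.start_replication(slot_name="changefeed", decode=True)
cur.consume_stream(assign_seq)  # blocks, invoking assign_seq per message
```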
It may seem clunky, but IMO installing and operating Kafka just to get those pub/sub sequence numbers assigned is even clunkier.
It sounds like the Log Sequence Number (LSN) in Postgres is what you are looking for. If you subscribe to a Postgres publication via logical replication, each commit will be emitted with a monotonically increasing LSN.
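You can see those LSNs without even writing a streaming client, by reading a logical slot from SQL. A minimal sketch, assuming wal_level=logical, the test_decoding plugin, and a made-up some_table; the 'demo' slot name is also illustrative:

```python
import psycopg2

conn = psycopg2.connect("dbname=app")
conn.autocommit = True  # decoding only sees committed transactions
with conn.cursor() as cur:
    cur.execute(
        "SELECT pg_create_logical_replication_slot('demo', 'test_decoding')"
    )
    cur.execute("INSERT INTO some_table (val) VALUES ('hello')")
    # One row per decoded change, tagged with the LSN at which it was
    # logged; commits come out in increasing LSN order.
    cur.execute(
        "SELECT lsn, data FROM pg_logical_slot_get_changes('demo', NULL, NULL)"
    )
    for lsn, data in cur.fetchall():
        print(lsn, data)
```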