Short answer: you write your consumer's state into the same DB as you're writing the side-effects to, in the same transaction.
Long answer: say your consumer is a service with a SQL DB -- if you want to process Event(offset=123), you 1. start a transaction, 2. write a record in your DB logging that you've consumed offset=123, 3. write your data for the side effect, 4. commit your transaction. (Reverse 2 and 3 if you prefer; it shouldn't make a difference.) If your side-effect write fails (say your DB goes down), the transaction is rolled back: your side effect won't be visible outside the transaction, and the update to the consumer's offset pointer also won't be persisted. On the next iteration of your consumer's event loop, you'll start at the same offset and retry the same transaction.
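The four steps above can be sketched roughly like this, using SQLite as a stand-in for the consumer's SQL DB. The table names (`consumer_offsets`, `side_effects`) and the event shape are illustrative, not anything standard:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE consumer_offsets ("
    "topic TEXT, partition INTEGER, offset INTEGER, "
    "PRIMARY KEY (topic, partition))"
)
conn.execute(
    "CREATE TABLE side_effects (event_offset INTEGER PRIMARY KEY, payload TEXT)"
)

def process_event(topic, partition, offset, payload):
    # 1. start a transaction -- `with conn` commits on success
    #    and rolls back if anything inside raises
    with conn:
        # 2. record that this offset has been consumed
        conn.execute(
            "INSERT OR REPLACE INTO consumer_offsets VALUES (?, ?, ?)",
            (topic, partition, offset),
        )
        # 3. write the side effect in the same transaction
        conn.execute(
            "INSERT INTO side_effects VALUES (?, ?)",
            (offset, payload),
        )
    # 4. transaction committed here; if step 3 failed, step 2's
    #    offset write was rolled back with it, so the next poll
    #    loop re-reads from the old stored offset and retries

process_event("orders", 0, 123, "charge-card")
```

Because both writes live or die together, the stored offset can never get ahead of the side effects it describes, which is the whole point of the pattern.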
Persisting message offsets in the DB has its own challenges: the app becomes tightly coupled to a specific Kafka cluster, which makes it difficult to swap clusters in a failover event.
If you expect apps to persist offsets, it's important to have a mechanism/process to safely reset the app state in the DB when the stored offset no longer makes sense (e.g. after a failover, or after the offset has aged out of the broker's retention).
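A minimal sketch of that validity check, assuming you can ask the broker for the earliest and latest offsets it still retains (the function name and reset policy here are hypothetical, not a standard API):

```python
def resolve_start_offset(stored_offset, earliest_available, latest_available):
    """Return a safe offset to resume from, or None if app state must be reset.

    `earliest_available`/`latest_available` are the offsets the broker still
    retains for the partition; `latest_available + 1` is where the next
    message will land, so resuming there is also valid.
    """
    if stored_offset is None:
        # fresh consumer with no stored state: start from the beginning
        return earliest_available
    if earliest_available <= stored_offset <= latest_available + 1:
        # stored offset still falls inside what the broker retains
        return stored_offset
    # offset doesn't make sense for this cluster (failover, retention
    # expiry, etc.): signal that DB state needs to be rebuilt
    return None

# stored offset still retained -> resume from it
start = resolve_start_offset(123, earliest_available=100, latest_available=500)
# stored offset already expired out of retention -> reset needed
reset = resolve_start_offset(50, earliest_available=100, latest_available=500)
```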
Interesting -- do you have any resources on how best to handle side effects with message queues (e.g. GCP PubSub)? I'm trying to figure out whether it's worth the effort, or good practice, to allow replayability of messages (e.g. from a backup) to get back to the same state.
Not sure about PubSub, but Kafka doesn't store messages on disk forever by default, so hydrating state from the message bus may not be a good idea depending on what you're doing. I'd say the DB is the way to go here.
Don’t have any resources though. I’m only speaking from experience.