ZFS snapshots + send/receive are an absolute game changer in this regard.
I have my /home in a separate dataset that gets snapshotted every 30 minutes.
The snapshots are sent to my primary file server, and can be picked up by any system on my network. I do a variation of this with my dotfiles, similar to GNU Stow but with quicker snapshots.
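The periodic snapshot-and-send loop described above can be sketched roughly like this (dataset, host, and snapshot names are assumptions, not the commenter's actual setup; run from cron or a systemd timer):

```shell
#!/bin/sh
set -eu

DATASET="rpool/home"          # assumed dataset backing /home
REMOTE="backup@fileserver"    # assumed SSH target on the file server
NOW="$(date +%Y%m%d-%H%M)"

# Take a new snapshot of the home dataset.
zfs snapshot "${DATASET}@auto-${NOW}"

# Find the previous snapshot so we can send an incremental stream.
PREV="$(zfs list -t snapshot -o name -s creation -H "${DATASET}" \
        | tail -n 2 | head -n 1)"

# Incremental send: only blocks changed since $PREV cross the wire.
zfs send -i "${PREV}" "${DATASET}@auto-${NOW}" \
    | ssh "${REMOTE}" zfs receive -F tank/backups/home
```

In practice most people reach for a tool like sanoid/syncoid or zrepl rather than hand-rolling this, but the underlying commands are the same.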
ZFS is a game changer for quickly and reliably backing up large multi-terabyte PostgreSQL databases as well. In case anyone is interested, here is our experience with PostgreSQL on ZFS, complete with a short backup script: https://lackofimagination.org/2022/04/our-experience-with-po...
Came here to say this. Could you share your example commands for snapshotting, zfs send, and restoring single files or entire snapshots? (Have you tested restores?) I'm actually in a position to do this (I recently moved to ZFS on root, and I have a TrueNAS box), but I'm stuck at the bootstrapping problem:

- I haven't taken a single snapshot yet; presumably the first one is the only big one?
- How do I send incremental snapshots after that?
- How do I restore these to, say, a new machine? Do I mount a snapshot remotely somehow, use zfs recv, or something else?
- Do you set up systemd/cron jobs for this?

Also, having auto-snapshotted on Ubuntu in the past, things eventually slowed to a crawl every time I ran an apt update. Is this avoidable?
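For reference, the bootstrapping sequence being asked about usually looks like this (pool, dataset, and host names here are placeholders, not a tested config from this thread):

```shell
# First snapshot and full send: this is the only large transfer.
zfs snapshot rpool/home@base
zfs send rpool/home@base | ssh backup@nas zfs receive tank/home

# Later snapshots are sent incrementally against the previous one,
# so only changed blocks are transferred.
zfs snapshot rpool/home@2024-01-02
zfs send -i rpool/home@base rpool/home@2024-01-02 \
    | ssh backup@nas zfs receive tank/home

# Restoring onto a new machine is the same stream in reverse:
# send from the backup host, receive into the new pool.
ssh backup@nas zfs send tank/home@2024-01-02 | zfs receive rpool/home
```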
ZFS send/receive is nice, but it lacks tooling for easily extracting individual files from a backup. In backup terms it's more of a disaster-recovery mechanism.
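That said, on any machine where the received snapshots live, individual files can be pulled out with ordinary tools via the hidden `.zfs/snapshot` directory (the dataset name and paths below are assumptions for illustration):

```shell
# Optionally make the snapshot directory visible in listings;
# it is accessible by path even while hidden.
zfs set snapdir=visible tank/backups/home

# Each snapshot appears as a read-only directory tree under the
# dataset's mountpoint; copy files back with plain cp/rsync.
cp /tank/backups/home/.zfs/snapshot/auto-20240102-0930/Documents/notes.txt \
   ~/notes.txt
```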