What sorts of features does btrfs offer home users that ZFS lacks?
My experiences with btrfs have been somewhat mixed. On openSUSE, I've found snapper to be an amazing lifesaver, but the regular, system-crippling maintenance process[0] very frustrating.
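For reference, the periodic work comes from openSUSE's btrfsmaintenance package; a quick way to see when it will next run (the timer names and config path below are from my install and may differ by version):

    systemctl list-timers 'btrfs-*'    # e.g. btrfs-balance.timer, btrfs-scrub.timer
    # schedules and mount points are set in /etc/sysconfig/btrfsmaintenance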
The big missing feature, IMO, is that you can't grow a ZFS vdev just by adding a disk -- you have to go through the rigmarole of swapping in larger disks and resilvering, one at a time.
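For anyone who hasn't done it, the swap dance looks roughly like this (pool and device names are made up; you repeat it for every disk in the vdev and wait for each resilver to finish):

    zpool set autoexpand=on tank
    zpool replace tank ada0 ada4    # swap one member for a bigger disk
    zpool status tank               # watch the resilver
    # ...repeat for the remaining members; only after the last one
    # finishes does the vdev actually grow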
It's been a while since I played with btrfs, but it was really nice to be able to dynamically add hard drives of any size to a volume and have the filesystem seamlessly balance the data across the drives with the requested replication factor.
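Something like this, from memory (device and mount point are placeholders):

    btrfs device add /dev/sdd /mnt/pool    # grows the filesystem immediately
    btrfs balance start /mnt/pool          # spread existing data across all devices
    btrfs filesystem usage /mnt/pool       # check how space is distributed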
Very flexible online resizing, re-striping, and RAID-level conversion of pools. On-demand deduplication instead of ZFS's always-active dedup. Reflink copies. On-demand compression and defragmentation. No ARC gobbling up memory (ZFS's ARC competes with the page cache). NOCOW files.
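A few of those in command form (paths are placeholders, and duperemove is just one of several out-of-band dedupe tools):

    btrfs filesystem resize +500G /mnt/pool                          # online grow
    btrfs balance start -dconvert=raid1 -mconvert=raid1 /mnt/pool    # RAID level conversion
    cp --reflink=always disk.img disk-clone.img                      # instant reflink copy
    btrfs filesystem defragment -r -czstd /mnt/pool/photos           # on-demand compress + defrag
    chattr +C /mnt/pool/vm                                           # NOCOW for new files in this dir
    duperemove -dr /mnt/pool/backups                                 # on-demand deduplication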
- metadata raid1 with data raid5: if you currently have raid5 metadata, `btrfs balance start -mconvert=raid1 <mp>` will convert the metadata from raid5 to raid1 online.
- raid10 with a device failure: you might think you have to wait a day or two for a replacement, but in fact you can run `btrfs device remove missing <mp>` and the data from the missing device is replicated into the free space of the remaining devices along with an online shrink; the whole operation is COW and atomic per block group (both commands are spelled out below).
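Both of those with a placeholder mount point in place of <mp>:

    btrfs balance start -mconvert=raid1 /mnt/pool    # metadata raid5 -> raid1, online
    btrfs filesystem df /mnt/pool                    # confirm the metadata profile
    # after a device dies in a raid10 pool, instead of waiting for a replacement:
    btrfs device remove missing /mnt/pool            # re-replicate onto remaining free space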
On stable hardware, Btrfs raid56 should be stable. Since the code everywhere is constantly changing, I guess it's somewhat subjective whether raid56-specific changes mean the code is not stable.
I don't have a complete or detailed breakdown, but I think a big chunk of Btrfs raid56 problems would have affected any raid56 implementation, including the crazy Linux 30s command timeout default. Many failures I've seen are single-drive failures with an additional bad sector on another drive; if Btrfs itself detects an error (csum mismatch for metadata or data) on a remaining drive, that is no different from a two-"device" failure for that stripe.
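This is the usual advice for the timeout mismatch, if your drives support SCT ERC (sdX is a placeholder):

    cat /sys/block/sdX/device/timeout    # kernel default is 30 seconds
    smartctl -l scterc /dev/sdX          # does the drive support/enable SCT ERC?
    smartctl -l scterc,70,70 /dev/sdX    # cap error recovery at 7.0s read/write
    # for desktop drives without ERC, raise the kernel timeout instead:
    echo 180 > /sys/block/sdX/device/timeout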
More reliable than raid5 for both metadata and data is to use raid1 metadata and raid5 data (this can be chosen at mkfs time, or you can do a balance with -mconvert=raid1 to convert raid5 metadata to raid1).
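At mkfs time that's just (devices are placeholders):

    mkfs.btrfs -m raid1 -d raid5 /dev/sdb /dev/sdc /dev/sdd    # raid1 metadata, raid5 data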
And for any RAID implementation, it's questionable whether raid56 is a good idea when 5+ TB drives take days to rebuild.
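Back-of-envelope, even the best case is long (the numbers are purely illustrative):

    # best-case rebuild of a 10 TB drive at a steady 150 MB/s:
    echo "$(( 10 * 1000 * 1000 / 150 / 3600 )) hours"    # prints "18 hours"
    # real rebuilds run under load and rarely sustain that rate, hence days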