Great write-up and nice setup! I’ve been running Unraid on a fast box as my local storage host, using NFS and SMB with dismal performance. I’m looking at 10GbE and building up my cache pool, but as it stands it takes 40+ seconds for my laptop to mount and browse even a small share. I’m intrigued by ZFS + iSCSI: do you think it would give me an improvement over SMB?
Probably highly dependent on what you're doing with it and whether your SMB implementation supports the SMB Direct (RDMA) extensions.
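For what it's worth, Unraid's shares go through Samba, which does multichannel but (last I checked) not SMB Direct. A quick way to sanity-check what's actually being negotiated -- smbstatus and the smb.conf knob below are stock Samba; whether multichannel is on by default depends on your version:

    # Shows the negotiated SMB protocol version per client connection
    smbstatus

    # /etc/samba/smb.conf -- opt in to SMB3 multichannel explicitly
    [global]
        server multi channel support = yes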
In my case it's mostly because I tend to run VMs on various older semi-retired machines with limited or slow local storage that I only turn on when I need them. VMware's VMFS is cluster-aware, so it really doesn't matter which hypervisor I end up spinning a VM up on.
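If you do end up trying it, the moving parts on a generic Linux box look roughly like this -- a sketch using a ZFS zvol plus the LIO target via targetcli; the pool and IQN names are made up, Unraid's own ZFS/iSCSI plumbing may differ, and you'd still need to add portal/ACL entries for your initiators:

    # Carve a block device (zvol) out of the pool for the LUN
    zfs create -V 200G -o volblocksize=64k tank/vmstore

    # Export it over iSCSI with LIO
    targetcli /backstores/block create name=vmstore dev=/dev/zvol/tank/vmstore
    targetcli /iscsi create iqn.2024-01.local.nas:vmstore
    targetcli /iscsi/iqn.2024-01.local.nas:vmstore/tpg1/luns create /backstores/block/vmstore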
I haven't dealt with Unraid specifically, but there are a lot of caching and network parameters that can wildly affect performance -- VMware, for example, insists on synchronous writes to network storage for obvious reasons, and having a safe write cache plus large transfers with enough in-flight commands can make a night-and-day difference.
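To make the sync-write point concrete on the ZFS side (a sketch -- device names are placeholders, and a SLOG only helps if sync writes are actually your bottleneck):

    # Watch per-vdev traffic; sync-heavy VM workloads hammer the intent log
    zpool iostat -v tank 5

    # Give the intent log a fast, power-loss-safe dedicated device (SLOG)
    zpool add tank log /dev/nvme0n1

    # The tempting shortcut -- don't, it acknowledges writes that aren't on disk
    # zfs set sync=disabled tank/vmstore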
If you're primarily just using NFS/SMB as file shares, then getting iSCSI working probably isn't a good use of your time versus figuring out why the existing setup behaves that way. Samba and SMB performance tuning can be a frustrating experience, but iSCSI is far more opaque and inscrutable, particularly on Windows.
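If you do go down the Samba tuning road, these are the usual first suspects in smb.conf -- a sketch only, measure before and after, since some of these are already defaults on recent versions:

    [global]
        # Use async I/O for all reads/writes regardless of request size
        aio read size = 1
        aio write size = 1
        # Zero-copy reads for unencrypted, unsigned connections
        use sendfile = yes
        # Verbose logging can quietly wreck throughput
        log level = 1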