
Please stop spreading this misinformed statement. I assume you are referring to the ZFS ARC (Adaptive Replacement Cache). It works in much the same way as the regular Linux page cache. It does not take much more memory (if you disable prefetch) and will only use what is available/idle. We use Linux with ZFS on production systems with as little as 1GB of memory. We stopped counting the times it has saved the day. :-)
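
If you want a hard upper bound on ARC memory rather than relying on it shrinking under pressure, ZFS on Linux exposes the zfs_arc_max module parameter. A minimal sketch, with 512MB as a purely illustrative value:

    # /etc/modprobe.d/zfs.conf -- cap the ARC at 512MB (value in bytes)
    options zfs zfs_arc_max=536870912

    # or apply it at runtime without reloading the module
    echo 536870912 > /sys/module/zfs/parameters/zfs_arc_max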

ECC is nice to have, but ZFS has no special requirements beyond, say, a regular page cache. The only difference is that ZFS will discover bit-flips instead of silently ignoring them, as ext4 or xfs would.




> ECC is nice to have.

Actually, it seems ECC is important for ZFS filesystems; see:

http://louwrentius.com/please-use-zfs-with-ecc-memory.html


To be clear, it is not that ZFS requires, let alone mandates, ECC. Since ZFS trusts data as it exists in memory and checksums everything past that point, it is prudent to have integrity checks at the hardware memory level as well.

Thus, if one is using ZFS for data reliability, one ought to use ECC memory as well.


> Actually it seems ECC is important for ZFS filesystems see:

The phrasing of the previous comment tends to lead people to think ECC RAM is needed for ZFS specifically. As the blog post you link to points out, it is equally applicable to all filesystems.


It's not required, but it doesn't make sense to use ZFS and not use ECC memory. That's the point. It's like locking the back door but leaving the front door wide open.


Interesting.

That's right, exactly the kind of hardware I was referring to: 1 GB of plain RAM. Truthfully, I haven't tested ZFS yet for exactly that reason: I've always read that ZFS has big requirements, so I refrained from trying it. It seems I should give it a try. ;)

Btrfs is another story. I've used it for years, and I'd prefer not to have to use it again until it becomes "stable" and "performant". :)


FreeNAS != ZFS. The former is a specialised storage system that has to meet a very different set of criteria than a lightweight server with 1GB of RAM.


Is ZFS able to repair corruption of a single copy of the data?

My main concern is being able to repair "silent" data corruption on a single-drive machine. Can I dedicate x% of my "partition" to data repair, or do I need to use another partition/drive to mirror/RAID it?

If I understand right, ZFS can detect bitrot ("not really" a big deal), but without any local copy it can't self-heal.

My use case is an ARM A20 SoC (Lime2) to store local backups among other things, so I need something that detects and repairs silent data corruption at rest by itself (using a single drive).

A poor man's NAS/server. ;)


Not sure if it will fit your needs or not, but for long-term storage on single HDs (and back in the day on DVDs), I would create par files with about 5-10% redundancy to guard against data loss due to bad sectors: http://parchive.sourceforge.net/ Total drive failure of course means loss of data, but the odd bad sector or corrupted bit would be correctable on a single disk. This was very popular back in the binary Usenet days.
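
For reference, a minimal sketch with the par2 command-line tool (par2cmdline); the archive name and the 10% redundancy level are just illustrations:

    # create recovery files with 10% redundancy alongside the archive
    par2 create -r10 backup.tar.par2 backup.tar

    # later: check integrity, and repair from the recovery data if needed
    par2 verify backup.tar.par2
    par2 repair backup.tar.par2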


You can create a nested ZFS filesystem and set the number of copies of its blocks to two or more. This will take more space, but there'll be multiple copies of the same block of data.

Ideally, though, please add an additional disk and set it up as a mirror.

ZFS can detect silent data corruption during data access or during a zpool scrub (which can be run on a live production server). If multiple copies exist, ZFS can use one of the intact copies to repair the corrupted one.
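
A minimal sketch of both ideas, assuming a pool named tank (the dataset name is hypothetical):

    # keep two copies of every block in this dataset (~2x space used there)
    zfs create -o copies=2 tank/important

    # verify every checksum in the pool, repairing from intact copies
    zpool scrub tank
    zpool status tank    # scrub progress plus any errors found/repaired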


Got it, but then it's not for my use case, since I don't want to halve my storage capacity.

Anyway, I will try to use it on my main PC, which has several disks, and continue using my solution on single-disk machines (laptop, VPS, SoC...). :)


Note it won't necessarily halve the capacity. Selectively enable it for the datasets that require it, and avoid the overhead for the rest.
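
For example (a hedged sketch; the pool and dataset names are hypothetical):

    # extra redundancy only where it matters; the rest stays at copies=1
    zfs set copies=2 tank/backups
    zfs get copies tank/backups tank/scratch

Note that copies only applies to data written after the property is set; existing blocks are not duplicated retroactively.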


No, but parity archives solve a different problem: with only a few percent of extra storage you can survive bit errors in your dataset. It's like Reed-Solomon for files.

To achieve the same with ZFS, you would have to run RAID-Z2 on sparse files.
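
Roughly like this; a purely illustrative sketch with hypothetical paths (file-backed vdevs are fine for experiments, not recommended for real data):

    # six 1GB sparse files as backing stores
    mkdir -p /var/tmp/zdemo
    for i in 1 2 3 4 5 6; do truncate -s 1G /var/tmp/zdemo/d$i; done

    # a raidz2 pool on top: any two "devices" worth of damage is recoverable
    zpool create demo raidz2 /var/tmp/zdemo/d1 /var/tmp/zdemo/d2 \
        /var/tmp/zdemo/d3 /var/tmp/zdemo/d4 /var/tmp/zdemo/d5 /var/tmp/zdemo/d6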



