I haven't checked in a really long time, but years ago ntfs-3g had a really suboptimal allocation pattern for spinning drives, and optimizations were only present in the commercial version. I don't know if the block allocation strategy was owned by ntfs-3g or if it was using something in-kernel.
If I'm remembering the details right (and that's a big if), large files written sequentially would have their blocks scattered all over the disk, making e.g. large uncompressed videos written from Linux too slow to play back due to seek times, when they would have been fine had their blocks been laid out contiguously.
ntfs-3g is implemented entirely in userspace, and is far easier to hack on and test than an in-kernel driver would be, if anyone wanted to improve its file allocation strategies.
Years ago, while working on a commercial virtualization product, I had to support initializing sparsely populated NTFS filesystems where only the filesystem metadata was written out; the allocated file extent ranges were logged externally and never actually populated in the sparse file backing the NTFS filesystem. I ended up deriving a small library for doing this from the libntfs-3g code. I don't think it took more than a weekend; the code was quite sane.
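For a sense of how approachable the library is, below is a minimal sketch of the kind of thing I mean: open a volume read-only with libntfs-3g and dump a file's extent map (its runlist). It assumes the libntfs-3g public API as I remember it (ntfs_mount, ntfs_pathname_to_inode, ntfs_attr_open, ntfs_attr_map_whole_runlist); names and signatures may have shifted between versions, so treat it as a sketch rather than something I've compiled against current headers.

    /* Sketch (untested against current headers): dump the on-disk extent
     * layout (runlist) of one file's unnamed $DATA attribute via libntfs-3g.
     * Roughly: gcc dump_runs.c $(pkg-config --cflags --libs libntfs-3g) */
    #include <stdio.h>
    #include <ntfs-3g/types.h>
    #include <ntfs-3g/volume.h>
    #include <ntfs-3g/inode.h>
    #include <ntfs-3g/attrib.h>
    #include <ntfs-3g/dir.h>

    int main(int argc, char **argv)
    {
        if (argc != 3) {
            fprintf(stderr, "usage: %s <device-or-image> <path-in-volume>\n", argv[0]);
            return 1;
        }

        /* Works against a block device or a filesystem image in a file. */
        ntfs_volume *vol = ntfs_mount(argv[1], NTFS_MNT_RDONLY);
        if (!vol) {
            perror("ntfs_mount");
            return 1;
        }

        /* Resolve the path to an inode and open its unnamed $DATA attribute. */
        ntfs_inode *ni = ntfs_pathname_to_inode(vol, NULL, argv[2]);
        ntfs_attr *na = ni ? ntfs_attr_open(ni, AT_DATA, AT_UNNAMED, 0) : NULL;

        /* Decode the full runlist: the file's VCN -> LCN extent map. */
        if (na && ntfs_attr_map_whole_runlist(na) == 0) {
            for (runlist_element *rl = na->rl; rl && rl->length; rl++) {
                if (rl->lcn >= 0)
                    printf("vcn %lld -> lcn %lld, %lld clusters\n",
                           (long long)rl->vcn, (long long)rl->lcn,
                           (long long)rl->length);
                else
                    printf("vcn %lld -> sparse hole, %lld clusters\n",
                           (long long)rl->vcn, (long long)rl->length);
            }
        }

        if (na)
            ntfs_attr_close(na);
        if (ni)
            ntfs_inode_close(ni);
        ntfs_umount(vol, FALSE);
        return 0;
    }

Each runlist entry is just a (VCN, LCN, length) triple, which is also where the fragmentation issue comes in: a badly allocated file is simply one with far too many of these entries.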
What I recall from that experience is that file extent allocation on an empty filesystem was perfectly efficient, since you'd have to go out of your way to make it bad.
The real trouble started once the filesystem had been used enough for its free space to become fragmented. At that point the allocation strategy was quite poor and you would end up with extremely fragmented large files. IIRC it was really just a naive sequential find-the-first-free-blocks kind of thing. That aspect is consistent with your memory.
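To make that concrete, here's a toy illustration (not ntfs-3g's actual allocator, just the behaviour as I remember it): a first-fit scan over a cluster bitmap that takes each free run it finds as its own extent. On an empty bitmap a request comes back as one contiguous extent; once the free space is fragmented, the same request gets chopped into many tiny extents, which is exactly the large-file fragmentation described above.

    /* Toy model, not ntfs-3g code: naive first-fit allocation over a
     * cluster bitmap, taking each free run as a separate extent. */
    #include <stdio.h>
    #include <stdbool.h>

    #define CLUSTERS 64

    /* Allocate `want` clusters by scanning from the start of the bitmap;
     * every free run encountered becomes its own extent. Returns the
     * number of extents the request was split into. */
    static int first_fit_alloc(bool used[CLUSTERS], int want)
    {
        int extents = 0, i = 0;

        while (want > 0 && i < CLUSTERS) {
            if (used[i]) {
                i++;
                continue;
            }
            int start = i, len = 0;
            while (i < CLUSTERS && !used[i] && len < want) {
                used[i++] = true;
                len++;
            }
            printf("  extent %d: clusters %d..%d\n",
                   ++extents, start, start + len - 1);
            want -= len;
        }
        return extents;
    }

    int main(void)
    {
        bool used[CLUSTERS] = { false };

        printf("empty volume, 16-cluster file:\n");
        first_fit_alloc(used, 16);           /* one contiguous extent */

        /* Fragment the free space: mark every other remaining cluster used. */
        for (int i = 16; i < CLUSTERS; i += 2)
            used[i] = true;

        printf("fragmented free space, 16-cluster file:\n");
        first_fit_alloc(used, 16);           /* sixteen 1-cluster extents */
        return 0;
    }

A smarter allocator would at least look for a single free run big enough for the whole request (or the largest run available) before falling back to first-fit, and that's exactly the kind of change that's easy to prototype and benchmark in a userspace driver.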
I think if we want to improve that aspect of Linux NTFS support, we're better off working from ntfs-3g.
Not to mention there are some significant downsides to in-kernel filesystem drivers: they are rather problematic when it comes to containers/namespaces, and the security story is a total shit show.
I think it'd be nice to have a native driver instead of loading FUSE and then a userspace driver. Is it going to revolutionize my NTFS access and make me toast in the mornings? No. Do I care to see it included? Yes.
Maybe there's something about your use case where the potential performance difference between a kernel driver and FUSE driver is hugely relevant and outweighs the security problems.
For me, the use cases for NTFS in my Linux-centric life are all fraught with trust issues when it comes to knowing that what's on that filesystem isn't malicious and about to exploit a kernel fs driver bug for unfettered ring-0 execution.
It's less of a concern with filesystems you initialized from your host, on devices permanently attached to said host, with a clear chain of custody, accessed purely from the trusted environment. There's a lot of implicit trust involved in mounting filesystems with a kernel driver. The kernel fs devs aren't shy about admitting there are significant trust assumptions throughout those drivers - they aren't exactly hardened against malicious input, and they aren't isolated processes like with FUSE.
Personally, when a box has a sensitive enough use case (e.g. secret storage hosts) that attacks via the local filesystem are a design concern, I simply don't include FUSE or additional filesystem modules, nor do I plan on mounting additional storage after boot. I configure everything with the minimal possible surface, regardless of whether it could be built more conveniently.
I suppose some people may apply that security profile to every box, but it shouldn't come as much of a surprise that that level of hardening is not the normal use case considered when new drivers are staged in the kernel tree.
> Maybe there's something about your use case where the potential performance difference between a kernel driver and FUSE driver is hugely relevant and outweighs the security problems.
The idea of handling I/O errors is pretty recent in most Linux filesystems, and I don't think security concerns are much of an issue with adding another filesystem - the kernel is probably full of easily exploitable holes anyway.
ntfs-3g does not support NTFS journal replay/rollback (e.g. after an unclean Windows shutdown), and it does not maintain the NTFS journal for its own operations either.
That said, I'd be surprised if this or any Linux NTFS driver could do that. It would require reverse engineering the journal format, which may even change with every Windows release.
> It would require reverse engineering the journal format, which may even change with every Windows release.
It does not. Thanks to Microsoft's obsessive attention to backwards compatibility, you can safely mount a Windows 10 NTFS partition, journal and all, on Windows XP (and I hear even Windows 2000) and get some form of safe read/write. However, certain chunks of the on-disk filesystem are not backwards compatible; this is mainly the System Volume Information used for System Restore and shadow copies (and not accessible to typical userland in all cases).
More to the point, the submitted driver does, too:
I think that may be the point - it works, but not really well enough. If you need to use it to shift a large amount of data around, the performance is crippling.