With all the terabyte drives available for desktop machines, and affordable SSDs pushing toward the same capacities, why is it that every single USB stick (drive) out there is sold with the same FAT32 format? This is a hindrance, as the maximum file size on this filesystem is 4 GB. So how come MSFT/APPL and co. have not agreed on a new filesystem to get past this outdated standard?
I've used UDF on external drives. Its only limitation was that Windows XP wouldn't write to UDF filesystems (from memory, Windows Vista and later support writing to UDF).
It has been a nice little filesystem to use on thumbdrives where cross-platform compatibility is key.
The only downside is the sparse documentation on creating UDF filesystems on block devices.
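For anyone hitting the same documentation gap, here's a minimal sketch of the usual approach, assuming the udftools package (mkudffs) is installed and that you format the whole stick rather than a partition; the available flags vary a bit between mkudffs versions, so treat this as a starting point rather than a recipe:

    import subprocess

    def format_udf(device):
        """Destructively format an entire block device with UDF.

        Sketch only: assumes mkudffs (from udftools) is on PATH and that
        'device' is the whole stick (e.g. /dev/sdX), not a partition;
        formatting the whole device sidesteps the lack of a UDF
        partition id.
        """
        # --media-type=hd picks defaults suited to hard-drive-like media;
        # -b 512 matches the 512-byte logical sectors of most USB sticks.
        subprocess.run(
            ["mkudffs", "--media-type=hd", "-b", "512", device],
            check=True,
        )

    # format_udf("/dev/sdX")  # run as root, and double-check the device name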
If you were a USB drive manufacturer, which support requests would you rather deal with?
"Why can't I copy a 5gb file to my usb drive?"
or
"Why isn't my USB drive recognized on my computer running an old OS?"
Even if MSFT and AAPL and GOOG were to agree on a filesystem for future OS releases, it would take a very very long time until card manufacturers are willing to create drives that aren't FAT32.
USB sticks go in a lot of things that aren't the laptops the ultra-wealthy constantly upgrade every 18 months.
I've got equipment with USB ports on it that can only read FAT32.
Transitions take time. Unfortunately both MSFT and APPL practice the fine art of "Don't improve your own product, hamper your competitors' efforts" as a way to maintain their respective kingdoms.
As a number of people have mentioned, compatibility is one reason. However, there's another reason, particularly for flash drives: the FAT itself.
Flash drives tend to be MLC or TLC NAND flash, which is slow and can sometimes be error-prone. Oftentimes manufacturers will pull a trick and make the start of the disk faster SLC NAND. Then, as a further enhancement, they'll make sure the FAT itself falls on flash block boundaries.
With a more complicated filesystem this becomes more of a challenge. But with FAT, it's easy.
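To put rough numbers on that (illustrative only, not any vendor's real geometry): the FAT is a small, contiguous region sitting right after the reserved sectors, so padding the reserved area is enough to land it wherever you want. A sketch of the arithmetic:

    # Back-of-the-envelope FAT32 layout for an 8 GB stick (illustrative
    # numbers only).
    SECTOR = 512                        # bytes per logical sector
    total_sectors = 8 * 1024**3 // SECTOR
    sectors_per_cluster = 64            # 32 KiB clusters, a common choice
    reserved_sectors = 32               # padding here controls alignment

    # One 4-byte FAT entry per cluster, and two copies of the FAT.
    clusters = (total_sectors - reserved_sectors) // sectors_per_cluster
    fat_sectors = -(-clusters * 4 // SECTOR)        # ceiling division
    fat_region_sectors = 2 * fat_sectors

    print("FAT region starts at sector", reserved_sectors)
    print("FAT region size: %d KiB" % (fat_region_sectors * SECTOR // 1024))
    # Roughly 2 MiB of hot metadata at a fixed offset near the start of
    # the device: easy to place in the fast SLC area and easy to align.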
LWN suggests storage manufacturers have been optimising for FAT32, so even if other filesystems were more widely supported, they typically wouldn't be as fast:
https://lwn.net/Articles/428584/
Nothing enrages me more than being unable to copy a 5 GB file (4 GB limit on FAT32) onto a USB stick just because a PC and a Mac are actively _not_ willing to cooperate.
Recently discovered my TV was able to read ext4, never looked back :)
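(For the record, the 4 GB figure isn't arbitrary: a FAT directory entry stores the file size in a 32-bit field, so the ceiling is baked into the on-disk format.)

    # The FAT32 file-size field is 32 bits wide, so the hard limit is:
    max_size = 2**32 - 1
    print(max_size)             # 4294967295 bytes
    print(max_size / 2**30)     # just shy of 4 GiB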
"Who uses cable Ethernet these days" - I had to twice in the past year - a hotel in Germany that provided internet via a cable. And when the router decided to loose the WIFI password and I had to use an old laptop to physically connect.
I don't use USB sticks very often myself, but I am sure there are tons of scenarios where people find them useful. The fact that they don't match any of your use cases is no justification.
Actually, it is a justification. That's why a multi-million dollar company is releasing a MacBook with no USB 2 port. A certain demographic would make use of such a port, but a much larger one wouldn't. Plus, they're looking to the future. With every year that passes, people will use physical media less and less.
As for your Ethernet example, that's too bad. Wi-Fi capabilities will get better with every year that passes, too.
> As for your Ethernet example, that's too bad. Wi-Fi capabilities will get better with every year that passes, too.
I beg to differ. I've gone from 802.11g to n to ac and I've yet to see any noticeable gains bringing me even close to plain old Gigabit ethernet.
It can't compete or even compare on individual aspects: reliability, per-client throughput, total throughput (due to shared medium access), or connection setup time.
I'm not just talking "a little bit slower". I'm talking 1 minute guaranteed to succeed (Ethernet) vs 1 hour guaranteed to fail (Wi-Fi). In real-world performance it's orders of magnitude slower. That's measurable. That's a fact.
For lots of use cases Wi-Fi is utterly useless. Have fun trying to back up media to an iSCSI volume over Wi-Fi, for instance.
I'm guessing you don't live in a heavily populated area where the Wi-Fi bands are oversubscribed and offer little more than 25 Mbps of effective bandwidth. The same crap I was promised would improve half a decade ago. It hasn't.
A huge share of tech-hungry power users live in these places and suffer through this subpar performance. We're not happy settling for mediocre Wi-Fi to cover our needs.
Here's one of mine: When you want to transfer files between two machines which (for whatever reason) can't be on the same network, and you don't want to install additional software into the environment.
Depending on the files and the networks involved, using a USB stick (or any other physical medium) can often be easier and faster than trying to find a network bridge for your files.
So why are you moving 5 GB between PCs? Especially on a small flash drive rather than a portable hard drive (which comes formatted with a filesystem better suited to large files).
Does it really matter why? There are innumerable occasions when you might have 5 GB+ of data you want to quickly swap between machines. DVD images (especially OS install ones) spring to mind, and it's not uncommon to want the image itself, not its contents unpacked onto your drive.
Portable HDDs are bigger, more expensive, more fragile, and generally worse in every way (except transfer speeds) than USB flash drives for occasional short-haul data transfers. Writing 10G+ images onto a cheap 16G USB drive and throwing it across the office is a relatively common occurrence here.
I recently had trouble backing up photos with a terabyte USB drive shared between Ubuntu and OS X (which couldn't write to it). I was using NTFS, which had been the best choice for the last decade.
It looks like exFAT support has improved over the last few years, so I bit the bullet, moved the data off, and reformatted. It's now working great on both OSes. I wish it were open, sure, and I hear UDF is a possibility too, but I stopped looking once it started working.
While the major operating systems support UDF, they can all be a little picky about how it's used on a USB stick or hard drive. And Windows has a tendency not to mount something that wasn't unmounted properly (plugging it into a Linux machine fixes this, in my experience).
The main problem is that there is no UDF partition id, so I've never gotten it to mount on the OS X systems I've had access to (I have none myself, and OS X seems to respect the partition id, unlike Windows and Linux).
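If you want to check what partition id a stick is actually advertising, you can read the type byte straight out of the MBR. A quick sketch, assuming an MBR-partitioned device and a placeholder path:

    # Read the partition-type byte of the first MBR entry.
    # Needs read access to the raw device (typically root).
    DEVICE = "/dev/sdX"   # placeholder, substitute your own device

    with open(DEVICE, "rb") as dev:
        mbr = dev.read(512)

    # The first partition entry starts at offset 446; its type byte is
    # 4 bytes into the entry.
    ptype = mbr[446 + 4]
    print("partition type: 0x%02x" % ptype)
    # 0x0b / 0x0c = FAT32, 0x07 = NTFS/exFAT. Nothing is assigned to UDF,
    # which is why formatting the whole device (no partition table at
    # all) is the usual cross-platform workaround.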
FAT32 is supported on almost any platform out of the box, without additional drivers, software, etc.
There is no other filesystem with that wide a range of support, yet.
NTFS-3G, at least in Ubuntu-derived distros, works just as well as native filesystems and comes ready to use; I didn't have to configure anything to read and write my various NTFS-formatted partitions.
Fair enough. But in this case, the userland drivers are easy to install (if they're not a default part of your distro of choice) and work without a hitch, at least in my experience.
That's part of the answer to the OP's question. I thought that ext2 might have caught on, but I guess vendors were scared off by the GPL and (later, when BSD licensed implementations became available) the lack of support on Windows.
If you're being practical, the FUSE driver is just a PPA away on Ubuntu and works fine in my experience. I've switched over all my external media to exFAT, including my external hard drive with virtual BD-ROM functionality.
https://en.wikipedia.org/wiki/Comparison_of_file_systems#Sup...
and compare the number of green cells for each filesystem, i.e. how widely compatible it is.