
How many decades ago was that? Sounds more like a partition converted from ext3. Every ext4 partition I've seen in the last 15 years has had a ridiculous number of inodes. I do support for several hundred Linux systems.


Zero decades ago? I run EC2 instances that process hundreds of millions of small files. I always use the latest Ubuntu LTS.

I'm also tired of trying to share my experience and having to choose between engaging with snide, ignorant, low-brow dismissals and leaving them unanswered so they can misinform people. Ext4's inode count is fixed when the filesystem is created; XFS allocates inodes dynamically. So with ext4, it's possible to run out of inodes while there are blocks available. With XFS, it's not.
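
If you want to see how close a mounted filesystem is to that wall, here's a rough sketch using Python's os.statvfs (the /data mount point is just an example):

    import os

    # statvfs reports both block and inode headroom for a mounted filesystem.
    st = os.statvfs("/data")  # example mount point

    pct_blocks_free = 100 * st.f_bavail / st.f_blocks
    pct_inodes_free = 100 * st.f_favail / st.f_files  # f_files is fixed at mkfs time on ext4

    print(f"free space:  {pct_blocks_free:.1f}%")
    print(f"free inodes: {pct_inodes_free:.1f}%")
    # If the second number hits zero while plenty of space is free,
    # you're in the ext4 failure mode described above.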


From this paper https://arxiv.org/html/2408.01805v1 (2024)

> EXT4 is the general-purpose filesystem for a majority of Linux distributions and gets installed as the default. However, it is unable to accommodate one billion files by default due to a small number of inodes available, typically set at 240 million files or folders, created during installation.

Which is interesting. I knew EXT2/3/4 had inode bitmaps, but I haven't been paying them much attention for the past decade. Slightly surprised they haven't added an option for dynamic allocation; OTOH inodes are small compared with storage, and most people don't need billions of files.


That person is being extremely silly when they call that "small".

Ext2/3/4 reserves a huge number of inodes by default: one per 16KB of drive space. You don't hit that with normal use. Almost everyone should be reducing their inode count so it doesn't take up 1.6% of the drive.
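
The 1.6% is just the default inode size divided by the default bytes-per-inode ratio; rough arithmetic:

    # ext4 defaults (mke2fs.conf): 256-byte inodes, one per 16 KiB of capacity.
    inode_size = 256
    bytes_per_inode = 16 * 1024

    overhead = inode_size / bytes_per_inode
    print(f"inode table overhead: {overhead:.2%}")  # 1.56%, i.e. the ~1.6% above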


> That person is being extremely silly when they call that "small".

Explain.

> Ext2/3/4 reserves a huge number of inodes by default: one per 16KB of drive space. You don't hit that with normal use. Almost everyone should be reducing their inode count so it doesn't take up 1.6% of the drive.

Well almost, but not the OP who runs out of inode space with the default format.


> Explain.

Hundreds of millions of inodes is not a small number. I'm not sure how I can explain that much better. There are multiple orders of magnitude between "240 million inodes" and "a small number of inodes".

And on a 14TB hard drive, the default would be more like 850 million.
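
That estimate is just the capacity divided by the 16KB ratio; quick check:

    # One inode per 16 KiB of capacity, as formatted by default.
    drive_bytes = 14 * 10**12           # a 14 TB drive
    bytes_per_inode = 16 * 1024

    print(f"{drive_bytes // bytes_per_inode:,} inodes")  # ~854 million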

> Well almost, but not the OP who runs out of inode space with the default format.

I said "almost" for a reason. It's a bad idea for quite small drives or some rare use cases.


> Hundreds of millions of inodes is not a small number. I'm not sure how I can explain that much better. There are multiple orders of magnitude between "240 million inodes" and "a small number of inodes".

The poster with the inode problem in the subthread I replied to said "many small files" and "millions of small files", not "a small number of inodes".

> I said "almost" for a reason. It's a bad idea for quite small drives or some rare use cases.

Files in source trees are often smaller than 16KB each, though, so the advice to cut the inode allocation just to save a fraction of that 1.6% of space may not be such a good idea. I would leave it at the default unless there is a good reason to change it. And that's of course the trouble with a fixed inode allocation.


> The poster with the inode problem in the subthread I replied to said "many small files" and "millions of small files", not "a small number of inodes".

I quoted the paper. I said nothing about OP.

> Files in source trees are often smaller than 16KB each, though, so the advice to cut the inode allocation just to save a fraction of that 1.6% of space may not be such a good idea.

For drives that aren't going to have tons of uncompiled source trees, more than a percent is a lot. That could be a hundred gigabytes wasted. Almost any drive that isn't a small OS partition can do just fine at .4 or .2 percent.
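
Those figures correspond to raising the bytes-per-inode ratio (the mke2fs -i knob); rough numbers, assuming the default 256-byte inodes:

    # Inode-table overhead at different bytes-per-inode ratios.
    inode_size = 256
    for ratio in (16 * 1024, 64 * 1024, 128 * 1024):
        print(f"-i {ratio:>6}: {inode_size / ratio:.2%} of the drive")
    # -> 1.56%, 0.39%, 0.20%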


> I quoted the paper. I said nothing about OP.

That passage is in the context of a billion inodes though, where it is a small number by comparison. It's obviously not calling it a small number by some absolute or objective standard.

> For drives that aren't going to have tons of uncompiled source trees, more than a percent is a lot. That could be a hundred gigabytes wasted. Almost any drive that isn't a small OS partition can do just fine at .4 or .2 percent.

If you have 10TB then saving 80GB isn't really a lot at all. It's about 0.8%. You really wouldn't recommend an average user change that at all. If they were desperate for a few more GB you might drop the reserved % down a couple of points first, which is already more than the entire inode allocation.


That paper is not saying it's small "in comparison", it's saying it's small for a filesystem. It's silly.

> You really wouldn't recommend an average user change that at all.

Yes I would. If they were stuck with ext4.

In general I'd recommend something with checksums.

> reserved %

Oh god definitely change that, it should cap at 10GB or something.
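
For scale, the default reserved-blocks percentage (the mke2fs/tune2fs -m setting) is 5%, which on a big drive dwarfs a 10GB cap. Rough numbers:

    # Reserved-blocks space on a 10 TB drive.
    drive_bytes = 10 * 10**12
    print(f"default 5% reservation: {0.05 * drive_bytes / 10**9:.0f} GB")  # 500 GB
    print(f"a flat 10 GB cap:       {10 * 10**9 / drive_bytes:.2%} of the drive")  # 0.10%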


> That paper is not saying it's small "in comparison", it's saying it's small for a filesystem. It's silly.

It's not, it's clearly saying the inode count is too small for their 1 billion file test.

> Yes I would. If they were stuck with ext4.

I meant good advice. There are enough stories of people running out of inodes, even in the tiny sample in this thread, that it's not good advice to cut the inode count 4x.


I ran a very small personal webserver with limited storage on Ubuntu on EC2 for a while.

The EC2 instance, likely the smallest configuration available at the time, hit an inode limit just running updates over time.


Super small drives are a special case for sure. You don't want to go below a certain number of inodes even on a 5GB root, but you also don't want to scale that up 50x on a 250GB root.


Why are super small drives a special case? It's still the same data to inode ratio.


There are a lot of small files that come with your typical OS install, making up the first handful of gigabytes.

When you add on another terabyte, the distribution is totally different. The files are much bigger.


Right, so it's entirely about usage rather than filesystem size.


Different sizes have different uses.


With Gentoo, if you allocate, let's say, 20G to / on ext4, then you can quite easily run into this issue.

/usr/src/linux will use about 30% of the space and 10% of the inodes.

/var/db/repos/gentoo will use about 4% of the space and 10% of the inodes.

Next you clone the firefox-source hg repo, which will use about 15% of the space and 80% of the inodes.
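
Rough arithmetic for that, assuming a default-formatted 20G root (the ~4KB average file size is just illustrative):

    # 20 GiB root with ext4 defaults: one inode per 16 KiB.
    capacity = 20 * 2**30
    bytes_per_inode = 16 * 1024
    total_inodes = capacity // bytes_per_inode
    print(f"{total_inodes:,} inodes")  # 1,310,720

    # A checkout averaging ~4 KiB per file burns inodes 4x faster than space,
    # so the inode table can fill while ~75% of the space is still free.
    avg_file = 4 * 1024
    print(f"space used when inodes run out: {total_inodes * avg_file / capacity:.0%}")  # 25%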


> Next you clone the firefox-source hg repo, which will use about 15% of the space and 80% of the inodes.

Looking at my mozilla checkout, the source and repo average 6KB per file, which would eat lots of inodes.

But once I compile it, it's more like 20KB per file, which is just fine on default settings. So I'm not sure if the inodes are actually the limiting factor in this scenario?

And now that they're moving to git, the file count will be about 70% smaller for the same amount of data.
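
The crossover is the 16KB-per-inode default: a tree averaging below it hits the inode limit first, above it the space limit. A tiny sketch with the averages above:

    # Below 16 KiB/file the inode table fills before the blocks do;
    # above 16 KiB/file, the blocks fill first.
    bytes_per_inode = 16 * 1024
    for label, avg_file in (("source at ~6 KiB/file", 6 * 1024),
                            ("built tree at ~20 KiB/file", 20 * 1024)):
        limiting = "inode table" if avg_file < bytes_per_inode else "disk space"
        print(f"{label}: the {limiting} fills up first")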


Based on the mke2fs git history, the default has been a 256-byte inode per 16KB of drive size since 2008, and a 128-byte inode per 8KB before that.
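
Interestingly, both defaults spend the same fraction of the drive on the inode table:

    # Old default: 128-byte inodes, one per 8 KiB.  Since 2008: 256-byte per 16 KiB.
    for inode_size, bytes_per_inode in ((128, 8 * 1024), (256, 16 * 1024)):
        print(f"{inode_size}-byte inode per {bytes_per_inode // 1024} KiB: "
              f"{inode_size / bytes_per_inode:.2%}")
    # Both come out to 1.56%; the 2008 change halved the inode count
    # without changing the share of space spent on the table.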



