
That person is being extremely silly when they call that "small".

Ext2/3/4 reserves a huge number of inodes by default: one per 16KB of drive space. You don't hit that with normal use. Almost everyone should be reducing their inode count so it doesn't take up 1.6% of the drive.
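A quick sanity check of that 1.6% figure, assuming the usual mke2fs defaults (256-byte inodes, one per 16384 bytes of space; exact defaults come from your distro's /etc/mke2fs.conf):

```shell
# Typical ext4 defaults: 256-byte inodes, one inode per 16384 bytes
# (the "bytes-per-inode" ratio).
inode_size=256
bytes_per_inode=16384

# Fraction of the drive consumed by the inode table, in basis points
# (1 bp = 0.01%): 256 / 16384 = 1.5625%, i.e. the ~1.6% above.
overhead_bp=$((inode_size * 10000 / bytes_per_inode))
echo "${overhead_bp} bp"   # 156 bp = 1.56%

# Reducing the inode count happens at format time with a larger ratio,
# e.g.: mkfs.ext4 -i 65536 /dev/sdX   (hypothetical device)
```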



> That person is being extremely silly when they call that "small".

Explain.

> Ext2/3/4 reserves a huge number of inodes by default: one per 16KB of drive space. You don't hit that with normal use. Almost everyone should be reducing their inode count so it doesn't take up 1.6% of the drive.

Well almost, but not the OP who runs out of inode space with the default format.


> Explain.

Hundreds of millions of inodes is not a small number. I'm not sure how I can explain that much better. There are multiple orders of magnitude between "240 million inodes" and "a small number of inodes".

And on a 14TB hard drive, the default would be more like 850 million.
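The 850 million figure follows directly from the default ratio (a sketch, using decimal terabytes as drives are sold):

```shell
# Default ext4 ratio: one inode per 16384 bytes.
bytes_per_inode=16384
drive_bytes=$((14 * 1000 ** 4))            # 14 TB, decimal
inodes=$((drive_bytes / bytes_per_inode))
echo "$inodes"                             # ~854 million inodes by default
```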

> Well almost, but not the OP who runs out of inode space with the default format.

I said "almost" for a reason. It's a bad idea for quite small drives or some rare use cases.


> Hundreds of millions of inodes is not a small number. I'm not sure how I can explain that much better. There are multiple orders of magnitude between "240 million inodes" and "a small number of inodes".

The poster with the inode problem in the subthread I replied to said "many small files" and "millions of small files", not "a small number of inodes".

> I said "almost" for a reason. It's a bad idea for quite small drives or some rare use cases.

Though files in source trees are often smaller than 16kB/inode, so the advice to decrease the inode allocation just to save a fraction of 1.6% of space may not be too good. I would leave it at the default unless there is a good reason to change it. And that's of course the trouble with fixed inode allocation.


> The poster with the inode problem in the subthread I replied to said "many small files" and "millions of small files", not "a small number of inodes".

I quoted the paper. I said nothing about OP.

> Though files in source trees are often smaller than 16kB/inode, so the advice to decrease the inode allocation just to save a fraction of 1.6% of space may not be too good.

For drives that aren't going to have tons of uncompiled source trees, more than a percent is a lot. That could be a hundred gigabytes wasted. Almost any drive that isn't a small OS partition can do just fine at 0.4 or 0.2 percent.
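The 0.4% and 0.2% figures correspond to bytes-per-inode ratios of 64KB and 128KB, assuming the default 256-byte inode size (the `-i` values below are illustrative mkfs.ext4 ratios, not a recommendation for any particular drive):

```shell
# Inode-table overhead in basis points (1 bp = 0.01%) at larger ratios,
# assuming the default 256-byte inode size.
inode_size=256
bp_64k=$((inode_size * 10000 / 65536))     # mkfs.ext4 -i 65536
bp_128k=$((inode_size * 10000 / 131072))   # mkfs.ext4 -i 131072
echo "-i 65536: ${bp_64k} bp, -i 131072: ${bp_128k} bp"   # ~0.4% and ~0.2%
```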


> I quoted the paper. I said nothing about OP.

That passage is in the context of a billion inodes though, where it is a small number by comparison. It's obviously not calling it a small number by some absolute or objective standard.

> For drives that aren't going to have tons of uncompiled source trees, more than a percent is a lot. That could be a hundred gigabytes wasted. Almost any drive that isn't a small OS partition can do just fine at 0.4 or 0.2 percent.

If you have 10TB then saving 80GB isn't really a lot at all. It's about 0.8%. You really wouldn't recommend an average user change that at all. If they were desperate for a few more GB you might drop the reserved % down a couple of points first, which already frees more than the entire inode allocation.
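For scale: ext4's reserved-blocks fraction defaults to 5%, so on a 10TB drive the reservation dwarfs the inode table (a rough sketch; the 80GB inode-table figure is taken from the numbers in this thread):

```shell
# Default reserved-blocks fraction is 5% (set with mkfs.ext4 -m or tune2fs -m).
drive_gb=10000                        # 10 TB
reserved_gb=$((drive_gb * 5 / 100))   # reserved for root by default
inode_gb=80                           # approximate inode-table size from the thread
echo "reserved=${reserved_gb}GB inode_table=${inode_gb}GB"
# Dropping the reservation a couple of points, e.g. tune2fs -m 3 /dev/sdX
# (hypothetical device), frees 2% = 200GB, more than the whole inode table.
```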


That paper is not saying it's small "in comparison", it's saying it's small for a filesystem. It's silly.

> You really wouldn't recommend an average user change that at all.

Yes I would. If they were stuck with ext4.

In general I'd recommend something with checksums.

> reserved %

Oh god, definitely change that; it should cap at 10GB or something.
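There's no built-in cap, but `tune2fs -r` takes an absolute reserved-block count rather than a percentage, so a ~10GB ceiling can be expressed directly (a sketch assuming 4KiB filesystem blocks; the device name is hypothetical):

```shell
# Convert a 10 GiB reservation into a block count for tune2fs -r,
# assuming the common 4 KiB ext4 block size.
block_size=4096
cap_bytes=$((10 * 1024 ** 3))               # 10 GiB
reserved_blocks=$((cap_bytes / block_size))
echo "$reserved_blocks"                     # 2621440 blocks
# e.g.: tune2fs -r 2621440 /dev/sdX   (hypothetical device)
```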


> That paper is not saying it's small "in comparison", it's saying it's small for a filesystem. It's silly.

It's not, it's clearly saying the inode count is too small for their 1-billion-file test.

> Yes I would. If they were stuck with ext4.

I meant good advice. There are enough stories of people running out of inodes, even in the tiny sample in this thread, that it's not good advice to cut the inode count 4x.


I ran a very small personal webserver with limited storage on Ubuntu on EC2 for a while.

The EC2 instance, likely the smallest configuration available at the time, hit an inode limit just running updates over time.


Super small drives are a special case for sure. You don't want to go below a certain number of inodes even on a 5GB root, but you also don't want to scale that up 50x on a 250GB root.


Why are super small drives a special case? It's still the same data to inode ratio.


There are a lot of small files in a typical OS install, filling the first handful of gigabytes.

When you add on another terabyte, the distribution is totally different. The files are much bigger.


Right, so it's entirely about usage rather than filesystem size.


Different sizes have different uses.



