anyfoo's comments | Hacker News

macOS also uses compression in the virtual memory layer.

(It's fun to note that I deliberately type out "virtual memory" in full in this thread, because I don't want people to think I'm talking about virtual machines.)


I'm getting tired of typing this, but swap space is not just to increase available virtual memory. If you upgrade from 8 GB to 24 GB, then with proper swap space usage, you have 16 GB that could be used for additional disk cache.

Sure, you're still better off with 24 GB overall compared to 8 GB + swap, whether you add swap to your 24 GB or not, but swap can still make things better.

(That says nothing about whether the 2x rule is still useful though, I have no idea.)
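
(For the curious: on Linux you can see how much RAM is currently acting as disk cache straight from /proc/meminfo. This is just an illustrative read-only check, nothing swap-specific:)

```shell
# Show how much RAM the kernel is currently using as page cache --
# the pool that paging out idle anonymous pages can grow.
# Linux-specific; reads the standard /proc/meminfo interface.
grep -E '^(MemTotal|MemAvailable|Cached|SwapTotal|SwapFree):' /proc/meminfo
```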


There's a chance that those servers might run more efficiently with some swap space, for the reasons mentioned many times in this thread. Swap space is not just for overcommitting.

These theories are repeated often, but I have never seen any empirical data to back them up, assuming one sets the options I mentioned. These anecdotes usually come from servers with default settings, no attempt to tune them for the intended workloads, and no capacity planning for application resources. Even OS maintainers are starting to recognize this and have created daemons such as tuned for the people that never touch settings. The next evolution will be dynamic adjustments from continuous bpf traces. I just keep it simple and avoid the circular arguments altogether.

Oh sure, it might or might not make a significant difference at all. Chances are, if you do a lot of I/O on a large (or very large) amount of data, and you also have a lot of rarely used but resident anonymous memory, then swap space should help, as that anonymous memory can get paged out in favor of disk cache, but I have no idea how common that is.

Yeah I mean, I know what you mean, but this is where it gets into circular reasoning. I will always have operations groups move the workload to a node that has more memory if that is what is needed. In my case, having swap on disk would require it to be encrypted, due to contracts requiring any customer data touching a disk to be encrypted, but I avoid that altogether and just add more memory. If 2 TB of RAM isn't enough, then they get 3 TB, and so on. We pushed vendors and OEMs to grow their motherboard capacity. At some point application groups just get more servers.

Yeah, that seems like a reasonable approach for your case!

As has been mentioned a few times in other comments here, I don't believe that's correct. Swap space is not just for "using more memory than you have RAM".

I'm not an expert, but aren't you just reducing the choice of what pages can be offloaded from RAM? Without swap space, only file-backed pages can be written out to reclaim RAM for other uses (including caching). With swap space, rarely used anonymous memory can be written out as well.

Swap space is not just for overcommitting memory (in fact, I suspect nowadays it rarely ever is), but also for improving performance by maximizing efficient usage of RAM.

With 48GB, you're probably fine, but run a few VMs or large programs, and you're backing your kernel into a corner in terms of making RAM available for efficient caching.


The point is to have so much RAM that you don't need to offload anything.

I don't think that's correct. Having swap still allows you to page out rarely-used pages from RAM, letting that RAM be used for things that positively impact performance, like caching actually-used filesystem objects. Pages that are backed by disk (e.g. files) don't need that, but anonymous memory that has, say, only been touched once and never even read afterwards should have a place to go as well. Also, without swap space the kernel can only evict file-backed pages, instead of including anonymous memory in that choice.

For that reason, I always set up swap space.
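
A minimal sketch of how that looks on Linux (the path and size are just illustrative; the commands need root, and some filesystems need dd instead of fallocate):

```shell
# Create and enable a 4 GiB swap file (size and path are examples).
fallocate -l 4G /swapfile    # if fallocate is unsupported: dd if=/dev/zero of=/swapfile bs=1M count=4096
chmod 600 /swapfile          # swap must not be world-readable
mkswap /swapfile             # write the swap signature
swapon /swapfile             # enable it immediately
# Persist across reboots:
echo '/swapfile none swap sw 0 0' >> /etc/fstab
```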

Nowadays, some systems also have compression in the virtual memory layer, i.e. rarely used pages get compressed in RAM to use up less space there, without necessarily being paged out (= written to swap). Note that I don't know much about modern virtual memory and how exactly compression interacts with paging out.
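
For what it's worth, on Linux that compression front-end is zswap (there's also zram as a compressed RAM-backed swap device); whether it's enabled is visible in sysfs. A quick read-only check, assuming the standard paths on recent kernels (availability depends on kernel config):

```shell
# Is the zswap compressed pool enabled, and with which compressor?
cat /sys/module/zswap/parameters/enabled 2>/dev/null || echo "zswap module not present"
cat /sys/module/zswap/parameters/compressor 2>/dev/null
# zswap activity counters, if the kernel exposes them:
grep -E '^zswp(in|out)' /proc/vmstat || true
```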


Every time I've run out of physical memory on Linux, I've had to reboot the machine, being unable to issue any kind of command via input devices. I don't know what it is, but Linux just doesn't seem to be able to deal with that situation cleanly.

The mentioned situation is not running out of memory, but being able to use memory more efficiently.

Running out of memory is a hard problem, because in some ways we still assume that computers are Turing machines with an infinite tape. (And in some ways, theoretically, we have to.) But it's not at all clear which memory to free up (by killing processes).

If you are lucky, there's one giant process with tens of GB of resident memory usage to kill to put your system back into a usable state, but that's not the only case.


Windows doesn't do that, though. If a process starts thrashing the performance goes to shit, but you can still operate the machine to kill it manually. Linux though? Utterly impossible. Usually even the desktop environment dies and I'm left with a blinking cursor.

What good is it to get marginally better performance under low memory pressure at the cost of having to reboot the machine under extremely high memory pressure?


In my experience the situations where you run into thrashing are rather rare nowadays. I personally wouldn't give up a good optimization for the rare worst case. (There's probably some knobs to turn as well, but I haven't had the need to figure that out.)
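
One of those knobs on Linux is vm.swappiness, which biases reclaim between anonymous pages (swap) and file-backed pages (cache); some people also run a userspace OOM killer like earlyoom or systemd-oomd to catch thrashing earlier. Purely illustrative values:

```shell
# Read the current reclaim bias (0..200 on recent kernels; default 60).
cat /proc/sys/vm/swappiness
# Change it for this boot (needs root); 10 is an illustrative value.
sysctl -w vm.swappiness=10
# Make it persistent:
echo 'vm.swappiness = 10' >> /etc/sysctl.d/99-swap.conf
```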

Try doing cargo build on a large Rust codebase with a matching number of CPU cores and GBs of RAM.

I believe that it's not very hard to intentionally get into that situation, but... if you notice it doesn't work, won't you just stop? (It's not that this would work without swap, after all; it would just OOM-kill without the thrashing pain.)

I don't intentionally configure crash-prone VMs. I have multiple concerns to juggle and can't always predict with certainty the best memory configuration. My point is that Linux should be able to deal with this situation without shitting the bed. It sucks to have some unsaved work in one window while another has decided that now would be a good time to turn the computer unusable. Like I said before, trading instability for marginal performance gains is foolish.

No argument there. I also always had the impression that Linux fails less gracefully than other systems.

That only helps if you don't have much free RAM. If you've got more free RAM than you need for cache (including disk cache), swap only slows things down. With RAM prices these days, buying enough RAM just to avoid swap isn't worth it. IME on a desktop with 128 GiB of RAM & zswap, I've never hit the backing store, but have gone over 64 GiB a few times. I wouldn't want to have to pay to rebuild my desktop these days; 128 GiB of ECC RAM was pricey enough in 2023!

Yes, it was apparently very visible: https://martypc.blogspot.com/2024/09/pc-floppy-copy-protecti...

But as I mentioned in a sibling comment, I’m not sure it was ever confirmed that it was really a laser that made that mark.


Was it ever confirmed that it was in fact a laser? I wanted to make a trivia question out of this ProLok protection, because “lasers for copy protection” sounds just weird enough to potentially be a nonsense answer without context, but I couldn’t confirm that the holes were indeed made with lasers and not by other means.


Good question. I don't know the answer, but I'm quite certain that it didn't really matter what mechanism was used to mark a diskette. Any damage would be equally strong as a way to detect copying.


Yeah, it matters only in “interestingness” or “coolness”.


Their patent (https://patents.google.com/patent/US4785361A/en) doesn’t mention a laser, but of course that doesn’t imply it wasn’t a laser.

I would guess that damaging multiple floppy disks (more or less) identically would be easier with a laser than with something mechanical like a knife or a drill, since it is fairly easy to control the power and duration of a burn, so it might well have been a laser.

On the other hand, disk tracks weren’t exactly tiny at that time in history.


It could be a tiny drop of something corrosive, but with that I’m also still wondering if a laser isn’t simpler, yeah.

I have almost no doubt that it could have been a laser; it’s just unfortunate (and maybe a little bit suspicious) that I haven’t found it confirmed anywhere. Almost like they wanted it to be a laser (hence the folklore around it), but had to use a less cool method. But of course it might as well just have been a laser, and they declined to market or even document it that way, for whatever reason.


Maybe give it another try? I have been playing lots of games for the past few years, some vigorously. Not a single one of them has a single microtransaction, because that's an immediate turnoff for me.


Into the Breach only came out 8 years ago, but I'm still playing it vigorously.

I'm sorry to say, your nostalgia-colored glasses are so strong that you're actually blinded by them. I grew up in the same gaming era as you (started around the early to mid 90s, but the peak was later), and I too have fond memories. But there has undeniably been some magnificent progress in pretty much all aspects of gaming.

Somewhere between 2005 and 2010, I thought I had outgrown gaming, and that no game would have anything to offer me anymore. But years later I learned that that was just because I was stuck thinking that JRPGs were the pinnacle of gaming; it turned out I had merely grown out of those. Obviously your story will be different, but I bet there is a similar story for you somewhere.


This! Both FTL and Into the Breach are evergreen games imho.

