On the 38th floor in Hell's Kitchen, I felt my chair and desk shake. It lasted maybe 10 seconds.
One of my neighbours often closes their door with force, which makes the wall vibrate. Then I noticed that things not attached to the walls were also shaking and realized it was an earthquake. I also noticed lots of birds flying near the Hudson River. I never thought I would feel an earthquake here.
I also searched Google to see if there had been an earthquake, and as of 10:23am nothing was showing up. I remember that a year ago Google used to ask "Have you felt your building start shaking", but there was nothing this time.
And he references the exact blog post here. I hear it as 6/8 (definitely a rest after the first three 16th notes, which is borne out in the timing), and I still think that's closer, but I appreciate the 21/32 argument and am glad to see him actually try to find a common subdivision for those timings.
mmap() will keep things in memory after first loading, but the page cache will _also_ keep things in memory after first loading. The difference is that to re-use the page cache copy you still need to read() the file into a buffer you allocate yourself (requiring 2x memory), instead of just doing a memory access. This has two consequences:
* 2x memory. A 20G data set requires 40G (20 for page cache and 20 for LLaMA)
* Things would be _even slower_ if they weren't in the page cache after first loading. mmap is fast because it does not require a copy and reduces the working set size (see the sketch below)
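Roughly, the difference looks like this. A minimal sketch, not llama.cpp's actual code; "model.bin" is a made-up filename and error handling is mostly elided:

```c
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void) {
    int fd = open("model.bin", O_RDONLY);  /* hypothetical data file */
    struct stat st;
    fstat(fd, &st);

    /* read() path: the kernel keeps one copy in the page cache AND we
       hold a second, private copy on our heap -> ~2x memory. */
    char *buf = malloc(st.st_size);
    read(fd, buf, st.st_size);  /* single read() for brevity */

    /* mmap() path: our "copy" IS the page cache. Touching the pointer
       faults pages in directly; no duplicate allocation, no copy. */
    char *map = mmap(NULL, st.st_size, PROT_READ, MAP_SHARED, fd, 0);
    printf("first byte via mmap: %d\n", map[0]);

    munmap(map, st.st_size);
    free(buf);
    close(fd);
    return 0;
}
```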
This is a misconception you and the parent are perpetuating. fork() existed in this problematic 2x-memory implementation _way_ before overcommit, and overcommit was non-existent or disabled on Unix (which has fork()) before Linux made it the default. Today, with CoW, we don't even have this "reserve memory for the forked process" problem, so overcommit does nothing for us with regard to fork()/exec() (to say nothing of the vfork()/clone() point others have brought up). But if you want, you can still disable overcommit on Linux and observe that your apps can still create new processes.
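For reference, the knob is /proc/sys/vm/overcommit_memory (0 = heuristic, the default; 1 = always overcommit; 2 = strict accounting, i.e. overcommit disabled). A tiny sketch that just prints the current mode:

```c
#include <stdio.h>

int main(void) {
    /* 0 = heuristic (default), 1 = always allow, 2 = strict accounting */
    FILE *f = fopen("/proc/sys/vm/overcommit_memory", "r");
    if (!f) {
        perror("fopen");
        return 1;
    }
    int mode = -1;
    fscanf(f, "%d", &mode);
    fclose(f);
    printf("vm.overcommit_memory = %d\n", mode);
    return 0;
}
```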
What overcommit enables is more efficient use of memory for applications that request more memory than they use (which is most of them) and more efficient use of page cache. It also pretty much guarantees an app gets memory when it asks for it, at the cost of getting oom-killed later if the system as a whole runs out.
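To see that in action, a sketch (the 64 chunks of 1 GB are arbitrary figures, more than many machines have physically):

```c
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    /* Reserve 64 x 1 GB of virtual memory. Under the default overcommit
       policy each malloc succeeds even on a machine with far less
       physical RAM, because nothing is backed by physical pages until
       it is touched. */
    enum { CHUNKS = 64 };
    char *p[CHUNKS];
    for (int i = 0; i < CHUNKS; i++) {
        p[i] = malloc(1UL << 30);
        if (!p[i]) {
            printf("refused at chunk %d (overcommit disabled?)\n", i);
            return 1;
        }
    }
    /* Touch one byte per chunk: only those 64 pages become resident.
       Touch all 64 GB instead and the OOM killer shows up. */
    for (int i = 0; i < CHUNKS; i++)
        p[i][0] = 1;
    printf("reserved 64 GB virtual, resident only ~64 pages\n");
    for (int i = 0; i < CHUNKS; i++)
        free(p[i]);
    return 0;
}
```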
The 2x memory blowup does not happen with fork()/exec() as described above. For it to happen, we would need to fork() and then keep using the old variables and data buffers in the child that we used in the parent, which is a valid but rarely used pattern.
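i.e. the common pattern is "fork, then exec immediately," something like this sketch (1 GB working set is arbitrary, /bin/true is a stand-in child):

```c
#include <stdlib.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    /* Pretend this is a process with a big working set. */
    size_t big_size = 1UL << 30;  /* 1 GB, arbitrary */
    char *big = malloc(big_size);
    if (!big)
        return 1;
    memset(big, 1, big_size);

    pid_t pid = fork();  /* CoW: no 1 GB copy happens here */
    if (pid == 0) {
        /* Child: exec immediately and never touch `big`, so the shared
           pages are released rather than duplicated. */
        execl("/bin/true", "true", (char *)NULL);
        _exit(127);  /* only reached if exec fails */
    }
    waitpid(pid, NULL, 0);
    free(big);
    return 0;
}
```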
This is the only right answer. What actually happens is you instantly have two 10G processes which share the same physical pages, and:
1. fork() copies the parent's page tables and bumps the reference counts on the shared pages; no data is actually copied
2. Both sets of mappings are marked copy-on-write, so physical usage stays at ~10G
3. A microsecond later, the child calls exec(), decrementing the reference count on the memory shared with the parent[1] and faulting in a 36K binary, bringing our new total memory usage to 10,485,796K (10,485,760K + 36K)
CoW has existed since at least 1986, when CMU developed the Mach kernel.
What GP is really talking about is overcommit, a feature (on by default) in Linux that allows you to ask for more memory than you have. This was famously a departure from the other Unixes of the time[2], a departure that fueled confusion and countless flame wars on the early Internet.
Edit: Many of these proposals have since been followed or exceeded, largely as a result of the September 11th attacks. Most notably, Broad Street is now completely a pedestrian plaza, closed off to traffic.
> I've worked at RHT and based on the turnaround, it's the guy that writes the code for the card reader. I'm still impressed but not surprised by the level of service he got. Red Hat has a great hacker culture.