sysctl vm.overcommit_memory=2. However, programs for *nix-based systems usually expect overcommit to be on, for example, to support fork(). This is in stark contrast with the Windows NT model, where an allocation fails if it doesn't fit in the remaining memory + swap.
People disable memory overcommit expecting to fix OOMs, and then get surprised when their programs start failing mallocs while there is still a ton of discardable page cache in the system.
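For reference, a config sketch of the modes in play here (the semantics are spelled out in the kernel's overcommit-accounting documentation; the ratio value below is just an example, the default is 50):

```shell
# vm.overcommit_memory values:
#   0 - heuristic overcommit (the default: only obvious overcommits are refused)
#   1 - always overcommit, never refuse an allocation
#   2 - don't overcommit: commits are capped by swap + a ratio of RAM
sysctl vm.overcommit_memory=2
# With mode 2, the cap is swap + vm.overcommit_ratio percent of RAM:
sysctl vm.overcommit_ratio=80
```

Note that with mode 2 the cap is on committed address space, not on pages actually in use, which is exactly why mallocs can fail while plenty of reclaimable page cache remains.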
I have a way simpler explanation. IEEE 754 double can only represent integers up to 2^53 without precision loss, so if you naively average two numbers greater than 2^52, you get an erroneous result.
It just so happens that 2^52 nanoseconds is a little bit over 52 days.
I've seen the same thing with AMD CPUs where they hang after ~1042 days which is 2^53 10-nanosecond intervals.
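A quick sketch of the arithmetic above, assuming IEEE 754 binary64 (which is what Python's float is):

```python
# Integers are exact in binary64 only up to 2**53; above that, adjacent
# doubles are at least 2 apart, so odd integers get rounded away.
assert float(2**53 - 1) == 2**53 - 1
assert float(2**53 + 1) == 2**53        # +1 is not representable

# So once a nanosecond counter passes 2**53, adding 1 ns does nothing:
t = float(2**53)
assert t + 1 == t

# Naively averaging two timestamps just above 2**52 pushes the sum past
# 2**53, where the rounding kicks in:
a, b = 2**52 + 1, 2**52 + 2
assert (float(a) + float(b)) / 2 == 2**52 + 2   # true average is 2**52 + 1.5

# 2**52 ns is a little over 52 days; 2**53 ten-ns ticks is about 1042 days:
assert 52 < 2**52 / 1e9 / 86400 < 53
assert 1042 < 2**53 * 10 / 1e9 / 86400 < 1043
```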
The comment said IEEE 754 doubles can represent integers up to 2^52, but I missed the "double" part, or assumed float. Floats can't do that, and it would be disastrous to assume they could. For that matter, doubles also lose precision once you start doing operations on them, but as long as you stick to pure integer operations, it "should" be fine. A practical example with non-integers: 35 + -34.99
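That non-integer example, spelled out (again assuming binary64): neither 34.99 nor 0.01 has a finite binary expansion, so the result carries the representation error of 34.99 at magnitude ~35, which is far larger than an ulp of 0.01.

```python
# 35 and 34.99 are close enough that the subtraction itself is exact
# (Sterbenz lemma), so the only error is from representing 34.99:
r = 35 + -34.99
assert r != 0.01               # not the literal 0.01
assert abs(r - 0.01) < 1e-12   # but close -- off by a few femto-units
```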
Having done exactly this math for GStreamer bindings in JavaScript (where the built-in numeric types are double or nothing), this would also be my prime suspect.
Yes, you can chuck your SSD into a freezer. Data retention time increases exponentially at lower temperatures, so keeping it in a regular +4°C fridge is enough to extend retention by decades.
Just remember to heat up the disk before writing and after storage.
If my three-year-old 350€ fridge has a no-frost option that has never failed or hiccuped in all that time, I assume an industrial one bought to store drives in that hypothetical situation would do so too.
It really doesn't help that it's written in ancient K&R C, but if you spend ten or so minutes just staring at it, familiar shapes and patterns start to appear. (Give it a try!)
Incidentally, it's in line with how APL code looks like an alien artifact at first, but you get used to it quickly if you have the spatial reasoning to wrap your head around reshaping and transposing.
In datacenters, you're mostly limited by the power (and thus cooling). Most commercial DCs only let you use up to about 10kW per rack. For standard 40U racks it's just 250W/RU, give or take.
There are niche expensive datacenters with higher power density, but as it stands, exotic multi-kW hardware at scale makes sense if you either save a ton on per-node licensing, or you need extreme bandwidth and/or low latency.
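The per-RU budget above, sketched out (the 2 kW / 2U node is a made-up example, not a figure from the thread):

```python
# A 10 kW budget spread over a standard 40U rack:
rack_w, rack_u = 10_000, 40
per_ru = rack_w / rack_u
assert per_ru == 250   # W per rack unit, as stated above

# A hypothetical 2 kW node in a 2U chassis needs 1000 W/RU --
# four times the budget, so the rack runs mostly empty:
node_w = 2000
nodes_per_rack = rack_w // node_w
assert nodes_per_rack == 5   # only 5 nodes, occupying 10U of the 40U
```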
>Most commercial DCs only let you use up to about 10kW per rack.
I think that was the case in 2020;
>By 2020, that was up to 8–10 kW per rack. Note, though, that two-thirds of U.S. data centers surveyed said that they were already experiencing peak demands in the 16–20 kW per rack range. The latest numbers from 2022 show 10% of data centers reporting rack densities of 20–29 kW per rack, 7% at 30–39 kW per rack, 3% at 40–49 kW per rack and 5% at 50 kW or greater.
We don't have 2023 numbers yet, and we're coming up on 2024. But it's clear that demand for high power density is growing. (And hopefully at a much faster pace.)
I have been told by tech writers that Google discovered that at some point electricians will refuse to route more power into a building. So even if you created a separate thermal plant, you still have issues.
Arc furnace: peak of about 250 MW to melt the steel [1]
Datacenter: seems to cap out at around 850 MW [2]
Same ballpark I guess? Probably both are limited by inexpensive power availability + other connectivity factors (road/rail, fiber).
[1]: “Therefore, a 300-tonne, 300 MVA EAF will require approximately 132 MWh of energy to melt the steel, and a "power-on time" (the time that steel is being melted with an arc) of approximately 37 minutes.” via https://en.m.wikipedia.org/wiki/Electric_arc_furnace
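The furnace figures in [1] are roughly self-consistent with the ~250 MW peak quoted above: 132 MWh delivered over a 37-minute power-on time works out to an average of about 214 MW.

```python
# Average power = energy / time (132 MWh over 37 minutes):
mwh, minutes = 132, 37
avg_mw = mwh / (minutes / 60)
assert 210 < avg_mw < 220   # ~214 MW average, consistent with a ~250 MW peak
```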
There’s always some limiting factor, and there’s always some (possibly crazy expensive) way to resolve it and get a bit more power until you run into the next limiting factor.
Any time my coworkers start acting like we are amazing for how many requests we handle per second I send them to wikimedia.org. That'll smack the smug right outta most people.
That seems pretty reasonable. That's for the app-server cluster to generate uncached results and propagate them. In general, the app server does this when the change occurs, before a user actually asks for the page/information.
Users should only really see this when performing "mutable" operations (submitting edits, adding new pages or content) or when searching uncommon queries.
I doubt it's anywhere close to the critical path for the average guest, casual user, or even contributor. I'd suspect the only type of user who would find themselves hitting those appserver requests frequently would be moderators and admins.
Also: 500 ms on wikimedia sites is very much still in the "okay" range, subjectively. They aren't really sites that you make requests to every minute, if loading the next article took 500ms every time then so be it.