> Once you understand your computer has 16 cores running at 3GHz and yet doesn't boot up in .2 nanoseconds you understand everything they have taken from you.
With their infinite VC money at their disposal, and with their programmers having 100 GHz machines with thousands of cores, 128 TB of RAM and FTL internet connections, tech companies don't really have any incentive to actually reduce bloat.
Edit: it's still quite sad. I feel like we had languages with a way better future, and more promising programming architectures, back in the 80s.
It's less about the lack of incentive to reduce bloat and more about the incentive to create bloat in order to justify one's position and pad the résumé for the next one.
And most machines don't even touch those limits. I remember a presentation about this [1], and the actual hardware limits aren't even approached, especially on servers.
From what I remember, the "hard" limits are CPU/DRAM initialization and the speed of reading from the flash chip storing the firmware. The sources of lag are things like firmware from add-on cards just being slow (if the RAID controller takes 30 seconds to return, and the firmware isn't running initialization in parallel, that's your extra boot time), or stuff out of left field like "the IPMI controller logs over serial, so if you print too much text it slows things down". Most BIOSes do things painfully serially too.
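To make the serial-vs-parallel point concrete, here's a minimal sketch (my own illustration, not from the presentation; the device names and delays are invented, with the RAID delay scaled down): with serial init, boot time is the sum of every controller's delay, while with parallel init it's only the slowest single device.

    // Hypothetical sketch: why serial firmware init of add-on cards
    // dominates boot time, and what parallelizing it buys you.
    package main

    import (
    	"fmt"
    	"sync"
    	"time"
    )

    // initDevice stands in for an option-ROM / firmware init routine.
    func initDevice(name string, d time.Duration) {
    	time.Sleep(d) // pretend the controller takes this long to come back
    	fmt.Printf("%-16s ready after %v\n", name, d)
    }

    func main() {
    	devices := map[string]time.Duration{
    		"RAID controller": 3 * time.Second, // scaled down from "30 seconds"
    		"NIC option ROM":  1 * time.Second,
    		"IPMI/BMC":        2 * time.Second,
    	}

    	// Serial init: total time is the sum of all delays.
    	start := time.Now()
    	for name, d := range devices {
    		initDevice(name, d)
    	}
    	fmt.Printf("serial boot:   %v\n\n", time.Since(start).Round(time.Millisecond))

    	// Parallel init: total time is roughly the slowest single device.
    	start = time.Now()
    	var wg sync.WaitGroup
    	for name, d := range devices {
    		wg.Add(1)
    		go func(name string, d time.Duration) {
    			defer wg.Done()
    			initDevice(name, d)
    		}(name, d)
    	}
    	wg.Wait()
    	fmt.Printf("parallel boot: %v\n", time.Since(start).Round(time.Millisecond))
    }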