If I had written that, it would have been about how it led to the development of the Cesarean section operation. In the US, and many other places, the prevalence is much higher than the medical need. When there is no medical need for a C-section, there really isn't any benefit that outweighs the risks to the mother and child. Also--and this will probably sound quite hand-wavy or too woo-woo for most of HN's demographic (I assume)--there is great emotional benefit from a baby passing through the vagina.
And once a mother has a c-section, most (all?) future births are via c-sections as well.
There is a cost to the mother - major surgery - for the benefit of the child. Modern society considers this a worthwhile trade, but not even a century ago it would often have meant the death of the mother.
I have no real way to tie this back to programming, other than perhaps to note that we're likely still in the dark ages of programming; the programming equivalent of the Cesarean section is likely still years away.
Churning out "apps" and delivering deliverables is the focus, not engineering. Ticking off boxes for enterprise is more important than product feel and user satisfaction. And so on.
And once you've gotten hooked on Docker and Node.js (with TypeScript and VS Code), why would you go back to hacking Perl scripts that invoke long-forgotten binaries written in the darkest C dialects, darker than void* itself, even if that damn Docker image is 600MB, the runtime memory cost is not much less, and it's all managed by Kubernetes, which needs gigabytes of RAM just to run an apiserver, a scheduler, and a kubelet (the node/server manager)?
> I have no real way to tie this back to programming ...
One facet of the tie-in is the age-old question of what to optimize for: Programmer time? Execution speed? Code compactness?
Code maintainability? Cf. The Story of Mel [0].
Keep in mind that a C-section is done only when normal labor isn't enough; it's an alternative to the use of forceps (which is very difficult to teach and can be catastrophic) and not to normal vaginal delivery.
Actually it's not. There are now elective C-sections, where the mother is scheduled to go into the hospital and have the surgery. And this isn't because of medical risk. It's pushed by doctors and hospitals as a more convenient alternative.
Besides elective C-sections, the prevalence has been increasing in another way: a labor shows the slightest risk and suddenly it requires a C-section.
I mean... the entire thing is disturbing. From the strangely credentialist approach to the entire birthing process to the image of a newborn injured by forceps.
But I think that the part that most rubbed me wrong was the realization that it was Apgar (a rather crude system for summarizing neonatal health, IMO) that, in part, caused the C-section to rise in popularity.
That's gross. It's like using `free` to measure RAM usage on your server, and then determining that every app with usage over n needs tighter JPG compression, reducing image quality. In other words: you measured the wrong thing, implemented the wrong fix, reduced the quality of the outcome, and then made the above standard practice.
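(For the record, the actual footgun with `free` on Linux is that page cache gets counted against free memory, so a naive reading of it is exactly that kind of wrong measurement. A minimal, Linux-only sketch of the misleading number next to the one that matters, read straight from /proc/meminfo:)

    # Linux-only sketch: the numbers "free" is built on, read from /proc/meminfo.
    # MemFree looks scary on any busy box because page cache counts against it;
    # MemAvailable is the figure that actually says how much new work will fit.
    def meminfo_mib():
        info = {}
        with open("/proc/meminfo") as f:
            for line in f:
                key, rest = line.split(":", 1)
                info[key] = int(rest.strip().split()[0]) // 1024  # kB -> MiB
        return info

    m = meminfo_mib()
    print("MemTotal:     %6d MiB" % m["MemTotal"])
    print("MemFree:      %6d MiB   <- the misleading number" % m["MemFree"])
    print("MemAvailable: %6d MiB   <- the one you actually care about" % m["MemAvailable"])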
Why is doing one thing a qualification for a good piece of software? I could see that some users may prefer this model of tools, but I use plenty of good software that provides a more complete solution than just doing one thing.
And then skip ahead and read chapter 10: git internals. This was really helpful for me to understand the fundamental concepts of git and why it is designed the way it is.
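If you want a taste before opening the book: a git object ID is just a SHA-1 over a tiny header plus the content, which is the core of the content-addressable model that chapter walks through. A rough sketch in plain Python (no git needed) that reproduces what `git hash-object` prints for the same bytes:

    import hashlib

    # git stores a file as a "blob" object whose ID is sha1(b"blob <size>\0" + content);
    # trees and commits are just more objects addressed the same way.
    def git_blob_sha1(content: bytes) -> str:
        header = b"blob %d\x00" % len(content)
        return hashlib.sha1(header + content).hexdigest()

    print(git_blob_sha1(b"hello world\n"))
    # should print the same ID as: echo 'hello world' | git hash-object --stdin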
This has not been the case historically, and I'm not sure why it would be now. Why would MS invest in languages like VB.NET and F# if so? The choice of languages that target the CLR has always been a selling point, even though C# is dominant.
$100 more for 2x RAM and storage, that doesn't seem like a "ridiculous premium". Unless you mean ridiculous in some other way than the actual dollar amount.
Considering that $100 easily buys you an 8GB SO-DIMM or an 80GB Intel M.2 SSD at retail prices, I think it is pretty steep for an increase of 2GB of RAM and 32GB [ed: whoops, 64GB, but still] of storage...
The point was probably that this is not scientific. Whatever model you come up with won't really have any predictive power. See the arguments against pseudo-science, e.g. Freudian and Marxist theories.
"Writing cached data back to persistent storage is bad", what is the "right" way to write data? I'm not terribly familiar with distributed systems, just curious. Is this referring to write-back vs. write-through?
I believe he's suggesting that caches should only be allowed to read from persistent storage, and that values read from a cache should not be assumed to be current. For example, if I read $FirstName and $LastName from some cache lookup for a user and then go to update $Address, I should not write $FirstName and $LastName along with it as a "whole user update" sort of thing; I should just update the value that actually changed.
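A toy sketch of that difference, with plain dicts standing in for the cache and the persistent store (names and structure are made up for illustration):

    # Toy sketch: plain dicts stand in for the persistent store and the cache,
    # and the cached copy has gone stale (the last name was corrected elsewhere).
    persistent = {"user:42": {"first": "Ada", "last": "Lovelace", "address": "old street"}}
    cache      = {"user:42": {"first": "Ada", "last": "Lovelaec", "address": "old street"}}

    def update_address_risky(user_id, new_address):
        # Writes the whole cached record back, silently persisting the stale last name.
        record = dict(cache["user:%d" % user_id])
        record["address"] = new_address
        persistent["user:%d" % user_id] = record

    def update_address_safe(user_id, new_address):
        # Touches only the field that changed; cached values never flow back to storage.
        persistent["user:%d" % user_id]["address"] = new_address

    update_address_safe(42, "new street")
    print(persistent["user:42"])  # the correct last name survives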
I think this is referring to write-back caching, where the write operation is confirmed when the cache accepts it, but a separate process is responsible for replicating the write operation from cache to the persistent storage.
Write-through is more widely accepted, since the operation is not confirmed until it's successful on both the cache and the persistent storage.
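Roughly, the two policies look like this (a bare-bones sketch with dicts standing in for the cache and the backing store, and a single background thread playing the write-back flusher; real systems obviously add batching, retries, and failure handling):

    import queue, threading

    store, cache = {}, {}
    dirty = queue.Queue()  # writes waiting to be flushed (write-back only)

    def write_through(key, value):
        # Not acknowledged until both the backing store and the cache have the value.
        store[key] = value
        cache[key] = value
        return True

    def write_back(key, value):
        # Acknowledged as soon as the cache has it; persistence happens later,
        # so the write is lost if the cache dies before the flusher runs.
        cache[key] = value
        dirty.put((key, value))
        return True

    def flusher():
        while True:
            key, value = dirty.get()
            store[key] = value      # replicate from cache to persistent storage
            dirty.task_done()

    threading.Thread(target=flusher, daemon=True).start()

    write_back("answer", 42)  # returns immediately
    dirty.join()              # only needed here so the demo can observe the flush
    print(store["answer"])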
I was wondering how this relates to Datomic... I'm not really familiar enough to say much about similarities and differences, but would be interested if someone who is could comment.