It is exactly this "lulled into complacency" that I rail against when most people cite that line. Far too many people are trying to shut down dialogue on improving code (not just performance), and they're not above an Appeal to Authority in order to deflect.
"Curiosity killed the cat, but satisfaction brought it back" is practically on the same level.
If you're careful to exclude creeping featurism and architectural astronautics from the definition of 'optimization', then very few people I've seen warned off of digging into that sort of work actually needed to be reined in. YAGNI covers a lot of those situations, and generally with fewer false positives. Still false positives, though. In large part because people disagree on when "the last responsible moment" is, partly because our estimates are always off by 2x, so by the time we agree to work on things we've waited about twice as long as we should have and now it's all half-assed. Irresponsible.
I'm with you, and I've been on a bit of a rampage about it lately. Honestly, there's just too much broken shit, though I'm not sure what the last straw was.
A big thing I rail against is what it means to be an engineer: that our job is about making the best product, not the most profitable product (most times I bring this up, someone will act like there's no difference between the two. That itself is concerning). The tension between us engineers and the business people is what creates balance, but I'm afraid we've turned into yes-men instead. Woz needs Jobs, but Jobs also needs Woz (probably more than the other way around). The "magic" happens at the intersection of different expertise.
There are just a lot of weird but subtle ways these things express themselves. Like how a question like "but what about x problem?" is interpreted as "no" instead of "yes, but". Or how people quote Knuth and use it as a thought-terminating cliché. In ML we see it with "scale is all you need."
In effect, by choosing to do things the easy way we are choosing to do things the hard way. This really confuses me, because for so long the internalized mindset in CS was to "be lazy." Not in the way that you put off doing the dishes, but in the way that you recognize that doing the dishes now is easier than doing them tomorrow, when you 1) have more dishes and 2) the dishes you left out are harder to clean because the food has hardened on the plate. What happened to that "efficient lazy" mindset, and how did we turn into "typical lazy"?[0]
One of the aphorisms I operate by is that when the order of magnitude of a problem changes, the appropriate solution to that problem may also need to change.
Here we are sitting at four to seven orders of magnitude separated from Knuth, depending on whether you mean number of devs, number of machines, or size of problems tackled.
Size of machines is pretty amazing. The PDP-10s and 360/67s Knuth was talking about in 01974 were about 1 MIPS (with 32-bit or 36-bit operations) and could scale as high as 16 mebibytes of RAM. Today you can put 6 tebibytes in a 384-core two-socket AMD server that can do in excess of 10 trillion 32-bit operations per second: 6 orders of magnitude more RAM, 7 orders of magnitude more arithmetic.
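As a quick sanity check on those ratios, using only the figures above (16 MiB vs. 6 TiB of RAM, roughly 1 MIPS vs. 10 trillion 32-bit ops/s), the arithmetic works out like this:

```javascript
// Orders-of-magnitude check for the RAM and arithmetic figures above.
const ramThen = 16 * 2 ** 20;   // 16 MiB, in bytes
const ramNow = 6 * 2 ** 40;     // 6 TiB, in bytes
const opsThen = 1e6;            // ~1 MIPS
const opsNow = 1e13;            // ~10 trillion 32-bit ops/s

console.log(Math.log10(ramNow / ramThen).toFixed(1)); // 5.6 -> ~6 orders of magnitude
console.log(Math.log10(opsNow / opsThen).toFixed(1)); // 7.0 -> 7 orders of magnitude
```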
But that's really understating the case, because those were large shared mainframes. Today's equivalent would be not a 2-socket rackmount server, or even a whole rack, but quite plausibly a whole data center, three more orders of magnitude. 9 orders of magnitude more RAM, 10 orders of magnitude more arithmetic.
Probably also worth mentioning that the absolute number of computers has also increased a lot; every USB-C power brick contains a multi-MIPS computer.
I agree that the number of devs has increased by only about four or five orders of magnitude. In 01974 I'd guess there might have been 50,000 programmers; my grandparents took programming classes around that time involving batch submission of punched cards, and that was a reasonably common thing at US universities. Today, Roblox has 380 million monthly active users; what percentage of them write their own games in it? And the popular programming environments Microsoft Excel and Google Sheets have over a billion users each.
> It is exactly this "lulled into complacency" that I rail against when most people cite that line. Far too many people are trying to shut down dialogue on improving code (not just performance), and they're not above an Appeal to Authority in order to deflect.
Your comment reads like a strawman argument. No one is arguing against "improving code". What are you talking about? It reads like you are misrepresenting any comment that goes against your ideas, however misguided they may be, by framing your ideas as obvious improvements that could only conceivably be criticized by someone who is against good code and in favor of bad code.
It's a rehash of the tired old software developer cliché: "I can do no wrong vs. everyone around me can do no right."
Ironically, you are the type of person Knuth's quote defends software from: those who fail to understand that using claims of "optimization" as a cheat code to push through unjustifiable changes does not improve software, and actually makes it worse.
> "Curiosity killed the cat, but satisfaction brought it back" is practically on the same level.
This is the same strawman. It's perfectly fine to be curious. No one wants to take the magic out of you. But your sense of wonder is not a cheat code that grants you the right to push nonsense into production code. Engineers propose changes based on sound reasoning. If the engineers on your team reject a change, it's unlikely you're surrounded by incompetent fools muffling your brilliant sense of wonder.
> If you're careful to exclude creeping featurism and architectural astronautics from the definition of 'optimization', (...)
Your comment has a strong theme of branding anything not aligned with your personal taste as extremely bad, and everything aligned with it as unquestionably good and only conceivably opposed through an almost conspiratorial persecution. The changes you like are "improving code", whereas the ones proposed by third parties you don't like receive blanket accusations of "creeping featurism and architectural astronautics".
Perhaps the problems you experience, and create, have nothing to do with optimization? Food for thought.
> No one is arguing against "improving code". What are you talking about?
This is actually a frequent occurrence. But honestly it usually doesn't come with that exact same phrasing. It usually comes with "let's spend our time on this other thing that isn't important but is new therefore important". It's true, sometimes you need to triage, but there's a clear bias that once something "works" (no matter how poorly) there is much less incentive to make it work not poorly.
Some food for thought: your comment relies entirely on the wisdom of the crowds. I agree that's usually a good bet. But there are conditions where it fails, even at scale, and it's actually fairly easy to upend: all you need to do is remove the independence of the actors. Which, as you can probably guess, is fairly common in our field. The more homogeneous we become, the less reliable the wisdom of the crowds is.
Plus, people have different experiences and different environments. What has been true in your experience need not be true in mine or anyone else's.
I'll give you an example of irrationality here. I was working at a big tech company, and as part of breaking down my project I was playing around with another part of the code. I more than doubled the prediction accuracy on customer data and made the code run faster and use less memory. Objectively better. It even happened "for free", because the tasks involved were part of my original job, though not the explicit goal. What happened? Last I knew, the PR was still open. My boss never pushed for it because they were more invested in their other work, which was yet to be completed but had more buzzwords. That work was only a bit more accurate than mine but 100x slower and required 10x more memory. I even told them what I did would work for theirs too, yet it never happened.
I've seen many instances like this. Sometimes people just don't want things to be better unless it's better in a very specific kind of way (and not necessarily via performance). Sometimes it's just politics.
On the other hand, I've seen very good teams with the exact opposite culture. A common thread is that those teams openly discuss and explain why a seemingly good idea is actually bad. Frequently they'll let you have a go at it too, because it's a no-lose situation. If you're right, we all win. If you're wrong, there's an important lesson to be learned: there's hidden complexity that makes the seemingly good idea bad, and that's often hard to explain. Either way you end up with a lot more knowledge about the code, which helps you in the long run.
> This is actually a frequent occurrence. But honestly it usually doesn't come with that exact same phrasing. It usually comes with "let's spend our time on this other thing that isn't important but is new therefore important". It's true, sometimes you need to triage, but there's a clear bias that once something "works" (no matter how poorly) there is much less incentive to make it work not poorly.
Exactly. “It technically functions and therefore doesn’t need attention” has become an industry norm. There’s a massive bias towards piling on more features over making older ones good.
> “It technically functions and therefore doesn’t need attention” has become an industry norm.
Most commonly seen with the aphorism "don't fix what isn't broken." I really hate that aphorism. There's some truth to it, but it's commonly used as a thought-terminating cliché. If you see a rusty pipe, you should sure as hell fix it. Sure, it isn't "broken" in the sense that it has burst and is leaking water everywhere, but it sure is a ticking time bomb. And boy, is it a hell of a lot cheaper to fix a rusty pipe that isn't leaking than to fix a burst pipe and also pay for all the water damage.
The aphorism misses that maintenance is much cheaper than repair, and that you can get major cost savings by doing things differently. Sure, it costs more in the moment, but are we trying to run a business paycheck to paycheck or are we trying to create something sustainable? I sure hope your business is thinking more than three months out. "More expensive now" is far too common an excuse; I've seen it used to justify not implementing things that would have had an ROI within six months! Refusing anything with under a year's ROI is absolutely batshit insane (unless you're a startup living on the edge or something, but I see this in big tech companies, and I hope we understand that's the context here).
I became the SME for batch processing on this project. The whole project was a read-heavy workflow that should have had all decisions made at write time, but it was so far into the weeds when I got there that I couldn't pull it out. And everyone I convinced of this fact decided to work somewhere else instead of staying to help fix it.
But there were parts we could sort out at build or deploy time, and I was able to fix a number of those.
One guy was tasked with a project I'd gotten turned into an epic: build fallback error pages at deploy time and push them into the CDN. I told him not to build it the way he was, but to copy the one I had been working on. I got busy and he left, and we discovered that the pages hadn't been updating when a customer changed their contact info, noticing months later that we still had the old info.
The build trigger had no downstream dependencies, and no alert was being sent on failure. The job was timing out for reasons similar to the ones I'd fixed. I tried to bandaid it, but it still errored out an hour into what looked like at least a 100-120 minute workload.
I discovered the main problem: our customers could have multiple domain names, and he was frobbing through three or four service calls trying to find the canonical one, plus making one call per customer to see if they were still customers (we had an endpoint that would paginate over all active customers, so ~400x more efficient, and rising due to turnover).
The main table I had been using had one column I didn’t use, which had a url in it. I confirmed with our SME that this was in fact the canonical domain name, I just had to add the field to the QL and do a `new URL(url).hostname`. At the end of the day I ended up extracting about half the code from my tool, deleting over half of his code and replacing it with calls into mine (do it like I did it, but literally).
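That one-liner really is all it takes in Node or the browser; a tiny illustration (the URL value here is made up):

```javascript
// Pull the canonical hostname out of a stored URL string.
// The URL is hypothetical; any absolute URL works the same way.
const url = "https://www.example-customer.com/store?ref=abc";
const hostname = new URL(url).hostname;
console.log(hostname); // "www.example-customer.com"
```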
The result: four and a half minutes, and no more intermittent failures, because my version was built on back pressure to throttle requests under high server load, with total outbound requests around 1/8 of the original.
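For anyone curious, that kind of back pressure can be sketched as a fixed pool of workers pulling from a shared queue, so slow responses automatically throttle the whole pipeline. This is a generic sketch, not the actual code; `mapWithConcurrencyLimit` and the worker function are hypothetical names:

```javascript
// One way to apply back pressure: cap the number of in-flight requests.
// A slow response holds a worker slot, which naturally throttles throughput.
async function mapWithConcurrencyLimit(items, limit, worker) {
  const results = new Array(items.length);
  let next = 0; // shared cursor; safe because JS only yields at await points
  async function run() {
    while (next < items.length) {
      const i = next++;                 // claim the next item
      results[i] = await worker(items[i]);
    }
  }
  // Start `limit` workers and wait for them to drain the queue together.
  await Promise.all(Array.from({ length: limit }, run));
  return results;
}
```

Usage would look like `await mapWithConcurrencyLimit(customerIds, 8, fetchStatus)`, keeping at most 8 requests outstanding no matter how large the batch is.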
This just kinda made me think: are there fewer war stories being shared on HN? It sure feels like it. What got me thinking is how sharing these kinds of war stories is an effective way to teach and learn from one another. Maybe it's just my bubble, but I feel like I hear them infrequently. Anyway, this was a good one. Thanks for sharing.