Perfectionism – one of the biggest productivity killers in the eng industry (eng-leadership.com)
155 points by RyeCombinator 9 months ago | 132 comments



Last week half of civilization came to a grinding halt because of the crappy engineering practices of a single company. I don't know if now is the right time to complain about perfectionism in IT.


The article is pretty muddled, in large part because they don't seem to really understand what perfectionism is. Their anecdotes aren't really about perfectionism at all, they're about times when they didn't pay attention to the context they were working in and made poor decisions as a result (usually because of inexperience).

In other words, it's not perfectionism that's the productivity killer they've identified, it's inexperience.

I'm a perfectionist in the clinical sense. Shipping something that I know is imperfect gives me a great deal of anxiety. But I also have enough experience to avoid wasting my time on things that I now understand don't matter.

I don't get anxiety about not having the perfect abstraction anymore, I get anxiety about things like whether we have enough monitoring in place, whether our release plan will cause problems in prod, or whether I considered all the edge cases in this critical piece of security-related code.

The perfectionism isn't gone, but it's not really hindering my progress in my career because it's focused on the things that actually matter. I'm working on addressing the anxiety aspects of my perfectionism, but that's a mental health issue, not a software development cycle issue.


I think you have hit the nail on the head: focusing on the wrong things.

The problem is that software engineering is still, in many ways, in a pre-scientific era [1]. For example, civil engineers know how to build bridges that are good enough, i.e. that barely stand.

In the software field, we don't really know how to proceed in the same way. We have formal methods, but they are quite costly, and therefore not used very often.

[1] https://www.di.uminho.pt/~jno/ps/pdbc.pdf


> For example, civil engineers know how to build bridges that are good enough, i.e. that barely stand.

And "good enough" is a relative and changing thing. What was "good enough" thousands of years ago is nowhere near standards today. This is even true a hundred or 50 years ago with the specific example of bridges. Engineers learn this the hard way because the results are visibly catastrophic and usually through the loss of many lives. But I think one of the difficulties of software is that failures are often less apparent and can be difficult to accurately attribute. But then again, when a bridge fails there are both internal and independent investigations that are extremely thorough and result in updated policies. This does not seem to be the same in software (at least to nearly the same degree), in part due to the abstracted nature of how software can cause harm. It is actually harder to attribute error (like you said, formal methods exist, but are costly and not just in the monetary sense).


> they're about times when they didn't pay attention to the context they were working in and made poor decisions as a result (usually because of inexperience).

I think there is an irony in this, because often someone managing a group will accuse them of perfectionism when they themselves lack the context to understand that the arguments are really about goodenoughism. I do think this is why companies that are fairly transparent with their employees have been successful: it helps people understand the balance and constraints under which they are operating. In a closed environment where groups are unable to communicate (e.g. in a highly classified setting[0]), the person managing the groups needs intimate knowledge of all the pieces coming together, because somewhere along the line someone needs to glue all the puzzle pieces together.

So I wouldn't call what you express perfectionism. Perfectionism doesn't really exist because perfection doesn't exist. Rather, it sounds like you are detail-minded. It also sounds like you've learned to understand which details can (or have to) be ignored, which details can be temporarily ignored but need to be revisited, and that keeping track of details is still important for understanding how one needs to improve and mitigate future catastrophes. Obviously there is a balance, but I think this is very different from someone who is trying to turn a map into the territory.

[0] I should mention in many classified settings people are still encouraged to communicate and bounce ideas off of one another. Just to do so exclusively in secure areas and to ensure the other person also has the requisite clearance.


To be clear, I literally experience the clinical definition of perfectionism, at a level that is arguably disordered [0]:

> the tendency to demand of others or of oneself an extremely high or even flawless level of performance, in excess of what is required by the situation. It is associated with depression, anxiety, eating disorders, and other mental health problems.

At my worst, every single mistake I made would give me hours to days of stress and anxiety. I've gotten it much better under control in general, but it still does cause a lot of stress from time to time.

This is why I object to people equating "perfectionism" with "being bad at prioritizing truly important work". I'm pretty darn good at prioritizing what's important. I actually perceive wasting time on unimportant work as a mistake, the opposite of perfection, and if I do it and realize my mistake I get stressed about it.

Perfectionism is holding yourself (and/or others) to excessively high standards, often met by anxiety or depression when those standards aren't met. The amount of perfectionism someone has is completely orthogonal to the question of how you measure perfection—a perfectionist with a definition of perfection that is well aligned with reality may accomplish a lot while a perfectionist with a definition that's less well aligned ends up wasting a lot of time.

And while you're right that perfect doesn't exist, that doesn't stop a perfectionist from pursuing it.

[0] https://dictionary.apa.org/perfectionism


> To be clear, I literally experience the clinical definition of perfectionism

To be clear, I was trying to make the point that this is ill-defined: what counts as a reasonable level of acceptance is difficult to define, and the common usage of the terminology can be ill suited because it depends on understanding what one is conditioning on. I'm not saying you don't place too-strong conditions; I don't know you. Despite running into you here often, I don't know that side of you, and it isn't something I can reasonably conclude either way. But I do know it is common for too-lax conditions to be placed. It's a spectrum, after all, and either end is bad.

> every single mistake I made would give me hours to days of stress and anxiety.

I can definitely empathize with this and have been there.

I think you and I are decently aligned in our beliefs. Maybe we're bad at explaining and/or bad at interpreting, but there are enough similarities that I think we should not discount them. I do think we were using the word perfectionism in different ways: I was using the version we'd see in normal language, and as it is used in the article, to agree with you that this actually isn't perfectionism (certainly not the clinical definition). You're right to also point out that the clinical definition of perfectionism is not about actual perfection. But I don't think this invalidates my point: the true difficulty is in determining what an appropriate level of standards is, this is a difficult balance to strike, and many conflate comments about details/nuance with excessive standards when they may or may not be; recognizing details is important in making that determination. And I do think there is a natural bias here, in that it is easier to identify excessive standards than to identify insufficient ones.


> In other words, it's not perfectionism that's the productivity killer they've identified, it's inexperience.

Experience without growth is just inexperience with more "years of experience".

This is pretty much what happens in the more technical parts of industry when you have your best workers jumping ship every year, poor documentation of proprietary tech, and a lack of proper training because you expect "experienced" developers to simply grasp your entire tech stack from day one. You have people falling into the same pitfalls every year (or even every few months) and building on a foundation that is not fully understood, because there is no time to properly understand it.

Maybe there should be some "perfectionism". Maybe not in shipping "things that actually matter" (as business defines it), but in the boring stuff that keeps a ship afloat. They could also simply give raises to their best workers instead of having them walk, but clearly that's not in the cards.


It depends entirely on how you define perfectionism. The article unfortunately doesn't explain well what they meant.

Assuming it is following paradigms and design patterns and making the code "cleaner", then I would agree with the article 100%.

However, if it was about actual engineering concerns like better error handling, performance, test coverage, nailing down invariants/validation/parsing, etc., then I would say that we should probably be more diligent rather than less.

The issue is that the former gets too much attention and the latter too little.


In my experience, the people who are best at error handling, performance, test coverage, nailing down invariants/validation/parsing, etc, are also the people pushing for those nebulous perfections like "cleaner" code. Getting all those things right requires the code to be pretty darn clean.


My experience is different.

Given the choice between fixing a quality issue that can be easily measured/detected and one that cannot, a lot of people will prefer the measurable, easy-to-detect one.

So the folks on the 'code quality' committee end up introducing tools that make it impossible to merge if your markdown file has two consecutive blank lines - rather than spending the time on performance or security which are much harder to keep on top of with automated tooling.
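To make that asymmetry concrete, here's a hypothetical sketch (the function and the file contents are invented) of the kind of check that's trivial to automate, which is exactly why it ends up in the merge gate while performance and security don't:

```python
# A "two consecutive blank lines in markdown" check is a few lines of
# code with a crisp yes/no answer, so it's easy to bolt onto CI.
# There is no equally cheap function for "this change is secure" or
# "this change is fast enough", which is the point above.

def has_double_blank_lines(text: str) -> bool:
    """True if any two consecutive lines are both empty."""
    lines = text.splitlines()
    return any(a.strip() == "" and b.strip() == ""
               for a, b in zip(lines, lines[1:]))

print(has_double_blank_lines("# Title\n\n\nBody"))  # True
print(has_double_blank_lines("# Title\n\nBody"))    # False
```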


I'm not arguing against decomposition, modularisation, clarity or abstractions.

Just don't do them for their own sake.


How do you decide when it’s ’for its own sake’ and when it’s not?


Personally, the "Rule of Three of Software Engineering" has been a good-enough practice for most cases; as with any rule of thumb there are exceptions, but most of the time I can apply it.

I've been using it for many, many years and it has saved me countless hours by postponing abstractions, modularisation, etc. to the point where a pattern actually emerges in the software I'm working with. Let the code repeat itself, copy and paste, and only after some pattern repeats itself for the third time do I start to think about how to abstract or modularise it.

It's always so much easier to rework repeated code/patterns into abstractions than fixing misguided abstractions done too early.
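A minimal sketch of how that plays out in practice (the price-formatting example and all names are hypothetical, not from any real codebase):

```python
# Rule of Three sketch: the first two occurrences stay as plain,
# duplicated code; only at the third repetition is a helper extracted.

# Occurrence 1 (a receipt printer) and 2 (an email template) were
# written as inline copies of:
#   f"${cents // 100}.{cents % 100:02d}"
# When a third call site appeared, the pattern had proven itself,
# so it was pulled into one function:

def format_price(cents: int) -> str:
    """Render an integer cent amount as a dollar string."""
    return f"${cents // 100}.{cents % 100:02d}"

# All three call sites now share the (late, well-earned) abstraction:
receipt_line = f"Total: {format_price(1999)}"
email_line = f"You paid {format_price(500)}"
invoice_line = f"Amount due: {format_price(25)}"

print(receipt_line)  # Total: $19.99
```

The point is the ordering: the abstraction is extracted from observed repetition, rather than designed speculatively up front.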


If we're being frank: when you care about the company and feel the company cares about the tech. Having both of those align is rare. Broken windows and all that.

Now for a proper engineering answer, there are a few factors (among others) to consider when determining software rigidity:

- Halo effect: How much code does this module affect? The more things that can break when the module fails, the greater the need for proper testing/architecture.

- Frequency (in both senses): How often are you changing the code (is it cutting edge, or a bog-standard 50-year-old algorithm?), and how often does the application call the code? The more often it's called or changed, the more you will appreciate ways to test new implementations with minimal impact to the current iteration.

- Understanding: Pretty straightforward. The less you or the team understands the code, the greater the need to validate that code through tests. This often falls under frequency, because inevitably the less you understand, the more you will need to tinker, possibly because of unknown unknowns.

It's still subjective at the end of the day, but there are metrics you can use to make your case.
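One hypothetical way to turn those three factors into such a metric; the weights below are invented purely for illustration, not taken from any real methodology:

```python
# Hypothetical "rigidity score": a number you can argue about in a
# review instead of trading gut feelings. Higher score = invest more
# in tests/architecture. All weights here are made up.

def rigidity_score(dependents: int, changes_per_month: float,
                   calls_per_day: float, team_understanding: float) -> float:
    """team_understanding: 0.0 (nobody gets it) to 1.0 (fully understood)."""
    blast_radius = dependents                        # "halo effect"
    churn = changes_per_month + calls_per_day / 100  # frequency, both senses
    opacity = 1.0 - team_understanding               # understanding gap
    return blast_radius * (1 + churn) * (1 + opacity)

# A widely used, frequently changed, poorly understood module should
# score far above a stable leaf utility:
core = rigidity_score(dependents=40, changes_per_month=8,
                      calls_per_day=5000, team_understanding=0.3)
leaf = rigidity_score(dependents=1, changes_per_month=0.1,
                      calls_per_day=10, team_understanding=0.9)
print(core > leaf)  # True
```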


To be cynical and uncharitable: in my experience, non-engineering leadership tends to be receptive to the idea that engineers are by their nature wasteful and self-indulgent, and saying “just don’t do that stuff for its own sake” reinforces said leadership’s prejudices and will ingratiate you with them. I’ve most often heard such arguments from people shitting on me for advocating for tests.

But the bulk of my experience is with small startups where the stakes are high; tensions around productivity are natural, and working through them is essential.


That's a very good question that I need to think about way more.


I like that word: diligent.

Jordan seemed to be talking about it: prioritizing; helping, rather than burdening others. Gregor seemed to be describing a lack of diligence: not fleshing out learning, fretting over less relevant details.

In my mind, arguing for diligence is better here. It implies that the product matters, even if it is imperfect. Arguing against perfectionism is bad: to some it will imply that the product does not matter, that sloppy work is okay if you can somehow justify it (getting the job done, doing more, etc.).


IMHO the article defines perfectionism as doing both of the things you mentioned too early. E.g. you build a prototype and, instead of showing it to stakeholders, you clean it up, cover it with tests, etc.


>design patterns and code cleanliness aren’t “actual engineering concerns”

sigh


Clean code leads to more testable code leads to more tests leads to better products. Don't be so quick to dismiss "clean code" as some BS.

I've seen first hand how someone writing a 500 line function struggles to test it. But splitting it up into more manageable chunks of functionality makes it easier to write a test.
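As a hypothetical sketch of that split (invented names, not the actual 500-line function in question): the pure logic moves into small functions that can be tested directly, leaving only a thin I/O shell:

```python
# Instead of one monolithic function that reads, transforms, and
# writes in a single body, the pure logic is split out so it can be
# tested without any I/O at all.

def parse_record(line: str) -> tuple[str, int]:
    """Pure: turn 'name,score' into a (name, score) pair."""
    name, score = line.split(",")
    return name.strip(), int(score)

def top_scorer(lines: list[str]) -> str:
    """Pure: the name with the highest score."""
    records = [parse_record(line) for line in lines]
    return max(records, key=lambda r: r[1])[0]

def report(path: str) -> str:
    """Thin I/O shell around the testable core."""
    with open(path) as f:
        return top_scorer(f.readlines())

# The core is now trivially testable, no filesystem required:
print(top_scorer(["ada, 10", "bob, 7"]))  # ada
```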

Imo perfectionism is chasing that last 5% performance.

Clean code, design patterns etc are just good engineering practices... Not perfectionism ....


> Clean code, design patterns etc are just good engineering practices... Not perfectionism ....

This is what I don’t like about software engineering. The dogmatism. The absolutism. The more I work in the field, the less I talk about “clean code”, “ddd”, “good practices”. I find myself saying more and more “it depends”, and more often than not what the business requires is not 5 layers of code in which the concerns are separated among dozens of files and following good design patterns, but a damn single file that gets things done.

There’s place for what you call good practices and clean code, but many times those things don’t lead to good products.


I really liked how jmac03 put things, and I don’t see the dogma you’re seeing — consider that it’s “struggles to test” not “impossible”. I don’t see any advocacy for ”dozens of files”, just breaking functionality down into “manageable chunks” to facilitate testing. In fact, “dozens of files” are arguably not “manageable“, and I don’t see an argument for that, any more than you’re arguing that every function needs to be 500 lines.

Every single startup I’ve worked for has eventually come to terms with needing some best practices. In my view, there actually is a baseline: CI, partial coverage unit tests for the most easily testable code, and modularization/loose-coupling so that unwinding tech debt has linear rather than exponential cost. But even that is driven by my opinions about ROI, not theory in isolation.


>There’s place for what you call good practices and clean code, but many times those things don’t lead to good products.

Clean code for a bad product won't turn it into a good product. That's the core issue. These are engineering principles, but you are looking at them through a business lens.

The best metaphor here is to compare clean code to insurance. You don't want to insure this? Okay, you can take that risk and maybe save if nothing happens. But if/when something happens, it'll be more expensive than if you had paid the insurance up front.

Meanwhile, clean code in a good product will make it resilient to becoming a bad product. I'm sure if you've played any modern video games, you've had at least a few games that you loved but that had horrible technical hiccups: unoptimized, untested logic, crashing. Some of these games still sell, so maybe the business logic prevailed over properly engineered code. But they are entertainment at the end of the day, not Crowdstrike.


Absolutely. Breaking a design down into verifiable, testable chunks is, imo, the major delineator between a school project and a professional product.


Let me clarify: good modularization and abstractions emerge from engineering concerns (like the ones listed above) and not the other way around.

The reason I wrote the above comment is because I've seen this being flipped on its head too often by myself and others.


I was thinking the same thing. The state of software is generally so poor, lacking any attention to detail and showing a disregard for the customer (who has been reduced to a single “revenue” line item in a financial analysis).

There’s an epidemic of “minimum viable” when it comes to software, and that results in flying really close to “not viable”.


The startup culture celebrates an even lower bar, "not viable but appears to be minimum viable". Think of the fireside stories about launching e.g. an AI image recognizer, while secretly using humans to do the recognition.


Yep. And that deception is presented as wisdom! We are supposed to admire how cunning they are with their fakery.

And what do they all say? “We should have launched sooner.”

That’s what happens when the goal of the vast majority of startups is not to build an actual business. Their goal is to “get funded”. It just feels rotten.


The bar can always be lower, too. Think slideware and vaporware. You don't even need to have a product, let alone a minimally viable one. You just need to convince someone to give you money.


It’s not all or nothing

Perfectionism is a disease

That results in others shipping suboptimal stuff

It’s perfectly ok to ship 50% of the features 100% well and keep shipping


As always, there's a bunch of people who should seek higher perfection, and a bunch who should seek lower.

And most people are wrong about which group they're in.


How do you know perfectionism didn’t cause the issue? Perhaps some team has been laboring for months or years on an actual rollout system to replace what they’ve got, but have never shipped because things took too long.


Whatever the reason for CrowdStrike's failure, a lazy attitude toward allowing vendors to push updates at any time, in the name of a dubious security "best practice", let the bug have massively wider reach than it should have. Sysadmins should instead have thought more critically about the value of disallowing such updates and pinning dependency versions, to ensure they predictably know what's running on the machines they manage; subscribing to alerts about security holes that need patching; and making informed decisions, like seeing how new updates actually work in the wild instead of handing over the keys blindly. Want to call that "perfectionism"? Be my guest.


By what standard would you evaluate an AV definition?

Crowdstrike had a sloppy process without meaningful testing, only parser validation. Assigning that task to tens of thousands of admins, most of whom have no idea what they are doing, just shifts blame, not accountability.


This fails to explain how they had such a shoddy system in the first place, only possible due to the opposite of perfectionism.


If we had a nationally recognized professional association for software engineers, one benefit would be standards and practices set industry-wide, with an association that advocates on behalf of engineers to have them enforced.

Other engineering professions have set a precedent for this.


I think it may make sense for critical infrastructure, but the software industry is still in high-growth mode.

Allowing individual engineers to be de-licensed creates a culture where adherence to norms is the priority. If you've ever worked with a civil engineering organization… it's not exactly a hotbed of innovation. A lot of the professional associations spend more time creating make-work regulations than doing actual safety or engineering work. For example, in New York, something as trivial as refurbishing a moderate-volume street crossing in an urban area requires $750k-$1M of services to address ADA compliance, multi-modal transit accommodation, and other factors.


Tom DeMarco (co-author of 'Peopleware' and other SE classics) has argued that the market is a better selection mechanism than a certifying body, which tends toward seizure of power rather than societal good. I can't turn up the original article, but Jeff Atwood quotes the relevant bits [0].

I'm not convinced that either certification itself or the market represents the most effective means of achieving societal good. I think what history shows is that real consequences for shoddy work are required [1]. To date, software makers haven't really been held liable for consequences, but I expect that will change if we don't clean up our act.

[0] https://blog.codinghorror.com/do-certifications-matter/

[1] "229 If a builder builds a house for someone, and does not construct it properly, and the house which he built falls in and kills its owner, then that builder shall be put to death."

- Hammurabi's Code, ~1750 BC


The perfectionists I have worked with were never the ones advocating for gradually rolling out releases to users, the _only_ thing that can prevent catastrophes like Blue Friday.


I think standards could be higher while perfectionism could be lower. I'd even go as far as to argue that perfectionism is a contraindicator to quality software and high standards.

Perfectionism doesn't necessarily mean better implementations, good practices, or even doing the right thing. Chasing and stacking needless abstractions to make things appear clean is an example of such perfectionism that actually harms the industry and erodes standards.


In the second paragraph you name what seems to be your definition of perfectionism. I can't say I agree with it, because to me it seems to move away from perfect.


What I was alluding to, but perhaps should have communicated better, is that perfect, within our domain, is subjective.


Perfectionism is not a fix for this. Indeed, perfectionism can be a cause, by diverting energy and attention away from critical things toward polishing the spot on the wall in the basement while the load-bearing walls and electrical wiring remain unaddressed.

I would argue that perfectionism is also about an inability to prioritize, something which, in the real world with all its time limits, will bite you.


Only in the scope of the IT nerds. Average people, I'm betting, have no idea that anything happened, especially if you work with your hands. It's the same old day-to-day out the window. The sky didn't fall.


I think the demand for perfect surveillance is a huge influence on what happened here. It wasn't perfectionism of IT though.


I haven't seen a postmortem or RCA from Crowdstrike yet. Do you work there?


> I haven't seen a postmortem or RCA from Crowdstrike yet.

I got you! It’s relatively new, published four days ago. The relevant section is called “what went wrong and why”:

https://www.crowdstrike.com/falcon-content-update-remediatio...


There’s a lot of fluff that boils down to “we tested the code in March”, “we did not test subsequently”, “we validate the content we push”, and “there was a defect in the content validation process”.


And it seems to confirm, although not completely explicitly, that there was _not_ a canary facility in place.


Correct. It sounds like they test changes to code in such a facility, but “content” is only validated by a parser. McAfee did something similar a decade ago.

It’s a hard problem: the whole point of Crowdstrike is to quickly respond to threats (and they are very good at it), so you want time to market for marginal changes to be very fast. Normally I’d give them the benefit of the doubt, but their arrogance and poor response trigger my spidey-sense.


Thank you! I'm surprised this didn't get more coverage in my social media sphere. I'm still seeing memes and jokes.


MS just provided theirs: it's a bad architecture issue where CS overused kernel mode, and should instead have put a listener in the kernel and the logic in user land.


Yeah, but I don't think we all need to define our engineering practices by what should be considered the norm for a company deploying software that operates in kernel mode on millions of devices globally. Most of us aren't doing that (thankfully).

On the other hand, I couldn't count the hours of my career that I've wasted perfecting some piece of functionality - either at someone else's behest, or due to my own motivation/professional pride - that ultimately nobody gave a shit about.

I don't really want to do that any more, and I get pretty fed up pretty quickly when some PM (or whoever else) pushes me to do something that I know ultimately isn't going to matter. I can add a lot more value to a team than by simply churning out code, and it's frustrating being in situations where I'm not allowed to do that and am instead forced to waste time endlessly fiddling with stuff that doesn't add a lot of value for users and customers.

And I'm not talking about compromising on UX or anything like that: I mean actually spending time building functionality that doesn't get used. Or spending too long on barely used and non-key workflow aspects[0].

What's the point?

[0] A really great example of a barely used but ABSOLUTELY CRITICAL piece of functionality is the export/publish functionality in Ableton Live (it's been a while since I used it, so I can't remember exactly how it's labelled). This is a piece of functionality that is barely used compared with other functions, at least by many of us, but it's also - in some sense - the whole point of the application in the first place, because it enables you to export or publish a finished piece of music, so it needs to work and work well.


Pursuit of perfectionism wouldn’t have saved it either

It wouldn’t have existed lol


No. "Half of civilization" came to a grinding halt because the responsible IT departments didn't bother verifying 3rd-party updates before applying them.

This was not the failure of a single company.


The update was automatic and bypassed policies to the contrary


I mean, that was at least 3 parties at fault:

1. Crowdstrike

2. Microsoft

3. The companies that think it’s a good idea to run what is essentially an enterprise rootkit


Perfectionism is the practice of seeking perfection, not achieving it. Lots of devs spend time not shipping because they feel their code isn't good enough. Meanwhile Crowdstrike built a $62bn company by shipping things. Poor-quality things that break the internet, but that's kind of beside the point.

If you want to build value you must ship, even if the thing isn't ready yet because otherwise it will never ship and there will never be any value realized.


Exactly! And it's important to remember that while the outage did economic damages north of $5 billion, this is not a cost to Crowdstrike, but to their customers. So the takeaway is: ship fast, because the costs of shitty code will be felt mostly by people who are not you!

Sarcasm aside, you're not wrong about people getting caught up in perfectionism, but that doesn't make you right about people/managers conflating perfectionism with goodenoughism. I'm sure with this error (and many others) you can find someone who was trying to solve the problem(s) and someone else who accused them of perfectionism. I rarely see actual perfectionism, but I do frequently see people disagreeing about what is good enough. I also frequently see problems that would have been small at the time but require massive expenditures to fix later on. The problem is that not-good-enough compounds while good-enough is neutral. You need to also dedicate time to maintenance and repair, because that's what fixes not-good-enough, which frequently happens simply because we aren't omniscient.


> while the outage did economic damages north of $5 billion

If you're going to measure the dollar value cost of their errors as a cost of CrowdStrike being in business, shouldn't you balance that against the dollar value of savings from the attacks they've stopped?


Not exactly, but close. You need to balance against the dollar value of savings from attacks they've stopped BUT that their competitors would not have.

Just comparing against the attacks they stopped is a naive and gross overestimation, because it requires the assumption that if a customer did not use CrowdStrike they did not use _any_ alternative. In reality, Crowdstrike's customers would have used some other product, so the attacks that competitors could also have defended against cannot be accurately attributed to Crowdstrike. I.e., if Edison hadn't invented the light bulb, someone else would have. We know this because he didn't invent it, though he did make the first practical version[0].
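A toy calculation makes the point; every number below is invented except the roughly $5 billion outage figure mentioned upthread:

```python
# Toy sketch of "marginal value": crediting a vendor with every attack
# it stopped overstates its value; only attacks a realistic alternative
# would have missed should count. All figures are hypothetical.

attacks_stopped_value = 20.0           # $bn prevented by the vendor (invented)
competitor_stop_fraction = 0.95        # share an alternative also stops (invented)
outage_cost = 5.0                      # $bn of outage damage (figure from thread)

naive_value = attacks_stopped_value - outage_cost
marginal_value = attacks_stopped_value * (1 - competitor_stop_fraction) - outage_cost

print(f"naive:    ${naive_value:.1f}bn")     # naive:    $15.0bn
print(f"marginal: ${marginal_value:.1f}bn")  # marginal: $-4.0bn
```

Note how the sign of the answer can flip entirely once the counterfactual is accounted for, even with generous assumptions.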

But since we're on topic, I'd call what I suggested a "good enough" estimation. We have a reasonable estimate under reasonable constraints. It's more complex, but the amount of accuracy gained is tremendous. Arguably the other model is so inaccurate it is less than meaningless (it isn't even "back of the envelope") because its conclusions would lead us to believe things that are not true, and far from the truth. But if we want to talk about _actual perfectionism_, well, this gets much more complicated much more quickly; in fact, it gets exponentially more complex.

A true analysis needs to consider the counterfactual question of what if Crowdstrike did not exist at all. This is far more complicated and contains many subtleties that could dramatically change the results, because we need to understand things like how other companies would have advanced and developed were the money/manpower/resources allocated to CrowdStrike allocated to others instead, and how CrowdStrike both creates and discourages competition in the space. Systems like these are (mathematically) chaotic. Who knows, maybe a manager at Crowdstrike pissed off an employee who then turned blackhat. But maybe Crowdstrike gave a job to someone who would have turned blackhat had they not been hired. Maybe because they were laid off from a competitor! Realistically, performing these calculations to a solution that is accurate (which would still be probabilistic in nature) is intractable. Of course we can improve our earlier model by using some of these factors, but I want to illustrate that there's a difference between perfectionism, which is unachievable, and being reasonably accurate ("good enough").

[0] Left unknown is how quickly someone else would have come to similar practicality, but my best understanding of this specific story is that it would not have been long. This is not an uncommon story in science and technology: if you look hard enough, you'll almost always find that any discovery or invention was also being pursued by another group. There are exceptions, but they are rare. You just never hear about the other groups, but you can confirm this by watching advancements that happen while we're alive, where you can see the competition in real time, specifically in domains you are close to.


I have very rarely run into the situation where I have put too much polishing into a piece of software to the point where it's had a material impact on the project. However, the opposite, where there is pressure to rush and push out something that isn't really ready, has been a very common problem to have to manage.

I think we see the results of rushing in all the software around us much more than we see the delays of perfectionism.


When you are the one calling the shots yourself, it can be easy to procrastinate by way of polishing. (Depending on the kind of temperament you have.)


Depends on who's calling the shots. The last few years the dev market was so crazy that management walked on eggshells to avoid pissing off engineers in most places I've worked at. As a result I've seen plenty of situations where abstractions and tech choices led to delivery delays and business impact - followed by the engineers responsible scrambling for their next position with the latest buzzwords added to their CV.

Thankfully the market correction made this much less of an issue.


Yes, most places I've seen could barely get a minimally functioning product out. There is never even the opportunity to get to the point where an engineer's perfectionism might kick in. I've seen projects that spit out an average of 5 lines of compiler warnings for 1 line of source code, and that's when the code actually could be built successfully. The big open secret of the software industry is how barely everything works.

I practice my perfectionism at home, on hobby projects, where I can run lint with every option if I want, and resolve every warning, static analysis issue, style issue, I can profile my code and find/fix slow parts. I can run valgrind on it and find memory leaks. And I never have to release the software! None of these things were common practice in any software job I've ever had. It was always "Get it to barely work and then ship ship ship!"


> The big open secret of the software industry is how barely everything works.

Maybe it's the pessimism of age, but it feels like that describes all biology, economies, politics, etc...


I've seen many people treat software like some sort of art project, and spend much time refactoring things, rewriting perfectly functioning systems, and generally creating little to nothing of value and generating tons of meaningless churn.

While the article doesn't offer specific examples, it's probably talking about this sort of stuff. e.g. "perfecting code, often code that hadn’t been touched in years and didn’t need to be touched". This sounds like a classic "oh, this isn't how I would have written it, so let me just refactor everything here".

"Not how I would have written it" does not equal "bad".

This isn't making the code better, more bug-free, or "more perfect" in any way. Actually, it often introduces bugs.


Yes, but what if you'd just...finished and started working on something else?

I'll take a 95% solution delivered in 2 weeks over a 99% one delivered in 2 months.


> I'll take a 95% solution delivered in 2 weeks over a 99% one delivered in 2 months.

This depends a lot on the context. If I deliver the 95% in 2 weeks can I ship an update in 2 months that delivers the 99%, or are we stuck forever at 95% (due to technical limitations or business reality)?

If I'll be able to deliver the 99% soon, then sure, let's ship the 95% sooner rather than later.

But if for whatever reason I realistically cannot do that, then you have to consider the context. What is left out in the 95%? How soon will it be a problem that we're missing that remaining 4%? Who will notice the missing pieces? Will they be able to work around their absence?

Many times you may still come to the conclusion that the 95% is the right move, but sometimes it's better to not offer the two-week plan as an option if the fallout from that missing 4% is going to be high.


This is exactly the analysis that is missing from the article and any facile, absolutist statement decrying “perfectionism”. Productivity is an optimization problem. There are tradeoffs which have to be managed. Depending on the context (e.g. prototype phase vs mature, widely used software) you manage those tradeoffs differently.


> are we stuck forever at 95% (due to technical limitations or business reality)?

E.g., the Twitch codebase

They can't change it or add new features or fix problems because it's too much of an unholy startup crack-fueled mess.

Not even companies who stole the leaked codebase can improve the horrible user experience on their Twitch clones whatsoever.


This is just the "engineers are ADHD children and need an adult (manager) to keep them on task" schtick.

Give the engineers clear requirements and let them interact with the stakeholders. Get rid of the intermediaries and knowledge brokers.


Engineers with decent social skills tend to get promoted into roles where they're managing stakeholder relationships.


That's how it was in the initial days/years. However those days are long gone.

Now there are these Product Managers who have absolutely no clue how software is built.

Interestingly I'm seeing a similar trend in many other fields; my car goes in for servicing and there's some dude who just knows the jargon interfacing with me, and for the minutest of my questions he will "get back to me after consulting the line workers". Same with my house construction: I had to get through 4 layers of people to finally reach the person actually doing the electrical wiring to clearly explain what it was that I wanted.


> my car goes to servicing and there's some dude who just knows jargons interfacing with me, for the minutest of my questions he will "get back to me after consulting the line workers"

That's because he's not there to explain what they are doing. He likely doesn't know a spark plug from a fuel injector. He's there to sell you more services.


Hopefully LLMs will replace all of the "I deal with the god damn customers so the engineers don't have to" folks


And said engineers often end up wishing they hadn't. Interacting directly with stakeholders isn't all it's cracked up to be.

Maybe I've just had good luck with product managers, but I tend to enjoy my work the most when someone else is filtering the stakeholders' long and constantly changing wishlist for me and turning it into something coherent and actionable.


I see no tools in this article for managers who wish to optimize the productivity of their team while taking into account the natural, unfortunate, human tendency towards perfectionism. It’s an unsophisticated and unrevealing analysis.


> and let them interact with the stakeholders

There are several reasons why this is not a good idea. And those reasons usually show themselves quickly once it is tried.


Bit of an aside... In my experience, game companies are particularly prone to treating engineers like children in this and other ways.


Spot on, it’s so frustrating with all these hoops some companies put in place, and the middle managers at some point start to act as wranglers. Not to mention the gap that always forms between the engineers (who build and do the work) and the clients or stakeholders (who require the work), which always ends up with last-minute changes and fixes before the demo.


My favorite quote on perfectionism and one I often think about,

"You know, the whole thing about perfectionism — The perfectionism is very dangerous, because of course if your fidelity to perfectionism is too high, you never do anything. Because doing anything results in … It’s actually kind of tragic because it means you sacrifice how gorgeous and perfect it is in your head for what it really is." - David Foster Wallace

Anyone that has built and shipped something has likely struggled with it. You have a great idea. You build it and it's never as great as it was in your head. You need to ship it anyway, but doing so means getting over that perfectionism. Some can. Most can't.


Thanks, I needed to have this thought formalized. I see now why I have a hard disk full of perfectly architected dead projects, and also why the live ones are never going to be perfect.


Classic problem: management hits workers for “perfectionism”… and then hits them again when things aren’t perfect.


This or the manager wants it perfect but complains when it takes longer


Indeed. An article on “perfectionism” really needs to grapple with “good, fast, cheap: pick two”.


This article is fluff trying to fill up their blog so they can sell paid subscriptions and have "engagement". They even link to one of their paid blog posts.


I had to opt out of giving an email address before seeing the article, which I stopped reading after a few seconds.


The bliss of deciding everything and taking no responsibility


The beatings will continue until morale improves.


This feels a little like confusing early career learning with perfectionism. The stories involve their own junior career where they were learning what good code should be and trying to apply it. Later, as they write better code without needing to learn they are shipping higher quality code faster.


That depends. Shipping too much too fast without making the base that can handle it properly results in a ton of tech debt that will bite you in the end one way or another. So there must be some balance about it.

The "we'll fix it later" turning into "no one ever fixed it" is way too common of a pitfall.


Yeah, that's a big one. In one case I reproduced a bug that had been plaguing us for a decade but could never seem to reproduce when it happened to a customer.

Then the lore came out that the original programmer had slapped it together over a weekend. It worked in the lab, and mostly worked in the field but failed sometimes in weird ways.

Then engineering tried fixing it. They failed three times, so they decided to replace that feature. It took two years to replace that one feature and get it to work right. It didn't kill the company, because it was a less used feature, but it reduced trust in the marketplace, so growth took a hit that let others take and keep the lead.


Also the reason we can cross rivers without the bridges collapsing, why we have skyscrapers that withstand earthquakes in Japan, and we were able to launch an international space station into orbit.

Now that the prevailing school of thought is that shipping crap is OK we have a checks notes Boeing Starliner.


Something to consider is that if you cut quality to the point that your colleagues believe you are pushing a half-baked product, or they have to spend even a small amount of time fixing what they know could have never been a problem in the first place, their morale is going to tank - and the brightest are going to leave first.


Corporate disdain for quality of craft is the death of humanity.


I’m not sure it’s the death of humanity, but I find myself sharing a similar sentiment.


I vehemently disagree...

The thing that's killing productivity is the pushing out of half baked garbage.

There's simply no solid foundation to build upon.

OK short game but horrible long game.


On an individual level, if you are prone to perfectionism, take heed of its dangers to your mental health and productivity.

On an organizational level, perfectionism is a red herring. Yes, there is such a thing as debilitating perfectionism, but 95% of "perfectionists" are really just people who care a lot about doing things properly (providing all the benefits that begets), but who aren't 10x greybeards, so they take longer to do it than a cheap dev would to shit out what looks like the same product at first glance.

(You can say the same thing about "premature optimization", perhaps to a lesser extent.)


I’m more of a 75-80% perfect guy at most Fortune 500 companies. That 90-95% perfectionism is very hard to achieve and honestly not worth the effort.

Nothing wrong with “perfectionism” but it should be achieved as a team and that requires getting good people around you. Unfortunately, in my experience, that’s where it usually falls apart.

Clueless management continues to think hiring at the bottom of the barrel and “training” them is the right way to go. There’s a reason why they are taking the lowest possible bid…


My experience with software is 80% or more is slop that could have used more perfectionism. Most engineers couldn’t create a perfect piece of software even if they wanted to. Get to the point where you have that ability and then dial it back as needed. Don’t spend your entire life writing slop that users hate but makes your managers look good because they hit some arbitrary shipping KPI.


Jordan says that a high volume of changes early in their career was a waste. Maybe for the team, but I would argue the experience gained from many low impact changes was probably better in the long run. It’s better to overshoot and dial back than be lacking, especially early on in a career.


People who complain about perfectionism typically don't provide the proper software requirements.


Is there any cliche in this industry that sets up a strawman faster than "productivity over perfection"?

Half the nation's personal data is breached every week and we're over here talking about how we need more productivity and less perfection.


"Productivity" means shipping the first 3 features fast, then getting overwhelmed in spaghetti and unreliability, things slow down to a halt, and the features you shipped turn out to be buggy.

The desire of money people to sell hack/slash/burn kind of products to customers is so pervasive nowadays, that there is no wonder why we have:

- the Boeing travesty (people actually die because of this)

- the CrowdStrike debacle (who needs functioning airports and hospital appointments?)

- the AT&T hacks

And some manager type (who probably did a 1 month business course and is also a certified Scrum master) is gonna tell you to "ship faster, value, value, value". The disdain I have for them has no bounds.


Absolutely guilty of that! Although the article mainly focuses on software that I also assume isn’t critical, sometimes you have to get the work done perfectly, or it’s a major risk or an overall issue for the project success. For example, if you are designing, building, and programming a drone, the margin for error is slim. A single issue that you are not even directly responsible for can jeopardize the whole project, like if a cellular drone control link (C2) is down. Or you are working on an ICS that controls utilities or nuclear plants. So, you have to perfect all scenarios and outcomes, plus all possible testing too.


"Perfect is the enemy of shipped" is one of my favourite aphorisms.


Don't even want to read this, especially considering software nowadays is not nearly as good or even close to perfection.

Last year's CrowdStrike crapware incident made critical infrastructure crash, flights delayed, hospital appointments delayed.

I'd say to the business people to f*ck off and leave the engineering to us, the engineers. Stop trying to squeeze every drop of "productivity" from everything.

It's done when we say so, and it's good enough when we say so. Go sit in some Zoom meetings or something, and leave us to work.


Article resonates with me a lot. I've worked on countless projects where I put a lot of effort in designing things cleanly and correctly, only to find out afterwards that it wasn't really necessary. In many cases they were R&D projects that might have turned into concrete products but didn't. It's hard to design software systems and architectures in such a way that perfection is added incrementally as needed. But I think it's in the core of being productive.


I think optimal level of perfectionism depends on impact of the decision and how hard it will be to change in the future. I think it's worth to be perfectionist when it comes to architecture, which is hard to change. So you want to make sure you do things right (or extensibly), and consistently. However, in minor things like method names or code structure, it's OK not to be consistent.

I would also say being able to identify where you should be consistent can be itself a huge productivity enabler.


Back of the napkin graphs that are not based on any collected data have grown to aggravate me. Although it shows the mental model of the author, it adds an air of empiricism that gives it more authority.

In this case, how do we know it is an exponential chart at all, or that there are no discontinuous jumps in value at “perfect”?


For starters, the graph would have been easier to intuit if time had been on the usual horizontal axis. That would better illustrate the “point of diminishing returns”, which is the theme they’re trying to explore. By presenting it in this confusing configuration it seems more profound than it is.


Yes, I interpreted it as a sum of total value over lifetime; but when you are building it, the software could live for 1 year or 20, in very unpredictable ways.


Why do we call overzealousness perfectionism? If perfectionism disregards context it is arguably NOT perfect.


There is only a handful of software products in the whole world that you can sincerely call “almost perfect”.

We are not even close to having any noticeable perfectionism in the industry.

Over-engineering is not a form of perfectionism, it is a lack of empathy and lack of professionalism.


It's kind of needed, see this problem... https://news.ycombinator.com/item?id=41095279


Strange article. The three things listed near the top:

- Write down your priorities

- Deliver small units of value

- Seek feedback early

Are solid bits of advice. I felt the rest can be skipped.


Pursuing excellence by shipping and doing a little better each time is the antidote to hiding behind perfectionism.


Perfectionism or just anxiety?


Seeking perfection via automated systems - yes

Blaming people for not being perfect - no.


Yeah, no. Most software is kludged together with little design and grows organically. Overall, putting in the time to do the parts right -- where possible -- pays dividends.

Software done right from 30+ years ago is still in broad use. There is huge value in that

Doing things right involves prototypes, iterations, and experiments, so high velocity is good. However, if a prototype ships, you often get in trouble.


The tyranny of the nitpicker comes to mind.


Don’t let perfect be the enemy of good.


This is complicated because it is almost a truism, but my problem is it often gets trotted out when defending the bad.


In my experience that saying usually is said to mean something that is “good enough” but that’s not the same as quality work.


The enemy of good is not perfect but "good enough".


This “Jordan Cutler” guy has a nice grift going, makes money writing a newsletter about career advice when according to his resume, he only has five years of programming experience at best: https://jordancutler.net/assets/images/JordanResume.pdf


Bad requirements are the biggest productivity killer.


It's complicated. It can become an excuse too: "if only they (waves in the general direction of customer relations) could give us good requirements". Some stuff is just difficult to figure out.


Also often times the customer does not know what they want, they only know that what they have isn't it.


I HATE articles like this and I think they are counterproductive.

While perfectionism is not good, I've rarely seen this be an issue except maybe in young engineers. But part of the problem with perfectionism in them is that they don't even know enough to understand the whole picture, and the reality is that "perfect" doesn't actually exist. Solution spaces are too complex, so recognizable global optima are rare.

Instead, what I see far more of is not understanding what "good enough" is. Not paying attention to details. AND dismissing details with some comment about how you shouldn't be a perfectionist. This is a much more nefarious problem because shittiness builds over time. But the damage it does is incredibly costly. The temporal component and the fact that you yourself are often not impacted by your own actions make this much harder to recognize. Worse, people often get rewarded for short term shortcuts that lead to catastrophic failure in the future. But I hope we all know that a little maintenance is far cheaper than repair. It's why insurance companies want you to do a yearly checkup and get your teeth cleaned. They know it's cheaper.[0]

What really matters is a very difficult balancing act. You need to balance your short term progress with your long term progress but humans are bad at long term planning. You need to move fast enough to meet your short term goals and deadlines but you cannot forget what the end goals are. This is hard because the end goals are fuzzy, change, and only come into resolution as you progress. So you should "move fast and break things" BUT you also need to slow down and fix things. This is part of why the software world is ruled by the lovecraftian god of spaghetti and duct tape[1].

I'll give a quick example: You can write sloppy code to get something working, but spending a bit of time to clean it up and add a few comments is always worth the effort. The problem is you might not feel the pain from the lack of cleanup or comments. What's the worst thing that happens? You spend an extra hour because you realize there's a logic flaw while writing up the doc? Or, if no problems, 10 minutes to write words? We've all been stuck trying to understand what something does and why (sometimes we wrote that code[2]), especially when onboarding to a new codebase (think about the connection to the 80/20 rule). Which takes more time? Doing cleanup and commenting as you code, or the time lost by every person who gets stuck on that line? If you've ever onboarded into a codebase that is well documented, I know you know. And you can probably see how the costs are outsourced away from you, but not necessarily away from your company. It's the inverse side of what makes software so powerful: scale.
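To make that concrete, here's a hypothetical illustration (names and logic are invented for the example, not from any real codebase) of the kind of ten-minute cleanup meant here: same behavior, but the second version tells the next reader what it's for.

```python
# Before: works, but the intent is opaque to the next person.
def f(d):
    r = []
    for k in d:
        if d[k] > 0:
            r.append(k)
    return sorted(r)

# After: a few minutes of cleanup and a docstring make the intent obvious.
def positive_keys(counts):
    """Return the keys whose counts are positive, sorted for stable output."""
    return sorted(k for k, v in counts.items() if v > 0)

# Both versions behave identically; only readability differs.
sample = {"a": 1, "b": -2, "c": 3}
assert f(sample) == positive_keys(sample) == ["a", "c"]
```

The cost is minutes for the author; the saving is repaid by everyone who reads the code afterwards.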

Perfectionism is rarely a problem, and it's a junior problem that comes from not realizing there's no global optimum. But (not-)goodenoughism is common and rampant at all levels. If it wasn't, we wouldn't have daily comments on HN about enshitification[3]. But if we were better at understanding the latter, I'm sure the former would also be less common. Move fast, but also make sure you slow down. Never forget details, especially when you need to disregard them to move forward.

[0] I have one example at a company where I worked where I was told I was being a perfectionist and then months later our product literally exploded. An engineer wanted to extrapolate data, I said he was extrapolating too far to be reliable, boss overrode because he was more stressed over the timeline.

[1] Duct tape is a terrible tape and you have no reason to own it. Fight me.

[2] At the time of writing only god and I knew what this code does. Now only god knows.

[3] Enshitification is not happening because people are being malicious. It is because "good" exists in unstable equilibria and part of long interacting chains that all have to go right.

