Hacker News
It's OK if your code is just good enough (shiftmag.dev)
114 points by Vatavuk on Nov 11, 2023 | 144 comments



In the vast majority of cases, writing good, maintainable code does not require more time. The real problem is that the majority of people working as software engineers barely know what they are doing, and use excuses like this because it makes some amount of sense to the incompetent managers in charge of them.


I work with someone who regularly opens PRs for untested code. I'm talking stuff that hasn't even been run once: missing imports, undefined variables, etc... not bugs. I'm fine with bugs. I'm not fine with not testing your work in the most basic sense.

PR reviews don't mean throwing crap over the wall and hoping the reviewer figures it out. With this guy, there is so much back-and-forth hand holding, it would be simpler to close the PR and do it myself. But I don't...


Sounds like something to raise with their manager. Or with this person, before even reading the PR, "Hey - just to check before I review this, does it work on your machine? Have you tested it?". If they say no, close the PR and tell them to do that before opening it. If they lie, call them on it. If they aren't learning, don't waste your time hiding this useless coworker's failures.


I have raised it in the past and will do so again. One of the problems is the manager has no software engineering experience and his view of "working norms" often go against mine.

Some of my complaints are fairly basic: test/review your own work before asking someone else to review it. This should be applicable regardless of industry.


It is applicable regardless of industry. If your manager isn’t technical, drag the nearest senior engineer into the conversation too.

As for the PR, don’t bother. “Hey this code you’ve submitted for code review doesn’t even compile. I’m your colleague, not a human compiler. Please don’t waste my time with this again” -> Close issue.


time to go next?


I've been thinking about that for a while, for a variety of reasons. This is one of the smaller ones. Probably after the holidays.


I'm 100% fine with opening a PR for "I just typed it" code...

with two caveats:

- a HUGE disclaimer at the top saying "DO NOT MERGE: not tested"

- also, you'll probably want to politely ask someone for a review, and be specific about what you're looking for

"Draft PRs" are fine for discussing topics or code or goals with a team mate.

It's all about getting feedback!


That's fine. What I'm talking about isn't that at all. This was a "this feature is done, here you go" PR.


Looping back to TFA: I've been trying to use this workflow more to get quality code written more quickly. I definitely have a habit of spending too much time polishing code - getting a draft/do-not-merge PR up with the core of my proposed changes for review helps me avoid that, plus getting an earlier review helps with finding issues.


In addition to what the others have said, make sure you have a CI and don’t review the PR until that’s showing green. The CI should ideally include tests, linting according to a common code standard and other “grunt tasks” that are unnecessary in a PR review.


If you are in charge of the review process, simply start closing their PRs, list the reasons why, or better yet create a document you can reference establishing your guidelines. Don't back and forth; if they can't meet a simple set of guidelines, the PR is not ready for review.

Let their manager take issue with it. Fight it, bring it to their manager's manager, whatever. A company like that is not worth working for, so try to make it a company that is, or leave, but do yourself a favor and find a team you love.


> better yet create a document you can reference establishing your guidelines.

This. Remove as much ambiguity and grey area as possible.


> better yet create a document you can reference establishing your guidelines.

How about working on a document that establishes the company's guidelines?

I cannot imagine having to meet everyone's personal quality bar.


You don't have to meet everyone's personal quality bar, just the person who is in charge of reviewing your code. Write good code and you won't have a problem!


My expectations here were a bit lower, given the individual involved: write code that actually runs.


Ha! I think you might be asking for too much these days!


Isn't this what draft PRs, CI/CD and automated integration tests are for?


This sounds like what I do, except I open them as draft PRs in some functional state that I've run locally first in order to solicit feedback as early as possible asynchronously.

Most teams I've seen don't practice any sort of iterative development, despite claiming to do so.


Wouldn't a linter catch most of this?

You can also block commits that have linting errors.
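As a sketch of how cheap the most basic gate can be: the stdlib alone can verify that a file at least parses, which is the floor below which a PR should never drop. (A real setup would run an actual linter such as pyflakes or ruff, which also catch undefined names and missing imports; this minimal version only catches syntax errors.)

```python
import sys

def parses(path: str) -> bool:
    """Cheapest possible pre-commit gate: does the file even compile?

    compile() reports syntax errors without executing anything.
    A real linter goes further and flags undefined names and
    missing imports, which compile() cannot see.
    """
    with open(path) as f:
        source = f.read()
    try:
        compile(source, path, "exec")
        return True
    except SyntaxError as e:
        print(f"{path}:{e.lineno}: {e.msg}", file=sys.stderr)
        return False
```

Wired into a pre-commit hook, even this floor-level check blocks the "never ran once" class of submission before a human ever sees it.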


You are not alone. It is not fun.


> In the vast majority of cases, writing good, maintainable code does not require more time

Yep. Especially with practice. You can pretty much get to a point where you build things reasonably well by default without even thinking too hard about it. You have to want to attain it, and be willing to ruthlessly evaluate and file down your design repeatedly.

I believe there's a compounding effect at play here, which accounts for super-linear gains in ability, given enough focus, time, maturity, and number of projects.


> You can pretty much get to a point where you build things reasonably well by default without even thinking too hard about it.

That's mastery. There are probably about as many master programmers as there were master... let's say blacksmiths. The problem is there are 10, 20, maybe 50 times as many programmers as we ever had journeymen blacksmiths. And they all seem to think that tenure equals mastery. If we had 10, 20, 50 times as many masters, we'd have enough people to keep an eye on things. But we don't.


> that's mastery

Sounds more like competence


Nah, competence means that with some effort you can make things that work well enough, if you don't need to put in such effort it's mastery.


I equate it a lot to writing an essay. Once you’ve gotten enough practice, you know the general structure. You’ll write a draft and then do some edits with some red ink. Then, if you have people willing to read it, you get their eyes on it.


This.

And it's often simple techniques like preferring pure functions or using immutable data structures that enable a huge improvement in maintainability, yet few seem to be employing them.

Occasionally the difference comes purely from the fact that someone looked into the library docs and chose the appropriate API method.
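A toy illustration of the pure-function/immutable-data point (the `Account` type and its fields are invented for the example):

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class Account:
    owner: str
    balance: int

def deposit(account: Account, amount: int) -> Account:
    # Pure: returns a new Account instead of mutating the argument,
    # so every caller can reason about its own value locally.
    return replace(account, balance=account.balance + amount)

a = Account("alice", 100)
b = deposit(a, 50)
assert a.balance == 100  # the original is untouched
assert b.balance == 150
```

Nothing clever is happening; the maintainability win is simply that no code anywhere can change `a` behind your back.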


A large fraction of people doing woodworking videos are overtly or covertly pushing the fundamentals. If you don't have the fundamentals down what are you doing trying to do complicated stuff?


If I had a dollar for every time I heard “we had a tight deadline” as an answer to the question “why is this code so shi*y”, I’d have more money than Musk.

There is no good code vs. bad code, there are just good programmers and bad ones.

And given how many programmers there are in total, roughly 98.76% of them are the bad ones :)

I am in my 26th year of this career and I can count on one hand the situations where a bad programmer wrote good code or a good programmer wrote bad code.


I’d consider myself a good coder, and though I have the standard number of digits, I do not have enough to count the number of times I’ve written bad code. But I’ve written a lot of code, and over the decades I’ve learned a lot. I think that process requires some bad code along the way.


I would love to see an example of your code, please!


$450/hr - nothing comes free


Sure, if it's as good as you say! Let's make a deal :)


And this is not a totally narcissistic opinion at all. Just say all programmers are bad except you.


If you actually read my comment I never said I was the good one mate ;)


I think one of the exceptions is heavily optimized code, though the surrounding code can still be maintainable and clear, with the weird stuff to tickle the compiler or inline assembly being heavily commented.


For those kinds of cases, I love having as-simple-as-possible reference code checked in alongside it, even if it's just #ifdef'd out for common builds. Having a baseline makes it sooo much easier to understand and debug the optimized code.


Ideally, the slow code is available alongside the fast code and the unit tests confirm they are sufficiently equivalent. If that's not possible, the go-fast bits need to be refactored until it is possible. Don't forget that error handling is part of the equivalency testing.


Beyond unit tests, this seems like a great opportunity to leverage property-based testing, at least for pure functions; checking that both versions return the same results is an easy property to think of.
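A lightweight version of that property check, using random inputs in place of a full property-testing library like Hypothesis (the bit-count functions here are just an illustrative stand-in for "slow reference vs optimized"):

```python
import random

def count_bits_ref(n: int) -> int:
    """Slow but obviously correct reference implementation."""
    return bin(n).count("1")

def count_bits_fast(n: int) -> int:
    """Kernighan's trick: each iteration clears the lowest set bit."""
    c = 0
    while n:
        n &= n - 1
        c += 1
    return c

# The property: both versions agree on every input we throw at them.
for _ in range(1000):
    n = random.getrandbits(32)
    assert count_bits_fast(n) == count_bits_ref(n), n
```

Keeping the reference checked in means the equivalence property can keep running in CI every time someone "improves" the fast path.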


Agree completely. In fact it often takes less time to write code properly because you won’t have accidentally introduced a regression that you need to fix before shipping.


Things like large functions or code duplication are not necessarily bad in the first place. A far bigger problem that I encounter regularly is the invention of extreme layers of abstraction to avoid a small amount of copy-pasting + edit in the name of DRY.

But an even bigger problem is lack of understanding of the problem domain and a lack of documentation on how you plan to fix the problem.


I have to admit: I am terrified of WET code. I do stop short of introducing abstraction monstrosities, but I usually do create what others would call unnecessary abstractions, to stay DRY.

Why? Because I tend to write all my code such that a complete stranger should be able to drop in and understand it. I constantly imagine that stranger looking over my shoulder while coding. I imagine the code should be maintainable and speak for itself without me there at all (I do write comments).

So, such a person SHOULD be able to change some value or logic somewhere, and rely on not having to do that anywhere else. That is the magic of local reasoning, as brought about by structured programming, after eradicating goto statements. WET code erodes that. I find it a very important principle though and value it highly.

An example where this falls apart is config files. For example, a port number might be repeated in different places. Comments are indispensable then, but they rot. So if possible, I encode it using actual language constructs.

In summary, I do err on the side of DRY rather aggressively, but don’t follow it all of the time.
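The port example can be made concrete with actual language constructs; a hypothetical sketch (names and values invented):

```python
# One source of truth for a value that several config fragments
# would otherwise repeat -- and let silently drift apart.
API_PORT = 8080

server_config = {"listen": f"0.0.0.0:{API_PORT}"}
health_check = {"url": f"http://localhost:{API_PORT}/healthz"}
reverse_proxy = {"upstream": f"127.0.0.1:{API_PORT}"}
```

Change `API_PORT` once and all three stay consistent; with repeated literals, a "keep these in sync" comment is the only guard, and comments rot.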


I've never understood this mentality, the magic of local reasoning is completely and utterly destroyed by abstractions. If I'm looking at your code it's not because I'm doing literary analysis, it's because there's something wrong or because I need to change something. The abstraction only increases the number of locations I need to look to fully understand what's really happening. There is no clever naming of functions and methods that explains to me the reader better what's really happening than the code that's actually doing the real work. And worse it bites you in the ass when you realize that 5 layers down the call-stack you need to change the behavior of something only to realize that doing that breaks a bunch of unrelated places in the code that need the current behavior. So much for locality.

If your code isn't abstracted and WET I actually only have to look at the code currently in front of me on my screen to know fully what's happening and I can be absolutely sure that changing it won't affect anything else. True locality of thinking. Needing to use :vimgrep to update code in multiple places is smooth brain completely mechanical compared to the hell that's having to re-WET the code to split off and isolate the (potentially long) codepath that needs to change. And devs rarely put in the effort for that, more likely is they'll plumb down a flag all the way through the call stack to spooky action at a distance change the behavior of an unrelated function. Good luck figuring out that dependency later when you're starting from the lower function.

My motto has always been: software is like pottery, once it DRYs it's much harder to change.


I agree with you in that DRY for "just not repeating yourself" is not good. But your local approach is flawed.

You still have to do the global analysis. You have to do that because the local code you are fixing might be a piece of business logic that has been dripped all over the code by a WET programmer. Now you fixed the logic in one place but all other places are still wrong.

The correct way to do it is to stay DRY when the reasons for changing a piece of code are going to be the same. An example would be this hypothetical business logic. If the code doesn't just look the same but is for something like business logic that needs to be the same in all 15 places it's getting applied then stay DRY. Other obvious examples are things like sorting algorithms. We banned those and put them in libraries for a reason.


I think this is the right mindset. As programmers, we need to be aware of, and able to distinguish between, when two things look the same and when two things are the same. Only one of those benefits from an abstraction.
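One way to picture that distinction (the domain and numbers are invented):

```python
# Two rules that merely *look* the same today: keep them separate,
# because they will change for different reasons.
def member_discount(price: float) -> float:
    return price * 0.10   # marketing decides this number

def sales_tax(price: float) -> float:
    return price * 0.10   # the tax office decides this one

# One rule that *is* the same everywhere it's applied: extract it,
# so fixing it once fixes every call site.
def rounded_cents(amount: float) -> int:
    return round(amount * 100)
```

Merging the first two into a shared `times_ten_percent` helper would be textbook DRY and a future bug the day either rate changes.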


Based on your name, I'd expect you'd be quite comfortable with producing WET work.


> Because I tend to write all my code such that a complete stranger should be able to drop in and understand it

This isn't an achievable goal for most complex systems. Even very well written and documented code bases (for e.g. tcmalloc, bigtable) require a good deal of background reading to develop a baseline understanding of what is going on.


> I am terrified of WET code.

and

> I usually do create what others would call unnecessary abstractions, to stay DRY.

Seem completely incompatible with

> Because I tend to write all my code such that a complete stranger should be able to drop in and understand it.

Now, I don't know the codebase you're in. It could be that your abstractions are perfectly fine, but DRY code != maintainable. You're sacrificing a ton of locality of behavior (LoB) to get that DRYness, and introducing potential spooky action at a distance. Not to mention the cognitive overhead of the abstractions.

I'm not saying to never abstract, but abstractions really only supply a benefit when the things involved are guaranteed to vary together. Usually via some sort of physical process. If it's just business logic having them vary together in the same place, eventually some dictate comes down from management to change one of them but not the other.

When this happens, you introduce weird bugs on the other side of the system. That's how a platform gets the reputation of being unmaintainable. In the bad old days, it used to be that it was globals being referenced by many different functions as well as unrestricted gotos being used to jump into the middle of a function, but I've seen it happen quite often with abstractions, even ones that seem like a good idea at the time.

The code we're talking about was actually pretty DRY. You need something that does the same thing as the last half of this function? Just push a different return address onto the stack and jump into it to re-use the code. Why repeat yourself? But it had terrible LoB, changing one function could break a completely unrelated function halfway across the project (and one that didn't even obviously call the function if you're using some sort of computed goto).

You've also identified that structured programming brought about an end to the worst of these abuses, but I think you've got the reason wrong. It's not about reducing the number of places that you need to change something. You can write perfectly structured code (actually, it's hard to write unstructured code these days) and still need to change logic in 5/10/20 places. And local reasoning is still preserved in this case, as locally would consider each of those places by themselves, assuming the logic is all in different functions/modules/etc. Structured programming changed so much because forcing functions to have defined entry/exit points allows for easier preservation of invariants. You can't have meaningful invariant checks if someone can just jump into your function just after those checks. It's also much easier to see what in the project depends on the code you're changing.


There's more than one way to implement DRY. Lots of times there is no superclassing to capture commonality, but there are functions that can be written only once. Organizing a set of complex algo steps that share some commonality and have some differences is just hard sometimes.


Yeah, it feels like we cargo culted too many “principles” like DRY without understanding what they actually mean. I see it all the time at my job (I review 5-10 PRs/day).


My formatted-by-productivity-standards brain agrees, my heart disagrees.

I enjoy the art of programming. I love to think that, for certain types of projects, I am allowed to aim for and reach perfection.

My vision of perfection is not yours, so what. If your "good enough" is actually your perfection because of business impact, user happiness or optimal time management, good for you. Just don't tell me that my perfection does not exist.

Sometimes, it's good to know that you can do something just for the beauty of it, and programming could (should!) be one of them.


The funny thing with this is that oftentimes someone’s perfect is someone else’s future headache.


It's more subtle than that. There is a great saying: "Always code as if the person who ends up maintaining your code will be a violent psychopath who knows where you live".

I've seen countless bright minds wander in the pursuit of instant pleasure by adding unnecessary complexity. I have seen others outright sacrifice projects that support people's livelihoods to achieve an instant goal of learning a particular library or acquiring a useful skill, or, worse, making a point against an imaginary adversary.

Due to incompetent management, these suckers are never punished. They usually jump ship and venture into greener pastures before their playgrounds turn into bloody combat fields where much less sophisticated but more honest former colleagues die or deliver.


Really? The best peer coders I have met always produced simple, straightforward solutions.

So I cannot share this experience.


A similar saying, which I like more: "code as if your (hypothetical) children will have to maintain it"


Another alternative: "Code as if you'll have to come back and maintain this after you've completely forgotten how it works or that it ever existed, because there's a nontrivial chance that you will.".


I call that Tuesday.

Or this week, Saturday.


How good your code is depends entirely on context and priority.

Context and priority should be defined for a project, not just left to the decision of each developer.

I am building code for a startup right now. The context and priority is to "get the damn thing working". Thus code quality is largely irrelevant - this codebase is flat-out garbage - it is full of commented-out code, duplication, and files that were obviated ages ago. It is unstructured and disorganised, and uses different approaches to solving the same problem all over the place - there are ZERO tests, no CI/CD, and the code is uploaded directly to production. This is exactly the right way to build this, because none of those "terrible sins" matter when you have no customers and your only goal is to get something working as fast as possible. Every second spent making things nice is a waste of time and money, because if the business fails then every second spent making things nice was wasted.

If however I was working for NASA on code that was running a rocket launch system, then hopefully it is stated to all programmers working on the system that reliability is priority one. This informs everything about how the code is written from that point. It means fewer lines of code, a lot more eyes on the code, much more rigorous quality control, and much lower overall output.

If however I was working in an ordinary business making a CRM system then the stated priority I imagine would be something like "we want a balance between productivity, reliability, maintainability" etc. This explicit definition of the context and priority sets the scene for how the code will be written.

I've never worked anywhere where it was explicitly stated, across a range of parameters, what the code should prioritise in terms of security/reliability/performance/maintainability/time to market/quality etc.


We're in full agreement.

I'd also add that in contexts where there are hard safety/quality/reliability requirements, having individuals or teams or project team leads responsible for a bunch of conflicting objectives and constraints often produces situations where there will be pressure to make visible short-term progress, at the expense of increasing longer-term risks of issues that are less visible and difficult to measure.

> If however I was working for Nasa on code that was running a rocket launch system, then hopefully it is stated to all programmers working on the system that reliability is priority one.

To avoid this, responsibilities should be structured so there's a different team responsible for QA whose review and approval is necessary to proceed for go / no-go decisions. The QA team responsible for the QA function should be independent from and isolated from the delivery team doing the work, and the pressures and influence of the delivery team's management chain.

In contexts where there's a lot of potential for harm, ideally the QA team shouldn't even be part of the same org (e.g. QA by an external public-sector industry regulator with the power to withhold or retract licenses to do business from companies that cannot demonstrate their products and services are safe), to make the QA team less susceptible to pressure.


> This is exactly the right way to build this because none of those "terrible sins" matter when you have no customers and your only goal is to get something working as fast as possible and every second spent making things nice is a waste of time and money because if the business fails then every second spent making things nice was wasted.

There's a lot of truth to this, but I would also like to see companies, and possible even individuals in the most negligent cases, be held liable for damages that come to customers when security breaches happen.

We wouldn't build a bridge with that attitude: "Just scribble whatever on those plans! We need to get this thing built right now! None of this matters if the bridge doesn't exist and people aren't driving across it!" For the same reasons we wouldn't do this with a bridge, we shouldn't do this with software, although to a lesser extent.


You would build a pontoon bridge with that attitude - that's a floating type of a temporary bridge that gets built when it's very important to quickly be able to cross a body of water.


>> when security breaches happen

Again, that must be defined as context and priority.

Priorities:

* deliver as fast as possible

* security matters

These two priorities are somewhat in conflict but it's still important to state them, then developers know where to focus.


Somehow code quality has become a topic completely divorced from product quality.

Your users don't care how clean your source code is, but they definitely care if it's slow and buggy.


> Your users don't care how clean your source code is

They do care about bugs and new features though, and bad code quality will lead you to more bugs and slower shipping of features in the medium/long run. At least, that's how I define good code.


I have seen people strive for cleanliness from the get go, but that has resulted in garbage outcomes and overabstraction.

The customers are unhappy. I am unhappy. Everyone is unhappy.

I prefer "messy" simple code that works and is easy to understand. Yeah sure I put too much logic into the controller. I could put it into a dozen different files. Fight me.


yeah, the number of times I've cleaned up some code because it was "just wrong" and, as part of making sure it had test coverage for visible impact, identified a "oh, this has been flaky at a customer site for years, but no one had a good angle on it"...

That's also pretty much the only hope of getting product buy-in on this kind of thing, showing that "no really, there was this particular customer impact". Or "by doing this upgrade in a timely manner, we're not screwed by no longer being able to buy old hardware because new kernels/libraries support the modern replacements"...


Chasing code quality and code-quality-adjacent metrics often results in buggy apps.

Copying and pasting a line of code can become a future bug when someone doesn't refactor a line of code. But building some overwrought framework to avoid ever repeating anything can introduce classes of bugs that are much much harder to solve because of the layers of indirection.

And depending on what vertical you're in, sometimes releasing something is the best path to a more reliable app. If you're writing crud apps where 99% of the complexity is defined by business rules, getting a V1 out so you can learn all the places where the business rules were improperly captured gets you a better result than bikeshedding endlessly about having the most polished version of those broken assumptions.

tl;dr: sometimes "worse code" leads to a legitimately better end product for your users.


> building some overwrought framework to avoid ever repeating anything can introduce classes of bugs that are much much harder to solve because of the layers of indirection

The development of an engineer:

1. newbie - follow the rules because you're told to

2. master - follow the rules because you understand them

3. guru - break the rules because your knowledge transcends them

I often see code that follows the rules right into a swamp. Your example is a fine illustration of this.


I like to say that "users" includes the people working with (using) your code in the future. It stretches the normal definition of user, but I think it's a good point.


You can change the definition, but you can't change the fact that those "users" aren't paying you.


Every coworker I’ve had who thinks this way has left a minefield of gotchas and inscrutable interdependencies for the unfortunate developers who come after.

Yeah they “got it done” but we spend 80% of our time fighting fires and the 20% left on new development takes ten times longer than it ought to because zero thought or care was put into anything other than “it works for me”.

This to me is the difference between engineers and programmers. Programmers can get something done and out the door, but engineers can build something that is easy to iterate on and easy to reason about.


> Every coworker I’ve had who thinks this way has left a minefield of gotchas and inscrutable interdependencies for the unfortunate developers who come after.

I mean, then they weren't good engineers? Nobody said that approach is good.

But I've also seen enough for my share of engineers that knowingly write buggy code that eventually blows up in someone's face because that code was simpler and turned out elegant that way. Code simplicity and reality don't always go hand in hand. The startup graveyard is filled with businesses with otherwise great engineers that lost sight of the customer's actual experience.


It’s also littered with the bodies of companies that failed to keep up with their early initial development speed because their development team cranked out two years’ worth of “whatever works” and walled themselves into a corner.

In my experience, that happens way more often than teams failing to produce value because they’ve spent eons polishing something to perfection.

On the other hand, our industry’s culture of not taking the time for anything to be built a little better means we have an enormous number of seemingly-experienced engineers who lack the understanding of how to write well-built software even if they are given the time. Which leads to individuals concluding that time spent cleaning things up is a waste because they end up with something worse and more complicated afterward. So they don’t invest in learning this skill, and the cycle repeats.


You are correct that good code does not translate directly into revenue, but it affects it indirectly e.g. through ease of future development, maintenance, and fixes.

If the thing being written is not going to be updated at all, then, sure, quality is not important.


It is worse than them not paying you. "You" (the company) are paying them. That means you want to minimize the amount of time they spend on the software without getting further returns of some sort.


The point I was making was that if your product is good, even if the source code is terrible, you can still keep selling it as-is and keep making money. You won't be able to improve it easily, but at least the current state is producing value. By contrast, if your product is bad, then you're going to miss out on revenue until you fix that, no matter how awesome your codebase is.


YouTube has more than a couple of people who pull apart tools and see how they tick, then speculate on how what they found informs the users' likely experience with this model over another one.

I wish we had someone doing that with code.


Your users won’t but your colleagues (and future you) definitely will


Your users definitely do, but the signal is usually so attenuated that people can pretend that they're mad about something else other than code quality.


Yes let's happily dive & swim in the sewer of mediocrity that is the modern software industry. Our hardware keeps getting better and better and our devices become slower and slower, while the apps keep glitching and crashing at an ever increasing rate.

It's like the fat acceptance movement "it's OK if you're plus sized, or plus plus plus sized, or I guess multiply exponent factorial sized".

But it's really not OK. Not just apps but even our operating systems have turned into layer upon layer of unfinished and half forgotten features that sum up into something literally worse than what random natural selection wrote in our DNA. That's right. Random chance, throwing crap at the wall is better than our "software engineering".

Be better.


If you haven't seen Jonathan Blow's talk about preventing the collapse of civilisation, you are probably his soulmate.


Checked it out and that's about right. I wish he had more concrete ideas about how to start on this journey. Zig is nice, but it seems to be of modest aspirations.


In terms of quality, hardware also suffers from problems due to increasing complexity. Errata sheets of high-end SoCs can be rather intimidating on their own already.

And it is this complexity which drags down performance as well. If a smartphone app is nothing more than a glorified web browser showing some heavy, javascript-riddled abomination, you don't need to wonder why these things are sluggish and memory hogs to boot. Not all apps are like that, but you get the idea.

But to lighten the mood a bit: https://www.youtube.com/watch?v=gWVmPtr9O0g (Titan 2 demo on SEGA Mega Drive / Genesis)


I think there's a pretty wide range between 3 and 4 that is worth exploring. Maybe the real problem I have with this article is that once you peg 3 as "good enough" and 5 as unachievable, there's a whole mess of interesting quality levels squeezed into the 3..5 range.

If we peg 3 as "good enough to ship and stand behind it", then I'm immediately thinking about getting the code to somewhere in the 3.5 range. After you ship, you're in a good position to revisit some of the decisions you made. Everything's fresh, and you can go in and squash some bugs that you know are in there, expand some tests that you knew weren't as thorough as you wanted, and trim some of the crap that you realize you don't need. Maybe it's time to do some refactoring, now that you have the big picture. Maybe it's time to chase after some performance improvements. Maybe it's time to make the integration tests faster and better.


Agree. I always try to remind myself:

- make it

- make it work

- make it fast


Make it work

Make it right

Make it fast


Yeah even better!


Make it last?


Excellent addition.


Reasonable engineering decisions depend on context.

POC-quality code that doesn't have clean boundaries and is tightly coupled isn't necessarily a problem if it is an internal detail of some application or library that is cheap to change in future if necessary, and its impact is localized. As long as it works, if there aren't any forces that cause it to be revisited, maybe it can be left to be low-quality forever, without any further impact.

Where things get concerning is if the cost and coordination required to change the design in future grows over time or becomes effectively impossible. E.g. if the POC-quality stuff ends up being propagated internally throughout the codebase over time as developers make changes and add it into more and more places -- maybe it's within the control of a single team to fix it, but if left unchecked the effort grows from a few hours' work in one place to something requiring planning, systemic refactoring, testing, and dedicated effort over a period of months.

Or, worse, if the POC-quality poor design has ended up polluting system interfaces between components owned by multiple teams or multiple organizations, so removing it would become a multi-month or multi-year coordination process between groups of people with different priorities, requiring a V2 release, deprecation of V1 & migration.


These are the wrong yard sticks.

Here's another person making a dangerous analogy between code and a goal with a fixed end date. A paper that has been graded is done. A book that has been published is 99.9% done.

Code that is no longer being touched is not done; it's dead.

I have a five year plan for every tree in my yard. You can't rewrite trees, and there's a maximum rate at which you can refactor them. So there's what you can do now, what you will do next, and everything beyond that is educated speculation. You can't control it. You can't control the elements or disease or accidents.

So I know what I want to do, and I know how much I will do in the spring, and how much I'll have to delay until next year. And next year, or the year after, I'll step back, look at the whole thing again, and make a new plan, that might not look too much like my current plan. It all depends on what the other forces acting on my projects get up to in the meantime.

Like the trees, you can't control your coworkers, you can only influence, steer and remove. If you try to exert more control, you end up with a tiny little tree. And the dirty little secret with those is that the tree still does largely what it wants, and the skill is in making what actually happened look like it was on purpose.

If you want a big happy tree, you have to focus on the irreversible decisions, and let a lot of the little shit go (for now), and sometimes try again later. If all goes well, the only person who thinks the end result is a mess will be you. A layperson will think it was all going according to your plan.


> Code that is no longer being touched is not done; it's dead.

Scripting code I wrote in 1998 worked in 2005 and still works today. Javascript I wrote 5 years ago, works today. Language choice matters as much as how it's executed. I assume VMware running a vm from 2008 is still running somewhere.

If it's not being executed, it's dead. That's a big difference from the "always needs to be maintained" assumption.


> Code that is no longer being touched is not done; it's dead.

The goal should be to write code not needing maintenance.

Four weeks ago I contacted a coworker to ask about some routines he wrote 5 years ago. He said he hadn't touched them in 5 years. The code has been tested continuously in the interim. His old code worked perfectly for me the first time and it saved me hours.


Code that hasn’t been touched gets forgotten, even if it’s not accumulating new known security holes, or new performance or correctness deficits from not leveraging newer APIs.

It’s basically abandonware that is waiting for one major problem to render it obsolete. I don’t entirely agree with npm and GitHub ranking projects by recent activity, but they’re not entirely wrong either.

You can always be clarifying variable names or shoring up docs. Updating dependencies and keeping track of APIs without necessarily changing the fundamentals of the project.


Different levels of code quality are important for different teams / projects. Teams that are still discovering the domain and defining patterns should aim for a lower quality so they can iterate more easily. In this mode, knowing that code was written quickly and is fine to throw away / reshape is critical. Aiming for Very Good is likely to be a waste of time here.

In other projects, the domain is clearer, or the system already has well defined patterns that should be followed. In this mode fast iteration is also possible, but it's because the code is clean and follows strong patterns making it easy to understand. Good Enough code here is quite likely to slow the team down as they grapple with needless bugs and code that's hard to decompose / refactor.

The most important aspect of quality is that the team defines the level of quality that's needed for the project or the work being undertaken, and they deliver to that. Have the conversation up front about what level of quality to aim for and why. Then the team is on the same page, and everyone can move forward with the same expectations.


The problem for the last 4 decades has been getting people to admit that we were 'building one to throw away'. In the last 2 we've tried a couple of different tricks to get things done anyway, with varying degrees of success. But that's all external-facing problems.

The internal facing problem is getting a team to agree to differing quality gates for different parts of the system - the absolutely knowable and the arguably unknowable parts should not be written with the same mindset if you want to maintain velocity. If you get lucky with the org chart you can fake some of that quality diversity via code ownership, but that's a rough approximation at best. People seem to prefer picking something static and not thinking about it too much, rather than having to reason about every feature. I'm curious to see what we try next to deal with this.


I've never seen a PoC that was allowed to have the time to be cleaned up properly to make it to the Good Enough phase. Management types tend to want to take the PoC and move it directly to production and assume you're incompetent if you push back.


In my experience it's often the _developer of the PoC_ that goes "oh, this will just need a little bit of cleanup" rather than clearly communicating "this PoC has validated risks X and Y, but we still need to mitigate risks A and B and the current implementation has taken shortcuts which introduces risks D and E".


That’s why I try to make my POCs at least 80% as good as production in terms of code quality. Much easier to fix that last 20% later than if you had started at 20% and have to fix 80%. And usually it doesn’t take longer, you just have to have more intentionality with the changes you are making.


"Quality code", in my experience, often means "code that looks like how I would have done it". In other words, it's usually pointless nitpicking and you're better off not engaging in it. Of course, there's some convergence on this topic because certain programming influencers have successfully pushed their opinions onto many people who choose not to have opinions of their own. That happens in every field, because many people find actual independent thought hard or scary.

There are some things that genuinely matter such as minimizing repetition, using variable names that are clear/easily searchable with "find" (meaning without tons of false positives) and not writing undebuggable code if you can avoid it[0]. I also think performance matters even if it seems fast enough on your machine. In my view, you shouldn't use Integer instead of int in Java unless you absolutely have to because Integer wastes resources creating an object containing an int and dramatically increases cache misses[1]. But in general, it isn't worth worrying about unless you can actually come up with a coherent explanation of why your preferred way of writing code will make the software perform better or be easier to maintain. Of course, the only absolute rule in code is that there's always an exception to every rule.
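The boxing cost is easy to demonstrate. Here's a minimal, hypothetical Java sketch (class name and values are made up) of why `Integer` is heavier than `int`, and of the identity pitfall that comes with it:

```java
public class BoxingDemo {
    public static void main(String[] args) {
        // int[]: one million primitives stored contiguously (~4 MB)
        int[] primitives = new int[1_000_000];

        // Integer[]: one million *references*, each pointing at a separate
        // heap object with its own header -- iterating chases pointers,
        // which is where the extra cache misses come from
        Integer[] boxed = new Integer[1_000_000];

        // Boxing also breaks reference equality outside the default
        // Integer.valueOf cache (-128..127):
        Integer a = 1000, b = 1000;
        System.out.println(a == b);      // false: two distinct objects
        System.out.println(a.equals(b)); // true: same int value
    }
}
```

The `==` result assumes the default `IntegerCache` size; the JLS only guarantees caching for -128..127.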

[0]: I'm generally in the "C/C++ macros considered harmful" camp especially when they resemble functions and feel similarly about anything else that makes the code execution path less than straightforward to follow.

[1]: I have a strong suspicion that OOP itself is an anti-pattern and that the entire paradigm is a wrong turn that needs to be abandoned. It's weird because I had a favorable opinion of OOP before I learned what it is in college but it tripped my brain's BS alarm. But I've never worked in a large enterprise environment so I haven't actually seen it in practice enough to fairly evaluate it.


I'd like to see us start measuring the quality of libraries by how difficult it is to trace through them from client code.

I've worked with a few too many instances of code golf where the resulting code requires too many brain cells to comprehend. If I wanted to dedicate 5% of my attention to 50 different libraries, I'd need 3 more brains to do it, but most libraries are written that way. Some seem to think they're entitled to 10%, or more.

Show me a library that's a snoozefest to figure out why I put in 5 and got out false when I expected true. That's the one I want to use.


Ironically I don’t really agree with your list of things that “genuinely matter”. I would pick entirely different things.


This is a great excuse to repeat one of my favorite quips, from the late Jim Weirich (from memory, but this is at least very close): "half-assed is OK as long as it's the right half of the ass."


Okay advice for day-to-day, but, horrible advice to take over the long term. Just Good Enough isn't going to improve your skill, it's going to keep you exactly where you are.

Your code is a distillation of how well you understand the problem and how it's being solved. Confusion usually means either the requirements are not well-understood, you still have unknowns, or you simply don't understand the problem/solution well enough to express it to both humans and the computer fluently. All of those involve thinking more and getting more information.

Really, I write the best code I can given the circumstances so I don't have to keep coming back to the same section of code over and over. I want to solve it as well as necessary and move onto something new.

Also, why is the tech industry so weird in how it continually feels the need to degrade the importance of technical skills? Is it seen as taboo that there are still large differences in individual programmer skill?


I think it is super hard to make a world where only the best developers are working.

You need huge numbers of average developers to keep all the software there is running.

Just like in the army, the average Joe can be a soldier, because there will never be enough “best of the best” to have an army of only special forces.


no, but until recently, programmers were not known for their social skills, and as such, differences in individual skill levels were not handled in an emotionally mature way, resulting in unhealthy, bordering on toxic, environments. it's not taboo, but it's maybe unsavoury to some


> Good enough code is a nice middle ground between implementing a feature fast and maintaining the code quality.

For something to be "good enough" it still has to be good. This feels like evil propaganda aimed at the poor souls who work for cash-strapped and inexperienced entrepreneurs.

Implementing a feature fast is no excuse for writing crappy code.

There are many sets of constraints to satisfy when you're writing code. I agree chasing "perfection" is pointless, but too often you see inexperienced people rationalizing their shoddy work. If you're excusing yourself from bothering with crazy optimizations that have little to no business impact, fine it's good enough. If you're excusing spaghetti, you're the inexperienced person I'm talking about. The "good enough" example from the article sounds like spaghetti.


> Take a look at the infobip-spring-data-querydsl library.

That looks like an absolute nightmare and not an example of very good code.


They have a `FactoryBean`! I always assumed that was just a meme. Must be a serious concept then.

https://github.com/infobip/infobip-spring-data-querydsl/blob...


That's what I thought as well. This is the kind of overabstracted code Java gets a bad reputation for, and I would not want to maintain it.


One of the biggest problems when discussing code quality is that there are almost no objective standards. What looks like "good well-named" variables to one person is "overcomplicated garbage" to another, and there's nothing to inform us on which person is correct.

The closest thing we have is "does this code do what the user wants it to do". To me, this is the only question that really matters.


It’s true that it’s fairly subjective but that doesn’t mean we should abandon all judgment. Food is subjective as well but still you will get most (not all!) people agreeing that Nobu is better than McDonald’s.


My personal standard is "can I understand it by just scrolling through the code, without working on it?" And here it's clearly not met.

There's a lot of factory code which doesn't convey any information, and very long chains of folders, which doesn't help comprehension.


I once saw a 5000 line file of shit-tier code making a business something like a million bucks cash per day.

It was a single huge function, called from cron every 5 minutes. No locking to prevent concurrent runs if it took longer than five minutes to execute. No exception handling. One giant nearly incomprehensible everything-function. Global variables. Bugs everywhere.

Easily hundreds of thousands of dollars of net profit per hour (some hours).

Since then I never worry much about code quality in my prototypes. Build one to throw away.
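For what it's worth, the missing lock from that story is only a few lines in most languages. A hypothetical Java sketch (the lock path and job body are made up) using `FileChannel.tryLock` so an overlapping run exits early instead of stepping on the previous one:

```java
import java.io.IOException;
import java.nio.channels.FileChannel;
import java.nio.channels.FileLock;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class NightlyJob {
    public static void main(String[] args) throws IOException {
        Path lockFile = Path.of("/tmp/nightly-job.lock"); // hypothetical path

        try (FileChannel ch = FileChannel.open(lockFile,
                StandardOpenOption.CREATE, StandardOpenOption.WRITE)) {
            // Non-blocking: returns null if another process holds the lock
            FileLock lock = ch.tryLock();
            if (lock == null) {
                System.err.println("previous run still active; exiting");
                return;
            }
            runJob(); // the lock is released when the channel closes
        }
    }

    static void runJob() {
        // ... the actual every-5-minutes work ...
    }
}
```

The OS releases the lock even if the process crashes, which is the main advantage over a hand-rolled PID file.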


It’s great until there’s some new regulation or customer requirement and it can’t possibly be added to the monstrosity, and so you lose those millions until you can rewrite, which takes months.


Rewriting 5000 lines doesn’t take months. I actually ended up refactoring it in under 24 hours to make it about 10x more reliable and performant (after I put out the immediate fires that had me looking at it in the first place).

In general, I agree. I don’t write code that bad, even for prototypes. That said, I worry a lot less about being super meticulous DRY and best practices in my prototypes that in 90% of cases will never touch millions in value. Done is better than perfect.


Done is not always "better than perfect". A bridge that is done isn't better if it collapses due to poor engineering and kills people.

All software is not life or death. But software can be something people come to rely on.

If I choose (unknowingly) to rely on software not done well and it bites me, I personally would rather not have relied on it at all.


Upfront, I mostly agree with the post.

However, I may be a tad famous about striving for perfection in my code. [1] [2]

Why? If "good enough" is good enough, why do I go further?

For a few reasons:

1. I want the industry to be more professional [3] where it matters, and I need to set an example.

2. The kind of software I write already has alternatives, so mine needs to be far better to get adopted. And it does. [4]

3. Also, I am just a perfectionist. It's a problem.

Anyway, "good enough" is good enough most of the time; just make sure that your situation doesn't require more.

[1]: https://gavinhoward.com/2019/08/why-perfect-software-is-near...

[2]: https://git.gavinhoward.com/gavin/bc/src/commit/22253a3a6/ME...

[3]: https://gavinhoward.com/2022/10/we-must-professionalize-prog...

[4]: https://gavinhoward.com/2023/02/my-code-conquered-another-os...


The first file I looked at in that codebase has a “goto” (as well as some IMO hacky-ish logic). Now, I'm not going to say this is never right (though it probably isn't), but it takes a lot of hubris to claim you are “striving for perfection”, and I just don’t see perfect code using goto, sorry.


The gotos are for proper cleanup on error.

All of the options for doing so in C are awful; I just think goto is the least bad option. Otherwise, you get if statements that keep nesting, deeper and deeper.

And what's the hacky-ish logic you're talking about?


There are often 2 consumers of your code: users and developers.

If a developer can write error-free binary code that improves performance (as seen by the user) by 0.1%, BUT the next developer (or even the same dev months later) can't adjust the code without all hell breaking loose, then that code is basically awful.

Side note: add your newline at the end of your files before commit! Ugh


the issue is collaboration on software implementation. this is extremely hard to do well, think lkml.

the typical collaborative implementation environment is a disaster. we are baking a cake, slowly over weeks and months. we aren’t sure why or who’s at fault, but we are absolutely sure it looks awful and tastes worse.

the only silver lining is that the solution to this disaster is hiring more collaborators. jobs and ubi all around.

microservices obviously didn’t quite work, but were an idea in the right direction. we need to collaborate at a higher level than code. we need to work in a bakery together, but each bake alone.

then we can easily evaluate the quality and pace of each other. there is no ambiguity of individual responsibility.

when my cake is bad, i should feel bad. i should look around the kitchen for better cakes, and ask their baker what they do that i don’t.

when my cake is bad and i don’t care, my boss should move me to less important cakes, or out of baking all together.


I wonder what's the median life expectancy of a piece of code.


At least for my own code, I'm pretty sure it's an inverse relationship to quality.

The masterpiece I fretted over for endless hours is guaranteed to be obsolete within 1 year.

The crappy hack with the comment that says "@TODO make not be garbage sorry" is cursed to live on for eternity.



Only the good die young


It varies pretty wildly, but in my career a lot of code has lasted over 10 years, and in some cases over 20. Also, because code is sort of infinitely copyable, some code just keeps moving from product to product. Or some design moves from product to product. The longer you work in programming, the more you have a set of tools that you can reach for over and over, even if you're effectively writing it from scratch each time.

I think good code is vital to the software development process. It doesn't have to start out good but it should end good. Because you're going to be back at this code over and over. A little bit of effort up front can save you a lot of time in the long term.


Hard to know. I know there's plenty of code that never gets into production. Otoh there's code I wrote in 1996 that's still in widespread use.


Two years, maybe.


The worse it is, the longer it will linger.


Two beefs with this article. One, it creates a linear scale for what are probably multiple orthogonal concepts. Accounting for even one more axis would make the article much more interesting and useful.

Two, I don't think 4 is necessarily more effort than 3 (for some values of 4 and 3). What does take a lot of time is if different engineers have different ideas of what 3 and 4 are but lack the perspective to understand each other and choose a common standard. Everyone can move faster if everybody follows static typing, because you can rely on assumptions you otherwise couldn't. And everyone can move faster if we don't worry about any of that static typing crap. If engineers take different approaches, everybody will move slower and probably hate their jobs as well.


Not to mention Perfection is defined differently by everyone. It's all opinion; someone's "perfect" code is another person's shit code.

And it's not even just different among people. Along the time dimension, your perfect code can become bad later: as requirements change and things evolve, you may realize that what was once (in your opinion) perfect code was actually a very bad way to incorporate a certain feature.

Think of perfect code as a controversial literature novel. There is literally no point in building perfection unless your goal is only to build perfection for yourself rather than for a customer/audience.


Unless the code is running on critical systems that put human lives at risk, good enough is the perfect amount of good.

Getting things done is more important. Excluding the above scenario, either you will make mistakes, or you are not tackling meaningful tasks. And that's okay. Allocate time for clean-up when there's less ambiguity. The more you explore the problem, the clearer the issues become.

First implementation will always be bad, so throw it away and build something that's good enough to get the job done and uses what you've learned as you explored the space.


Code quality is for developers, not end users. It's fine for code to be atrociously structured if literally no one is ever going to read it, even in medical devices, as long as it works.


As another poster has said, code you no longer touch is dead. Usually, software needs to be maintained, and modifying badly written code is a nightmare scenario. That means requested features pile up in the backlog while the resulting mess grows slower and buggier over time.


> As another poster has said, code you no longer touch is dead

This is a poorly considered sentiment.


In the visual effects industry, project management software categorises shots according to their readiness: just started, nearly ready, good to go, etc. 'Good enough' is one of those categories.


At the time of initial integration, sure. But if it's only good enough when you write it, it's on a rapid path to becoming technical debt - and that's just not good enough.


Reading the headline evoked the question: Good enough — but for what? Glad to see TFA (and the comments here) delve into that question, albeit without using those words.


> Take a look at the infobip-spring-data-querydsl library. Although it sounds perfect, it’s not.

It sounds like a steaming pile of garbage to me.


I mean yeah definitionally sure, but also have some goddamn self-respect.


1) Make it work.

2) Make it right.

3) Make it fast.


Then some jackass who doesn't know an std::int32_t from a *std::basic_string_view demands you cut off somewhere around step 0.99.

Meanwhile if you 2) 3) 1), you get cut off around step 2.99 and everyone's life sucks less.


The TLDR for me: be pragmatic.

Don’t get caught up in dogma and ideology in search of the “right” answer. Don’t be a “fanboy”. There is no such thing as the “right” or “best” solution, because every engineering decision has tradeoffs.

You have to make rational, pragmatic decisions based on the facts on the ground.



