Hacker News
Anti-patterns and malpractices in modern software development (2015) (archive.org)
232 points by barry-cotter on Jan 5, 2020 | 151 comments



The best programmers I know write good code within the constraints of a fast-paced business environment, and they are also able to clearly articulate trade-offs to stakeholders. They don’t complain about business constraints and they put out stellar work.

“business people” most often do understand what makes the money flow through their company (and into engineers’ bank accounts) and they typically make decisions in order to maximize good business outcomes. Sure they don’t understand the technical details of an engineer’s work, but the engineers are far from understanding all of the things that must fall into place besides lines of code in order for their paycheck to clear. It’s not always vital to the business to have a good code base. Also, for better or worse, if an engineer is not of the variety I listed above, they are replaceable in the eyes of the business.

(fwiw I’m a director-level engineer so I’m technical but bordering on being a “business person”)


In my experience, simply doing the work without complaint is one of the most inefficient relationships with a programmer. Sometimes changing a minor business constraint can have an outsized impact on development costs. Oftentimes these decisions are even made by business people who think they are making things easier but are actually doing the exact opposite.


> simply doing the work without complaint is one of the most inefficient relationships with a programmer

There is a difference between complaining, voicing opposition and articulating trade-offs less technical people may not be familiar with.


I have worked in a team where programmers constantly questioned business requirements - without having any understanding of the business.

Eventually the customer got tired of having to defend how many statuses he wanted in a combobox or what process their company wanted to use.

By that I want to say there is definitely such a thing as a programmer not willing to consider the business point of view for a second and thus being impossible to work with.


IMO that sounds like a bad team.

It is vital to understand business needs. A lot of the time businesses want changes that are hard, sometimes very hard, to implement. It is our responsibility to work with the customer to come up with a solution. The customer must accept the consequences that come with their requested change.

Unfortunately, some things are impossible to do. It is never good to be the bearer of bad news - I have myself had to say that we cannot do it this way, but I have always given a thorough explanation of why it is impossible and suggested workarounds and alternatives. I think sometimes this behavior can be the difference between the perception of being impossible to work with and being helpful.

Truth is, it takes a lot of effort to balance your work against the ever-changing set of requirements coming from your stakeholders.


Sometimes it’s hard to understand the business. I see this happen when developers are considered fungible resources to implement a spec. If their only visibility into the business problem is through a proposed solution, then yeah, questions about how many statuses are supposed to be in the combobox are to be expected.


It was not an issue of ambiguity. The specification was clear. It was an issue of the programmer assuming the customer is wrong all the time and basically wanting the customer to be wrong. The statuses were right, but the programmer fought to merge a few of them into one because it looked "simpler" to him.

By questioning I don't mean asking honest questions or trying to understand, but being convinced that there must be a mistake and forcing the customer to defend every tiny choice.

And sometimes the customer wants a radio button instead of a combobox and just wants to have it without spending 20 minutes in a meeting defending the choice.


On the other side I've had customers ask for things that are impossible or against best practices and their own interests.

For example, password requirements. Many companies are not aware of the latest best practices and guidelines and request that we block browsers from generating or remembering passwords in password fields. Or they give us a list of draconian password requirements that are clearly debunked by the latest guidelines.

Or they ask us to basically break the HTML spec.

Like I tell my four year old son, I'd love to get a million dollars but I'm not going to get it because I stamp my feet and demand it.

Trade-offs need to be made on both sides. I agree that an engineer that doesn't provide alternatives when a customer demand is not technically feasible should be corrected. However the customer should be prepared to choose from those alternatives or change their demands as well. The first step to success is shipping and the most common reason for software projects shipping late is a failure to start. I'm not against firing bad customers.

It's much easier to ship without a contentious feature and figure out how to add it later than to delay shipping (for certain features within reason, of course -- please don't ship half-baked security libraries or aviation control software).


Yes, but that is a different situation.

I was specifically talking about a situation where the requirements are possible and not ambiguous, and where the engineer starts out "knowing" the customer is wrong and keeps knowing it despite it regularly turning out to be the engineer's lack of domain knowledge.

There is no trade-off between these two situations.

I don't know what you mean by breaking the HTML spec. Browsers follow it and we work either with it or around it?


Ah okay, that makes sense.

re: HTML -- I've had strange requirements come in that required us to work around the HTML spec in order to get the UI to work the way they wanted. We eventually found a better solution...

but the point was that not all requirements are golden; sometimes the customer is plain wrong, ignorant, and stubborn.


Yes, I agree with that. It is not that the customer is always right, but it is not that the customer is always wrong either. We have to look at the requirements, give them the benefit of the doubt, ask, and listen to the answers to figure out which are wrong.


The problem here is that business people don’t understand the details of the business requirements or how they are modeled in the code.


Sometimes things have been trundling along on so much fuzz and handwaving that nobody actually knows how things are supposed to work when it gets down to formalizing them into a set of rules that could be implemented in code. Many such cases.


If someone doesn't understand something it's because it's either not been explained well enough or that person doesn't have the background necessary to understand it. Either they need more information or they shouldn't be making decisions about it.

If the problem is that the issue hasn't been explained well enough then the developers need to work harder to better explain what the problems are and why they're problems.

If the problem is that the business person doesn't have the necessary background to understand the issue then you have a huge problem. That person is going to be a problem at every step. I don't envy anyone in that situation. In the past 25 years of development I've never actually encountered that situation though.


I’d put forward that there’s one more possibility: the person’s paycheck is contingent on them not understanding.

Unfortunately, some business people are compensated as a direct function of revenue (growth), even though they have to pull engineering levers to obtain that delta in revenue.

For a lot of these people, once they’re convinced that doing X will lead to growth, no explanation will convince them that it isn’t worth it.


Engineers always have to give time estimates for their work. This is their way of complaining.


Honest question - is this a part of your hiring pitch? "Our engineers don't complain about business constraints" is a red flag if I ever heard one.


... and is quite a funny statement. If "our" engineers don't "complain", then someone is not listening. Or you have really bad company culture / recruiting / engineering. The DUTY of an engineer is to complain about business decisions, as they are quite frequently orthogonal to what makes sense to do and in which order.

Making sockets the first priority and then marking the TCP stack as "unneeded"/postponed is quite frequent from a business standpoint. Not complaining about situations like this just hurts the business.


I think it depends on what constitutes the “complaint”. If it’s just a bunch of griping with no suggestion for an alternative, it’s not very constructive. Clearly engineers shouldn’t just build whatever they are told, but not caring enough about the business constraints is also bad. Like it or not, at most places the software serves the business. Yes you want to engineer things so they are maintainable, performant etc, but if it doesn’t meet the business needs then none of that really matters. I’ve worked with engineers for example that outright refused to build things that are clearly needed by customers, for “technical reasons”. The result: good software that no one used.


On the other side, you have software that doesn't have the promised features, is late, etc. The business side should always work with the engineering side (which is more often than not the exception), and the engineering side should never reveal to marketing what might be possible (so they sell what they have instead of selling the... clouds... ehm... need better wording... vapors).


> Yes you want to engineer things so they are maintainable, performant etc, but if it doesn’t meet the business needs then none of that really matters. I’ve worked with engineers for example that outright refused to build things that are clearly needed by customers, for “technical reasons”.

The core problem is that too often business needs are dogmatic and management is too entrenched to fix it.

This is the reason why so many large-scale IT/digitalization projects end up colossal clusterfucks: no one on customer side has the spine to say "our processes and workflows were made in an age where people did things by hand, let us modernize them while we're already at it".


Depends. To

> clearly articulate trade offs to stakeholders

may be the constructive cousin of complaining.


> write good code within the constraints of a fast paced business environment [...] It’s not always vital to the business to have a good code base.

You're on the edge of contradicting yourself.

If the business environment is fast-paced, i.e. your development process for ingesting features and specifying behavior is ad hoc ("we are agile/do scrum, except we service requests from customers immediately as they come in instead of waiting until the next sprint"), it's absolutely vital to the business to have a good code base.

Being able to make rapid changes without breaking everything requires a good code base. If your code base is full of landmines, you cannot send an average coder out to implement some average feature, because they'll necessarily have to (re-)teach themselves about half the idiosyncrasies of the codebase before they can do anything.

It's fine to have a shitty code base if your business environment moves stodgily. ("You want a new form in the UI? OK great! We need a scope of work, a formal specification, we will charge you by the hour, you will need to pay us to recertify the entire product. Let's set up a meeting with contracting and legal next quarter to iron out the details.")

In my experience, the people who implement features quickly in shitty codebases do so by breaking stuff in non-obvious ways. This includes myself. This is fine for my organization because we only ship once every six months, and we only ship anything after two months of exhaustive manual testing. If the organization just decided, "we're a fast paced organization now, now we ship every two weeks," we'd lose all our customers fairly quickly as a result of the precipitous decline in the quality of our product.

I suppose a fast-paced business environment + shitty codebase is also OK if the goal is just to sell the company as soon as possible. But that's a personal objective (and a selfish one), not a business objective.


I've worked with such people and on applications made by them.

All that is usually done at the expense of others - be it teammates or future maintainers.

It works great for MVPs and such, but falls flat on its face when you have to add new features to a three year old application, whose original creator moved on a while ago.


This is exactly the situation I'm in at my job, and I'm the only developer left from the MVP team. That team was headed by a consultant whose scope was supposed to be smaller, but who convinced management that they'd get results if they gave him the project (and a helluva paycheck), at a time when they would not budge or listen at all to their in-house leads (I was a junior then) on the same issues. To his credit, he helped us put out a decent MVP, but he booked it when the company wouldn't give him an even more insane pay raise to stay on full-time. It turned out the MVP was not scalable and was designed for a different case than the one our company had, so it suffered perf issues the moment things got serious. But management was okay with all this because they got the MVP they wanted to show off to potential acquirers, and lo and behold, the company got sold, that management disappeared, and I'm now stuck behind walls of intervening actors, unable to address the fundamental issues in the design.


Interesting. I assume nobody figured that it's high time to rewrite the damn thing?


Every (successful) rewrite I've ever done was in secret.

I once spent 6 months fixing someone else's MVP. Every bug fix exposed 2 more. I begged for a do over. Nope, we've invested too much money to abandon.

Once I understood the problem well enough, I banged out a full rewrite in two weeks. I had just one bug before release (one of the trig equations had the wrong sign, facepalm slap).


>> “business people” ... and they typically make decisions in order to maximize good business outcomes.

You had me until this statement. It's been my observation that "business people" are usually self-interested manipulators who hire people they don't need if it makes them look important, game/trick OKRs, hide all failures from their superiors (and would fire you if you went behind their back), often damage brand trust/equity to try to get good metrics for a quarter (e.g. Google), and willingly let massive problems (e.g. security holes a la Equifax) fall on the floor rather than risk being associated with the problem by helping.

Not all, but more than half.


I would not call it 'complaining' to point out a business constraint, I would call it part of the job.

To make it easier there really are only 2 business constraints: Time and Money. Everything can roll up to one or both of those.

I'd also like to note that asking an engineer to estimate a project is work. That work (creating the estimate) takes both time and money. The more time and money engineers spend on creating an estimate, the more accurate (and valuable) it will be.

Here. This is an estimate for almost every business software project: Between $0 and $100MM. That took me no time and as a result, has no value.

The other end of that spectrum is an estimate with perfect accuracy and full value. Meaning -- we know, to the penny, what it is going to cost. An estimate like that is going to be EXPENSIVE and might even eclipse the cost of the entire software project.

Knowing that creating estimates takes time and money, the question inevitably becomes "Well, how much will it take to create the estimate?"

The answer comes down to: How much do you want to invest in it? Or better put: How important is estimate accuracy to you?

Business people decide the value of things; rely on that. "What is the dollar value of this to the business?" is a legitimate question to ask a business person. If it's worth $1, then use my estimate above. If it's worth $1MM, then the next question is: what percentage of that inevitable end value is it worth spending on the estimate?

Most of the time the number that comes back is not derived from some magical corporate formula, it's gut feel. Which by the way is essentially arbitrary, which I wouldn't call useless. But let's call it what it is, it's a number that is palatable to the business to lose.

So, given that number, we now know how much time to spend. Your accuracy will directly reflect that amount. If we want to be more accurate, we'll have to spend more money.

Business people are responsible for the money and engineers can not manufacture more time.

Rely on both of those.


I think what you mean is: the best programmers I know add value to the company.

As a freelance programmer I had to learn this. Companies don't care about the language, the VCS, the LOC, spaces or tabs, or anything unrelated to them.

You just add value to the company by creating great software they can use. This also means software that can be maintained and updated.

Some people in this thread complain about the "They don’t complain about business constraints". But when you start complaining about their business constraints, how can you add value to their business? You can only add value when you think about how you can create something that works best for them within those constraints. And of course this includes discussing constraints when you are sure you have a great idea to make them better, but most of the time other people thought better and harder about those constraints than you did.

Edit: maybe it is unclear what you mean by "business constraints". In my mind you are talking about the company's business, but others might think about programming business logic.


> “business people” most often do understand what makes the money flow through their company

My anecdata for the last 15+ years doesn't support this claim.


Mine goes back almost 4 decades, and understanding the business and tradeoffs is largely hit and miss with executives at all levels (also ran two startups in the days before the web). I've dealt with CEOs who understood everything and those who knew less than nothing; also seen engineers who understood nothing in business or development or both and those with a great deal of knowledge who could suggest changes and improvements that made sense to everyone. But I think the negative versions were far more common than the positive ones in every kind and size of company.

Sometimes dumb people can still be lucky and make money (one place was dumb all around but their business was an unassailable monopoly, and another had 40 years of dumb customers who agreed to terrible annuities, so failure would take decades to happen no matter what was done) and sometimes bad decisions or understanding in business or tech lead directly to failure.

There is no single type of success or failure in either business or tech.


This sounds exactly like the kind of engineers I never want to work with. That's just a breeding ground for yes-men with zero interest in challenging the status quo. Nothing personal, but you are the type of director I would also not like to work with. The management style described sounds like a walking, talking red flag.


I am just a fair programmer.

But I create great insights. Whole new ways to see a problem. Making the intractable and complex simple and obvious with a novel mental model.

For instance, I replaced a high end CAD/Illustrator style print production workflow with a simple form. Answer some questions and out pops a production plan.

It took me 4 years to earn that insight. The entire industry modeled the flow of information exactly backwards (https://en.wikipedia.org/wiki/Job_Definition_Format). It took me forever to overcome my initial misunderstanding of the problem -- I mean really, how conceited am I to think everyone else is wrong and no one else thought of this before me -- and see the problem like the production workers do. I had to bridge the domains of print production and software design.

Alas, it's been a while since I've had a gig which could afford that kind of investment. At my last gig, recommenders for fashion (which turned out to be a complete hoax), devs were slaves to JIRA: "just do 80%" and move on, throw everything away every 2 years, unless it's "legacy", in which case just suffer with it.

I wonder if there's much room left for dinosaurs like me.


> It’s not always vital to the business to have a good code base.

Many engineers may cringe at this... but this is true. Code is only as good as it is able to fulfill a business need: that is, help the organization meet its strategic goals.

Technical constraints - maintainability, security, performance, ... - are really subservient to that principle.

That is, they aren't unimportant. Rather, their importance is relative - or a priority - only insofar as they help achieve the higher goals.

... but this isn't a universal maxim!

> Also, for better or worse, if an engineer is not of the variety I listed above, they are replaceable in the eyes of the business.

An employer/employee relationship is very much a business deal between humans. An employee sells expertise and potential for a fixed amount of time in exchange for a fixed salary. As with any deal, such business relationships extend beyond simple monetary compensation. Especially because employees subjugate themselves to the authority of their employer, making it inherently a skewed relationship.

When humans invest their time and effort in a venture, they hope to get a due sense of satisfaction and self-actualization from doing so. That's where aligning values makes all the difference.

What many business people fail to see is that employees don't identify themselves as resources, liabilities or labor. They see themselves as partners who hope to make a meaningful contribution, and they are willing to do so provided that they are treated as such: with a due sense of empathy and respect for their opinions, what motivates them to come to work each day; and so on.

If they don't get that, two things might happen. Either employees will walk out rather quickly, or - worse - they stay and will only give you the bare minimum of their effort - living for the weekends - wasting both their own and your time.

It's true that companies aren't obligated to keep employees forever or that they can be let go if both parties want different things ...

... but publicly stating that any employer/employee relationship is "replaceable", without that nuance, is the shortest route towards sinking any hopes of attracting potential hires who care about your business at all.

> They don’t complain about business constraints and they put out stellar work.

Given the above, humans will also factor in the moral and ethical side of what they are doing. And they can, will and should criticize those business constraints if those don't align with what they feel is "good" or "morally sound".

Business constraints aren't just limited to implementing some weird feature. That's just the output of a business constraint. Neither are styling rules or automated tests business constraints. Nope, it's far more than that. It's workplace culture as a whole. It's the shared values that are espoused on a day-to-day basis by everyone.

It's normal to have some level of conflict or tension as the net result is an ever-moving compromise.

Expecting that employees will never complain and only put out stellar work pretty much makes any healthy discussion moot. It's a non-argument as "stellar" is only meaningful in the eye of the beholder. All it does is create the perception that employees are seen as automatons.

Again, when employees pick up on this, they will either walk out or - worse - stick around and waste your and their own time, doing only the bare minimum to cash a paycheck.

In the short run, the bottom line of a company may reflect the financial benefits of optimizing for efficiency. But in the long run, not taking the human factor into account isn't sustainable at all. Neither for the employees, nor for the business owners.


>> They don’t complain about business constraints and they put out stellar work.

> Given the above, humans will also factor in the moral and ethical side of what they are doing. And they can, will and should criticize those business constraints if those don't align with what they feel is "good" or "morally sound".

My thought after reading the article was that the author seemed to consider writing good code, maintainable code, and having a strategy prioritizing long term over short term needs as moral imperatives. Or at least the words and terms used were ones I often hear and use when discussing ethics and morals.

Which made me think, do many engineers consider these things as ethical decisions, rather than just business choices?

Personally I enjoy being involved in strategic decisions, both providing input and opinion - but I also see the value in adhering to a strategy once decided even if it isn’t the one I’d advocate for if it was up for discussion. For ethical decisions I’d reason differently - it’s not ok to break laws or destroy lives (etc etc) just because a business decision said so.


> My thought after reading the article was that the author seemed to consider writing good code, maintainable code, and having a strategy prioritizing long term over short term needs as moral imperatives. Or at least the words and terms used were ones I often hear and use when discussing ethics and morals.

I don't think the article's author said this. In my eyes he meant it more like "at least give us SOME breathing room to do our best engineering work!" -- meaning give some time for refactors, for changing tooling, for stepping back for 1-2 months and just looking at the whole thing in general.

Creativity needs time for pondering and reflection. Chasing ruthless and practically impossible deadlines while having to systemically ignore everything that made you a good engineer in the first place makes for a toxic employee<->employer relationship. And one that rarely ends well.


Not every business model is the same.

When you only have short-term goals, sustainability doesn't make sense.


@ismejeff, sorry but you are (probably) a business person.

- is an IDE open on your desktop right now?

- how many commits have you done this year?

Talking about code and software isn't anywhere near the same as being a programmer. If I am wrong about you, I apologize; you are the exception.


I think as a profession we've been bad at quantifying and communicating the value of clean infrastructure and good development practices to the business. Why should they pay you to make things "nicer"? One of your big customers is threatening to churn if you don't deliver this new feature! To them, it sounds like you are asking them for a ton of money for better feng shui - pseudoscience with unclear benefits - when they can directly see the benefit of delivering features to the customers.

Instead, I think we need to learn how to measure and quantify the value of a good codebase and good practices, and communicate that to the business. Google's error budgets are a step in the right direction here, but I think there's a lot more that can be done.


> To them, it sounds like you are asking them for a ton of money for better feng shui

I've used the analogy of kitchen / food hygiene. It doesn't necessarily improve the taste of the food but it prevents cockroaches and death by food-poisoning.


Well put, thank you.

Not to mention being shut down by the health department, losing money, and requiring new staff to learn non-standard ways of doing things, which takes longer AND causes an over-dependence on long-term employees who are the only ones who 'know how to do that'.

Better feng shui? No, more like 'not cutting corners'.


Those are quantifiable costs though. If your restaurant is shut down or your reviews are bad, you can say how much it'll cost the owner. The owner can then make a more informed decision about how much to invest in cleaning.


I've been brainstorming this with colleagues at my job and tossed out the idea of including both an immediate development cost as well as a "long term cost multiplier" for every requested feature. So far, no one has liked the idea or the phrase. I can't really blame them.

But there must be some quantity or term, with a nice memorable ring to it, that could get this across and at least let business people make decisions with full information at hand. Granted, any such numbers delivered in that guise would be wild guesses. But still, better than nothing.

Edit: However, I've since realized that the notion of "story points" and "velocity" already sort of provides a way to get this across if used carefully.


We need to quantify it though. Thinking about long term implications and talking about it is good, but that doesn't break us out of the bubble of communicating it to the business. After all, they don't know what a story point is (heck I don't really, and I've been doing this for a while).

As the article points out, management can be short-sighted - they want to deliver on their roadmap this quarter or half (because that is what they are paid for).

I don't know what the answer is, I struggle with this. Testing has some promise here: if we can deliver (failing) tests early on that demonstrate progress toward the roadmap, and use those to show progress to the business and help them hedge risk, we can use tests as a forcing function for quality - testable code, especially for business processes, is usually good code.


I sometimes give two estimations in one: up-front cost of development and long term consequences. Could be something like,

"8 points + three support cases per week" for making a new feature without fully thinking it through.

Or

"32 points + misconfigurations every other month" for including a complicated feature that can more easily be abused.

Or

"We can go with our current best guess, and the effort of that is maybe 2 points + potentially 12 points later if it turns out we were wrong. Contrast this with 6 points to be certain of doing the right thing now."

In my experience, this is highly appreciated in environments that take you seriously, listen to you, and understand what "estimation" means.


Your competition gave an estimate of 2 points, did the work, got paid and moved on. Now you are supporting the feature...


If that was the right call, then great! If not, hopefully someone learned from their mistake. If nobody did, I'm probably not interested in that business anyway; it sounds like it is a recurring source of friction.


How can story points be used for that? To me that's only relevant in the short term. Can management be made to understand that velocity can be too high?

Edit: Expanding with my previous thoughts on this - as a software architect I've explained my role to management by saying it's like I work in an agile team, but the waterfall is in my head - someone needs to keep the architecture together and drive it towards green pastures where it keeps great flexibility for developing future anticipated features without giving up stability.


My thought was that story points can be used once they are turned into velocity by dividing by time. And you can say that adding this feature half-baked now, in a short time, will inevitably drop our velocity by 10% in the future.
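As a back-of-the-envelope illustration (the sprint length, baseline velocity, and the 10% figure below are invented numbers, just to make the cost concrete):

    # Hypothetical team: 30 points per two-week sprint, taking a permanent
    # 10% velocity hit from shipping one feature half-baked.
    velocity = 30
    sprints_per_year = 26
    drag = 0.10

    lost_points = velocity * drag * sprints_per_year
    print(f"~{lost_points:.0f} points/year lost, i.e. {lost_points / velocity:.1f} sprints of work")
    # -> ~78 points/year lost, i.e. 2.6 sprints of work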


Like a huge afterburner that is a dead weight after 3 seconds...


Isn't that what tech debt is about? You write code that works right now, but you know it should be refactored because of long term costs. Keep track of it, put it on a wall, quantify it (days, story points, whatever) and talk to your manager about what tech debts need immediate resolving, what are the nice to haves. The worst thing you can do is to build up tech debt and be silent about it.

Tech debt needs to be handled, or else your software won't be maintainable in the long term and new features will cost much more and put the system at even more risk.

And there is a nice slide about it: https://www.slideshare.net/cairolali/resolving-technical-deb...

(In case you are wondering, the circle means, you're past any maintainability. Go to start, rewrite. If your client or manager wants to avoid it, he/she should plan for refactoring).


I hear you on tech debt, but in my conception, it is just one portion of "long term cost multiplier". Because even a perfectly coded and tested feature adds weight to a system and slows future development.


"Growth". That is one word that makes managers look like deers in headlights.

At least the first time you tell them. Maybe the issue is much bigger than just communication. Perhaps they also realize that they don't work for, or will get rewarded for, far future profits and benefits.


> However, I've since realized that the notion of "story points" and "velocity" already sort of provides a way to get this across if used carefully.

Not really.

What you're missing is called risk: risk with respect to missing the deadline, and long-term risk when doing a sloppy job.

Business people understand risk, yet software developers go through all this effort to talk about story points and velocity.


You're right, a large part of the problem is that there isn't a quantifiable way (outside of things like bugs/kloc, which are themselves suspicious) to identify a clean code base. I believe that is because much of it is personal preference, the same way some people find single-family homes optimal while others like high-rise condos. One person may think that adding some "clean python" code to an existing pile of bash scripts "cleans" it up, while another may think that keeping the code base in a single language is cleaner while trying to minimize churn and bugs.

Bottom line, I frequently question much of what is being sold as developer productivity these days. It seems in my experience "this code is ugly/garbage/etc" is simply a translation of "I don't understand much of this code". Engineers that truly understand the code base just go and fix it, usually with commits that are either small one- or two-line bug fixes with long commit messages describing how the system is failing and what is actually being fixed. Alternatively, they have "cleanup" patches with extremely high removal/add line ratios, which replace large chunks of difficult-to-understand code with small pieces that are easier to understand, with long commit messages detailing where the redundant functionality was. Very rarely are these changes to a different language/toolkit. But the evidence of skill in both these cases just about never includes follow-up patches to "fix" new problems introduced by the previous round of fixes.


Yes, I’d say Chesterton’s Fence often looms large in these cases. While “build one to throw away” is often ideal it is contingent on deep understanding first, and even then there is significant risk of overlooking major flaws in a new untested design.

In practice I think the best approach is to primarily refactor incrementally in service of ongoing work; management doesn’t need to approve this, you just build it into your estimates as an IC. If technical debt is being taken on call it out early and loudly and secure cleanup budget in advance if possible.

Beyond that recognize that big rewrites are rarely justifiable. Even if a code base is complete shit, it’s most likely not worth it to rewrite. As an engineer this reality may put you in a lose-lose situation. Learn to separate your personal feelings and innate desire for order, and make an objective call on whether the code is workable and start looking for a new job if not.


But you can just rewrite that complete shit and keep your current job. And rewriting is very fun. I did it multiple times and don't understand those who are ready to switch jobs or keep working on bad code. Isn't it better to embrace your desire for order than to fight yourself? There are some people who don't care; good for them.


> outside of things like bugs/kloc

Since #bugs is strongly correlated with loc, I'm thinking this might turn out to be very low in signal -- so much so that you might instead just roll a die to determine the "code quality" of each project, if using this metric is the alternative.
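A toy simulation of that point (all numbers invented): if raw bug counts mostly track project size plus discovery noise, the normalized ratio barely separates a cleaner codebase from a messier one.

    import random

    random.seed(0)

    def bugs_per_kloc(defect_rate, n_projects=5):
        # defect_rate: assumed latent defects per KLOC (not directly observable)
        ratios = []
        for _ in range(n_projects):
            kloc = random.randint(20, 400)                         # size drives raw bug counts
            bugs = kloc * defect_rate * random.uniform(0.5, 1.5)   # discovery-rate noise
            ratios.append(round(bugs / kloc, 1))
        return ratios

    print("cleaner:", bugs_per_kloc(2.0))
    print("messier:", bugs_per_kloc(2.5))
    # With this much noise the two sets of ratios typically overlap,
    # which is the "more noise than signal" worry about the metric.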


You may need to look at bugs/ksloc as more of an enterprise measure than one for a single project. When it gets down to it, this is the measure that will tell you if your organization is 'improving'; at the end of the line, reducing your defect rate is one of the most valuable things to accomplish.


Assuming it's more noise than signal, increases or decreases to that measure don't tell me anything about actual improvement.

I'm all for reducing the defect rate, I'm just not convinced optimising bugs/kloc is meaningful.


Well anything done directly to influence bugs/kloc is bound to throw the whole thing off. I think the parent here was mostly interested in discovery rate, which also can be influenced by hiring/firing test and QA, or for that matter gaining/losing customers that use the product.

And everything being mostly equal, it could be a reflection of code maturity more than anything. The more mature product may have a higher bug/loc ratio only because more of the bugs have been found.

Which is why I mention there are metrics, but they all seem to have major problems.


Benefits of an oil change are also invisible until it’s too late, yet no one has problems understanding it. Can we learn something from that field, maybe? I’m grasping at straws here.


Code is too reliable, in many cases. If you don't do your 100 and 500 hour PMs and change the fluids on a bulldozer, then it will pretty reliably break down or render itself into a smoking million dollar chunk of scrap iron, in spectacular and unavoidable fashion.

Software can limp along broken without anyone noticing for far longer.


One of the reasons that oil can more or less easily be validated is that there is a spec for the engine and the spec can be shown to be violated when oil is not used.

Directly measured, the engine cylinder walls will grow wider and compression will be lost. Indirectly we can do an oil analysis and see the metals deposited.

One of the reasons we do not understand the cost of low internal quality code is that we do not know the "spec". Defect rates is one proxy measurement, but it's hard to know if it's just "bad programming" or "bad internal code quality" ...


> One of the reasons that oil can more or less easily be validated is that there is a spec for the engine and the spec can be shown to be violated when oil is not used.

no, it's because the engine will seize and fail if you don't change the oil, and it's readily apparent to everyone when it happens.


But the benefits (and downsides) of doing or not doing an oil change can be measured. You can run tests on engines that skip or delay oil changes and see what the effects are and measure them in dollar amounts, like the cost to replace or repair the engine prematurely. Sure, YMMV, but you can get a rough estimate. What is the rough cost in dollars if I don’t do this refactor?


Definitely. Developing a unit of account is key to solving this problem. The problem was never that businesses didn't "get" technical debt, the problem is that they could only measure it by the volume of developers' whining.


If the business lacks sufficient capital reserves to continue operating through that customer churn then issues of clean infrastructure or good development practices become completely moot. The business will fail before the engineers have time to improve anything.


This is why I prefer to turn the problem on its head and ask the business how much time they want me to spend working on 'quality' (i.e. refactoring/tooling).

If it's on a spreadsheet that their boss can see saying that they asked me to spend 0% of my time on quality then I'm ok with it. If they can justify putting that zero down then presumably there IS a good reason.

If they're going to put me under implicit pressure to deliver as many features as possible and then cry havoc later when they get bugs then I'll lack sympathy.

Product managers are often resistant to the idea of tracking this number, however, since it provides a new angle through which they can receive flak from above.


What I meant by that example wasn't an existential threat (I agree if you're there it's already too late), but to contrast how the business people are thinking. As in, they're thinking "here's $1MM of ARR that's likely to go away to our competitor if we don't ship this feature" when the engineer is asking them for time to clean up the code.

I'm not saying that clean code is a waste of time - it is incredibly valuable! I'm an engineer, and I value good architecture a lot. I'm saying that to get organizational support for it means that we have to get better at demonstrating that value to non-engineers, which will both hold us as engineers accountable to working on the highest impact things (is delaying features worth the potential short term ARR loss or slower customer acquisition? We should have a good answer for that) and help the business balance long and short term prospects.


"I suspect that most discrimination that happens against older programmers is not, in fact, age discrimination, but is, in fact, ‘wisdom discrimination’."


As long as hiring and technology decisions remain largely hype-driven, having someone around who says "we don't need k8s, hadoop, machine learning, and AWS with some big-ass Terraform-driven constellation of supporting services for our expected data, workload, and use case; look, give me one middle-weight server, or one heavyish VM somewhere, and I can show you how to do all this with awk, sed, cron, and maybe a few lines of Perl or Python. We can have it up fast and anyone who can read man pages and a README can admin it, and anyway it's so simple and the workload static enough that once some initial kinks are worked out it'd probably run in a closet without issue for so long you'd forget it was in there—so, longer than this project is likely to be alive, anyway" isn't really something you want unless you plan to ignore them. For their fellow developers, that's résumé poison, you need "real world" experience with all this crap for the next job hunt. For the product manager, owner, and sales, you can't baffle stakeholders or investors or prospects with fancy bullshit when there isn't any fancy bullshit.
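For concreteness, a hypothetical sketch of the "few lines of Python plus cron" alternative being gestured at; the log path, field names, and schedule are all made up for illustration, not taken from any real system.

    #!/usr/bin/env python3
    # Hypothetical nightly rollup: newline-delimited JSON events in, CSV summary out.
    # Run from cron on a single box, e.g.:
    #   15 0 * * *  /opt/rollup/rollup.py /var/log/app/events.jsonl > /srv/reports/daily.csv
    import csv
    import json
    import sys
    from collections import Counter

    def rollup(path):
        counts = Counter()
        with open(path) as fh:
            for line in fh:
                event = json.loads(line)
                # "user_id" and "action" are invented field names
                counts[(event["user_id"], event["action"])] += 1
        return counts

    if __name__ == "__main__":
        writer = csv.writer(sys.stdout)
        writer.writerow(["user_id", "action", "count"])
        for (user_id, action), n in sorted(rollup(sys.argv[1]).items()):
            writer.writerow([user_id, action, n])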


It’s always fascinating to me how people can get to this point where they eschew new technologies just because there is an older way that works fine. Yes, having a tried and true pipeline is important, but it shouldn’t be surprising that companies are made of people who want their company to push the envelope to further evolve their systems. Someone was the first person to use Python for a production project, and that evolution has to happen somewhere, with real applications to test it.


It’s always fascinating to me how some people churn from tech to tech, burning most of their time reading docs and fighting with configurations, just because a new technology might be a bit better than the existing, perfectly adequate system.

It’s not that things can’t be improved, just that there is so very much low-hanging fruit that has never been implemented at all in a usable manner. We should focus on the huge gains to be made there before the tiny incremental gains to be made by endlessly iterating.


There was a blog posted on HN or reddit (I forget) talking about how they were moving the tech behind their blog yet again, and going through how they did static when it was popular, reactJS, node, and a few others.

I read thinking to myself "my god, it's just a fucking blog engine, why are you spending so much time rewriting it in shit, realizing what you wrote it in has downsides, and thinking the new shiny wouldn't?".

It's like someone actually believes there's a tech silver bullet just over the horizon, rather than a new way of doing things that eases some burdens and creates others.


True, it is an amazing waste of time. But then, so are all human endeavors. :-)


You really missed the point. It's not "there is an older way that works fine". It's that there is a simpler way that can be built in less time, takes fewer developers to maintain, will have fewer bugs, performs better, and is more reliable, because it's simpler than all the crap that people want to use. But people will insist on using all the trendy crap because it's trendy, FAANG is using it, it looks good on a resume, etc, etc.


If all those were actually the advantages of established technologies, you would have a point. But anything that does something useful is inherently complex. To someone who's been programming for 25 years, sure, sed/Perl/Makefiles are straightforward and well-established. But to everyone else, to build or modify them, first you have to go back and learn Perl and sed, which are both riddled with layers of historical baggage, gotchas, and divergent dialects.

You can't actually get all of those things: speed, team size, low defect rate, high performance, high reliability, simplicity, as they are at odds with each other. Generally, the rule is "good, cheap, fast: pick two."


The post you originally replied to listed specific technologies that purely add complexity when introduced to a project. Hadoop isn't replacing something equally complex, it's simply adding complexity. Now if you simply can't do with anything else, and you're at the point of rolling your own distributed filesystem and job scheduler and zookeeper and all the rest of it, then sure, try Hadoop, but that's not what we're talking about. Note that I can pick on Hadoop because the hype cycle has peaked there, but this applies to plenty of newer techs where it hasn't.

You literally can get more of all the good things in my list by removing unnecessary complexity from a project. And yes, there really are some technologies that, most of the time you see it being used, the entire project would literally be better off with literally nothing in place of it. And yes, people still prefer to use the trendy thing.


Fair enough, if replacing that complex of a stack with Python/Makefile/cron is even possible, then it's over-complicated and you can refactor the complexity away. Unless of course you will soon need to implement a feature that requires bringing back in that whole stack. But I suppose this is why hindsight is a valuable asset when you get a project in maintenance mode, as you can see where the peak complexity of the project actually ended up being.


I understand your point, but a minor retort: you get the same sort of "gotchas", historical choices, baggage, etc. from new-ish code that does "too much" and can be considered black-box "magic" without delving into it, reading docs and tutorials, and sometimes just plain having experience with it.

I think a potential overarching point that started this thread: Tech should evolve, but it should also stay steady and simple to use. Too much of the "new" stuff is not just a good natural evolution of people solving problems, it's people adding layers of complexity and fluff where there need be none.


> Too much of the "new" stuff is not just a good natural evolution of people solving problems

I guess it depends on what you're lumping in with "new stuff". I've experienced the opposite quite a lot: refusal to acknowledge that a new constraint or condition has modified the parameters of what a solution is trying to solve, dismissed as one-off events or something to monitor; then 6 months later, by the time the problem was isolated, it had become the new norm, and monkey patches were required because of a refusal to acknowledge that the environment has become more complex.


Unless you plan to work at that one job until you retire, you gotta think about your resume. You don't want to be the guy looking for jobs with Visual Basic and Excel macros experience.


Your job as a developer is not to improve your CV; you need to solve a problem, and you should use the best tool for that job, not what looks cool on your CV. You should use cool shit on your own time, not force the latest cool stuff onto your work project and then leave a year later and let the rest deal with the fallout.


I think your resume looks really good if you explain that you solved complex problems with Visual Basic because it was the right tool for the job given the circumstances. If it’s the only thing you know, maybe not.


Companies push the envelope via their product they are selling or marketing. If they are not selling consulting services then, in a vacuum, using e.g. nodejs instead of Java is not pushing the envelope in any meaningful way. If using nodejs actually lets you push the envelope in your actual product then yeah that's good.

But in my experience companies are full of people thinking there are lots of silver bullets out there. The silver bullet is the new way to do it. The wrong way is the old way to do it.

Related, lots of people jump straight from being 100% naive about performance and architecture to thinking FAANG is the only way to go. They don't stop to think that there are a multitude of midpoints between those two options.

If your business model is to be Facebook or bust it might make sense. But I see a lot of pretty boring and simple businesses resorting to really over-architected solutions. Because the developers are so inexperienced that they think you need eventually consistent distributed microservice soup to serve 10,000 requests per hour.


Software engineering would have been a higher-quality profession if Python had never gotten into any production project. Now we need to patch over the mess by adding mypy, but we could have used a language with explicit types right from the start and it would have saved a lot of bugs in the category of 'every typo is a new variable'.

This so-called evolution is basically creating messes everywhere, and code bases that only use one language and one build system are the most understandable ones. Also, once upon a time a programming language was a thing in which you could write anything of your fancy. If that is the case, 'multiple programming languages' is itself somewhat of a weird thing.
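To make the 'every typo is a new variable' failure mode concrete (the function and variable names below are invented): a misspelled assignment target silently binds a fresh local in Python, where a language with explicit declarations would reject it outright; static checkers only partially close the gap.

    def total_cost(prices: list[float], tax_rate: float) -> float:
        total = 0.0
        for price in prices:
            totla = total + price * (1 + tax_rate)  # typo: binds a brand-new
                                                    # local; "total" never changes
        return total  # silently returns 0.0 for every input

    # CPython runs this without complaint. A pyflakes-style linter flags "totla"
    # as assigned but never used, and mypy catches the reverse typo (reading a
    # misspelled, undefined name) before that code path ever executes.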


Software engineering would have been a higher quality profession if we were still punching machine code into cards! Now we need to patch over the mess, deal with all this operating system nonsense, and not to mention this entire internet hype.


I once had a coworker who, when working to pointlessly extend the lifespan of an obsolete external tool, wrote a translation layer. Then he stopped and re-implemented it in Rust. This added nothing to the tool except time to the release schedule.

I don't eschew new technologies. But increasingly I need to see an expected value to justify the uptick in cost.


With wisdom you come to the conclusions you have. The value is somewhere in between. You shouldn't mindlessly reimplement working tools, but you shouldn't keep patching old tools either. Somewhere in between, you project the current and future requirements. Sometimes you have the old tool on life support, sometimes you restructure parts of it, sometimes you reimplement it from scratch. You will often make a wrong decision and that is okay - you just learn that minimal work is often the best way forward unless you positively know otherwise.


Choosing technologies with higher long-term maintainability (support, operations, ongoing development) potential may be worth it in absence of strong software development culture and extensive engineering resources—which is often the case with customers of small consulting businesses like mine.

This does not mean eschewing new technologies, necessarily. Still, it is somewhat like investing in stocks: there may be signs allowing one to tell whether a new trend or stack is likely to go out of fashion soon (which may lead to increased maintenance costs), but it is often safe to assume that the older the tech, the longer it will remain in active use and easy to hire for, making it a sensible recommendation.

Goes without saying that it’s not the only metric, but I find it important, and not applying it at all a violation of customer’s trust.

Software-driven companies with stronger culture (including but not limited to FAANG), on the other hand, have legitimate reasons and resources required to prefer the riskier on average newer stacks—not coincidentally, their teams are frequently at the forefront of developing such tech.


> the older the tech, the longer it will remain in active use

You may already be aware, but this effect has a name:

https://en.wikipedia.org/wiki/Lindy_effect


Yes, I definitely agree there are other reasons to discount the new tech (maintainability, maturity, risk, etc.).


Sure, lots of newer tech has legitimate uses and reasons to choose it over other options in some situations, but there are a hell of a lot of projects out there that are way more complex than they need to be, often at UX cost in addition to development cost, because everyone making the decisions has incentives that don't align very well with UX or controlling costs. For every legitimately good use of a hyped new software technology, there are probably (I'm being conservative) three unnecessary ones. Sometimes these can compound into some real monsters of over- and mis-engineering.


Well, maybe if salary compression and inversion didn’t exist, and companies paid their existing developers based on market value instead of HR-enforced policies of “no raises more than slightly above cost of living”, incentives would be better aligned.

But as long as statistically the best way to make more money is by job hopping every two or three years, developers are always going to look out for their own resume and best interest.

No job is permanent. It is irresponsible not to be focused on keeping yourself marketable.


The older way wasn't working fine for the extremely limited audience where it didn't. It's not without merit, it does have a purpose, but many more people wish they had Netflix-scale problems than people who really do.


I've been fiddling with the Kaldi speech recognition toolkit over the past 6 weeks or so, which provides "recipes" for creating speech recognition frameworks (basically a combination of preprocessing/featurizing/training steps for various speech recognition models (HMM/GMM, neural nets, speaker adaptive training, etc).

These recipes are a bunch of bash scripts to glue the toolkit binaries together, all connected via Unix pipes.

With all respect to the authors, I feel that Python would have been a much better choice for this task. Bash is a verbose scripting language to begin with, not to mention much more difficult to modularize. I find it takes me at least twice as long to decipher bash scripts compared with their pythonic equivalent.

I think it's a good example of "old, tried and true" not necessarily being "best".
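As a minimal sketch of what one such pipe stage could look like in Python rather than bash (assuming Kaldi's binaries are on PATH; the file names are placeholders and this is nowhere near a full recipe):

    import subprocess

    def extract_mfcc(wav_scp: str, out_ark: str) -> None:
        # Roughly: compute-mfcc-feats scp:$wav_scp ark:- | copy-feats ark:- ark:$out_ark
        featurize = subprocess.Popen(
            ["compute-mfcc-feats", f"scp:{wav_scp}", "ark:-"],
            stdout=subprocess.PIPE,
        )
        copy = subprocess.Popen(
            ["copy-feats", "ark:-", f"ark:{out_ark}"],
            stdin=featurize.stdout,
        )
        featurize.stdout.close()  # let a downstream failure surface as SIGPIPE upstream
        if copy.wait() != 0 or featurize.wait() != 0:
            raise RuntimeError("MFCC extraction failed")

    # The win over bash is mostly structural: each stage becomes a named,
    # importable, testable function instead of one line in a long script.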


Incidentally this frustration with Kaldi is exactly why we made the Persephone ASR project in Python: https://github.com/persephone-tools


Bash and Python are effectively the same age. Python is 30 years old and Bash is 31.


True, but I don't think Python really eclipsed Bash in terms of popularity (or Perl, for that matter) until much, much later. I've always thought of Bash as the "traditional" approach to gluing binaries/pipes together.

Bash has been a staple of most Linux distributions for as long as I can remember, but Python only seemed to start being packaged as default in the mid-2000s (or at least, that's the impression I have. My memory could be failing me).


Bash is largely a reimplementation of ksh, which is just an extension of sh.

Pretty much anything in bash would be written much the same way in sh, which dates from, when, 1972?

So, no, bash and Python are not the same age, in practice.


Yes, indeed. But not all problems need or benefit from a modern solution.


I find this argument to be tantamount to saying English is fine, but nobody really needs it when there is French. Yes, French is perhaps older and much of English draws from French, and yes, English is more complex than it needs to be. But in the end, pretty much anything you need to say, you can say it in both French and English.

Most problems don't need a modern solution, but that isn't an argument against modern solutions for those problems. In the end, it comes down to, which would the group of people writing the code rather have, an old-style solution or a new-style solution? If both are equivalent, there's no reason not to choose the option that looks like it will be the future.


Well, the "older tech" is ridiculously cheaper.


I work mostly in the "webshit" and mobile industry. The last time, in a work context, that I saw a site (in this case, from a vendor) and went "goddamn, for what it's doing, that is fast, this is really nice to use!" it turns out it was... PHP running on some normal-ass single instance VM or server somewhere, serving mostly just HTML.


It's really a spectacular outlier how well-optimized PHP is, contrasted with how poorly designed the actual language is.


Thank Facebook for that. Likely, more specifically, Andrei Alexandrescu.


Not universally, no. For instance, C programmers are harder to find than Python coders, and the farther back you go the harder it is to find good people (COBOL, Fortran). The newer the tech, the more people there are riding in on the hype train to choose from.


There have been multiple submissions here on HN about how somebody managed to replace Hadoop with a well-crafted bash script calling several UNIX tools, and achieved a 235x speed improvement.

I think the posters you reply to meant exactly such cases.

For the record, I have entirely replaced `grep` with `rg` (ripgrep) in all my workflows. `rg` is newer, written in Rust, and utilises all CPU cores -- unlike `grep`. So I am not against objective and demonstrable progress.

But as another poster replied to you, many of us are against wasting huge amounts of time and effort on very marginal improvements just because the tech is new.

Tell me your tech offers 2x or more speed (or fewer defects) and I'll use it. But tell me "hey, let's use Kubernetes because... well... a lot of people use it", and I'll pay zero attention.


Eschewing newer technologies in general is clearly a terrible idea but it's not such a bad idea to eschew hyped technologies (particularly corporate driven hype).

Python is an example of a technology that grew relatively slowly and without a hype cycle.

IME hype in this industry correlates pretty strongly with crap.


That is true, but the trick is how to separate the hype from the gold, which is almost as much superstition as science. I would disagree that Python didn't have a hype phase; there were comics like this[1] after all.

[1] https://xkcd.com/353/


If you look at the Google Trends for Ruby on Rails (where the hype is clear), it looks markedly different from Python (a steady trend).

I remember that comic. I think it coincided not with anything special happening in the Python world; it was just when Randall Munroe learned Python.


Although I would say 2007 was close to the peak of Python bleeding "mindshare" due to the hype cycle of Ruby and Rails.


Very little money was ever spent hyping Python.

Python benefited enormously by contrast to Perl, despite being slower.


That's a function of when Python was popularized, where there was very little major corporate support for new open source libraries / frameworks. Ruby and Rails had their hype cycle around the same time, more or less on the back of a <50 employee company.

Nowadays, open source development has centralized far more on large public companies, so there's a lot more marketing effort being put on all the languages / frameworks / libraries out there.


I think there should be a middle ground. On the one hand, what you are saying is correct for many projects/companies and they might be needlessly burning money on tech. On the other hand, these solutions (devops, CI/CD, version control, etc...) were made for a reason. At a certain scale, the old ways break down badly.

Would you suggest that version control (git) is over-kill? You can still use FTP to upload your website.


Excellent point. Ironically, FAANGs, and especially Amazon, are quite conservative in adopting technology from other sources and are not hype-driven.


Or just Heroku with a Docker container. Use any language with an HTTP stack. Admin qualifications include: can slide a bar left and right based on demand for the service.


Honestly, for smaller teams just starting out, Docker itself might be overkill. For most popular languages, just `heroku create` is more than enough to get started.


Is it just me or does this article read as basically “everyone else is an idiot, executives (1), managers (2, 8), architects (3) and other developers (5, 6, 7) and only I can see the future and know what needs to be done.”


The article reads to me as "I didn't get my way and our code isn't perfect, therefore it's everyone's fault but mine and if only we'd embraced X, this would have all been avoided."


The author also equates "late nights" with hard work in point (2) which is a dangerous malpractice in itself.


Pretty dismissive take. The author presents a vision of how engineering can be more effective and protests against micro-management that gives them zero room for creativity or for putting out their best work.


History of computer programming:

- Having an accumulator (abacus)

- Accumulator with operations (calculator)

- Multiple registers with more complex operations (early computers with plugboards)

- Stored program computer (abstraction over the plugboards)

- Imperative programming (simple abstraction over operations + registers)

- Procedures, Functions, Structures

- OOP, functional programming, and beyond.

Why do we keep making layers of abstraction? Reusability.

We want abstractions that reduce effort by making code reusable.

But abstractions are only a tool, using the tool correctly is on the programmer.

Code is written once, read many times. Languages that focus on "easy to write" are a trap. Languages that focus on "easy to reuse" are what you want.


Abstraction isn't just about reusability; it's also about comprehensibility. I put a lot of effort into lower layers that abstract away complexity, not so that my business logic is reusable, but so that my business logic is easy to write, easy to comprehend, and (within the constraints defined by my abstraction) easy to modify without breaking things. This does require that the lower layers, the abstracting layers, are well engineered, with well-defined interfaces and behaviour, but it pays off, even if reusability is nowhere to be seen.


I highly agree with this. It's almost a necessity these days with a major push to hire "junior" developers at a discounted rate. By doing those lower abstraction layers, you allow them to write the necessary high-volume business code while reducing their ability to do something subtly stupid for the business/data. Usually this involves abstracting away a lot of the database transaction-type stuff.
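
A minimal sketch of what I mean by that lower layer, with made-up names and a placeholder `db` handle rather than any particular library:

    from contextlib import contextmanager

    @contextmanager
    def unit_of_work(db):
        """Business code goes through this; nobody hand-rolls commit/rollback."""
        tx = db.begin()          # 'db' and 'tx' stand in for your real connection/session API
        try:
            yield tx
            tx.commit()
        except Exception:
            tx.rollback()
            raise

    # the kind of code a junior then writes, with no transaction plumbing in sight
    def apply_discount(db, order_id, pct):
        with unit_of_work(db) as tx:
            order = tx.get_order(order_id)     # hypothetical data-access helper
            order.total = order.total * (1 - pct)
            tx.save(order)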

And on a side note: Never let juniors anywhere near the database design without supervision by the team/seniors. They won't learn anything that way, and they will inevitably make a mess.


In genetic algorithms, you select the best from each generation to breed the next generation.

I think this holds true in software development teams as well.

When you indoctrinate new developers, you should delegate mentorship to the best* performing teams, so new developers can assimilate good practices and develop an intuition for how things should work.

I think this is much more valuable than giving them grunt work.

*: just like with genetic algorithms, picking the right fitness function is everything. Good performance is about steady, consistent, sustainable development... rather than trading today's problem with tomorrow's problem.


> Languages that focus on "easy to write" are a trap

Reminds me of the time my company abused Scala implicits, which was possible because one of its libraries for mapping Scala to a relational database had examples of it.

---

Regarding re-usability: does that imply well-defined interfaces and private scopes? Does that eventually lead us to pure functions? Afaik, pure functions have maybe the highest ease/simplicity of reuse. Happy to hear your thoughts.


What a bunch of hooey. I've consulted on government programs with massive amounts of money and easy-to-meet deadlines, and the resulting software systems were still crap. Why? Because YAGNI (you ain't gonna need it) went right out the window and they ended up creating the most resume-padding, shiny-rock-encrusted POS ever.


> Some propose that the phenomenon of easy credit and low-interest rates make it economically nonviable for a company to act in its longer-term interests.

Wouldn't it be the other way around? High interest rates would increase the discount rate for future value?


I think the idea is that low-interest rates cause investors to push companies to adopt riskier behavior and that easy credit enables said riskier behavior.


I don't recall ever having worked on a large code base with a team of developers where everyone was 100% happy with it. Developers complain about everything. It's never good enough and never will be.

I believe this is related to our brains being hard-wired to prioritize negative phenomena and stimuli. Most of those projects shipped and were successful. Users did like using them. However, the developers often only saw the bug reports and the endless requests for new features. They never got a chance to talk to users and see how the software was being used and improving the lives of real people. They only saw the barbed wire and duct tape.

I had a young developer ask me once about refactoring: how do you get time to refactor your code?

Never. You will never find a person willing to hand you a paycheck to take a month to change absolutely nothing. It gives the business no advantage. But the bugs! the young developer says. To which I say, dear summer child: risk tolerance. The business is willing to take the chance that 1-2% of the time a user may get frustrated, encounter a bug, and file a report or complaint. If the company has to pay you to fix it later, that's fine. They can tolerate the cost.

You refactor your code as you go. After you write the code and your tests pass and you think you're done -- refactor first. Abstract away superfluous variables with function application, remove duplicated code, break up long branches with decision tables, etc. Fix that code up before you check it in.

As you get better your refactor steps will be much shorter as you internalize the data structures and patterns that lead you to clean, concise code to begin with.
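
As a small, hedged example of that "break up long branches with decision tables" step (the names and rates here are made up):

    # before: a branchy function that grows a new elif every sprint
    def shipping_cost(region, weight):
        if region == "US":
            return 5 if weight < 1 else 9
        elif region == "EU":
            return 7 if weight < 1 else 12
        elif region == "APAC":
            return 11 if weight < 1 else 18
        raise ValueError(region)

    # after: the policy lives in one table, and the logic is a single lookup
    RATES = {"US": (5, 9), "EU": (7, 12), "APAC": (11, 18)}

    def shipping_cost(region, weight):
        light, heavy = RATES[region]
        return light if weight < 1 else heavy

Behavior stays the same for known regions (an unknown region now raises KeyError instead of ValueError), and adding a region becomes a data change rather than another branch.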

When you write software commercially you need management that understands how to balance capital, people, and technology. And you need engineers who are pragmatic about their work and the real outcomes: the systems and the people who use them.


> You refactor your code as you go.

That's one of my most valuable self-taught truths! I fought with management and colleagues for years until one day, after a 4-day weekend, it just struck me: nobody can tell me anything when my PR contains both the desired feature (or fix) and the refactoring that slightly improves the status quo of the code.

This was ~5 years ago. From then to today, I hear complaints about my PRs about 2 times a year.


The word "refactoring" to software managers has the same ring as "prostate exam" or "speculum" to the rest of us. It might be necessary sometimes but Preferably Not Today and please stop talking about it all of the time.


Yeah, agreed. I stopped using it altogether when I am not talking with programmers. I usually just say "when adding this new thing you have to re-fit everything else so all parts play nice with each other", or "you know how you have to pull the engine out so you can fix the frontal suspension? that's what I have to do sometimes". Both of these are accepted pretty well by non-technical people.


While I agree with you, your description of refactoring scares me. It makes me imagine an over-engineered hot mess. I had a conversation with a junior engineer just today about valuing simplicity and avoiding abstractions until you're sure you really need them.

Because cleanliness isn't really the goal; the goal is meeting business needs, stability, and discoverability. Oftentimes abstractions create complexity, and that complexity hurts discoverability, potentially hurts stability, and can be detrimental if business needs change.

So I say abstract when it's clear that you need to, not before.

That's not to say I disagree with the sentiment of getting the code working and then going back over it with an eye for cleaning it up; I just dislike the sentiment that this cleanup should be about abstracting things.


I realize I said, "Abstract away superfluous variables with function application," which is not the kind of over-engineered mess you're warning us about. I'm referring to lambda abstraction, that is, replacing an expression with a name and closing over the free variables in it with the parameters to the function.

The reason for this is that assignment is a common source of errors in imperative programs. We should aim to use as few of them as possible, define them local to their use, and limit their scope.
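
A tiny, hypothetical example of what I mean (the names are made up):

    # before: 'discount' is a throwaway assignment living in the function's scope
    def invoice_total(items, customer):
        subtotal = sum(i.price * i.qty for i in items)
        discount = 0.1 if customer.is_member else 0.0
        return subtotal * (1 - discount)

    # after: the expression gets a name, and its free variable becomes a parameter
    def member_discount(customer):
        return 0.1 if customer.is_member else 0.0

    def invoice_total(items, customer):
        subtotal = sum(i.price * i.qty for i in items)
        return subtotal * (1 - member_discount(customer))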

I'm only talking about small, local refactors that make the code you wrote easier to understand and maintain.

Refactoring code to introduce new patterns or re-architect a module are better done earlier in the process and much more deliberately, I agree.


No one thinks their overengineered mess is an overengineered mess, that's kind of the point.


My two bosses keep asking me how the code I write for myself is so stable (it's Elm), and yet we're always fixing bullshit bugs in our company's codebases (they're JS and TS -- sorry guys, TS isn't good enough).

But when I try to sell them on the stuff I use personally, the response I get is "it will be too expensive if we have to hire someone for that," with no thought given to the cost of calling a dev on a weekend, with overtime pay, to fix a null reference that snuck into the codebase during the week somewhere. Or months ago. I'd say another developer is more valuable than clinging to classes of bugs that don't need to exist any longer.

We do what we can, but we can only do so much to make and keep our code stable; we're a small company with a small dev team, and one of the two managers likes to meddle in the code. We get by well enough, but despite all the evidence directly in front of them, I am simply not allowed to take the leap into something that would be better in the long run. It just doesn't matter to them.

Another one I get is "that language is too weird," but I've taken to just directly telling them that isn't a criticism and that I need a better reason. Of course it just goes right back to the hiring excuse from above. We aren't hiring and have no plans to hire anytime soon. We're small enough to move quickly, there's no giant organization to migrate into new tooling, none of that, I'm just basically being told that I'm planning too far in advance.



This problem is about as intractable to me as why I don't eat healthier. Maybe one day I will achieve one or both.


There is so much truth in this article I nearly fell off my chair. Especially #2 and #3.


> Oh sure, you can solve the endless little meaningless problems that plague the day-to-day life of programmers everywhere, but the source of these problems — defects at the engineering level — are effectively untouchable. Also untouchable are the systemic issues that cause each of these should-be-trivial-to-fix problems into 5–8 hour tasks.

That part of #4 is so true it hurts. Most of the software I use is full of brokenness and "WTF"s.

About a year ago: AWS Lambda supports Java. Sweet, should be easy. Oh, but the command line tools are broken in several ways you won't discover until you try, find they're broken, google around for a while trying to figure out what's wrong, then realize it's not your fault when you find the relevant unfixed issues.

About three years ago: Well I'm writing an Android app so this Google Maps feature should be a piece of cake, I mean, Google makes the OS so surely Maps is amazing on here. Except. Wait. No. That cannot be right. It can't. The Android maps SDK is missing features that the Javascript version has? No. That doesn't make any sense. Oh. Here's a GH repo someone already made to provide a convenient interface from Android Java to the Javascript maps SDK for this exact reason. I mean I guess I don't have to write it myself now but seriously, this was a fucking waste of half my day for no good reason.

A few days ago: Debian installer is flat-out incapable of writing Grub correctly. Several tries, every reasonable combo possible, boot fails with missing OS every time. This ain't my first damn rodeo—something's wrong. Think it's maybe a UEFI issue. Force UEFI. Now my video card doesn't work. Because it's not UEFI compatible? Why dafuq is that even a thing? Open case, reset bios so I have a display again, install again just to be safe, boot to installer's recovery mode, mount, chroot, install grub manually and in precisely the way I'd told the installer to do it the first time, and without changing a damn thing about the config the installer had just written, and it's fine.

This is basically how everything goes every time I try to do anything with software at all. Which is my job. So this is my life.

Software is terrible.

[EDIT] in fact I can think of one area where my tools are consistently as shitty, broken, and unsuitable for their intended purpose as software usually is: it's when I cheap out on power or (especially) hand tools to try to save a buck.


There is a related Jonathan Blow talk titled "Preventing the Collapse of Civilization" [0] that I came across recently from a HN post. You might be interested in watching it. Also, Casey Muratori's talk titled "The Thirty Million Line Problem" [1].

[0]: https://youtube.com/watch?v=pW-SOdj4Kkk

[1]: https://youtube.com/watch?v=kZRE7HIO3vk


I can relate to that a lot, and it's one of the reasons I tell people I'm a software developer that hates software.

I don't actually hate software, I hate all the shit that gets in between you and the solution to your problems.


OS installers are just the most insane Rube Goldberg machines ever. For 99% of the time they ought to just dump a static disk image on the partition and unionfs it with the rest of the system and be done with it. You'd have a system that is immutable, consistent, and transactionally installed. Updates could just layer the latest differential disk on there as a second unionfs. Rollback is to just unlink one disk image. Done. Simple. Reliable.

Oh.. but nooooo. These monstrosities -- for insane 1980s reasons -- have to update hundreds of thousands[1] of tiny little files and go through some truly insane logic.

The major Windows 10 updates (v1809, v1903, etc...) systematically fail to install on both my PC and my Laptop. I get a BSOD loop on boot, the update retries 3 times and then rolls back.

Now, wait for it: I can do a fresh install with zero issues, with or without an Internet connection on both platforms. I've been able to do a fresh install on both platforms with every version of Windows 10 since the first one back in 2015!

But I can't update a fresh install.

Not ever.

Not with an ISO download.

Not with the manual updater tool.

Not through Windows Update.

Not with any amount of prep work, or special flags, or whatever.

I've updated my UEFI firmware. I tried BIOS emulation. I updated my NIC firmware. I updated my SSD firmware. I tried SATA mode, NVMe mode, and even RAID. For God's sake I updated my TPM chip firmware and you have no idea how fiddly that is. I still have nightmares.

The worst part is trying to unravel the hundreds of megabytes of temporary files, staging areas, log files, diagnostic dumps, and other junk that the update process dumps into wonderfully un-descriptively named folders on my disk.

"Panther" they call it, I assume because it's expected to eat your face. [2]

Amongst the reams of "errors", it eventually complains "fatally" about an unsigned driver being used. I wondered for a while how that's even possible, since Windows 10 x64 no longer allows unsigned drivers. But.. ho-ho... it does, for some user-mode drivers only! And there it lurks! An unsigned driver named "Driver SDK Sample" version "1.0" with default strings in all other fields.

Shit. I've been rooted! Maybe even infected with a persistent root kit. The horror! I work in an IT sector that's targeted by state-sponsored attackers. In a desperate panic I googled and googled until finally I tracked down the mysterious, unnamed, unsigned driver.

It ships with Windows. It's on the f#%$ing ISO, three CAB files deep. It's the Virtual Smart Card user mode driver for TPM-hosted certificates.

Let that sink in: Microsoft ships an unsigned, unnamed driver in their operating system disk image, for the highest-security subsystem they support and then their installer shits itself if it is used in a supported manner. O_o

Okay, fine, I deleted my virtual Smart Cards and I tried to uninstall the offending driver. Except that I can't, because it's unsigned and broken. And the OS won't let me uninstall it anyway, because it's not an optional component and weirdly not "Plug and Play". It took days to scrape this thing out of the system.

It still won't update.

The best I can tell is that the highly popular Samsung SSD 950 Pro is the ultimate source of the incompatibility in my laptop, and the also common gaming motherboard "MSI X99A" is the problem in my PC.

Microsoft only fixes bugs in their installer that the telemetry can report. There's zero quality assurance and unfortunately boot loops log nothing except a common error code, so these bugs will never be fixed.

I look forward to once again being forced to format both of my computers to install Windows 10 v20H1 when it is released soon...

[1] 366,559 files on my system for C:\Windows alone. I counted.

[2] https://knowyourmeme.com/memes/leopards-eating-peoples-faces...


5sts


*of

:D


This feels borderline sociopathic. Working in a team is hard and demands compromise, and yes, necessary as it is, no political system is perfect. But the working conditions of a software engineer are way better than any other job I have done.


The so-called "Business People" are people whose main goal is to make money. And if you think about it, that really means their main goal is to make money for themselves.

Techies, in contrast, are more concerned about building great products. This naturally creates some tension between techies and the business people. What should be the division of labor, and of rewards?



