[flagged] Architecture Antipatterns (architecture-antipatterns.tech)
61 points by simonharrer on Nov 29, 2023 | 28 comments


Almost all of the listed "antipatterns" are things that are bad by definition. "Don't do the thing too much or too little, do the right amount" is not a useful recommendation.


You might think of it as a checklist of possible problems, rather than a list of rules to follow blindly. It's true that it won't tell you which problems you have, but the items may be useful hints, things to consider.

Knowing what problems you actually have usually requires human judgement after seeing the situation.


My favourite system to work on is one that I understand.

And everyone refactors to their own understanding and intuition.

And my intuition or understanding might not be identical to yours, nor as advanced, as simple, or as insightful. (EDIT: your understanding that things are SIMPLE might be more advanced than mine, meaning I don't really understand the system as well as you do.)

So we have taste in software.

I would rather not maintain a system that was built on quicksand, where dependencies cannot be upgraded without something breaking.

One person's super elegant architecture is Not Understandable™ to someone else.


> I would rather not maintain a system that was built on quicksand, where dependencies cannot be upgraded without something breaking.

To each their own. I prefer to maintain a bad system because:

- I can make it better

- If something doesn't work as expected it's because of the current state of the system, not because of my lack of ability

On the other hand, I don't really like to maintain very good systems (crafted by very intelligent people) because:

- There's little I can do to make them better (I'm a regular Joe)

- If something breaks it's down to my ability as a programmer (all the shame on me)

So, it's like playing in two different leagues (but the paycheck is more or less the same, so that's nice).


This is an interesting perspective which I'm inclined to disagree with. There's little pleasure to be found in having to deal with a system that broke because it was badly designed or implemented, although I guess it means you've got a reasonably secure job for the time being. Being able to gradually refactor it can sometimes be fun, but I'd still rather not have to.

Your second category is more interesting to me - you're interpreting a system that is hard to understand and work on as one made by super intelligent people. I would interpret that as a system that was badly designed, unless you're doing some new and revolutionary thing (you're probably not). A system designed in such a way that only someone with deep knowledge of the thought process behind it can work on it has been designed badly. I know this because I have designed many such systems in the past. Coming back to them a few years later, even I hated myself for it, so I'm deeply sympathetic to the people who had to work on them who weren't me. Thankfully, in most cases I got to task a few people with ripping out the system and replacing it with something better.


"you're interpreting a system is hard to understand and work on as being made by super intelligent people"

I read it as the opposite. GP says that if a system is good, it needs no improvement, so there's no fun in refactoring and redesign.

And those good systems are easy to understand and work on, so when something breaks you can't blame it on the design. You can only blame yourself.


I also enjoy improving bad software, high five.

But funny: I was trying to think of "good" systems that I ever worked on, but drew a blank. It can't be that I only worked on bad code, right? Maybe this is one of those "when everyone around you is an asshole..." situations!

But now that I actually think more deeply about it, the reason I don't remember doing a lot of work on good systems is that I barely had to touch them. They just worked, scaled fine, and required very little maintenance.

And on those good systems, building new features was painless: they were always super simple and super familiar to newcomers (using default framework features instead of fancy libraries), because they never deviated from the norm. Things would also pretty much never break because there were failsafes (both in code/infra/linters/etc and in process, like code review).

At my previous job the other person working on our backend was the CTO, who worked part-time and had lots of CTO responsibilities. I remember spending about 20 hours tops over the span of 2 years on that backend. It was THAT good.


> At my previous job the other person working on our backend was the CTO, who worked part-time and had lots of CTO responsibilities. I remember spending about 20 hours tops over the span of 2 years on that backend. It was THAT good.

It might be "cargo culting", but I am curious: which properties made that system so good?


It was familiar, because it used a popular framework the “vanilla” way, the way the framework's author recommends. So even a junior dev would be able to do stuff on their first day.

There were very few optional third-party libraries or smarty-pants patterns. If it wasn’t necessary, it wasn’t imported.

Some database views were used instead of complex ORM queries. Sounds trivial, but it saves a lot of debugging time.
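
To make the idea concrete, here's a minimal sketch of the pattern; the thread doesn't name the framework, so Django and all the table/model names here are my own assumptions:

    # Hypothetical sketch (Order is a stand-in model, Django assumed).
    # Instead of a complex ORM aggregation repeated in application code...
    from django.db import models
    from django.db.models import Sum

    totals = (Order.objects
              .filter(status="paid")
              .values("customer_id")
              .annotate(total=Sum("amount")))

    # ...define the query once as a database view:
    #   CREATE VIEW customer_totals AS
    #   SELECT customer_id, SUM(amount) AS total
    #   FROM orders WHERE status = 'paid'
    #   GROUP BY customer_id;
    # and map a read-only, unmanaged model onto it:
    class CustomerTotal(models.Model):
        customer_id = models.IntegerField(primary_key=True)
        total = models.DecimalField(max_digits=12, decimal_places=2)

        class Meta:
            managed = False              # migrations never touch the view
            db_table = "customer_totals"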

Control flow was so predictable that I rarely debugged. Honestly for a lot of features I just did TDD without much exploration at all, even on the first uses.

Features were super well isolated and decoupled. If there was some strange, awkward, cross-cutting concern between two distant parts of the domain, it was decoupled using async events rather than having domain-model-#1 call domain-model-#2. So any weird interaction between distant parts was well documented in a specific “events” folder.
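
A rough sketch of what that could look like (all names invented, and plain in-process dispatch standing in for whatever async transport they actually used):

    from collections import defaultdict
    from dataclasses import dataclass

    @dataclass
    class OrderShipped:              # lives in the shared "events" folder
        order_id: int

    _subscribers = defaultdict(list)

    def subscribe(event_type, handler):
        _subscribers[event_type].append(handler)

    def publish(event):
        for handler in _subscribers[type(event)]:
            handler(event)

    # billing reacts to shipping without ever importing the shipping module
    subscribe(OrderShipped, lambda e: print(f"billing: invoice order {e.order_id}"))
    publish(OrderShipped(order_id=42))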

Dependencies were very up to date and everything was simple, so there were very few issues updating the framework.

Most important: the test suite was comprehensive and very fast.


There's a bit of selection bias. Developers are more willing to stay working on good systems for much longer, so there are fewer job openings for other developers to work on them. Hence, most job openings are to work on crappy systems.


You are 100% correct, but my observation was more about how I actually did get to work on good projects and enjoyed them; I just don't remember much about them because they barely needed my intervention.

Naturally I'm not counting the stuff I built myself: I definitely worked a lot on those and they were a breeze to maintain, but I won't classify them as good or bad, since the one thing I'm sure of is that I'm biased about their quality ;)


+1 this. I like dumb solutions which work and are easy to understand over smart solutions which are hard to understand and reason about.


I am very susceptible to the ‘Misapplied Genericity’ anti-pattern. When given a problem, my default approach seems to be building a solution which ends up looking more like a framework that allows you then to build the solution in it. For example - if I were creating a metrics dashboard, I would end up building a dashboard builder which I could then create my metrics dashboard in, rather than just ‘hardcoding’ the dashboard I need right now. Something I need to work on!


I do that as well, but I have eventually managed to train myself to do the hard-coded version first, find out I need to make it more generic anyway and thus validate the need for the genericity, but also at that point I have a much better understanding what exactly benefits from being flexible and where I can take some shortcuts compared to what in my mind is the perfect design. That way I end up building something in between which usually works quite well and I am pleased with.
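
A toy Python illustration of that progression (all names my own invention):

    # Step 1: hardcode the one dashboard you actually need today.
    def render_metrics_dashboard(metrics):
        print("Requests/min:", metrics["rpm"])
        print("Error rate:  ", metrics["error_rate"])

    # Step 2, only once a second dashboard genuinely shows up: extract
    # just the part that proved to vary (the rows), nothing more.
    def render_dashboard(rows, metrics):
        for label, key in rows:
            print(f"{label}: {metrics[key]}")

    render_dashboard(
        [("Requests/min", "rpm"), ("Error rate", "error_rate")],
        {"rpm": 1200, "error_rate": 0.02},
    )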


Yeah, it's often better to _extract_ abstractions rather than to try and predict them correctly on a blank screen.


This is a great technique. But one also has to be cognizant of not over-factoring the resulting program once it is “done”.


The adage "Use before Re-Use" has been very helpful to me in dealing with this issue.


Yup. Early in my career I was susceptible to this as well. Partly immaturity and partly business pressure (we need this thing now, and when the business adapts, we don't want to spend more engineering resources on it! Make it work for the future!).

Turns out that's nearly impossible in most cases (businesses change).

I definitely take a more iterative approach now. There's a short spike window to architect the rough plan and get buy-in from other engineers, and as long as we feel like we're directionally going the right way and not painting ourselves into a corner, we ship it.

Sometimes that has resulted in redoing things (we made a mistake in our thinking), but those redos are minimal compared to the weeks/months we might have spent over-architecting something.


I cannot give a yes or no answer in this case. "Generic" is too generic a term to use in this argument.

It depends on the whole project and the circumstances. Usually I go with specifics and refactor later. The reason is simple: too often I've seen the case where, in order to change something in the view, we had to alter the "generic layer". On the other hand, how can you build something generic when you don't have at least 2-3 use cases?

"But we will never refactor" - I am one of the very few, who do just that. I worked my way up from dev to senior manager in order to give people the freedom I always missed.


I feel that the framework for integration is the programming language itself; it's Turing complete.

Getting things to glue together in the right way is a challenge, though, which is probably why you want it to be data-driven. But inevitably you need some flexibility or logic in your data processing, so you end up building an expression engine, and we get the "inner system/platform" effect.

https://en.wikipedia.org/wiki/Inner-platform_effect
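
A contrived Python example of that slide (my own invention, not from the article):

    # v1: harmless data-driven config...
    RULES = [{"field": "amount", "op": ">", "value": 100}]

    # v2: ...until someone needs AND/OR and nesting, and you find yourself
    # writing an evaluator: a small, worse language inside your language.
    def evaluate(rule, record):
        ops = {">": lambda a, b: a > b, "==": lambda a, b: a == b}
        if "all" in rule:
            return all(evaluate(r, record) for r in rule["all"])
        return ops[rule["op"]](record[rule["field"]], rule["value"])

    print(evaluate({"all": RULES}, {"amount": 150}))  # True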


This reads like barely-ChatGPT-quality output, and it also appears to be an SEO ploy, from the domain name on down.

One of the antipatterns is "making the system too complicated". Insightful!


Apart from the patterns that are obviously good or bad by definition, most patterns and architecture decisions have their pros and cons; the point is to understand and discuss those tradeoffs and go with the ones a team / company prefers to deal with. Monolith vs. microservices, synchronous vs. asynchronous communication, small events vs. fat events: the list goes on. There are no silver bullets or clearly right choices.


The first example is not clear cut at all.

The project sounds successful overall to me. Yes, they had to do more than they thought going in. That describes most engineering efforts.

Does the author think that operating system API churn just won't affect native apps somehow? Or that it will somehow be improved when even more of your application's surface area is in the native space?


A list of 'do-nots' works for tasks that bottom out in science and the laws of physics, like woodworking or fusion physics.

Thinking about coding lacks a connection to such a scientific terminus point. Under the hood it's all binary, and devs work with a performance mindset. A list of prohibitions doesn't fit.


Does anyone know a similar aggregated source for the opposite, i.e. the good patterns?


Spotted an extremely minor typo: hirarchic => hierarchical


Also, they wrote “sufficiant”. I guess they meant sufficient.


micro services



