Hacker News

> “It is as if some people think "good code" and "microservices" are synonyms. No. They are orthogonal.”

I disagree very strongly, and it is also part of why I believe monorepos are generally a mistake.

Microservices are a natural extension of things like decoupling and Single Responsibility Principle.

The fact that you could superficially achieve similar effects inside a monolith, with gargantuan amounts of tooling and imposed convention, is no refutation of the fact that modularity and separated boundaries between encapsulated units of behavior are a better way to organize and structure the design.

The abstraction is no different, and no leakier, when you move from discussing classes or source-code units to discussing services, or polyrepos vs. monorepos. The abstraction certainly can become leaky if pushed too far in other domains; it just happens that it depicts precisely the same organizational-complexity properties in the case of source code -> service boundaries -> repository boundaries.



Well, some systems composed of microservices turn into a distributed spaghetti monolith, though.

They only "naturally decouple" if you draw the lines between units correctly in the first place. And if you are able to do that well, that's the most important step in making any code turn out well regardless of the size of the services and how much of the boundaries are in the same class/repo/process/service. It also correlates with the "tendency to cheat" within a single service.

There ARE real advantages to micro-services, sure. But you trade them against an ability to quickly refactor if it turns out you drew the lines completely wrong initially. Or perhaps you end up with something that becomes very complex that could in fact have been short-circuited and replaced by 5% as much code by looking at the problem from an entirely different angle -- which you never do because of the pattern that has settled in how the micro-services were divided.

(At the end of the day, the code that is simplest to maintain is the one you don't need to run..)

So I maintain that it's a trade-off.

This seems relevant: https://xkcd.com/2044/


What you propose ends up sounding like this from my point of view:

Salad is generally better for your health than red meat. However, some people eat so much salad, with so much dressing, that it ends up being worse than red meat. Meanwhile, with great care about meal planning and moderation, some people stay pretty healthy eating red meat.

Therefore red meat is actually healthier than salad.


Currently trying to debug a microservice-based system: I read your text with red meat being the microservices.

Things always look perfect in a simple blog post presenting the happy path. How to check which services are down, how to react to that, how to recover? Nah. The fact that your µs RAM access just became a ms network access? Don't care. Someone just decided to change the interface of their microservice, so two others are no longer compatible? That's just "microservices done badly". Being able to see the flow of things and add a breakpoint where you need it? Nope.
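The µs-to-ms point is easy to gloss over, so here is a back-of-envelope sketch in Python (the figures are assumed orders of magnitude, not measurements of any particular system):

```python
# Back-of-envelope: what happens when in-process calls become network hops.
# Assumed typical figures, not measurements:
IN_PROCESS_CALL_S = 100e-9  # ~100 ns for a local function call / RAM access
NETWORK_CALL_S = 1e-3       # ~1 ms for an intra-datacenter RPC

calls = 1_000  # a chatty code path making a thousand fine-grained calls

in_process_ms = calls * IN_PROCESS_CALL_S * 1e3
network_ms = calls * NETWORK_CALL_S * 1e3

print(f"in-process: {in_process_ms:.3f} ms")   # ~0.1 ms total
print(f"over network: {network_ms:.0f} ms")    # ~1000 ms total
```

The same call pattern that was invisible in-process becomes a full second of latency once each call crosses a service boundary, which is why chatty interfaces between microservices hurt so much.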

It is funny to see this kind of problem when you've already experienced them in the embedded world with software components in cars or just in distributed computing.

Most applications will never see the kind of scale where that kind of code and tooling overhead has any RoI. So you end up with products released too late, or products so brittle you may as well not have launched.


> “Currently trying to debug some microservice based system: I read your text with red meat being the microservices.”

This doesn’t make much sense, unless you’re debugging microservices built from normal-to-high quality code and still find them to be worse than average-case monolith services.


He explains it further down! For instance, he cannot set a breakpoint and follow execution flow (because suddenly the flow resumes in another microservice).

It seems from your comment that you assume one can always work with only one service, and not need to consider the whole system of services acting together. That is naive.


I do not assume you can work only with one service. But what you brought up from the parent comment about breakpoints makes no sense.

It’s like complaining that someone mocked out a complex submodule in a unit test, so your breakpoint descends into a mock instead of the real thing. You’re mistakenly wanting the wrong thing.

Testing that spans service boundaries is a known entity. Most of the time you want to be testing one service in isolation and mocking out any dependency calls it makes to other services.
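For instance, the mock-the-dependency approach might look like this (all names here are hypothetical, using Python's `unittest.mock`):

```python
import unittest
from unittest.mock import patch

# Hypothetical service under test: it makes a remote call to another service.
class OrderService:
    def fetch_subtotal(self, order_id: int) -> float:
        # In production this would be a network call to a billing service.
        raise NotImplementedError("remote call; mocked out in unit tests")

    def total_with_tax(self, order_id: int) -> float:
        subtotal = self.fetch_subtotal(order_id)
        return round(subtotal * 1.08, 2)  # assumed 8% tax, for the example

class OrderServiceTest(unittest.TestCase):
    def test_total_with_tax_in_isolation(self):
        svc = OrderService()
        # Replace the cross-service call so the test exercises only this service.
        with patch.object(OrderService, "fetch_subtotal", return_value=100.0):
            self.assertEqual(svc.total_with_tax(1), 108.0)

if __name__ == "__main__":
    unittest.main()
```

The test never touches the network; whether the billing service itself is correct is a separate question for that service's own test suite.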

But in cases when you want to do integration or acceptance testing involving multiple live services, that’s fine too. You could for instance run the suite via something like docker-compose.
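A docker-compose setup for that kind of multi-service suite might look roughly like this (image names, ports, and environment variables are all hypothetical):

```yaml
# Hypothetical integration-test topology: two live services plus a test runner.
version: "3.8"
services:
  orders:
    image: example/orders-service:latest    # hypothetical image
    ports: ["8001:8000"]
  billing:
    image: example/billing-service:latest   # hypothetical image
    ports: ["8002:8000"]
  tests:
    image: example/acceptance-tests:latest  # runs the suite against live services
    depends_on: [orders, billing]
    environment:
      ORDERS_URL: http://orders:8000
      BILLING_URL: http://billing:8000
```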

But if you want the debugger to step through the internals of some effectively third party dependency, that’s just a poor approach to debugging. You need to mock that away and isolate whether the third party entity (whether it’s an installed package, separate service, whatever) is really to blame before descending to debugging in that entity.

Imagine if someone is debugging a data processing pipeline task. It makes a service call to a remote database. You really think your debugger should follow the service call and step through the database’s code? That’s a terrible way to debug. That example extends perfectly well no matter what the service call is into, whether it’s local or remote, whether it’s in the same language or runtime or not...


Well in that context, red meat is obviously healthier if you know up front that the people in question are going to pile on dressing...

Context is everything.

I am mainly advocating that it depends on the kind of coders on the teams, how many teams, how sure you are about the up front design / boundaries between services (I have seen such boundaries drawn VERY wrong, so wrong that nothing else ever mattered), how sure you are about the spec, etc

Start with monolith and refactor smaller services gradually as the design solidifies...


> Well in that context, red meat is obviously healthier if you know up front that the people in question are going to pile on dressing...

I think you're being too charitable accepting this analogy at all - somehow microservices are presented as something obviously and inherently better (salad) versus non-microservice approach (red meat).

If we're going down the route of silly analogies, which are a terrible way to argue anything, how about this:

Non-microservice architectures are a normal diet of meat, fish, vegetables, grains and sugar, which you can keep under control if you have any idea what you're doing. Microservices are a gluten-free diet: very popular for no good reason, it makes everything harder, and you should only pursue it if you have a very good reason to and you understand the cons.


> “somehow microservices are presented as something obviously and inherently better (salad) versus non-microservice approach (red meat).”

Yes, this is called the Single Responsibility Principle, in this case applied to service architecture. More generally it is a property of modularity and decoupling.

All else equal then satisfying these properties is better than not satisfying them.

The all-else-equal assumption clearly holds in practice, where people write equally awful code in both cases, so microservices introduce no additional tech debt, yet they do introduce SRP and modularity benefits.

Could you find specific examples of monolith services with small enough tech debt that they outperform some specific other example using microservices? Of course.

Does this matter for reasoning more generally about which pattern is better ceteris paribus? Very little, probably not at all.


I am not convinced that microservices always cause less coupling, as you claim.

Sometimes the coupling just jumps into the network/API layer. (Why would it not?) This happens unless your initial division into services was perfect (and if you really had that much foresight, there would be no reason for a monolith to accrue tech debt either; there would be no temptation to add debt).

The main difference is that when you discover that the initial division into "Responsibilities" was wrong, it is easier to change and come up with another set of "Responsibilities" in a monolith and deploy the refactored service as a unit.

You talk as if you can just initially define the Single Responsibility then things will be fine. But where I have seen real failure is in identifying those initial responsibilities and choosing the wrong way to look at the total system.

My experience is that monoliths have less coupling, and I suspect the cause is that monoliths are easier to refactor as the requirements change; refactoring the very structure of the service mesh while keeping things running is such a big task that one is more tempted to start adding hacks in the API layer.

Yes, one is then violating the Single Responsibility Principle. But if an organization sits there and needs to change the requirements within some deadline -- it is not going to spend 3x the cost and time because a hack violates some principle -- and the alternative is the wrong service taking on some extra work.

If you want to retort "but then they are doing microservices wrong" then I say No True Scotsman. And one could say exactly the same about monolith tech debt too..


> “The main difference is that when you discover that the initial division into "Responsibilities" was wrong, it is easier to change and come up with another set of "Responsibilities" in a monolith and deploy the refactored service as a unit.”

This is generally not true in my experience, because the degree of implementation-sharing and reliance on common leaked abstractions is so high in monolith codebases.

Through great concerted effort, some highly disciplined teams might not fall into that ubiquitous problem of monoliths and for those exceedingly rare teams your way of thinking could work. But this is so rare it is inapplicable when considering which approach to use in general cases.

I’ll also say that I’ve worked on several monolith services and several microservices stored in dozens to hundreds of separate repos. The tooling cost to make either pattern work at scale was the same, but refactoring was much easier with polyrepos that isolated each service. Just spin up a new repo and redraw the service boundaries.

Finally, many times services become associated with a fixed, versioned API and must support backward compatibility for long periods of time. In these cases, redrawing service boundaries is usually not desirable regardless of initial mistakes, until you hit the point where you can release a new major version of the services. In the polyservice / polyrepo case, this is very easy: the repos and separated code for v2 need not have anything to do with v1, and can be developed entirely in parallel, with mocked-out assumptions about service boundaries or reliance on legacy v1 stuff.
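As a toy sketch of that parallel-versioning idea (handler names and response shapes are hypothetical; a real service would wire these into its HTTP framework):

```python
# Hypothetical example: v1 is frozen for backward compatibility while v2 is
# developed in parallel with a redrawn boundary (in a polyrepo setup, these
# would typically live in separate repositories).

def get_user_v1(user_id: int) -> dict:
    # Legacy response shape: must not change while v1 is still supported.
    return {"id": user_id, "name": "Ada Lovelace"}

def get_user_v2(user_id: int) -> dict:
    # New major version: structured name field; shares nothing with v1.
    return {"id": user_id, "name": {"first": "Ada", "last": "Lovelace"}}

# Both versions stay routable until v1 can finally be retired.
ROUTES = {
    "/v1/users": get_user_v1,
    "/v2/users": get_user_v2,
}
```

Clients on v1 keep working untouched while v2 evolves independently; retiring v1 is then a deliberate, separate event rather than a side effect of a refactor.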


Actually I wonder if your reasoning is circular..

If you saw a coupled mess of microservices with a lot of technical debt you would probably say that it is not "Microservices" because it is violating the Single Responsibility Principle all over the place. They just tried to do microservices -- but didn't manage to -- so do you then count it as a failure of microservices thinking?

If not then propose a new architectural alternative: The Debt-Free SRP Monolith!!

Sadly, organizations cannot simply choose to make an SRP Microservice system or a Debt-Free SRP Monolith. They can only attempt it. And I am yet to be convinced that attempting microservices is that correlated with achieving SRP.


I don’t think it’s circular at all. I’m saying that if the level of tech debt is held equal between both a microservice design and an analogous monolith design, then the fact that the microservice design has greater properties of decoupling, isolation and modularity make it de facto better.

Obviously if the baseline levels of tech debt or poor implementation are not equal, all bets are off.



