"Need to" and "sane" are among my favourite subjective terms!
(Further below, I'll go into the contexts in which I'd agree with your assessment, and why. But for now, the other side of the coin.)
In the real world, current-day, why do many enterprises, IT departments and SME shops go for µservice designs even though they're not at multimillion-user scale? Not for Google/Netflix/Facebook scale, not (primarily/openly) for hipness, but because, among other reasons, they like:
- that µs auto-forces a certain level of discipline in areas that would be harder to enforce (and easier for devs to sidestep) in other approaches: modularity is auto-enforced, as are separation of concerns and separation of interfaces from implementations, or what some call (applicably or not) the "unix philosophy"
- that they can evolve the building blocks of the system less disruptively (keep interfaces, change what's underneath), swap out parts, do rewrites, plug new features into the system, etc.
- that it allows for bring-your-own-language/tech-stack (thanks to containers + wire-interop), which for one brings insights over time as to which techs win in which areas, but also attracts & helps retain talent, and again lets the system evolve alongside ongoing developments rather than letting the monolith degrade into legacy because things out there change faster than it could be rewritten
I'd prefer your approach for intimately small teams though. Should be much more productive. If you sit 3-5 equally talented, same-tech-stack and superbly-proficient-in-it devs in a garage/basement/lab for a few months, they'll probably achieve much more, and more productively, if they forgo all the modern µservices / dev-ops byzantine-rabbithole-labyrinths and churn out their packages / modules together in an intimate, tight, fast-paced, co-located, self-reinforcing collab flow. No contest!
That setup just doesn't exist often in the wild, though, where remote distributed web-dev teams or dispersed enterprise IT departments needing to "integrate" rule the roost.
(Update/edit: I'm mostly describing current beliefs and hopes "out there", not that they'll magically hold true even for the most inept of teams at-the-end-of-the-day! We all know: people easily can, and many will, 'screw up somewhat' or even fail in any architecture, any language, any methodology..)
In my experience if your developers were going to make choices that lead to tight coupling in a monolith, they’re going to make the same choices in a distributed architecture. Only now you’ve injected a bunch of network faults and latency that wouldn’t have been there otherwise.
In this case it sounds like they started with a microservice architecture, but CI/CD automation necessary for robust testing and auto-scaling was not in place. The problem of queues getting backed up might have been addressed by adding a circuit breaker, but instead they chose to introduce shared libraries (again, without necessary testing and deployment), which resulted in very tight coupling of the so-called microservices.
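To make the circuit-breaker suggestion concrete, here's a minimal sketch in C# using the Polly library (the library choice and service names are mine, just for illustration, not what that team actually ran):

    using System;
    using System.Net.Http;
    using Polly;
    using Polly.CircuitBreaker;

    // Open the circuit after 5 consecutive failures and keep it open for 30s,
    // so a backed-up downstream service gets fail-fast calls instead of more load.
    var breaker = Policy
        .Handle<HttpRequestException>()
        .CircuitBreakerAsync(
            exceptionsAllowedBeforeBreaking: 5,
            durationOfBreak: TimeSpan.FromSeconds(30));

    var client = new HttpClient();
    try
    {
        var response = await breaker.ExecuteAsync(
            () => client.GetAsync("http://inventory-service/api/stock/42"));
        Console.WriteLine(response.StatusCode);
    }
    catch (BrokenCircuitException)
    {
        // Circuit is open: shed load or serve a fallback instead of queueing more work.
        Console.WriteLine("inventory-service unavailable, using cached stock level");
    }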
Do they actually force a discipline? Do people actually find swapping languages easier with RPC/messaging than other ffi tooling? And do they really attract talent?!
You make some amazing claims that I have seen no evidence of, and I would love to see some.
In my experience, there's a lot of cargo culting around microservices. The benefits are conferred by having a strong team that pays attention to architecture and good engineering practices.
Regardless of whether you are a monolith or a large zoo of services, it works when the team is rigorous about separation of concerns and carefully testing both the happy path and the failure modes.
Where I've seen monoliths fail, it was developers not being rigorous/conscientious/intentional enough at the module boundaries. With microservices... same thing.
Also, having a solid architectural guideline that is followed across the company in several places (both in infrastructure and application landscapes) does the bulk of the work of ensuring stability and usability.
The disadvantage is obviously that creating such a 'perfect architecture' is hard to do because of different concerns by different parties within the company/organisation.
> The disadvantage is obviously that creating such a 'perfect architecture' is hard to do because of different concerns by different parties within the company/organisation.
I think you get at two very good points. One is that realistically you will never have enough time to actually get it really right. The other is that once you take real-world tradeoffs into account, you'll have to make compromises that make things messier.
But I'd respond that most organizations I see leave a lot of room for improvement on the table before time/tradeoff limitations really become the limiting factor. I've seen architects unable to resolve arguments, engineers getting distracted by sexy technologies/methodologies (microservices), bad requirements gathering, business team originated feature thrashing, technical decisions with obvious anticipated problems...
> "You make some amazing claims that I have seen no evidence of"
I'm just relaying what I hear from real teams out there, not intending to sell the architecture. So these are the beliefs I find on the ground; how honest and how based in reality they are is harder to tell, and only becomes clear slowly, over time, at any one individual team.
A lot of this is indeed about hiring though, I feel, at least as regards the enterprise spheres. Whether you can as a hire really in-effect "bring your own language" or not remains to be seen, but by deciding on µs architecture for in-house you can certainly more credibly make that pitch to applicants, don't you think?
Remember, there are many teams that have suffered for years-to-decades from the shortcomings and pitfalls of (their effectively own interpretation of / approach to) "monoliths" and so they're naturally eagerly "all ears". Maybe they "did it wrong" with monoliths (or waterfall), and maybe they'll again "do it wrong" (as far as outsiders/gurus/pundits/coachsultants assess) with µs (or agile) today or tomorrow. The latter possibility/danger doesn't change the former certainties/realities =)
Pure anecdote, so I know it is meaningless, but I have rewritten/refactored old services with new code, or even new languages, twice with little problem because the interface was well defined. We had hundreds of devs working in other areas and we were all on different release cycles because changes were easy and decoupled. We let any team submit bug reports when we either weren't complying with our interface or we had a bug somewhere.
The only teams I had to spend time on were the ones which were on a common DB before we moved off of it.
I think this is probably true for larger or more distributed corporate environments, but I think a modular monolith is going to be a more productive and flexible architecture for most teams, and should be the default option for most startups (many of whom are doing microservices from day 1 it seems).
1. Is auto-enforced modularity, separation of concerns, etc actually better than enforcing these things through development practices like code review? Why are you paying people a 6 figure salary if they can't write modular software?
2. Is the flexibility you gain from this loose coupling worth the additional costs and overhead you incur? And is it really more flexible than a modular system in the first place? And how does the flexibility differ? With an API boundary, breaking changes are often not an option. In a modular codebase they can easily be made in a single commit as requirements change (see the sketch at the end of this comment).
3. Is bring-your-own-language actually a good idea for most businesses? Is there a net benefit for most people beyond attracting and retaining talent? What about the ability to move developers across teams and between different business functions? Having many different tech stacks is going to increase the cost of doing this.
I do see the appeal of some of these things, but IMO the pros outweigh the cons for a smaller number of businesses than you've mentioned. And the above is only a small sample of that. Most things are just more difficult with a distributed system. It's going to depend on the problem space of course, but most backend web software could easily be written in a single language in a single codebase, and beyond that modularization via libraries can solve a lot of the same problems as microservices. I'm very skeptical of the idea that microservices are somehow going to improve reliability or development speed unless you have a large team.
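A tiny sketch of what I mean in point 2 (all names invented): inside a modular codebase a breaking interface change is just a compile error you fix in the same commit, while across a published API boundary the old shape has to keep working until every consumer has moved.

    // The pricing module's interface after a requirements change: a currency
    // parameter was added to what used to be QuotePrice(int productId).
    public interface IPricingModule
    {
        decimal QuotePrice(int productId, string currencyCode);
    }

    public sealed class PricingModule : IPricingModule
    {
        public decimal QuotePrice(int productId, string currencyCode) =>
            9.99m; // placeholder: look up the base price and convert to currencyCode
    }

    public static class Checkout
    {
        // Every caller like this one gets fixed in the same commit; the compiler
        // finds them all. Behind a published HTTP API, the old request shape would
        // have to keep working until every consumer had migrated.
        public static decimal PriceFor(IPricingModule pricing) =>
            pricing.QuotePrice(42, "EUR");
    }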
These are great observations. For anyone interested in going more in depth on the topic, I highly recommend the book Building Evolutionary Architectures.
Except for the language argument, don't you get all of that by just having modules in your code (assuming a statically typed language, since the boundaries are type-checked)?
Not really. For example, it's easier to mock a microservice than a module for testing purposes. Let's say you have component A and component B, where A depends on B (the dependency implemented via a runtime sync or async call), and B is computationally intensive or has requirements on resources that make it hard or impossible to run on a developer's machine. You may want to test only A: with a monolithic architecture you'll have to produce another build of the application that contains a mock of B (or you need something like OSGi for runtime module discovery). When both components are implemented as microservices, you can start a container with a mock of B instead of the real B.
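And in the microservice case the "mock of B" can be as small as an HTTP stub that A gets pointed at. A rough sketch of the idea using WireMock.Net (my choice of tool, endpoints invented; in practice the stub could equally live in its own container):

    using WireMock.RequestBuilders;
    using WireMock.ResponseBuilders;
    using WireMock.Server;

    // Stand-in for the resource-hungry service B: an HTTP stub that A can call.
    var fakeB = WireMockServer.Start();

    fakeB.Given(Request.Create().WithPath("/api/b/compute").UsingGet())
         .RespondWith(Response.Create()
             .WithStatusCode(200)
             .WithBodyAsJson(new { result = 42 }));

    // Point A's configuration at fakeB.Urls[0] instead of the real B,
    // then exercise A in isolation on a developer's machine.
    System.Console.WriteLine($"Fake B listening on {fakeB.Urls[0]}");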
Running E2E blackbox tests is equally simple for all kinds of architectures, especially today, when it's so easy to create a clean test environment with multiple containers even on a developer's machine. It may be harder to automate this process for a distributed system, but, frankly speaking, I don't see a big difference between a docker-compose file and a launch script for a monolith - I've been writing such tests for distributed systems casually for several years and from my personal experience it's much easier to process the test output and debug the microservices than monolithic applications.
> it's much easier to process the test output and debug the microservices than monolithic applications.
You find it easier to debug end-to-end tests of a microservice architecture than a monolith? That's not my experience. How do you manage to put all the events side by side when they are spread across a dozen files?
Using only files for logging is the last thing I would do in 2018.
I use Serilog for structured logging. Depending on the log destination, your logs are either stored in an RDBMS (I wouldn't recommend it) or created as JSON with name/value pairs that can be sent directly to a JSON data store like Elasticsearch or Mongo, where you can do ad hoc queries.
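Roughly like this with Serilog (the compact JSON formatter, the console sink and the field names are just my example; the sink could equally be the Elasticsearch one):

    using Serilog;
    using Serilog.Formatting.Compact;

    // Structured logger emitting compact JSON; ship the output to Elasticsearch
    // (or any JSON store) and query by field instead of grepping text.
    Log.Logger = new LoggerConfiguration()
        .Enrich.FromLogContext()
        .WriteTo.Console(new CompactJsonFormatter())
        .CreateLogger();

    var orderId = 1234;
    var elapsedMs = 87;

    // {OrderId} and {ElapsedMs} become queryable fields, not just message text.
    Log.Information("Processed order {OrderId} in {ElapsedMs} ms", orderId, elapsedMs);

    Log.CloseAndFlush();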
I just don't use files for anything (services should be designed with the assumption that container can be destroyed any time, so files are simply not an option here). If you are talking about the logs, there are solutions like Graylog to aggregate and analyze them.
Easy until you have 100,000 of them anyway, in which case it's expensive and slow to run them for every dev. (At that point you have enough devs that microservices 100% make sense, though)
A dependency injection framework where you use flags at the composition root to determine whether the “real” implementation class or the mock class is used based on the environment.
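Something along these lines at the composition root, if I read you right (using Microsoft.Extensions.DependencyInjection; the component names and the environment variable are invented, mapping onto the A-and-B example above):

    using System;
    using Microsoft.Extensions.DependencyInjection;

    // Composition root: one environment flag decides whether the heavyweight
    // component B or its fake gets wired into A.
    var services = new ServiceCollection();

    if (Environment.GetEnvironmentVariable("USE_FAKE_B") == "true")
        services.AddSingleton<IComponentB, FakeComponentB>();
    else
        services.AddSingleton<IComponentB, RealComponentB>();

    services.AddSingleton<ComponentA>();

    var a = services.BuildServiceProvider().GetRequiredService<ComponentA>();
    Console.WriteLine(a.Run(5));

    public interface IComponentB { int Compute(int x); }
    public sealed class RealComponentB : IComponentB { public int Compute(int x) => x * 2; } // the expensive one
    public sealed class FakeComponentB : IComponentB { public int Compute(int x) => 42; }    // cheap stand-in
    public sealed class ComponentA
    {
        private readonly IComponentB _b;
        public ComponentA(IComponentB b) => _b = b;
        public int Run(int x) => _b.Compute(x) + 1;
    }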
You will end up with something like OSGi. That can be the right choice, but is also a quite 'heavyweight' architecture.
For a certain class of applications and organizational constraints, I also would prefer it. But it requires a much tighter alignment of implementation than microservices (e.g., you can't just release a new version of a component, you always have to release the whole application).
Why is that an issue with modern CI/CD tools? It’s easier to just press a button and have your application go to all of your servers based on a deployment group.
With a monolith, with a statically typed language, refactoring becomes a whole lot easier. You can easily tell which classes are being used, do globally guaranteed safe renames, and when your refactoring breaks something, you know at compile time or, with the correct tooling, even before you compile.
> It’s easier to just press a button and have your application go to all of your servers based on a deployment group.
It's not so much about the deployment process itself (I agree with you that this can be easily automated), but rather about the deployment granularity. In a large system, your features (provided by either components or by independent microservices) usually have very different SLAs. For example, credit card transactions need to work 24x7, but generating the monthly account statement for these credit cards is not time-critical. Now suppose one of the changes in a less critical component requires a database migration which will take a minute. With separate microservices and databases, you could just pause that microservice. With one application and one database, all teams need to be aware of the highest SLA requirements when doing their respective deployments, and design for it. It is certainly doable, but requires a higher level of alignment between the development teams.
I agree with your remark about refactoring. In addition, when doing a refactoring in a microservice, you always need a migration strategy, because you can't switch all your microservices to the refactored version at once.
> With separate microservices and databases, you could just pause that microservice. With one application and one database, all teams need to be aware of the highest SLA requirements when doing their respective deployments, and design for it. It is certainly doable, but requires a higher level of alignment between the development teams.
That’s easily accomplished with a Blue-Green deployment. As far as the database goes, you’re usually going to have replication set up anyway, so your data is going to live in multiple databases regardless.
Once you are comfortable that your “blue” environment is good, you can slowly start moving traffic over. I know you can gradually move x% of traffic every y hours with AWS. I am assuming on prem load balancers can do something similar.
If your database is a cluster, then it is still conceptually one database with one schema. You can't migrate one node of your cluster to a new schema version and then move your traffic to it.
If you have real replicas, then all writes still need to go to the same instance (cf. my example of credit card transactions). So I also don't understand what your migration strategy would look like.
blue-green is great for stateless stuff, but I fail to see how to apply it to a datastore.
Do you realize that this is actually an anti-pattern that adds unnecessary complexity and potential security problems to your app? Test code must be kept separate from production code - something every developer should know.
It’s basically a feature flag. I don’t like feature flags but it is a thing.
But if you are testing an artifact, why isn’t the artifact testing part of your CI process? What you want to do is no more or less an anti-pattern than swapping out mock services to test a microservice.
I’m assuming the use of a service discovery tool to determine what gets run. Either way, you could screw it up by it being misconfigured.
First of all, it is test code, no matter whether it's implemented as a feature flag or in any other way. Test code and test data shall not be mixed with production code for many well-documented and well-known reasons: security, additional points of failure, additional memory requirements, impact on architecture, etc.
>But if you are testing an artifact, why isn’t the artifact testing part of your CI process?
It is, and it shall be, part of the CI process. The commit gets assigned a build number in a tag, the artifact gets the version and build number in its name and metadata, deployment to the CI environment is performed, and tests are executed against that specific artifact, so every time you deploy to production you have proof that the exact binary being deployed has been verified in its production configuration.
>I’m assuming the use of a service discovery tool to determine what gets run.
Service discovery is irrelevant to this problem. Substituting a mock can be done with or without it.
If you are testing a single microservice and don’t want to test the dependent microservice - if you are trying to do a unit test and not an integration test, you are going to run against mock services.
If you are testing a monolith you are going to create separate test assemblies/modules that call your subject under test with mock dependencies.
They are both going to be part of your CI process then and either way you aren’t going to publish the artifacts until the tests pass.
Your deployment pipeline either way would be some type of deployment pipeline with some combination of manual and automated approvals with the same artifacts.
The whole discussion about which is easier is moot.
Edit: I just realized why this conversation is going sideways. Your initial assumptions were incorrect.
> You may want to test only A: with a monolithic architecture you'll have to produce another build of the application that contains a mock of B (or you need something like OSGi for runtime module discovery).
> What exactly are you trying to accomplish?
A good test must verify the contract at the system boundaries: in the case of an API, that verification is done by calling the API. We are discussing two options here: an integrated application hosting multiple APIs, and a microservice architecture. Verification at the system boundaries means running the app, not running a unit test (unit tests are good, but serve a different purpose). Feature flags only make it worse, because testing with them covers only the non-production branches of your code.
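In other words, something along these lines, hitting the running service over its real boundary (the URL, route and payload are invented; NUnit is just my pick for the assertions):

    using System.Net;
    using System.Net.Http;
    using System.Threading.Tasks;
    using NUnit.Framework;

    [TestFixture]
    public class FooApiContractTests
    {
        private static readonly HttpClient Client = new HttpClient();

        [Test]
        public async Task Quote_endpoint_honours_its_contract()
        {
            // The deployed (or locally started) app is exercised through its API,
            // in its production configuration, not via a feature-flagged test branch.
            var response = await Client.GetAsync("http://localhost:5000/api/quotes/42");

            Assert.AreEqual(HttpStatusCode.OK, response.StatusCode);
            StringAssert.Contains("\"quoteId\":42", await response.Content.ReadAsStringAsync());
        }
    }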
> Your initial assumptions were incorrect.
With nearly 20 years of engineering and management experience, I know very well how modern testing is done. :)
> Verification at the system boundaries means running the app, not running a unit test
What is an app at the system boundaries if not a piece of code with dependencies?
Say you have a microservice, FooService, that calls BarService. The "system boundary" you are trying to test is FooService using a fake BarService. I'm assuming that you're calling FooService via HTTP using a test runner like Newman and asserting on the results.
In a monolithic application you have a class FooModule that depends on IBarModule (implemented by BarModule). In your production application you create your FooModule:

    var x = new FooModule(new BarModule());
    var y = x.Baz(5);
In your unit tests, you create your FooModule with the fake dependency:

    var x = new FooModule(new FakeBarModule());
    var actual = x.Baz(5);
    Assert.AreEqual(10, actual);
And run your tests with a runner like NUnit.
There is no functional difference.
Of course FooModule can be at whatever level of the stack you are trying to test - even the Controller.
I was doing this with COM twenty years ago. It had the same advantages of modularity and language independence but without the unnecessary headaches of a distributed system.
I take your point, but it saddens me that there aren't better ways of achieving this modularity nowadays.
I was too, and of course a distributed system could also be built with DCOM and MTS (later COM+), and the DTC (Distributed Transaction Coordinator) could be used when you needed a transaction across services (or DBs, or MQs). Obviously the DTC was developed in recognition of the fact that distributed transactional service calls were a real requirement - something that current microservice architectures over HTTP REST don't seem to support.
Are you telling me you chose micro services just to enforce coding standards and allow devs to be more comfortable?
The legacy concerns I don’t see being true, as it’s mainly a requirements/documentation problem and you can achieve the same effect with feature toggles.