The title originally contained [about SoundCloud] at the end, which is the only reason I clicked the link (I'm a front end dev and frequent SC user). Not sure why it was removed.
To be fair to the article, it should be obvious that it's about SoundCloud by the end of the first paragraph (& thus the 'we' refers to 'us at SoundCloud').
I do agree that the title here is out of context but it seems any modification of titles is seen as editorialising.
Oh absolutely, $THING is fashionable and is therefore ridiculous and nobody should use it. It couldn't possibly be that $THING is incredibly useful under a specific set of circumstances, and that caused $THING to be fashionable.
Just because something is fashionable doesn't mean that's the reason people decide to use it.
Software suffers from fashion. If you look at the core java libraries they are a history of software engineering fashion.
I would argue that fashion drives most of our choices. We are not immune to it, if anything we are about the same level as teenagers IMO. Tech for whatever reason is very susceptible to fashion.
It's easy and obvious to set up queues and hand offs. It seems efficient and logical and it looks good on a whiteboard. The throughput of work in flight is of course amazing.
Then, much later, you realise latency sucks. People don't care that you have 100 features coming in 2 years. They care that you have the killer feature now.
So that's what counts here: latency to customer value.
The time between picking the next most valuable feature and putting it in front of paying customers, is the one loop that needs to be optimised.
I'm lucky to work at Pivotal Labs. Our archetypal team is one product manager, one designer, two engineers from us and two engineers from the client. Currently we are experimenting with building teams with embedded data scientists as well. It works because we can, as an autonomous unit, do all the work that needs to be done to take a feature from ideation to production.
When you have that, it's amazing what you can get done.
I know I sound all-knowing and clever and stuff. But if I'd been setting up a software project before working at Labs, I'd have done a lot of the same things as this article describes as the starting point.
[edited to try and remove the impression that I would've Done It Perfectly From The Beginning, which is the opposite of what I was trying to say]
I think "The Mythical Man Month" (1975) calls it a "surgical team". If this practice works, I wonder why it isn't common?
Just postulating here, is it because most established businesses cling to whatever they currently do, and the next generation get their start at those big companies, and copy what they're used to? Or, is it because the people who have the responsibility for organizing teams do not come from a technical background, so miss out on reading material that discusses this?
That's not what TMM advocates for surgical teams. IIRC, the surgical team is an A/A+ lead plus a few assistants, some highly specialised (TMM advocated for each team to have a designated tool-maker responsible for creating and maintaining the team's tooling, like scripts or codegen).
>> The time between picking the next most valuable feature and putting it in front of paying customers, is the one loop that needs to be optimised.
Well said, BUT how do you determine that next most valuable feature? I worked on a product once that required quite involved sales engineering support; feedback from the sales engineers doing POCs on customer sites was, to me, the most valuable feedback of all. But for other low-touch or consumer-targeted products, I'm baffled how to do this.
Sometimes you do it by user interviews, sometimes by paper prototypes, there's a huge grab bag of options. My peers who focus on product and design would be better at telling you their magic, but from what I've seen it's deep and broad and the results are often surprising. We've saved customers a lot of money by discovering that they wanted to build something that their customers didn't want to buy.
From the engineering side, my role is to make development fast enough that product and design can find out whether they were right by having something in production quickly. This obviously varies according to project. For a highly gatekept ops culture, I could have something in an acceptance environment in hours that takes days, or weeks, or months, to reach true production.
But sometimes we get true CD and a team of designers and engineers can put tested code in front of users the same day, and have meaningful metrics shortly afterwards.
On my first project we had a hard, immovable 6 week deadline. We had a path to production after 3.5 weeks of pushing to get through complicated deployment hoops. Once we had the path to production our product manager and designer were able to perform laser-focused surgery on our backlog in the final weeks. They knew, sometimes within hours, what changes were being used and what had not worked.
It was a real eye-opener for me. Still a standard I hope for on any project.
I found it interesting that they basically replicated the XP playbook: cross-functional teams, continuous code review with pairing, collective team ownership of code and results, bounded contexts.
It makes me wonder if the "microservice" part of it was necessary. What if they had produced "microlibraries" rather than "microservices?"
>It makes me wonder if the "microservice" part of it was necessary. What if they had produced "microlibraries" rather than "microservices?"
I've wondered that every single time somebody has touted the benefits of microservices.
The people who have really 'succeeded' in it seem to conflate the benefits of looser coupling between dependent software systems (which is always a good thing) and making those systems talk to one another over a network socket (which isn't necessarily a good thing).
The benefits of microservices aren't architectural. The point re. library APIs vs services APIs is completely correct, because that's actually not what the microservice style addresses.
The point is rather organizational/operational - Being able to independently deploy, scale, monitor and manage different parts of the system.
That allows you to distribute ownership of the system, and also allows partitioning, so that a failure in one component need not affect the whole system.
When I say 'architectural' there I mean that as in the underlying design. How the system is modeled and what different objects and interfaces exist in that model.
Eg. Say you have a billing component in your system. You may have the same underlying billing component in either a monolith or as microservices. It may be nicely decoupled in both cases (either existing as an independent module/library in the monolith case, or as an independent service in the microservices case.)
The (potential) benefit that's afforded from the microservices approach is not that there's a better underlying design there, but that a single team or developer can properly take ownership of deploying, scaling, monitoring and managing the service, independently of the rest of the application. (Plus scope for partitioning and graceful partial failures vs whole system failures)
Yeah I'm pretty skeptical of the benefits. It seems like better isolation within your monolithic application solves a lot of these issues, no need for a network socket.
If you can't isolate things well enough for some reason, maybe it makes sense to have separate services (maybe run them all on the same machine, deployed at the same time, talking over a local socket?), but even then I suspect you just need normal services, not microservices.
Maybe adding the network socket makes the isolation within the code a requirement as opposed to a best practice. By this mechanism maintaining isolation is a requirement and not something someone can bypass "just this once" but fix it later.
There's a flood of blog posts (including this one I think) that have conflated the two, probably unintentionally. I'm happy that decoupling their systems worked out well for them. I'm not so happy this is fomenting a new fashion for creating unnecessary network API end points.
That isn't to say that you should always combine your services into one big mega-service. Just that dividing up services should be something you do only when it becomes obviously necessary and for good reasons unrelated to coupling.
I've tried that in the past. Even if modules aren't tightly coupled, deployment is, so different teams need to synchronize at deployment time. Resource isolation is also a big problem, if a module update introduces a performance bug, it will affect everything else. Yet another problem is keeping shared libraries in sync; if you want to update a core lib for component X, it will need to be updated (and tested) for everything else.
Why is deployment coupled? Or rather, why is there a need to synchronize at deployment?
I like the idea of microservices, but I think they're overkill for most systems. By that, I mean that I see the benefits, but I think people discount the skyrocketing development and operational complexities that come with distributing a system. I heard a quote recently that "the best services are extracted from existing systems, not designed up front." I think that's right. Microservices are great IF you need them, and it's really hard to get the bounded contexts right up front with an intuitive, usable API.
Anyway, one of the benefits of microservices is that it forces you to really think about your "public" API. Any decent implementation will have some notion of API versioning. So, team A can truck along with updates, deploy them whenevs, and team B can move to the new version of A when they are ready.
Of course, supporting multiple versions is more work for team A and requires more careful planning of the upgrade path. And there will come a point when team A has to drop support for older versions. "C'mon folks, we're on version 4 of A; everybody has to move to version >=3 within 6 months." But that's just part of having truly isolated services, I think.
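A minimal sketch of what "team A supports multiple versions" can look like (Python, with entirely made-up handler names and response shapes; a real service would do this at the routing layer):

```python
# Hypothetical sketch: one service exposing v1 and v2 of the same
# endpoint at once, so consuming teams can migrate independently.

def get_user_v1(user):
    # v1 returned a single combined name field
    return {"name": user["first"] + " " + user["last"]}

def get_user_v2(user):
    # v2 splits the name; v1 stays available until consumers migrate
    return {"first_name": user["first"], "last_name": user["last"]}

HANDLERS = {1: get_user_v1, 2: get_user_v2}

def get_user(version, user):
    handler = HANDLERS.get(version)
    if handler is None:
        raise ValueError("unsupported API version %d" % version)
    return handler(user)

# Dropping v1 after the 6-month window is then a one-line change:
# del HANDLERS[1]
```

The same dispatch-by-version idea works whether the boundary is a network API or a module API, which is partly the sibling comment's point.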
I don't see why you couldn't have a similar approach with versioning module API's. Right?
I think your other points are spot on. Things like performance (and error) isolation can be handled through other means, but a services approach (deployed to separate boxes, I'm assuming) makes it cleaner. And it, again, forces you to think about what happens if the dependency is unavailable. Maybe we push updates to a queue, maybe we use some async fetches here with a fallback default if we don't get a response in N MS, etc. Not that you can't do these things in a monolith, but they "feel weird" and require more rigor than most teams can maintain in the face of deadlines, i.e., it would be a whole lot easier to just call this method in this other module. Microservices/SOA force it to happen.
>I heard a quote recently that "the best services are extracted from existing systems, not designed up front." I think that's right.
Damn right. Architecture should be an emergent property of your system and built incrementally. The people who do it up front almost always do it wrong.
>I've tried that in the past. Even if modules aren't tightly coupled, deployment is, so different teams need to synchronize at deployment time.
No they don't. There's no reason why two different teams can't schedule an upgrade of the same service at different times. The riskiness of this is entirely dependent upon how good your integration test suite is.
>if a module update introduces a performance bug, it will affect everything else.
The module will still affect everything that is dependent upon it if it is rebuilt as a microservice. You're just moving the performance problem from one place to another.
>if you want to update a core lib for component X, it will need to be updated (and tested) for everything else.
Ok, so upgrade the library and run the full set of integration tests.
> The module will still affect everything that is dependent upon it if it is rebuilt as a microservice.
Your services are running on different servers (or containers) to each other, so they're partitioned. If one service has a bug that introduces a catastrophic error that takes all the server resources you'll either:
Monolith: Bring down the service completely.
Microservices/SOA: Timeouts to part of the system, and partial loss of functionality.
(Assuming you've done a decent job of engineering for partial failure)
Unless you've scaled your "monolith" horizontally, in which case it takes out one server.
If you've got a decent system, it can self heal from that and ping you via a monitoring system.
>Microservices/SOA: Timeouts to part of the system
Causing all manner of annoying behavior and difficult to track down bugs like an endlessly loading web page on a completely different system that happens a couple times a week instead of a clear error message.
>Assuming you've done a decent job of engineering for partial failure
If you assume a fantastic engineering job, then you can make the worst architectural patterns "work". That doesn't mean they're a good idea.
Indeed. But you can't guard against catastrophic system failures (out of memory, disk, processor time, corruption) in the way you can with independent services.
>It makes me wonder if the "microservice" part of it was necessary. What if they had produced "microlibraries" rather than "microservices?"
So the OP did discuss considering micro-libraries (perhaps via Rails engines), but they decided not to.
> We discussed using Rails engines and various other tools to implement this... At the deployment side, we would need to make sure that a feature can be deployed in isolation. Pushing a change to module to production should not require a new deployment of unrelated modules, and if such deployment went bad and production was broken the only feature impacted should be the one that suffered the change...
It goes on a bit. I think the reasons against this approach for them aren't entirely clear in the discussion; it would be good to hear more.
Although as a Rails dev myself, this one rings true:
> The code had suffered a lot during the past few years, tech debt everywhere. Besides the mess we made ourselves, we still had to update it from Rails 2.x to 3, and this is a big migration effort in itself
The ability to migrate from Rails 2 to 3 one service at a time is actually a pretty huge benefit, since that migration was monstrous. This is probably generalizable.
One other thing their ultimate microservice approach got them was the ability to write different services in different languages, and thus gradually transition to clojure/scala. I don't know if that was part of the original analysis; or if everyone would consider this a benefit. :) But it worked out for them.
Lately the reason microservices are a _pain_ is pretty clear to me; it is good to get an essay like the OP grounded in very specific experience on how microservices worked out very well for them. It does seem to make sense by the end. As the OP also says at the beginning, this is as much for organizational reasons as technical reasons. I suspect you need a fairly large team, where the microservices can be divided up amongst different developers, as in OP, before the benefits can start to outweigh the costs.
I think you have to read to the part where he talks about the deployment impacts. A key part of what happened was they needed deployment flexibility as well. From their perspective, they would need to implement the same basic infrastructure to achieve that flexibility:
But even if everything went smoothly, we knew that the current code for the monolith had to be refactored anyway. The code had suffered a lot during the past few years, tech debt everywhere. Besides the mess we made ourselves, we still had to update it from Rails 2.x to 3, and this is a big migration effort in itself.
This was probably critical to their ability to adopt new technologies, like Clojure and later Scala.
I've never been convinced about splitting developers into front and back end.
I see no reason why a decent developer can't build the front and the back end for each piece of new functionality. This removes all the communication issues.
On a big, widely used consumer facing app however, front end developers really come into their own.
They have cross-browser, cross-device responsive design issues to contend with. Accessibility and graceful degradation. Performant JavaScript. They also need design skills.
Very few people have deep skills in back end and front end.
I've seen plenty of projects with separate front end and back end developers fuck up because the front end developers try to solve "back end problems" on the front end and vice versa.
This is the cause of an awful lot of technical debt, and "big, widely used consumer facing app" is the natural habitat of large technical debts.
I vividly remember a case where a really simple page in a system had become incredibly slow - on asking what had changed I was told "all we did was add a count of the number of users".
Turns out the back end (they were micro-services - even though the term hadn't been invented at that point) didn't have a call for "total number of users" so the front end code (this was server side HTML generation - 10+ years ago) was getting a list of every user in the system and iterating over all of them in chunks and counting.
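That failure mode, sketched in Python (page size, data, and API shapes are all invented): without a count endpoint on the back end, the front end has no option but to page through every user.

```python
# Hypothetical back end exposing only a "list users (paged)" call.
USERS = list(range(10_000))   # stand-in for the user table
PAGE_SIZE = 100

def list_users_page(offset):
    return USERS[offset:offset + PAGE_SIZE]

# What the front end was forced to do: fetch every page and count.
def count_users_the_hard_way():
    total, offset = 0, 0
    while True:
        page = list_users_page(offset)
        if not page:
            break
        total += len(page)
        offset += PAGE_SIZE
    return total   # 100 round trips just to display one number

# What the back end should have offered: one cheap call,
# backed by something like SELECT COUNT(*) FROM users.
def count_users():
    return len(USERS)
```

Both return the same number; only one of them melts a production page.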
> I've seen plenty of projects with separate front end and back end developers fuck up because the front end developers try to solve "back end problems" on the front end and vice versa.
I totally agree, but even if you have a front-end and back-end communicating well, sometimes the front-end developers throw too much over the wall to the back-end, like when the front-end "thinks it is more intuitive if the request looked like this" instead of something else that would be just as easy to create and would be much less work on the back-end. And I'm sure the opposite is true as well with back-end putting too much work on the front-end.
And, in the argument against a single full-stack developer being the solution to this problem, I've seen extremely talented full-stack developers create some terrible user interfaces, and some who created great user interfaces that would struggle to create adequately performant server-side services to back them.
> Very few people have deep skills in back end and front end.
At Labs we practice pair programming with frequent (ideally daily) rotation, so skills tend to diffuse and stories tend to get looked at by people with different backgrounds.
The inverse of this is when there's a glaring error in a design (we all write bugs - this is just a design bug) and the front end dev implements that in the final product. Of course, other processes (QA, design QA, etc.) have to fail for this to go through to production...
I don't think we need FE Devs who are full on designers, but FE devs who are design 'aware' are, in my experience, much better.
Started with C++, then moved to PHP, looked at Ruby, picked up JavaScript, built some ASP.NET apps in C#, did some Unity programming, wrote a bunch of stuff in Java, then finally moved to full time JavaScript programming.
Why the hell do you think a front end dev is "by definition" mono linguistic? It makes no sense.
So you're telling me you think that someone who works in javascript, poured hours and hours into it, and has to understand the ins and outs of browsers is going to write worse performing javascript than someone who mainly works with other languages?
Yes, they don't know anything about development because they only know one language and they're spending loads of time doing non-engineering, i.e. HTML, CSS and dealing with browser inconsistencies. This is not revolutionary speak, it's been long known that mono-linguists tend to really suck at making things run fast, regardless of their dedication to using that one language.
No-one in their right mind ever called flash developers genius coders, in fact they were (generally rightfully) regarded more as designers than developers and yet that's exactly the same niche front-end devs occupy today.
I've seen it more than once now: a massive load of JavaScript built around the 'recommended' design patterns that, as soon as you turn it back into simple functions and throw in a few if statements to stop things initializing on every page load, boom, massive page load gain.
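The kind of guard being described, roughly (sketched in Python rather than JavaScript; names are invented): initialize a component once, on first use, instead of unconditionally on every page load.

```python
# Hypothetical lazy-initialization guard: build each component the
# first time it's requested, then reuse it on every later call.
_cache = {}

def get_component(name, build):
    if name not in _cache:      # the "few if statements"
        _cache[name] = build()  # expensive setup runs at most once
    return _cache[name]
```

The JS equivalent is the same few lines; the point is that the guard is a plain conditional, not a framework pattern.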
You can be an expert on hoisting and this and ES6 and still suck at engineering, in fact I've seen it more than once in multiple languages, guys who knew entire language specs and yet couldn't code their way out of a paper bag.
There are 4 types of front end dev:
1. Designer who can throw together a few scripts
2. Inexperienced developer/designer
3. Developer who happens to have moved to javascript with basic design skills
4. Unicorn who's great at design and development
Everyone thinks they're 3 & 4s, in reality 95% of front end developers are 1 & 2s.
Half the things that get posted here that have dedicated front end devs are juddering, slow, scroll jacking monstrosities. Don't try and tell me that's not the present state of the front-end development, because we all know the reality is there's very few, actually good, front-end developers out there.
You can get a back-end developer to do front-end. He might lack the experience to deal with some gotchas or browser compatibility but it's nothing that can't be solved with a simple Google search.
A front-end developer doing back-end, on the other hand, might not work as well.
In my experience they often fail to come up with anything remotely modular on their own, write maintainable code, understand the design concepts of more "advanced" frameworks like Angular, or do anything beyond copy-pasting some jQuery snippet they found on Yahoo Answers.
It's not that JavaScript forces you to write spaghetti code, it's just that until very recently it was mostly written by clueless morons that didn't know any better.
How can you write your front-end in something like Reagent if most of your staff get micro-strokes when being asked to type {{ }} instead of <%= %> in your new template system or go la la la can't hear you when you mention the merits of CSS preprocessors?
Fortunately this is all changing as the result of front-end these days getting more "mentally stimulating" thus capturing the interest of back-end developers (the ones that more often than not happen to have degrees).
I've never been convinced about splitting developers into front and back end.
There isn't any 'one true way' to do it. If a project is best built with a front end team and a back end team, then that's the best way to do it. If a project is best built with a team that does everything, that's the best way for that project.
Mind you, I would add that "This removes all the communication issues." is completely wrong. Having one person write the code for both sides compounds the communication issues, because it means no one other than that developer knows anything about the system. If communication is a problem then resolve that problem as soon as possible by writing good API documentation and sharing knowledge among the team, because if you don't you'll end up with an unmaintainable mess of code that's eventually sunk by the weight of technical debt.
I'm a very good back-end developer. I hate front-end fiddly bits. I can do it, I just hate it. (If you live for seven levels of LESS code, more power to you.)
It is usually more susceptible to bugs than the back end. The combination of a lack of control over the environment your code runs in and javascript's weak typing give it that extra dose of fun.
In the same vein as your comment, I might as well say back end development sucks because the default MySQL datastore quietly drops strings longer than the field[1].
These days front-end =/= Javascript[2] by a long shot (maybe yours is, but blame the person who made that decision for your project).
1. I don't know if that's still true. If it is, then I'm glad I no longer have to deal with MySQL.
2. GWT has had strong typing since forever. Not that I'm endorsing it, but it's still front end. I do recommend TypeScript.
> the default MySQL datastore quietly drops strings longer than the field [...] I don't know if that's still true. If it is, then I'm glad I no longer have to deal with MySQL.
That's still true, as I recall. To avoid this nonsense, you need to change the sql_mode - either globally in the server config, or only within your session by issuing an SQL query.
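For example (a config fragment, not a full fix; whether you want STRICT_ALL_TABLES or STRICT_TRANS_TABLES depends on your storage engines):

```sql
-- Per session: make over-length inserts fail loudly instead of truncating
SET SESSION sql_mode = 'STRICT_ALL_TABLES';

-- Or server-wide (also settable in my.cnf under [mysqld]):
SET GLOBAL sql_mode = 'STRICT_ALL_TABLES';
```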
The same reason why often the servers themselves are structured to have separate frontends and backends - because they encompass different problem spaces, and specialization provides efficiency.
Specialization provides loads of communication (or lack of it) between those specialist. Alas, every time the team I was on was doing some soul-searching it came up that the main problem was communication. Don't underestimate the effect of communication overhead and the cost of miscommunication.
First: it's all about reducing latency. Split the work, implement in parallel then you'll ship in "half" the time.
Second: Good developers are hard to find. Backend requires both good CS background (algorithms, security) as well as development skills. Frontend requires some feeling for design and can tolerate poor development skills.
> Frontend requires some feeling for design and can tolerate poor development skills.
It seems you're assuming front end developers are usually designers with some coding skills, or developers with some design skills, and neither is the case. A fairly complex app will most definitely have designers and developers in their front end team.
I think the main reason for the separation is domain knowledge. You may have a lot of experience designing real time APIs, but no experience with web technologies. I've worked with a lot of good developers that still don't get HTTP status codes or the basic principles of REST APIs, but they're good developers nevertheless. And perhaps that's why the role of the full stack developer has become now more popular.
We're talking about web developers, so what I'm saying is broadly true.
Native development - and cutting edge web-based development - is a different beast. You start needing knowledge of threading, messaging, various architectural patterns etc.
I'm actually recruiting for such a developer now - someone who wants to be top specialized front-end developer. They're just extremely hard to find (if you're interested in working on a complex SPA and are in, or want to move to NL, drop me a line! You don't need to speak dutch!).
You're right about domain knowledge: there is just too much that you need to know to be able to be an effective "full stack" developer for anything approaching a complex system.
Second: Don't insult front-end developers. I do both back and front, and I find front-end to require much more skill. Nobody notices if a backend feels slightly off, but it's very obvious when an interaction was built by someone who isn't a great front-end programmer. Both sides are necessary and difficult in their own way; they just require different skillsets.
Actually, that refers to adding more resources to _an already late project_. It explicitly mentions that starting out with more resources can speed up development.
In saying that, 9 women can't make a baby in one month.
I have, it's a brilliant book. Helped me greatly during my career. However it's about scaling an already broken (and late) project in a very naive way.
Luckily we've learnt a lot since then - and it turns out that decomposing the problem into 4-6 person chunks and optimizing for minimum dependencies is a good strategy for close-to-linear scalability :)
"However it's about scaling an already broken (and late) project in a very naive way."
If that's all you remember of it, you really need to reread it (and I find myself mildly skeptical that you did in the first place - although it could very well be that it's been long enough you thought those ideas were elsewhere). The title - and Brooks' Law - is about that, but the book covers a whole lot more. Including discussions of approaches to decomposing problems into <10 person chunks and optimizing for minimum dependencies.
It depends what you mean by front end, really. If you're working on a single page app type thing, I see no reason why the engineering shouldn't be done by the same people. Design, however, is a different matter. It's a whole other specialism, and while I'm sure there are people who are competent at both, my anecdatabase doesn't have many on file.
It's possible a developer COULD do both, if they had the experience/training to do both. On the other hand it seems like there are many people who not only know how to do one or the other but actually only want to do one or the other. Of course there are other people who want to do both, in which case I think they usually apply to "full stacK" positions.
Whether backend vs frontend is the right "seam" for division of labor depends entirely on the complexity of the systems at play, as opposed to some arbitrary generalization of all backends and frontends as you seem to be making here.
I agree, but as hard as it is to find decent developers in general, it is even harder to find decent developers who have enough experience in both sides to be able to work in both environments effectively. Too often you have "full-stack" devs who are really just back-end guys writing JavaScript like Java, or vice-versa.
For a certain class of problem, yes. But as complexity and ambition increase in the UI and backend logic, the necessary skillsets will quickly diverge.
This is really great. The micro services debate has just kicked off on my teams, and people are starting to make their cases. While the details of the article differ from our systems (less web, for one thing) the problems they experienced sound pretty familiar to me right now.