Fun story: I had to develop the same system for two different companies, one a startup with 3 employees (the founders), the other a billion-dollar business. Just a backend API and web interface, together with an iOS app displaying the content.
I could use shiny new tech for the startup (which at the time was Python on App Engine and its NoSQL datastore, with a Backbone.js frontend), whereas the other one forced me to use Java and deploy the code in the big corp's datacenter.
The startup code was built in two months, and maintained successfully by an intern for two years. It basically cost nothing to run.
The big corp code was built in a year (note: I did know how to develop in Java). The people in charge couldn't manage to deploy it (I had to do it myself, despite discovering their infra on the spot), and I had to maintain the code myself because, despite extensive code guidelines and restrictions on the technologies allowed, nobody inside was assigned to maintain the code.
So, yeah, there's definitely a danger in blindly following the HN hype. But don't feel good if your company relies on 8-year-old tech and processes. It could be losing a TON of money by not staying up to date with the state of the art.
You can't avoid upgrading forever, though: eventually your legacy technology will stop getting security updates and support, and then you're facing a huge leap from old to new.
I suspect corporates don't really want to get stuck with so much legacy, they just don't know how to avoid it.
That is right. That is the balance I mentioned. On our teams we try to update the base framework every year to avoid being locked out of innovation in the language and library ecosystem. But a UI stack, for example, you do not migrate without explicit funding.
The startup code, written in modern Java, would similarly have taken two months to develop and would have been maintainable by an intern.
The problem in enterprise tech, as you found, is that one is forced to use certain unproductive old frameworks, full of legacy, over-engineered bloat.
Modern Java micro-services, in contrast, are really fast to develop. Greenfield Java projects, where one can make personal choices of lean technologies and libraries, are simply amazing. They also have terrific characteristics under load. The JVM performs well even with bloatware; when you trim things down and make your JARs lean and mean, it's utterly amazing.
> Modern Java micro-services in contrast are really fast to develop.
This attitude is a bit surprising to me. The main point of micro-services is that they address a complexity problem when dealing with large organizations. That is, they allow a large organization to break into small teams that can work (relatively) independently so each team can iterate faster. However, microservices definitely make a host of issues harder:
* cross-service transactions are much harder
* cross-cutting concerns can be more difficult to change.
* they can add organizational complexity when a feature you want to add needs corresponding changes in upstream or downstream services.
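To make the first point concrete: without a shared database transaction, a cross-service operation typically has to become a saga of steps with explicit compensations. A minimal sketch (the step names and shape here are hypothetical, not anyone's actual system):

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.List;

// Minimal saga sketch: each step has an action and a compensation.
// If a step fails, previously completed steps are undone in reverse order.
public class SagaSketch {
    record Step(String name, Runnable action, Runnable compensation) {}

    static boolean run(List<Step> steps, StringBuilder log) {
        Deque<Step> done = new ArrayDeque<>();
        for (Step s : steps) {
            try {
                s.action.run();
                log.append("did ").append(s.name).append("; ");
                done.push(s);
            } catch (RuntimeException e) {
                // Roll back completed steps in reverse order.
                while (!done.isEmpty()) {
                    Step d = done.pop();
                    d.compensation.run();
                    log.append("undid ").append(d.name).append("; ");
                }
                return false;
            }
        }
        return true;
    }

    public static void main(String[] args) {
        StringBuilder log = new StringBuilder();
        boolean ok = run(List.of(
            new Step("reserveStock", () -> {}, () -> {}),
            new Step("chargeCard", () -> { throw new RuntimeException("declined"); }, () -> {})
        ), log);
        System.out.println(ok + " | " + log); // false | did reserveStock; undid reserveStock;
    }
}
```

Note everything a local transaction gave you for free (atomicity, isolation) now has to be hand-rolled, and real versions also need retries and idempotency.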
Even Martin Fowler has this quote:
> Don’t even consider microservices unless you have a system that’s too complex to manage as a monolith. The majority of software systems should be built as a single monolithic application. Do pay attention to good modularity within that monolith, but don’t try to separate it into separate services.
And I agree with everything you say! We have followed Martin Fowler's advice. We had a monolith product developed by several geographically separated teams whose development crawled to a snail's pace and whose full build took an eye-rolling amount of time.
Micro-services allowed us to break this up. They also gave us faster turnaround in feature delivery and better reliability, and it is easier to isolate problems.
Generally, stuff in the monolith that is already abstracted behind large service facades is a good candidate for a separate service with its own independent data model. Avoid cross-service transactions completely. If you have a cross-cutting concern, that generally means you need a separate service managing it.
Organisational complexity is definitely increased. This can be mitigated by tooling. Our build pipeline shows the full dependency graph: what is built, what is getting built, what has been deployed, etc.
We have the concept of a "system" that is basically a versioned set of micro-services running off a build trigger. We developed the capability to namespace systems, i.e. each system of services uses separate resources (Kafka/db/etc.) and separate URLs (via custom domains) when deployed on our cloud platform. You can also "plug" your micro-service into a targeted system for diagnostics. This way dev, testing, and product demo teams can work independently. The latter is not micro-service best practice, but in a large, slow-moving organisation, we found it valuable.
> We had a monolith product developed by several geographically separated teams whose development crawled to a snail's pace and whose full build took an eye-rolling amount of time. Micro-services allowed us to break this up.
We also have a distributed dev team, but decided against using microservices. Instead, we have a monolith with a plugin architecture, so remote teams can add functionality as independent modules. Occasionally changes are made to the monolithic application itself, and those changes are heavily scrutinized. The plugin architecture provides many of the benefits of microservices, while also allowing for more flexibility on occasion, and it eliminates the flaky network calls that are inherent in microservices.
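A plugin setup like that can be as simple as a narrow interface the core application dispatches through, with each team shipping its own implementation. A hedged sketch (the interface and names are invented for illustration, not the commenter's actual code):

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of a monolith-with-plugins: the core exposes a narrow interface,
// and independent teams contribute modules implementing it.
public class PluginHost {
    interface Plugin {
        String name();
        String handle(String request); // returns null if this plugin doesn't handle it
    }

    private final List<Plugin> plugins = new ArrayList<>();

    void register(Plugin p) { plugins.add(p); }

    // The core dispatches to the first plugin that handles the request.
    // Plugins are in-process, so there are no network hops to fail.
    String dispatch(String request) {
        for (Plugin p : plugins) {
            String result = p.handle(request);
            if (result != null) return result;
        }
        return "unhandled";
    }

    public static void main(String[] args) {
        PluginHost host = new PluginHost();
        host.register(new Plugin() {
            public String name() { return "billing"; }
            public String handle(String req) {
                return req.startsWith("invoice:") ? "billed " + req.substring(8) : null;
            }
        });
        System.out.println(host.dispatch("invoice:42")); // billed 42
        System.out.println(host.dispatch("other"));      // unhandled
    }
}
```

In a real Java monolith you'd likely discover implementations with `java.util.ServiceLoader` or a DI container instead of manual registration, but the isolation boundary (the interface) is the same idea.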
> microservices definitely make a host of issues harder ...
I agree with all the issues you've stated, but I'd like to add one more. Microservices arguably make it much easier to build systems with circular dependencies, leading to weird race conditions and deadlocks.
Consider two units of code A and B. If implemented as classes, modules, or libraries, it's relatively easy to spot and prevent A calling into B which in turn calls back to A. Sometimes the compiler and tools can automatically catch that.
With microservices, catching dangerous dependencies like this is much more difficult: each service outwardly seems independent of the others, and there are few tools to catch these dependencies.
There are few options, but what do you need besides a graph data structure?
We had a pipeline consisting of microservices and Kafka topics. Simple if/then logic quickly became problematic so I implemented our flow control as a directed acyclic graph, and it helped tremendously.
It's also easy to render your graph out with any number of visualization tools to quickly understand/validate work flows.
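As a rough illustration of that idea (stage names invented, not the commenter's actual pipeline): store the stage dependencies as a small DAG and topologically sort it to get a valid execution order, failing loudly the moment someone introduces a cycle.

```java
import java.util.*;

// Tiny DAG of pipeline stages: edges map a stage to the stages that
// depend on it. Kahn's algorithm yields a valid execution order and
// throws if the graph contains a cycle.
public class PipelineDag {
    static List<String> topoOrder(Map<String, List<String>> edges) {
        Map<String, Integer> indegree = new HashMap<>();
        edges.keySet().forEach(n -> indegree.putIfAbsent(n, 0));
        edges.values().forEach(l -> l.forEach(n -> indegree.merge(n, 1, Integer::sum)));

        Deque<String> ready = new ArrayDeque<>();
        indegree.forEach((n, d) -> { if (d == 0) ready.add(n); });

        List<String> order = new ArrayList<>();
        while (!ready.isEmpty()) {
            String n = ready.remove();
            order.add(n);
            for (String m : edges.getOrDefault(n, List.of()))
                if (indegree.merge(m, -1, Integer::sum) == 0) ready.add(m);
        }
        if (order.size() != indegree.size())
            throw new IllegalStateException("cycle detected");
        return order;
    }

    public static void main(String[] args) {
        Map<String, List<String>> edges = new LinkedHashMap<>();
        edges.put("ingest", List.of("validate"));
        edges.put("validate", List.of("enrich", "archive"));
        edges.put("enrich", List.of("publish"));
        System.out.println(topoOrder(edges)); // [ingest, validate, enrich, archive, publish]
    }
}
```

The same adjacency map can be fed straight into a visualization tool (Graphviz, etc.) for the rendering mentioned above.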
> There are few options, but what do you need besides a graph data structure?
I don't see how a graph data structure solves this problem. Suppose you've created a photo-sharing app. One microservice, A, has the database and stores the photos. Another service, B, uploads and downloads photos. And a third service, C, applies filters to photos.
It's pretty easy to architect these services such that the download service B uses the filter service C in some situations, and the filter service C uses the download service B in others. This is obviously a bad design, but with microservices it's easier to make these bad design choices because the folks who wrote one service have little information about the other.
Sure, a graph wouldn't fix that, but it would at least illustrate the cyclic relationship, and hopefully someone recognizes the inherent drawbacks.
A tool to "fix" your example probably doesn't exist, but a graph is an excellent way to represent dependencies, reason about progress in the flow, and enforce constraints in a generic way.
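Applied to the photo-app example above: once the call-dependencies between services have been collected into a graph (collecting them, via tracing, config, or code review, is the genuinely hard part), spotting the B↔C cycle is a straightforward DFS. A minimal sketch:

```java
import java.util.*;

// Detect cycles in a service dependency graph with a DFS that tracks
// which nodes are currently on the recursion stack: reaching such a
// node again means we have found a cycle.
public class CycleCheck {
    static boolean hasCycle(Map<String, List<String>> deps) {
        Set<String> done = new HashSet<>(), inProgress = new HashSet<>();
        for (String node : deps.keySet())
            if (dfs(node, deps, done, inProgress)) return true;
        return false;
    }

    private static boolean dfs(String node, Map<String, List<String>> deps,
                               Set<String> done, Set<String> inProgress) {
        if (done.contains(node)) return false;
        if (!inProgress.add(node)) return true; // revisited while in progress: cycle
        for (String next : deps.getOrDefault(node, List.of()))
            if (dfs(next, deps, done, inProgress)) return true;
        inProgress.remove(node);
        done.add(node);
        return false;
    }

    public static void main(String[] args) {
        // A stores photos; B (download) calls A and C (filters); C calls back into B.
        Map<String, List<String>> deps = Map.of(
            "A", List.of(),
            "B", List.of("A", "C"),
            "C", List.of("B")
        );
        System.out.println(hasCycle(deps)); // true
    }
}
```

The enforcement is then a CI check that fails the build when the graph stops being acyclic, which is exactly what a compiler does for you in the monolith case.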
Unlike the others, I'm going to disagree with what you've said. If you have a mono-repo set up, you can develop like it's a monolith but with the added benefit that you don't have to deploy the whole macro-service at once.
I think people are just spinning up a bunch of unorganized services and calling them micro and then complaining about it.
If you're on a small team that can build a clean monolith, the work to make them microservices is imo pretty trivial.
The main point was that it's a lot easier to make those design mistakes when using microservices. Like you said, there is no tool to easily spot and fix those issues. In the case of a monolith there are such tools, and often it boils down to simply viewing the dependency graph in your IDE.
If you need to do that, then you've most likely cut along the wrong service boundary. The same thing can happen in a monolith, although there you can kludge something together with spaghetti code, I suppose. Doing it the right way takes about the same amount of work in both cases.
GAE went through a lot of changes after coming out; I remember the days spent catching up with its changes, so I wouldn't say it cost nothing to run for two successive years. Even worse, after around 3 or 4 years they completely cancelled the node I was on, notifying me to deal with my data myself within a limited amount of time. Bleeding edge my ass.
> GAE went through a lot of changes after coming out, I remember the days spent catching up with its changes, ... after around 3 or 4 years they completely cancelled the node I was on, notifying me to deal with my data myself within a limited amount of time.
I hear you brother. Google generally puts out the "smartest" stuff, but they don't really care about the day-to-day needs of most of their users. The idea is generally "we've thought about this a lot, and this is the best way, so change your code to handle it." The problem is that every few months, they seem to come out with a new "best way".
This sounds like Java is not the big corp's problem. This is bound to happen if the people deploying the software don't know their way around their own infrastructure and around the software they are going to deploy. A big corp tends to centralize knowledge into departments. To get stuff done, departments have to communicate and coordinate. If there are shoddy processes for this in place, you get a big, fat, slow organization.
It wasn't Java per se (as I said, they had a set of strict rules as to what frameworks I was allowed to use, aka Spring only), but the general culture around it, of "safe and proven, enterprise-grade stack". I had to use Ext JS for the web interface, for example.
Yet, even with this stack, 50% of the time spent was due to internal process.
As an example: we called them once to ask for an update, and they told us about holding a kickoff meeting. At that point we had already finished developing everything and wanted to talk about deployment (little did we know we would have to redevelop part of the stack because of guidelines we hadn't been told about before).
I honestly believe given the same set of requirements, constraints, and people skilled with their respective stacks, the language and framework wouldn’t make that much of a difference.
Out of the four modern language/framework combinations I'm fluent in (C++/TreeFrog, which I played around with just because I am a masochist; C#/ASP; Python/Django; and JS/Express), I really don't see any difference in my productivity across them.
> there's definitely a danger in blindly following the HN hype. But don't feel good if your company relies on 8-year-old tech and processes. It could be losing a TON of money by not staying up to date with the state of the art.
I think the tech itself has little to do with it. Cash-strapped small startups are hungry for developers, and often ask them to do more than they can. Large companies often have many developers idling or working on fake toy projects that never see the light of day. There are massive inefficiencies in large institutions. Using old tech may be one of them, but it's a drop in the bucket compared to inefficiently utilizing the people you have.
> But don't feel good if your company relies on 8-year-old tech and processes. It could be losing a TON of money by not staying up to date with the state of the art.
And yet. Despite the danger, that was a billion dollar business. So they were doing something right, weren’t they? ;)
The "problem" is, in a big corp nobody gives a damn how much things cost. But you see, it's only a problem from a certain point of view (if you want to get things done quickly and cheaply). Otherwise, it's not a problem at all. It looks like you got yourself a nice support contract with that big corp, and the big corp will happily absorb the cost. Everyone's happy.
Very similar: very few users, and everything could run on one server without a problem. The startup would have needed to scale up if they'd met some heavy success, but App Engine would have handled that just fine.