
It was proposed in the sense that Ruby, or Python, or whatever web server language you used (Perl, PHP, even JavaScript) was slow, single-core, synchronous, database-bound, or whatever else made it “unscalable”, so you built a tiny service, focused only on your core bottleneck, like an API call that just returns the coordinates of your map position, on something like AWS Lambda.

Then, for some reason, some junior engineers decided that you could make everything an API call: write each service in the most optimal language, glue them all together, and end up with functional “scalable” apps.

And thus the horrors of being a web dev in 2016+ began.

Of course it didn’t help that SPAs were encouraging backends to be decoupled from frontends, with their implementation completely hidden, so the fact that “it was now possible” enticed backend devs to experiment with multiple API services.




Well, Ruby (on Rails) is slow, single-core, synchronous, database-bound and hard to scale. But surely almost everyone realises that's not a consequence of it being a monolith, but of its language/stack/paradigms (ActiveRecord, templating, dynamic typing, JIT, etc.)?

I have, certainly, replaced some endpoints in Rails apps with Lambdas, Rust, or even standalone Sinatra services for performance reasons.

For example, an endpoint that generated "default stable avatar PNGs" for new users: Ruby just isn't cut out for image generation and manipulation. Rewriting that in a stack that performed 100x better for this use case (we picked Rust) took a lot of heat off the cluster of servers.
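
To make the shape of such an extraction concrete, here is a minimal sketch of a "stable avatar" endpoint as its own tiny standalone service. It is purely illustrative: the version described above was rewritten in Rust, and the Sinatra/chunky_png stack, route and hashing scheme here are all assumptions.

  require 'sinatra'      # tiny standalone web service
  require 'chunky_png'   # assumption: any image library would do
  require 'digest'

  # GET /avatars/<user_id> renders a deterministic identicon-style PNG,
  # so the same user always gets the same "stable" default avatar.
  get '/avatars/:user_id' do
    hash  = Digest::SHA256.hexdigest(params[:user_id])
    color = ChunkyPNG::Color.rgb(hash[0, 2].to_i(16),
                                 hash[2, 2].to_i(16),
                                 hash[4, 2].to_i(16))

    # 5x5 grid, mirrored horizontally, scaled up to 250x250 pixels.
    png = ChunkyPNG::Image.new(250, 250, ChunkyPNG::Color::WHITE)
    5.times do |row|
      3.times do |col|
        next unless hash[6 + row * 3 + col].to_i(16).even?
        [col, 4 - col].each do |c|
          png.rect(c * 50, row * 50, c * 50 + 49, row * 50 + 49, color, color)
        end
      end
    end

    content_type 'image/png'
    png.to_blob
  end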

Or moving OAuth and registration to a separate Rails app that served those pages - the only endpoints that rendered HTML. That allowed the "main" Rails app to stay leaner, since it no longer loaded all the templating and other HTML middleware into memory when they would never be used.

In that sense, I guess, monoliths can have a performance disadvantage: the entire app has to load everything needed for that one endpoint or feature, even if 99% of requests and users never touch it.

Like the "PDF generation for reports" we once had: rarely used, but still loaded in every running thread, even those that would never handle anything related to reports or PDFs. Extracting it into a separate "PDF report generation worker" freed GBs of memory on almost all servers.
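
The comment doesn't say how that worker was built, but as a hedged sketch of the idea, with Sidekiq and Prawn standing in for whatever queue and PDF library were actually used (all names here are assumptions), the extraction can be as small as:

  require 'sidekiq'

  # Runs only in the dedicated "reports" worker processes, so the heavy
  # PDF dependencies are never loaded by the web servers.
  class ReportPdfJob
    include Sidekiq::Job             # Sidekiq 7 naming; older versions use Sidekiq::Worker
    sidekiq_options queue: 'reports' # consumed only by the report worker boxes

    def perform(report_id)
      require 'prawn'                    # load the PDF gem lazily, in this process only
      report = Report.find(report_id)    # hypothetical ActiveRecord model
      pdf = Prawn::Document.new
      pdf.text(report.title)
      pdf.render_file("/tmp/report-#{report_id}.pdf")
    end
  end

  # The main app never loads Prawn; it just enqueues a tiny job:
  #   ReportPdfJob.perform_async(report.id)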


Yes, this is the sensible and necessary side of microservices...

Now, take your auth logic and hand it to a third party, rewriting all of your auth to do so.

Now, share your database across multiple deployment platforms and 12 services (AWS, cloud, Heroku, Tableau).

When one of your 15 services goes offline for temporary maintenance, for some reason your entire website goes down.

The 17th service you created had its IP address switched and has gone missing, so the response to all its URLs is the default Apache gateway page.

The 24th service upgraded from Node 12 and is now broken, while the 26th service, built in Go, doesn't compile on the particular Linux variant one of your devs runs.

Before you know it, you're doing nothing but maintenance work, because something is always broken and it isn't your code: it's some random downtime or brittleness that is inherent to microservice architecture.


What you describe is the common problem of "management of complexity", or, really, the lack thereof.

These problems are independent of "microservices" vs "monolith". They are independent of "using a framework" vs "no framework". They are independent of programming language or hosting infra.

Managing complexity, in itself, is a daunting task. It's hard in a monolith, it's hard in microservices. Building a tangled big ball of spaghetti is rather common in e.g. Rails - it takes a lot of experience, discipline and dedication to avoid it.

Languages (type systems, checkers, primitives), frameworks, hosting infra, design patterns, architectures, all of these are tools to help manage the complexity. But it still starts with a dedication to manage it today, and still be able to do so in a decade.

Microservices don't inherently descend into an unmanageable tangle of tightly coupled, poorly bounded "services". Just as a monolith doesn't inherently descend into an unmanageable tangle of tightly coupled, poorly bounded "modules".


Image manipulation is the one thing I also run as a microservice whenever needed. I just set up imagor once and never need to manage it in the installation/images of all the other apps. No extra memory needed for shelling out to libvips or ImageMagick.

The PDF use case also sounds like very promising low-hanging fruit.


> very promising low-hanging fruit

That was actually exactly our angle of attack: look at the routes, modules or jobs that were putting the most pressure on the servers.

Then copy the entire app over to separate servers, connected to the same DB cluster. Have an app router direct everything except, say, /reports/ to the old servers, and /reports/ itself to the copies on the new servers.

Did the load on the old servers drop significantly? Rip that part out of them. Better? Now rewrite, clean up, isolate or extract that part (e.g. /reports/) on the new servers. Better?

Then, finally, disconnect the service from the shared DB (microservices sharing a DB is the worst idea ever) and have it communicate via a message bus, via REST calls, or not at all.
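
As a rough sketch of what that last step can look like over REST (the hostname, route and payload are made up for illustration), the main app stops reading the reports tables and instead asks the reports service directly:

  require 'net/http'
  require 'json'
  require 'uri'

  REPORTS_BASE = URI('http://reports.internal:8080')  # assumed internal hostname

  # Ask the (now fully separated) reports service to build a PDF for an
  # order and return the URL where the result can be fetched.
  def request_report(order_id)
    uri = REPORTS_BASE.dup
    uri.path = '/reports'
    res = Net::HTTP.post(uri,
                         { order_id: order_id }.to_json,
                         'Content-Type' => 'application/json')
    raise "reports service returned #{res.code}" unless res.is_a?(Net::HTTPSuccess)
    JSON.parse(res.body).fetch('pdf_url')
  end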



