Typically you solve that through a rewrite, followed by migration and deprecation. Since the microservice is small, you can ideally accomplish the rewrite in under a week.
But if you are changing the factoring of the microservices, you are necessarily rewriting more than one. Sure, it can be done, but you know what's easier than refactoring microservice boundaries? Refactoring service object boundaries in a single app with one coherent test suite.
It's time to stop hand-waving away the overhead of microservices. The two big reasons to introduce microservices are because you have different workload profiles that need to scale independently, or because you have separate teams with bounded areas of responsibility and you want them to operate independently and minimize communication overhead. I'll also give a pass for microservices where the interface is very obvious and stable, and there are low cross-cutting concerns.
But in a lot of cases devs are just complexifying things for their own resume and ego.
> But in a lot of cases devs are just complexifying things for their own resume and ego.
I wouldn't even go that far. The complexities of communications between network services simply escapes most devs who think in terms of features.
They see most of the issues as an ops problem that they can chuck over the wall. Many of them don't stay in one place too long, so they've not really had to live with the consequences of their decisions.
The problem is, working with more buzzwords and moving about is the best way to further your career as a dev. Being a stable hand will likely result in you being underpaid and undervalued, and at the end of the day most of us live in expensive cities and are soon in our 30s, wanting a family. Of course we're going to chase the shiny that pays well.
"Minimise communication overhead" - in the environments I've worked in it's usually to decouple release schedules rather than scaling, so I guess that falls under minimising comms overhead.
Evolving enterprise systems is hard, being able to eliminate wait barriers between new dependencies being made available is a massive benefit to the wider release schedule.
You can release new functionality on your team's schedule and turn it on later when collaborating teams are ready. For various reasons, enabling a feature is usually much easier than releasing a version bump.
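The "ship dark, enable later" pattern above is just a feature flag. A minimal sketch (the flag store, function names, and flag name here are all hypothetical; in practice the flags would live in a config service or database so they can flip without a deploy):

```python
# Hypothetical in-process flag store. Real systems read flags from a
# config service or database so flipping one needs no redeploy.
FLAGS = {"new_pricing": False}

def quote(amount: float) -> float:
    # Code for the new behaviour is already deployed, but dark.
    if FLAGS["new_pricing"]:
        return round(amount * 0.9, 2)  # new discounted pricing path
    return amount                      # old path until partners are ready

before = quote(100.0)           # released on our schedule, still off
FLAGS["new_pricing"] = True     # "turned on later" when teams are ready
after = quote(100.0)
```

The version bump comparison in the comment is the point: the code above was already released; enabling it is a data change, not a deploy.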
1) never change data exchange formats between trivially connected parts of the application (e.g. splitting billing and delivery address in the payment system; no way you can do that easily with microservices)
2) effectively have 3 versions of every service running at the same time, essentially constantly. This is more work than just running 3 versions: they must also be developed so they CAN run concurrently (e.g. they must tolerate the non-existence of a delivery address in the same example)
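The tolerance point in 2) usually means lenient deserialization. A rough sketch using the same billing/delivery address example (the payload shape and field names are invented for illustration): a reader that works against both the old single-address format and the new split format.

```python
import json

def parse_payment(raw: str) -> dict:
    """Tolerant reader that accepts old payloads (one `address` field)
    and new ones (split billing/delivery) at the same time."""
    msg = json.loads(raw)
    billing = msg.get("billing_address") or msg.get("address")
    # Old producers don't send a delivery address yet; fall back to
    # billing instead of rejecting the message outright.
    delivery = msg.get("delivery_address") or billing
    return {"amount": msg["amount"], "billing": billing, "delivery": delivery}

old = parse_payment('{"amount": 10, "address": "1 Main St"}')
new = parse_payment(
    '{"amount": 10, "billing_address": "1 Main St", "delivery_address": "2 Side St"}'
)
```

This is the extra development cost the comment is pointing at: every service touching the message needs this kind of defensive handling for the whole overlap window.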
The Google version is supposedly coupled heavily to their infrastructure and would therefore give little benefit if open sourced without also open sourcing a large portion of that infrastructure.
Sometimes I wonder if these online tools mine data from the text blobs and regexes we pass in. I can only imagine how potential IP could be leaked if the data were correlated to the company the user is using the tool from. Has anyone done analysis to see whether the tools send data back to their servers? In theory the entire app should be client side only.
They probably don't but still kind of hard to be sure. In principle they could introduce it at any given time without anyone noticing for a while. Of course, once someone does notice, the shit will hit the fan.
Anyway, I share your nagging doubt and I'd never paste any text containing sensitive information into any text pane on a web page, not regexr, not JSON formatters, not anywhere.
The problem with IOCP is that it is more memory intensive: all memory for outstanding operations must be pre-allocated. With the readiness model, you can use pools of memory instead, for dramatically lower overall memory usage. There is a hack of using 1B reads with IOCP to get around this, but it doesn't feel very clean.
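To make the readiness-model point concrete, here is a minimal sketch using Python's `select.epoll` on a socketpair (Linux; the pool size and buffer size are arbitrary): a buffer is borrowed from a shared pool only once a socket is actually readable, rather than being pinned for every outstanding operation as a completion model requires.

```python
import select
import socket

# Shared pool: buffers are only borrowed while a socket is readable,
# not pre-allocated per outstanding operation as in a completion model.
pool = [bytearray(4096) for _ in range(4)]  # small illustrative pool

a, b = socket.socketpair()
ep = select.epoll()
ep.register(b.fileno(), select.EPOLLIN)

a.sendall(b"hello")

for fd, mask in ep.poll(timeout=1):
    if mask & select.EPOLLIN:
        buf = pool.pop()          # borrow a buffer only now
        n = b.recv_into(buf)
        data = bytes(buf[:n])
        pool.append(buf)          # return it immediately after the read

ep.close()
a.close()
b.close()
```

With thousands of mostly idle connections, the pool can stay far smaller than the connection count, which is where the memory saving comes from.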
kqueue allows for batch updates on fds that are being polled for readiness. In addition, it has less hacky support for non-socket event sources such as timers, events, signals, and disk IO. Setting up each of these with epoll requires extra, unique syscalls. I'm not sure I've ever even seen someone use epoll for disk IO (via AIO). Overall, kqueue just seems like a more cohesive, unified async solution.