
> you can test hundreds of thousands of subtle failure cases in I/O, event ordering, timeouts, dropped packets, filesystem failures, etc.

As cool as all this is, I can't help but wonder how often the culture of microservices and distributed computing is ill-advised. So much of the complexity I've seen in such systems boils down to the fact that calling a "function" is async, depends on the OS, executes at some point or never, and always returns a bunch of strings that have to be parsed back into the static type system, each of which comes with its own set of failure modes. This makes the seemingly simple task of abstracting logic into a named component, a.k.a. a function, extremely complex. You don't need to test for any of the subtle failures you mentioned if you leave the logic inside the same process and just call a function. I know monoliths aren't always a good idea or a good fit, but at the same time I'm highly skeptical whether the current prevalence of service-based software architectures is justified and pays off.
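To make that concrete, here's a rough sketch (in Go; the endpoint, wire format, and timeout are all invented for illustration) of what "just calling a function" turns into once it crosses a network boundary:

    package example

    import (
    	"context"
    	"encoding/json"
    	"fmt"
    	"net/http"
    	"time"
    )

    // In-process: synchronous, typed, returns or doesn't.
    func add(a, b int) int { return a + b }

    // Over the network: async under the hood, may time out, may run
    // zero or more times, and hands back bytes that must be parsed
    // back into the type system. Endpoint and JSON shape are made up.
    func addRemote(ctx context.Context, a, b int) (int, error) {
    	ctx, cancel := context.WithTimeout(ctx, 2*time.Second)
    	defer cancel()

    	url := fmt.Sprintf("http://calc.internal/add?a=%d&b=%d", a, b)
    	req, err := http.NewRequestWithContext(ctx, http.MethodGet, url, nil)
    	if err != nil {
    		return 0, err
    	}
    	resp, err := http.DefaultClient.Do(req) // executed at some point, or never
    	if err != nil {
    		return 0, err // timeout? dropped packet? no way to tell from here
    	}
    	defer resp.Body.Close()

    	var out struct {
    		Sum int `json:"sum"`
    	}
    	if err := json.NewDecoder(resp.Body).Decode(&out); err != nil {
    		return 0, err // re-entering the type system, with its own failure modes
    	}
    	return out.Sum, nil
    }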



> I can't help but wonder how often the culture of microservices and distributed computing is ill-advised.

You can't get away from distributed computing unless you get away from computing. A modern computer isn't a single unit; it's a system of computers talking to each other. Even if you go back a long way, you'll find many computers or proto-computers talking to each other, just with much stricter timing, because the individual computers were less flexible.

If you save a file to a disk, you're really asking the OS (somehow) to send a message to the computer on the storage device, asking it to store your data. It will respond with success or failure, and it might also actually write the data. (Sometimes it will tell your OS "success" and then proceed to throw the data away, which is always fun.)
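A minimal sketch of what that exchange looks like from user space (Go; the path handling is generic): every call here is a request to another computer that replies success or failure, and even a successful fsync can be a lie from buggy or lying drive firmware:

    package example

    import "os"

    // Save data "to disk": each step is really a message to another
    // computer (the controller on the drive), which acknowledges it
    // and may or may not have actually persisted anything yet.
    func save(path string, data []byte) error {
    	f, err := os.Create(path)
    	if err != nil {
    		return err
    	}
    	defer f.Close()

    	if _, err := f.Write(data); err != nil {
    		return err // lands in the OS page cache, not on the platter
    	}
    	// Ask the OS to ask the drive to flush. A lying drive can ack
    	// this and still drop the data on power loss.
    	return f.Sync()
    }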

That said, keeping things together where it makes sense is definitely a good thing.


I see your point. Even multithreading can be seen as a form of distributed programming. At the same time, in my experience these parts can often be isolated. You trust your DB to handle such issues, and I'm very happy we're getting a new era of DBs like TigerBeetle, FoundationDB, and sled that are designed to survive Jepsen. But how many teams are building DBs? That point is a bit ironic, given that I'm currently building an in-memory DB at work, but that's a completely different level of complexity. And your example of writing a file: that too is a somewhat solved problem, use ZFS. I'd argue there are many situations where fault-tolerant distributed requirements can be served by existing abstractions.
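For instance (a hedged sketch; the table, columns, and placeholder syntax are invented), "trusting the DB" often just means pushing the coordination into one serializable transaction instead of hand-rolling it across services:

    package example

    import (
    	"context"
    	"database/sql"
    )

    // Let one transactional DB enforce atomicity instead of
    // coordinating two services over the network. Schema is made up.
    func transfer(ctx context.Context, db *sql.DB, from, to string, amount int) error {
    	tx, err := db.BeginTx(ctx, &sql.TxOptions{Isolation: sql.LevelSerializable})
    	if err != nil {
    		return err
    	}
    	defer tx.Rollback() // no-op once committed

    	if _, err := tx.ExecContext(ctx,
    		"UPDATE accounts SET balance = balance - $1 WHERE id = $2", amount, from); err != nil {
    		return err
    	}
    	if _, err := tx.ExecContext(ctx,
    		"UPDATE accounts SET balance = balance + $1 WHERE id = $2", amount, to); err != nil {
    		return err
    	}
    	return tx.Commit()
    }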



