
Using a reactor or actors doesn't completely mitigate the problem, but it does make the symptoms considerably less harmful.

Some hypothetical examples:

Java/Ruby: 8 threads, mean service time 50ms (20 requests/s per thread). A system like this has an upper-bound throughput of 160 requests/s. If a single request is allowed to run to a long timeout, say 5s, the system effectively drops to 7 threads (140 requests/s), as one thread is "locked" servicing the slow request. It doesn't take many requests like this to significantly degrade performance, and the only remedy (absent circuit breakers) is to throw a ton more hardware/workers at the problem = $$$.
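The arithmetic above can be sketched directly (a toy model, not a benchmark; the numbers are the hypothetical ones from the example):

```java
// Back-of-the-envelope capacity math for a thread-per-request server:
// each free thread handles 1000 ms / 50 ms = 20 requests/s, so capacity
// scales linearly with the number of threads NOT stuck on a slow call.
public class CapacityMath {
    static double throughput(int freeThreads, double meanServiceMs) {
        return freeThreads * (1000.0 / meanServiceMs);
    }

    public static void main(String[] args) {
        System.out.println(throughput(8, 50)); // 160.0 req/s, all threads free
        System.out.println(throughput(7, 50)); // 140.0 req/s, one thread locked
    }
}
```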

Consider the problem in, say, node.js. If a single request times out, that request will cause problems, but it won't have _nearly_ the same capacity-starving effect as in the thread-bound example above; the request will simply time out, and other requests won't be starved, because the number of in-flight concurrent requests isn't limited by the thread/process count of the system.
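The same non-blocking idea is available on the JVM too; a minimal sketch using CompletableFuture.orTimeout (Java 9+), where the hypothetical callBackend() stands in for an async NIO client call that never completes:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.TimeUnit;

public class NonBlockingTimeout {
    // Hypothetical async backend call: a real implementation would complete
    // the future from an I/O callback. Here it never completes, simulating
    // a hung supplier.
    static CompletableFuture<String> callBackend() {
        return new CompletableFuture<>();
    }

    // orTimeout fails the *future* after 100 ms; no request thread sits
    // blocked waiting, so server capacity isn't consumed by the hang.
    static String fetchWithTimeout() throws Exception {
        return callBackend()
                .orTimeout(100, TimeUnit.MILLISECONDS)
                .exceptionally(e -> "fallback") // degrade instead of starve
                .get();
    }

    public static void main(String[] args) throws Exception {
        System.out.println(fetchWithTimeout()); // prints "fallback"
    }
}
```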

I'm realizing as I write this that I'm straying a little off-topic here, but the point is: using process/thread-based concurrency in a high-performance system where failure is likely is a bad idea. It's just too easy to get the kind of failures Fowler describes: "What's worse if you have many callers on a unresponsive supplier, then you can run out of critical resources leading to cascading failures across multiple systems."




Which Java systems these days use 1 thread per request? In my experience, Node scales far less well than typical Java setups.


Typical synchronous servlet containers (Tomcat, Jetty, etc.) all maintain a thread pool and dedicate a single thread to each request for that request's lifetime. These thread pools can easily hold hundreds of threads on everyday hardware (vs. something like forking unicorn, where 8 Ruby processes consume quite a bit of memory).

This works well for many workloads. It allows a straightforward blocking-IO model, and you don't typically worry about a few slow requests bogging everything down (which you do if you only have 4-8 unicorn processes). I'd say in many apps, the database becomes a bottleneck before the pool runs out of threads.
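For concreteness, the size of that pool in Tomcat is controlled by the Connector's maxThreads attribute in server.xml (the default is 200); a sketch, with illustrative values:

```xml
<!-- server.xml: hypothetical sizing, not a recommendation -->
<Connector port="8080" protocol="HTTP/1.1"
           maxThreads="200"
           connectionTimeout="20000" />
```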


For Tomcat 8: "The default HTTP and AJP connector implementation has switched from the Java blocking IO implementation (BIO) to the Java non-blocking IO implementation (NIO)."

@ https://tomcat.apache.org/migration-8.html


Keeping average response time low is only part of the point of a circuit breaker. Another goal might be to reduce load on the back-end service to give it a chance to recover.
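That load-shedding behavior is the core of the pattern; a minimal circuit-breaker sketch (a toy, not production code; the thresholds are illustrative):

```java
import java.util.function.Supplier;

// After `maxFailures` consecutive failures the breaker "opens" and rejects
// calls immediately for `resetMs`, so the struggling backend sees no
// traffic at all while it recovers.
public class CircuitBreaker {
    private final int maxFailures;
    private final long resetMs;
    private int failures = 0;
    private long openedAt = 0;

    public CircuitBreaker(int maxFailures, long resetMs) {
        this.maxFailures = maxFailures;
        this.resetMs = resetMs;
    }

    public synchronized <T> T call(Supplier<T> action, T fallback) {
        if (failures >= maxFailures
                && System.currentTimeMillis() - openedAt < resetMs) {
            return fallback; // open: shed load without touching the backend
        }
        try {
            T result = action.get();
            failures = 0;    // a success closes the breaker again
            return result;
        } catch (RuntimeException e) {
            if (++failures >= maxFailures) openedAt = System.currentTimeMillis();
            return fallback;
        }
    }
}
```

Once open, even the cheap fallback path protects the backend: the third call below returns the fallback without invoking the supplier at all.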



