Hacker News

It's not even close to true in any real sense.

Queues create async operations, which are often processed faster than sync operations because queues force an async design.

You should compare Little's Law to a hard drive with and without Native Command Queuing (NCQ).
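Little's Law itself is just a relationship between arrival rate, time in system, and occupancy. A minimal illustration in Python (the rates here are made up, purely for arithmetic):

```python
# Little's Law: L = lambda * W
#   L      = average number of requests in the system
#   lambda = average arrival rate
#   W      = average time each request spends in the system
arrival_rate = 100.0        # requests per second (hypothetical)
avg_time_in_system = 0.05   # seconds per request (hypothetical)

avg_in_system = arrival_rate * avg_time_in_system  # 5.0 requests in flight
```

The law holds regardless of scheduling policy, which is why it applies both with and without NCQ; what NCQ changes is W.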




NCQ works well when many writers are trying to get access to the hard drive. Of course this works well, unless you attempt to do more IOPS than your drive array supports for a sustained period of time. At that point, applications that require fsync or write barriers will get terrible performance due to latency, and you should re-evaluate your storage. Which is the whole point of "queues don't fix overload."


Actually they do, which is why if you run the drive sans NCQ you get lower sustained throughput, and get overloaded faster.

Having a queue of messages allows optimization in ordering the messages.
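Reordering is exactly what NCQ-style scheduling buys you. A rough sketch of an elevator-style sweep over queued track positions (the function names and numbers are illustrative, not any real drive's firmware):

```python
def elevator_order(head, requests):
    """Serve pending requests in one sweep past the head, then sweep back."""
    ahead = sorted(r for r in requests if r >= head)
    behind = sorted((r for r in requests if r < head), reverse=True)
    return ahead + behind

def seek_distance(head, order):
    """Total head movement needed to serve requests in the given order."""
    total = 0
    for r in order:
        total += abs(r - head)
        head = r
    return total

pending = [95, 10, 80, 20]   # queued track numbers (made up)
fifo = seek_distance(50, pending)                        # 260 tracks of movement
swept = seek_distance(50, elevator_order(50, pending))   # 130 tracks of movement
```

The queue is what makes the reordering possible: with only one outstanding request at a time, there is nothing to sort.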


Yes, having a queue can help in optimizing throughput, but only if there exists some economy of scale from combining tasks (which is the case with NCQ). But when the queued tasks are completely independent, having a queue doesn't help in increasing throughput.
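For NCQ the economy of scale is amortized seek and rotation; another common one is coalescing adjacent writes so several queued tasks become one I/O. A toy sketch (this `coalesce` helper is hypothetical, not a real API):

```python
def coalesce(writes):
    """Merge overlapping or adjacent (offset, length) writes into fewer I/Os."""
    merged = []
    for off, ln in sorted(writes):
        if merged and off <= merged[-1][0] + merged[-1][1]:
            last = merged[-1]                      # extend the previous range
            last[1] = max(last[1], off + ln - last[0])
        else:
            merged.append([off, ln])               # start a new range
    return [tuple(m) for m in merged]
```

Three adjacent queued writes collapse into two I/Os: `coalesce([(0, 4), (4, 4), (16, 4)])` gives `[(0, 8), (16, 4)]`. If the queued writes are scattered and independent, the function returns them unchanged, which is the point above: no combining, no throughput win.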


Queues can also allow you to break up load that can/should be distributed to multiple workers... it's a natural means of spreading out load.

For example, you may have a classifieds site where users upload/attach images to a listing. This doesn't occur at a consistent rate, and you may have 5, 10, 20 workers or more that you can spin up if it comes under heavier load.

Queues let you separate the workload of processing images into different optimized sizes, while allowing the rest of the system to operate normally. This isn't a situation where you're working around back-pressure; it's normalizing and distributing load.
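A minimal sketch of that pattern with the standard library (the target widths are made up, and `process_image` is a stand-in for real resizing):

```python
import queue
import threading

def process_image(image_name, width):
    """Placeholder for the real resize work."""
    return f"{image_name}@{width}"

def worker(jobs, results):
    while True:
        item = jobs.get()
        if item is None:                     # sentinel: this worker is done
            break
        for width in (1280, 640, 160):       # hypothetical target sizes
            results.append(process_image(item, width))

def run_pool(images, n_workers=4):
    jobs = queue.Queue()
    results = []                             # list.append is atomic in CPython
    threads = [threading.Thread(target=worker, args=(jobs, results))
               for _ in range(n_workers)]
    for t in threads:
        t.start()
    for img in images:
        jobs.put(img)
    for _ in threads:
        jobs.put(None)                       # one sentinel per worker
    for t in threads:
        t.join()
    return results
```

Raising `n_workers` when uploads surge is exactly the "spin up more workers" move described above; the queue decouples upload rate from processing rate.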

Much like distributed databases help in a lot of ways. If you're mostly read, then read-only replicas are likely the simplest answer. If you're write-heavy, then sharding may be your best bet. Need to distribute processing load that can be async? Queues/workers are a good approach.

It's a tool, like any other... It doesn't replace distribution, replication, or sharding; it's meant to be used in combination with them.


The article is talking about boring old queueing, like at the bank. The teller doesn't ask what kind of transaction you're looking for, even if they're optimized for check depositing. It's a straight-up first-come, first-served scenario.

This type of queueing can smooth out demand spikes at the cost of latency. The author's point is that naively adding a queue doesn't solve scaling problems. Although, as you point out, it may enable optimizations that are otherwise impossible.
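That trade-off is easy to see numerically. In this toy discrete-time model (the rates are made up), the queue absorbs a brief spike, but under sustained overload the backlog, and hence the latency, grows without bound:

```python
def backlog_over_time(arrivals, service_rate):
    """Backlog of a FIFO queue after each tick, given arrivals per tick."""
    backlog, history = 0, []
    for a in arrivals:
        backlog = max(0, backlog + a - service_rate)
        history.append(backlog)
    return history

# A brief spike above a service rate of 7/tick: the backlog drains back to zero.
spike = backlog_over_time([5, 9, 9, 5, 5, 5], 7)      # [0, 2, 4, 2, 0, 0]
# Sustained 9/tick against the same 7/tick: the backlog only grows.
overload = backlog_over_time([9, 9, 9, 9, 9, 9], 7)   # [2, 4, 6, 8, 10, 12]
```

The first case is the queue doing its job; the second is "queues don't fix overload" in miniature.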

NCQ actively interrogates the current list of work and optimizes delivery around what needs to be done. That's not the OP's point.





