
Are those really the only options? I'm trying to wrap my head around how using a fixed-size thread pool for I/O automatically implies deadlocks, but I just can't. Unless the threads block on completion until their results are consumed, instead of just notifying and then taking the next task...

I can definitely imagine blocking happening while waiting for a worker to be available, though. Did you mean simply blocking instead of deadlock?




N threads, with N readers waiting for a message that will only come if the (N+1)th reader (still in the queue) gets a message first.
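A toy sketch of that, if it helps (hypothetical Python; a ThreadPoolExecutor stands in for the fixed-size I/O pool and an Event stands in for the message arriving on a socket): all N workers are occupied by blocked readers, and the one task that would unblock them sits in the queue and can never be scheduled.

    from concurrent.futures import ThreadPoolExecutor, TimeoutError
    import threading

    N = 4                           # the fixed pool size
    go = threading.Event()          # stands in for "a message arriving"

    def reader(i):
        go.wait()                   # blocks like a socket read with nothing to read
        return i

    def late_reader():
        go.set()                    # would deliver the message everyone is waiting for

    pool = ThreadPoolExecutor(max_workers=N)
    readers = [pool.submit(reader, i) for i in range(N)]  # occupies every worker
    pool.submit(late_reader)        # the (N+1)th task: queued, never scheduled

    try:
        readers[0].result(timeout=5)
    except TimeoutError:
        print("deadlocked: all workers blocked, late_reader stuck in the queue")
        go.set()                    # release the readers so the demo can exit
    pool.shutdown()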


Thank you for humoring me. I had to sleep on it, but I can see it now. Seems like it would require a really bad design or, more likely, bad actors (remotes leaving dead sockets open), but it would definitely be possible.

The same scenarios would lead to resource exhaustion if the thread pool wasn't bounded.


But surely one must use an output queue, not synchronously wait for the consumer to consume a result?
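Something like this toy sketch (hypothetical Python, just to show the shape of what I mean): the worker drops each result onto an output queue and immediately picks up the next task, so it never blocks waiting on a consumer.

    import queue, threading

    tasks = queue.Queue()
    results = queue.Queue()          # output queue: put() never blocks the worker

    def worker():
        while True:
            item = tasks.get()
            if item is None:         # sentinel: shut the worker down
                break
            results.put(item * 2)    # hand the result off and move straight on
            tasks.task_done()

    t = threading.Thread(target=worker)
    t.start()
    for i in range(5):
        tasks.put(i)
    for _ in range(5):
        print(results.get())         # consumer drains results at its own pace
    tasks.put(None)
    t.join()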


The N + 1 readers are all blocked reading from different sockets.



