
I spent 10 years doing C#, and the last 3 doing Ruby. I never thought of N+1 queries as that big of an issue. The individual queries are typically fast (1ms * 100 is still only 100ms…), and multithreaded web servers don't block on IO like database calls.

But sporadic elevated response times kept showing up on endpoints, where they'd be hundreds of milliseconds slower than normal, and always by some multiple of 100ms. Say, normally 5ms, now taking 105ms, or 505ms, or more.

Then I learned about Ruby's concurrent-but-not-parallel threading model, where within a process only one thread can execute Ruby code at a time. In most workloads a thread hits IO quickly and releases the lock, and the threads play nicely. But a CPU-crunching thread will hold the lock for the full 100ms scheduler quantum before it's preempted, delaying every other thread waiting to execute. Now consider you're doing 10 1ms queries in a process with a greedy thread: after each query you can wait up to 100ms to get the lock back, so you're waiting at minimum 1,010ms.
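The effect is easy to reproduce in plain Ruby. A minimal sketch (names and timings are mine, not from the thread): a busy-looping thread hogs the GVL, so each tiny "query" in another thread can take far longer in wall time than the IO itself.

```ruby
require "benchmark"

# Busy loop: holds the GVL except at preemption points (~100ms quantum on MRI).
def cpu_burn(seconds)
  deadline = Process.clock_gettime(Process::CLOCK_MONOTONIC) + seconds
  nil until Process.clock_gettime(Process::CLOCK_MONOTONIC) >= deadline
end

latencies = []
io_thread = Thread.new do
  10.times do
    # sleep releases the GVL, standing in for a 1ms database call;
    # reacquiring the GVL afterwards is where the extra wait comes from.
    latencies << Benchmark.realtime { sleep 0.001 }
  end
end
greedy = Thread.new { cpu_burn(0.5) }
[io_thread, greedy].each(&:join)

puts format("10 x 1ms 'queries' took %.0fms of wall time", latencies.sum * 1000)
```

On MRI, with the greedy thread running, the total wall time for the ten 1ms sleeps lands well above 10ms; on a runtime without a GVL (e.g. JRuby) it stays close to 10ms.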

Still love Ruby but the process model gave me a reason to hate N+1s.



Since Rails 7.1 we've had https://www.rubydoc.info/github/rails/rails/ActiveRecord%2FR... which actually does run queries in parallel.
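A minimal sketch of what that looks like in practice (model names are hypothetical; assumes a Rails app with Active Record's `load_async`):

```ruby
# Requires config.active_record.async_query_executor to be configured
# (e.g. :global_thread_pool); otherwise the query runs in the foreground.
# load_async submits each query to a background thread pool, so the two
# queries hit the database in parallel.
posts  = Post.where(published: true).load_async
topics = Topic.order(:name).load_async

# Accessing a relation blocks only until that relation's query has finished.
@posts, @topics = posts.to_a, topics.to_a
```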

There's also Rails' Russian doll caching, which can actually result in pages with N+1 queries running quicker than ones with preloaded queries. https://rossta.net/blog/n-1-is-a-rails-feature.html
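The gist of that technique, sketched as a view fragment (template and model names assumed): each record's fragment is cached and keyed on the record, so the per-record queries behind an N+1 only run on cache misses.

```erb
<% @post.comments.each do |comment| %>
  <%# The cache key includes comment.updated_at, so edits bust the cache. %>
  <% cache comment do %>
    <%= render comment %>
  <% end %>
<% end %>
```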


load_async is still concurrency, not parallelism. The queries themselves can run in parallel on the database side, but when materializing AR objects, for example, only one thread can run at a time. A greedy thread in the process will still subject you to GVL waits.


If that’s a problem for you right now, I’d suggest giving JRuby a look, as it has no GVL and true multithreading.

Hopefully as Ractors mature that problem will be solved for MRI too.
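As a taste of where that's heading, a minimal Ractor sketch (MRI 3.x, still experimental): each ractor has its own lock rather than sharing one GVL, so CPU-bound work genuinely runs in parallel.

```ruby
# Each Ractor executes in parallel on MRI; no GVL is shared between them.
# (Ruby prints an "experimental feature" warning on first use.)
ractors = 2.times.map do |i|
  Ractor.new(i) do |offset|
    # Independent CPU-bound work per ractor; no shared mutable state.
    (1..1_000).sum + offset
  end
end
results = ractors.map(&:take)
# results == [500500, 500501]
```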





