I spent 10 years doing C#, and the last 3 doing Ruby. I never thought of N+1 as that big of an issue. These queries are typically fast (1ms × 100 is still only 100ms…) and multithreaded web servers are non-blocking on IO like database calls.
But these sporadic elevated response times kept showing up on endpoints, where they’d be hundreds of milliseconds slower than normal, and always by some multiple of 100ms. Say, normally 5ms, now taking 105ms, or 505ms, or more.
Then I learned about Ruby’s concurrent-but-not-parallel model, where within a process only one thread can execute at a time. In most workloads you’ll hit IO quickly, and the threads will play nicely. But if one thread has a CPU-crunching exercise, it’ll delay every other waiting thread by up to 100ms (the scheduler quantum) before it gets preempted. Now consider you’re doing ten 1ms queries in-process alongside a greedy thread, and you’re waiting at minimum 1010ms: each query wakes up only to sit behind the CPU thread for up to a full quantum.
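A minimal sketch of the effect in plain MRI Ruby, no Rails involved. Here `tiny_query` is a hypothetical stand-in for a 1ms database call (a `sleep`, which releases the GVL like real IO does), and the ~100ms quantum is MRI's default:

```ruby
require "benchmark"

# Hypothetical stand-in for a 1ms database query: sleep releases the GVL,
# so it behaves like non-blocking IO from the scheduler's point of view.
def tiny_query
  sleep 0.001
end

# Baseline: ten sequential 1ms "queries" with no other threads running.
baseline = Benchmark.realtime { 10.times { tiny_query } }

# Now add a greedy CPU-bound thread. In MRI only one thread holds the GVL
# at a time, and the scheduler quantum is ~100ms, so each time the IO
# thread wakes from its sleep it can wait up to ~100ms for the CPU thread
# to be preempted before it runs again.
stop = false
cpu_thread = Thread.new do
  counter = 0
  counter += 1 until stop # pure CPU work; never yields the GVL voluntarily
end

contended = Benchmark.realtime { 10.times { tiny_query } }
stop = true
cpu_thread.join

puts format("baseline:  %.3fs", baseline)
puts format("contended: %.3fs", contended)
```

On my understanding of the scheduler, the contended run should come out dramatically slower than the baseline even though the "queries" themselves are unchanged, which is exactly the hundreds-of-milliseconds-in-100ms-steps pattern above.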
Still love Ruby but the process model gave me a reason to hate N+1s.
load_async is still concurrency, not parallelism. The queries themselves can run in parallel, but when doing CPU work like materializing AR objects, only one thread can run at a time. A greedy thread in the same process will still subject you to GVL waits.