
RCU is a very general solution to the concurrent reclamation problem, but it hardly makes any algorithm lock free.

Also I find the following troubling:

"[in the kernel] Even a concurrency scheme that nominally used spinlocks to protect critical sections would be lock-free, because every thread would exit their critical section in bounded time"

This ignores the possibility of deadlocks and livelocks. Also, the lock-free definition requires that the system make progress even if other threads are halted; you can't just handwave the requirement away by claiming that your threads never halt, and even in practice you can't guarantee that a buggy NMI handler won't take over the CPU, effectively halting any thread.

edit: are never not halted -> never halt



Based on [1], a thread running inside a critical section cannot be preempted by other threads on the processor. That means deadlock won't happen, since nested critical sections are still run by the same thread until it exits the outermost critical section.

Thread halting is not a problem as long as the thread resumes to complete the critical section. Interrupts can happen: the interrupt state is saved to be handled later, the thread resumes to finish the critical section, and then the saved interrupt is delivered to the interrupt handler. The scheduler is interrupt-driven anyway.

A thread killed while inside a critical section can be a real problem, as the critical section is never released, preventing other threads from running on the processor. I'm not sure how Linux handles this: whether it cleans up the critical section when the thread is killed, or the killing waits until the critical section has finished.

[1] https://en.wikipedia.org/wiki/Critical_section#Kernel-level_...


A thread can still deadlock with itself with non-recursive mutexes or spinlocks. Also, assuming a single-CPU system is no longer relevant today.

"Thread halting is not a problem as long as it resumes"

The whole point of a non-blocking algorithm is that it gives guarantees if a thread doesn't resume, for whatever reason.


The whole point of a non-blocking algorithm is system-wide forward progress. Yes, this is much harder to do if a thread can suspend for an unbounded amount of time, but that's kind of the point of the article: being in the kernel allows you to pull the kind of trick that makes that behaviour a non-issue.

(edit: s/per-thread/system-wide, since we're talking about lock-freedom).


Technically non-blocking only guarantees system wide progress. Only wait-free algorithms guarantee per thread progress.

Anyway, even controlling preemption and disabling interrupts very much does not make a spinlocked critical section a lock free algorithm. That's not just a theoretical issue. We are dealing right now with a couple of machines where periodically the kflush kernel thread livelocks hard preventing the rest of the system from ever writing a page to disk requiring a hard powercycle.


Indeed, spinlocks most definitely aren't lock-free.

"rcu_read_lock" doesn't ever block (as in wait for a spinlock), right? Sometimes you just need readers to never block and can pay a higher price for other operations.


From the article:

> So as soon as a thread yields its CPU, it’s guaranteed to be out of its critical section.

Therefore, the yielding can be seen as a blocking operation, meaning that readers do in fact block!


Yielding per se is not a blocking operation; otherwise no algorithm would be non-blocking under preemptive scheduling. It could be considered a potentially blocking operation if scheduling is exclusively cooperative.


The CPU is a resource too, for which there is contention just as for other resources such as files. Therefore, at every machine instruction, you are implicitly performing a potentially blocking operation. (In a pre-emptive environment.)

Unless you are in a critical section (in which case your thread has exclusive control of the CPU). So the blocking may start again at the end of the section.

I hope this makes it clear.


On Linux you can guarantee (with some effort) that your process / thread will be the only thing running on a core.

There's a boot parameter (isolcpus) that lets you blacklist CPUs on which nothing is scheduled by default. Then you use taskset to schedule your process there as well. Finally, you can configure IRQ affinity so that the core also doesn't handle interrupts. It's not 100% non-preemptive, but pretty close.

I believe there was work being done to get it even closer to a true 100%, but I haven't followed along.


Any scheduler worth anything guarantees forward progress which makes preemption very different from actual blocking.


I'm not very familiar with the kernel RCU, but I believe that not only does rcu_read_lock not block, it doesn't even issue any expensive memory barriers.


You are right. It does not even guarantee mutual exclusion within the read-side critical section, i.e., the data you are reading can be changed while you are in the critical section. RCU, as the article claims, does not work in all situations.


One way to think of RCU is that it enables working with (a restricted form of) persistent data structures in languages without GC.


That's precisely the point: if threads never block or yield the CPU, you can guarantee system-wide progress.

If you have an algorithm which might deadlock, 'progress' isn't really well defined.


Again, you can't ever guarantee that a thread never blocks or yields: it might have bugs, it might get a machine exception, it might have to deal with buggy hardware, someone might have attached an external debugger, etc. These might be rare events and not worth worrying about, and a spinlock might actually be the fastest implementation for your algorithm. It still doesn't make it lock-free.

edit: ah, and you might be running under a virtualized cpu and the host takes the cpu away from you.

Edit: another one: SMM mode kicks in and takes the CPU away from the OS.



