Libdill: Structured Concurrency for C (2016) (libdill.org)
64 points by ibraheemdev on June 1, 2021 | 19 comments


I've always been a fan of Mr Sustrik's work, but this has a special place in my heart.

I highly recommend reading the "What is structured concurrency?" section carefully. I think this is a big deal.

Why?

Let's start from another angle. Have you ever seen the `pthread_cancel` manpage? Or the "POSIX.1 Cancellation points" section? It is an utter mess!

More than that, _any_ framework I've seen that allows you to "kill" a coroutine (goroutine, thread, actor) ends up reproducing this "pthread_cancel" drama.

Basically, thread cancellation seems to be an unsolved computer science problem.
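
To make the "drama" concrete, here is a minimal sketch of my own (not from the manpage) of the ceremony POSIX cancellation demands: every resource has to be lexically bracketed by pthread_cleanup_push/pop, and you have to know exactly which calls are cancellation points. Build with cc -pthread.

    #include <pthread.h>
    #include <stdlib.h>
    #include <unistd.h>

    /* Cleanup handler: runs if the thread is cancelled while the
     * matching pthread_cleanup_push/pop pair is in effect. */
    static void free_buffer(void *arg)
    {
        free(arg);
    }

    static void *worker(void *arg)
    {
        (void)arg;
        char *buf = malloc(4096);
        if (buf == NULL)
            return NULL;

        /* Without this pairing, cancellation leaks the buffer. */
        pthread_cleanup_push(free_buffer, buf);

        for (;;) {
            /* sleep() is a cancellation point; a purely CPU-bound loop
             * would never even notice the cancellation request. */
            sleep(1);
        }

        /* Never reached, but push/pop must match lexically. */
        pthread_cleanup_pop(1);
        return NULL;
    }

    int main(void)
    {
        pthread_t t;
        pthread_create(&t, NULL, worker, NULL);
        sleep(1);
        pthread_cancel(t);      /* request cancellation...            */
        pthread_join(t, NULL);  /* ...and hope every path cleaned up. */
        return 0;
    }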

Mr Sustrik, with "structured concurrency", makes one simple assumption that is sensible in practice. It's not an academic proposal; it's totally usable. The programmer must understand it and use it properly, but it's a workable constraint, namely: a thread must exit before its parent dies.

This small thing seems to make all the difference.

Anyhow. If you ever wondered why killing threads is a mess, libdill proposes a solution.
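
To see what that constraint looks like in code, here is a minimal sketch of mine using libdill's documented primitives (coroutine, go, hclose, msleep, now); it's an illustration of the idea rather than an example lifted from the project, and it links with -ldill.

    #include <libdill.h>
    #include <stdio.h>

    /* A child coroutine that would happily run forever. */
    coroutine void worker(void)
    {
        for (;;) {
            /* Blocking libdill calls start failing with ECANCELED once
             * the owning handle is closed, so the child can clean up
             * and exit. */
            if (msleep(now() + 100) < 0) {
                printf("worker: cancelled, cleaning up\n");
                return;
            }
            printf("worker: tick\n");
        }
    }

    int main(void)
    {
        int h = go(worker());   /* launch the child; the parent owns the handle */
        msleep(now() + 350);    /* the parent does its own work */

        /* The child must not outlive its parent: closing the handle
         * cancels the coroutine so it exits before the parent does. */
        hclose(h);
        printf("parent: child is gone, safe to exit\n");
        return 0;
    }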


> I highly recommend reading the "What is structured concurrency?" section carefully. I think this is a big deal.

The idea was originally introduced and explained in more depth here: https://250bpm.com/blog:71/


What do you mean by the "pthread_cancel" drama and by thread cancellation being an unsolved computer science problem?


I saw this in 2017. Unfortunately, not much activity now. https://github.com/sustrik/libdill/commits/master

Might be fun to play with, but I wouldn't rely on it. Generally, you're better off with libuv for existing projects, or with Rust for greenfield work where lifetimes are checked and safe concurrency is much easier.


Or Go, the concurrency utopia


I'm just going to leave this here. Don't shoot the messenger; I've already had a bad day of downvotes:

"Many lock-free structures offer atomic-free read paths, notably concurrent containers in garbage collected languages, such as ConcurrentHashMap in Java. Languages without garbage collection have fewer straightforward options, mostly because safe memory reclamation is a hard problem..." - Travis Downs https://travisdowns.github.io/blog/2020/07/06/concurrency-co...


From the same paragraph: "there are still some (http://concurrencykit.org/) good (https://software.intel.com/content/www/us/en/develop/documen...) options (https://github.com/facebook/folly/tree/master/folly/concurre...) out there."

There are plenty of practical solutions to the safe memory reclamation problem in C. The language just doesn't force one on you.

From epoch-based reclamation (https://github.com/concurrencykit/ck/blob/master/include/ck_..., especially with the multiplexing extension to Fraser's classic scheme), to quiescence schemes (https://liburcu.org/), or hazard pointers (https://github.com/facebook/folly/blob/master/folly/synchron..., or https://pvk.ca/Blog/2020/07/07/flatter-wait-free-hazard-poin...)... or even simply using a type-stable (https://www.usenix.org/legacy/publications/library/proceedin...) memory allocator.
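
As a concrete illustration of the quiescence-based option, here is a minimal sketch using liburcu's classic API; the exact header and flavor (memb vs. qsbr, prefixed vs. classic names) vary between liburcu versions, and it assumes updates are serialized by the caller, so read it as a shape rather than a drop-in.

    #include <stdlib.h>
    #include <urcu.h>   /* liburcu, classic "memb" flavor */

    struct config {
        int value;
    };

    static struct config *global_cfg;   /* shared, RCU-protected pointer */

    /* Read path: no locks and no atomic read-modify-write, just the
     * cheap RCU read-side markers. Each reader thread must have called
     * rcu_register_thread() beforehand. */
    static int read_value(void)
    {
        rcu_read_lock();
        struct config *c = rcu_dereference(global_cfg);
        int v = c ? c->value : 0;
        rcu_read_unlock();
        return v;
    }

    /* Update path (single writer assumed): publish a new version, then
     * wait for all pre-existing readers before reclaiming the old one. */
    static void update_value(int value)
    {
        struct config *nc = malloc(sizeof(*nc));
        if (nc == NULL)
            return;
        nc->value = value;

        struct config *old = global_cfg;
        rcu_assign_pointer(global_cfg, nc);

        synchronize_rcu();   /* no reader can still hold 'old' */
        free(old);           /* safe memory reclamation done */
    }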

In my experience, it's easier to write code that is resilient to hiccups in C than in Java. Solving SMR with GC only offers something close to lock-freedom when you can guarantee global GC pauses are short enough... and common techniques to bound pauses, like explicitly managed object pools, land you back in the same problem space as C.


Maybe, but Java is the most successful server-side language because it has other features that C lacks and always will, the most important being: it does not segfault on you!

It gives you a stack trace that helps you debug in seconds, and when that is not enough, a heap dump gives you enough information to diagnose a problem in a live service without taking it down.

To use any other language for servers is madness. For clients I prefer C (with a tad of C++ for conveniences like namespaces, strings, and streams).

I even hot-deploy .dll/.so files to it, just like I hot-deploy classloaders in Java, for development speed!

Another quote you'll try to deconstruct but I know it to be true because I use it every day:

"While I'm on the topic of concurrency I should mention my far too brief chat with Doug Lea. He commented that multi-threaded Java these days far outperforms C, due to the memory management and a garbage collector. If I recall correctly he said "only 12 times faster than C means you haven't started optimizing"." - Martin Fowler https://martinfowler.com/bliki/OOPSLA2005.html


You can get a stack trace on segfault in C. Stack Overflow reference: https://stackoverflow.com/questions/77005/how-to-automatical...
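
For completeness, a minimal sketch of the usual glibc approach (execinfo.h); the linked Stack Overflow thread covers the caveats, e.g. signal-handler safety and building with -g -rdynamic so the frames resolve to function names.

    #include <execinfo.h>
    #include <signal.h>
    #include <unistd.h>

    /* SIGSEGV handler: dump the call stack to stderr, then exit.
     * backtrace_symbols_fd() avoids malloc, unlike backtrace_symbols(). */
    static void segv_handler(int sig)
    {
        void *frames[64];
        int n = backtrace(frames, 64);
        backtrace_symbols_fd(frames, n, STDERR_FILENO);
        _exit(128 + sig);
    }

    int main(void)
    {
        signal(SIGSEGV, segv_handler);

        volatile int *p = NULL;
        *p = 42;   /* deliberate crash to demonstrate the trace */
        return 0;
    }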


Not without a crash!

And you need a debug build, which is slower!


The official logo should be Dill from Rugrats with 3 bugs crawling around him in a circle.


I wonder if this idea could be implemented in Rust with destructors.


You can do it for Rust threads - but it gets tricky with Rust coroutines (async functions).

I made an attempt at implementing it for async Rust in

https://github.com/tokio-rs/tokio/issues/1879

with a concrete implementation in

https://github.com/tokio-rs/tokio/pull/2579

It mostly follows the Kotlin model of structured concurrency. However, it has its set of drawbacks due to Rust Futures being immediately cancellable, which leads to problems if a parent task simply gets dropped.

To handle that case better, I've written a proposal for async functions that run to completion: https://rust-lang.zulipchat.com/#narrow/stream/187312-wg-asy... . However, that is all theoretical and might never see the light of day.


Drop is synchronous, and it's unclear what async Drop would look like afaik, so not today.


The drop implementation could push to a worker queue running in the background.


If you don't wait for a response you're not really getting the same benefit.


Have any languages implemented this idea?


See Ada's tasks for something approaching this concept, though it's not strict. That is, it's still possible to have "unstructured" concurrency in Ada. But tasks and instances of task types have lexical scope like any other variable, by default. I started exploring it a while ago but my interest kind of petered out so I never put anything together to really present on it.

https://learn.adacore.com/courses/intro-to-ada/chapters/task...

You can read about it there, and the code segments can be edited so you can play around with different variations.


Swift will have it as of version 5.5



