I'm a little disappointed in the name because latch already has a specific meaning for computer engineers. It is one of the first memory circuits we study and forms the basis for registers.
Not meaning to disregard your point, but the class doing the same thing as `std::latch` in the JDK is called `CountDownLatch`[1] and is used in a similar way:
CountDownLatch workDone = new CountDownLatch(6);
workDone.countDown(); // mark an event
workDone.await(); // wait until the counter reaches zero
Even if latches also refer to another concept, it's still positive to see some naming consistency across languages. What name would you have preferred?
A latch is a circuit that takes on a value (high or low, 1 or 0) when some gating signal arrives (like a clock pulse) and then holds that value after the input is removed.
It is named after a door latch; a door latches when you close it and then holds that state: you can't pull it open again without using the handle/knob.
If something called a "latch" does not change its state once to reflect an input event, and then hold that state until explicitly reset, then it's misnamed; the metaphor is being abused.
The barrier metaphor is the right one for an object that is hit some predetermined number of times and then fires an event (like releasing some waiting thread(s)). That predetermined number of events is its "barrier potential": the threshold that must be met to break through the barrier.
I see what you mean, but since the topic is concurrency "barrier" is also a term used for a different concept in this domain: memory barriers. N=1 of course but if I was asked what a barrier was in the context of concurrency I would immediately think of memory barriers, not the system we're talking about here.
POSIX has had barriers since around 2000: pthread_barrier_wait and friends.
Memory barriers and POSIX-style barriers are semantically related: both mechanisms ensure that certain events are not reordered between adjacent phases; events stay on their side of the barrier.
Yes; the C++ multithreading support owes a lot (among many influences) to pthreads and one reason that low level memory barriers were called fences was to reserve the barrier name to a possible pthread-like construct.
Memory barriers also being called fences ("fence instruction") long precedes C++ multithreading.
(That conflicts with "fence register", a concept in non-virtualized memory management whereby each running process is confined to accessing a range of memory delimited by values held in fence registers.)
But in this case the concept already had a name in CS, and it was called a barrier (which is a pretty apt name too). For some reason they called it a latch, and then they called a cyclic barrier a barrier...
As far as I can tell, a (countdown) latch is different from a barrier.
In a barrier, signaling and waiting is a single atomic operation. Each thread (of a group of N) reaching a barrier will wait until all N threads have reached and waited on (and implicitly signaled) it.
A countdown latch is an event that will release one (or more) waiters only after it has been signaled N times. The signaling and waiting threads are not necessarily the same and are often distinct sets.
I guess you could build a barrier from a countdown latch, but I suspect that a trivial mapping of barrier::wait to latch::signal+wait is going to be racy, and you need an additional synchronization primitive. For example, pthread_barrier_wait requires an additional mutex (similar to condition variables).
edit: std::barrier (optionally) separates signal and wait, and uses an explicit arrival token instead of a mutex to tie the signaling to the waiting.
In practice a barrier and a countdown latch are used for different purposes (the former to coordinate multiple symmetric threads across phases of a distributed computation, the latter to wait for completion of N events).
> In a barrier, signaling and waiting is a single atomic operation. Each thread (of a group of N) reaching a barrier will wait until all N threads have reached and waited on (and implicitly signaled) it.
Yeah that's exactly what std::latch::arrive_and_wait does.
I don't think atomicity is a thing here though... arrive_and_wait() is just count_down() followed by wait().
> A countdown latch is an event that will release one (or more) waiters only after it has been signaled N times. The signaling and waiting threads are not necessarily the same and are often distinct sets.
This is a little bit like a mutex that has a try_lock. It's not strictly the vanilla Computer Science "mutex" with lock/unlock per se, but it's not a fundamentally different concept; it's just a handy yet pretty close generalization.
I guess if you really want to give this a new name then maybe it's not a terrible idea given it's a slight generalization of a barrier, but latch is certainly not going to be any more accurate (or less confusing) than barrier.
It's interesting how we've used all the various 'doing' words in English for different technical concepts, although with some overlap. Function, method, procedure, action, operation, routine, task, process.
That's like saying every integer has 2 states, zero and nonzero. I'm talking states in the computer science sense, not in the "what I prefer to look at" sense. These have 2^n states, whether you choose to care about them or not.
Just because Java calls it something that doesn't mean it is "typically" called that thing.
> Representing it as a "state machine" it would still have only 2 states. Open and closed.
No. There are lots of "open" states with different counter values. That you choose to call them all "open" and not observe the differences doesn't change this. It sounds like you've never studied state machines, so I recommend reading up on them; here's a starting point: https://en.wikipedia.org/wiki/State_(computer_science)#Finit...
> It's only been there since 2004 and is named the same in C#
No it's not... where did you get this? C# has CountdownEvent and Barrier. It's not called "latch".
Please avoid swipes in your comments like "It sounds like you've never studied state machines", "where did you get this?" and "There's no point continuing this argument so let's just leave it here". They invariably land with much more force on the other person than you intend, and this leads to much worse threads. It's easy to see how this post would turn into a provocation.
Ouch, please don't cross into personal attack, regardless of how annoying another comment is or you feel it is! We ban accounts that do that. Fortunately your comment history looks like a good one otherwise. If you wouldn't mind reviewing https://news.ycombinator.com/newsguidelines.html and sticking to the rules when posting here, we'd be grateful.
latches can be edge or level triggered. The important concept is that once they are triggered, they latch the input value, and maintain it after the input has disappeared.
The thing is, pretty much every object you work with in your programming language latches the most recently stored value in it. The machine is built on latching as a fundamental block.
A simple mutex latches open when you unlock it, and then latches locked when something locks it again.
Unfortunately, nested namespaces can have some surprising and unintended consequences. You might find https://abseil.io/tips/130 interesting. Given the author was until very recently the chair of the C++ LEWG, I imagine he's had some influence on what namespaces new types ended up in.
It sounds like "latch" is basically the same thing as an "event" (as in CreateEvent() in Windows, eventfd in Linux), but maybe optimized for single-process usage?
Edit: Ok so a (Windows) event is one bit, a C++ latch is a one-shot counter, and a C++ barrier is a cyclic counter. But I thought a barrier in general computer science terms is just a one-shot counter, which is what they're calling a latch in C++?
Does anyone have any idea why latches don't also have a count-up operation alongside count-down? At one of my previous jobs, I implemented a latch with a mutex, condition variable, and a counter, and I also supported count-up. It seemed to work OK, and count-up was an essential operation for the use case I needed it for.
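A sketch of that kind of latch, built from a mutex, condition variable, and counter as described (all names here are mine; this is a do-it-yourself variant, not anything in the standard):

```cpp
#include <condition_variable>
#include <mutex>

// Hypothetical latch supporting count_up as well as count_down.
// std::latch offers no count-up operation.
class UpDownLatch {
    std::mutex m_;
    std::condition_variable cv_;
    int count_;
public:
    explicit UpDownLatch(int n) : count_(n) {}
    void count_up(int n = 1) {
        std::lock_guard<std::mutex> lk(m_);
        count_ += n;                  // raising the count never wakes anyone
    }
    void count_down(int n = 1) {
        std::lock_guard<std::mutex> lk(m_);
        if ((count_ -= n) <= 0) cv_.notify_all();
    }
    void wait() {
        std::unique_lock<std::mutex> lk(m_);
        cv_.wait(lk, [this] { return count_ <= 0; });
    }
    int value() {                     // for inspection in examples/tests
        std::lock_guard<std::mutex> lk(m_);
        return count_;
    }
};
```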
What is the source of the complexity inside condition variables? Reading the glibc sources, the main problem seems to be ensuring that signals are delivered properly?
Yes, atomic unlock + wait is the complex bit. Prior to Vista, Windows did not have a CV primitive. Existing emulations used multiple kernel objects and their correctness was difficult to verify. Performance also suffered.
When implementing POSIX semantics there are a lot of nasty corner cases, especially dealing with fairness, graceful destruction, correct handling of wait generations, and probably more that I'm missing.
I would never attempt to implement a full condvar.
In most cases I think they're identical. WaitGroup has an ability that latch doesn't: WaitGroup can Add() and subtract (Add() with a negative value), whereas latch can only subtract (latch has to be initialized with its maximum value). The fact that latch is initialized with its value could be a little more convenient than WaitGroup in some situations (saving a line of code). latch has an ability that WaitGroup doesn't: .try_wait(). Also, latch's .arrive_and_wait() can save a line of code compared to WaitGroup.
Not quite. It is not about reducing the program line count, but rather about reducing cognitive load for writing correct code. If you have to specify the initial count at construction, then you cannot introduce an error where setting the initial count is forgotten.
Small typo above, but it seems like a different approach to a mutex. What's similar is basically a count of operations that's locked between threads/routines.
Worth noting Doug Lea's early work on adding concurrency to Java. That effort resulted in the java.util.concurrent package, but work started in the late nineties and it was available as a separate library. And of course it included a Latch class and the related CountDownLatch.
Before this stuff landed, Java's concurrency model was a lot less nice to deal with. Good high level abstractions are important. Nice to see the same kind of primitives are being added to C++.
The semantics of latch are a little simpler, which leaves room for a lighter weight implementation than is typically associated with condition variables. Barriers are closer, but also have slightly simpler semantics, again allowing for a potentially lighter weight implementation.
But the semantics are not sufficiently different as to allow for any notably new use cases/code flow, certainly not from the examples I've seen. Every one of them could have been written using condition variables at any point in the last 30 years and would look almost indistinguishable.
Perhaps differentiation is justified because program semantics can be captured to allow optimizations in runtimes. For example, condition variables to wait for events which can be delayed, vs. latches to wait for running threads to arrive at a point of execution. The runtime should handle each uniquely.
This is something I always read from people who produce a puddle of noodle soup that keeps crunching through corner cases instead of producing something structured and actually readable by human beings.
Yeah, Java 6/7 was verbose, but it's not representative of the language now. You might as well point out the problems of C++03. Streams, lambdas, method references, records, and local variable type inference make it pretty fluent to write, and at the very least it has a sane standard library (I can't say as much for Scala).
Also, as for their intro problem, what Java programmer worth their salt can't do: