
In summary, asynchronous programming without a GC is a pain in the a*. That's somewhat expected, isn't it?


It's a pain with GC as well, coming from a C# background. It's incredibly easy to write something that intermittently doesn't work in weird and impossible-to-debug ways.


<laughs in JS>

Async is the bread and butter of JS, and you rarely have any larger bits of sync code, so you learn to deal with this.

And then being able to just freely pass bits of data-accessing code (closures) around is such a wonderful feeling that you'll miss it everywhere else you go.

I recently had a little excursion into Python and had to invent a very nasty hack to concisely keep short bits of code accessible through some constants.


Frontend or backend js? Because in the frontend everything is just scheduled to a single thread, so you only deal with concurrency, not parallelism.


In node.js backends, you also deal with a single thread only; if you want multiple CPUs, you need node-cluster, which gives you, conceptually speaking, multiple shared-nothing single-threaded environments to load-balance requests across. Technically, libuv (the C library exposing async I/O to node) uses threads, but that is hidden away from you. Multithreading in JS can't work anyway, since JS doesn't have synchronization primitives, which is both a blessing (it drastically simplifies the design space for the language, i.e. no JVM-like happens-before constraints, atomic ops, and "synchronized" heisenbugs) and a curse (most backend/business code doesn't benefit at all from async and its terrible debugging story, and you need to fork out into workers/isolates for even slightly CPU-heavy things).
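(For contrast, here's the same shared-nothing, message-passing idea sketched in Rust with std::sync::mpsc channels. This is purely illustrative of the pattern, not of how node or libuv implement it:)

    use std::sync::mpsc;
    use std::thread;

    fn main() {
        // One channel per "worker"; the worker's state lives entirely inside
        // its own thread and is only ever touched via messages.
        let (tx, rx) = mpsc::channel::<String>();

        let worker = thread::spawn(move || {
            // Private to the worker, like state inside a node-cluster process.
            let mut log = Vec::new();
            for msg in rx {
                log.push(msg);
            }
            log.len()
        });

        for i in 0..3 {
            tx.send(format!("request {i}")).unwrap();
        }
        drop(tx); // closing the channel lets the worker's loop finish

        assert_eq!(worker.join().unwrap(), 3);
    }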


Forget parallelism, I even get race conditions in TypeScript frequently, because managing state is just hard. Changing state from multiple places became so hard that in one project I just used redux :p. And where redux was not helpful, I used the async-lock package. Maybe correct asynchronous programming is a harder, more arcane kind of knowledge. Not everybody is a wizard.
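(For what it's worth, the same "serialize state changes behind an async lock" pattern can be sketched in Rust, assuming the tokio crate's async Mutex; this illustrates the pattern, it's not a drop-in for async-lock:)

    use std::sync::Arc;
    use tokio::sync::Mutex;

    #[tokio::main]
    async fn main() {
        // Shared state behind an async lock: only one task mutates at a time,
        // even across await points.
        let state = Arc::new(Mutex::new(0u32));

        let mut tasks = Vec::new();
        for _ in 0..10 {
            let state = Arc::clone(&state);
            tasks.push(tokio::spawn(async move {
                let mut n = state.lock().await;
                *n += 1;
            }));
        }
        for t in tasks {
            t.await.unwrap();
        }
        assert_eq!(*state.lock().await, 10);
    }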


It's still concurrency in Node, since there are no threads in JS.

There's the `worker_threads` module (never used it though), which is like web workers in the browser, but those work by posting messages to each other.


I think this whole thread is about the woes of asynchronous programming (concurrency), not parallelism, which is another can of worms.


Depends on your scheduling runtime. In Rust you can schedule everything onto a single thread or across multiple threads. That can change the correctness of your asynchronous code, or at least make certain bugs non-deterministic.
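(A minimal sketch of the two modes, assuming tokio as the scheduling runtime:)

    use tokio::runtime::Builder;

    fn main() {
        // Single-threaded scheduler: tasks interleave but never run in
        // parallel, so you can step through interleavings deterministically.
        let single = Builder::new_current_thread().enable_all().build().unwrap();
        single.block_on(async { /* your async code */ });

        // Multi-threaded, work-stealing scheduler: the same tasks may now run
        // in parallel, and timing-dependent bugs become non-deterministic.
        let multi = Builder::new_multi_thread()
            .worker_threads(4)
            .enable_all()
            .build()
            .unwrap();
        multi.block_on(async { /* your async code */ });
    }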


"Modern" frontend JS is increasingly async. Yes, it's not true async, but it still has all the problems, but the benefit of not locking up the UI is worth the pain.

Even old frontend JS was async, in that events could be triggered by the user at any time and in any order, and XHR and image load requests were async as well.


Yeah, I know, but I think the GP is thinking of the nasty debugging sessions that come from parallelism, not concurrency. Concurrency issues you can step through with a debugger on a single thread.


Really, the only sync actions in the frontend I can think of are alert() calls and page navigations.


> intermittently doesn't work in weird and impossible-to-debug ways

This is my major reason for using Rust. It's far better to beat your head against a wall when you're writing than when you're debugging. In both C++ and C# it's possible to write subtly wrong code that is basically undebuggable. Often these are intermittent things that show up once every million or more runs. There's no amount of time that will satisfy you after you've run into this, even after you've fixed the problem. It's like having a stalker: you never know if they've stopped stalking you. You can lock your doors, you can buy an alarm system (e.g. the Valgrind suite), but you won't know whether you've really solved the issue.

Of course it's possible there will be other undebuggable issues with Rust; I'm going more by reputation and general impression in blogspace than by a thorough study. But I thought it was worth a try, and I've been positively impressed thus far.


One of the very particular "undebuggable" issues (safe) Rust solves is data races.

Experience tells us that humans can't successfully reason about non-trivial concurrent programs unless they exhibit Sequential Consistency. In Rust you're promised that this is what you get. Maybe what you wrote is stupid and wrong, but it has Sequential Consistency. "Oh," you exclaim during debugging, "A might have happened before B, and then we're in a pretty pickle" - you found the bug, now you just need to fix it. This promise is delivered by never allowing code to hold references to things some other code might change, thus eliminating data races.

In most other languages that offer concurrency this promise only applies if you wrote a program with no data races and the responsibility to ensure that is with you, so you can accidentally write programs that don't have Sequential Consistency and thus... "Wait, so, A happened before B and B happened before A? Huh? I don't even understand the bug".
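(A minimal sketch of what that promise forces on you in practice: the unsynchronized version of this, sharing a plain mutable counter across threads, simply wouldn't compile:)

    use std::sync::{Arc, Mutex};
    use std::thread;

    fn main() {
        // The Arc<Mutex<_>> is what the borrow checker pushes you towards;
        // whatever interleaving you observe stays sequentially consistent.
        let counter = Arc::new(Mutex::new(0));

        let handles: Vec<_> = (0..4)
            .map(|_| {
                let counter = Arc::clone(&counter);
                thread::spawn(move || {
                    *counter.lock().unwrap() += 1;
                })
            })
            .collect();

        for h in handles {
            h.join().unwrap();
        }
        assert_eq!(*counter.lock().unwrap(), 4);
    }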


Only for data races between threads in the same process; it does nothing to prevent data races via shared memory used as an IPC mechanism across processes.


How so? Obviously you can build an unsafe IPC mechanism with concurrent access, label it "safe" when it isn't, and then say "Look at this horrible mess, I blame Rust", but it seems like it'd be faster to just implement std::ops::Index unsafely and then blame Rust because thing[len+1] blew up even though Rust has "memory safety".

Now I'm going to write an amusing aside. One way you could get into this trouble is if your hardware allows arbitrary foreign memory writes as "IPC". The BBC microcomputer allowed this over the network! You could send a bunch of bytes over Econet (a 1980s network from Acorn Computers available for the BBC, Electron and Archimedes computers), addressed to another BBC micro on the network, asking they be written to a RAM buffer, and the remote hardware would do so. We used this to run a Multi-user Dungeon at school in about 1989 or so. A "server" ran the actual MUD software, and individual users signed in from a computer on the network around the school to play, when they typed a command their command buffer was transmitted over Econet, and then the server wrote remotely to their display RAM to show the result on their screen.

Fortunately your computer is not a BBC Microcomputer, and foreign processes do not (on the whole) get to scribble on your program's memory. So this should not be a problem unless you specifically make this unsafe decision in your Rust program.


Because, as explained in the Rustonomicon, there are many ways to access data in parallel, and the language only prevents a tiny subset of those cases.

However, almost every time data races and Rust come together in the same sentence, it is as if the language prevented all of those cases.


But (unless you've got an excerpt that says otherwise) the Rustonomicon is about unsafe Rust. And I was explaining that safe Rust has data race freedom.

The Rustonomicon is not warning you about scary hidden problems in safe Rust, it's warning you about scary problems you need to care about when writing unsafe Rust, so that your unsafe Rust has appropriate safety rails before anybody else touches it.

// Safety: Can't touch this while anybody else might write

This reminds me of the mutex thing. Look at C++ std::mutex. You could implement exactly that in Rust. But that's not what std::sync::Mutex is at all, because if you implemented it in Rust, it would be either useless or unsafe, and clearly we'd prefer neither.
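(A minimal sketch of the difference: std::sync::Mutex owns the data it guards, so there is no way to reach the data without holding the lock:)

    use std::sync::Mutex;

    fn main() {
        // The Vec lives inside the Mutex; the only access path is the guard
        // returned by lock().
        let m = Mutex::new(vec![1, 2, 3]);
        {
            let mut v = m.lock().unwrap();
            v.push(4);
        } // guard dropped here: the lock is released automatically (RAII)

        // With C++ std::mutex, the mutex and the data are unrelated objects,
        // and nothing stops code from touching the data without locking.
        assert_eq!(m.lock().unwrap().len(), 4);
    }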

But do you have some examples of Rust shared memory IPC that you believe are unsafe? It might be instructive to either show why they're actually safe after all or, alternatively, go add the unsafety explanations and work out what a safe wrapper would look like.


You, as the user of a crate deemed safe, one that underneath uses shmem, mmap, or a database without taking the proper care to prevent other processes (written in who knows what) from changing exactly the same underlying data segment, are in for a surprise and long debugging sessions.

The crate's public API surface is safe, after all, and unless the user has experience in distributed systems, the answer won't come right away.


But this hypothetical "written without taking the proper care" code is buggy. Like I said, it's just the same situation as a bad implementation of Index, only more convoluted.

Rust's standard library takes this very seriously. In many languages, if I try to sort() things which refuse to abide by common-sense rules like "having a consistent sort order", the algorithm used may blow up arbitrarily. But Rust's sort() is robust against that. You may create an infinite loop (legal in Rust, causes Undefined Behaviour in C++), and the result of sorting things without a meaningful ordering is unlikely to be helpful if it does finish, but it's guaranteed to be safe; you won't get Undefined Behaviour.
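(A sketch of what that robustness means, using a deliberately inconsistent comparator:)

    use std::cmp::Ordering;

    fn main() {
        let mut v: Vec<u32> = (0..100).collect();
        let mut flip = false;

        // A comparator with no consistent total order: it alternates answers.
        v.sort_by(|_, _| {
            flip = !flip;
            if flip { Ordering::Less } else { Ordering::Greater }
        });

        // The resulting order is meaningless, and newer standard libraries may
        // panic about a broken total order, but either way there is no
        // Undefined Behaviour.
        println!("{:?}", &v[..5]);
    }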


Rust's npm-like approach to crates and its minimal standard library make this a real problem, regardless of the quality of that standard library.

You would have a point if the standard library were batteries-included.

When a language sells safety it has to go all in.


cargo-geiger and similar tools allow you to audit the crates you depend on and flag any unsafe code they use.

Of course, just because some Rust is unsafe does not mean it's wrong; it just means you're relying on it being correct, as you have to with all code in unsafe languages.


Even those tools don't catch data corruption: perfectly safe Rust accessing a table row from multiple threads without the protection of a transaction block or a row lock.

Hence, making blanket statements like "Rust prevents data races", without the context in which that is actually 100% true, does the language's advocacy no favours.


Data corruption isn't a data race.

I think what you're imagining is just "What if people use Rust to write bad SQL queries?", which, again, is not a data race. Stupid perhaps, unlikely to give them the results they expected, but not a data race.


I am thinking that the scenario of data races from multiple threads accessing a global variable is sold too often, and everything else gets ignored.

While that is relevant progress versus what other systems languages are capable of, it still leaves too much off the table, and that part tends to be ignored when discussing data-consistency safety.

Stuff that usually requires formal methods or TLA+ approaches to guarantee everything goes as smoothly as possible.


The context here, right up at the top of the thread where perhaps you've forgotten it, is that (safe) Rust gets you Sequential Consistency, since it has Data Race Freedom.

This makes debugging easier, in the important sense that humans don't seem to be equipped to debug non-trivial programs at all unless they exhibit Sequential Consistency. It's easy enough to write a program for modern computers which doesn't have Sequential Consistency, but it hurts your head too much to debug it.

With your C++ hat on, this might seem like a distinction that doesn't make a difference: lack of Data Race Freedom in a C++ program results in Undefined Behaviour, but so do a buffer overrun, a null pointer dereference, signed overflow, and so many other trivial mistakes. So many that, as I understand it, an entire C++ sub-committee is trying to enumerate them. Thus, for a C++ programmer, any mistake can cause mysterious impossible-to-debug problems, so Data Race Freedom doesn't seem especially important.

Try your Java hat. In Java data races can happen but they don't cause Undefined Behaviour. Write a Java program with a data race. It's hard to reason about what it's doing! It can seem as though some variables take on inexplicable values, or program control flow isn't what you wrote. If you introduce such a race into a complex system you should see that it would be impractical to debug it. Most likely you'd just add mitigations and go home. This is loss of Sequential Consistency in its tamest form, and this is what safe Rust promises to avert.


If you are very zealous and use lots of C++17/20 magic, you can prevent lots of runtime bugs in C++ too, by doing basically the same stuff you do in Rust (RAII, move semantics, concepts, etc.). Sadly, that's not doable in C#.


Move in particular is painful in C++, because to make it work C++ has to invent these "hollowed out" objects that are safe to deallocate once there's no longer a real value inside them. Rust doesn't need to do that.

Suppose you've got a local String variable A. You will move A into a data structure which is going to live much longer, and then your local function exits.

In C++ when the function exits, A will get destroyed, so when A is moved into the data structure, A needs to be hollowed out so that whatever is left (a String object with no heap storage) can be safely destroyed on function exit.

In Rust the compiler knows you moved A, therefore there is nothing to destroy at the end of the function, no work needed (at runtime).
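(A minimal sketch of that compile-time bookkeeping:)

    fn main() {
        let mut store: Vec<String> = Vec::new();

        let a = String::from("hello");
        store.push(a); // `a` is moved into the Vec; no hollowed-out shell remains

        // println!("{}", a); // error[E0382]: borrow of moved value: `a`
    } // nothing to destroy for `a` here; only `store` drops its contents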


Hmm, C# is my main language, and I don't think I've had an issue like you describe since back when async/await was new and I was still learning about it. And nowadays, Roslyn analyzers, like those in VS and Rider, will warn you about many problems.


I've been doing C# for over 9 years and never ran into the async/await issues that everyone seems to encounter. I've used it for everything from libraries and CLIs to WPF, WinForms, and web servers with ASP.


Yep, same. Except for when I had w3wp crashing 50 times a day with half an async stack somewhere inside the .NET Framework, several continuations after I did something stupid. Sleep well, .NET developers :)



