Show HN: Async UI: A Rust UI Library Where Everything is a Future (wishawa.github.io)
173 points by wishawa on Oct 5, 2022 | hide | past | favorite | 92 comments


(Context: I write async rust professionally)

Fly you fools. This will be a nightmare to debug, introspect and reason about for a speed boost that you (and your users) won’t be able to measure.

If you want to build a native app, more power to you. There are simpler languages that will let you do that with much higher productivity.

Kudos to the library writer though!


The point of async APIs is not speed boost, it's decoupling processing from the local call stack (which happens to hang up the GUI until the routine resolves, but also forces components to be tightly coupled and monolithic).

IMO every end-user-facing interface should be based on async calls, which provide better composability and force the developer to think through the relations between all possible interactions, not just the current call. Too many GUIs do weird things when the user clicks several controls in quick succession without letting the previous one finish. The program should have a model for how to resolve such anomalous inputs, instead of leaving those interactions as undefined behavior or handling them as error-prone edge cases. Having an async framework isn't enough for that, but it at least forces the developer to think about out-of-order interactions between commands.

If that makes reasoning about programs complex, it's because current debugging and introspection tools are inadequate for async-heavy flows; but that's a reason to improve the tools, not to drop the flows. Better declarative languages and inspection tools would ease development in that style.


> The point of async APIs is not speed boost, it's decoupling processing from the local call stack

You can do the same with goroutines/green threads/virtual threads, without putting the burden of differentiation between sync and async functions on the programmer.

The only argument for async/await syntax I've ever seen is either "it allows traditionally sync languages to use async" (compatibility) or "it gives the compiler more information so it can make stuff faster" (speed boost).


And why not just "classical" OS threads? Yeah, that's rhetorical, sure you can (or did you mean that with virtual ones?). And actually, once upon a time, one of Rust's selling points was that you could do that safely, without the common data races that you have to care about in the usual multithreading/multiprocessing environments. Awesome!

So to your grandposter..:

> The point of async APIs is not speed boost, it's decoupling processing from the local call stack (which happens to hang up the GUI until the routine resolves, but also forces components to be tightly coupled and monolithic).

NO, just no! Async and similar approaches were motivated by massively parallel concurrency (the classic example is connection handling for a web server), to get better performance than the overhead you'd have with OS thread primitives (and even there, nowadays, that motivation is not always true anymore..)

In no way is the point of "async style" decoupling.. we can do that at a lot of levels with a lot of primitives.. and it is especially unneeded for UIs, where you can decouple the UI from CPU processing from everything else with usually (it depends, sure) two to three permanent threads.

On top of that, async style is also horrible for our mental models.. the clearest code has simple control flow, and classical threads (no matter if green or OS) shine there because they stick to that model much more than async does.

Async style was and still is mostly for performance, definitely not for decoupling, and also not for a nicer programming model..

But yeah, motivation and sense nowadays sadly often get lost to hype :(


> And why no just "classical" OS threads?

Because of memory footprint and thread contention.

An OS thread's default stack size is often on the order of megabytes. On a server with 64GB of RAM, that means you can't run more than ~64,000 threads at once. That's not really a high number in the context of modern highly-concurrent servers.

Meanwhile, a goroutine's default stack size (and probably that of green threads and virtual threads in languages other than Go) is on the order of kilobytes, allowing you to run millions of them concurrently.

Thread contention wastes CPU cycles on kernel-level context switches and may lead to hard-to-debug issues such as thread starvation. You generally have no control over how the OS scheduler manages OS threads, so without sophisticated thread synchronization mechanisms, you're relying on blind luck.

Userspace threads are usually scheduled by the language runtime itself, which gives it a higher level of control. For example, the Go runtime schedules goroutines in a round-robin fashion, guaranteeing that all goroutines make some kind of progress in a reasonable amount of time.

EDIT: Since this post is about UI, yeah, "classical" OS threads are a pretty good choice, since you usually only need a single OS thread to handle all the UI events while the rest of the system does the processing. So both the "stack size" and "contention" arguments are not really relevant in that scenario.


> On a server with 64GB of ram, that means you can't run more than ~64000 threads at once. That's not really a high number in the context of modern highly-concurrent servers.

Obviously the OS does not allocate megabytes of actual physical RAM to thread stacks; it's just address space. See this:

https://unix.stackexchange.com/questions/127602/default-stac...


I wouldn't call it quite "obvious" (it certainly didn't cross my mind), but thanks for the information. Quite interesting.


> you can't run more than ~64000 threads

Please, we started here with a GUI framework and how someone said async is not about performance - in the end you underline my point? I said it was motivated by massively concurrent use cases that require a high number of threads... (and that similarly motivates green threads et al, full agree).


You asked a question, I answered it.

Moreover, the last paragraph of my post actually agrees with you.

There is absolutely no need for a confrontational attitude.


>NO, just no! Async or similar approaches were motivated by super parallel concurrency (classic example is connection handling for a webserver) to have better performance vs the overhead you'd have with os thread primitives (and even there, nowadays, that is just a motivation that is not always true anymore..)

No - that's completely wrong.

Event loops existed prior to them being popularised for IO scaling - they were used in GUI for way longer.

Async is just a way to transpose continuation based programming and the callback hell involved in dealing with event loops.

Writing UI code without async, even in multithreaded code, is a PITA because UI frameworks expect UI state to be updated on the UI thread: you need to do work on thread X, then schedule a callback on the UI thread and update UI state there. With async you just fire off a task, await with a scheduler on the UI thread, and you have linear code flow.


> Async style was and is still mostly for performance,

Scalability, not performance.


For a webserver, scalability (as in ability to handle a large number of concurrent request) is a performance metric. Speed of handling each one of those requests is another performance metric.


I second this, as personally I believe that async is an anti-pattern. It's an unfortunate result of the programming world choosing easy over simple so it could recruit vast armies of inexperienced programmers for profit:

"Simple Made Easy" by Rich Hickey: https://www.youtube.com/watch?v=LKtk3HCgTa8

I'm old school, I want the runtime to do as much work for me as possible so I don't have to. Basically that looks like the runtime providing things like process isolation and concurrency, even if the underlying hardware can't do that. Especially if we're using a high-level scripting language like Javascript anyway. Rust I could maybe see at least providing access to async functionality, but I'd vote specifically against that footgun and go with lightweight threads and message passing (how Go does it) or scatter-gather arrays (there may be a better term for this), with the compiler detecting side effects and auto-parallelizing everything else, like loops. The simplest way to facilitate that is to use immutable data as much as possible, passed via copy-on-write (the Unix way).

The idea of async being scattered around operating systems and kernels and such is anathema to my psyche. Code smells setting off my spidey sense everywhere I look. To the point where if the world goes that route, I just don't think we'll have determinism anymore. That makes me want to get out of programming.

Note that I feel the same dismay about stuff like the DSP approach used by video cards, where the developer has to manually manage vertex buffers, rather than having the runtime provide a random-access interface. Not being able to make system calls from shaders is also tragic IMHO. We've lost so much conceptual correctness in the name of performance that it breaks my heart. The cost of that is the loss of alternatives like genetic algorithms, which could have provided a much simpler roadmap to get to the inflection point we're at with AI, 20+ years ago.

It all just makes me so tired that I feel like some guy yelling at clouds now.


Are we talking about Rust or async/await syntax in the abstract?

Async/await syntax is needed if you want to have a `with` block that manages resources across co-routine boundaries.

Consider Python's `async with` which will create a resource that is freed when the co-routine leaves the execution context.

This is distinct from Java's try-with-resources, which doesn't work with async code. So anytime you use `try (var span = TelemetrySpan.start()) { blah.read().andThen(x -> send(url)); }` it doesn't do what anyone would ever want. Hopefully Loom fixes it.

So the async and sync distinction is needed/useful if you have both.


> Async/await syntax is needed if you want to have a `with` block that managed resources across co-routine boundaries.

I don't see the connection between async/await syntax and managing resources. The `with` block is just another mechanism for managing resources - Go has the `defer` statement, which runs the deferred function at the end of the currently executing function, providing the equivalent functionality without the need for async/await syntax. `with` blocks could easily be implemented in Go, but Go doesn't like duplicating functionality in the language.

> Consider Python's `async with`

Python is a traditionally synchronous language, so the async/await syntax in Python is necessary if you want to use the async runtime, while still having compatible syntax with the traditional sync runtime.

> So the async and sync distinction is needed/useful if you have both.

Every sync call can be trivially modeled as an async call that is always awaited. If you want to bolt an async runtime onto a sync runtime, you need async/await syntax. Other than that, I don't see the value it brings to a language at all.


Ok, I thought you were saying callback hell is cool. But I think you're saying async-first is the way it should be and the runtime should handle it. Which is a good idea in most cases. I think Go is pretty awesome for this approach.


That's an advantage of async vs manual CPS. Green threads/coroutines allow stack based resource handling without the syntactic complexity of async.


> You can do the same with goroutines/green threads/virtual threads

goroutines capture a lot more state than an async continuation/future. The same argument you made below against OS threads applies here too.


What state does a userspace thread have to capture in order to work that a coroutine doesn't?


A userspace thread captures a full stack context; a delimited continuation used in async programming captures only the referenced variables needed for the remaining computation. This is a strict subset of the userspace thread state.

For instance:

   fn f() { var v1 = ..., var v2 = ...; g(v2); }
   fn g(v2) { await; /* do something with v2 */ }  // await is a context switch
A userspace thread captures v1 and v2; an async computation typically only captures v2. Compound this across all the variables on the stack up to the await point, and the difference can be substantial.


I don't understand your example. What is resumed after the await in f?

Generally stackful continuations can capture more than stackless ones, but they do not have to. If the context switch in f resumes into g, then only v2 needs to be captured.

If it resumes in an (indirect) caller of f then v1 will have to be captured if still live, but then again, this is not expressible at all with stackless coroutines, without explicitly or implicitly suspending all immediate callers (which would end up capturing v1 anyway).

That is, a stackful continuation equivalent to a stackless one only needs to capture the same amount of state.

Also, I don't think it's correct to describe async continuations as delimited in opposition to stackful ones: you can have stackful delimited continuations.


> That is, a stackful continuation equivalent to a stackless one only need to capture the same amount of state.

Of course if they're equivalent, then they're equivalent. That's simply not the case with goroutines vs. async functions in existing system where the program is written in a sort of continuation-passing style and so the captured state is more explicit.

Of course you could also perform some sophisticated transform of a goroutine program into this form and, with a suitable static analysis, also shrink the captured state in this fashion. However, the fact is that no existing system works like this, so what I wrote previously is an accurate description of the tradeoffs at this time.


So, you didn't explain what your snippet is supposed to do but, assuming that await returns control to f (as that's the only thing that can happen with stackless coroutines), the equivalent in Go would be to spawn a goroutine when calling g (pseudocode, as I know approximately zero Go):

   fn f() { var v1 = ..., var v2 = ...; go g(v2); }
   fn g(v2) { /* do something with v2 */ }  
There is no await, as awaits are implicit in Go. The coroutine spawned in f will only need to capture v2. I could make similar examples in Cilk++, Lua, or really any language with stackful coroutines.

Of course if you do not spawn a coroutine in f and it is instead part of another coroutine, v1 might be captured (unless the compiler identifies it as dead and reuses the stack slot, say, for v2). But to express the same with stackless coroutines you need to make f also a coroutine which will end up capturing v1 if live across the call.

Am I missing something?


> So, you didn't give the explanation of what your snippet is supposed to do but, assuming that await returns control to f

No, await is a context switch of some kind. In a stackful implementation the stack is switched to another thread at that point (say for an I/O wait), in an async implementation, the point after await is resumed with the live variables needed for the remainder of the program because it will be passed an explicit continuation.

> Of course if you do not spawn a coroutine in f and it is instead part of another coroutine, v1 might be captured (unless the compiler identifies it as dead and reuses the stack slot, say, for v2). But to express the same with stackless coroutines you need to make f also a coroutine which will end up capturing v1 if live across the call.

Yes, the idea is that v1 is not live, and existing stackful implementations will capture it regardless, where an async written program written in CPS form will not capture it. As I initially said the state captured by the latter is a strict subset of the former.


So, don't you agree that the stackful equivalent of your stackless example will use something like 'go g(v2)', which will only capture v2?

Without explicitly forking, v1 will be captured as well, but then it is a completely different program with different semantics, and it doesn't make sense to say that it captures more.

Edit to be more practical: in c++ you can have both stackless and stackful coroutines. If you write the same program, say using asio, with either feature, the same data will be captured.


> So, don't you agree that the stackful equivalent of your stackless example will use something like 'go g(v2)', which will only capture v2?

It will only capture v2, and it will also reserve a larger stack space in case the new computation needs it, where the stackless equivalent does not require this.

> Without explicitly forking v2 will be captured, but then it is a completely different program with different semantics and it doesn't make sense to say that it captures more

It doesn't have different semantics just because v1 changes one program's space behaviour and not the other's.

I'm not interested in Turing-tarpit arguments that one can be made equivalent to the other. As I've already said, the point is what sort of program architectures existing systems encourage and what allocation behaviour naturally follows. It's long been evident that stackless abstractions must capture strictly less state at any given time.


If you are not interested in discussing it further so be it. But go did have segmented stacks that didn't require reserving additional space.


Go's goroutine stacks start at 2kB, and used to start larger. That is "reserving additional space".

Stackless space allocation is on the order of single or double digit bytes by contrast. There is no reasonable way to conclude they are comparable.


What is the fundamental difference between coroutines and userspace threads that makes such optimizations possible for coroutines and impossible for userspace threads?


> The point of async APIs is not speed boost, ...

I think the parent comment meant using Rust rather than a garbage collected language like C# or even Java for a GUI. Not just using async within Rust.


Java GUIs are horribly slow. Maybe it's not inherent but I've never encountered one that wasn't. C# ones are sometimes alright but only if they're using the native Windows frameworks.


Like IntelliJ?


Yes, exactly


Judging by this comment, I sincerely question what "professionally" means. You miss a lot of benefits and focus on the one thing that has very little to do with why people might want this.


Why would people want to build a UI in Rust except for speed? Memory safety doesn't seem super important in the UI context.

There is a reason most people still refer to Rust as a "systems" language.


There’s plenty of domain specific desktop software that costs $5k-100k per seat per year that is horribly buggy to the point of crashing several times during the course of a professional’s average day. Stuff like Altium, Solidworks, Xpedition, COMSOL, and all the big name FPGA suites crashed all the damn time when I used them for nontrivial projects.

All of the above packages are so complex that it’s not possible to control downstream code that gets invoked by user action. Rust’s importance here isn’t in the memory safety but in the features that enable Rust’s guarantees, which can also be used to create and enforce complex logic and rules using the same type level concepts.


> Why would people want to build a UI in Rust except for speed?

Lots of reasons.

* The developer is familiar with Rust.

* The rest of the project is written in Rust.

* The primary function of the app is something for which a library exists in Rust.

Sure, Rust is known for running fast and for memory safety. But that doesn't mean that it will never be used except for those two reasons.


Yup. I write all my UIs in Rust. I write, well, everything in Rust. Why? Because I prefer it, and there is no task that I feel is fundamentally hindered by Rust itself. Shortcomings in libraries can definitely hinder, but that gets better with each year.


> Why would people want to build a UI in Rust except for speed?

For speed and its modern language design (despite the poorer ecosystem). People already program GUIs in C++ for this very reason. So what is this rant about exactly? The usage of async? It's a valid way to do it, not the only one, but valid.


+100

It's time for us to take the institution back from the lunatics.


I was going to post similar. I respect all builders - but why. Similar to the Postgres WASM in a browser post from the other day. Why....


Hi HN! I've been working on Async UI for half a year now, and I think it is time I share it with everyone. The project is not even really usable yet (due to lack of implemented components and documentation), but I think the proof-of-concept is enough to demonstrate this new way of doing UI — a way that, I believe, is more suitable to Rust than existing UI framework designs.

P.S. My website is new, so if you find any readability or accessibility issue, please let me know!


I don't use Rust (yet) but was still curious how this looks-n-feels. I get a distinct React JSX vibe from it. I get that this is a very flexible way of building but at the same time I prefer having the component templates 'at the top' as in Vue rather than control flow. Even Vue is built from using builder control flow underneath so it's a different authoring environment vs different implementation. Another one that's interesting is Elm.

Best of luck and I'll be checking out where this ends up going.


I really like what I see so far, especially the out of the box focus on web as well as desktop. That's absolutely killer.

One thing to not neglect is accessibility. Even if it's not fully baked yet giving it some thought in the API and implementing for web would be a big plus. Accessibility is the hill most non-mainstream UI toolkits die on. They usually leave it for later and then find that it's hard because they didn't think about it.

Being retained mode puts you ahead in the accessibility game right away.


I must admit, I was a bit skeptical when reading the title, but after reading I'm super interested.

The login form example which returns values reminds me a lot of imgui and other immediate mode GUI frameworks.


We’re almost back to if (IDOK == MessageBoxA(…)) and other modal forms; just need to wait a little more for this UI insanity to end and life can be good again.


Seems a weird mix of async and callbacks spaghetti.

I'd expect an async UI to be in "immediate UI" style, e.g.:

  async fn give_em_cookies(win: Win) {
    win.title("Cookies!");
    win.columnwise();
    win.label("Which cookie do you like?");
    let cookie = win.selector(&["Shortbread", "Chocolate chip"]).await;
    win.label("When do you want to eat the cookie?");
    win.rowwise();
    if win.button("Now!").await   { eat_cookie_now(cookie); }
    if win.button("Later!").await { eat_cookie_later(cookie); }
    if win.closed().await { no_cookie_eaten(); }
  }

  // the function represents the state machine that encodes
  // the behaviour of the dialog box, which the caller gives to the UI
  // engine for rendering
  ui.display(give_em_cookies(Window::new())).await;


This is wild to me... how does that login_form function work? If awaiting renders, then it can also return values?


You're right that components should intuitively pend forever. But with Async UI, the line between a component and a normal async function is blurry; our login_form eventually returns, so it is a normal async function, but it also renders something, so it is kind of a component...

Concretely the login_form function works by racing a render (a true, never-completing component) with a "listener" future that completes when the user submits their login. Once the listener completes, the race is over and the render future gets dropped. We can then return from login_form.


This pretty interesting - does it take inspiration from anything? I personally haven't seen anything like this.

It would also be great to see example code for that - it's the most significant part where I was looking for more information but couldn't find it.

Edit: A bit unrelated, but the tokio vs async-std split continues!

Edit 2: How does error handling work? Is check_login a component or regular request?


The async control flow is not directly inspired by anything. It is a cool side effect of using async for everything that I myself only discovered once I started writing examples.

Async UI as a whole is inspired by the simple fact that UI is an effect system[1], and async is also an effect system.

[1]: https://en.wikipedia.org/wiki/Effect_system

Re async split: Diversity promotes innovation :)

Re error handling: There's no real support yet. For now when I hit an error I just render nothing. I might add support for components returning Result<_, _> in the future.


Crank.js uses async and races for control flow. It’s pretty interesting


The full example code is here: https://github.com/wishawa/async_ui/blob/main/examples/gtk-l....

(except for that invalid_login_popup is not a real popup because I haven't implemented popups yet)

I'll add a link to it in the blog post.


The programming model reminds me of Rob Eisenberg's older attempts at building UI toolkits ([0]). I don't recall if that was fully async or 'just' using a coroutine/generator-style approach, but it feels similar.

I'm not sure the complexity of doing everything using async constructs is worth it, though. Large-scale UIs built in Qt or Javascript are mostly single threaded anyway, but it's still worthwhile to explore, so kudos for that. Looking forward to seeing how far you get.

[0] https://github.com/Caliburn-Micro/Caliburn.Micro


Today in "HN never used a reactive framework that is not React and is offended when UI is not written with Dear ImGui".

This is literally the exact style of SwiftUI and Jetpack Compose (down to the author having used the term fragment, I sure hope this isn't leftover trauma from being an Android developer), except written in Rust (hence having to deal with lifetimes in the middle, default parameters, lambdas being quite verbose and needing to move things, etc).

Not blocking the UI thread is mandatory if you ever want to make any kind of complex UI. If you're a web dev, well you only have one thread anyways, good luck, if you're on any other platform, interactions _cannot_ ever block the UI (unless you, yourself, update the UI to say it is blocked). Making this async is a good thing.

Stack traces are a problem, but then again they've been a problem in any remotely capable UI toolkit.

With ReactiveCell, it looks surprisingly similar to what Compose does, where modifying a State<T> causes recomposition of everything observing it. Which means that it might be powerful enough one day to do the same things as Molecule (https://github.com/cashapp/molecule), or ComposePPT (https://github.com/fgiris/composePPT), where everything is a potential target and it interops really well with existing toolkits.


Hopefully I can find some time in my busy schedule to try this.

One thing I like is that the code appears to flow more like a console app than a GUI app. I've always found it's easy to create a quick-and-dirty console app; but if I want to do a quick-and-dirty UI (windows) app, it's much more time consuming.

This is because, with console IO, you can write your UI in a very linear manner. With UI (windows), it's much harder to write the code in a linear manner.


This is an interesting idea, but if you look at bigger examples, such as the todo-list example, the code is littered with `async` noise, calls to `borrow()`, and other fancy stuff like reducers. (A todo list is easily expressed in SwiftUI without much ceremony) Seems like scaling up to actual apps would be a mess.

https://github.com/wishawa/async_ui/blob/main/examples/web-t...


That example is indeed pretty noisy, but partly due to my premature optimization. The operations of adding, editing, and toggling Todo items are all O(log N)[1]. Things could be simpler if I’d just take the O(N).

[1] O(log N) for “our” code. I don’t know what the browser’s rendering/layouting engine is doing.

In general Async UI will still be noisier than SwiftUI or React, but I hope only in the way that Rust is more explicit/verbose than other languages.


Hmm, surely a SwiftUI todo list could be implemented using O(log n) operations with ease. Anyway I think that example might scare people off, so definitely try to simplify it if you can :)


sigh

how did we all survive before async UI... ¬‿¬


By building, say, cooperatively scheduled systems of active objects (actors) accepting arbitrary messages Smalltalk style, like Windows 1. Or preemptively scheduled systems of typed ones, like Symbian. It’s not like the classic approaches are that much simpler, is my point.


By lots of 'static and Rc.


I wonder if one can do this with coroutines in a more ergonomic way. Lua might be a good candidate language to create such a UI framework…


Very cool!

I don't understand the motivation about lifetimes in sync Rust not being able to be arbitrary. I'm also confused because most of the time when you want to send data around in an async context you wrap it in `Arc`, which has the pretty much analogous `Rc` in a sync context, which would also solve the lifetimes issue. Is there something I am missing?


In C++, this will compile:

    #include <iostream>

    class T {
        public:
            T(int & m): member{m} {}
            int & member;
    };

    T MakeT(int m) {
        return T(m); // binds the reference to the parameter, which dies here
    }

    int main() {
        const auto t = MakeT(10);
        std::cout << t.member << std::endl; // UB <- member has actually been destroyed
    }

Rust will correctly observe that the lifetime of m in `MakeT` is shorter than that of the returned T object and will refuse to compile.


I always dreamt of a GUI library where every widget would be a separate actor sending and receiving messages.


This is basically how Objective-C worked, being based on Smalltalk. Especially with NSNotificationCenter, you also get the async aspect (so-called “NSNotificationCenter spaghetti code”).

Personally I have mixed feelings on it. ObjC/AppKit was clearly a step up from classic object UI toolkits built in rigid languages like C++, but I find React and its immediate-mode brethren far more enjoyable to work with precisely because there is no amorphous graph of actors sending messages to each other.


I see. Isn't it (non-blocking message-exchanging objects) the most natural way to reason about though? Surely classic types and sequential function execution is more convenient for data processing/scripting but a GUI seems a naturally object-oriented thing and it feels better when the objects act independently.


I'm really curious about the UI performance. A very good way to get lower latency in my usually callback-driven code is to move to synchronous things as far as possible - every callback has a cost, and these costs really add up, especially if you want as fast a startup as possible for your app. And every step in a coroutine in an event-driven system is pretty close conceptually to a callback, isn't it?


You might be interested in RUI https://github.com/audulus/rui


RUI is really cool, but it is solving a different problem. It is immediate-mode, and it is focused on building a new toolkit. Async UI is more about re-exposing existing retained-mode toolkits in nice-to-use, Rust-friendly APIs.

The cool thing is this: with some effort, it is probably possible to adapt the two libraries together to have a UI with RUI's rendering system and Async UI's APIs.


In a similar vein, what do you think of Druid? I’ve enjoyed using it in the past.


So, does this work with an application that uses tokio?


This uses the executor used by async-std rather than the one used by Tokio. So if you're using something Tokio-specific, you would probably need async-compat[1].

But can you share more on what you need Tokio for? I'd completely understand if we were talking about servers. But when it comes to clients - UI applications - I feel like async-std and smol are pretty competitive.

[1]: https://docs.rs/async-compat/


>But can you share more on what you need Tokio for?

Because everything. uses. Tokio. It's become the de facto standard of async Rust, whether people like it or not.

This issue has been notable enough to cause some projects to outright consider dropping async-std support - and I think they only relented because there are apparently enough (private, not-open-source) users who still rely on it.

(https://github.com/launchbadge/sqlx/issues/1669)

(Note that they also say async-compat isn't exactly an ideal solution and that they'd be looking into writing their own.)


`reqwest` comes to mind: it's my go-to client-side library, and I think it's Tokio-only?

But I think the distinction you made about "client-side Rust" not being as obviously-tokio-dominated as backend-side does make me stop and think.

Btw, this looks interesting regardless! Great job! Can I ask how your Rust learning journey went? I assume one needs an under-the-hood understanding of async Rust to build a library like this, and I'd love to hear about your learning journey getting to that understanding.


Your scientists were so preoccupied with whether or not they could, they didn't stop to think if they should.


Looks a bit like concur JS


I must admit I've never seen Concur before! The idea of using generators/async is pretty similar. The difference, as I understand it, is that in Concur you yield the widget and let the framework mount it, while in Async UI you await the widget yourself.


Isn't it better for the framework to mount it, etc.? Seems like things might get out of control in large apps otherwise.


This is very Compose (Android) and Swift UI (iOS) like. I love it!


It's 2.56 MB gzip!


The Todo example is 2.5 MB non-gzipped. About 600KB gzipped.

Still large, but it's because we're shipping a lot of code that we don't need. Mostly the APIs exposed by web_sys. Proper dead code elimination would bring it down a lot.


Right. I've been interested in wasm lately and binary size is always my concern. There's almost no knowledge out there about compiling an AST to wasm directly - only through Rust, C++, and some other langs.

If we really want a lot of non-JavaScript langs to succeed, I think we need to focus on raw wasm; .wat is pretty good for teaching. But there's a new thing coming up called the "Component Model", related to interface types, which may help reduce bloated binaries.


Isn't this basically retained mode vs immediate mode GUIs?


has a SwiftUI resemblance, nice :)


Only in small examples. This doesn't look that much like SwiftUI to me: https://github.com/wishawa/async_ui/blob/main/examples/web-t...


Could you compare it with Sycamore with a few bullet points?

I feel like it is pretty close even though you don't see the async exposed?


Sycamore works pretty similarly to React. See https://www.reddit.com/r/rust/comments/xvv49w/comment/ir6pw0... for how Async UI is different from React-style frameworks.


I don't see how this works out. React uses a virtual DOM.

Sycamore:

> Write code that feels natural. Everything is built on reactive primitives without a cumbersome virtual DOM.

Yew works more similar to React


You're right. I got the two frameworks confused.

Async UI is similar to Sycamore in terms of not diffing a VDOM.

The API is different in that in Sycamore, you tell the framework what to render (by calling sycamore::render) and the framework handles it from there. In Async UI, you await what you want to render yourself. Async UI's API is more transparent in this way, and this brings benefits including making async control flow possible (like the control flow example in the blog post) and making a component simply an async function, or anything that implements IntoFuture<Output = ()>.

Sycamore's reactivity is pretty painless (a little too magical for my taste, but that's probably just me), so it's something Async UI can learn from.


I love you wisha <3



