Most UIs I've written in my career benefit from being modeled as FSMs. While I see the value proposition of the atom-based approaches, especially for basic applications, I can't help but be a bit hesitant about their long-term scalability on large teams. Part of the reason the very dogmatic Redux approach of event dispatching + a single source of truth caught on so quickly was that a lot of us had felt the pain of two-way data binding / global state with event listeners in Angular 1. I distinctly remember the horror of debugging digest loops (scope.$$phase memories) and being completely lost in the unstructured data flow. What made for a great demo became a nightmare at scale.
There's nothing stopping people from using these atom-based libraries to build more robust abstractions, but from my professional experience I tend to just see global getters and setters, with useEffect / the lifecycle method of your framework's choice acting as the side-effect sync.
Maybe my instincts are off here though and I am overly cautious. I love XState, but the learning curve is rather massive, and getting buy-in from other team members is hard when the DX of the atom approach is so nice.
I feel like state "management" and reactivity performance are talked about a lot, when ultimately state orchestration is where I see things fall over the most.
I haven’t ever seen issues scaling with jotai even on very large apps working with extremely complex data structures.
I’ve seen far larger messes created in redux due to tight coupling of things across large apps to a global store and the inability to work with things like Maps and Sets or other values that are not trivially JSON serializable.
In the other direction, I have seen messes with observable-based state management systems where things become far more complex and too abstracted (how often do you really care about anything other than the latest value and maybe the previous one?), or with proxy-based systems that have too much fragile magic (valtio, mobx) or require wrapping your components and going all in on OOP (mobx).
To me, signals hit the sweet spot of being reactive without being overkill, and they keep code testable in smaller units while retaining the performance benefits of surgical updates.
I like xstate in theory — it's a good way to think about complex state transitions — but in at least half of the cases in practice where you aren't interested in a derived value, someone is storing a value / getting the latest value or toggling a boolean, and it's just such overkill for that. The reducer pattern itself doesn't meaningfully show up much for similar reasons. The other common cases are fetching states (idle, loading, refetching, success, error), but you often end up wanting things like cache management or maybe even optimistic updates, so eventually tanstack query covers that ground better than rolling your own.
As someone who created those horrifying RxJS monstrosities, I agree with most of this, which is why I caveated my concerns. Tanstack Query simplified a lot of these issues—separating server state and caching from client side state was a huge boost. I’m mostly coming from the perspective of someone who just inherited a Jotai code base with 200+ atoms and trying to wrap my head around it. Any library poorly used can lead to a mess though, so not going to claim it’s a problem inherent to all atom based approaches.
Re: atoms + learning curve, this is why XState Store exists, which supports both XState-style state updates and atoms: https://stately.ai/docs/xstate-store
I also like the approach of Svelte, and realize that sometimes a full state machine isn't needed. I don't think that atoms should be used everywhere, since updating state freely without guardrails just leads to disaster long-term.
We aren’t good at creating software systems from reliable and knowable components. A bit skeptical that the future of software is making a Rube Goldberg machine of black box inter-LLM communication.
I could see software having a future as a Rube Goldberg machine of black box AIs, if hardware is cheap enough and the AIs are good enough. There was a scifi novel (maybe "A Fire Upon the Deep"?) where there was no need to write software because AI could cobble any needed solution together by using existing software and gluing it together. Throwing cycles at deepening layers was also something that Paul Graham talked about in the hundred year language (https://paulgraham.com/hundred.html).
Now, whether hardware is cheap enough or AI is smart enough is an entirely different question...
As someone who makes HW for a living, please do make more Rube Goldberg machines of black box LLMs. At least for a few more years until my kids are out of college. :)
Here's a practical example in this vein, but much simpler: if you're trying to answer a question with an LLM and have it answer in JSON format within the same prompt, for many models the accuracy is worse than just having it answer in plaintext. The reason is that you're now having to place a bet that the distribution of JSON strings it's seen before meshes nicely with the distribution of answers to that question.
So one remedy is to have it just answer in plaintext, and then use a second, more specialized model that's specifically trained to turn plaintext into json. Whether this chain of models works better than just having one model all depends on the distribution match penalties accrued along the chain in between.
I wrap the plaintext in quotes, and perhaps add a period, so that the model knows when to start and when to stop. You can add logit biases for the syntax and pass the period as a stop marker to the ChatGPT APIs.
Also, you don't need to use a model to build JSON from plaintext answers lol, just use a programming language.
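To make that concrete, here's a minimal sketch of wrapping a model's plaintext answer into JSON in ordinary code, with no second model involved. The helper name and delimiter handling are my own illustration, not anything from a real SDK:

```typescript
// Hypothetical helper: take the model's delimited plaintext answer and
// produce JSON yourself, instead of asking a second model to emit it.
interface Answer {
  question: string;
  answer: string;
}

function toJson(question: string, plaintext: string): string {
  // Strip the surrounding quotes and trailing period used as delimiters,
  // if they are present.
  const cleaned = plaintext
    .trim()
    .replace(/^"|"\.?$/g, "")
    .replace(/\.$/, "");
  const payload: Answer = { question, answer: cleaned };
  return JSON.stringify(payload);
}

console.log(toJson("What is the capital of France?", '"Paris".'));
// → {"question":"What is the capital of France?","answer":"Paris"}
```

The point being: the distribution-matching penalty only applies to steps a model performs; deterministic string-to-JSON plumbing is free.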
I think going all-in on Effect in its current state is not something I'd do. However, there's a subset of its functionality that I'm currently replicating with a bunch of different libraries: ts-pattern, zod, some lightweight result / option wrapper like ts-belt, etc. Pretty much trying to pretend I'm writing ML / OCaml. Having those all in one package is quite convenient. Add in the much-needed story around retry / observability / error handling that TypeScript lacks, and I see why people lean into it.
Having experience with ZIO / FP in Scala, I'm a bit biased in seeing the value of Effect systems as a whole, but taking on the burden of explaining that mental model to team members and future maintainers is a big cost for most teams.
Coming from a ReasonML / OCaml codebase (frontend react), I'm seeing a lot to love with the pattern matching and sum types. Zod is already one of my favorites (coming from https://github.com/glennsl/bs-json).
Is "retry / observability / error handling" something that comes from Effect?
That's right, Effect lifts all types to a lazily-evaluated common type and provides combinators to work with that type, similar to RxJS with Observables and its operators.
Retrying[0], observability[1], and error handling[2] are first-class concerns and have built-in combinators to make dealing with those problems quite ergonomic. Lacking these features is a non-starter for any serious application, but unfortunately, the story around them in the TypeScript ecosystem is not great—at least as far as coherence goes. You often end up creating abstractions on top of unrelated libraries and trying to smash them together.
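For a sense of what "retry as a first-class concern" means, here's the kind of combinator you'd otherwise hand-roll on top of unrelated libraries. This is a plain-TypeScript sketch of the idea, not Effect's real API:

```typescript
// Hand-rolled retry with exponential backoff -- the sort of thing Effect
// ships as a built-in combinator rather than leaving to each codebase.
async function retry<T>(
  task: () => Promise<T>,
  attempts: number,
  baseDelayMs: number
): Promise<T> {
  let lastError: unknown;
  for (let i = 0; i < attempts; i++) {
    try {
      return await task();
    } catch (e) {
      lastError = e;
      // Exponential backoff: baseDelayMs, 2x, 4x, ...
      await new Promise((res) => setTimeout(res, baseDelayMs * 2 ** i));
    }
  }
  throw lastError;
}
```

In Effect itself this is roughly an Effect.retry with a Schedule (if I remember the API right), and the win is that backoff policy, error channel, and tracing all compose instead of living in ad-hoc helpers like the one above.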
I'm a big fan of ReasonML / OCaml, and I think the future of TypeScript will involve imitating many of its code patterns.
The fragmentation around runtime validation libraries is pretty crazy. The fact that half these comments mention some alternative library that mimics almost the exact API of Zod illustrates that.
It is filling a necessary gap in the gradual typing of TypeScript, and using validator schema types to power other APIs' generic inference is powerful. I am optimistic about an obvious leader emerging, or at least a better story about swapping between them more easily, but it's a bit annoying when trying to pick one to settle on for work that I have confidence in. That being said, Zod seems like the community favorite at the moment.
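The "schema types powering generic inference" trick is the thing all the Zod look-alikes share. Here's a toy, self-contained sketch of how a runtime schema can produce a static type; real libraries are far more complete, and none of these names are Zod's actual API:

```typescript
// Toy schema combinators: each schema both validates at runtime and
// carries its output type at compile time.
type Schema<T> = { parse: (input: unknown) => T };

const string = (): Schema<string> => ({
  parse: (input) => {
    if (typeof input !== "string") throw new Error("expected string");
    return input;
  },
});

const object = <S extends Record<string, Schema<unknown>>>(
  shape: S
): Schema<{ [K in keyof S]: S[K] extends Schema<infer T> ? T : never }> => ({
  parse: (input) => {
    if (typeof input !== "object" || input === null) throw new Error("expected object");
    const out: any = {};
    for (const key of Object.keys(shape)) {
      out[key] = shape[key].parse((input as any)[key]);
    }
    return out;
  },
});

const User = object({ name: string() });
// The static type { name: string } falls out of the runtime schema:
type User = typeof User extends Schema<infer T> ? T : never;

console.log(User.parse({ name: "Ada" })); // { name: 'Ada' }
```

Since every competing library is built around this same inference trick, their APIs converge, which is exactly why half of them look like near-clones of Zod.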
Yes, it's annoying. I share your optimism. This is how the JavaScript (and now TypeScript) community figures things out.
Note that TypeScript had competitors, too. It got better. Zod has an early lead and is good enough in a lot of ways, but I'm not sure it will be the one.
Perhaps someday there will be a bun/deno-like platform with TypeScript++ that has validation merged in, but it's probably good that it's not standardized yet.
I do consulting for a few restaurants, and despite my experience building full-stack web applications, I find myself reaching for Excel for most of my deliverables. These are "applications" that "non-technical" restaurant operators need to be comfortable in. Having a sheet where they paste in some data and get their needed output has required the least amount of continued maintenance and training. They can drag the file around in Dropbox / Google Drive and that works for them.
I still try to "engineer" to the best of my ability—separating raw input from derived data from configuration, data normalization, etc. With Lambda functions in Excel now, I kinda just pretend I'm writing Lisp in an FRP editor / runtime environment. The ETL tools with PowerQuery are quite good for the scale that these restaurants operate at.
Hard for me to turn off my brain in my full-time job when I am tasked with poorly recreating a feature that Excel nailed years ago.
It’s a shame that Access died, because that’s what you would have built this in 25 years ago, and it would have been better for your client in every respect.
Do you have any favorite resources (can be books, courses, blog posts) that teach with this approach? I have been diving more into 3D, and the shortcomings in my mathematical background are starting to show.
Good question. I learned it the hard way long ago. But it was still with 3D space in mind, although not with computers—just hand-drawn pictures.
I saw peers in neighboring programs with supposedly simpler linear algebra courses struggle with Gaussian elimination and thought it was utterly boring. My course was probably also boring, but at least the 3D space made it come to life. It gave a meaning to translations and matrix multiplications.
The times are so much better now, should be so much easier to learn these things with all the resources available. (Although I realize that is an easy thing to say.)
Maybe these two submissions can help you dive further into the topic:
Seems like this guy gets a lot of praise, but I have not downloaded the book (I should probably stop this rabbit-hole, eat breakfast, and leave you to it).
I have not used it in production yet, but it's been great for one-off scripts and side projects. Setting up a TypeScript Node environment with ts-node, ts-jest, ESM support, top-level await, etc. is more annoying than it should be. More recent Node releases have alleviated some of this pain, but it's still not as trivial as running bun init.
Ohh bun shell looks interesting. I was looking at zx[1] for some frontend pipeline scripting tasks that were just beyond maintainable bash, but maybe I'll give bun shell a go.
I like the idea, especially the TS-like syntax around enums and union types. I've always preferred the SDL for GraphQL vs writing OpenAPI for similar reasons. Most APIs I've run into in my career would benefit from modeling their API responses as ADTs versus the usual approach of overloading 4 different union members into a giant sparse object.
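By "modeling responses as ADTs" I mean something like the following TypeScript discriminated union, versus one object where every field is optional. The field names here are illustrative:

```typescript
// ADT-style: each response variant carries exactly the fields that make
// sense for it, and the compiler forces you to handle every case.
type FetchState<T> =
  | { status: "idle" }
  | { status: "loading" }
  | { status: "success"; data: T }
  | { status: "error"; message: string };

function label(state: FetchState<string[]>): string {
  // Narrowing on the discriminant gives safe access per-variant; a sparse
  // object ({ data?, error?, loading? }) would allow impossible combinations.
  switch (state.status) {
    case "idle":
      return "not started";
    case "loading":
      return "loading";
    case "success":
      return `got ${state.data.length} items`;
    case "error":
      return `failed: ${state.message}`;
  }
}
```

With the sparse-object approach, nothing stops `data` and `error` from both being set at once; the ADT makes that state unrepresentable.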
I echo the sentiment others have brought up, which is the trade-offs of a code-driven schema vs schema-driven code.
At work we use Pydantic and FastAPI to generate the OpenAPI contract, but there's some cruft and care needed around exposing those underlying Pydantic models through the API documentation. It's been easy to create schemas that have compatibility problems when run through other code generators. I know there are projects such as connexion[1] which attempt to invert this, but I don't have much experience with it. In the GraphQL space it seems that code-first approaches are becoming more favored, though there's a different level of complexity needed to create a "typesafe" GraphQL server (eg. model mismatches between root query resolvers and field resolvers).
ts-pattern has been a decent band-aid for the lack of native pattern matching, but obviously has downsides that could be avoided if it was built into the language.
Congratulations to the React team on the release! I've kept up somewhat with the development of these features since Dan first introduced some of the ideas at JSConf 3 years ago. It's interesting to see how the APIs have changed over time—I'm sure as a result of some tough lessons learned at Facebook.
As someone who has worked on large React projects spanning multiple teams, I can see a lot of the value proposition being delivered in this release. I can already think of many places where I'll want to slot in the transition API.
I'm curious if the SuspenseList API is making the cut here or if it's still on the roadmap? I played with it a while back and thought it was very cool, albeit slightly niche perhaps.
The only part that's a bit of a bummer is the recommendation on using suspense for data fetching. I'm already itching to get rid of lots of if (loading) {} code that plagues many of our components and makes orchestration and re-use of them a bit more painful than we'd like. I see lots of power in the idea of suspense as a way to orchestrate various generic async operations, but it feels like they don't want us to build solutions on this abstraction unless we buy into some opinionated framework. I can't really tell my team "let's use remix now lol".
All that being said, this is a tremendous step forward, and I'm looking forward to seeing what problems the React team tackles next.
>It's interesting to see how the APIs have changed over time—I'm sure as a result of some tough lessons learned at Facebook.
Oh yeah definitely. For history nerds, I've included a bunch of old (but relevant) PRs in the full changelog so that you can see the evolution. For example:
>Add useTransition and useDeferredValue to separate urgent updates from transitions. (#10426, #10715, #15593, #15272, #15578, #15769, #17058, #18796, #19121, #19703, #19719, #19724, #20672, #20976 by @acdlite, @lunaruan, @rickhanlonii, and @sebmarkbage)
>I'm curious if the SuspenseList API is making the cut here or if it's still on the roadmap? I played with it a while back and thought it was very cool, albeit slightly niche perhaps.
We've postponed it because there were some gaps we need to figure out. But it's likely coming in one of the minor releases.
>I see lots of power in the idea of suspense as a way to orchestrate various generic async operations, but it feels like they don't want us to build solutions on this abstraction unless we buy into some opinionated framework. I can't really tell my team "let's use remix now lol".
Hear, hear. The reason we suggest that is that implementing refetching in the data layer is currently too challenging. Relay implemented it, but it is pretty opinionated about other things so it's easier for Relay. Next.js doesn't currently support refetching for getServerSideProps anyway, so it wouldn't be a regression. But for a generic non-framework API, this feature is very important. We're working on a cache mechanism that might solve this, but it's work in progress and it's hard to provide guarantees that it'll ship in the same form as it is now. We just don't have all the answers yet.
Thanks for the links! Will certainly check them out, might even send me down memory lane a bit to when I was first reading about these. Would love a full-fledged recap at some point—it's really interesting getting insight into the internal mechanics of the problems my UI framework solves for me in my day-to-day. I always come away feeling more informed when you write something, but you've earned a long break after this release, so no pressure!
I hear you on the data fetching and always appreciate your team's cautious approach. It gives me a lot of confidence in the APIs you all end up landing on, and I appreciate the focus on backwards compatibility and incremental upgrading as opposed to shipping out the first iteration of a cool idea that comes to mind. I know just enough about React to probably shoot myself in the foot with this type of stuff, and add in the idea of concurrency and all that likely falls over.
I think the story here with Relay and GraphQL is really awesome. I hope the Relay team has an article in the future showing off some of the possibilities—I think it makes a really strong argument for itself in conjunction with these features, even taking into account the restrictions you mention. Showing how some of these features flesh out in a more complete framework would be more helpful than framing them in a vacuum. Or maybe I'll stop complaining and explore and write something myself!
If your team's using React Router, there's an upcoming release that aims to address the problem you describe (taken from Remix, as it's by the same guys): https://remix.run/blog/remixing-react-router
Thanks for the link! Hadn't seen that release, looks promising. I think this could definitely work for some of our newer apps that haven't fully bought into GraphQL yet. Not sure if the loader / action prop callbacks will be sufficient when you start getting into more complicated GraphQL use cases around caching. Haven't played with it though, so might be wrong! Good to see that these will already be in place for the community to leverage.