Show HN: I made React with a faster Virtual DOM
109 points by aidenyb on June 1, 2022 | 88 comments
Hi! I made a React compatibility library for a Virtual DOM library (https://github.com/aidenybai/million).

The idea is to have much faster rendering (a compiler optimizes virtual DOM beforehand) while ensuring the same developer experience React provides.

This is very, VERY early stage, so be prepared for weird bugs / plugin incompatibility / etc. If you have any suggestions, I'd be more than happy if you shared them in a comment!

You can spin up the demo here >> https://stackblitz.com/github/aidenybai/million-react-compat



Congrats!

Considering the pretty docs, I'm guessing you're trying to get people to use this. It would be useful to have the value proposition front-and-center (e.g. in your README).

If someone is in the React mainstream, they use React. If they like the devex of React but want something simpler/more streamlined, they use Preact. I'd appreciate a "This is why you might choose this library instead of Preact. This is why this is a new library instead of a patch to Preact. This is an honest assessment of when React/Preact is a better choice than Million." section.

There are often tradeoffs in library development. Being honest about the ones you chose helps other developers trust you.


Biggest thing for me is knowing that this library won't get abandoned like the ones that the intro section refers to. Unless the value prop is really, really good, I would stick to the bigger options for that reason alone.


I made some adjustments to the README [1], thanks again for the helpful feedback.

[1] https://github.com/aidenybai/million#readme


Even better, for something similar to React, but much simpler and more streamlined, I turned to Svelte. As a backend dev, it was the first frontend framework that I actually stuck with.

In retrospect, I think what really confused me was that React components return HTML, which is a weird semantic I couldn't intuit even after a few attempts. Svelte lets you declare everything separately, while being more succinct and readable (IMO at least).


Probably not too relevant/helpful anymore, but the problem might be that React components don't return HTML: they're functions that return a collection of JS objects that describe what the DOM ("HTML") should look like, and React compares that to what the DOM actually looks like and applies the relevant changes. These functions get called every time something changes. Having that mental model is vital for React "clicking".
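To make that concrete (a simplified sketch, not React's exact internal shape):

  // JSX like <Greeting name="Ada" /> compiles to a call returning a plain object:
  const element = React.createElement(Greeting, { name: 'Ada' });
  // roughly { type: Greeting, props: { name: 'Ada' } } -- React diffs trees of
  // these descriptions and applies only the needed DOM changes.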


Agreed. Svelte was the first front end framework that ever made sense to me. I was able to do things in it that I never dreamed of doing in React/Vue.


The combination of Svelte’s DX and performance is truly unmatched and unlocked a lot of doors for me as a developer too.


It looks very good. I hope there will be Next.js support and a nice material library for this.


I appreciate the feedback. Will be adding to the readme!


Is React rendering performance really a pain point worth solving? I've never thought to myself "man I wish React would render faster"... It doesn't seem like this is whatsoever a bottleneck for our application (~1k unique components).

I feel like the main pain point with React is that routing, bundling, SSR, state management, etc. have to be painfully stapled together, and this is what frameworks like Next.js solve for.


It's not usually a problem, but if you're making an app that shows really large amounts of data, it can become a bottleneck (I've run into this personally). It also has less to do with the number of unique components and more to do with the number of components on the screen at once, and the number that have to be (or end up) re-rendered on each update.


Well, let's look at it from a different angle; not the angle of the user or the developer, but as a matter of computational work and the energy consumed by that work. In a single application, or even over the life of an application, it might all be negligible. But when does it stop being negligible? What if React adopted the change and every React app in use suddenly used a fraction less computational power on each render?

And is it just a monetary cost?

Yes, if energy use is your primary concern there are easier ways to reduce it. But in the above context it seems clear that you should take your gains where you can get them.


Don't think it's a pain point, but better performance without feeling a difference (i.e. using quicksort instead of bubble sort) is just... always nice?


How often are you testing on low end devices like say 5 year old budget tablets or netbooks? Speed also correlates with energy use. Does using your site drain batteries faster than it needs to? I find these things often aren't given appropriate consideration, although speed improvements could easily be negligible for your app and its users.


Bloat and slowness are a real problem when it comes to GUI apps. Slowness means a less responsive UI, and longer load times are also perceived negatively. These things get in the way of solving the business problem.


Maybe with something like React-Three-Fiber, where frame times can be 11ms.


There's a little error in the demo, line 6:

  const [value, setValue] = useState(0);
should be:

  const [value, setValue] = useState(init);
or the parameter `init` could be removed from the component.


Thanks!


Hope you're looking at tools like RiotJS and SvelteJS as well. Riot specifically has a compile step (but you can run un-compiled while building)


Definitely going to check Riot out. I tried it back around its initial release; seems really cool that they're trending towards compiled!


Riot has its quirks, but it should really get more love than it does. I found it to be a breeze to work with, and they were doing things correctly even back when the pre-rewrite versions of Angular and React were still getting it very wrong.


OP, are you aware that the React team are testing their own compiler for React? Not to say your project is obsolete, just want to make you aware. The team talked about it at React Conf 2021 (see the YouTube videos); it's currently named React Forget.


I'm aware! AFAIK it's just an auto-memoizer for React currently
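i.e. it aims to automatically insert the memoization you'd otherwise write by hand, something like this (hypothetical example; `expensiveSort`, `items`, and `onSelect` are made up):

  // Hand-written today; Forget's stated goal is to generate this for you:
  const sorted = useMemo(() => expensiveSort(items), [items]);
  const onPick = useCallback((id) => onSelect(id), [onSelect]);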


Seems like https://millionjs.org/benchmarks is hanging (tested on Firefox on a 4GB RAM machine), and the page used 4.8 GB of memory.


On that note, the main page https://millionjs.org/ is weirdly janky/slow when scrolling. Doesn't make sense to me since there's barely anything on it.


Million's docs use https://github.com/shuding/nextra, which uses React + Next.js under the hood


Not sure why that's happening, looking into it


Is this similar to SolidJS?

https://www.solidjs.com/


I've been doing a small project in SolidJS recently and really enjoying it. It is a lot easier to reason about than React. My only complaint is that the router is kind of alpha and a moving target, so the examples are now out of date with the latest version. Also, like most open source projects, the documentation is lacking in a lot of ways. That said, if you know React and you're not doing anything too complicated, I highly recommend it.


Agree wrt. Solid being easier to reason about.

Is the router you are using solid-app-router [1]? I have been working with it for the last few months and it has been generally stable (my use cases are not particularly complex though).

The docs for the solidjs core have also massively improved recently.

[1] https://github.com/solidjs/solid-app-router


Not similar to SolidJS in internals (fine-grained rendering), but the same idea of compiling JSX and optimizing it beforehand


Hey, author here of this Show HN post. I appreciate all the comments on the post, but I would appreciate it if the content you post is on topic. I love the discussion about HN culture, but I don't think it's relevant to this post, and it's quite overwhelming to read so many comments. If you'd like, you can start your own discussion on HN about this rule (as long as it is within the HN guidelines).

Other than that, thank you so much for all the awesome feedback, it's really appreciated :)


What is the difference between this and Preact?



The fastest virtual DOM is no virtual DOM at all.

Rather than creating and diffing a fine-grained tree of elements every render, it's very easy to use the syntactic structure of a template to see exactly what parts can and cannot change. To get stable and minimal DOM updates you just compare the template identity to the previously rendered template - if they match you update the dynamic expressions, if they don't you clear and render the new template.
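Here's a toy sketch of the core trick (text-only slots, nothing like a production implementation): the `strings` array of a tagged template is === across calls to the same literal, which gives you template identity for free.

  const html = (strings, ...values) => ({ strings, values });

  function render({ strings, values }, container) {
    let state = container._state;
    if (!state || state.strings !== strings) {
      // New template: stamp the static HTML once, with a marker per slot.
      container.innerHTML = strings.join('<!--slot-->');
      const walker = document.createTreeWalker(container, NodeFilter.SHOW_COMMENT);
      const markers = [];
      while (walker.nextNode()) {
        if (walker.currentNode.data === 'slot') markers.push(walker.currentNode);
      }
      const parts = markers.map((m) => {
        const text = document.createTextNode('');
        m.replaceWith(text);
        return text;
      });
      state = container._state = { strings, parts, values: values.map(() => undefined) };
    }
    // Same template: touch only the dynamic expressions that changed.
    values.forEach((v, i) => {
      if (v !== state.values[i]) state.parts[i].data = String(v);
    });
    state.values = values;
  }

  // Re-rendering with the same literal updates a single text node:
  const view = (count) => html`<p>Count: ${count}</p>`;
  render(view(0), document.body);
  render(view(1), document.body);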

This is what we did with lit-html and it's quite a bit faster (and smaller) than React and doesn't require a compiler because it uses standard JS tagged template literals. https://lit.dev/docs/libraries/standalone-templates/

It's a very simple approach and very, very hard to beat in the fast/small/simple/buildless tradeoff space. I hope one day the DOM will include standard updatable templating with a technique like this, built on top of the template instantiation proposal from Apple. It's such a common need that it should be built in.


As a representative (?) of a popular library, it's not a great look to come in and put down someone's Show HN project. Especially when you're also dismissing the entire category of libraries as inferior, while the reality is that both approaches have different costs and benefits.


Is he putting it down?

He's arguing about speed and saying why the approach of lit-html will always be faster than virtual dom.

I think it's fair, since OP is trying to build a faster vdom, to also expect discussion about different approaches, and I'm glad the previous user gave his two cents.


Relevant post: https://status451.com/2016/01/06/splain-it-to-me/

There are two common social perspectives: information sharing vs emotional harmony. From one perspective, the other seems rude or insane. If your message is corrected or added to, this is a chance to be less wrong. But it is also a chance to be embarrassed and seen as less knowledgeable than you seemed.

Status seekers tend to assume the latter perspective, and therefore label such comments as rude, dismissive, conflict-seeking, etc., even if the OP had no such intention.

This is also 95% of what "help i'm being harassed online" comes down to.


Oh, this explains so much. I was raised with the mindset that one should always share information in order to make the world better; I didn't even think there were other reasons to, like, talk in public.


Since you're not aware: a very common reason to talk in public is to gain something from one's talking. The situation at hand looks a lot like that, and in particular gaining something at the expense of someone else in this case. If every Show HN ended up directing everyone to an alternative project, no one would bother making Show HNs. Hence it's questionable whether the top comment makes the world a better place, etc. That's also why someone not apparently related to the library at hand could make the same comment without it being negatively received: they don't appear to have anything to gain from it. I would instead encourage the poster to make their own Show HN the next day rather than hijacking an existing Show HN.


While I agree in general (there are better/faster approaches than virtual dom), it’s not very relevant to a Show HN about React-compatible performance improvements. Lit is not compatible with React, as far as I know. If there’s a claim that Lit’s approach can be compatible with React’s API, that would be a different matter.

To me this makes the comment come off as hijacking the thread to promote their own library, which really is quite rude. Not claiming this was spankalee’s intent, but you see this quite often on HN, people who keep mentioning their own product or library in barely tangentially related discussions.


It came off to me as "VDOMs as a whole are obsolete and a waste of time, including your project that you're excitedly showing us, so you should just give up on it". Which isn't just insensitive, it's not even really correct. There are much better ways this alternate approach could have been brought up for genuine discussion if that was their goal.


I don't know, it seems pretty common for people to be straight up on HN. I didn't get a sense of superiority or insult from the comment - just statements that the author believes to be true.


I know that HN believes it’s immune to this, but being right isn’t an excuse to forgo social expectations


Different social environments have different social expectations. HN has a social expectation that focusing heavily on technical considerations of how something can be done better will lead to pleasant shop-talking like OP's response here: https://news.ycombinator.com/item?id=31576634#31578949


> I know that HN believes it’s immune to this, but being right isn’t an excuse to forgo social expectations

Could you please offer some insight into why trying to pass off technically wrong claims in a technical forum should be immune to any informative comment clarifying or clearing up misconceptions?


> why trying to pass off technically wrong claims in a technical forum should be immune to any informative comment clarifying or clearing up misconceptions

Not really, because I didn't make that claim, so you'll have to ask someone who makes that claim.


> Not really, because I didn't make that claim, so you'll have to ask someone who makes that claim.

Well, you actually did. You claimed, and I quote, "being right isn’t an excuse to forgo social expectations"

I'm now asking you to explain the role your "social expectations" have on "being right", specifically in the case where someone in a technical forum makes technically wrong claims.

Are you able to shed some light onto this sort of belief?


> Well, you actually did. You claimed, and I quote, "being right isn’t an excuse to forgo social expectations"

That is a very different claim than “trying to pass off technically wrong claims in a technical forum should be immune to any informative comment clarifying or clearing up misconceptions”.

It is possible to be right without posting an “informative comment clarifying or clearing up” a “technically wrong claim”, and it is possible to post an “informative comment clarifying or clearing up” a “technically wrong claim” in a manner which does not disregard social expectations.

Being right neither requires nor excuses being a jerk.


The irony of being weirdly combative about the anodyne observation that "being right isn’t an excuse to forgo social expectations".


You can be right without the other person being wrong if you aren't addressing something they said.


> (...) to come in and put down someone's Show HN project.

There was no put-down. There was a very informative and insightful post explaining that a) unlike the original claim, the project does not use a virtual DOM, b) the technique used is indeed very performant and hard to beat, c) other projects also use it.

You need to go way out of your way to pretend to feel any sort of outrage over this.


The put-down to me was where it said: "it's very easy to...", but failed to explain it beyond a few words. I'm relatively clever, but I don't know exactly what they are talking about, and the lack of explanation paired with a statement of ease implies that we should just know their solution.

That changes the message from an informative comment about alternative approaches into something that could be read as a dismissive rejection.

Now, a link to an article explaining the alternative approach... or even just one or two more explanatory sentences... would not come off that way. And maybe it is easy, but it would be better to just put up the facts, not judgments. Post an explanation and let the reader decide whether or not they think it is easy.


> The put-down to me was where it said: "it's very easy to...", but failed to explain it beyond a few words.

What do you mean by "failed to explain it beyond a few words"?

OP stated in no uncertain terms that this approach was followed in lit-html, provided a link to a page from lit's site where this approach is thoroughly explained, and if you really want to look closely at real-world implementations you already have the link to lit-html.

How much more do you want to demand from someone in order to point out in a web forum that someone made a mistake?

Also, what stops anyone from posting a question asking for any clarification?

Or are we supposed to jump right onto the "I'm being persecuted" mode?


I find it comical that you make this comment when in your previous 3 comments you've done exactly what you're accusing OP of.


> It's a very simple approach and very, very hard to beat in the fast/small/simple/buildless tradeoff space.

Author of the ivi library here. Completely agree with the idea that such an approach could lead to better performance, but there is a huge difference between an idea and an actual implementation. Also, I just don't get why a lot of developers that work in this problem space still think that "virtual DOM" APIs and tagged template APIs are mutually exclusive; I actually have an experimental implementation that supports both APIs, and it is not so easy to beat an efficient full-diff vdom algo. Tagged template APIs are useful when we are working with mostly static HTML chunks, but when it comes to building a set of reusable components (not expensive web components), pretty much everything inside these components becomes dynamic and we are back to diffing everything.


Just to add on here, the Preact author made a generalized utility for tagged template => vnode conversions: https://github.com/developit/htm
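For example, with Preact (usage roughly as in htm's README):

  import { h, render } from 'preact';
  import htm from 'htm';

  // htm turns tagged templates into vnodes via any createElement-style `h`.
  const html = htm.bind(h);

  const App = (props) => html`<h1>Hello, ${props.name}!</h1>`;
  render(html`<${App} name="world" />`, document.body);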


I love the work done by the Lit team (I assume you're a contributor?). It's really fantastically designed, as you mentioned with bundle size/rendering speed/etc. I'm sure that Lit's implementation is very efficient and ranks high in benchmarks.

This isn't to say virtual DOM isn't fast. Experimental libraries that use virtual DOMs, like blockdom and ivi (see https://krausest.github.io/js-framework-benchmark/2022/table...), are very, very fast.

At the end of the day, the way libraries render UI is a set of tradeoffs. No one method is objectively better. While lit works great for a lot of web developers, so do virtual DOM based libraries.

Totally agree on native DOM diffing, I'll check out Apple's proposal :)


Virtual DOM is an unnecessary overhead, is what the parent is saying.

There is probably a good reason popular frameworks insist on using it though.


I would guess:

- historically, manipulating the DOM directly was slow (in WebKit?) so working on a virtual one made sense

- The idea of writing a compiler like Svelte, which does the heavy lifting at compile time, was not there, or was dismissed for some reason (the React developers might have decided that having a reactive model like Svelte, with the need to tweak JS's semantics a bit - where assigning to variables triggers stuff - was not great, or they didn't want this JS/HTML separation)

And then you are stuck with your model for compatibility reasons. React cannot get rid of its virtual DOM without breaking everyone.


The VDOM library will still need to manipulate the real DOM sooner or later, so the proposed performance boost of this abstraction is not relevant for browsers.

Since the VDOM runs fast in Node, you can execute your tests quickly in Jest. But since no real browsers run on Node, the value of this is perhaps questionable.

The browser can get a performance boost via server-side rendering, and again it comes in handy that the VDOM runs fast in Node. But perhaps this solves a problem that the VDOM has caused, because React loads slowly and renders the slowest [1].

You can run the VDOM on Android and on iOS via React Native, and this is all well, but the VDOM is holding the web back, because we have come to expect all this from it rather than from technologies such as Web Components that might load fast and render fast by virtue of not relying on it.

The virtual DOM is modelling the DOM, but the DOM comes with a model out of the box - it's called the Document Object Model - and it is always in sync with the view without any constant performance tweaks. Asynchronous rendering can fix it until batched rendering solves it for good. But these are opposite rendering strategies! We are simply going in circles now.

A lot of the myths around the virtual DOM can be explained by unwillingness to learn the native API and we are mostly dealing with the fallout now. It's the same with CSS. Sorry for the rant and the abuse of your comment.

[1] https://twitter.com/championswimmer/status/14865018568345845...


> The VDOM library will still need to manipulate the real DOM sooner or later, so the proposed performance boost of this abstraction is not relevant for browsers

VDOM is to DOM as Emacs buffer is to terminal display.

Updating the terminal display was historically slow, so there's an algorithm inside Emacs to take the buffer state, diff it against the previous state, and compile a list of terminal commands (escape sequences) that represent a minimal transition between the two states. Famously, it was marked with an ASCII art skull and crossbones in the comments of the source code.

The VDOM is there for the same reason: provide a fast way to minimize the cost of slow DOM transitions.


It sounds like they would repaint the entire terminal on every single change, but that is not what the alternative to VDOM is. They could have skipped straight to the list of terminal commands, without the overhead of diffing, for an even faster result. That's of course not as easy as it sounds, but it is more or less what lit-html attempts with the strategy outlined on https://dev.to/thisdotmedia/lit-html-rendering-implementatio.... This of course comes with a different overhead, so changing a `style` property directly is still an option that beats all these strategies if you are doing it sixty times per second. The problem with VDOM is not the technology, but the marketing that has made us believe that such an approach is problematic and slow.
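For example, for a per-frame animation, mutating the one thing that changes is hard to beat (minimal sketch, assuming some #box element exists):

  const box = document.getElementById('box');
  let x = 0;
  (function frame() {
    x = (x + 2) % 300;
    // No templates, no diffing: one property write per frame.
    box.style.transform = `translateX(${x}px)`;
    requestAnimationFrame(frame);
  })();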


The real DOM is always manipulated. Given a state change, the entire virtual DOM is generated, then diffed with the real DOM, and then the changed parts are put into the real DOM (= the real DOM is manipulated). What I wonder is whether the reason for virtual DOM is really just historic, is there anything else that has caused its persistence other than inertia?


> then diffed with the real DOM

Diffing with the real DOM is slow; the majority of vdom libraries aren't diffing against the real DOM. As an author of a "vdom" library, I don't like to think about the "reconciler" as a diffing algorithm, because that is a useless constraint; I like to think about it as some kind of VM that uses different heuristics to map state to different operations represented as a tree data structure.

> What I wonder is whether the reason for virtual DOM is really just historic, is there anything else that has caused its persistence other than inertia?

As a thought experiment, try to imagine how you would implement features such as:

- Declarative and simple API

- Stateful components with basic lifecycle like `onDispose()`

- Context API

- Components that can render multiple root DOM nodes or DOMless components

- Inside out rendering or at least inside out DOM mounting

- Conditional rendering/dynamic lists/fragments without marker DOM nodes

These are just some of the basics that you will need to consider when building a full-featured and performant web UI library. I think you are going to be surprised by how many libraries that make a lot of claims about their performance, or claim that "vdom is pure overhead", are actually really bad when it comes to dealing with complex use cases.

I am not saying that the "vdom" approach is the only efficient way to solve all these problems, or that every "vdom" library is performant (the majority of vdom libraries are also really bad with complex use cases), but it is not as simple as it looks :)
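To illustrate just the "dynamic lists" item from the list above, here is a deliberately naive keyed reconcile (real libraries do much smarter move minimization):

  // Reuse DOM nodes whose keys survive, create missing ones, drop the rest.
  // Walks right-to-left so each node can be inserted before the previously
  // placed one; always re-inserting is correct but unoptimized.
  function reconcileKeyed(parent, keys, createItem) {
    const byKey = new Map(Array.from(parent.children, (el) => [el.dataset.key, el]));
    let next = null;
    for (let i = keys.length - 1; i >= 0; i--) {
      const key = String(keys[i]);
      let el = byKey.get(key);
      if (el) byKey.delete(key);
      else {
        el = createItem(keys[i]);
        el.dataset.key = key;
      }
      parent.insertBefore(el, next);
      next = el;
    }
    byKey.forEach((el) => el.remove()); // keys that disappeared
  }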


Occasionally when touching on this topic here or elsewhere, or when searching for it on the web, I haven't found an elaborate explanation of why exactly the virtual DOM is used (as it does seem wasteful, at least to someone looking from the outside). But perhaps, as you point out, the only sure-fire way to feel the actual benefit would be to try to build one yourself.

So thanks for listing out some concrete things that may be easier to implement with a virtual DOM. And if there are any other good resources out there, then do share! :)


> And if there are any other good resources out there, then do share! :)

Unfortunately there aren't any good resources on this topic. Everyone is just focusing on diffing and unable to see the bigger picture. In the end, all feature-complete libraries implement diffing algorithms for dynamic children lists and attribute diffing for "spread attributes", so with these features we are already implementing almost everything needed to work with the DOM and create a vdom API; everything else is just slight optimizations to reduce diffing overhead. But working with the DOM is only part of the problem; it is also important how everything else is implemented. All these different features are going to be intertwined, and we can end up with a combinatorial explosion in complexity if we aren't careful enough. Svelte is a good example of a library that tried to optimize work with DOM nodes at the cost of everything else. As an experiment, I would recommend taking any library from this benchmark [1] that makes a lot of claims about its performance and making small modifications to the benchmark implementation: wrap DOM nodes in separate components, add conditional rendering, add more dynamic bindings, etc., and look at how the different features affect its performance. Also, I'd recommend running the tests in a browser with the uBlock and Grammarly extensions.

And again, it is possible to implement a library with a declarative API that avoids vDOM diffing, and it will be faster than any "vdom" library in every possible use case, but it shouldn't be done at the cost of everything else. Unfortunately, some authors of popular libraries are spreading a lot of misinformation about "vdom overhead" and are even unable to compete with the fastest ones.

1. https://github.com/krausest/js-framework-benchmark


> The real DOM is always manipulated. Given a state change the entire virtual DOM is generated, then diffed with the real DOM and then changed parts are put into the real DOM (= real DOM is manipulated)

That's true, but you need to read a lot from the (V)DOM when diffing, which was said to be slow with the real DOM. I don't know to what extent, and I've read it's not true anymore.

I don't think the diff'ing is done with the real DOM, but between two VDOMs. No?

Anyway, I personally find this approach heavy and like more how Svelte patches the DOM instead of computing a diff.


We rolled an in-house/intranet framework and went down a slightly different path. Our framework uses pre-built components that are served to the client in the form of JS commands over a websocket (similar to Blazor's server-side mode of operation).

Every component in the DOM has an ID and a hash. Client-side events that mutate state automatically modify this hash as well. The server keeps a dictionary of active components per client. At view-state sync time, components are first created and removed on an identity basis. Then all existing components have their hashes compared for equality. Depending on the type, various patch commands will be submitted to the client to realign the element to the expected state.
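Roughly, the sync pass looks like this (names simplified for illustration, not our actual code):

  // Per client: compare server-side components against the hashes the
  // client last reported, and ship minimal commands over the socket.
  function syncClient(socket, serverComponents, clientHashes) {
    const commands = [];
    for (const [id, component] of serverComponents) {
      if (!(id in clientHashes)) {
        commands.push({ op: 'create', id, html: component.render() });
      } else if (clientHashes[id] !== component.hash()) {
        commands.push({ op: 'patch', id, cmds: component.patchCommands() });
      }
    }
    for (const id of Object.keys(clientHashes)) {
      if (!serverComponents.has(id)) commands.push({ op: 'remove', id });
    }
    if (commands.length) socket.send(JSON.stringify(commands));
  }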

In most cases, each component involves multiple DOM elements. By scanning for the component root elements via attributes, we can avoid having to walk through the entire literal DOM each time. This may have profound consequences for table views and other enumerables.

I have zero clue if this is the fastest/best, but it's simple as hell and starts to look like butter as you polish each standardized component.

Only real downside is the latency constraint, but this is something we can pretty easily overcome for our users with some well-placed frontend VMs. Definitely wouldn't do something like this for Netflix scale, unless someone told me the economics of 1 websocket per client works now... On average, how many DAUs per VM does Netflix run these days?


This sounds like a websocket version of Astro (https://astro.build)! Very cool stuff.


[Haskell fan boy here]

I feel like there is at least one Haskell library out there that automates this "no virtual DOM" approach. In Haskell it is not uncommon to create data structures on the fly, only to deconstruct them immediately again. If done correctly, the compiler can remove the intermediate structure completely, leaving a (recursive) algorithm.

An example of this would be sorting via binary trees. If done correctly, the intermediate tree is never (completely) present in RAM.


Check out Surplus. This is exactly how it's designed, and as such it typically tops the performance charts right next to vanilla JS.


Your link shows an empty page with some "e is undefined" error in the console (Firefox).


Link to the Apple proposal?



I love seeing a bunch of people I know in a random HN post.

:wave:



I think this is what solid.js is doing, except for the template literals; they still use JSX.


The JSX is there primarily for better TypeScript support and a React-like API.

Internally it compiles down to template literals, so the same advantages apply.

The TypeScript support for lit is lagging, to say the least. It is surprising that there is still no good official support for type-checking of templates. There are a few community projects, but they are not as reliable, and the DOM/Web Component API doesn't make it easy to build fully type-safe APIs.

Solid, being able to lean on the JSX support in the TS compiler and not tying itself to the custom elements API, is able to offer a much better DX here.


No, solid.js builds a reactive graph at runtime, and in theory it should also be able to detect static inputs at runtime (not sure how much effort he put into reactive graph optimization techniques). Personally, nowadays I prefer the S.js/solid.js approach, but it has different tradeoffs, and it is essential to understand the difference between solid.js and React/Svelte/lit/etc :)
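The core mechanism is tiny (a toy sketch of runtime signals/effects, nothing like Solid's actual code):

  let currentEffect = null;

  function signal(value) {
    const subs = new Set();
    return {
      get() {
        // Track whoever is currently running as a subscriber.
        if (currentEffect) subs.add(currentEffect);
        return value;
      },
      set(v) {
        value = v;
        subs.forEach((fn) => fn()); // re-run only dependent computations
      },
    };
  }

  function effect(fn) {
    currentEffect = fn;
    fn();
    currentEffect = null;
  }

  // The effect re-runs when `count` changes -- no diffing anywhere:
  const count = signal(0);
  const el = document.createElement('p');
  effect(() => { el.textContent = `Count: ${count.get()}`; });
  count.set(1);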


I have yet to explore this area, I'll check out S.js!


It is an old idea - a reactive graph created at runtime with direct bindings (knockout.js, etc) - but as always, implementation is way more important than the abstract idea, and ~6 years ago Adam showed that it is actually possible to implement this idea quite efficiently (S.js+Surplus). Then Ryan started working on Solid.js and it became one of the most popular implementations of this idea.

There are a lot of things that I don't like in Solid.js's implementation; for example, it seems that he still doesn't care about performance in general and only focuses on getting a high score in js-framework-benchmark (optimizing the library for two cases: DOM template cloning and one/many-to-one reactive bindings). But I believe there is nothing inherently wrong with the idea, and there is a lot of room for improvement in implementations.

I guess the main tradeoff with such an idea is that it has a slightly higher learning curve than something like React with its top-down recompute/rerender approach (as long as we don't care about performance). But when we start to add reactive systems to React/Svelte/etc to improve performance, at that point it becomes more complex than just using a UI library specifically designed around a reactive system.

Right now I am experimenting with new algorithms and data structures for reactive systems, specifically designed for the UI problem space, to actually beat vdom implementations in microbenchmarks that were heavily biased towards vdom-like libraries (reimplementing top-down dataflow+diffing in a reactive system with derived computations; it is super useful when building something like https://lexical.dev/)

EDIT: Also, in such libraries it becomes quite hard to implement features like reparenting or DOM subtree recycling. But it seems that nobody cares about reparenting in web UI libraries (Flutter supports reparenting). DOM subtree recycling is quite useful in use cases with occlusion culling (virtual lists), but it should be optional, with different strategies to reclaim memory (not how it is done in the Imba library).


Solid is great, you can also use it with hyper dom expressions: https://github.com/ryansolid/dom-expressions


I hope developers with decent skills move on from the Hook-style API (I've seen at least 3 better-than-React libs still copy this). I'm 100% sure it's not going to last long; someone will come up with a better API very soon.


> I hope developers with decent skills move on from the Hook-style API (...)

What's your opinion on Hook-style APIs?


I have mixed feelings.

The Hook-style API allows us to write the same functionality with fewer keystrokes. But I found that I interpret how it works object-orientedly; it feels weird to write them as functions instead of classes.

If there are multiple appearances of the same component, they belong to different "instances". Components get updated by modifying their state, or in other words, their "instance variables".
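For example, these are equivalent, and each rendered <Counter /> still gets its own "instance" state either way (minimal sketch, assuming React in scope):

  class CounterClass extends React.Component {
    state = { n: 0 };
    render() {
      const inc = () => this.setState({ n: this.state.n + 1 });
      return <button onClick={inc}>{this.state.n}</button>;
    }
  }

  function CounterHooks() {
    const [n, setN] = React.useState(0);
    return <button onClick={() => setN(n + 1)}>{n}</button>;
  }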


The only major downside with hooks is that they are sort of non-intuitive imo. Writing UIs with hooks once you grok it properly is really nice and terse. What's your beef with them?


It destroys the pure view layer as a pure function, and that makes it less composable/reusable. You're not able to just call a view function by passing arguments.



