I've recently been reading the Pensees, so it feels timely that this article was posted. I'll add here a couple more practical pieces of advice that Pascal offers for dealing with what he considers the wretchedness of human experience:
"One must know oneself [reference to Socrates]. If this does not serve to discover truth, it at least serves to regulate one's life, and there is nothing better"
"Physical science will not console me for the ignorance of morality in the time of affliction. But the science of ethics will always console me for the ignorance of the physical sciences."
Having used Solid on a largish web product for over a year, I am thoroughly convinced and will not be returning to React in the future.
This is somewhat of an aside: I am aware that the creator of Solid has long been experimenting with adding laziness to the reactive system. I think it would be a mistake. The fact that everything is immediate keeps state changes intuitive and fairly easy to debug, and it's one of the strong points of Solid's design. I've never run into a real world scenario where delaying computations seemed like an optimal way of solving a given problem.
I'm curious which part of laziness you're concerned with. Is it delayed execution in events? It's just that most lazy things run almost immediately during creation anyway, and on update everything is scheduled anyway. The only lazy thing we are looking at is memos, which, while impactful, isn't that different since the creation code still runs.
I guess the push/pull propagation is harder to follow on update than a simple multi-queue, but in complex cases we'd have a bunch of queue recursion that wasn't simple either.
Wow, I never imagined you would respond to my comment, a little embarrassed ngl :D
I understand that under the hood Solid's reactive system is not quite simple; the mental model needed to use it, though, is very simple, which I greatly appreciate when building complex application logic on top of it. That's really my main concern: one-way "push" semantics are easy to follow and think about, and adding new mechanics complicates that picture. It seems deceptive that what presents itself, at least conceptually, as just a value access might now cause arbitrary code execution (another way of putting this is that it feels like it violates the principle of least astonishment).
As I mentioned before, I also haven't run into situations in practice where lazy memos seem like desirable behavior. If I initialize a derived value, it's because I plan to use it. If some computation needs to be "delayed", I put it behind a reactive condition, for instance createMemo(() => shouldCompute() ? computeValue() : undefined).
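Roughly, a minimal sketch of that pattern (computeValue here is a made-up stand-in for whatever expensive derivation you have):

    import { createSignal, createMemo } from "solid-js";

    // Hypothetical expensive derivation, standing in for real application logic.
    function computeValue(): number {
      return 42;
    }

    const [shouldCompute, setShouldCompute] = createSignal(false);

    // The memo re-evaluates eagerly whenever shouldCompute flips,
    // but the expensive work only happens while the condition is true.
    const derived = createMemo(() => (shouldCompute() ? computeValue() : undefined));

    // Later, when the value is actually needed:
    setShouldCompute(true);
    console.log(derived()); // 42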
All that said, you've done a fantastic job with Solid. I hope you continue to innovate and prove my objections misguided.
I'll admit that hearing that "laziness" was something being explored for Solid 2.0 also made me uneasy. Like the OP, I know it's because I'm worried it will complicate working with Solid state in the same way that React's "concurrent mode" complicates working with React state. Really, I just hate React's "concurrent mode", and I really like the simplicity of working with SolidJS state by comparison (if SolidJS has a similar concept to concurrent mode, I haven't run into it yet).
All of this is to say that my own worries aren't based on any concrete objections, but on a general fear of change, since I like the current approach (and haven't run into any performance issues with it). Also, without knowing exactly what you plan on changing, the idea of introducing "laziness" seems like it could be a euphemism for "async", which feels like it would definitely make things more complex. Anyway, reading your comment makes me hopeful that my vague unease is misplaced.
React has lived long enough to become the villain, and it's way too entrenched. It was certainly a very important step forwards in webdev, but it now probably has more gotchas than vanilla JS does.
I think it's just the lifecycle of craftsman tooling in general.
When everyone has experience with a tool, everyone can enumerate its downsides in unison, but we can't do that with new/alternative tools.
Whether we confuse that for the new tool having no drawbacks, or we're just tired of dealing with the trade-offs of the old tool, or we're just curious about different solutions, we get a drive to try out new things.
React always had gotchas, but the question is how tolerable are those gotchas compared to the gotchas of what you were doing before. And how tolerable are Solid's gotchas going to be once you discover them. Sometimes it's a better set of gotchas and sometimes it isn't.
It's also easy to confuse problems that arise from failing to adequately manage the gotchas with problems inherent in the tool itself. There's a subtle distinction there that's easy to miss, especially for those with a blame-the-system sort of attitude (which I don't entirely fault).
One company I worked for had a very slow frontend. It was common there to blame the slowness on React. "React is just kind of slow."
Another company I worked for had a much larger React-based frontend, and it was fast-loading and snappy by comparison.
The difference is that the second company had much better-established practices in its design system, its codebase, its enforced lint checks, and the general knowledge of its engineers. They avoided the performance traps that cause components to re-render over and over. (The first company's frontend would render a given screen 12+ times for some state changes.)
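For a (made-up, simplified) flavour of the kind of trap I mean: a memoized child still re-renders whenever a parent hands it freshly-created object or callback props, because the prop identities change on every parent render.

    import React, { useState, useMemo, useCallback } from "react";

    const Row = React.memo(function Row(props: { style: { color: string }; onPick: () => void }) {
      return <div style={props.style} onClick={props.onPick}>row</div>;
    });

    function List() {
      const [count, setCount] = useState(0);

      // Trap: inline objects/functions get a new identity on every render,
      // so React.memo sees "changed" props and Row re-renders each time:
      //   <Row style={{ color: "red" }} onPick={() => console.log("picked")} />

      // Stable identities let the memoized child skip needless re-renders.
      const style = useMemo(() => ({ color: "red" }), []);
      const onPick = useCallback(() => console.log("picked"), []);

      return (
        <div onClick={() => setCount(count + 1)}>
          {count}
          <Row style={style} onPick={onPick} />
        </div>
      );
    }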
> The first company's frontend would render a given screen 12+ times for some state changes
Might just be me, but it feels like a good framework/library would put in a lot of work to avoid or at least alert about these kinds of issues so they can’t stay very long without being fixed.
I’ve seen the same in React (even infinite render loops which couldn’t be ignored and had to be fixed) and to a lesser degree with the likes of Vue as well.
I’m not sure what’s missing but having a tool that is easy to use in the wrong way doesn’t inspire confidence.
Very balanced response, but in my case it's less about gotchas and more about APIs. I just think other frameworks have more intuitive APIs than React. Maybe this falls into the same category as a gotcha, but I feel it's a little different.
It's not that it's way too entrenched, it's just that people grew tired of constantly shifting frontend frameworks. Personally, I really didn't care who "won" that war; I just wanted a sane, well-supported default without having to learn a new pattern every year to store page state or update an icon.
> I've never run into a real world scenario where delaying computations seemed like an optimal way of solving a given problem.
And even when it might be, Solid has always exposed fairly low level reactive primitives for those who want more control. Hopefully if laziness is added, it's in the form of new optional primitives.
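For instance, untrack and batch already give you escape hatches from the default eager push behaviour when you want finer control (a small sketch):

    import { createSignal, createEffect, untrack, batch } from "solid-js";

    const [first, setFirst] = createSignal("Ada");
    const [last, setLast] = createSignal("Lovelace");

    createEffect(() => {
      // The read of `first` is tracked; the read of `last` is not,
      // so this effect only re-runs when `first` changes.
      console.log(first(), untrack(last));
    });

    // batch coalesces the two writes so dependents run once, not twice.
    batch(() => {
      setFirst("Grace");
      setLast("Hopper");
    });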
> Having used Solid on a largish web product for over a year
I am curious about your experience in this regard. I've been aware of Solid for quite a while (and plenty of other non-React alternatives that on paper seem "nicer") but the problem I usually quickly run into after exceeding the complexity of a contrived demo app is the ecosystem not yet having a number of components, library wrappers, integrations, etc. that I use all the time.
Have you found the Solid ecosystem sufficient for whatever your needs are, is it fundamentally easier with Solid to integrate things that don't exist yet, or did you go into it less spoiled/tainted than I am by reliance on React's ecosystem?
I've found the ecosystem to be perfectly serviceable for every complex piece of functionality I needed to bring in: remote state, forms, tables, and routing come to mind. Complex state management can easily be handled using the library's standard reactive primitives, and the community "solid primitives" project has a ton of well-made utilities so you don't have to reinvent the wheel for common use cases.
I'm not going to sugar coat it though, SolidJS is not necessarily a batteries-included ecosystem. There is a severe lack of components/component libraries. Luckily integrating vanilla (or "headless") JS libs is dead simple once you have enough experience.
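The usual shape of the wrapper is just a ref plus onMount/onCleanup; here's a rough sketch with a hypothetical FancyChart vanilla widget standing in for whatever you're integrating:

    import { onMount, onCleanup, createEffect } from "solid-js";
    // Hypothetical vanilla widget: new FancyChart(el, opts), chart.setData(), chart.destroy()
    import { FancyChart } from "fancy-chart";

    export function Chart(props: { data: number[] }) {
      let el!: HTMLDivElement;
      let chart: FancyChart | undefined;

      onMount(() => {
        // Instantiate the plain JS widget against the real DOM node.
        chart = new FancyChart(el, { data: props.data });
      });

      // Push reactive prop changes into the widget's imperative API.
      createEffect(() => chart?.setData(props.data));

      onCleanup(() => chart?.destroy());

      return <div ref={el} />;
    }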
Cool. I am a bit of a minimalist (one reason I've never felt the most comfortable with React compared to some other things) and equally not interested in the bloat and differing opinions/principles that come from using component libraries for everything (I almost never use them for UI), but, yeah, I also don't want to have to write my own high-performance basics. A few years ago when using Vue for something, I had to detour a lot into writing things like virtual list components because at the time they weren't available. (I see Solid has a few of those.) Not ideal for a tiny team, or as a solo dev trying to make a dent in something in my off hours.
> Luckily integrating vanilla (or "headless") JS libs is dead simple once you have enough experience.
Good to know. I expect to need to write my own wrappers for certain things that are more niche, but some frameworks definitely make it easier than others, and I do tire of wrapping the same 153 events and properties and such for some component yet again when [framework of the month] has an annoying way of doing this.
For what it's worth, ecosystem is a valid concern, especially compared to React, and especially if you're dealing with more niche features, or even something fairly generic like virtual list implementations. Virtual lists do exist for Solid (TanStack's is the most fully-featured, albeit under-documented), but they aren't as battle-hardened or as well-documented as their React counterparts. I have actually thought about digging into the virtual list internals because of a couple of issues I haven't seen in other frameworks' implementations (granted, of all the complex generic UI components, virtual lists are literally the one ecosystem issue I've had).
Also, you'll definitely start hitting the edges faster: things like touch-event libraries, animation libraries, maplibre/dataviz, etc. I'd say the Solid ecosystem is at the point where you'll find 1-2 libraries for most things, but if those libraries aren't quite right for you, you can very quickly find yourself digging deeper than you'd like.
That being said, as the parent stated, integrating vanilla libs isn't so hard. And there is also a... solid... number of people building headless libraries specifically for the ecosystem, which are quite useful (for example, I recently came across https://corvu.dev, which I've started integrating here and there). What I mean to say is less that Solid has a poor ecosystem and more that it doesn't have the infinite, easy-access, vibe-code-able, pop-in-a-lib-to-solve-your-problem ecosystem of React.
Even with the shallow ecosystem, SolidJS has been quite enjoyable for me. Also, in the two years I've been using it, I think I've built up enough of my own personal ecosystem that I don't really need to reach for external dependencies all that much, which feels very nice.
I've used Solid Primitives and while it's great, unfortunately it seems pretty dead. None of the primitives are at "4 Accepted/Shipped" and many aren't even at "3 Pre-shipping (final effort)".
Is laziness intended to offer primitives for suspended UIs?
I haven't used Solid for a while and can't recall if there's a Suspense counterpart already. If not, this seems like a reasonable feature to add. It's a familiar and pretty intuitive convention.
React is, I am convinced, the new Struts, only client-side.
Struts was SSR before we needed a term for SSR. It had low productivity, so it looked like it was doing well in the industry because there were so goddamned many jobs. But if you asked people, you really couldn't find many who loved it, and if they did, it was a sign that maybe their judgment wasn't that great, and you were going to be disappointed if you asked follow-up questions about what else they thought was a good idea.
It was killed by JSTL, which was one of the first templating systems to get away from <% %> syntax.
Because the state -> DOM mapping is non-trivial for my application, I ended up writing my own virtual DOM diffing code, a primary facet of React. I appreciate the ease of being able to circumvent this where it's not necessary and performance considerations dominate, though I admit I've not felt the need to do that anywhere yet.
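Not my actual code, but the core idea is roughly this kind of keyed reconciliation of children against their previous state (heavily simplified sketch; the real thing also diffs attributes, handles nesting, and so on):

    type VNode = { key: string; text: string };

    // Reconcile a flat, keyed list of children against its previous state:
    // reuse nodes whose key survives, create missing ones, drop the rest.
    function patchChildren(parent: HTMLElement, prev: VNode[], next: VNode[]) {
      const byKey = new Map<string, HTMLElement>();
      prev.forEach((v, i) => byKey.set(v.key, parent.children[i] as HTMLElement));

      next.forEach((v, i) => {
        let el = byKey.get(v.key);
        if (!el) {
          el = document.createElement("div");
        } else {
          byKey.delete(v.key);
        }
        if (el.textContent !== v.text) el.textContent = v.text;
        // Move/append only if the node isn't already in the right slot.
        if (parent.children[i] !== el) parent.insertBefore(el, parent.children[i] ?? null);
      });

      // Anything left over was removed from the new tree.
      byKey.forEach((el) => parent.removeChild(el));
    }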
The AI training data set for React is much larger. The models seem to do fine with SolidJS, though I suspect there is meaningful benefit to using React from this point of view.
Overall, I'm happy with where I'm at and I prefer the SolidJS way of thinking, though if I were to do it all over again, I'd probably just go with React for the above two reasons.
I was not expecting to lose karma for this comment!
I have a couple of years' familiarity with SolidJS and thought the two key insights that came to mind from building a large project in it would have positive value.
> it has raised the issue of how journalists verify the credentials of sources in the AI age
Performing background checks is not difficult; professional background-check services are fast and commonly used in hiring processes. It seems like this article is (deliberately?) missing the actual questions raised by this case: why are these various outlets/journalists so lacking in rigor when it comes to the accuracy of their content, and how is a fraudulent expert consistently being chosen for their articles?
There are simply so many economic incentives encouraging doctors not to act in a patient's best interest that, as the parent comment points out, it would be irresponsible not to be skeptical of their advice.
It is strategically beneficial, for trade agreements, diplomatic leverage, and international conflicts, for a nation to have a strong industrial base, in other words an economy that creates and exports actual physical goods. Tariffs level the playing field in our own consumer economy for American-made goods, because without them American manufacturing is competing with countries where labor is absurdly cheap.
Was going to comment the same thing. I try to avoid politics with co-workers and family because they are people you are obligated, on some level, to interact with and maintain some decent social cohesion with. Friendships are entirely voluntary, so I can't begin to understand choosing to spend time with people you can't honestly share your thoughts and feelings with, political or otherwise.
You make some really good criticisms of OOP language design. I take issue with the part about garbage collection, as I don't think your points apply well to tracing garbage collectors. In practice, the only way to get "memory leaks" is to keep strong references to objects that aren't being used, which is a form of logic bug that can happen in just about any language. Also, good API design can largely alleviate the handling of external resources with clear lifetimes (files, db connections, etc.), and almost any decent OOP language will have a concept of finalizers to ensure that such resources aren't leaked.
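In JavaScript terms, for example, FinalizationRegistry can serve as that last-resort backstop (sketch only; Connection is a made-up stand-in, and an explicit close() stays the primary path, since finalization timing is entirely up to the GC):

    // Hypothetical resource with an explicit close(); the registry is a backstop,
    // not a replacement, because finalization timing is up to the GC.
    class Connection {
      constructor(public handle: number) {}
      close() { /* release the underlying handle */ }
    }

    const registry = new FinalizationRegistry<number>((handle) => {
      console.warn(`connection ${handle} was garbage collected without close()`);
      // ...release the handle here as a last resort
    });

    function openConnection(handle: number): Connection {
      const conn = new Connection(handle);
      registry.register(conn, handle, conn);
      return conn;
    }

    function closeConnection(conn: Connection) {
      conn.close();
      registry.unregister(conn); // already cleaned up; don't finalize later
    }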
Finalizers are crap. Having some sort of do-X-with-resource is much better but now you're back to caring about resource ownership and so it's reasonable to ask what it was that garbage collection bought you.
I agree with your parent that these sorts of accidental leaks are more likely with (though of course not uniquely associated with) having a GC so that you can forget who owns objects.
Suppose we're developing a "store" web site with an OO language - every Shopper has a reference to Previous Purchases which helps you guide them towards products that complement things they bought before or are logical replacements if they're consumables. Now of course those Previous Purchases should refer into a Catalog of products which were for sale at the time, you may not sell incandescent bulbs today but you did in 2003 when this shopper bought one. And of course the Catalog needs Photographs of each product as well as other details.
So now - without ever explicitly meaning this to happen - when a customer named Sarah just logged in, that brought 18GB of JPEGs into memory because Sarah bought a USB mouse from the store in spring 2008, and at the time your catalog included 18GB of photographs. No code displays these long forgotten photographs, so they won't actually be kept in cache or anything, but in practical terms you've got a huge leak.
I claim it is easier to make this mistake in a GC language because it's not "in your face" that you're carrying around all these object relationships that you didn't need. In a toy system (e.g. your unit tests) it will work as expected.
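To make the shape of the mistake concrete (all the names here are made up, and the loader is hypothetical):

    // Nothing below says "load 18GB"; the problem is purely what the graph can reach.
    class Photograph { constructor(public jpegBytes: Uint8Array) {} }
    class Product { constructor(public name: string, public photos: Photograph[]) {} }
    class Catalog { constructor(public products: Product[]) {} } // the full 2008 snapshot
    class Purchase { constructor(public product: Product, public catalog: Catalog) {} }
    class Shopper { constructor(public name: string, public previousPurchases: Purchase[]) {} }

    // Hypothetical loader that hydrates the whole object graph.
    declare function loadShopper(id: string): Shopper;

    // Logging Sarah in now pins the entire 2008 catalog (photos included) for as
    // long as this Shopper is reachable, even though no code ever displays them.
    const sarah = loadShopper("sarah");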
A website is a strange example because that's not at all how web services are architected. The only pieces that would have the images in memory are a file-serving layer on the server side and the user's browser.
> I claim it is easier to make this mistake in a GC language because it's not "in your face" that you're carrying around all these object relationships that you didn't need. In a toy system (e.g. your unit tests) it will work as expected.
I simply don't see how these mistakes would be any less obvious than in an equally complex C codebase, for instance.
Agree on finalizers being crap. Do-X-with-resource is shit too, though, because I don't want indentation hell. Well, I guess you could solve that by introducing a do-X-with variant that doesn't open a new block but rather attaches to the surrounding one.
> Finalizers are crap. Having some sort of do-X-with-resource is much better but now you're back to caring about resource ownership and so it's reasonable to ask what it was that garbage collection bought you.
What garbage collection brings you here, and what it has always brought you, is freedom from having to think about objects' memory lifetimes, which are different from other possible resource usages, like whether a mutex is locked or whatnot.
In fact, I'd claim that conflating the two, as is done for example with C++'s RAII or Rust's Drop trait, is extremely crap, since now memory allocations and resource acquisition are explicitly linked even though they don't need to be. This also explains why, as you say, finalizers are crap.
Things like Python's context managers (and with-blocks), C#'s IDisposable and using-blocks, and Java's AutoCloseables with try-with-resources handle this in a more principled manner.
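The same shape in TypeScript terms, just to show the resource scope being explicit while the memory lifetime stays the GC's problem (a sketch; openFile is hypothetical):

    interface Closeable {
      close(): void;
    }

    // Scope the *resource* explicitly; the object's memory remains the GC's concern.
    function withResource<R extends Closeable, T>(resource: R, body: (r: R) => T): T {
      try {
        return body(resource);
      } finally {
        resource.close(); // always released, however the body exits
      }
    }

    // Hypothetical file handle, purely for illustration.
    declare function openFile(path: string): Closeable & { read(): string };

    const text = withResource(openFile("config.json"), (f) => f.read());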
> I agree with your parent that these sorts of accidental leaks are more likely with (though of course not uniquely associated with) having a GC so that you can forget who owns objects.
>
> Suppose we're developing a "store" web site with an OO language - every Shopper has a reference to Previous Purchases which helps you guide them towards products that complement things they bought before or are logical replacements if they're consumables. Now of course those Previous Purchases should refer into a Catalog of products which were for sale at the time, you may not sell incandescent bulbs today but you did in 2003 when this shopper bought one. And of course the Catalog needs Photographs of each product as well as other details.
Why are things like the catalogues of previous purchases necessary to keep around? And why load them up-front instead of loading them lazily if you actually do need a catalogue of incandescent light bulbs from 2003 for whatever reason?
> So now - without ever explicitly meaning this to happen - when a customer named Sarah just logged in, that brought 18GB of JPEGs into memory because Sarah bought a USB mouse from the store in spring 2008, and at the time your catalog included 18GB of photographs. No code displays these long forgotten photographs, so they won't actually be kept in cache or anything, but in practical terms you've got a huge leak.
I'm sorry, what‽ Who is this incompetent engineer you're talking about? First of all, why are we loading the product history of Sarah when we're just showing whatever landing page we show whenever a customer logs in? Why are we loading the whole thing, instead of say, the last month's purchases? Oh, and the elephant in the room:
WHY THE HELL ARE WE LOADING 18 GB OF JPEGS INTO OUR OBJECT GRAPH WHEN SHOWING A GOD-DAMN LOGIN LANDING PAGE‽ Instead of, you know, AT MOST JUST LOADING THE LINKS TO THE JPEGS INSTEAD OF THE ACTUAL IMAGE CONTENTS!
Nothing about this has anything to do with whether a language implementation has GC or not, but whether the hypothetical engineer in question, that wrote this damn thing, knows how to do their job correctly or not.
> I claim it is easier to make this mistake in a GC language because it's not "in your face" that you're carrying around all these object relationships that you didn't need. In a toy system (e.g. your unit tests) it will work as expected.
I don't know, if the production memory profiler started saying that there's occasionally a spike of 18 GB taken up by a bunch of Photograph-objects, that would certainly raise some eyebrows. And especially since in this case they were made by an insane person, that thought that storing the JPEG data in the objects themselves is a sane design.
---
As you said above, this is very much not unique to language implementations with GC. Similar mistakes can be made for example in Rust. And you may say that no competent Rust developer would do this kind of a mistake. And that's almost certainly true, since the scenario is insane. But looking at the scenario, the person making the shopping site was clearly not competent, because they did, well, that.
Why do you treat memory allocations as a special resource whose lifetime deserves different reasoning than something like a file resource, a db handle, etc.? Sure, if you don't care about how much memory you're using, it solves a small niche of a problem for a lot of overhead (in both CPU and memory footprint), but Rust does a really good job of making it easy (and you rarely or never really have to implement Drop).
The context managers and such are a crutch and an admission that the tracing GC model is flawed for non-memory use cases.
Memory is freed by the operating system as soon as the program closes.
If you have an open connection to a database they might expect you to talk to them before you close the handle (for example, to differentiate between crash and normal shutdown). Any resource shared between programs might have a protocol that needs to be followed which the operating system might not do for you.
The GC model only cares about memory, so I don't really understand what you mean by "flawed for non-memory use cases". It was never designed for anything other than managing memory for you. You wouldn't expect a car to be able to fly, would you?
I personally like the distinction between memory and other resources.
If you look hard enough I'm sure you'll find ways to break a model that conflates the two. Similar to this, the "everything is a file" model breaks down for some applications. Sure, a program designed to only work with files might be able to read from a socket/stream, but the behavior and assumptions you can make vary. For example, it is reasonable to assume that a file has a limited size. Imagine you're doing a "read_all" on some video-feed because /dev/camera was the handle you've been given. I'm sure that would blow up the program.
In short, until there is a model that can reasonably explain why memory and other resources can be homogenized into one model without issue, I believe it's best to accept the special treatment.
> to differentiate between crash and normal shutdown
> Any resource shared between programs might have a protocol that needs to be followed which the operating system might not do for you.
Run far and quickly from any software that attempts to do this. It's a surefire indication of fragile code. At best, such things should be restricted to optimizing performance, and only if it doesn't risk correctness. But it's better to just not make any assumptions about reliable delivery indicating graceful shutdown, or a lack of signal indicating a crash.
> In short, until there is a model that can reasonably explain why memory and other resources can be homogenized into one model without issue, I believe it's best to accept the special treatment.
C++ and Rust do provide compelling reasons why memory is no different from other resources. I think the counter is the better position: you have to prove that a memory resource is really different, for the purposes of resource management, from anything else. You can easily demonstrate why networks and files probably should have different APIs (drastically different latencies, different properties to configure, etc.). That's why network file systems generally have such poor performance: the as-if pretension sacrifices a lot of performance.
The main argument tracing GC basically makes is that it's OK to be greedy and wasteful with RAM because there's so much of it that retaining memory longer than needed is a better tradeoff. Similarly, it argues that the cycles taken by the GC, and the random, variable-length pauses it often generates, don't matter most of the time. The counter is that while this probably doesn't matter in the P90 case, it does matter when everyone takes this attitude and P95+ latencies suffer (depending on how many services sit between you and the user, you'd be surprised how many 9s of good latency your services have to achieve for the eyeball user to observe an overall good P90 score).
> For example, it is reasonable to assume that a file has a limited size. Imagine you're doing a "read_all" on some video-feed because /dev/camera was the handle you've been given. I'm sure that would blow up the program.
Right, which is one of many reasons why you shouldn’t ever assume files you’re given are finite length. You could be passed stdin too. Of course you can make simplifications in cases, but that requires domain-specific knowledge, something tracing GCs do not have because they’re agnostic to use case.
Because 99.9% of the time the memory allocation I'm doing is unimportant. I just want some memory to somehow be allocated and cleaned up at some later time and it just does not matter how any of that happens.
Reasoning about lifetimes and such would therefore be severe overkill and would occupy my mental faculties for next to zero benefit.
Cleaning up other resources, like file handles, tends to be more important.
Then use “tracing GC as a library” like [1] or [2]. I’m not saying there’s no use for tracing GC ever. I’m saying it shouldn’t be a language level feature and it’s perfectly fine as an opt-in library.
Bolt-on GCs necessarily have to be conservative about their assumptions, which significantly hinders performance. And if the language semantics don't account for it, the GC can't properly do things like compaction (or if it can, it requires a lot of manual scaffolding from the user).
It should be a language-level feature in a high-level language for the simple reason that in the vast majority of high-level code, heap allocations are very common, yet most of them are not tied to any resource that requires manual lifetime management. A good example of that is strings: if you have a tracing GC, the simplest way to handle strings is as a heap-allocated immutable array of bytes, and there's no reason for any code working with strings to ever be concerned about manual memory management. Yet strings are a fairly basic data type in most languages.
That's my point, though. Either you have graph structures with loops, where the "performance" of the tracing GC is probably irrelevant to the work you're doing, or you have graph ownership without loops. The "without loops" case is actually significantly more common, and the "loops" case has solutions even without going all the way to tracing GC. Also, "performance" has many nuances here. When you say a bolt-on GC "significantly hinders performance", are you talking about how precisely it can reclaim memory and how quickly after it's freed? Or are you talking about the pauses the collector has to make, or the atomics it injects throughout to do so in a thread-safe manner?
I suspect the benefits of compaction are wildly overstated because AFAIK compaction isn't cache aware and thus the CPU cache thrashes. By comparison, a language like Rust lets you naturally lay things out in a way that the CPU likes.
> if you have a tracing GC, the simplest way to handle strings is as a heap-allocated immutable array of bytes
But what if I want to mutate the string? Now I have to do a heap allocation and can't do things in-place. Memory can be cheap to move but it can also add up substantially vs an in-place solution.
English is not my native language - could you help me understand where I gave the impression that Christianity made no contributions to the development of the west?
It is an obvious fallacy to conflate the use of tear gas canisters with the use of mustard gas in WWI. They differ drastically in amount/concentration, area of effect, and long-term health risks, and thus should be treated differently when considering their use.
Tear gas clearly sits on a spectrum of non-lethal arms with various other options that are more or less harmful. While it's entirely fair to criticize its use on a case by case basis, insofar as disorderly public gatherings can have varying levels of violence/destruction, it would stand to reason that some instances warrant the use of tear gas.
"One must know oneself [reference to Socrates]. If this does not serve to discover truth, it at least serves to regulate one's life, and there is nothing better"
"Physical science will not console me for the ignorance of morality in the time of affliction. But the science of ethics will always console me for the ignorance of the physical sciences."