In one of James Gosling's talks he tells a funny story about the origin of this design decision. He went around the office at Sun and gave a bunch of seasoned C programmers a written assessment on signed/unsigned integer behaviors. They all got horrible scores, so he decided the feature would be too complicated for a non-systems programming language.
Non-systems languages still need to interact with systems languages, over the network or directly. The lack of unsigned types makes this way more painful and error-prone than necessary.
To me this just reads like fear-mongering and shilling for the political status quo. I've recently been learning a bit about Russian history, and this has similarities to the conservative nobility throughout the 19th century trying, through various means, to suppress the spread of liberalism among the public and the intelligentsia: the point being that Russia had serious social ills like serfdom, and radical political ideas were absolutely warranted. Social media is destabilizing the influence of establishment sources of information, and more of the public (right and left) is finding out more accurate information about how the world works, then coming to natural conclusions about how to address various social ills. Polarization may be increasing, but people forming stronger opinions is also exactly what you would expect in the face of increased revelation about unsolved social problems. Ultimately, I'm optimistic about the long-term effects of social media on politics.
In practice, P2P over IPv6 is totally screwed because there are no widely supported protocols for dynamic firewall pinholing (allowing inbound traffic) on home routers, whereas dynamic IPv4 NAT configuration via UPnP is very popular and used by many applications.
Most home routers do a form of stateful IPv6 firewalling (and IPv4 NAT, for that matter) that is compatible with STUN. UPnP is almost never necessary and has frequent security flaws in common implementations.
You just send a (UDP) packet to the other side's address and port and they send one to yours. The firewalls treat it as an outbound connection on both sides.
I don't believe that's true. You would still need something like UDP hole punching to bootstrap the inbound flow on both sides first. And you would be limited to UDP traffic; TCP would still be blocked.
Sending one packet outbound is hole punching. It's really that simple. Since there's no NAT, you don't need to bother with all the complexity of trying to predict the port number on the public side of the NAT. You just have two sides send at least one packet to each other, and that opens the firewalls on both sides.
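As a concrete sketch of that two-packet punch (my own toy illustration in Python, not anything from the thread; both sockets live on localhost here, whereas real peers would first learn each other's public address/port out of band, e.g. from a STUN-style rendezvous server):

```python
import socket

# Two UDP "peers". On a real network these would sit behind separate
# stateful firewalls and would learn each other's address/port from a
# rendezvous server; here they share localhost so the demo is self-contained.
a = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
b = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
a.bind(("127.0.0.1", 0))
b.bind(("127.0.0.1", 0))
a.settimeout(2)
b.settimeout(2)

# The "punch": each side sends one outbound packet to the other's address.
# A stateful firewall on either side records this as an outbound flow and
# thereafter admits the matching inbound packets.
a.sendto(b"punch", b.getsockname())
b.sendto(b"punch", a.getsockname())

msg_at_b, _ = b.recvfrom(1024)
msg_at_a, _ = a.recvfrom(1024)
print(msg_at_b, msg_at_a)  # b'punch' b'punch'

a.close()
b.close()
```

This only demonstrates the UDP case: it works precisely because each firewall treats the first outbound packet as opening a flow, which is the claim being made above.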
Ironically, this author gets the relationship between lisp and writing totally wrong. Lisp may be much more artistic, but programming in Java, for instance, is much more akin to writing than programming in lisp is. Written languages have well-established vocabulary and grammar that cannot be changed or redefined by the writer. The author is completely correct that lisp is more of a "programming medium" than a "programming language", since the language itself can be molded and changed by the programmer in very self-expressive ways. However, he doesn't follow through with this observation to the obvious conclusion that this feature of lisp, as a medium, makes it fundamentally different from human language.
I don’t think that’s the right take. Poetry manipulates common grammatical rules and still communicates meaning from the writer to the reader, perhaps in an even deeper way because of that manipulation. Of course in Java and many other programming languages, grammatical errors will simply not compile. LISP is one of those few languages where grammar can change from program to program, much like with poetry
Even though there is much more freedom in poetry, it is still defined by a specific set of rules/features: verses, rhythm, stanzas, spacing, meter, and rhyming. It's only because of these restrictions that it is so obvious when writing is or isn't poetry. These features and forms can be stretched, but unlike lisp they cannot be completely redefined.
You’ve got to read more poetry before making assertions like this. In practice, the definition is more fluid than that.
Lisp cannot be completely redefined. You can’t avoid parentheses, and if you stray too far from common idiom, you’re no longer writing Lisp, you’re writing something else using Lisp syntactic forms.
> It's only because of these restrictions that it is so obvious when writing is or isn't poetry. These features and forms can be stretched, but unlike lisp they cannot be completely redefined.
I disagree here. To take rhyming as an example: it's possible to have a poem where every line rhymes AND a poem where there is no rhyme at all. It's not as simple as saying 'okay, the lines in this text don't rhyme so it can't be a poem'. The same is true of things like spacing and meter. These are all massively variable, and the result doesn't even have to be bound by the usual rules of grammar. English - or any other natural language - is much more variable than Lisp.
For me the defining feature of poetry is that the form and nature of the language used in a text may suggest meaning over and above what the individual words say. This definition is subjective, and suggests that the poetry is in the eye of the beholder, but is more honest than a simplistic checklist of features to look out for.
I didn't say poetry has to have all of those things, but it has to contain some of them or it simply isn't poetry. I would challenge you to find me one good example of poetry that has none of the features I listed.
This whole poetry topic is really beside the point anyway.
> English - or any other natural language - is much more variable than Lisp.
I don't feel like you are actually addressing what I'm saying, so let me reiterate it more clearly. I'm not making any assertions about the absolute creative power of lisp or writing. It is the author of the article who points out that lisp's distinguishing feature, compared to other programming languages, is its ability to specialize and mutate its own verbiage/syntax to better fit certain problems or modes of thinking. I am simply pointing out the irony that this characteristic of lisp also distinguishes it significantly from natural language, even though the author is attempting to argue that programming lisp and writing literature are similar.
English Language is the best general purpose conveyance of arbitrary ideas, and it has syntax rules just like programming languages. It's "best" by the metric of being "easiest for humans to understand". That's what I mean by best, in this case.
I think one can argue that LISP is the "best" computer programming language based on a set of metrics that revolve around simplicity and composability. There's no simpler language. There simply cannot be, because there's nothing "extra" in LISP. It's the most compact (while still being human readable) way to define and/or call functions.
You can also argue that LISP is closer to English than any other programming language, because in English we use parentheses and lists to enumerate things related to what has been said. For example "(add, 1, 2)" is axiomatically the best way to express adding two numbers, in a way that scales to arbitrary numbers of operations. It's superior to "(1+2)" because the plus sign is a single character (cannot scale), and therefore there are a limited number of them, and using symbols means humans have to memorize them rather than simply reading their name. But "add" is a name one can read. Also "add one and two" is a valid English sentence, so all LISP programs are similarly valid English language expressions, where parentheses are merely used for grouping language "clauses".
If the world were ever to be forced to agree on one single high level programming language, it would necessarily need to be LISP for about 10 other huge reasons I could name: Everything from ease of writing parsers, to compression of code, simplicity, ease for learning, even as a young child, etc.
> English Language is the best general purpose conveyance of arbitrary ideas, and it has syntax rules just like programming languages. It's "best" by the metric of being "easiest for humans to understand". That's what I mean by best, in this case.
Given that most people alive today don't understand English at all, I don't think this claim holds up very well.
> For example "(add, 1, 2)" is axiomatically the best way to express adding two numbers, in a way that scales to arbitrary numbers of operations. It's superior to "(1+2)" because the plus sign is a single character (cannot scale), and therefore there are a limited number of them, and using symbols means humans have to memorize them rather than simply reading their name.
I'd be willing to wager that "1+2" is understood by far more people across the globe than "(add, 1, 2)".
* I use "English Language" as a synonym for "Human Language". However, even if you want to be pedantic and interpret all my words in the literal sense, everything I said is still literally true.
> I use "English Language" as a synonym for "Human Language".
That was unclear given you kept calling out English, and not natural or human language more broadly in the rest of your comment. But I'll go with it.
> all my words in the literal sense, everything I said is still literally true.
No, they aren't. You need to make a stronger case than "Because I declared it axiomatically true".
+ has become part of nearly every language already, what's the value of picking one word (add) from one language (English) to replace it? Or to be more generous to say that every language should substitute for + whatever their language's word is. Now they can't communicate without a translator. Or, they could just use + as has been done for centuries. Why make things harder on ourselves?
The point about `(+ 1 2)` vs. `1+2` is that the LISP syntax generalizes to all computations, whereas mathematical expressions do not. The beauty of LISP is that one simple syntax handles all of computation in the axiomatically simplest way possible.
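To make the "one syntax generalizes" point concrete, here's a toy prefix evaluator (my own Python sketch, nothing from the thread; nested lists stand in for S-expressions and the operator table is purely illustrative):

```python
import math

# Illustrative operator table: each operator consumes a list of
# already-evaluated operands, so every operator is naturally variadic.
OPS = {
    '+': sum,
    '*': math.prod,
    'max': max,
}

def evaluate(expr):
    """Evaluate a Lisp-style prefix expression.

    Atoms evaluate to themselves; a list has the shape
    (operator, operand, ...), and that one shape covers any
    operator, any arity, and any depth of nesting.
    """
    if not isinstance(expr, list):
        return expr
    op, *args = expr
    return OPS[op]([evaluate(a) for a in args])

print(evaluate(['+', 1, 2]))              # 3, i.e. (+ 1 2)
print(evaluate(['+', 1, 2, 3, 4]))        # 10: the same form scales to any arity
print(evaluate(['max', ['+', 1, 2], 5]))  # 5: nesting composes uniformly
```

The whole evaluator is one branch and one recursion, which is roughly the "simplest possible syntax" argument in code form.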
Lisp would be vastly less readable if the parentheses were removed, and arity had to be worked out according to the definitions of the functions and operators, understood to be working on an implicit operand stack.
Think about it: the entire class of error of mismatched stack height doesn't exist in Lisps.
There could be an operand stack, if the Lisp is translated to a stack-based byte code; but it's managed so the programmer isn't concerned about it.
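A toy stack evaluator (my own Python sketch, not from any real Lisp) shows what that implicit operand stack looks like, and why mismatched stack height becomes a possible error class once the parentheses are gone:

```python
def eval_rpn(tokens):
    """Evaluate postfix (RPN) tokens on an explicit operand stack.

    Without parentheses, each operator's arity must be known in
    advance (here: all binary), and a malformed program surfaces as
    a stack underflow or leftover operands rather than a syntax error.
    """
    ops = {'+': lambda a, b: a + b, '*': lambda a, b: a * b}
    stack = []
    for tok in tokens:
        if tok in ops:
            b = stack.pop()  # raises IndexError on stack underflow
            a = stack.pop()
            stack.append(ops[tok](a, b))
        else:
            stack.append(tok)
    if len(stack) != 1:
        raise ValueError("mismatched stack height")
    return stack[0]

# (+ 1 (* 2 3)) written backwards, RPN-style:
print(eval_rpn([1, 2, 3, '*', '+']))  # 7
```

In a parenthesized Lisp the reader enforces the nesting, so this whole class of error never reaches the evaluator.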
Yeah, RPN is essentially what you'd get as output if you compiled LISP using a compiler. Claiming RPN is as easy to deal with as LISP would be utterly absurd, and no one genuinely believes that.
Well, RPN with parentheses, as a layer of syntax, would be easy to deal with. That's what Lisp backwards would be.
((a b +) (a b) add defun)
(((list pop) print) list while)
It helps me to read it in Japanese:
"A to B wo tasu, A to B no paramēta no aru, 'add' to iu kansuu wo 'defun' shite."
"lisuto wo 'pop' shite, sore wo insatsu shite, lisuto ga kara de nai aida ni"
Interesting things happen. When we think about how 'add' is called, we follow it from right to left, contrary to the direction of speech of the Japanese sentence. The function add is called, the (a b) parameters receive argument values and the (a b +) is evaluated.
However, the chained application expressed by nesting, which people complain about being backwards (so they need to invent piping/threading macros to go left to right), now reads left to right! We do have to evaluate the while condition first, but then the pop and print go left to right.
Yeah, the Japanese incorrectly put verbs at the end of the sentence! Just like Yoda does. Asian he must be. Incorrect they all are.
But seriously, yeah computers are naturally 'Stack Machines' [almost] always, where they need the arguments pushed onto a stack before the 'verb' is executed.
That's why we have interpreters and compilers: So we can make code easier to read. FORTRAN was originally invented as a way to let humans think in terms of parenthetical groupings for math and functions, specifically to _avoid_ reasoning in an RPN way.
Reverse Polish notation is every bit as easy, simple and minimal as Lispian prefix notation. They are mirror images of one another... and for most ordinary people, both are equally difficult to read or to use.
And I include myself in this.
Remember that for a lot of ordinary people, school algebra is almost impossible to follow and is the reason they stop studying maths as soon as they can. And that is simple infix notation.
Riddle me this then: when editing code full of parentheticals, why do devs find it helpful that IDEs can find and highlight the matching opening/closing parentheses automatically? I mean, based on your logic, parentheses just don't make things any easier, right? I'll leave the topic of "indentation" completely off the table too, because a single riddle is enough.
The parentheses in LISP make it dramatically easier to deal with than RPN, and I don't believe for a millisecond that you objectively disagree with that fact.
> There's no simpler language. There simply cannot be, because there's nothing "extra" in LISP. It's the most compact (while still being human readable) way to define and/or call functions.
I guess you have not gotten into stack or array programming languages yet?
Forth is insanely compact and then there is APL which is a complete other ballpark.
Or check out Rebol for a homoiconic language in a very different syntax.
Lisp is amazing, but oh boy, there is a whole other world out there. It is just one possible local optimum. One that I personally love, but not the only one.
I wouldn't exactly call APL simpler than LISP. Just because there are some things in APL that _can_ be coded with fewer characters doesn't make it overall simpler, or even more compact in the general case. It's just axiomatic that for a language you need the ability to define and call functions, and it's just a debate about whether there's a simpler syntax than LISP or not, and I say there isn't.
> English Language is the best general purpose conveyance of arbitrary ideas, and it has syntax rules just like programming languages. It's "best" by the metric of being "easiest for humans to understand". That's what I mean by best, in this case.
This is the point I take issue with. I agree with you that lisp is the simplest and "best" programming language. Unlike lisp, there is no clear "best" natural language that is more simple and composable than all other natural languages (I know that's not what you're claiming, just pointing that out). The dimension of your "best" metric for language is pretty bizarre though; all you are saying is that spoken/written language, in general, is better than grunts and pointing. If you actually compare the space of natural languages and the space of programming languages, which is much more interesting imo, I think you would have to agree that non-lisp programming languages are more similar to natural language because their development was more practical and unprincipled than lisp's.
The reason I say LISP is close to English is because its syntax is purely a verb followed by objects, and there are [practically] no other symbols in the language other than parentheses.
Since the word "best" always gets everyone's dander up on HN, I was very careful to point out we have to define our metrics (of comparison) before we can use that word, and that's precisely what I did.
Written languages are not static. You can absolutely introduce your own vocabulary and grammar. Yes, you might become harder to understand for someone unfamiliar with it, but so can a Lisp program when macros are overused. It is an art after all.
This is especially insane when talking about English. What English? American English? What group? African-American English, which introduces whole new grammatical concepts? Indian English? Even two people born and raised in London can have wildly different ways of speaking depending on their class backgrounds and many other factors. There isn't one English.
And it is ever evolving. Shakespeare's English is vastly different from 21st-century English. In fact, Shakespeare himself invented roughly 1.7k words that we still use today.
Not sure what you are arguing with in my comment because I never said that languages are static; obviously they change. But to pretend that language, as practiced today, is not well defined is absurd. Every part of English grammar and syntax has been methodically systemized, and this information is widely taught in public education programs. Some common set of definitions/rules (may vary by people or region) is conformed to by speakers of the language.
Oh, the police is gonna show up if I verb something that isn't a verb? People only ever use grammatical constructs and vocabulary that they have learned in school?
It is super common for authors to introduce new words or bend the grammar. Again, bloodydamn Shakespeare invented so many new words! Oh, and bloodydamn is such an invented word from a very contemporary and popular book series called the Red Rising series. You don't even need to be high brow to invent new words and get away with it.
I urge you to explore more literature and poetry. There is more to English than what they teach you at school.
> Oh, the police is gonna show up if I verb something that isn't a verb? People only ever use grammatical constructs and vocabulary that they have learned in school?
That's not at all what I said in my comment and I'm not at all refuting your point that languages change and can vary depending on who is using it.
> Every part of English grammar and syntax has been methodically systemized
No, it hasn't. None of it has, ever.
So far, you have been wrong about writing, about human language (specifically, English), about programming languages (specifically, Lisp and Java), about poetry, and about the rules of English and how they're set.
I am intrigued. Is this performance art? What next?
Have you seen LOOP[1]? That example barely scratches the surface too.
As for being able to make words mean different things and break grammatical strictures; that’s called poetry when we do it in English. And yes there is bad poetry, but some is superlative.
While I agree with the thrust of the article being that students are cheating themselves by relying on LLMs, it's important to reflect on ways in which educators have encouraged this behavior. Anyone who has been to college in the age of the internet knows that many professors, particularly in the humanities, lazily pad out their class work with short menial writing assignments, often in the form of a "discussion board", that are rarely even graded on content. For students already swamped with work, or having to complete these assignments for general ed courses unrelated to their major/actual interests, it is totally understandable why they would outsource this work to a machine. This is a totally fixable issue: in-person discussions and longer writing assignments with well structured progress reports/check-ins and rounds of peer review are a couple ways that I can think of off the top of my head. Professors need to be held accountable for creating course loads that are actually intellectually interesting and are at least somewhat challenging to use LLMs to complete. When professors are constantly handing out an excess of low-effort assignments, using shortcuts becomes a learned behavior of students.
It is complicated because tariffs are not just "passed off". In the real world, tariff absorption is almost always marginal, meaning each company is willing to absorb some of the cost, i.e., accept lower profits. In the parent's example, the US has a large steel industry, and rail companies relying on imports cannot arbitrarily raise prices because they need to stay competitive with companies using domestic steel. Markets are also dynamic, and simply comparing pre/post prices may work temporarily, but for a persistent tariff that metric will become increasingly meaningless as prices inevitably fluctuate due to other factors.
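A toy arithmetic sketch of that partial absorption, with entirely hypothetical numbers:

```python
# Hypothetical numbers only, to illustrate partial pass-through of a tariff.
import_cost = 100.0    # pre-tariff cost of the imported steel
tariff_rate = 0.25     # 25% tariff
domestic_price = 115.0 # competing domestic steel price

full_cost = import_cost * (1 + tariff_rate)  # 125.0 landed cost with tariff

# The importer can't price above the domestic competitor, so it absorbs
# whatever part of the tariff would push its price past 115.
passed_on = min(full_cost, domestic_price) - import_cost   # 15.0 reaches buyers
absorbed = full_cost - min(full_cost, domestic_price)      # 10.0 eaten as margin

print(passed_on, absorbed)  # 15.0 10.0
```

The point of the sketch: the observed price rise (15) understates the tariff (25), which is why a naive pre/post price comparison mismeasures the burden.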
This is definitely not true from a philosophical level. Contemplation of an infinite god/cosmos, and the seemingly infinite nature of time are core aspects of early Greek philosophy, which heavily influenced the philosophy of each of the Abrahamic religions.
As you started alluding to, the reason the west may seem more fearful of the infinite is likely because of widespread secularism, not western religion. An infinite cosmos is not nearly as scary to someone whose life purpose is appeasing an all good/infinite/timeless/immutable being, as it is for someone whose life purpose is managing their dopamine levels.
Somewhat related: being someone who grew up in the west, I've always wondered how Hindus and Buddhists deal with evidence of the big bang. It fits fairly naturally into Abrahamic traditions that believe in a beginning to the universe. Though, it is fairly important to the philosophy of those eastern traditions that time and space (samsara) have no beginning or end.
I've recently been reading the Pensees, so it feels timely that this article was posted. I'll add here a couple more practical pieces of advice that Pascal offers for dealing with what he considers the wretchedness of human experience:
"One must know oneself [reference to Socrates]. If this does not serve to discover truth, it at least serves to regulate one's life, and there is nothing better"
"Physical science will not console me for the ignorance of morality in the time of affliction. But the science of ethics will always console me for the ignorance of the physical sciences."
Having used Solid on a largish web product for over a year, I am thoroughly convinced and will not be returning to React in the future.
This is somewhat of an aside: I am aware that the creator of Solid has long been experimenting with adding laziness to the reactive system. I think it would be a mistake. That everything is immediate keeps state changes intuitive, fairly easy to debug, and is one of the strong points of Solid's design. I've never run into a real world scenario where delaying computations seemed like an optimal way of solving a given problem.
I'm curious: which part of laziness are you concerned with? Is it delayed execution in events? It's just that most lazy things run almost immediately during creation anyway, and on update everything is scheduled anyway. The only lazy thing we are looking at is memos, which, while impactful, isn't that different since the creation code runs.
I guess the push/pull propagation is harder to follow on update than a simple multi-queue, but in complex cases we'd have a bunch of queue recursion that wasn't simple either.
Wow, I never imagined you would respond to my comment, a little embarrassed ngl :D
I understand that under the hood Solid's reactive system is not quite simple; though, the mental model needed to use it, is very simple, which I greatly appreciate when building complex application logic on top of it. That's really my main concern: that one-way "push" semantics are easy to follow and think about, and adding new mechanics complicates that picture. It seems deceptive that what presents itself, at least conceptually, as just a value access, might now cause arbitrary code execution (another way of putting this is that it feels like it violates the principle of least astonishment).
As I mentioned before, I also haven't run into situations in practice where lazy memos seem like a desirable behavior. If I initialize a derived value, it's because I plan to use it. If some computation needs to be "delayed", I place it inside an if statement with a reactive condition, for instance, createMemo(() => shouldCompute() ? computeValue() : undefined).
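To illustrate the eager-"push" versus lazy-"pull" distinction at issue, here's a toy Python sketch of the general idea only — this is not Solid's API or internals, just the smallest model I could write of the two behaviors:

```python
class Signal:
    """Minimal eager reactive value: writing it immediately notifies subscribers."""
    def __init__(self, value):
        self.value = value
        self.subscribers = []

    def get(self):
        return self.value

    def set(self, value):
        self.value = value
        for fn in self.subscribers:
            fn()  # "push": work happens at write time

def eager_memo(signal, compute):
    """Derived value recomputed immediately on every write."""
    cache = {'v': compute(signal.get())}
    signal.subscribers.append(lambda: cache.update(v=compute(signal.get())))
    return lambda: cache['v']  # reading is just a cache lookup

def lazy_memo(signal, compute):
    """Derived value merely marked dirty on write, recomputed only when read."""
    state = {'dirty': False, 'v': compute(signal.get())}
    signal.subscribers.append(lambda: state.update(dirty=True))

    def read():
        if state['dirty']:  # "pull": a plain-looking read may run arbitrary code
            state['v'] = compute(signal.get())
            state['dirty'] = False
        return state['v']
    return read

count = Signal(1)
double = lazy_memo(count, lambda c: c * 2)
count.set(10)    # with the lazy memo, no recomputation happens here
print(double())  # 20: computed now, at read time
```

The objection above is visible in the last two lines: with the lazy variant, the moment work happens moves from the write to the read, which is exactly the "value access runs arbitrary code" surprise.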
All that's said, you've done a fantastic job with Solid. I hope you continue to innovate and prove my objections misguided.
I'll admit that hearing that "laziness" was something being explored for Solid 2.0 also made me uneasy. Like the OP, in my case I know it's because I'm worried that it will complicate working with Solid state in the same way that React's "concurrent mode" complicates working with React state. Really, I just hate React's "concurrent mode" and I really like the simplicity of working with Solidjs' state by comparison (if Solidjs has a similar concept to concurrent mode I haven't run into it yet).
All of this is to say that my own worries aren't based on any concrete objections, but on a general fear of change, since I like the current approach (and haven't run into any performance issues with it). Also, without knowing exactly what you plan on changing, the idea of introducing "laziness" seems like it could be a euphemism for "async", which feels like it would definitely make things more complex. Anyway, reading your comment makes me hopeful that my ambiguous unease is misplaced.
React has lived long enough to become the villain, and it's way too entrenched. It was certainly a very important step forwards in webdev, but it now probably has more gotchas than vanilla JS does.
I think it's just the lifecycle of craftsman tooling in general.
When everyone has experience with a tool, everyone can enumerate its downsides in unison, but we can't do that with new/alternative tools.
Whether we confuse that for the new tool having no drawbacks, or we're just tired of dealing with the trade-offs of the old tool, or we're just curious about different solutions, we get a drive to try out new things.
React always had gotchas, but the question is how tolerable are those gotchas compared to the gotchas of what you were doing before. And how tolerable are Solid's gotchas going to be once you discover them. Sometimes it's a better set of gotchas and sometimes it isn't.
It's also easy to confuse problems that arise from failing to adequately manage the gotchas with problems inherent in the tool itself. There's a subtle distinction there that's easy to miss, especially for those with a blame-the-system sort of attitude (which I don't entirely fault).
One company I worked for had a very slow frontend. It was common there to blame the slowness on React. "React is just kind of slow."
Another company I worked for had a much larger React-based frontend, and it was fast-loading and snappy by comparison.
The difference is that the second company had much more well-established good practices in its design system, the codebase, the enforced lint checks, and the general knowledge of its engineers. They avoided performance traps that caused multiple renders of things. (The first company's frontend would render a given screen 12+ times for some state changes.)
> The first company's frontend would render a given screen 12+ times for some state changes
Might just be me, but it feels like a good framework/library would put in a lot of work to avoid or at least alert about these kinds of issues so they can’t stay very long without being fixed.
I’ve seen the same in React (even infinite render loops which couldn’t be ignored and had to be fixed) and to a lesser degree with the likes of Vue as well.
I’m not sure what’s missing but having a tool that is easy to use in the wrong way doesn’t inspire confidence.
Very balanced response, but in my case it's less about gotchas and more about APIs. I just think other frameworks have more intuitive APIs than react. Maybe this falls in the same category as a gotcha, but I feel it's a little different.
It's not that it's way too entrenched, it's just that people grew tired of constantly shifting frontend frameworks. Personally: I really didn't care who "won" that war, I just wanted to have a sane well-supported default without having to learn a new pattern every year to store page state or update an icon.
> I've never run into a real world scenario where delaying computations seemed like an optimal way of solving a given problem.
And even when it might be, Solid has always exposed fairly low level reactive primitives for those who want more control. Hopefully if laziness is added, it's in the form of new optional primitives.
> Having used Solid on a largish web product for over a year
I am curious about your experience in this regard. I've been aware of Solid for quite a while (and plenty of other non-React alternatives that on paper seem "nicer") but the problem I usually quickly run into after exceeding the complexity of a contrived demo app is the ecosystem not yet having a number of components, library wrappers, integrations, etc. that I use all the time.
Have you found the Solid ecosystem sufficient for whatever your needs are, is it fundamentally easier with Solid to integrate things that don't exist yet, or did you go into it less spoiled/tainted as I by reliance on React's ecosystem?
I've found the ecosystem to be perfectly serviceable for every complex piece of functionality I needed to bring in: remote state, forms, tables, and routing, come to mind. Complex state management can easily be handled using the library's standard reactive primitives and the community "solid primitives" project has a ton of well made utilities so you don't have to reinvent the wheel for common use cases.
I'm not going to sugar coat it though, SolidJS is not necessarily a batteries included ecosystem. There is a severe lack of components/component libraries. Luckily integrating vanilla (or "headless") JS libs is dead simple once you have enough experience.
Cool. I am a bit of a minimalist (one reason I've never felt the most comfortable with React compared to some other things) and equally not interested in the bloat and differing opinions/principles that come from using component libraries for everything (I almost never use them for UI), but, yeah, I also don't want to have to write my own high-performance basics. A few years ago, when using Vue for something, I had to detour a lot into writing things like virtual list components because at the time they were not available. (I see Solid has a few of those.) Not ideal for a tiny team, or as a solo dev trying to make a dent in something in my off hours.
> Luckily integrating vanilla (or "headless") JS libs is dead simple once you have enough experience.
Good to know. I expect to need to write my own wrappers for certain things that are more niche, but some frameworks definitely make it easier than others, and I do tire of wrapping the same 153 events and properties and such for some component yet again when [framework of the month] has an annoying way of doing this
For what it's worth, ecosystem is a valid concern, especially compared to React, and especially if you're dealing with more niche features, even for something fairly generic like virtual list implementations. Virtual lists do exist for Solid (TanStack's is the most fully featured, albeit under-documented), but they aren't as battle-hardened or as well-documented as their React counterparts. I have thought about digging into the virtual list internals because of a couple of issues I haven't seen in some other frameworks' implementations (granted, of all the complex generic UI components, the virtual list is literally the only ecosystem issue I've had).
Also, you'll definitely start seeing edges faster. Things like touch event libraries, animation libraries, maplibre/dataviz/etc. I'd say that the solid ecosystem is at the point where you'll see 1-2 libraries for most things, but if those libraries aren't quite right for you, you can very quickly find yourself digging deeper than you'd like.
That being said, as parent stated, integrating vanilla libs isn't so hard. And also, there is a... solid... amount of people building headless libraries specifically for the ecosystem, which are quite useful (For example, I recently came across https://corvu.dev, which I've started integrating here and there). What I mean to say is less that solid has a poor ecosystem, and more that there isn't the infinite easy-access vibe-code-able pop-in-lib-to-solve-your-problem ecosystem of react.
Even with the shallow ecosystem, solidjs has been quite enjoyable to me. Also, in the two years I've been using it, I think I've built up enough of my own personal ecosystem that I don't really need to reach towards external dependencies all that much, which feels very nice.
I've used Solid Primitives and while it's great, unfortunately it seems pretty dead. None of the primitives are at "4. Accepted/Shipped" and many aren't even at "3. Pre-shipping (final effort)".
Is laziness intended to offer primitives for suspended UIs?
I haven’t used Solid for a while and can’t recall if there’s a Suspense counterpart already. If not, this seems like a reasonable feature to add. It’s a familiar and pretty intuitive convention
React is, I am convinced, the new Struts only client side.
Struts was SSR before we needed a term for SSR. Its productivity was low, but it looked like it was doing well in the industry because there were so goddamned many jobs. If you asked people, though, you really couldn't find many who loved it, and if they did, it was a sign that maybe their judgment wasn't that great, and you were going to be disappointed if you asked follow-up questions about what else they thought was a good idea.
It was killed by JSTL, which was one of the first templating systems to get away from <% %> syntax.
Because the state -> DOM mapping is non-trivial for my application, I ended up writing my own virtual DOM diffing code, a primary facet of React. I appreciate the ease of being able to circumvent this where it's not necessary and performance considerations dominate, though I admit I've not felt the need to do it anywhere yet.
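The core of such diffing can be sketched briefly (a naive Python illustration of the general technique, not the commenter's actual code; nodes are plain dicts with hypothetical `tag`/`props`/`children` keys):

```python
# Naive virtual-DOM diff: walk two trees in parallel and emit a patch list.
# Each patch is (kind, path, payload), where path is the tuple of child
# indices from the root. Real implementations add keys, reordering, etc.
def diff(old, new, path=()):
    patches = []
    if old is None:
        patches.append(('create', path, new))
    elif new is None:
        patches.append(('remove', path))
    elif old['tag'] != new['tag']:
        patches.append(('replace', path, new))  # different element: swap subtree
    else:
        if old.get('props') != new.get('props'):
            patches.append(('props', path, new.get('props')))
        old_kids = old.get('children', [])
        new_kids = new.get('children', [])
        # Positional child matching: recurse over the longer of the two lists.
        for i in range(max(len(old_kids), len(new_kids))):
            o = old_kids[i] if i < len(old_kids) else None
            n = new_kids[i] if i < len(new_kids) else None
            patches.extend(diff(o, n, path + (i,)))
    return patches

print(diff({'tag': 'p'}, {'tag': 'h1'}))  # [('replace', (), {'tag': 'h1'})]
```

A renderer would then apply the patch list to the real DOM, which is the part worth skipping when the state -> DOM mapping is simple enough to update directly.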
The AI training data set for React is much larger. The models seem to do fine with SolidJS, though I suspect there is meaningful benefit to using React from this point of view.
Overall, I'm happy with where I'm at and I prefer the SolidJS way of thinking, though if I were to do it all over again, I'd probably just go with React for the above two reasons.
I was not expecting to lose karma for this comment!
I have a couple of years familiarity with SolidJS and thought the two key insights that came to mind from building a large project in it would have positive value.