The ring's source code in particular had to be transcribed from an actual shader I made (https://www.shadertoy.com/view/msd3R2). I didn't have the patience to write it directly in CSS. It's neither the right semantics nor the right medium for this (which does make you wonder what CSS's intended medium was supposed to be).
Perhaps unsurprisingly, it runs at a full 120fps on a decent laptop (it's just 21*21 JS style setting, really). I've tried to make a half-JS, half-CSS version, and the speed is somewhere in between. Basically, CSS variables are _extremely_ slow across all browsers, and I don't believe I've hit some particular edge case. They're not used idiomatically here, but still, we shouldn't see slowdowns this drastic when updating 441 styles. They're also pretty hard to read the moment the calculation isn't some trivial basic arithmetic; and even then...
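To make the "441 styles" point concrete, here's a minimal sketch of the all-JS fast path, with a hypothetical DOM setup (the `.cell` class and the animated property are stand-ins, not the actual demo):

```typescript
// Hypothetical setup: 21*21 = 441 grid cells animated by writing styles
// directly from JS every frame.
const cells: HTMLElement[] = Array.from(
  document.querySelectorAll<HTMLElement>(".cell")
);

function frame(t: number): void {
  for (let i = 0; i < cells.length; i++) {
    // One direct style write per cell; 441 writes per frame total.
    cells[i].style.opacity = String(0.5 + 0.5 * Math.sin(t / 1000 + i));
  }
  requestAnimationFrame(frame);
}
requestAnimationFrame(frame);

// The half-JS, half-CSS variant would instead write a custom property,
//   cells[i].style.setProperty("--v", String(v));
// and let CSS derive the visuals from var(--v); that's the path where
// the cross-browser CSS-variable slowdown shows up.
```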
The true shader version runs at basically infinite fps =)
It's more than that. The way black box composition is done in modern software, your n=100 code (say, a component) gets reused in another thing somewhere above, and now you're being iterated through m=100 times. Oops, now n=10k.
Generally, Casey seems to preach holistic thinking: finding the right mental model and just writing the most straightforward code (which is harder than it looks; people get distracted in the gigantic state space of solutions all the time). However, this requires 1. a small team of 2. good engineers. Folks argue that this isn't always feasible, which is true, but the point of these presentations is to spread the coding patterns & knowledge needed to train the next gen of engineers to be more aware of these issues and to work toward said smaller-team, better-engineers direction, knowing that we might never reach it. Most modern patterns (and org structures) don't incentivize these 2 qualities.
> The way black box composition is done in modern software, your n=100 code (say, a component) gets reused in another thing somewhere above, and now you're being iterated through m=100 times. Oops, now n=10k
That doesn't seem quite right, as 100 * (100^2) = 10^6, which is way less than 10000^2 = 10^8.
Yeah, I was only talking about quantities. Equivalently, assume that it's a linear algorithm in the child and a linear one in the parent. It ultimately ends up as O(nm), which is some big number, but when people do runtime analysis in the real world, they don't tend to consider the composition of these black boxes, since there'd be too many combinations. (Composition of two polynomial runtimes would be even worse, yeah.)
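A tiny sketch of that composition argument, with hypothetical names: each author sees a linear black box, but the composition is O(nm).

```typescript
// Child component: O(n) over its own cells. Looks innocently linear.
function renderRow(cells: number[]): string {
  return cells.map(c => c.toString()).join(" | ");
}

// Parent: O(m) over its rows... except each call hides the child's O(n).
function renderTable(rows: number[][]): string {
  return rows.map(renderRow).join("\n");
}

// m=100 rows of n=100 cells: each box was analyzed as "100 items",
// but the composition quietly does 10,000 units of work.
const rows = Array.from({ length: 100 }, () =>
  Array.from({ length: 100 }, (_, i) => i)
);
console.log(renderTable(rows).length);
```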
Basically, performance doesn't compose well under current paradigms, and you can see Casey's methods as starting from the assumption of wanting to preserve performance (the cycle count is just an example, although it might not appeal to some crowds) and working backward toward a paradigm.
There was a good quote that programming should be more like physics than math.
Quite a few APIs use a pair of `{start, length}` instead, which, in the context of the post's example, is even clearer. An empty interval would be `length == 0`, a sequence of contiguous intervals would be a single array of `starts`, etc. Fewer subtractions (to get the length) usually end up nicer too.
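A quick sketch of that convention (the names are mine, not from the post):

```typescript
interface Span {
  start: number;
  length: number; // empty interval is just length === 0
}

const isEmpty = (s: Span): boolean => s.length === 0;
const end = (s: Span): number => s.start + s.length; // derived, never stored

// Contiguous intervals collapse into a single array of starts:
// interval i spans [starts[i], starts[i + 1]).
const starts = [0, 10, 25, 40];
const spanAt = (i: number): Span => ({
  start: starts[i],
  length: starts[i + 1] - starts[i],
});
```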
Personally I feel that Cappuccino is one of the last frameworks that still cared about a kind of interaction design that’s no longer discussed on the web, replaced mostly by more devops/abstraction-oriented discussions.
I maintain a web UI programming language in my free time, and the juxtaposition of folks claiming FP ergonomics benefits, then, upon my request, showing a static, interaction-less end result whose improved version would obviate their pristine architecture, is pretty staggering. The typical defense is "hey, we're not designers", but if you zoom out a bit you realize the whole environment doesn't encourage engineers to care about design concerns anymore (barring a niche but valuable vertical of optimizing for payload size). This in turn puts pressure back onto designers, who come to expect less and less of what they care about on the web.
Just the other day, a newcomer shipped an animated row transition after fighting her framework for 3 weeks. The designer was delighted, but the manager didn't even get the point, because he matured in an era of JS frameworks that de-emphasized acquiring a taste for interactions.
I myself come from a Flash background, so rather than seeing an upward trend, I see a decline in UX concerns, followed by a rise of devops-related concerns in UI frameworks (accompanied by HN comments saying that in both cases the web should have stayed a document format, only to end up shipping an awkward mix of document + app architecture as their desktop apps through Electron anyway).
If I were to categorize these "eras", I'd rather take the perspective of wondering at which point, and why, the framework process ended up more important than the product. Heck, a similar thing is happening on native too, unfortunately. Where did all the interaction designers go?
Maybe AR would nudge more folks to learn and focus on rendering, gestures, transitions, framerate, intent and the rest.
Some upcoming languages like Zig and Jai have features that let you easily switch from array-of-structures to structure-of-arrays as you write the code (https://www.youtube.com/watch?v=zgoqZtu15kI). Some edge cases also require rather unique ways of chopping up data that wouldn't lend themselves well to being shoehorned into one of the canned categories of transforms.
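JS doesn't give you real memory-layout control, so treat this purely as a sketch of the shape of the transform (Zig's `std.MultiArrayList` is one mechanized take on it; the linked talk covers Jai's):

```typescript
// Array-of-structures: natural to write, but a pass that only needs `x`
// still strides over `y` and `alive` in memory.
interface ParticleAoS { x: number; y: number; alive: boolean; }

// Structure-of-arrays: the same data laid out per field, so a pass over
// `x` streams through one contiguous array.
interface ParticlesSoA { x: number[]; y: number[]; alive: boolean[]; }

// The logical loop stays the same; only the layout changes.
function sumXAoS(ps: ParticleAoS[]): number {
  let s = 0;
  for (const p of ps) s += p.x;
  return s;
}

function sumXSoA(ps: ParticlesSoA): number {
  let s = 0;
  for (const x of ps.x) s += x;
  return s;
}
```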
Although constraint declaration is orthogonal to data-oriented development, I'd say that the philosophy of using generic constraint solvers (or other overly generalized CS ideas) goes against the spirit of DOD, which is to "just simply do the work". It is, after all, a reaction against object orientation (and even modern FP), which tended to think too much in the abstract and worry too much about some form of taxonomy, as opposed to coding against plain data.
I help maintain ReScript (https://rescript-lang.org) and we've been rolling without an import statement for years now (it's basically OCaml's module system). The default is to just write `MyModule.doThis` at call sites. Sometimes you do a wildcard open (`open MyModule`) within the right scope. Sometimes you do it at the top level for convenience (e.g. for the stdlib), but folks try to be judicious. And yes, you can alias a module path, e.g. `module Student = School.Class.Student`. Worth noting: the reason fully qualified `MyModule.doThis` can be the default is that we _usually_ dictate filenames to be unique within a project, so there's no verbose-path issue nor refactoring issue (and yes, this works fine... Facebook does it too).
Static-analysis-wise (which is what most of the blog post is about), things are basically the same. The tradeoffs are mostly just in terms of readability. I used to be ambivalent about this myself, but looking at the growing body of TypeScript code on GitHub with mountains of imports automated by VSCode, imo we've landed on a decent enough design space.
Not familiar with Rust, but I really like the ReScript (OCaml) module system.
It just gets out of the way. Verbosity is easy to prevent with `open` statements. The nice thing about those is that you can put them in a local scope where you use them (which is generally recommended).
It's just a lot more flexible and a lot less hassle.
Jai is, imo, doing it right! For custom iteration, one really _just_ wants a little syntax sugar over a regular loop. Most other languages end up constructing some paradigm for iteration that drags in OO, FP, some interface concepts, generics, etc. Never mind that some edge-case async generator iteration can't leverage the `for` syntax; those cases deserve to stand out and be examined anyway. Especially in the case of Jai, where you'd want assurance that the for loop isn't incurring some weird function call, allocation, or other hidden overhead. No sufficiently smart compiler needed.
Compare that to e.g. Swift's iteration, with its corners of undefined behavior, potential allocations, and a collection-type hierarchy that feels more like doing taxonomy than just looping.
Though to be fair, Jai's loop is rather intense in its usage of the macro system's features.
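For contrast, here's a sketch of what generalized iteration sugar nominally costs in e.g. JS: `for...of` goes through an iterator object and a `{ value, done }` result per step. Engines usually optimize this away for plain arrays, but the plain-loop camp's point is that the sugar *can* hide such costs.

```typescript
// Roughly what `for (const x of xs)` desugars to: an iterator object plus
// a { value, done } result per step, per the iterator protocol.
function sumViaProtocol(xs: number[]): number {
  let s = 0;
  const it = xs[Symbol.iterator]();
  for (let r = it.next(); !r.done; r = it.next()) s += r.value;
  return s;
}

// What a Jai-style `for` wants to bottom out in: plain indexing, nothing
// hidden between the syntax and the work.
function sumViaPlainLoop(xs: number[]): number {
  let s = 0;
  for (let i = 0; i < xs.length; i++) s += xs[i];
  return s;
}
```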
...So none of these typical hand-wavy dismissals apply:
1. "Is it really faster for edge cases"
2. "He probably didn't implement certain features like Arabic and Chinese"
3. "Businesses just wants enough"
4. "Businesses just wants enough"
5. "It probably makes some closed-world assumption"
The performance of RefTerm didn't come from some big tradeoff; it came from using the right perspective and keeping things simple.
Sure, past a certain complexity threshold you'd have to work for extra perf, but from my observations, folks who immediately jump to the wrong implicit conclusion that "good perf must have required compromises" got conditioned to think this way because the software they work on is already (often unnecessarily) complex.
I don't think dismissal 1 applies anyway: even if RefTerm didn't implement, say, variable-width fonts, you could just build a terminal that uses RefTerm's fast algorithms for the common case, then falls back to Windows Terminal's slower algorithms for the more general case.
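That hybrid could be as simple as a dispatch on the input; a hypothetical sketch (the names and the "common case" predicate are made up for illustration):

```typescript
interface Renderer {
  draw(text: string): void;
}

// Stand-ins: a RefTerm-style fast path and a fully general fallback.
const fastRenderer: Renderer = { draw: t => console.log("[fast]", t) };
const generalRenderer: Renderer = { draw: t => console.log("[general]", t) };

// Assumption for illustration: "common case" = printable ASCII only.
const isCommonCase = (text: string): boolean => /^[\x20-\x7e]*$/.test(text);

function draw(text: string): void {
  // Fast algorithms for the common case, slower general ones otherwise.
  (isCommonCase(text) ? fastRenderer : generalRenderer).draw(text);
}
```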