> “And Knosso whatever-his-name was a physicist. That’s the Library of Nothing.”
> “Nothing?”
> The old woman had her spiel ready. “Physicists long since determined that most of space was empty, and most of each atom was empty, so that the overwhelming nature of the universe is nothingness, with tiny interruptions that contain all of existence. So their library is named for this Nothing that comprises most of the universe. And the mathematicians share the space, because they are proud to say that what they study is even less real than what the physicists study, so their portion is called the Library of Less than Nothing.”
> Rigg decided he was going to like the physicists. It seemed to him, though, that the mathematicians must have an annoying competitive streak.
Also a good lesson about including screenshots in any PR with UI changes. Once the maintainer saw the changes visually they were much more willing to iterate and get the PR merged.
I love this approach and have used it many times in JavaScript. I often end up adding an additional map in front with the resolved values to check first, because awaiting or then'ing a Promise always means you will wait until the next microtask for the value, instead of getting it immediately. With a framework like React, this means you'll have a flash of missing content even when it is already cached.
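For illustration, here's a rough sketch of what I mean (not code from the post; `fetchJson` and `getCachedValue` are made-up names):

```ts
// Promise cache plus a second map of resolved values, so already-fetched data
// can be read synchronously instead of waiting a microtask for `await`/`then`.
const promiseCache = new Map<string, Promise<unknown>>();
const resolvedCache = new Map<string, unknown>();

function fetchJson(url: string): Promise<unknown> {
  let promise = promiseCache.get(url);
  if (!promise) {
    promise = fetch(url)
      .then(res => res.json())
      .then(value => {
        // Record the resolved value so later reads can skip the promise.
        resolvedCache.set(url, value);
        return value;
      });
    promiseCache.set(url, promise);
  }
  return promise;
}

// Synchronous read: if the data is already cached, a component can render it
// on the first pass with no flash of missing content.
function getCachedValue(url: string): unknown | undefined {
  return resolvedCache.get(url);
}
```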
This surprises me. I had expected the microtask to be executed right after the current task, i.e. before any layout work etc. - isn't that the whole point of the "micro" aspect?
I was able to reproduce this in a simple example [1]. If you refresh it a few times you will be able to see it flash (at least I did in Chrome on Mac). It is probably more noticeable if you set it up as an SPA with a page transition, but I wanted to keep the example simple.
The event loop shouldn't continue its cycle until the microtask queue is empty, from what I remember.
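(For illustration, a quick way to see that ordering: microtasks drain after the current synchronous code but before the next task.)

```ts
console.log('sync start');

Promise.resolve().then(() => console.log('promise microtask'));
queueMicrotask(() => console.log('queueMicrotask'));
setTimeout(() => console.log('setTimeout (next task)'), 0);

console.log('sync end');

// Typical output:
// sync start
// sync end
// promise microtask
// queueMicrotask
// setTimeout (next task)
```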
Browsers probably used to squeeze layout and similar work in when the event loop completed an iteration or became idle. Nowadays, however, it's entirely possible that we have, or will have, browsers that do such work in parallel while the JavaScript event loop is still running, possibly even beginning to paint while JavaScript is still running synchronously (I haven't seen that yet).
I've seen instances where browsers seem to decide "nah, I'll render this part later" and would only render some parts of the page: certain layers (scrolling containers) would not be filled on the first paint, despite everything being changed in one synchronous operation!* And I've seen them do that at least 5 years ago.
It seems to me the situation is complicated.
* There was a loading overlay and a list being filled with data (possibly thousands of elements). The loading overlay would be removed after the list was filled, revealing it. What often happened was that the overlay would disappear, but the list would still look empty for a frame or two.
Thanks for calling out that this was inspired by CodeFlow, I kept thinking that as I was reading. Still one of my favorite code review tools, used ~2012.
One of my favorite features was a panel with every comment on the PR, sorted and organized by status. Because all files in the PR were preloaded, clicking a comment instantly took you to the specific code and version. So good!
If you click the "Comments ->" link in the meta section on the left nav of the review, it will take you to my version of the comments panel.
A couple improvements I made:
* You can see the whole discussion. The last time I used CodeFlow, it only showed one line of the comment, which often wasn't enough for me to remember what it was about, so in practice I always had to click on every single one to see the whole thing. Not a big deal with CodeFlow since it was so snappy, as you mentioned.
* It shows a preview of the code that was commented on to provide context.
Thanks for taking a look! Great to see other CodeFlow fans out there.
Using thin-backend has been one of the most delightful experiences I've had making an SPA with a simple backend. The developer experience with the generated TypeScript types is particularly great, the DB migrations remind me of Prisma, and knowing I have a standard Postgres db under the hood is comforting. It isn't a large or complex app, but Marc has been super responsive to bugs and feature suggestions so it has been fun to iterate on. You can find the code at https://github.com/ianobermiller/colorcal.
This is fantastic, wish I had it before we had our fourth and final child. One piece of feedback: when I use the app with my thumb, the natural position for it to rest is right over the name. Consider moving the name more towards the center or top of the screen and making it possible to swipe anywhere in the top yellow area.
The problem is that TypeScript does not scale to the size of the giant monorepo at Facebook, with hundreds of thousands, if not millions, of files. Since they aren't organized into packages, it is just one giant flat namespace (any JS file can import any other JS file by filename). It is pretty amazing to change a core file and see type errors across the entire codebase in a few seconds. The main way to scale in TypeScript is Project References, which don't work when you haven't separated your code into packages. (I worked at Facebook until June 2021.)
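For anyone unfamiliar, Project References look roughly like this (a hedged sketch; the package names and paths are made up), which is exactly the structure you can't set up without first splitting the code into packages:

```jsonc
// packages/core/tsconfig.json
{
  "compilerOptions": { "composite": true, "outDir": "dist" },
  "include": ["src"]
}

// packages/app/tsconfig.json -- declares its dependency on core
{
  "compilerOptions": { "composite": true, "outDir": "dist" },
  "include": ["src"],
  "references": [{ "path": "../core" }]
}

// tsconfig.json (root) -- `tsc --build` walks this graph and only rechecks
// projects whose inputs changed
{
  "files": [],
  "references": [{ "path": "packages/core" }, { "path": "packages/app" }]
}
```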