Hacker News | cuddlecake's comments

Promises are, technically speaking, "abstracted crap".

So is abstracted crap only ok if it's already in the JS standard library?


Promises proved themselves in libraries before they got added to Javascript.


Signals have, too. And it is literally spelled out in the readme, in the introduction section


They did create a polyfill, but the readme says it was based on design input from other projects, not on aligning multiple existing designs. This at least sounds like the opposite of what happened with promises, where they already existed in multiple libraries before the Promises/A design came out.


> where they already existed in multiple libraries

Signals exist in multiple libraries with what are really minor variations on the theme.
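For readers who haven't used one, the shared core of those libraries can be sketched in a few lines (a toy illustration with made-up names, not any particular library's API or the proposal's):

```typescript
// Toy signal/computed sketch: a writable cell plus a derived value that
// re-reads its sources on demand. Real libraries add dependency tracking,
// effects, and scheduling; the names here are purely illustrative.
function signal<T>(initial: T) {
  let value = initial;
  return {
    get: () => value,
    set: (next: T) => { value = next; },
  };
}

function computed<T>(fn: () => T) {
  // Pull-based: recomputes on every read (real implementations memoize).
  return { get: fn };
}
```

The "minor variations" between libraries are mostly about when and how the derived values recompute, not about this basic shape.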

> before the Promises/A design came out.

That's why the current proposal asks for input about design.

IIRC, promises also had multiple iterations on the design. There were calls to make them more monadic, less monadic, cancelable, non-cancelable etc.

And the original proposal looks nothing like the eventual API: https://groups.google.com/g/commonjs/c/6T9z75fohDk [1]

And new things are still being added to them (like Promise.withResolvers etc.)
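For context, `Promise.withResolvers` just exposes the resolve/reject functions alongside the promise; a hand-rolled equivalent (the kind of thing libraries shipped before it was standardized) looks like this:

```typescript
// Hand-rolled equivalent of Promise.withResolvers: create a promise and
// expose its resolve/reject functions alongside it.
function withResolvers<T>() {
  let resolve!: (value: T | PromiseLike<T>) => void;
  let reject!: (reason?: unknown) => void;
  const promise = new Promise<T>((res, rej) => {
    resolve = res;
    reject = rej;
  });
  return { promise, resolve, reject };
}
```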

[1] There's a great long presentation on the history of promises here: https://samsaccone.com/posts/history-of-promises.html


I hate promises. Writing:

    let result; let error;
    await blah
        .then(r => result = r)
        .catch(e => error = e)
        .finally(() => callback(error, result));

Is disgusting.
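For what it's worth, the shape most people reach for instead is try/catch/finally around await; a sketch, with `work` and `callback` standing in for the promise and callback above:

```typescript
// try/catch/finally around await, instead of chaining .then/.catch/.finally.
// `work` and `callback` are stand-ins for the names in the comment above.
async function run(
  work: Promise<string>,
  callback: (err: unknown, result?: string) => void,
) {
  let result: string | undefined;
  let error: unknown;
  try {
    result = await work;
  } catch (e) {
    error = e;
  } finally {
    // Note: writing .finally(callback(err, result)) would invoke the
    // callback immediately; here it runs after the promise settles.
    callback(error, result);
  }
}
```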


How _do_ you get a 3px gap though?


As a frontend dev, what are good resources to learn about principles and heuristics that I can apply in my work?

I have some knowledge, albeit very primitive, that I picked up from an HCI introduction and some good designers, but nothing too technical / formal yet.


I'm not a UI/UX designer, and am equally frustrated by the failures described in the article and other comments.

One item I can bring to the table from the world of designing cockpits for racecars and airplanes is that the primary principle to follow is:

Reduce Driver/Pilot Workload.

Does whatever you are doing around the workspace increase or decrease the work that the driver or pilot must do? Can the status of [thing] be discovered (read/heard/felt) with minimum time and effort? Does [thing] create an otherwise unnecessary need to take even a quick glance at something? Is something in the way of taking an action, requiring an extra motion?

Of course, there are some things for which it IS desirable to add an extra bit of work, e.g., a switch that kills critical [thing] maybe should have a cover over it, requiring you to flip up the cover and then hit the switch; an extra moment of thought.

This one principle translates well to software interfaces. Screw whether it "looks clean" or not — does it reduce or increase the user's workload? Even a tiny change can make a huge difference, because many actions are repeated.

I hope this helps


Also, frequently needed actions should be doable mindlessly. With hotkeys, for example. The example from the world of physical UIs would be that one Tesla car that does everything — including shifting gears and adjusting AC — through a huge touchscreen. You need to actually take your eyes off the road and dashboard to operate it because you can't do it without receiving visual feedback. If it had a physical lever for gears and knobs for the AC, you would've been able to control these things using just your muscle memory.


A few key items to consider in reducing workload:

Reduce the thinking load

Implement things that actually reduce the need for the driver/pilot/user to think, allowing just trained reactions.

Be careful with this sort of "automation": it has to be absolutely reliable, i.e., the unthinking response needs to be correct every time, and the feature must not lead the driver/pilot/user into errors. Occasional errors will force the driver/pilot/user to think MORE, e.g., "wait, will this lead me to an error this time?!?", making everything actually slower. OTOH, if you can truly reliably automate some part of the driver/pilot/user workload, that frees up mental space for every other part of the workload.

Speed Is Life

The interface MUST be faster than the fastest driver/pilot/user. ANY perceptible delay causes multiples of that delay in the driver/pilot/user's response input, or creates a bad feedback loop (start correcting, response data is delayed, leading to overcorrection, then a bigger input the other direction, which is also delayed, then a bigger over-correction the other way... crash). In software UI/UX there may not be a risk of a physical crash, but the effects of even microscopic delay are insanely corrosive to productivity; think 1-2 orders of magnitude.


> the primary principle to follow

Yes. I spend a lot of effort with people at all levels trying to get them to agree on what, fundamentally, the system should achieve for the user. Sometimes it's speed, sometimes entertainment, trust or some other outcome. But that principle needs to be expressed and agreed by those executing the design, if only to prioritise features. I'm always astonished that they seldom want to talk about that, as if it's mindlessly theoretical or something.

> a switch that kills critical [thing] maybe should have a cover over it requiring to flip up the cover then hit the switch; an extra moment of thought.

As an aside, in the digital world a better pattern is usually to have an undo (probably executed as a delayed trigger allowing you to undo it for a time). This allows you to be confident in using the system in the knowledge that if you do something you don't intend, you can just take it back. Not a thing for cars or planes though, I admit.
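The "delayed trigger" variant can be sketched in a few lines (illustrative names, assuming a timer-based runtime like the browser or Node):

```typescript
// "Undo via delayed trigger": schedule the destructive action, and let
// "undo" cancel it before the window closes. Names are illustrative.
function deferredAction(action: () => void, windowMs: number) {
  const timer = setTimeout(action, windowMs);
  return {
    undo: () => clearTimeout(timer), // take it back within the window
  };
}
```

In practice this sits behind UI like a "Deleted. Undo?" toast, e.g. `deferredAction(() => reallyDeleteMessage(id), 5000)` (where `reallyDeleteMessage` is hypothetical).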


> I'm not a UI/UX designer,

> One item I can bring to the table from the world of designing cockpits for racecars and airplanes

Sounds like you are most definitely a UI/UX designer. Those are interfaces. Those are experiences.


I wish somebody would take that approach with Visual Studio Code, which is -- I think -- a particularly egregious offender. I would dearly love it if somebody would take a 21st-century approach to time-and-motion studies for programming in Visual Studio Code. I think Visual Studio actually did something along the lines of time-and-motion studies at some point in its long history. But I'm increasingly thinking that Visual Studio Code is having a 3 or 4x impact on my productivity.

Things seem to start out well with blank projects; but eventually, somewhere around the mid-scale 30k+ line point, all the advantages of all those incredibly great features in VS Code seem to be offset by scaling issues.

There was a point recently at which I seriously thought I was losing my programming edge. But a recent adventure with a non-VSCode project, and a recent discovery with respect to the VSCode debugger is making me wonder whether it's my tools that have lost their edge.

There are a bunch of things going on in that user interface of Visual Studio code that are completely tanking my productivity.

A telling example: when using the Visual Studio Code GDB debugger, Visual Studio Code fetches the local variables of ALL stack frames on ALL threads every time you single-step. This happens even when the thread views are collapsed. As a consequence, it takes between 10 and 30 seconds (depending on how thread-heavy your app is) to single-step once. I was pushed over the edge when I dropped into a new project and Visual Studio Code was taking three minutes to single-step. (Not a memory issue, not a disk issue, not an inadequate machine issue.)

So I did some research. This has been an issue in Visual Studio Code since 2016! The solution (according to a message posted in 2016 somewhere on the internet): switch to the CodeLLDB debugger extension.

The result: on the 3-minute code base, single steps take distinctly less than a third of a second.

It's like the frog in a boiling pot of water: you don't realize the toxic effect that kind of latency has on your debugging productivity until it instantly goes away. 3-minute single-stepping is obviously impossible to work with; but the cumulative effect of taking 10 seconds to single-step is definitely large. I can debug things orders of magnitude faster than I could before. I don't lose my train of thought. I can set breakpoints fearlessly, and single-step through dozens of lines of code -- all things I couldn't do with the default Visual Studio Code debugger.

The only peculiar side-effect of CodeLLDB: the debugger expression evaluator uses Rust expression syntax even for C++ code. Which isn't actually terrible. It's just very strange.
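For anyone making the same switch, a minimal CodeLLDB launch configuration looks roughly like this (the program path is a placeholder for your own build output):

```json
{
  "version": "0.2.0",
  "configurations": [
    {
      "name": "Debug with CodeLLDB",
      "type": "lldb",
      "request": "launch",
      "program": "${workspaceFolder}/build/myapp",
      "args": [],
      "cwd": "${workspaceFolder}"
    }
  ]
}
```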

But there are other things as well: the latency on Intellisense updates, where you have to wait 10 or 20 or 30 seconds for the error squigglies to update. Every edit becomes: type a few characters, wait for 30 seconds to see if Intellisense likes it, type a few characters.... And WHATEVER you do, don't unbalance parens or curlies, which will pin all available CPUs for minutes at a time! (Huge recent productivity improvement -- set the number of Intellisense threads to the number of CPUs minus 1!)

In the old days, on much less capable hardware, a compile would fail in 3 or 4 seconds; so you could just type a few characters, press F7, wait 4(!) seconds, and see if the squiggly went away. Although typically, you would fix a few things, wait 4(!) seconds, and see if the squigglies went away. (More-or-less instantaneous checks were run in the editor to check for paren/brace balancing.)

Instead, VSCode accumulates rat's nests of red squigglies that won't go away until background Intellisense completes (which could be 90 seconds or more), or until a compile completes, which runs the constant risk of facing a list of "12,000+"(!) errors in the error window, because you have an unbalanced paren or brace, which can take 10 or 15 minutes to straighten out. And the three or four g++ errors that are legitimate get buried in thousands of Intellisense errors that won't go away (that's a recent regression; you used to be able to temporarily filter out Intellisense errors).

I'm increasingly thinking that Intellisense for C++ is dramatically tanking my productivity as well! I mean it does actually work on toolable languages, like C#+Visual Studio, or Java+Android Studio, where the tooling update time is sub-10-seconds. But for C++ it seems to be a disaster. (Wondering whether it actually does work, though).

Build turnaround: there is NO obvious point in the Visual Studio Code UI to indicate that a build has completed -- just a tiny status message buried in the status bar. For some reason, my build output window always seems to come unglued from auto-scroll. And VSCode auto-tabs away from the Errors window to the Build window without restoring auto-scroll. So you hit the F7 key, your eyes glaze over while a toxically overlong build runs for a minute past the point where the error occurred, and you snap to 3 minutes later, realizing that the build finished with errors ages ago.

Mouse-keyboard context changes. Switching your left hand from mouse to keyboard and back is an expensive operation relatively speaking. By my best estimate, I spend about 40% of my editing time switching between mouse and cursor keys. Whereas... on Visual Studio, major editing operations can be performed entirely from the keyboard. (ctrl+space, cursor cursor, carriage-return is burned into my muscle memory for some reason, even though it's been years since I used Visual Studio -- or is that Android Studio?). There's something seriously wrong there too in the arrangement of short-cut keys, mouse and cursor movements.

Ctrl+Left/Right Arrow! There's a simple thing that is bizarrely broken, and it has a dramatic effect on my productivity. It does something seriously wrong, I'm not sure what, but it's not productive! It consistently takes you to the wrong edge of identifiers and operators, so you end up pounding on unmodified left or right arrow to get to the right place. I'm DEAD certain Visual Studio does it differently; and it's not a problem I've noticed at all in Android Studio.

I'm convinced that a little time spent on time-and-motion studies could increase the productivity of average developers by dramatically large amounts. Something on the order of 2, 3, or 4 times more productive. Most especially for Visual Studio Code, which -- for reasons I still don't completely understand -- seems to be extra-double-plus toxic for productivity.


The Design of Everyday Things, by Don Norman is a good place to start.

https://www.amazon.com/Design-Everyday-Things-Revised-Expand...


It's more about trying to work out what motivates people to do things, while using some reasonably simple tools like the NN/g web heuristics to guide you. It's more a way of thinking straight about what qualifies as a good design under a given condition, and what hypotheses to research and test from that. So it's contextual. It may seem like there should be a more logical formula, but there's really not a lot of technical/formal machinery (along the lines of, say, accounting or structural engineering) that leads you to the best approach. My frustration is not that people don't necessarily know about these things (although I bet if I mentioned Bruce Tog to anyone at work they'd give me a blank stare), but that they can't marshal them in their work.


I think it's funny you don't see a connection between "Most designers I work with don't have the vocabulary or knowledge about what makes something "usable" in the wider sense" and your inability to answer the question "how do I improve the usability of my work (in the wider sense)." Maybe that's a place to start.


What do you do, when you straighten out your spine? I mean in terms of movement, or do you just change your posture?


Stuart McGill recommends periodically standing and stretching your arms over your head a couple times. You can see the motion demonstrated here: https://m.youtube.com/watch?v=bcbuhePZZj0&t=459s

He also recommended doing the cat-cow from yoga, and there's a set of exercises referred to as the McGill big 3.

I've seen some positive references to the McKenzie method book, Treat Your Own Back. I haven't tried it specifically, but had good luck with their equivalent book for neck treatment.


Basic yoga moves are fantastic for this.

Sun salutations with downward and upward dog. They also strengthen your core.

There are more moves in the Ashtanga sequence, whenever I do it I get a feeling of my discs being stretched and exercised like a rubber ball.


One of the best back stretches that I've found is hanging from a pull-up bar (wall-mounted, removable door frame bar, power rack).

The recent version of Stretching[0] also has computer-specific stretches in it. I was recommended this book by a physical therapist and it's great.

[0]https://www.goodreads.com/en/book/show/561546


Not the original commenter, but I've started doing basic piriformis stretches. Even just the first one on this list has noticeably helped my back/sciatica pain.

https://health.clevelandclinic.org/piriformis-syndrome-stret...

I've added it as a goal in the Finch app, so I get a little dopamine reward when I do it.

Plus owning a dog helps too, getting regular exercise even if it's just walking.


> In no time we'll see SO spammed with questions using `computed(async () => ...` or race conditions when using `effect`.

As opposed to the very few questions about rxjs, where people understand everything perfectly.


Wouldn't just `.` do the same as `d;` in that case?


Not always: the ";" sets the repetition-count of the operation to 1.

You can type "d2;" however.


Good to know, thank you


As most people will - die without legacy and respect, because they are not a corporate overlord.

I guess.


Hi, congrats on the launch.

On my screen, the website is scrollable. Not sure if the animation needs to have top: 30vh and height: 80%

Also: there is a big layout shift right after loading.


Do you have an example for what you want to do, with an object before and after mapping? I don't understand what you mean by "functionally map over a typed object by key"


I think I have run into a similar issue. I wrote a lexer in Typescript. It is table-based, as is the parser that runs after it. The type for the table looks something like this:

    type TokenTable<T> = {
        plus: T,
        minus: T,
        bang: T,
        parenOpen: T,
        //etc.
    };
I also have a type defined as 'type TokenID = keyof TokenTable<unknown>;'. This makes it possible to check if a string is a valid key at compile time. The innermost loop of the lexer is a for..in loop, which gives you the keys of the object. One problem: if you try to apply the TokenID type to the loop variable, you get this message: "The left-hand side of a 'for...in' statement cannot use a type annotation." Because of the design of JavaScript, TS cannot give object keys any type other than 'string', even though this type seems like a clear match.

To get the typechecking back on the keys, you either need to declare the loop variable outside of the loop itself, or use type casting like this:

    let token = "";
    let id: TokenID | undefined;
    for (let key in patterns) {
        const match = patterns[key as TokenID].exec(substring);
        if (match && (match[0].length > token.length || key == "EOF")) {
            token = match[0];
            id = key as TokenID;
        }
    }
Neither is particularly clean.


You can approximate something by defining the valid TokenTable keys as an enum, and using a mapped type for the actual TokenTable type.

There's some boilerplate in the definition, but it's fairly clean and non-repetitive. And easy to use in the "client code".

https://www.typescriptlang.org/play?#code/KYOwrgtgBAKg9ga1AS...


You can also write the array first, without using enum:

    const tokens = ['parenOpen', 'bang', 'plus', 'minus'] as const;
    type Token = typeof tokens[number];
    type TokenTable<T> = Record<Token, T>; // alias for { [key in Token]: T }
    // widening cast so includes() accepts any string
    const isToken = (t: string): t is Token =>
        (tokens as readonly string[]).includes(t);
    const patterns: TokenTable<RegExp> = { bang: /!/, /* ...rest */ };
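Tying this back to the lexer loop upthread, the type guard lets for..in work without casts. A self-contained sketch (definitions repeated so it stands alone, with a hypothetical `lookup` to show the narrowing):

```typescript
const tokens = ['parenOpen', 'bang', 'plus', 'minus'] as const;
type Token = typeof tokens[number];
type TokenTable<T> = Record<Token, T>;

// Widening cast so includes() accepts any string.
const isToken = (t: string): t is Token =>
  (tokens as readonly string[]).includes(t);

const patterns: TokenTable<RegExp> = {
  parenOpen: /\(/, bang: /!/, plus: /\+/, minus: /-/,
};

// Hypothetical helper: find the first token whose pattern matches.
function lookup(substring: string): Token | undefined {
  for (const key in patterns) {
    // `key` is just `string` here; the guard narrows it to Token.
    if (isToken(key) && patterns[key].test(substring)) return key;
  }
  return undefined;
}
```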


You put some real effort into the example. It's the opposite approach of how I did it, yet works just as well. Thanks!


Can you define a const array with type Array<TokenId> and use it every time you want to loop through these keys?


That's a possibility, yes. This is the only time in the program that a token table is iterated through, however. Most of the time a table is consulted to pursue an action in the parser. For example, there's a function table for when a statement is encountered, another for when an expression operand is encountered, etc. Each entry in the table is either an error message or code which completes the parsing of that statement. The awkwardness above is excusable when it is encountered so little. When writing expression-heavy stuff like 'Object.keys(typedObjName).map(...)' it's more of a problem.
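For the `Object.keys(typedObjName).map(...)` case, a common workaround is a small typed wrapper (the cast is an assumption that the runtime object carries exactly the keys its static type declares):

```typescript
// Typed Object.keys: safe only when the object has no extra runtime keys.
function keysOf<T extends object>(obj: T): Array<keyof T> {
  return Object.keys(obj) as Array<keyof T>;
}

const table = { plus: 1, minus: 2, bang: 3 };
const keys = keysOf(table);        // Array<"plus" | "minus" | "bang">
const doubled = keys.map((k) => table[k] * 2);
```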


I can't really think of like, a practical use case but here's a jsFiddle of what I mean.

https://jsfiddle.net/knvztxq7/ ^ that code will work, it outputs the key names to the console

https://jsfiddle.net/knvztxq7/1/ ^ that code... is also working even though I swear it hasn't worked for me before so now I think I'm actually just losing my mind.

Yeah wow I can't reproduce my error now... I'll come back here if I figure out what I did.


For one, it's a wrapper in React, so no need to implement the glue code yourself if you want to integrate PDFs in your web app.

Also, it appears to be feature-rich and yet easy to use for simple use cases.

