
They're going to add it once it stabilizes in Node: https://github.com/denoland/deno/issues/24828#issuecomment-2...


Preact is definitely a good choice if you're looking for something lightweight. React-dom was already relatively hefty, and seems to have gotten even larger in version 19. Upgrading the React TypeScript Vite starter template from 18 to 19 increases the bundle size from 144kB to 186kB on my machine [1][2]. They've also packaged it in a way that's hard to analyze with sites like bundlephobia.com and pkg-size.dev.

[1] https://github.com/facebook/react/issues/27824

[2] https://github.com/facebook/react/issues/29913


> Enums is going to make your TypeScript code not work in a future where TypeScript code can be run with Node.js

Apparently they're planning on adding a tsconfig option to disallow these Node-incompatible features as well [1].

Using this limited subset of TS also allows your code to compile with Bloomberg's ts-blank-space, which literally just replaces type declarations with whitespace [2].
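To illustrate the whitespace trick (a hypothetical snippet, not taken from the ts-blank-space docs):

```javascript
// Input:
//   function add(a: number, b: number): number { return a + b }
//
// Output: the same code with each type annotation replaced by an equal
// run of spaces, so line and column numbers (and therefore stack traces
// and source positions) are identical to the original file:
function add(a        , b        )         { return a + b }

console.log(add(2, 3)) // 5
```

Because nothing shifts, no source maps are needed at all.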

[1] https://github.com/microsoft/TypeScript/issues/59601

[2] https://bloomberg.github.io/ts-blank-space/


Those flags have already started to show up in today's TypeScript: verbatimModuleSyntax [1] and isolatedModules [2], for instance.

[1] https://www.typescriptlang.org/tsconfig/#verbatimModuleSynta...

[2] https://www.typescriptlang.org/tsconfig/#isolatedModules


Those definitely help, but the proposed erasableSyntaxOnly flag would disallow all features that can't be erased. So it would prevent you from using features like parameter properties, enums, namespaces, and experimental decorators.

It would essentially help you produce TypeScript that's compatible with the --experimental-strip-types flag (and ts-blank-space), rather than the --experimental-transform-types flag, which is nice because (as someone else in this thread pointed out), Node 23 enables the --experimental-strip-types flag by default: https://nodejs.org/en/blog/release/v23.6.0#unflagging---expe...
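For a rough illustration (hypothetical snippets, following the semantics described in the proposal):

```typescript
// Erasable: deleting the annotations leaves valid JavaScript behind.
const port: number = 8080
type Formatter = (input: string) => string

// Not erasable: each of these generates runtime code, which is why
// type-stripping (and the proposed flag) can't handle them.
enum Level { Info, Warn }               // compiles to a runtime object
namespace Util { export const x = 1 }   // compiles to an IIFE
class Point {
  constructor(public x: number) {}      // emits `this.x = x` at runtime
}
```

The first group can be blanked out mechanically; the second group requires an actual transform step.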


Also worth noting that ESLint rules matching what `erasableSyntaxOnly` might enforce are already well established, for those looking to do it today.
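For example (a sketch, assuming typescript-eslint is installed; these rule names come from its current rule set):

```javascript
// .eslintrc.cjs — approximating the erasable-only subset today
module.exports = {
  rules: {
    // Namespaces and parameter properties emit runtime code.
    '@typescript-eslint/no-namespace': 'error',
    '@typescript-eslint/parameter-properties': 'error',
    // There's no dedicated "no-enum" rule; a syntax selector works.
    'no-restricted-syntax': [
      'error',
      { selector: 'TSEnumDeclaration', message: 'Enums are not erasable syntax.' },
    ],
  },
}
```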


Agreed.

Frontend frameworks often do spend a lot of time thinking about the accessibility concerns associated with client side routing, so it's not absurd to consider this question in scope for a frontend library that handles DOM updates.

See for instance this 2019 study by Gatsby: https://www.gatsbyjs.com/blog/2019-07-11-user-testing-access...

Or even the modern Next docs on route announcements: https://nextjs.org/docs/architecture/accessibility#route-ann...

Some of this will have to be bespoke work done on a per-site basis, but I'm not sure I'm comfortable with the idea of completely punting this responsibility to developers using htmx, even if it does make philosophical sense to say "this is scope creep", because ultimately users with disabilities will end up being the ones whose experience on the web is sacrificed on this altar of ideological purity.


> They don't work offline

This isn't true. Offline functionality is the raison d'être for Service Workers. You can run an entire HTTP request router on the client to respond to requests while offline: https://hono.dev/docs/getting-started/service-worker
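The core idea is small enough to sketch without any framework (a hypothetical, hand-rolled router; in a real Service Worker you'd wire `handle` up to the `fetch` event as shown in the trailing comment):

```javascript
// A Service Worker router boils down to a function from Request to Response.
const routes = new Map([
  ['/', () => new Response('<h1>Home</h1>', { headers: { 'content-type': 'text/html' } })],
  ['/api/ping', () => Response.json({ ok: true })],
])

async function handle(request) {
  const { pathname } = new URL(request.url)
  const route = routes.get(pathname)
  return route ? route() : new Response('Not Found', { status: 404 })
}

// In the worker itself:
//   self.addEventListener('fetch', e => e.respondWith(handle(e.request)))
```

Once the worker is installed, those responses are served even with no network at all.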


Are you guys okay? Don't get me wrong, it's clever, but it's also insane.

If I pitched the idea of having SMB shares work offline by shipping a driver that could intercept low-level SMB calls and reroute them to a mock SMB server that holds the cache, they would have assumed I'd lost it.

Surely the browser could help you a bit more to implement offline sites in a more integrated fashion.


It's ultimately just a little event listener function that accepts a Request object and returns a Response object. I bundled the service worker by running a quick `npx esbuild --minify --bundle --outfile=sw.js sw.ts` command, and it produced an 18.6kb JS file in 10 milliseconds. That's not even half the size of libraries like HTMX, Alpine, and jQuery.

You can of course use the CacheStorage API directly as well (you're not obligated to use a mock server): https://developer.mozilla.org/en-US/docs/Web/API/CacheStorag...

I've certainly seen crazier things though. People routinely include entire copies of Ubuntu LTS in their Docker images to ship tiny HTTP servers.


> I'm not sure that's even that correct. If you write code using browser APIs that are currently standardised, your code will work indefinitely, whether or not you use Web Components, React, or jQuery.

The most charitable interpretation of this argument is that framework specific component libraries assume for the most part that you're using that specific framework. The documentation for popular React component libraries like shadcn are largely incomprehensible if you're not using React. Libraries like Shoelace (now being renamed to Web Awesome) make no such assumptions. You can just drop a script tag into your page's markup and get started without having to care about (or even be aware of) the fact that Shoelace uses Lit internally.

> But without Javascript, Web Components are essentially empty holes that can't do anything. They don't progressively enhance anything.

This is not true if you're using the new Declarative Shadow DOM API [1]. You literally just add a template tag with a shadowroot mode attribute inside your custom element, and then the component works without JavaScript. When (or if) the JavaScript loads, you simply check for the existence of a server-rendered shadow root using `internals.shadowRoot`. If a shadow root already exists then you don't have to replace anything, and you can attach your event listeners to the pre-existing shadow root (i.e. component hydration).
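A minimal sketch of that pattern (hypothetical element and markup; browser-only code in the style of the linked article):

```javascript
// Server-rendered HTML — the button paints with zero JavaScript:
//
//   <my-counter>
//     <template shadowrootmode="open">
//       <button>Count: 0</button>
//     </template>
//   </my-counter>
//
// Hydration: reuse the parsed shadow root if the server provided one.
class MyCounter extends HTMLElement {
  constructor() {
    super()
    const internals = this.attachInternals()
    // internals.shadowRoot is non-null when the browser already built
    // a declarative shadow root while parsing the HTML above.
    const shadow = internals.shadowRoot ?? this.attachShadow({ mode: 'open' })
    if (!internals.shadowRoot) shadow.innerHTML = '<button>Count: 0</button>'
    let count = 0
    shadow.querySelector('button').addEventListener('click', e => {
      e.target.textContent = `Count: ${++count}`
    })
  }
}
customElements.define('my-counter', MyCounter)
```

If the script never loads, the server-rendered markup still displays; if it does, the existing DOM is upgraded in place rather than replaced.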

[1] https://web.dev/articles/declarative-shadow-dom#component_hy...


At this point, I think it's better to point to the MDN docs.

https://developer.mozilla.org/en-US/docs/Web/HTML/Element/te...

Your link uses some deprecated functionality that was only ever implemented in Chrome.

Anyway, it's good to know that browsers actually did implement that. It was a major complaint when web components were created.


The only reason I avoid linking to that MDN page is because I think it does a bad job of actually assembling all of the pieces together into a working example. That's why I linked specifically to the component hydration section, which assembles it all into a functioning component complete with an actual class and constructor. The code example in that specific section doesn't appear to use any deprecated or non-standard features. Otherwise, I normally do prefer MDN as an authoritative source of documentation.


The MDN page has the deprecation notice for the `shadowRoot` property.

Yes, the examples on your page are way more comprehensive, even including compatibility support for old Chrome versions. Unfortunately, that support example will also break your code on other browsers.

People may want to read it anyway, and just fix the problem. It being outdated doesn't make the explanation bad.


> The MDN page has the deprecation notice for the `shadowRoot` property.

The `shadowRoot` property of the ElementInternals object is not deprecated: https://developer.mozilla.org/en-US/docs/Web/API/ElementInte...

What's deprecated is the `shadowroot` attribute on the template element, which the example I linked to does not use. The second line of code in that example is `<template shadowrootmode="open">`.

All of this is mentioned at the very top of the article where it says: "Note that the specification for this feature changed in 2023 (including a rename of shadowroot to shadowrootmode), and the most up to date standardized versions of all parts of the feature landed in Chrome version 124."

> Unfortunately, that support example will also break your code on other browsers.

No, it will not.


> You literally just add a template tag with a shadowroot mode attribute inside your custom element, and then the component works without JavaScript.

Wtf is a shadowroot?

I'm increasingly confident that the entire project to unify document display and application runtime has utterly failed, and there's no way (and no benefit) to resuscitate it. We need two different internets: one for interactive applications and one for document browsing.


> Wtf is a shadowroot?

In the Document Object Model (DOM), every document has a root node, which you can retrieve by calling `getRootNode()` on any node in the document (e.g. `document.getRootNode()` or `document.body.getRootNode()`).

Custom Elements can have their own "shadow" document that's semi-isolated from the parent document, and the root node of that shadow document is called the shadow root.

The idea is to be able to create your own HTML elements (which also have their own hidden DOM). If you enable a setting in Chrome's Devtools (Show user agent shadow DOM) [1], you can actually see the hidden shadow structure of built in HTML Elements: https://www.youtube.com/watch?v=Vzj3jSUbMtI&t=291s

[1] https://developer.chrome.com/docs/devtools/settings/preferen...
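In code, the distinction looks like this (browser-only sketch; assumes a page with no other `<p>` elements):

```javascript
const host = document.createElement('div')
document.body.append(host)

// Attach a shadow document to the host element.
const shadow = host.attachShadow({ mode: 'open' })
shadow.innerHTML = '<p>Hello from the shadow DOM</p>'

// The paragraph's root is the shadow root, not the main document...
shadow.querySelector('p').getRootNode() === shadow // true
// ...while the host's root is still the document itself.
host.getRootNode() === document // true

// querySelector from the outer document doesn't reach inside:
document.querySelector('p') // null (assuming no other <p> on the page)
```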


Promisify converts a callback based function into a promise returning function [1]. If the function has a `promisify.custom` method, `promisify` will simply return the `promisify.custom` method instead of wrapping the original function. Calling `promisify` on `setTimeout` in Node is redundant because Node already ships a built in promisified version of `setTimeout`. So the following is true:

  setTimeout[promisify.custom] === require('node:timers/promises').setTimeout
You could of course manually wrap `setTimeout` yourself as well:

  const sleep = n => new Promise(resolve => setTimeout(resolve, n))

[1] https://nodejs.org/docs/latest-v22.x/api/util.html#utilpromi...


TypeScript has the equivalent of what you're describing via the `Parameters` and `ReturnType` utility types [1][2], and I've found these types indispensable. So you can do the following:

  type R = ReturnType<typeof someFunction>
  type P = Parameters<typeof someFunction>
[1] https://www.typescriptlang.org/docs/handbook/utility-types.h...

[2] https://www.typescriptlang.org/docs/handbook/utility-types.h...


Yeah, now that you mention it, I remember using it a lot when I worked more in that language.


> In C#, awaiting for things which never complete is not that bad, the standard library has Task.WhenAny() method for that.

It's not that bad in JS either. JS has both Promise.any and Promise.race that can trivially set a timeout to prevent a function from waiting infinitely for a non-resolving promise. And as someone pointed out in the Lobsters thread, runtimes that rely on multi-threading for concurrency are also often prone to deadlocks and infinite loops [1].

  import { setTimeout } from 'node:timers/promises'
  
  const neverResolves = new Promise(() => {})
  
  await Promise.any([neverResolves, setTimeout(0)])
  await Promise.race([neverResolves, setTimeout(0)])
  
  console.trace()

[1] https://lobste.rs/s/hlz4kt/threads_beat_async_await#c_cf4wa1


> Promise.race

Ding! You now have a memory leak! Collect your $200 and advance two steps.

Promise.race will waste memory until _all_ of its promises are resolved. So if a promise never gets resolved, it will stick around forever.

It's braindead, but it's the spec: https://github.com/nodejs/node/issues/17469


This doesn't even really appear to be a flaw in the Promise.race implementation [1], but rather a natural result of the fact that native promises don't have any notion of manual unsubscription. Every time you call the then method on a promise and pass in a callback, the JS engine appends the callback to the list of "reactions" [2]. This isn't too dissimilar to registering a ton of event listeners and never calling `removeEventListener`. Unfortunately, unlike events, promises don't have any manual unsubscription primitive (e.g. a hypothetical `removePromiseListener`), and instead rely on automatic unsubscription when the underlying promise resolves or rejects. You can of course polyfill this missing behavior if you're in the habit of consistently waiting on infinitely non-settling promises, but I would definitely like to see TC39 standardize this [3].
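In the meantime the workaround can be sketched in userland (a toy version of that library's approach; all names here are made up): attach `then` to the long-lived promise exactly once, and fan results out to subscribers you control and can remove.

```javascript
// Subscribe to the underlying promise once; each race() adds and removes
// entries in our own Set instead of growing the engine's internal
// reactions list on the long-lived promise.
function subscribable(promise) {
  const subscribers = new Set()
  promise.then(
    value => { for (const s of subscribers) s.resolve(value) },
    err => { for (const s of subscribers) s.reject(err) },
  )
  return {
    race(other) {
      return new Promise((resolve, reject) => {
        const sub = { resolve, reject }
        subscribers.add(sub)
        Promise.resolve(other).then(
          value => { subscribers.delete(sub); resolve(value) }, // unsubscribe on settle
          err => { subscribers.delete(sub); reject(err) },
        )
      })
    },
  }
}
```

Racing a never-settling promise this way leaves behind only a Set entry that is deleted when the other promise settles, rather than a permanent reaction on the native promise.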

[1] https://issues.chromium.org/issues/42213031#comment5

[2] https://github.com/nodejs/node/issues/17469#issuecomment-349...

[3] https://github.com/cefn/watchable/tree/main/packages/unpromi...


This isn't actually about removing the promise (completion) listener, but the fact that promises are not cancelable in JS.

Promises in JS always run to completion, whether there's a listener or not registered for it. The event loop will always make any existing promise progress as long as it can. Note that "existing" here does not mean it has a listener, nor even whether you're holding a reference to it.

You can create a promise, store its reference somewhere (not await/then-ing it), and it will still progress on its own. You can await/then it later and you might get its result instantly if it had already progressed on its own to completion. Or even not await/then it at all -- it will still progress to completion. You can even not store it anywhere -- it will still run to completion!

Note that this means that promises will be held until completion even if userspace code does not have any reference to it. The event loop is the actual owner of the promise -- it just hands a reference to its completion handle to userspace. User code never "owns" a promise.

This is in contrast to e.g. Rust futures, which do not run to completion unless someone is actively polling them.

In Rust, if you `select!` on a bunch of futures (similar to JS's `Promise.race`), as soon as any of them completes the rest stop being polled and are dropped (similar to running a destructor) and thus cancelled. JS can't do this because (1) promises are not poll based and (2) it has no destructors, so there would be no way for you to specify how cancellation-on-drop happens.

Note that this is a design choice. A tradeoff. Cancellation introduces a bunch of problems with promise cancellation safety even under a GC'd language (think e.g. race conditions and inconsistent internal state/IO).

You can kinda sorta simulate cancellation in JS by manually introducing some `isCancelled` variable but you still cannot act on it except if you manually check its value between yield (i.e. await) points. But this is just fake cancellation -- you're still running the promise to completion (you're just manually completing early). It's also cumbersome because it forces you to check the cancellation flag between each and every yield point, and you cannot even cancel the inner promises (so the inner promises will still run to completion until it reaches your code) unless you somehow also ensure all inner promises are cancelable and create some infra to cancel them when your outer promise is cancelled (and ensure all inner promises do this recursively until then inner-est promise).

There are also cancellation tokens for some promise-enabled APIs (e.g. `AbortController` in `fetch`'s `signal`) but even those are just a special case of the above -- their promise will just reject early with an `AbortError` but will still run to (rejected) completion.

This has some huge implications. E.g. if you do this in JS...

  Promise.race([
    deletePost(),
    timeout(3000),
  ]);
...`deletePost` can still (invisibly) succeed in 4000 msecs. You have to manually make sure to cancel `deletePost` if `timeout` completes first. This is somewhat easy to do if `deletePost` can be aborted (via e.g. `AbortController`) even if cumbersome... but more often than not you cannot really cancel inner promises unless they're explicitly abortable, so there's no way to do true userspace promise timeouts in JS.

Wow, what a wall of text I just wrote. Hopefully this helps someone's mental model.


> This isn't actually about removing the promise (completion) listener, but the fact that promises are not cancelable in JS.

You've made an interesting point about promise cancellation but it's ultimately orthogonal to the Github issue I was responding to. The case in question was one in which a memory leak was triggered specifically by racing a long lived promise with another promise — not simply the existence of the promise — but specifically racing that promise against another promise with a shorter lifetime. You shouldn't have to cancel that long lived promise in order to resolve the memory leak. The user who created the issue was creating a promise that resolved whenever the SIGINT signal was received. Why should you have to cancel this promise early in order to tame the memory usage (and only while racing it against another promise)?

As the Node contributor discovered, the reason is that semantically `Promise.race` operates similarly to this [1]:

  function race<X, Y>(x: PromiseLike<X>, y: PromiseLike<Y>) {
    return new Promise((resolve, reject) => {
      x.then(resolve, reject)
      y.then(resolve, reject)
    })
  }
Assuming `x` is our non-settling promise, he was able to resolve the memory leak by monkey patching `x` and replacing its then method with a no-op which ignores the resolve and reject listeners: `x.then = () => {};`. Now of course, ignoring the listeners is obviously not ideal, and if there was a native mechanism for removing the resolve and reject listeners `Promise.race` would've used it (perhaps using `y.finally()`) which would have solved the memory leak.

[1] https://github.com/nodejs/node/issues/17469#issuecomment-349...


> Why should you have to cancel this promise early in order to tame the memory usage (and only while racing it against another promise)?

In the particular case you linked to, the issue is (partially) solved because the promise is short-lived, so the `then` makes it live longer, exacerbating the issue. By not then-ing it, the GC kicks in earlier since nothing else holds a reference to its stack frame.

But the underlying issue is lack of cancellation, so if you race a long-lived resource-intensive promise against a short-lived promise, the issue would still be there regardless of listener registration (which admittedly makes the problem worse).

Note that this is still relevant because the problem can kick in in the "middle" of the async function (if any of the inner promises is long-lived). The "middle of the promise" case is really a special case of "multiple `then`s", since each await point is isomorphic to calling `then` with the rest of the function.

Without proper cancellation, you only solve the particular case where the issue is in the last body of the `then` chain.

(Apologies for the unclear explanation, I'm on mobile and on the vet's waiting room, I'm trying my best.)


I don't want to get mired in a theoretical discussion about what promise cancellation would hypothetically look like, and would rather instead look at some concrete code. If you reproduce the memory leak from that original Node Github issue while setting the --max-old-space-size to an extremely low number (to set a hard limit on memory usage) you can empirically observe that the Node process crashes almost instantly with a heap out of memory error:

  #! /usr/bin/env node --max-old-space-size=5
  
  const interruptPromise = new Promise(resolve =>
    process.once('SIGINT', () => resolve('interrupted'))
  )
  
  async function run() {
    while (true) {
      const taskPromise = new Promise(resolve => setImmediate(resolve))
      const result = await Promise.race([taskPromise, interruptPromise])
      if (result === 'interrupted') break
    }
    console.log(`SIGINT`)
  }
  
  run()
If you run that exact same code but replace `Promise.race` with a call to `Unpromise.race`, the program appears to run indefinitely and memory usage appears to plateau. And if you look at the definition of `Unpromise.race`, the author is saying almost exactly the same thing that I've been saying: "Equivalent to Promise.race but eliminates memory leaks from long-lived promises accumulating .then() and .catch() subscribers" [1], which is exactly the same thing that the Node contributor from the original issue was saying, which is also exactly the same thing the Chromium contributor was saying in the Chromium bug report where he writes "This will also grow the reactions list of `x` to 10e5" [2].

[1] https://github.com/cefn/watchable/blob/6a2cd66537c664121671e...

[2] https://issues.chromium.org/issues/42213031#comment5


Just to clarify because the message might have been lost: I'm not saying you're wrong! I'm saying you're right, and...

Quoting a comment from the issue you linked:

> This is not specific to Promise.race, but for any callback attached a promise that will never be resolved like this:

  x = new Promise(() => {});
  for (let i = 0; i < 10e5 ; i++) {
    x.then(() => {});
  }
My point is if you do something like this (see below) instead, the same issue is still there and cannot be resolved just by using `Unpromise.race` because the underlying issue is promise cancellation:

  // Use this in the `race` instead
  // Will also leak memory even with `Unpromise.race`
  const interruptPromiseAndLog = () =>
    interruptPromise()
      .then(() => console.log('SIGINT'))
`Unpromise.race` only helps with its internal `then` so it will only help if the promise you're using has no inner `then` or `await` after the non-progressing point.

This is not a theoretical issue. This code happens all the time naturally, including in library code that you have no control over.

So you have to proxy this promise too... but again this only partially solves the issue, because you'd have to proxy every single promise that might ever be created, including those you have no control over (in library code) and therefore cannot proxy yourself.

And the ergonomics are terrible. If you do this, you have to proxy and propagate unsubscription to both `then`s:

  const interruptPromiseAndLog = () =>
    interruptPromise()
      // How do you unsubscribe this one
      .then(() => console.log('SIGINT'))
      // ...even if you can easily proxy this one?
      .then(() => console.log('REALLY SIGINT'))
Which can easily happen in await points too:

  const interruptPromiseAndLog = async () => {
    console.log('Waiting for SIGINT')

    // You have to proxy and somehow propagate unsubscription to this one too... how!?
    await interruptPromise()
    
    console.log('SIGINT')
  }
Since this is just sugar for:

  const interruptPromiseAndLog = () => {
    console.log('Waiting for SIGINT')

    return interruptPromise()
      // Needs unsubscription forwarded here
      .then(() => console.log('SIGINT'))
  }
Which can quickly get out of hand with multiple await points (i.e. many `then`s).

Hence why I say the underlying issue is overall promise cancellation and how you actually have no ownership of promises in JS userspace, only of their completion handles (the event loop is the actual promise owner) which do nothing when going out of scope (only the handle is GC'd but the promise stays alive in the event loop).


For that matter, C# has Task.WaitAsync: the awaited task's continuation is attached to a waiter task, and your code subscribes to the waiter task, which unregisters your listener after firing it, so the memory leak is limited to the small waiter task, which references nothing after the timeout.


But if you really truly need cancel-able promises, it's just not that difficult to write one. This seems like A Good Thing, especially since there are several different interpretations of what "cancel-able" might mean (release the completion listeners into the gc, reject based on polling a cancellation token, or both). The javascript promise provides the minimum language implementation upon which more elaborate Promise implementations can be constructed.
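For one of those interpretations, the "release the listeners into the gc" flavor can be sketched in a few lines (a toy, made-up API; note it only abandons the caller's view of the work without stopping the underlying work, which is the caveat raised elsewhere in this thread):

```javascript
// Wrap a promise so the caller can bail out early. The wrapped work
// itself keeps running; only our subscription to it is abandoned.
function cancelable(promise) {
  let cancel
  const cancellation = new Promise((_, reject) => {
    cancel = () => reject(new Error('Canceled'))
  })
  return { promise: Promise.race([promise, cancellation]), cancel }
}

// Usage: awaiting `promise` after cancel() rejects with 'Canceled',
// even though the inner promise never settles.
const { promise, cancel } = cancelable(new Promise(() => {}))
cancel()
```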


Why this isn't possible is implicitly (well, somewhat explicitly) addressed in my comment.

  const foo = async () => {
    ... // sync stuff A
    await someLibrary.expensiveComputation()
    ... // sync stuff B
  }
No matter what you do, it's impossible to cancel this promise unless `someLibrary` exposes some way to cancel `expensiveComputation`, and you somehow expose a way to cancel it (and any other await points), and any other promises it uses internally also expose cancellation, and they're all plumbed to have the cancellation propagated inward across all their await points.

Unsubscribing to the completion listener is never enough. Implementing cancellation in your outer promise is never enough.

> The javascript promise provides the minimum language implementation upon which more elaborate Promise implementations can be constructed.

I'll reiterate: there is no way to write promise cancellation in JS userspace. It's just not possible (for all the reasons outlined in my long-ass comment above). No matter how elaborate your implementation is, you need collaboration from every single promise that might get called in the call stack.

The proposed `unpromise` implementation would not help either. JS would need all promises to expose a sort of `AbortController` that is explicitly connected across all cancellable await points inwards which would introduce cancel-safety issues.

So you'd need something like this to make promises actually cancelable:

  const cancelableFoo = async (signal) => {
    if (signal.aborted) {
      throw new AbortError()
    }

    ... // sync stuff A

    if (signal.aborted) {
      // possibly cleanup for sync stuff A
      throw new AbortError()
    }

    await someLibrary.expensiveComputation(signal)

    if (signal.aborted) {
      // possibly cleanup for sync stuff A
      throw new AbortError()
    }

    ... // sync stuff B

    if (signal.aborted) {
      // possibly cleanup for sync stuff A
      // possibly cleanup for sync stuff B
      throw new AbortError()
    }
  }

  const controller = new AbortController()
  const signal = controller.signal

  Promise.cancelableRace(
    controller, // cancelableRace will call controller.abort() if any promise completes
    [
      cancelableFoo(signal),
      deletePost(signal),
      timeout(3000, signal),
    ]
  )
And you need all promises to get their `signal` properly propagated (and properly handled) across the whole call stack.


Node is the only major outlier. Bun supports this convention as well: https://bun.sh/docs/api/http#export-default-syntax

