
I'm still not understanding the purpose of this, as I believe it's grounded in the wrong assumption.

Pretty much every single asynchronous operation, other than some `Promise.resolve(foo)` where foo is a static value, can fail: reading from the file system, calling an API, connecting to some database, etc.

If the original promise fails you're gonna return a cached failure.

Mind you, I'm not saying this is completely useless, but at the end of the day you will be forced to add logic to check all of those asynchronous computation results, which will eventually outweigh the cost of only saving the resolved data.
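
To make the point concrete, here's a minimal sketch (the cache shape and fetchUser are hypothetical stand-ins for any fallible async call); every call site still has to check the awaited result:

    type User = { id: string; name: string };

    // Hypothetical fallible async call (API, DB, file system, ...).
    async function fetchUser(id: string): Promise<User> {
      const res = await fetch(`/api/users/${id}`);
      if (!res.ok) throw new Error(`HTTP ${res.status}`);
      return res.json();
    }

    const cache = new Map<string, Promise<User>>();

    function getUser(id: string): Promise<User> {
      if (!cache.has(id)) cache.set(id, fetchUser(id)); // a rejection gets cached too
      return cache.get(id)!;
    }

    // Every consumer of the cache still needs this kind of check:
    async function showUser(id: string) {
      try {
        const user = await getUser(id);
        console.log(user.name);
      } catch (err) {
        // handle or retry the cached failure
        console.error("lookup failed", err);
      }
    }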




> If the original promise fails you're gonna return a cached failure.

This is usually a good thing, and even in cases where it isn't, it's often a worthwhile trade-off.

In the common case a failure will either be persistent, or - if load/traffic related - will benefit from a reduction in requests (waiting a while to try again). In both of these cases, where your first key request fails, you want the immediately following requests to fail fast: "caching" the promise caches the failure for the duration of one request (presumably the entry is removed once the promise settles, allowing subsequent key accesses to retry).

The less common case where the above isn't true is where you have very unstable key access (frequent one-off failures). In those cases you might want a cache miss on the second key request, but successful key retrieval usually isn't as critical in such systems, which makes the trade-off very worthwhile.
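
A minimal sketch of that behaviour (loadKey is a hypothetical fetcher): everyone awaiting the in-flight promise shares the same failure, and the entry is then dropped so a later access retries:

    // Hypothetical fetcher standing in for any fallible key lookup.
    async function loadKey(key: string): Promise<string> {
      const res = await fetch(`/keys/${key}`);
      if (!res.ok) throw new Error(`HTTP ${res.status}`);
      return res.text();
    }

    const cache = new Map<string, Promise<string>>();

    function getKey(key: string): Promise<string> {
      let p = cache.get(key);
      if (!p) {
        p = loadKey(key);
        // Callers awaiting this promise fail fast together; once it rejects,
        // the entry is evicted so the next access starts a fresh attempt.
        p.catch(() => cache.delete(key));
        cache.set(key, p);
      }
      return p;
    }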


> If the original promise fails you're gonna return a cached failure.

In the scenario where that's an issue, you would need to add some extremely trivial logic (3-5 lines) to handle retrying a cached failure. The underlying data structure would continue to be a promise map.
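
Something like this (load is a hypothetical fetcher; the extra logic is just the try/catch):

    // Hypothetical fetcher for whatever the cache fronts.
    async function load(key: string): Promise<string> {
      const res = await fetch(`/data/${key}`);
      if (!res.ok) throw new Error(`HTTP ${res.status}`);
      return res.text();
    }

    const cache = new Map<string, Promise<string>>();

    async function get(key: string): Promise<string> {
      if (!cache.has(key)) cache.set(key, load(key));
      try {
        return await cache.get(key)!;
      } catch {
        // Cached failure: replace it with one fresh attempt and cache that instead.
        cache.set(key, load(key));
        return cache.get(key)!;
      }
    }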


> If the original promise fails you're gonna return a cached failure.

In many cases, another near-in-time request would also fail, so returning a cached failure rather than failing separately is probably a good idea (if you need retry logic, you do it within the promise, and you still need only a single instance).

(If you are in a system with async and parallel computation both available, you can also use this for expensive to compute pure functions of the key.)
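
For example (a sketch; withRetry and loadConfig are made-up names): the retry/backoff lives inside the single cached promise, so concurrent callers share one attempt sequence instead of racing their own retries.

    // Wrap any fallible async call with a bounded retry + linear backoff.
    async function withRetry<T>(fn: () => Promise<T>, attempts = 3, delayMs = 200): Promise<T> {
      let lastErr: unknown;
      for (let i = 0; i < attempts; i++) {
        try {
          return await fn();
        } catch (err) {
          lastErr = err;
          await new Promise(resolve => setTimeout(resolve, delayMs * (i + 1)));
        }
      }
      throw lastErr;
    }

    // Hypothetical expensive lookup.
    const loadConfig = (key: string) => fetch(`/config/${key}`).then(r => r.json());

    const cache = new Map<string, Promise<unknown>>();

    function getConfig(key: string): Promise<unknown> {
      if (!cache.has(key)) cache.set(key, withRetry(() => loadConfig(key)));
      return cache.get(key)!;
    }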


You do one IO operation instead of two.

It's unlikely that always doing two is going to be more successful than just trying once and dealing with the failure.


It's a tiny optimization.

When the VERY FIRST async operation is in flight, the cache is immediately loaded with a Promise, which blocks all other calls until that first operation completes. This is only relevant to the very first call. That's it. After that the promise is pointless.

As for a Promise failure, you can just think of that as equivalent to the value not existing in the cache; the logic should interpret a rejected promise this way.
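
A minimal sketch of that interpretation, written as a generic memoizer (memoizeAsync is a made-up helper name): the first call per key stores the in-flight promise, everyone else awaits it, and a rejection is treated as if the value was never cached.

    // Memoize an async function by key; a rejected promise counts as a cache miss.
    function memoizeAsync<K, V>(fn: (key: K) => Promise<V>): (key: K) => Promise<V> {
      const cache = new Map<K, Promise<V>>();
      return (key: K) => {
        let p = cache.get(key);
        if (!p) {
          p = fn(key);
          cache.set(key, p);
          p.catch(() => cache.delete(key)); // failure == "not in the cache"
        }
        return p;
      };
    }

    // Usage: concurrent calls for the same id share one underlying request.
    const getUser = memoizeAsync((id: string) => fetch(`/users/${id}`).then(r => r.json()));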


It's not always a tiny optimization. If you have an expensive query or operation, this prevents potentially many duplicative calls.

A practical example of this was an analytics dashboard I was working on years ago -- the UI would trigger a few waves of requests as parts of it loaded (batching was used, but would not be used across the entire page). It was likely that for a given load, there would be four or more of these requests in-flight at once. Each request needed the result of an expensive (~3s+) query and computation. Promise caching allowed operations past the first to trivially reuse the cached promise.

There are certainly other approaches that can be taken, but this works very well as a mostly-drop-in replacement for a traditional cache that shouldn't cause much of a ripple to the codebase.


Yep, I've been bitten by exactly this failure mode.

You have to invalidate/bust the cache when a failure is returned (which is racy, but since cache busting is on the sad path it's a fine place to protect with a plain ol' mutex, assuming you even have real parallelism/preemption in the mix).

Alternatively, you can cache promises from functions that never return failure and instead internally retry until they succeed. This approach generalizes less well to arbitrary promises, but it's friendlier to implementing the custom backoff/retry scheme of your choice.
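
A sketch of that second variant (untilSuccess is a made-up helper): the cached promise never rejects, because it keeps retrying with exponential backoff until the underlying call succeeds.

    // Retry forever with exponential backoff; the returned promise never rejects.
    async function untilSuccess<T>(fn: () => Promise<T>, baseMs = 250, maxMs = 30_000): Promise<T> {
      let delay = baseMs;
      for (;;) {
        try {
          return await fn();
        } catch {
          await new Promise(resolve => setTimeout(resolve, delay));
          delay = Math.min(delay * 2, maxMs);
        }
      }
    }

    const cache = new Map<string, Promise<string>>();

    function getAsset(url: string): Promise<string> {
      if (!cache.has(url)) {
        cache.set(url, untilSuccess(async () => {
          const res = await fetch(url);
          if (!res.ok) throw new Error(`HTTP ${res.status}`);
          return res.text();
        }));
      }
      return cache.get(url)!;
    }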


It's not about the cost of saving the resolved data.

If I understand the pattern correctly, it is to avoid multiple asynchronous requests to a resource that has yet to be cached.


Yeah, that's my understanding too.

It seems like an optimization to prevent lots of downstream requests that occur in rapid succession before the first request would have finished. I'd also suspect that the entry would be removed from the map on failure.



