Be careful with this data structure. If the language allows async exceptions, or you have a bug where the promise never becomes determined, there are a lot of edge cases (a minimal sketch of the basic pattern follows the list). Examples of edge cases:
- if the promise never becomes determined (e.g. a bug, or an async exception) your app will wait forever
- if the promise has high tail latency things can be bad
- if the language eagerly binds on determined promises (i.e. it doesn't schedule your .then function) you can get weird semantic bugs
- changing the keys of the table incrementally, instead of just using it for memoisation, can be really messy
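Here is a minimal sketch of the pattern, written in asyncio terms since the shape is the same as the JavaScript version (all names here are hypothetical):

```python
import asyncio

_cache: dict[str, asyncio.Task] = {}

async def fetch_user(user_id: str) -> dict:
    await asyncio.sleep(0.1)  # stand-in for a slow async call
    return {"id": user_id}

def get_user(user_id: str) -> asyncio.Task:
    # First caller kicks off the work; everyone else awaits the same Task.
    # Edge case 1 above: if fetch_user hangs or dies to an async exception,
    # the broken Task sits in the cache and every later caller waits forever.
    if user_id not in _cache:
        _cache[user_id] = asyncio.ensure_future(fetch_user(user_id))
    return _cache[user_id]

async def main():
    a, b = await asyncio.gather(get_user("42"), get_user("42"))
    assert a is b  # both callers got the result of the one cached Task

asyncio.run(main())
```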
I have a couple pieces of code where we had to add rate limiting because it was just too easy to make too many async calls all at once, and things only got 'worse' as I fixed performance issues in the traversal code that were creating an ersatz backpressure situation.
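The rate limiting can be as boring as a semaphore capping in-flight calls. A sketch of that shape in asyncio terms (the limit of 10 and the names are assumptions, not the actual code):

```python
import asyncio

_limit = asyncio.Semaphore(10)  # cap on concurrent in-flight calls (arbitrary)

async def fetch_limited(key: str) -> str:
    async with _limit:            # queue behind the semaphore instead of
        await asyncio.sleep(0.1)  # firing everything at once
        return key

async def main():
    # Without the cap, a fast traversal happily launches 1000 calls at once.
    results = await asyncio.gather(*(fetch_limited(str(i)) for i in range(1000)))
    print(len(results))

asyncio.run(main())
```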
Philosophically, the main problem is that promise caches are a primary enabler of the whole cache anti-pattern situation. People make the mistake of thinking that 'dynamic programming' means memoization and that 'memoization' means caching. Everything in that sentence is wrong. Trying to explain this to people has been a recurring challenge, because it's such a common, shallow, but strongly-held misunderstanding.
YouTube recently suggested this video to me, which does a pretty good job of explaining beginner and intermediate DP:
What I love most about this video is that it introduces memoization about 10 minutes in, but by 20% of the way through it has already abandoned it for something better: tabulation. Tabulation is an iterative traversal of the problem that gathers the important data not only in one place but within a single stack frame. It telegraphs an information architecture, while recursive calls represent emergent behavior. Memoization's reliance on a cache to function not only obscures the information architecture, it does so by introducing global shared state. Global shared state and emergent behavior are both poison to the long-term health of a project, and here we have a single data structure that represents both in spades.
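To make the contrast concrete, here are both styles on a toy problem (Fibonacci as a stand-in; the point is the shape, not the problem):

```python
from functools import lru_cache

@lru_cache(maxsize=None)       # memoization: recursion plus a hidden shared cache
def fib_memo(n: int) -> int:
    return n if n < 2 else fib_memo(n - 1) + fib_memo(n - 2)

def fib_tab(n: int) -> int:
    # Tabulation: one loop, one stack frame, and the whole information
    # architecture (the table) is a local variable you can see right here.
    table = [0, 1] + [0] * max(0, n - 1)
    for i in range(2, n + 1):
        table[i] = table[i - 1] + table[i - 2]
    return table[n]

assert fib_memo(30) == fib_tab(30) == 832040
```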
We are supposed to be fighting accidental coupling, and especially global variables, not engaging in word games to hide what we are doing.
So I think my answer to OP's question is 'tabulation', which is part data structure and part algorithm.
I wrote some Python code recently that uses a similar data structure (Futures instead of Promises, without necessarily knowing about the data structure's use in JavaScript). It wasn't really for caching.
I modeled mine on the Django ASGI reference library's server implementation, which uses the data structure for maintaining references to stateful event consumers. Exception handling is done with a pre-scheduled long-running coroutine that looks at the map.
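Roughly this shape, heavily simplified (hypothetical names, not the actual Django ASGI implementation):

```python
import asyncio

consumers: dict[str, asyncio.Task] = {}  # stateful event consumers by ID

async def consume(name: str) -> None:
    while True:                  # stand-in for a long-lived event consumer
        await asyncio.sleep(1)

async def reaper() -> None:
    # The pre-scheduled long-running coroutine: periodically sweep the map
    # and surface exceptions instead of letting dead Tasks linger silently.
    while True:
        await asyncio.sleep(5)
        for name, task in list(consumers.items()):
            if task.done():
                del consumers[name]
                if not task.cancelled() and task.exception() is not None:
                    print(f"consumer {name} died: {task.exception()!r}")

async def main():
    consumers["a"] = asyncio.create_task(consume("a"))
    sweep = asyncio.create_task(reaper())
    await asyncio.sleep(0.1)     # real code would run these for the app's lifetime
    consumers["a"].cancel()
    sweep.cancel()

asyncio.run(main())
```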
I'm curious about your second point -- why exactly do things get bad with high tail latency? Is it only a weakness of the data structure when used for caching? I'm having trouble picturing that.
Suppose the call to evaluate has a p95 latency of 5 seconds (very large, of course, but it makes the numbers easy). If your first call to compute a value hits that tail, all the subsequent requests for that cell are blocked for 5s behind it. If you didn't try to cache, only that one unlucky request would block for 5s and the rest could go through fast. Either way, if you do 20 requests, about one of them will get the p95 latency; the cache just makes everyone share it.
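You can see the difference in a quick simulation (all numbers made up: 5% of calls hit the 5s tail, so 5s is roughly the p95):

```python
import random

random.seed(0)
BATCHES = 100_000

def is_slow() -> bool:
    return random.random() < 0.05  # ~5% of calls hit the 5s tail

# Cached: 20 requests all wait on the first call, so a batch of 20 is
# either all fast or all blocked for 5s together.
cached_all_blocked = sum(is_slow() for _ in range(BATCHES)) / BATCHES

# Uncached: each of the 20 requests rolls its own latency.
uncached_slow_per_batch = sum(
    sum(is_slow() for _ in range(20)) for _ in range(BATCHES)
) / BATCHES

print(f"cached:   all 20 requests block in {cached_all_blocked:.1%} of batches")
print(f"uncached: ~{uncached_slow_per_batch:.2f} slow requests per batch of 20")
```

The expected number of slow requests is about the same either way; what the cache changes is that the slowness becomes perfectly correlated, so when you lose, everybody loses at once.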
Best practice in my experience is to use a timeout on all async operations to handle the first two edge cases above. The third case isn't possible with JavaScript AFAIK.
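In asyncio terms, that looks something like this sketch (the 10s budget is an arbitrary assumption; shield() keeps one impatient caller's timeout from cancelling the shared task for everyone else):

```python
import asyncio

async def await_with_timeout(task: asyncio.Task, seconds: float = 10.0):
    # Fail fast instead of waiting forever on a hung promise (edge cases 1 & 2).
    # shield() so the timeout cancels only this caller's wait, not the shared task.
    return await asyncio.wait_for(asyncio.shield(task), timeout=seconds)

async def main():
    hung = asyncio.ensure_future(asyncio.sleep(3600))  # never settles in time
    try:
        await await_with_timeout(hung, seconds=0.1)
    except asyncio.TimeoutError:
        print("timed out instead of wedging the app")
    hung.cancel()

asyncio.run(main())
```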