I think the author wants promises to represent computation, whereas they represent predetermined (i.e. single-shot) events. He mentioned C# Tasks, which do mainly represent computation, but in some cases Tasks are also used as events, and this gets confusing as hell. I've worked with C# Tasks and hope that MS someday cleans this up and builds the stuff on promises instead. Note that the C# language construct uses the awaitable pattern (GetAwaiter method) instead of tasks - awaitables are actually pretty similar to promises.
1. Eager, not lazy - I think it was a mistake for the promise constructor to take a function, and in that way lead users to believe the promise represents a computation. Creating a pair of promise and future (the latter as the producer side, like in C++) would be much cleaner; a minimal sketch of that pair follows this list. I disagree that lazy would be more general: you can simulate laziness with functions, but you couldn't eliminate the performance cost of creating the unnecessary closure with a lazy solution. Regarding getUserAge - the common case for that function would be to take the user ID as a parameter (and hence be lazy by construction); the parameterless version is a special case.
2. No cancellation - cancellation is much better represented with cancellation tokens (even C# Tasks cancel with cancellation tokens, and so does fun-task mentioned at the end, though in a non-composable way) - you cannot build a generic solution that can cancel the right computations. With cancellation tokens it's clear what cancels what.
3. and 4. (as well as being allowed to pass non-promises to places where only promises make sense, like Promise.all and await) are unfortunate accidents that make typed environments (e.g. TypeScript) harder to work with, but they are not as important as 1 and 2.
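For what it's worth, here's a minimal sketch of the promise/future pair from point 1, with the producer half split out (the names are mine, not a proposal):

    function makePromisePair() {
      let resolve, reject;
      const promise = new Promise((res, rej) => { resolve = res; reject = rej; });
      // The executor runs synchronously, so both halves exist before we return.
      return { promise, producer: { resolve, reject } };
    }

    // Producer side fulfills; the consumer only ever sees the promise.
    const { promise, producer } = makePromisePair();
    setTimeout(() => producer.resolve(42), 100);
    promise.then(v => console.log(v)); // 42

The promise itself carries no computation at all - it's just the event, which is the whole point.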
As it is, the fact that the promise initializer function is called immediately is a godsend. If I had a dollar for every concurrency race-like condition that this has prevented...
The constructor doesn't just take a function, it takes a closure if you so choose... A closure that has immediate access to resolve and reject and doesn't have to worry about other code paths accessing its enclosed vars first.
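Concretely - because the executor runs synchronously, the closure can hand resolve out before any other code gets a chance to run:

    let outsideResolve;
    const p = new Promise((resolve, reject) => {
      outsideResolve = resolve; // assigned before the constructor even returns
    });
    outsideResolve('ready');     // guaranteed to be defined here - no race
    p.then(v => console.log(v)); // 'ready'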
Actually, cancellation can be useful to prevent wasting resources on useless computations. Imagine you have 2 threads doing computations that will later be merged. If one thread fails, it makes no sense to continue executing the other thread. That is where cancellation can help - but it should be thoroughly designed, not hacked in like they usually do it in Node.JS.
I was not saying cancellations are not useful, I was saying cancellation is better handled explicitly via cancellation tokens (which compose perfectly, unlike computation-based cancellation).
This article doesn't even touch on the worst sin of JavaScript Promises: they swallow errors and exceptions, which makes them nearly impossible to test correctly and makes debugging horrifying (if you even notice anything is wrong.)
Promises are a great example of the problems of believing that something good in one language will be good in another. Promises in JavaScript are fighting the language, because JavaScript is fundamentally a collection of isolated but contextual behaviors.
Using Agents to encapsulate callbacks is easier to reason about, easier to test, less likely to swallow errors whole, and doesn't have any of the problems laid out in this article. Unfortunately because it isn't a model popular in any other language, it doesn't have the name recognition Promises do.
If there’s no logic to catch the rejection it will be silently ignored, which is effectively the same as “swallowing exceptions” – your distinction is correct but it’s academic at best.
In practice, this behavior causes very real problems, and in node land they made the sane decision to kill the process whenever an unhandled promise rejection comes about. I don't know if this has landed yet, but you'll see a warning about it if you run a node process in which you reject a promise without handling it.
This is a half truth. Native promises landed quite some time before a way to catch unhandled promise rejections did, and even that event wasn't enough, as evidenced by at least node deciding that the process needs to crash. (As it would've when unhandled exceptions were thrown.) The browser story is of course different.
Regardless, the original poster was correct – this has been a sin of promises for a long time. What's worse, I think, is that because of this we now have semantics that are close to but not quite the same as exceptions. Case in point: throwing an exception while executing a promise function will reject the promise. But it's not an exception anymore, even though the value of the rejection is in fact the exception. The semantics are now different – because promises.
Promises in JavaScript have a certain almost-but-not-quite quality to them.
Sure, but "exception" is the programming term for that kind of behaviour. Saying "JavaScript has exceptions" doesn't imply that JavaScript has a global builtin object named `Exception` any more than saying "JavaScript has loops" implies that there's a global builtin object named `Loop` or `For`.
"throw" takes the value to throw as an exception. An exception is an event which can be handled with "catch". [1] The standard refers to predefined errors such as TypeError as "exceptions".
I don't know. André Staltz is a great programmer, but I can't help but think what seems "opinionated" to him about promises boils down to the fact that they don't perfectly match certain quasi-ideological preferences he has about async programming, at the expense of all other concerns. As he states at the end, promises still work, you can get things done and everything is fine. But the part about them being opinionated I just can't get behind.
In fact, if promises worked the way he wanted them to, it would hurt the ecosystem in every category he mentions. Lazy promises would cease representing a single value, and be un-cacheable. Promises that didn't flatten inner promises would create endless confusion and ambiguity over "onion-promise" scenarios. Sometimes-synchronous promises would introduce subtle and sometimes catastrophic runtime ambiguities (aka "release zalgo"). Even cancelable promises would raise thorny issues regarding whether promises are intended to be multicast or unicast, which is a problem the current design side-steps entirely.
Hi. First I'd like to comment that I wrote that blog post quickly, literally just to convert an informal Twitter thread into a more shareable and digestible format, so I didn't take the time to make it "hackernews comment resistant", if you know what I mean.
On opinionated choices: I try to base that on mathematics. When it comes to async programming, that means equational reasoning and following some basic properties such as composition, associativity, left identity, right identity, etc, see https://github.com/fantasyland/fantasy-land . These would be neutral because math is often neutral. For instance, if intelligent aliens exist, they probably figured out the circle just like we did.
On ideological preferences: there isn't anything free of ideology, so yes I am motivated by some ideology, not unlike influential members of TC39. There, you'll often find opposition to functional programming ideas, even though JS was originally designed with influences from Scheme, and allowed functional programming better than, e.g., Java at the time. See these TC39 notes, for instance: https://github.com/tc39/tc39-notes/blob/master/es8/2017-09/s...
FP ideas are usually opposed because TC39 proposals are driven by concrete use cases, and FP is all about abstractions. FP ideology is that abstractions are good because they accomplish abstract goals, not a single particular concrete goal. That does not play well with the use-case-first process at TC39. So, ideologies.
On "it would hurt the ecosystem": I think by now a lot of people from different language communities recognize the success of RxJava/RxJS/Rx.NET, and it tackles all those complications you mentioned, but for even more complex use cases, because it handles multiple values over time. I'm not arguing that Rx is a silver bullet, I'm just saying the Rx community has first hand experience with solving and teaching solutions to the problems you mentioned in the second paragraph and it's nowhere near "catastrophic".
Another example relevant in node is continuation-local-storage (equivalent to threadlocal storage). Implementing it on top of generators or other "chainable / thenable" abstractions is trivially easy. Implementing it on top of native promises and async/await is impossible without deep hooks into the platform.
In the meantime generator based libraries would've properly explored the whole breadth of power that co-routines can give you, creating cowpaths to be paved by TC39.
Promises make trade-offs, and they end up with a design that is generally good and can be used well in some number of situations. But not all. Not nearly enough to get first class syntax support that makes them privileged over all other solutions.
Gorgi Kosev's post highlights a very nice use case for generators (database transactions). However, in all my JavaScript over the last few years, that is the only good use case I've found so far in my code base for generators. In all other use cases I've come across, async-await works just fine, and has a much nicer syntax to work with.
I elaborated that case in the most detail, but there are many other problems that generators solve mentioned in the blog post. Another very common one is getting the current user that initiated the request (or maybe their session), which you need to pass around to all your functions/classes.
What if you could simply `yield getCurrentUserSession` and the engine which ran the toplevel generator returned it back to you?
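A rough sketch of that idea - the runner services special yielded requests and awaits everything else (the runner and getCurrentUserSession are hypothetical here; real libraries differ):

    const getCurrentUserSession = Symbol('getCurrentUserSession');

    async function run(gen, session) {
      let input;
      while (true) {
        const { value, done } = gen.next(input);
        if (done) return value;
        input = value === getCurrentUserSession
          ? session      // inject the context instead of threading it through every call
          : await value; // otherwise treat the yielded value as an ordinary promise
      }
    }

    function* handler() {
      const session = yield getCurrentUserSession;
      const data = yield fetch('/api/data'); // an ordinary async step
      return { session, data };
    }

    // run(handler(), currentSession).then(...)  -- currentSession supplied per request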
jhusein's compositional functions solved the syntax issue.
MobX is the most powerful front end development library on the market right now. It single-handedly solves the cache invalidation problem in a performant way, and it's pluggable into any framework.
Slightly confusing API, no structural comparison, no adapters for popular frameworks (e.g. S.js-react or S.js-preact), no laziness (computations are recomputed even if they're not requested by reactions).
I don't want to use surplus because e.g. I want to use well developed UI components or toolkits like blueprintjs, which is implemented on top of React.
Slightly confusing is a real objection. MobX strives to implement transparent reactive programming, where the way you access, update and transform values works exactly like it would with regular objects. S.js has a worse learning curve.
> MobX strives to implement transparent reactive programming, where the way you access, update and transform values works exactly like it would with regular objects. S.js has a worse learning curve.
Hardly: reactive values are get/set functions, and you create new ones with S(() => ...). That's it.
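From memory, the whole surface is roughly this (check the S.js README for specifics; note that top-level computations want a root):

    S.root(() => {
      const count = S.data(0);              // a signal: call to read, call with a value to write
      const doubled = S(() => count() * 2); // a computation; dependencies tracked automatically
      count(21);
      console.log(doubled()); // 42
    });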
MobX's attempt at transparency yields inescapable and surprising corner cases.
> Redundant computations are computed; [...] even though the completed() computed isn't used anywhere, the reduction is still being recomputed every time todo state changes
Incorrect. The todos binding is an SArray not a regular array [1]. See my modified version where I log the events to the console [2].
The tricky corner cases in MobX are very few. Also, once the implementation switches to proxies, the vast majority will disappear.
I'm willing to accept a few corner cases as long as there is a large common subset of functionality that works both with and without a small number of decorators. This can be utilised to write models that can be used in both a reactive and a non-reactive context with a different set of decorators injected in. S.js looks too invasive to do this.
Your codepen has no completed todo count. Why are the recomputations logged every time here?
> Your codepen has no completed todo count. Why are the recomputations logged every time here?
Because you can't apply reduce any other way. It's a computation defined over a whole collection. To incrementalize it, you'd need to be able to invert whatever function you're trying to apply in order to arbitrarily undo and redo the operation as elements are added/removed. This is literally impossible in general as not all functions have inverses.
We use this to great extent in our application, by only rendering components that are visible in the viewport at the moment.
You can also keep its cached value alive but not recompute it until needed, by implementing a reaction that observes a computed without requesting its value. This will keep the entire computation graph cached but idle and partially dirty until the value is requested, at which point only stale dependencies will be recomputed. This can be extremely powerful: for example, you can implement state tree undo/redo by implementing serialize, then observing the serialize computed for the root item without requesting its value (keeping things cached) and only requesting recomputations when a certain sufficient number of mutations have been made (with the vast number of reused values being structurally shared between undo/redo states).
> What would you call it? S.atomic? I don't see how the existing name is particularly unsuitable.
Yes, atomic would be an improvement. Freeze only makes sense if you are thinking in terms of FRP signals in time, and it's unclear whether the abstraction tries to hide its signal underpinnings or expose them (it's somewhere in between).
The way that promises are implemented actively hurts the ecosystem. Trying to e.g. create a sound typing of the Promise API is really, really hard because of its auto-flattening nature. This severely hampers languages like TypeScript, Flow & ReasonML that build on top of JavaScript.
I have to agree. This blog post comes from a place of judging what is "better" by some arbitrary abstract metrics (hint: the author's own library wins this contest). It doesn't consider what programmers use them for.
Promises are meant to semantically (and with async/await, syntactically) resemble a function call as closely as possible, while minimizing confusing errors. The author's "improvements" would ruin this.
If I need a bunch of features that a promise doesn't provide, like cancellation, I will write something to do it manually with callbacks. This is far less than 1% of async calls though.
> This blog post comes from a place of judging what is "better" by some arbitrary abstract metrics
Or you could consider the possibility that they aren't arbitrary at all. Perhaps these properties are well motivated by, for instance equational reasoning, which is a cornerstone of extensible and maintainable programming.
> Promises are meant to semantically (and with async/await, syntactically) resemble a function call as closely as possible, while minimizing confusing errors
Why? What purpose does that ultimately serve given we already have functions? The whole point of an abstraction is to provide sophisticated semantics to do an important job you would otherwise have to do by hand.
Although I don't like most of his alleged improvements, surely a separate map/flatMap would be more like function calls.
If I write
    function foo(v) {
      return function () { return v }
    }
    const fv = foo(4)
then I expect fv will be a function, not a number. In that respect, I think the choice to make `then` flatten was a poor decision that makes it more confusing for people who don't really take the time to understand. Such people don't realise it's confusing: they simply notice it's usually convenient; but they're inhibited from drawing the correct analogies.
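For contrast, the Promise version of that snippet flattens:

    const pv = Promise.resolve(Promise.resolve(4));
    pv.then(v => console.log(v)); // logs 4 - the inner promise is gone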
But as for the rest: yeah, it's just arbitrary and weird. Maybe we should make integers and arrays all lazy by default too?
I think there's a similar but stronger form of opinionation where a choice prevents you from ever implementing something.
In this case, not being able to synchronously resolve a promise prevents you from using them in certain ways. You can attach flatMap to the prototype or build a better way of doing cancellations, but you'll need to make additions to the language itself to turn a promise synchronous, which is what await/async were all about.
Having said all that, the bar for promises isn't set by RxJS, it's set by callback(err).
EDIT: also, promises were very much discovered. They're an opinionated solution to the very specific problems of callbacks. They're chainable because of callback hell. Errors propagate because that's a problem with callbacks. Their execution is always delayed because callbacks made it hard to reason about execution order. They don't handle cancellation because callbacks don't handle cancellation.
The analogies damage this article because they feel wrong. For example the "never synchronous" example is more like this:
You order a burger at the cashier window, then go to the pickup window. If the burger is already made, it's already at the pickup window when you get there.
The author wants a special case where if the burger is already made, they hand it to you immediately at the cashier window. This might seem more efficient, but both in the restaurant and in code it makes logic way more complex.
Exactly. If for some reason my promise is immediately fulfilled (e.g. the result was previously memoized), I don't want to provide an alternate codepath to handle the result.
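E.g. a memoizing wrapper, where later calls return an already-fulfilled promise (fetchUser here is a stand-in):

    const fetchUser = id => fetch('/api/users/' + id).then(r => r.json()); // stand-in
    const cache = new Map();
    function getUser(id) {
      if (!cache.has(id)) cache.set(id, fetchUser(id)); // first call starts the work
      return cache.get(id); // later calls get an already-fulfilled promise
    }
    // Callers never need an alternate codepath for the memoized case:
    getUser(7).then(user => console.log(user));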
I work with fairly large JavaScript codebases, and the issues mentioned in the post have never been an issue for me. The switch from callbacks to Promises, and later to async-await, has made a massive improvement to the ease of writing, reading and maintaining the code. Lazy and cancellable tasks are edge-cases that don't need support in Promises directly. I haven't needed either of those in more than a couple of places in the code, versus thousands of places where Promises are used as is.
I can see the automatic unwrapping of Promises being an issue in some libraries that want to make specific guarantees, but in most of my code this behaviour has simplified things.
I definitely prefer having Promises and async-await right now over another theoretically sound (but probably more verbose) system available in a couple of years.
I think Promises => async/await works great for certain domains, like: get a value from a database, then make an insert, then do something else and then return a message to the user.
I however write a lot of systems where almost all operations need to be concurrent, cancelable and rate-limited.
Personally I find callbacks and the event loop easy to reason about, but too daunting when all you do is CRUD requests to a database.
The problem with Promises though is that they spread, they don't like to live side by side with other async paradigms.
In my (more or less extensive) experience, missing cancellation is what bites you the most. Not because Promises don't implement it (I worked with Bluebird's CancellablePromises a lot and… it's not fun) but because there's no unified, "standard" and "specced" way.
AbortControllers (which are just abort tokens hidden as EventEmitters) are not that bad, but we'll have to wait a long time before all fetch implementations support them (I'm looking at you, V8 and Safari!). I still don't understand why they didn't call it CancelController, since "abort" is such an overloaded term.
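For reference, the AbortController shape, where fetch supports it:

    const controller = new AbortController();
    fetch('/api/data', { signal: controller.signal })
      .then(res => res.json())
      .catch(err => {
        if (err.name === 'AbortError') { /* the request was cancelled */ }
      });
    // Later, from anywhere holding the controller:
    controller.abort();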
Also (still in my experience), cancellation is incredibly more complex than it might seem. You don't want to simply stop the synchronous propagation, you want to act on it! You want resources to be freed, you want partially completed async processes to roll back what has already been done. Not easy.
I don't think the author has experience working with C# tasks.
Technically, the API documentation indeed says a Task has Start() methods, i.e. can be lazy.
But practically, in the majority of cases they are created already in Running or WaitingToRun state. This applies to tasks returned by asynchronous APIs in the framework, tasks implemented by user-written async methods, and tasks started with Task.Run() static methods. Calling Start() on them will throw an exception complaining about the wrong task state. So, in the current versions of .NET, the tasks are eager just like in JS.
I think the lazy tasks are mostly for backward-compatibility with older .NET framework 4.0 that already had tasks but didn’t support async-await.
Neither lazy nor eager is neutral. Sometimes you want one, sometimes you want the other, and you can build either out of the other.
For promise cancellation, this has been talked to death, but in short, making any function preemptable at any point in its execution makes writing correct code much much harder. As an example, I've got an API that takes independently cancellable requests. Multiple requests often need to calculate the same thing, so there's a cache. Any given promise in the system might be downstream of multiple requests. If cancellation is built into promises, how do I express how cancellation should propagate through the tree of promises?
A C#-style cancellation token API, orthogonal to promises, is simple, easy to build, and easy to understand.
> A C#-style cancellation token API, orthogonal to promises, is simple, easy to build, and easy to understand.
The problem is that promises were added to the language without any of that being hashed out. Now you may say "Having language-level support for promises is useful; cancellation tokens are more library details", and you'd have a point. The flip-side, however, is that e.g. the Fetch API is only now getting any sort of cancellation ability, and currently I believe it's limited to Firefox and Edge.
Which is not to say that I'm even confident that language support for Promises should have been held up. But I understand the complaints.
Creating a cancel token gives you two things: a token and a function. You call the function when the token should be cancelled, and the token can tell you when it has been cancelled. The simplest way to query the token is to call token.throwIfRequested(), which throws a Cancel if the token is cancelled. The token can also give you promises or callbacks of cancellation if you like, so you can do stuff like `const result = await Promise.race([token.promise, promiseOfResult]);`
So you pass the token into a cancellable API, and that API calls token.throwIfRequested() in places where it is safe for it to do so (i.e. outside of critical sections).
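A minimal sketch of that shape (names loosely after the C# API; not a real library):

    class Cancel extends Error {}

    function createCancelToken() {
      let requested = false;
      let signal;
      // Resolves with a Cancel when cancellation is requested, so callers can race against it.
      const promise = new Promise(res => { signal = res; });
      const token = {
        get cancelRequested() { return requested; },
        throwIfRequested() { if (requested) throw new Cancel('cancelled'); },
        promise,
      };
      return { token, cancel: () => { requested = true; signal(new Cancel('cancelled')); } };
    }

    // A cancellable API polls the token at safe points:
    async function pollJob(checkStatus, token) {
      for (;;) {
        token.throwIfRequested(); // outside any critical section
        if (await checkStatus()) return;
      }
    }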
If promises were lazy (basically just function composition), it seems like we'd have similar problems to Haskell where it's difficult to understand performance. You wouldn't know when an I/O operation starts or whether it will get executed again. Maybe that's okay for a high-level API, but low-level I/O operations are not idempotent, so this seems risky?
So this looks like a trade-off: you could make function composition easier only by making Promises less suitable for their original purpose. By going generic, you lose an important guarantee that a Promise is just a value.
const myJob = { run: () => fetch(... is too long? Eager is easy to make lazy. While the opposite is also true, it means a superfluous run. It is worse, semantically speaking. I'd argue that eager is more general than lazy.
Also, cancellation is yet another state and it's hard to generalize especially when you don't have threads.
Promises should always be async because you'd want the result to be consistent. If I'm returning a promise and you are depending on it being sync, that weakens my flexibility. It makes the code harder to reason about.
OK, we can't do threaded imperative programming because threads are expensive and people botch the locking. So we have callbacks where completion of some external event calls you back. Then you need closures so the callback has some state so it knows what to do when called back. Now you have a control structure problem, and need a state machine to decide what to do next. But most of the time you just want to do the next thing, so there's syntax such as ".then()" so you can write imperative programs again.
One of the big challenges mentioned with promises is that you have to kind of commit to promises linking to promises... this is a common problem with async systems added later, where you have to "line them up like gears", and you can't just do async functions which call non-async functions which call async functions and expect it to work. Python has this problem too.
There's a solution in delimited continuations, however delimited continuations seem to only be used and understood in the Scheme community (are they used anywhere else?). Delimited continuations allow you to suspend your code to a "prompt" lower in the stack at that point... and it doesn't matter if you have non-"async" code in between.
It'll be nice when they make their way to other more mainstream languages.
- Eager is relatively easy to convert to lazy, if needed. The inverse isn't true. (A one-line sketch follows this list.)
- async is relatively easy to turn into sync. The inverse isn’t true.
- no API design is "neutral". Any API design is opinionated. Cancelable lazy synchronous promises are just as opinionated a design as the current design.
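The first conversion really is a one-liner (sketch):

    // Eager -> lazy: hide construction behind a thunk (a "promise getter").
    const getData = () => fetch('/api/data'); // nothing happens until called

    // Each caller decides when (and whether) the work starts:
    const p = getData(); // and now it's an ordinary eager promise again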
The whole idea of cancelation is poor; it shouldn't even be a feature.
The way you avoid unnecessary computation, when you have laziness, is to just roll it into the lazy semantics.
Have it so that if the promise generates something complicated, like a sequence, that the promise only generates as much of that something as is accessed (and maybe only a little bit beyond that).
In other words, the async promises should perhaps behave not so differently from synchronous lazy mechanisms.
The two are flipsides of the same coin. Say I have a synchronous lazy list (of strings). The strings come from reading a file. Ah, but reading a file is asynchronous at the OS level. So actually the list is asynchronous, in a sense. When we access the first element in the list, a line is read from the file. The underlying stream object reads an entire buffer-sized chunk, though: still synchronously. Moreover the OS behaves asynchronously and reads ahead in the file, caching more of it than the stream library asked for. Of course, the OS doesn't read the whole file (unless it's small). Just a little bit ahead. Enough ahead not to hammer the I/O subsystem with lots of small operations.
We can create this list over a log file that has 100 million lines, then read just the first 100 lines and stop using it. The underlying stream library might read 16K of the file, of which the 100 lines occupies only the first 8. The OS might have read ahead by quite a bit more than that and cached more of the file, and the hard drive's firmware might have buffered an entire track. If we don't read anything more from that list, then the operation is effectively canceled. The OS won't cache any more from the file; the stream library won't buffer more of text stream.
> Have it so that if the promise generates something complicated, like a sequence, that the promise only generates as much of that something as is accessed (and maybe only a little bit beyond that).
What about when you start composing Promises? For example, say I have a top-level promise that just returns a value. But under the hood, it needs a promise that generates an array. Even if the under-the-hood promise generates the array piece by piece, the caller of the larger promise is only ever going to get a single value, so they lose the ability to "stop".
(You might imagine that the composed-over promises are network I/O, for example.)
In that case, one thing we can do is that the under-the-hood promise is not forced at all if the wrapping promise's value isn't forced yet. Then we don't have any async behavior, unfortunately; no calculation begins until the promise is called in. We can do part of the calculation ahead of time, but then stop and not complete it until there is an indication that the value is required. That is fudgy though: how far is far enough to reap the async benefit without the downsides?
How about this alternative: instead of .cancel() on promises have a .commit(). This is called if you're sure that you will eventually need that value. The calculation then proceeds full steam ahead: no going back.
You can ask for the value with or without .commit(); it is just a hint. But if you ask without .commit(), you may have to wait for a completion that was deliberately stalled due to your lack of commitment.
Without .commit(), async promises will still proceed on their own to some extent based on some fudge factor; we don't want programmers automatically calling .commit() on every promise they make to get the async benefit, which defeats the purpose.
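A toy version of that surface, ignoring the "proceed partway on its own" fudge factor (entirely hypothetical):

    function uncommittedPromise(start) {
      let p = null;
      return {
        commit() { if (!p) p = start(); return this; }, // full steam ahead; no going back
        then(onOk, onErr) {
          this.commit(); // asking for the value implies commitment
          return p.then(onOk, onErr);
        },
      };
    }

    const expensiveQuery = () => new Promise(res => setTimeout(() => res('rows'), 1000)); // stand-in
    const result = uncommittedPromise(expensiveQuery);
    result.commit();                     // hint: we will definitely need this
    // ... later: result.then(r => ...); // would also have forced it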
Uncommitted promises could be identifiable to the garbage collector and subject to an internal cancelation protocol between GC and the promises. That protocol basically helps the promise's thread vacate the object so it can be reclaimed.
Promises could have some sort of hint about how far to proceed before requiring commitment. This would have to be well thought out: such hints tend to be too system and workload specific. Automatic tuning is better. The promise system could keep some statistics about how soon various kinds of promises are called in after being initiated, and how often they are called in at all, and then uncommitted promises could decide based on that how far to compute.
This was a weird blog post to read. I think I agree on all of your points (Promises should be lazy[-ish], cancellable and optionally synchronous) but disagree on all of your proposed solutions.
I do think `p = new Promise(fn);` shouldn't kick off the `fn` immediately, but that it should start right away on the next event-loop tick. I haven't had issues with creating promise getters for repeatable calls. And I think it organizes the business code away from the low-level code.
I don't see a problem with the original Promise.cancel() you proposed or how your lazy promises makes canceling them any easier.
And don't we have `await` for the synchronous problem?
One nice thing about "new Promise()" calling the function immediately is that if you are prepared to provide the value immediately then you don't have to return into the runloop. But probably the reason I'd give for why delaying the call would be a horrible idea is that the vast majority of the time the promise is going to do some minimal amount of setup work and then... return to the runloop (and if it isn't, I am going to ask why you are using a promise). That means the current behavior of calling the function immediately minimizes returns to the runloop and provides performance as close as possible to what you would get if you hand-coded it using callbacks. (The only overhead is the unlikely-to-be-optimized-away-fully-by-the-VM object allocations and indirect function calls; this paradigm in a language with zero-cost abstractions would be perfect.)
> if you are prepared to provide the value immediately then you don't have to return into the runloop
It might just be my own mental model. But if I'm using a promise, it is a future value. So I don't understand why you would ever want to do that. Plus, that's what Promise.resolve() is for. I'd expect them to act more like the now-defunct setImmediate() function.
I concede that it may be more performant to do it this way as it results in less context switching, but I personally don't think performance should dictate a language's design of primitives.
To use Promise.resolve you have to know whether the value is ready or not before you enter the promise, but you should enter the promise and then check that status. There are very few situations where Promise.resolve should be used: it is essentially a weird performance optimization for constant values. A basic example is something like reading from a network buffer: you call read, and there might be data immediately or you might have to wait; and in JavaScript the way you read might involve a callback, so it isn't even like "check buffer and then use Promise.resolve as awkward special case" but instead "call read passing callback; if data is already there the callback is called immediately". This also comes up while implementing stuff like "a set of values that consumers can ask for a value from; if the set is empty they will have to wait until there is a value".
Regardless, I legitimately believe the mental model of it not switching back and forth is fundamentally more correct for the primitive. The goal should also be that the abstraction has the same behavior and ordering semantics as doing it by hand. When you do it by hand, you essentially must run the code there immediately, as otherwise there is no way to even run that code at all. Why would you ever want that code to be delayed? It frankly sounds like you are modeling Promise as if it meant Thread or something... a Promise is just a tiny adapter whose purpose is to wrap some setup code for accessing the value of a later event into a common interface. A promise isn't doing the work: it is adapting the API of random models of doing future work (one-off callback APIs, random evented interfaces, etc.) to its own. Having this allows us to hack in (due to the lack of good monad support in most programming languages) async/await in a non-horrific way. If you want to do future work, your first step should be to come up with a model for how your future work will happen, and then you use Promises just to do this adaptation, not to, like, spawn some computation that will take time and which should happen later.
No, you won't get the result immediately anyway, because then() callbacks are called only on the next event-loop iteration. This protects from overflowing the stack, but might have a small impact on performance.
This isn't correct for native Promises. Native Promises flatten themselves out at the end of each tick and will do any synchronous work possible before either the Promise settles or there's something that needs to get thrown to the event queue.
This all happens synchronously, although deferred, but still can block the app completely if someone is simply wrapping synchronous actions in Promises expecting them to be "async".
OK. I see why they would have chosen to do that, and it disappoints me, but it seems to be weirdly more complex than that... this code prints the numbers in order (and continues to do so if you rearrange the calls to setImmediate and setTimeout or move them outside of the function either before or after)... so it is definitely returning from the function but it seems like the resolved value gets to jump the queue?
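It does: promise callbacks go on the microtask queue, which drains completely before timers get a turn. E.g.:

    Promise.resolve(1).then(v => console.log('promise', v)); // microtask
    setTimeout(() => console.log('timeout'), 0);             // task queue
    console.log('sync');
    // Prints: sync, promise 1, timeout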
Agreed 100%. I did a bunch of js work about 3 years ago, used tons of promises. Then started a new job using Scala. The futures api in Scala is exactly what the author advocates, and it is definitely better for the reasons he gives.
He actually missed my least favorite thing about the promise API, which is that they fail silently. I'd argue that by default, an unhandled rejection should throw an exception at the end of an event loop, with an opt-in for the current behavior per-promise.
I seem to be in a minority, but rarely do I want to use the `new Promise()` mechanism for creating a promise, and I get the distinct impression that having it be the 'default' is a bad idea -- the number of times I've seen people wrapping up all their promise-related code inside the constructor, finishing off with `.then(function (x) {resolve(x)})`, is disappointing :(.
async/await solves much of this, of course, but where that's not available I much prefer to keep all my async functionality actually async, and start off by using `Promise.resolve()`. Save the constructor for when you need to encapsulate some non-promise async code.
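I.e. instead of the constructor-wrapping antipattern, start from a settled promise and chain (doWork here is a stand-in):

    const doWork = () => Promise.resolve(42); // stand-in async step
    // Antipattern: new Promise((resolve) => { doWork().then(x => resolve(x)); })
    Promise.resolve()
      .then(() => doWork())                   // anything thrown here becomes a rejection
      .then(result => console.log(result))
      .catch(err => console.error(err));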
This feels analogous to the problems w/ futures in Scala/Java as they were first introduced. And the solutions are provided by libraries like https://monix.io/.
So why are promises the problem instead of the lack of libraries on top of them? I understand cancellation cannot be fixed, but laziness sure can. As for synchronous execution, that's just not gonna happen in event-driven-land. It doesn't with other callback-based APIs (except AJAX which is deprecated) and I don't see the complaints there.
I’ve been using redux-saga a lot recently and they really fill the gap between the concept of a long running Task and asynchronous values/executions (Promises).
You can have your referentially-transparent cake and eat it too, but the main problem is that no one has developed a decent library that marries FantasyLand-compliant wrappers with Promise interop.
There are about 8 million Task/IO monad implementations and no one stopped for a second to think that `task.fork` could just return a Thenable and work with async/await as expected.
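Something like this sketch - a lazy Task that interops with async/await simply because run() returns a thenable (my names, not a real library):

    const Task = run => ({
      run,                                                 // lazy: nothing happens until run()
      map: f => Task(() => run().then(f)),
      chain: f => Task(() => run().then(x => f(x).run())),
    });

    const job = Task(() => fetch('/api/data')); // a pure description so far
    // const res = await job.run();             // ...and plain await at the edge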
I rarely use promises, just for the simplest logic. When a package returns promises in its API, I just use Rx.Observable.fromPromise(thePromise).
ReactiveX, especially RxJS, is so powerful for handling async events, from different sources to different logic. You can build powerful pipelines with it.
I don't agree. Promises are a solution to the asynchronous callback world, not a dream spec someone came up with.
1. Eager, not lazy: Why is lazy better? Sometimes I want eager, I use promises. Sometimes I want lazy, I use a promise getter. Done. What if it was the other case, how would I turn a lazy promise into an eager one without messy code?
2. No cancellation. Events that permit cancellation are rare. Situations in which you would want to cancel something are rare. If you face these, use a promise library that does permit cancellation. Bluebird does it.
3. Never synchronous. If you want synchronous, just don't use a Promise, use a function that takes another function. I don't get the point about "callbacks to sync". Callbacks are asynchronous. "Synchronous callbacks" may have this name, but they're not actually callbacks, they're functions. A function can take another function as a parameter, that doesn't automatically make it into a "callback".
In Haskell you can generate a list lazily. The last item in the list could be the final result value, whereas the leading elements could be the progress information. Cancelling a computation could be done simply by no longer "listening" to the result (i.e. stopping the evaluation process).
I guess you could implement this in JS by having a promise-like structure which returns a tuple containing progress information AND a promise for the remainder of the computation. In a sense, this is similar to the generator approach.
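A rough JS rendition of that tuple idea (the shape is invented for illustration):

    function countTo(i, total) {
      if (i === total) return Promise.resolve({ done: true, value: 'result' });
      return Promise.resolve({
        done: false,
        progress: i / total,               // the progress information
        next: () => countTo(i + 1, total), // a promise for the remainder
      });
    }

    // A consumer "cancels" simply by never calling next() again:
    countTo(0, 10).then(step => { if (!step.done) step.next(); /* or just stop */ });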
One problem is if your program execs an external program. At what point should a promise kill the external process?
I have a feeling the author doesn't understand Promises well. In my opinion, they are in fact designed poorly, but I don't see any problems with points the author describes.
He doesn't like that the callback is called immediately - but Promises just represent a result that will be available later and do not (and should not) guarantee when the function will be called. If you want to delay some function call, do it explicitly or use a delay promise.
In my opinion, the main problem with promises is broken error handling. They don't play well with exceptions. For example:
    var p = new Promise(function (res, rej) {
        throw new Error("System is broken");
    });
This code will just ignore the error. While it is expected that the runtime error would float up and terminate the program - that is what runtime errors are made for.
This also makes writing tests more difficult because tests often use exceptions to indicate failure.
I have some ideas how to fix it (neither is perfect), but the comment will become too long.
This is a fundamental issue with multitasking, in that an independent task will have its own stack, so errors can't propagate up the stack of the function that started the task.
e.g. consider this python code:
    from threading import Thread

    def start_task():
        Thread(target=do_task).start()
What happens if do_task throws an exception? The exception can't propagate up from start_task because start_task may have returned when the exception is thrown.
At some point you have to join your threads, and you have to await/then your promises, otherwise there's no well defined place in your program for the exception to go.
Linters can help with this. tslint is one example, I believe it can give a warning for unhandled promises.
`p` will reject, any function that awaits p will reject. The error is that you've fired off an asynchronous task but you don't have any code that cares about the result of that asynchronous task (and the stack of any code that
Two Erlang processes (~actors) can be linked together. If one then crashes (uncaught exception), the other one is automatically killed - unless it takes special steps to "trap exits", in which case it is sent a message describing the failure in its linked peer.
> otherwise there's no well defined place in your program for the exception to go.
I don't see the problem with that. Unhandled exceptions can occur at any place of your program.
I also don't see why the unhandled exception from the background thread cannot terminate main thread. Why not? That is how unhandled exceptions are supposed to work. Terminating the program is the optimal default behaviour for any error in my opinion. This way you won't miss them.
> I also don't see why the unhandled exception from the background thread cannot terminate main thread. Why not? That is how unhandled exceptions are supposed to work. Terminating the program is the optimal default behaviour for any error in my opinion. This way you won't miss them.
This is how unhandled promise rejections will be handled in node soon. In the browser, you can declare an event handler for unhandled promise rejections.
I don't think it is beautiful. This fills the program with lots of unnecessary `if`s for checking the result of each call. Also, it prevents one from chaining functions like a(b(x)).
> This code will just ignore the error. While it is expected that the runtime error would float up and terminate the program - that is what runtime errors are made for.
No, this behavior is consistent with how asynchronous programming works in JS.
> That is because browser environment catches all exceptions and displays them in console. So effectively they become handled exceptions.
No, it has nothing to do with the developer console. It has everything to do with the fact that async exceptions do not bubble up in the main execution thread. Promises work the exact same way since they existed as libraries long before they were included in the language. It's about consistency, nothing more, nothing less.
He literally talks about the work-arounds you can use to delay the function call in the article, and explains why you would prefer it to be default behavior.
Promises existed way before they were put in the specification and way before async/await. So no, promises could not have been advertised as part of async/await since they came before async/await.
I started working with JS promises specifically when they were barely available in a beta runtime. It took me over a year of working with them to really get a feel for them, and now it's been far longer. That's because while you can "understand" the description and use it just fine, a deeper comprehension and intuition takes much more time. I experimented a lot and insisted on writing my own helpers from scratch, without looking up other people's code, because I wanted to get a feeling for the details.
This article seems quite artificial to me, the problems mostly made-up.
I don't see the point of the first complaint. If you don't want to start right away, chain it to something that it should wait for. If it should not wait, then it can start right away. He writes "Functions rescue us in this case because functions are lazy.", which I don't quite understand: what is he running through promises if not functions? His "betterFetch" example mixes synchronous and promise syntax - how about using async/await if you prefer the former? I admit though I don't quite get the point of that example.
I don't understand the whole "run a promise" idea either - because you don't "run a promise", that whole notion has nothing to do with what "promise" means. Just look at the word! It represents a (wrapped) future value. Where does the idea of "running it" come from? How do you "run" a (future) value?
You have a function and it is quite easy IMO: Using a promise you chain it to whatever you want to wait for. These days you can even use semi-synchronous syntax (async/await). "Running a promise" makes no sense to me, you run functions, and I don't see where the difficulty lies here?
The second point, cancellation, has been discussed very, very thoroughly - after all, this was on the table to be standardized. One of the issues he raises is the same as point one - if you have a chain it's automatic. The main issue of cancellation is that you have zero control over the actual asynchronous operation that the promise actually stands for - because this is controlled by the OS alone! If you started I/O, what does "cancelling the promise" mean?
1. If it is still waiting: if you don't want to run something, make sure the previous step returns a rejected promise. You can easily "cancel the promise": just let your promise function check something in the parent scope (via a callback, or because it is in its lexical scope) when its chained function starts, and if that says "you are cancelled", don't do it. You can put such a check as a standalone function anywhere in the promise chain you created; just let that "amIcancelled()" function throw or return a rejected promise (a sketch follows after point 2). The whole chain aspect is something the article is missing.
2. If the code is already running: you cannot cancel the actual (OS controlled) asynchronous operation, nor can you cancel a running JS function (unless you use async/await - see the bottom paragraph).
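The point-1 pattern in code (a sketch; the flag could equally be set via a callback):

    let cancelled = false;

    const amICancelled = x => {
      if (cancelled) throw new Error('cancelled'); // rejects the rest of the chain
      return x;                                    // otherwise pass the value through
    };

    fetch('/api/step1')
      .then(amICancelled) // checked before each subsequent step starts
      .then(res => res.json())
      .then(amICancelled)
      .catch(err => { /* cancellation and real failures both land here */ });

    // Elsewhere: cancelled = true;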
I agree that promises are not perfect, but async/await - not mentioned at all! - makes it a bit easier for many people, as long as they don't forget one thing: even if your functions now look like synchronous ones, there is a fundamental difference. A synchronous JS function is never interrupted by any other code; an async function is suspended, and other JS code gets to run in the middle of it when it encounters an "await". This is something new, first introduced by generators - before that, JS functions were atomic (now some are not).
> you cannot cancel the actual (OS controlled) asynchronous operation
This is way too broad a claim. If I do something like:
(sleep 1 ; echo "done") &
kill $!
I'm pretty clearly "cancelling the actual (OS controlled) asynchronous operation". Now it may have done some sleeping at that point, and if you replace the sleep with something that has side-effects, then some of those side-effects may have occurred, but the operation is still being "cancelled".
Obviously it's not the case that every operation can be (meaningfully) cancelled, but some can. This is even more true when you consider that when you're using JS, you're generally way above the OS level. XMLHttpRequest has an abort() method for a reason: The underlying socket request may be queued up based on your browser's connection limits and, even if it's kicked off, your browser is going to have multiple opportunities to abort the process even if none of the underlying sub-operations can be preemptively cancelled.
> It took me over a year of working with them to really get a feel for them
I had a similar experience. It took quite a while for me to stop shooting myself in the foot. My takeaway from that experience was that, while they do have certain advantages, Promises suck. Any abstraction so unintuitive that it takes beginners dozens or hundreds of hours to master is probably not an abstraction worth using - especially if it is supposed to be a primary feature of the language.
I had the same experience with the callback pattern. It literally took a whole year to grok. And I code almost every day. I'm now a ninja with callbacks. So it's hard to motivate myself to learn Promises. Synchronous code is easier to deal with, and you get concurrency by thread abstraction. But it will eventually bite you when you start to get double transactions, e.g. line 1 checks if there are funds in the account, line 2 withdraws the money, line 3 inserts the goods. But then another thread takes the money between line 1 and 2. And then the "single threaded" event loop actually becomes easier to deal with than making sure your code is "thread safe" with locks etc.
Cancelling the promise can be useful in fact to prevent doing computations that won't be used anyway. I tried to explain it in this comment: https://news.ycombinator.com/item?id=16386454
Promises are classic JS clusterfuck. Replace something shitty, like nested callbacks, with something even more shitty. Thereby ignoring 40 years of experiences of other languages.
I don't think that's quite fair. At worst promises are marginally less shitty than callback hell, so long as you don't go nuts and nest them too deeply.