> Another way of solving this problem comes from the realisation that we only really need to keep track of the dependencies for a promise while the promise is in the pending state, because once a promise is fulfilled we can just execute the function right away!
This is a common gotcha in Javascript implementations, in that you think you want this, but you really don't! Now you never know if your code is synchronous or will run in a subsequent tick. Your call tree will look completely different depending on race conditions...
This comes up in user code as well; any time you write a function that takes a callback, it's probably a good idea to always run it either in the same call tree or in a new stack, but never mix the two. It's usually easier to just do the latter using process.nextTick.
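To make that concrete, a sketch of the mix to avoid (cache and db.fetchUser are hypothetical):

// Hypothetical: a lookup that calls back synchronously on a cache hit
// and asynchronously on a miss -- the "sometimes same tick" mix to avoid.
function getUser(id, cb) {
  if (cache[id]) {
    cb(null, cache[id]);        // same call tree
  } else {
    db.fetchUser(id, cb);       // some later tick
  }
}

// Safer: always defer, so callers see one consistent behaviour.
function getUserDeferred(id, cb) {
  if (cache[id]) {
    process.nextTick(function () { cb(null, cache[id]); });
  } else {
    db.fetchUser(id, cb);
  }
}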
You don't even have to resolve the promise via a callback for that to be the case - you can use the static resolve method. This is really useful if you have a function that can sometimes return a promise and sometimes a "normal" value.
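For example, a minimal sketch (getCached and fetchRemote are made-up names):

// Promise.resolve wraps plain values in an already-fulfilled promise and
// passes promises through unchanged, so callers treat both cases the same.
function getValue(key) {
  var cached = getCached(key);         // may be a value or undefined
  if (cached !== undefined) {
    return Promise.resolve(cached);    // "normal" value -> promise
  }
  return fetchRemote(key);             // already a promise
}

getValue('answer').then(function (v) { console.log(v); });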
It's actually not the next event loop tick (or it's not supposed to be; some implementations get this wrong). Promises use microtasks, so an already-resolved promise dispatches at the end of the current tick.
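You can see the ordering with a snippet like this; in a spec-compliant environment it logs sync, then promise, then timeout:

setTimeout(function () { console.log('timeout'); }, 0);            // macrotask: next tick
Promise.resolve().then(function () { console.log('promise'); });   // microtask: end of current tick
console.log('sync');
// logs: sync, promise, timeout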
I get what you're saying, but for some reason it doesn't feel right to me.
Part of it is that if your API requires you to think about whether or not it will run in the current or next tick, that's probably not the right API.
I find in JavaScript it's best to just assume your callbacks could be called at any time, in any order. And if you need a specific priority then write some code that actually explicitly orchestrates that.
I don't blame you. I basically don't use any libraries any more because the proliferation of promises (and libraries with promise-like architectures) has made everything impossible to reason about or debug. But I have the luxury of being a wizard in the forest who answers to no one.
This is a common gotcha in Javascript implementations, in that you think you want this, but you really don't!
No, I really, really, DO want this. And I want my entire codebase written in such a way that this is fine.
You need consistent abstractions to make asynchronous logic flow comprehensible. And once you have them, stick to them rigorously. Promises are such an abstraction. Build on it, don't ruin it.
Yes, there are lots of ways to ruin it. And for every way it can be ruined, there is a way to make it work without breaking the abstraction. Do the latter.
I think you (or I) misunderstand what you're replying to. When they said "you really don't want this", they're saying you don't want promises to sometimes be async and sometimes resolve in the same tick. Always make them async. That agrees with you, since it would mean you can always treat promises the same.
>sometimes be async and sometimes resolve in the same tick
That's still asynchronous though, isn't it? (Correct me if I'm wrong, I'm just spitballing, still learning.)
There's no minimum number of ticks a process has to take for it to count as async. What gives it that async-ness is that the upper bound is unknown, indeterminate, and changeable.
Ergo, if some process is sometimes synchronous, and sometimes asynchronous, it's really a process that is asynchronous, but just so happens to occasionally execute in synchronous-like time.
The meaning of 'tick' here is special, since we're talking about Node. It's not a measure of time, it's an iteration of Node's event loop. Because Node is single-threaded, everything that happens in a single tick (i.e., one run of the event loop) is in the same execution context/stack/call tree/whatever you want to call it. So we generally call this "synchronous", although technically you could be holding onto the tick forever, firing off a bunch of nonblocking IO in there and doing your own polling/event management.
Of course, that would be silly, since Node manages an event loop for you. So your nonblocking IO would instead push events onto the queue, and let Node move on to the next tick. If something happens in a different tick, it gets its own clean slate of an execution context, and we call this "async", since the function that triggered it is no longer running.
The general pattern is: do your initial processing, set up promises, and return a promise that depends on those promises and does the next piece of work.
The promises you set up should always be assumed to do things in whatever order they happen to happen. Your code that does interesting stuff happens before any promises are seen. The interesting code in your returned promise will happen after the necessary promises are resolved.
As long as things are always set up this way, you'll be fine no matter how the promise library is set up. Which is good because unexpected implicit dependencies are a problem.
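A rough sketch of that shape (parseArgs, loadConfig, loadUser and render are placeholders):

function startUp() {
  // interesting synchronous work happens before any promise is seen
  var settings = parseArgs();

  // set up promises; assume they settle in whatever order they happen to
  var configP = loadConfig(settings);
  var userP = loadUser(settings);

  // return a promise depending on those promises that does the next thing
  return Promise.all([configP, userP]).then(function (results) {
    return render(results[0], results[1]);
  });
}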
I'm going to say that I've also been bitten by this, but in C#. Callbacks (or async tasks/promises, observables, etc.) should not be called immediately when they are registered. It makes it way too hard to figure out if your code is correct.
This doesn't break the abstraction, but it does make the abstraction simpler. It means that your callbacks (or whatever abstraction is built on top of them) are always called from the same context.
Oh yeah, I don't think any real/production Promise implementation behaves this way - but it can be unclear why. As a beginner, you might expect Promise.resolve('foo').then(...) to happen immediately, as the quoted passage suggests; I was trying to get at why that's not the case.
I have seen the Promise concept abused in a project at work. All the promises just returned Future objects, which were exposed everywhere. And if a future failed for some reason, there was no way to get a new Future: all callers were doing future.get, resulting in an exception thrown straight to the caller.
What a mess.
IMO if you're to get the benefits of promises, you need to use callbacks rather than using synchronous accessors like a .get() method. Making a 'get' accessor available just encourages blocking, reducing the very parallelism and asynchrony that promises are introduced to make usable.
When you chain and compose together operations on promises using a nice fluent API, you're building up a monadic value that encapsulates the whole computation. You can then apply error handling to the final monadic value and be sure all errors across the whole computation will route to a single handler. When reading and writing code, the division between "creating the computation" and "running the computation" is thereby explicit and the error handling obvious.
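For example, with made-up fetchJson/summarize/showReport/showError:

// Creating the computation: a single value describing the whole chain
var report = fetchJson('/api/user')
  .then(function (user) { return fetchJson('/api/orders/' + user.id); })
  .then(function (orders) { return summarize(orders); });

// Running/consuming it: any error above routes to the single handler
report.then(showReport).catch(showError);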
I've found promises most useful in managing asynchronous operations in the browser. With UI widget support (disabling things for the duration of a promise computation, showing spinners, showing progress bars, etc., automatically handling errors in an error alert area) it makes non-blocking UI much much easier to get right. For parallelism, I find different approaches easier to reason about, whether it's work queues or something like parallel LINQ / Java 8 Streams.
i have really been trying to work on this, and my js design in general. i have been trying to watch at least 1 doug crockford lecture a day and dig into node design patterns.
i am a mediocre developer and it is super hard to find resources. either they are way too beginner, such as an introduction to for loops, a project structure that is not modular but one monolithic binary for the basic example, or an otherwise simplified use case that cannot really be extended very well.
otoh, some stuff is way too intense and i can't grep that either. if you know of any good resources that explain concepts like promises, leveraging functional closures in a factory/singleton, or otherwise great code concepts, explained either at the intermediate or beginner level, that would be great.
i love resources that assume limited knowledge; they just don't seem to go into the interesting/useful parts of the language in enough depth to leverage them
The difference is SUPPOSED to be that a Future is read-only while a Promise is settable. And should be set at most once! So you can use a Promise as a Future, but you can't always use a Future as a Promise.
Linguistically you can understand this as, "I Promise you'll see a Future answer." I need to be able to write to the Promise to fulfill it. You can only hope that some day the Future will arrive.
That changes by language - what you say is true in Scala, but in JavaScript the terminology is different. A future is not defined; a promise is (what Scala calls) a future, and a deferred is (what Scala calls) a promise.
That is what I mean about people being inconsistent.
The ideas are much older than Scala, JavaScript, and many of the people commenting in this discussion. Believe it or not, both terms date back to the late 1970s.
Scala got it right. JavaScript got it wrong. And polyglots like me have to suffer with having to sort out who means what, where, on what platform.
In Javascript: Promise is read-only (public interface), Deferred is writable (private)
on JVM: Future is read-only (public), Promise is writable (private)
The writable interface has "resolve" and "reject" to return a value. The readable interface has "then", "error", "finally", etc., which let you compose and combine functions that deal in asynchronous values.
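A hand-rolled sketch of that split (illustrative only; with the native Promise constructor, the executor's resolve/reject are the writable side and the promise object is the readable side):

function deferred() {
  var d = {};
  d.promise = new Promise(function (resolve, reject) {
    d.resolve = resolve;   // writable side, kept private by the producer
    d.reject = reject;
  });
  return d;
}

var d = deferred();
d.promise.then(function (v) { console.log(v); });  // readable side for consumers
d.resolve(42);                                      // settle it exactly once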
The best Javascript implementation of promises is Bluebird, which doesn't expose a Deferred instance, to avoid confusion; it is very nice. bluebirdjs.com/docs/api-reference.html
As far as I can tell, there are two differences. 1) a Promise typically has a "public" API and a "private" API -- the private API has read/write access and the public API has only read access. This way you can return the public API object to consumers of a method/library without worrying about them mutating the promise. 2) Promises tend to be more...composable? Java's "CompletableFuture" API fails on the #1 point I made above, but otherwise has a whole slew of then* methods that aren't on Java's Future. Then again, that might just be because Java's Future is mind-numbingly simplistic.
Nicely done, thanks! Still, having to explain a concept as simple as eventual computation reinforces my belief that promises as a whole are broken, and that this should be done in something that looks like synchronous code, with some help from the underlying runtime. (And no, ES7's async is not good enough for me here.)
Sounds like a leaky abstraction. Easier to get started with maybe, but woe betide the legion of programmers who make assumptions based on the appearance of synchronous execution only to be left stranded when those assumptions don't hold.
Asynchronous execution isn't an unfortunate design choice to be papered over; better to make it as easy as possible to learn and work with.
I am not calling for ignoring the async nature of computing in general, I am calling for better tooling.
The Linux kernel (like other OS kernels) handles I/O asynchronously, yet processes using open(2) + friends work quite well and are usually written in a synchronous style. That the actual asyncness is hidden in kernel space does not in general lead to bad programs; and whoever needs async within a single process can still use the async tools provided by the kernel when needed. That tooling is much better, as it leads to simpler code.
I don't see ES7's async support being there yet. Elixir, Go, a number of others: much more.
Imho in this area there is really no general good solution: Either you use the leaky blocking IO abstraction over the async IO - and you will require multiple threads/tasks for doing anything more complicated and lots of complex synchronization. Or you use the async APIs directly and need to work with callbacks/promises/etc., but you can at least avoid synchronization.
I personally prefer the async solution, and I think async/await is a good solution to make async code a little more readable.
> Either you use the leaky blocking IO abstraction over the async IO
What do you mean, the leaky blocking IO? It is the non-blocking IO that is leaky, isn't it? If, at the business logic level, a request being processed needs to go through steps a, b, c and then return (b can't be done unless a finishes, and c can't unless b finishes), that maps pretty cleanly to an execution context of a thread/actor/goroutine/task/process.
Shoving IO callback functions in between the handling of those steps, or spreading them across callbacks or promises, is the platform's inability to handle concurrent processing leaking through the abstraction.
What I meant is that blocking IO is a leaky abstraction, because IO is always asynchronous. What blocking does is add an extra step (start the operation AND wait for the result) and thereby hide something.
I agree that if you have some pure linear steps (do A, then do B, then do C, and probably use the result from the last step for the next), then the synchronous abstractions work fine. But as soon as you add various timeouts, different error handling strategies, multicast communication, cancellation and other stuff to your async IO processing, then it isn't purely linear anyway. I often end up implementing quite complex state machines for heavy IO related code, and for this I am more happy working in an asynchronous single-threaded environment.
I think you're using the term "leaky abstraction" much more literally than most other people do. If we take your interpretation to the logical (albeit extreme) conclusion, then even "read the value of variable X" is asynchronous and should really be modeled asynchronously.
> for heavy IO related code, and for this I am more happy with working in an asynchronous single-threaded environment.
You're not describing the majority of applications. The vast majority of applications do negligible amounts of complex network I/O (ignoring GUI, which is handled for them by... an abstraction).
It would be a bad idea to base our abstractions on things only needed for a tiny minority of applications.
I don't think there are really numbers that tell us what the vast majority and the tiny minority of applications use. I think most people will have here some personal bias, because they see the applications that they are working on, the applications that their particular industry is working on and think that the rest of the world might be similar.
When I worked on simulation code some years ago I didn't think about complex IO at all. Even when I did some basic web development (with PHP...) 16 years ago I had nothing but synchronous IO and was happy. Now realtime gateway systems are my daily business and the "typical application" looks completely different.
And you already have given GUI as a counter example yourself. More than a tiny minority of applications have a GUI. And all major GUI frameworks are built on top of asynchronous abstractions, because you want GUIs to be reactive (no hanging due to blocking stuff), and because every input port (either from a human input device or from network) can change the shared state and cause output to more than one output device.
> I don't think there are really numbers that tell us what the vast majority and the tiny minority of applications use. I think most people will have here some personal bias, because they see the applications that they are working on, the applications that their particular industry is working on and think that the rest of the world might be similar.
I'm specifically going against my personal bias. I write server-side applications, but most of the code out there is basically apps and such.
> And you already have given GUI as a counter example yourself. More than a tiny minority of applications have a GUI. And all major GUI frameworks are built on top of asynchronous abstractions, because you want GUIs to be reactive (no hanging due to blocking stuff), and because every input port (either from a human input device or from network) can change the shared state and cause output to more than one output device.
If you look at how the users of GUI frameworks write their code, it's mostly synchronous code with some interspersed message-passing type code. The asynchronous nature of GUIs is only really relevant when dealing with button presses and such.
You don't want to have to program all of it as if it's asynchronous. Having to reify the stack manually into a state machine would quickly induce madness for all but the most trivial of GUIs.
Reactive GUIs are pretty simple to do using blocking code: Have a main thread responsible for the GUI, offload long-lasting work to other threads. Use message passing for communication. Done. You're still programming in a mostly-synchronous model. I think my point stands.
EDIT: Btw, the beauty of this message-passing-mostly-synchronous model is that it doesn't force you to completely change everything in your program around the asynchronous bits. You restrict your asynchrony to where it really matters. That's a big deal for clarity, IMO.
Creating a socket() creates a blocking socket by default, doesn't it? Doing a recv on it blocks the current thread. Pretty sure by default IO is synchronous. Sure, at some point they are all connected to a single wire plugged into the switch and there is a single network card. But if you are thinking that low, then, well, javascript is not the framework to talk about; we are in firmware, dpdk, and driver land.
The socket API is synchronous. To do stuff "asynchronously" you have to go the extra mile and turn on some options, use select/poll/epoll callbacks, explicitly pass state around between callbacks, etc.
> What blocking does is add an extra step (start the operation AND wait for the result) and thereby hide something.
See, to me, at the socket layer non-blocking is the extra step. Having to enable a new mode, then use some extra polling hub to see which socket has data, etc. That mixes context between separate requests, and so on.
> I often end up implementing quite complex state machines for heavy IO related code
In specialized cases, for performance reasons, I think switching to asynchronous makes sense, but that is a rather special case. It works for short callback chains and situations optimized for IO throughput, say things like nginx or haproxy. Doing it for an online store or some business middle layer would be very tricky.
That the posix socket API is in blocking mode by default was a decision made many years ago, for some specific reasons. If the reasons at that time had been different, it could now also be async by default. So I don't think that tells us whether one or the other model is the more natural or better one.
Look at the LWIP library for embedded, for example. There the async API is the default one, and you have to go the extra mile (and take some performance hit) to get the blocking behavior. The Midori OS prototype also seems to have implemented a pure async IO subsystem.
I also think it depends on the use-case whether one or the other model is more appealing, but I don't think it's only for performance reasons. A lot of people here think about IO mainly in terms of http request processing, because web services are the usual domain here. For such applications I think blocking abstractions come quite naturally, because there you read the input stream once, then do a sequence of intermediate processing steps and then write a response. If you instead are working on other domains (writing a message broker, a gateway application, an http/2 library, a realtime sensor-data processing system, ...) where there is no linear sequence of read/write operations and where you have shared state between your IO endpoints, then your needs and the preferred building blocks might be different.
Apples and oranges. You're describing a platform where it's considered acceptable for programs to block on IO and jump through hoops in the rare cases where that's absolutely not ok. A set of assumptions that maybe still made sense 30 years ago, and also a leaky abstraction, but one we've become accustomed to working around.
If you want that in JavaScript, you can have it; promises were created on the assumption that asynchronous logic is desirable as a matter of course.
What are you going to do in Javascript, though, if the I/O takes some time to happen? In golang or lua or whatever I can say "this execution context should block on the I/O for x seconds, then give up and return an error." (the other tens of thousands of execution contexts can keep going)
In Javascript I would probably do... the same thing? But I would do it using promises?
You can do something like this (using Bluebird's Promise.delay for convenience):
// using Promise.race so whoever finishes first wins
var fetchingWithTimeout = Promise.race([
  // first promise: actually try to perform the computation
  fetching,
  // second promise: wait x seconds, then reject
  Promise.delay(x * 1000).then(function () {
    throw new Error('Timed out');
  })
]);
You can apply the timeout wherever you want (or apply different timeouts in different corners of the app) without throwing out the raw promise to perform the computation no matter the time cost.
Yeah, that would work. My question is really whether I should prefer this over a sequential API that does the same thing, and why. "Does the same thing" here includes the property of "not blocking an OS-level thread".
Some Promise libraries add support for things like `.timeout`, so you can force a promise to reject if it takes too long to resolve.
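With Bluebird, for example, it looks something like this (useResult and handleTimeout are placeholders):

// reject with a Bluebird TimeoutError if it takes longer than 5 seconds
fetching.timeout(5000, 'took too long')
  .then(useResult)
  .catch(Promise.TimeoutError, handleTimeout);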
In cases like a node.js server, Promises allow a single node instance to handle thousands of concurrent requests because the event loop isn't blocked waiting for a single IO request to complete; the process can happily do lots of other things while Promises are pending.
Yes, I expect Promise libraries to support timeouts.
So it's right that promises allow me to do the same thing I would do with sequential code in other languages? If I had the option of using coroutines when would I choose to use promises?
Edit: I'm asking because the context of this thread is that one person said that sequential APIs for asynchronous operations, such as open(2), are pretty nice, and someone else said no they're not pretty nice and we should explicitly deal with the asynchronous nature of operations like open(2) by NOT writing sequential code there.
If you have promises OR you have coroutines, your life is good. Either way, you don't have to choose between writing fragile sequential code and writing error-prone synchronization logic.
If you don't have either, then you'll probably end up re-implementing one or the other eventually anyway.
NodeJS: you needed 5% of your code to be async, so we made everything async so you get to write awkward async-but-not-really code for the other 95%, too. That's better, right?
At least that's how it's always felt when I've used it for anything non-trivial yet well within its typical set of use cases.
Well yeah, I usually end up writing lots of async code because you pretty much have to, since all the libraries assume that's what you want, but I end up shoehorning in dependencies (promises, callbacks, callbacks mutated into promises by a promises library; it's a mess) until it could just as well have been written as at most two threads. So it's async in pattern but not in fact.
Most of the time that IS what you want in general in node, as if you start running anything with even moderate CPU time (let alone a call to an external service) you block the event loop and chug the system. The big performance gain you get with node, in the form of max simultaneous connections per clock, comes from not blocking. Most (95%+) of the packages I've seen use callbacks, and maybe ~10% expose promise interfaces even if they use them internally.
Despite the downvotes, you are absolutely correct. My feeling, after working on a moderately complex server application in Node, is that it was designed for only very simple applications that can be done with one level of callback. I find the "it works this way because it's better" cult quite irritating. It works that way because the #*(^#$&^ JS interpreter isn't multithreaded.
It wouldn't even need to be multithreaded, I don't think. You could have some green threads-type system, using asynchronous I/O, a single-threaded server, and something along the lines of Lua's yieldk. Have a new script spring into life for each connection, suspend it when an I/O operation starts, and resume it when the results are ready (or when something goes wrong). I think this is what Erlang does.
(I'm sure there are potential problems with this, but hopefully it would be overall no worse than the current situation, while being easier to work with.)
Potentially almost all of it, but the point is that if _most_ of your requirements are "get thing, do thing with it, decide other thing to do based on that" you don't have to spin off three goroutines to accomplish it (roughly the equivalent, as far as wasted effort/mental overhead goes, of needless promises/callbacks which are abused to work out identically to performing those things in sequence). You do it all in one. Maybe it's just me, but I find that most of the time what I need to do can be boiled down to between 1 and 3 (and usually on the lower end of that) lists of things that need to be done in sequence (threads, if you will) and that Node is a poor fit for that, since its ideal case seems to be doing dozens of things, none of which depend on one another, then collecting them at the end, which is something I rarely ever need to do—if I do, typically the "dozens of things" are really just one or two things with different data and therefore are very easy to parallelize without resorting to Node's typical patterns.
I'm sure there's a workload where async-all-the-things makes things easier rather than harder, I just haven't run into it.
[EDIT] "rarely never" to the intended "rarely ever"
What JavaScript needs (and other languages need, too) are first-class continuations. First-class delimited continuations to be precise. If you have those, you can implement whatever control flow you'd like, such as coroutines.
It does. The problem is simply solved because Elixir has preemptive multitasking [1], and supports millions of processes on a simple server. So let's say you have a blocking function fetch_data_sync, and want to execute it in a non-blocking way: just spawn a process:
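(A minimal sketch using the core spawn/send/receive primitives; fetch_data_sync and arg stand in for the blocking call and its input.)

# spawn a separate lightweight process to run the blocking call
parent = self()
spawn(fn -> send(parent, {:data, fetch_data_sync(arg)}) end)

# the caller keeps going and picks up the result whenever it likes
receive do
  {:data, result} -> result
after
  5_000 -> {:error, :timeout}
end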
So you can mix 'red' and 'blue' functions as much as you want. Problem solved.
[1] Erlang/Elixir multitasking is often called preemptive, but in fact it is a little bit more subtle than that. It is however a good first approximation when writing code.
"Erlang/Elixir multitasking is often called preemptive, but in fact it is a little bit more subtle than that."
The same is technically true of Go. In practice, in both cases, unless you're backing onto a lot of C code, or in the case of Go, manage to write a really tight loop that never gives the scheduler a chance to run, it is not something that comes up in practice very often. (I'm at a total of 0 after ~6 years of use of Erlang and Go. YMMV, since I never did use any oddball C extensions, but every month for both languages that's less of a restriction than it used to be.)
The platform is better designed and smart enough that you don't need red and blue functions. You just have green functions (being a bit silly here with colors).
> [From Post] Async functions don’t compose in expressions because of the callbacks, have different error-handling, and can’t be used with try/catch or inside a lot of other control flow statements.
Good news again. Don't worry about that. Work sequentially as you need inside each request/task. If you want to do multiple tasks in parallel, spawn multiple tasks. The Erlang folks (and Go-routine folks too, but in a bit of a different way) figured this out many years ago and have built large, complicated and reliable distributed systems (WhatsApp, smartphone<->internet gateways, databases etc...)
Yes. In elixir and erlang, you can call a function without knowing whether it will wait on an asynchronous message. The function you called will not return until the message is received or a timeout is reached.
the conclusion of the article specifies goroutines as solving this problem. Erlang processes are the same.
Here's a simple test for checking this: does the language provide a green-thread/coroutine abstraction, or do you work at the OS thread level:
In JS, C#, JVM languages, Python etc. you work at the OS thread level, so async operations need to be handled differently from synchronous ones.
Elixir/Erlang, Go, Haskell etc provide green thread abstractions, so your green thread can block on an async operation without blocking the OS thread.
Elixir/Erlang is especially nice since each process (green thread) is completely isolated. This has multiple advantages:
i) when an erlang process (green thread) crashes, it doesn't take the other green threads on the OS thread down with it.
ii) garbage collection is per erlang process, not for the entire run-time. So the garbage collector doesn't halt everything when it runs. This makes it a really solid option for soft real-time systems
iii) stack trace is also per green thread, so you only see execution steps for the particular green thread in the stack trace, this makes debugging much much easier
iv) erlang processes can communicate with processes on other machines, this makes it a really good option for dist systems
The only drawback is that it's slow for cpu bound stuff, so i'd avoid it for anything that's computation intensive
In golang, by convention, you only ever use asynchronous functions that return values. So there is "only one color." (You also have the option of writing callback spaghetti or implementing promises, but why would you do that?)
In elixir, you have the option of blocking on a Task. So you can do the same thing you do in golang, if you want. I don't know enough about elixir to say what the culture is like around this.
I found this document: https://github.com/kriskowal/q/tree/v1/design very helpful to understand how promises work behind the scenes
Also liked being able to access the author's reasoning and the motivations behind his design decisions
The second nGram linked there (showing the rise of fulfill - with 2 l's in American English)[1] seems to show that the increase in use of 2 l's started around the same time as Noah Webster's dictionary was first published. [2]
I've always been fascinated why there is such a difference between spelling in "British" and "American" English, I remember reading that the main reason was Webster's dictionary and his desire to reform spelling according to pronunciation. That Wikipedia article though quotes John Algeo: "He was very influential in popularizing certain spellings in America, but he did not originate them."
So now I'm as confused as ever. Why DID the spellings change so significantly? I can accept that they took hold due to Webster's work, but I wonder why the differences arose in the first place?
Nice summary! I wish I had this when I started getting into Promises a couple of months ago.
For a really interesting look ahead to how Promises (and more!) might address sync/async symmetry in ES6/7, check this out: https://youtu.be/DqMFX91ToLw
Really a lot more explanation than necessary. Get a feel by using them and using a debugger or just console.log to trace execution. Then compare with equivalent callback code and async/await with babel.
Then use async/await and look at node-modules.com to find modules that convert to promises so you can use async/await.
See the section about handling errors and promises. Very well done!
You won't see that in the typical "check out how cool promises are, async and fast all the things".
And promises indeed look cool, and make for nice short demos, and they are easy to understand in short examples. Only when you start building large applications based on them, where error handling has to be done, do you start realizing they are a bit like threads. Promise callback chains started from one event can interfere with promise callback chains started from another event. And if they modify the same data, you now have a race condition as well.
I would basically look at this statement "In the synchronous world, it’s very simple to understand computations when thinking about functions: you put things into a function, and the function gives you something in return" and follow through, but in a different direction -- pick the synchronous world if you can.
Sequential things should be sequential and concurrent things should be concurrent. Single request processing is sequential. For this request do x, y, z in order. Can't do y unless x finished. Well, sit and wait for x to finish. But requests themselves can be concurrent and run in parallel. For example a request comes in, reads the database, updates the database, bumps metrics, makes other sub-requests and then responds. It is sequential and should be kept sequential. If your platform cannot handle that, think very hard about your platform, and perhaps pick a better one. What does that mean practically? It means picking green threads (Python gevent, eventlet), it means Elixir, Erlang, Go goroutines, Rust's threads. Streams can be used as a higher level abstraction sometimes, and so on.
Another good way to make things sequential is C#-style async/await, which is enough syntactic sugar over promises that everything looks sequential again.
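In JavaScript terms, a sketch (loadUser, loadOrders and summarize are placeholders):

async function report(id) {
  try {
    var user = await loadUser(id);       // suspends this function, not the thread
    var orders = await loadOrders(user);
    return summarize(orders);
  } catch (err) {                        // plain try/catch works again
    return { error: String(err) };
  }
}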
It has regular threads, but with appropriate data-race safety guarantees. So that won't help when a large number of concurrent contexts are in play, but perhaps it should first become a bottleneck and then be solved, because it might just be good enough.
IMO the Promise API in javascript is incomplete when it comes to error handling. If you use the native implementation, all you have for error handling is .catch, but that changes the result of the computation. It would be extremely convenient to have out of band onSuccess/Failure/Complete lifecycle callbacks which don't have return values, but do let you perform logging, etc. on the side without affecting the result. You wind up having to catch errors and re-throw them, which is error-prone and janky.
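i.e. the workaround today looks something like this (doWork, logError and useResult are placeholders):

doWork()
  .catch(function (err) {
    logError(err);   // the side effect you actually wanted...
    throw err;       // ...plus a re-throw you must not forget, or the chain
  })                 // continues as if the error had been handled
  .then(useResult);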