Recent versions of Chrome DevTools have a new profiling feature called 'Record allocation profile' that may help. Enable it around a few of the sawtooth cycles and it will give you a profile based on a sampling of the allocations that happen during that period. The profile includes the stack trace at the time of each allocation, which should help you figure out where the allocations are coming from.
Thanks, this view looks really interesting and I hadn't looked at it deeply.
With that said, do you have any advice on how to use it in practice? Timeline profiles tell me my app goes through maybe 5MB of heap per second, but when I use this feature for, say, 5 seconds, it tends to report 2-3 functions as having allocated 16KB each. (And if I run it again, I get similar results but with a different 2-3 functions.) Is it just reporting a very small subset of allocations?
This profiler is sampling based. It takes a sample once every 512KiB allocated (on average, randomized) and reports all the sampled allocations still alive at the end of the interval. So, yes, it reports the subset of allocations that were sampled and are 'leaking' past the interval. In that sense it is better suited to finding memory leaks.
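To make the 512KiB figure concrete, here is a hypothetical sketch (not V8's actual code) of how a sampling allocation profiler decides which allocations to record: the gap to the next sample is drawn from an exponential distribution whose mean is the sampling interval, so on average one allocation is sampled per 512KiB allocated.

```javascript
const SAMPLE_INTERVAL = 512 * 1024; // bytes; V8's default

// Exponentially distributed gap with mean SAMPLE_INTERVAL (randomized
// so that fixed-stride allocation patterns can't dodge the sampler).
function nextSampleGap() {
  return Math.max(1, Math.ceil(-Math.log(1 - Math.random()) * SAMPLE_INTERVAL));
}

function simulateSampling(allocations) {
  const samples = [];
  let bytesUntilSample = nextSampleGap();
  for (const { site, size } of allocations) {
    bytesUntilSample -= size;
    while (bytesUntilSample <= 0) {
      samples.push(site); // this allocation gets a stack trace recorded
      bytesUntilSample += nextSampleGap();
    }
  }
  return samples;
}

// 5MB/s for 5 seconds ≈ 25MiB allocated → expect on the order of
// 25MiB / 512KiB ≈ 50 samples, spread over whichever call sites
// happened to hit a sample point — which is why repeated runs report
// a different handful of functions each time.
const allocs = Array.from({ length: 25 * 1024 }, (_, i) => ({
  site: `fn${i % 10}`, // hypothetical call-site names
  size: 1024,
}));
const sampled = simulateSampling(allocs);
console.log(sampled.length); // roughly 50, varies run to run
```

This also explains the small reported sizes: each sample stands in for ~512KiB of real allocation, and only the samples still alive at the end of the interval show up in the profile.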
If you want to look at all the allocations during the interval, then you can use the 'Allocation timeline' profile – this will give you all the allocations, but note that it may have significant overhead.
Thanks for the info. Is there a way to get the Allocation timeline to report all allocations, though? It seems to only report objects that are uncollected (that show up as blue in the timeline). That's useful for finding true leaks, but in my case (trying to fix a sawtooth pattern of heap usage), stuff that was allocated and then quickly GC'ed is exactly what I want to know about. Or am I looking in the wrong place?
To be precise, Node.js 6.x will include V8 5.0. There has been a little discussion on being able to include V8 5.1 down the road, maybe, but there is nothing concrete on it at this point.
Note that the PTC syntax _ensures_ tail calls; tail calls themselves have _already_ been approved in ES2015 and are live in V8 (behind a flag). You can follow their status here: https://bugs.chromium.org/p/v8/issues/detail?id=4698
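For anyone unfamiliar with what's being flagged on or off here: a call is in tail position when it's the very last action of the function. Under proper tail calls (strict mode only, behind V8's harmony tail-calls flag) such a call reuses the caller's stack frame, so a loop like this runs in constant stack space; without the flag it computes the same result, just with a growing stack.

```javascript
'use strict';

// Accumulator-passing style puts the recursive call in tail position:
// nothing remains to do in the caller after `factorial` returns.
function factorial(n, acc = 1) {
  if (n <= 1) return acc;
  return factorial(n - 1, acc * n); // tail call
}

console.log(factorial(5)); // 120
```

Note that `return factorial(n - 1) * n` would _not_ be a tail call – the pending multiplication forces the caller's frame to stay alive.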
Never use a single workload as a predictor of performance of the universe of all other workloads. The best benchmark is not a benchmark at all – it is your actual workload.
V8 has had arrow functions for a long time, but until recently they were not fully spec compliant. That is also why io.js put them behind a flag (separate from the flag that enabled stable harmony features).
Yeah, 'ship' means different things in different contexts. In the context of your original comment, node stable is getting arrow functions within a week of Chrome stable getting them.
In general, io.js has been very good at picking up stable V8 soon after it ships in Chrome. The exception was V8 4.3, which was not picked up because of API compatibility issues.
This is not a problem for Chrome because Chrome doesn't expose the V8 C++ API to a large body of third-party module writers the way Node does. It takes time to deal with some API changes.