Also worth mentioning here is perf[1], which is great for low-overhead profiling. With autofdo[2], perf profiles can also be converted into profiles compatible with GCC and LLVM PGO, so you can build optimized binaries based on production runs. In my use case, the instrumentation overhead of regular profiling was too high to use it on production workloads.
perf and its ilk are obviously useful, but you need to be aware of several cans of worms, with sampling hardware counters in particular. These include the timing mechanism used for sampling, the documentation and intrinsic usefulness of particular counters, and issues with multiplexing more counters than the hardware can track simultaneously. For multiplexing see, for instance, https://www.research.manchester.ac.uk/portal/files/59933625/...
I'm just going to drop coz (https://github.com/plasma-umass/coz) as another suggestion. Ever since their talk/paper I expected other implementations of a causal profiler, but for some reason everyone is steeped in the old ways. The concept just seems like such a huge efficiency boost compared to raw flame graphs. If you have time to watch their talk, it's linked in the github readme.
coz requires the user to instrument the code with progress points (something like the sketch below), so it's interesting but also much more costly to run experiments with.
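For a sense of what that instrumentation looks like, here's a minimal sketch assuming the coz.h header shipped with the plasma-umass/coz repo; process_request() and the loop bound are made up for illustration:

    // Build with e.g. g++ -g server.cc -o server, then run "coz run --- ./server".
    #include <coz.h>

    // Hypothetical unit of work.
    void process_request() { /* ... */ }

    int main() {
        for (int i = 0; i < 1000000; i++) {
            process_request();
            // Progress point: coz measures the rate at which execution
            // passes this line while applying virtual speedups elsewhere,
            // then reports which code would actually improve throughput.
            COZ_PROGRESS;
        }
    }

One macro per throughput metric (plus COZ_BEGIN/COZ_END pairs for latency) is the extent of it, but you do have to know where the meaningful progress points are, which is the cost the parent comment is describing.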
I suspect it's cheaper to start with callgrind to get an idea of the hot spots in the code base and find the low-hanging fruit, then switch to coz if you really need to squeeze out the last bit of performance.
If you watch the talk, he shows a number of examples of already-implemented optimizations that don't actually work. I wouldn't be surprised if it were better to start with coz immediately. If nothing else, it forces you to model the problem better, so there's overlap with test-driven design there, no?
I absolutely love flamegraphs for analysing performance. If you haven't used one before and you're interested in optimization (in particular, of large programs you're unfamiliar with) then check them out! I also find them to be an easy way to get a grasp on complicated call stacks, since the addition of method linking on the call stacks makes it really easy to follow.
I've asked before without luck: How are flamegraphs preferable to the well-established sorts of visualizations in the common HPC performance tools, like CUBE, Paraver, and TAU, say? They typically provide at least inclusive or exclusive function/region views with choices of metrics for profiling and/or tracing over serial, threaded, or distributed execution.
Well, I'll start by saying I'm not familiar with any of those tools, though I took a quick look at them. It looks like Paraver offers a time-domain view of performance, and CUBE seems to offer time-based views plus a Graphviz rendering of the call tree.
In a flame graph, the width of a stack frame is proportional to the percentage of CPU time spent in that frame, and the y-axis shows the call stack it belongs to. This means you can quickly tell which functions, and from which call sites, are the most expensive. The only visualization I know of that matches that ability to zero in quickly while maintaining context is a call graph with frames colored by cumulative CPU time, but there the layout is hard to compute and seeing everything at once is difficult.
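Concretely, flame graphs are usually rendered from "folded" stack samples. Assuming Brendan Gregg's stackcollapse/flamegraph.pl toolchain, each input line is one semicolon-separated stack plus the number of samples in which it appeared (the function names here are made up):

    main;parse_input;tokenize 310
    main;parse_input;validate 95
    main;render;draw_glyphs 540

A frame's width in the rendered SVG is the total count of every sample sharing that stack prefix, which is why both hot functions and their call sites jump out at a glance.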
That may be OK in simple cases where you can easily eyeball it, if you're only interested in aggregated CPU time as a metric, and if the biggest win comes from optimizing the obvious function in all modes of the program. That's not necessarily the case in complex scientific codes, for instance, especially parallel ones.
A problem with google perftools is that the `SIGPROF` signal it uses for sampling will interrupt blocking calls, such as the polling used in ZeroMQ. Otherwise, it is a good tool in the toolbox.
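The usual workaround, sketched here under the assumption that the interrupted call is zmq_poll and that the caller treats other errors as fatal, is to retry on EINTR:

    #include <cerrno>
    #include <zmq.h>

    // Retry zmq_poll when a signal (e.g. SIGPROF from the sampling
    // profiler) interrupts it, instead of treating the EINTR as an error.
    int poll_retrying(zmq_pollitem_t* items, int nitems, long timeout_ms) {
        for (;;) {
            int rc = zmq_poll(items, nitems, timeout_ms);
            if (rc >= 0 || errno != EINTR)
                return rc;  // events ready, timeout (0), or a real error
            // Interrupted by a signal: poll again.
        }
    }

One caveat: a naive retry loop like this restarts the full timeout after every interruption, so under a high sampling rate a finite timeout can stretch out noticeably.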
It requires access to hardware counters you don't normally have in EC2, and at a privilege level I wouldn't want to enable in a multi-user compute system.
Ho, ho. There are assorted free performance tools that do, though -- at least on POWER and ARM64 for various HPC-focussed ones. I don't know much about VTune, but it's not clear to me what it does that I can't do with other tools on x86_64, and others allow me to measure serial and communication metrics together.
[1]: https://en.m.wikipedia.org/wiki/Perf_%28Linux%29
[2]: https://github.com/google/autofdo