What's the standard for metrics gathering, push or pull? I prefer pull, but depending on the app it can mean you need to build in a micro HTTP server so there's something to query. That can be a PITA, but pushing a stat on every event seems wasteful, especially if there's a hot path in the code.
I don't think there's any clear standard. There are many points of confusion about push vs pull that make these discussions hard to follow, as they often end up comparing apples to oranges. For example, the push you're talking about in your comment is pushing events, whereas a fair comparison for Prometheus would be with pushing metrics to Graphite. https://www.robustperception.io/which-kind-of-push-events-or... covers this in more detail.
Taking your example, you could push without sending a packet on every event: instead, accumulate a counter in memory and push out the current total every N seconds to your preferred push-based monitoring system. You could even do this on top of a Prometheus client library; some of the official ones allow pushing to Graphite with just two lines of code: https://github.com/prometheus/client_python#graphite
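To make the accumulate-and-flush idea concrete, here's a minimal sketch in plain Python (not the Prometheus client library's actual implementation; `PushingCounter` and the `send` callback are made-up names for illustration). The hot path is just an in-memory increment; only the background flusher does I/O:

```python
import threading
import time

class PushingCounter:
    """Accumulate increments in memory and periodically push the
    running total, instead of sending a packet per event."""

    def __init__(self, name, send, interval=10.0):
        self.name = name
        self.send = send        # callable(metric_name, value): ships to the backend
        self.interval = interval
        self._value = 0
        self._lock = threading.Lock()

    def inc(self, amount=1):
        # Hot path: an in-memory add under a lock, no network I/O.
        with self._lock:
            self._value += amount

    def flush(self):
        # One packet per interval, no matter how many events fired.
        with self._lock:
            current = self._value
        self.send(self.name, current)

    def start(self):
        def loop():
            while True:
                time.sleep(self.interval)
                self.flush()
        threading.Thread(target=loop, daemon=True).start()
```

Swap the `send` callback for a Graphite/statsd/whatever writer and you have push semantics without per-event overhead; point a scraper at the same counter state and you have pull.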
In my personal opinion, pull is overall better than push, but only very slightly. Each has its own problems you'll hit as you scale, but those problems can be engineered around in both cases.
At this point Prometheus is pretty close to becoming the boring technology. The latest versions have finally brought in the plumbing and tuning knobs to protect against [most] overly expensive queries. So you can't easily take it down anymore.
The single-binary approach is still a problem, though. In my mind, any serious telemetry collection stack should separate the query engine from the ingestion path - Prometheus runs both the query interface and the ingestion/writing subsystem in the same process.[ß]
As for the parent poster: you certainly want to push telemetry out on every event, but the mechanism has to be VERY lightweight. With Prometheus, the solution is to run a telemetry collection/aggregation agent on the host, feed it the event data, and have Prometheus scrape the agent. Statsd with the KV extension is a great protocol for shoveling the telemetry out of the process and into the agent.
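For reference, the wire format that makes statsd so lightweight is a one-line plain-text datagram, `<name>:<value>|<type>`, usually fired over UDP so the hot path never blocks. A small sketch (the tag syntax shown is the DogStatsD-style `|#k:v` variant, which not every agent supports; that part is an assumption, not core statsd):

```python
def statsd_line(name, value, metric_type="c", tags=None):
    """Format one metric in the plain-text statsd protocol:
    <name>:<value>|<type>, e.g. "requests:1|c" for a counter.
    Tags use the DogStatsD-style |#k:v suffix (implementation-specific)."""
    line = f"{name}:{value}|{metric_type}"
    if tags:
        line += "|#" + ",".join(f"{k}:{v}" for k, v in tags.items())
    return line

# Emitting is then a fire-and-forget UDP send to the local agent:
#   sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
#   sock.sendto(statsd_line("requests", 1).encode(), ("127.0.0.1", 8125))
```

The agent aggregates these per-event datagrams and exposes the totals for Prometheus to scrape, so the expensive work happens outside your process.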
ß: you can get around this with Thanos + Trickster to take care of the read path only, but it's quite a bit more complex than plain Prometheus.