Hacker News

I find Stackdriver ugly compared to SumoLogic or Datadog. It also has ingestion limits; we're losing logs when the load becomes considerable.


From what I've seen at demo booths at conferences, Datadog's logging is impressive but also incredibly expensive. At the rate we produce logs, we'd be paying over $30k/mo. They claim that we can use log rehydration and not ingest all logs, but then we can't really have alerts on them, so what's the point in having them? Yes, I understand that you can look at the logs when things are going wrong, but you can also know _from_ the logs when things are going wrong.


You can create metrics and alerts from filtered logs in Datadog.

The process would be: log data -> add index filters -> go to live tail and create a metric on a filtered log event -> create monitor on metric.
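That flow can also be driven through Datadog's v2 logs-to-metrics API instead of the live-tail UI. A minimal sketch of the request body, as I understand it from Datadog's public API docs (the metric name and filter query here are invented for illustration, so double-check against the current docs):

```python
# Hedged sketch: build the payload for a count-type log-based metric,
# to be POSTed to Datadog's logs-to-metrics endpoint with
# DD-API-KEY / DD-APPLICATION-KEY headers. A monitor can then be
# created on the resulting metric.
import json

DD_METRICS_ENDPOINT = "https://api.datadoghq.com/api/v2/logs/config/metrics"

def logs_metric_payload(metric_id: str, query: str) -> dict:
    """Request body for a log-based metric counting events that match `query`."""
    return {
        "data": {
            "type": "logs_metrics",
            "id": metric_id,  # becomes the metric name
            "attributes": {
                "compute": {"aggregation_type": "count"},
                "filter": {"query": query},
            },
        }
    }

payload = logs_metric_payload("app.errors.count", "service:checkout status:error")
print(json.dumps(payload, indent=2))
```

Once the metric exists, a standard metric monitor on `app.errors.count` gives you the alerting without rehydrating anything.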

edit: also, you log 24 billion messages a month? I think that's roughly what it would take to cost $30k/month on their platform.


24B/month is less than 10k/second.

10k qps is certainly not a dev/test instance, but if you were to log all RPCs including database lookups, you could easily get there with even just 100-ish user-facing requests per second.
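The conversion above can be sanity-checked directly (assuming a 30-day month):

```python
# Sanity check: does 24B log events/month really stay under 10k/second?
events_per_month = 24e9
seconds_per_month = 30 * 24 * 3600  # 2,592,000 seconds in a 30-day month
rate = events_per_month / seconds_per_month
print(round(rate))  # ~9259 events/second, just under 10k
```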


Indeed, I did the calculation myself as well. If you're logging every RPC, including database lookups, in production, you might have a problem with your logging principles (signal vs. noise, etc.), and if you really need that log data for every request but can't afford $1.27 per million, you might have a product problem.
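Working backwards from that quoted rate ($1.27 per million ingested events, the figure in this thread) lands close to the 24B estimate:

```python
# How many events does a $30k monthly bill buy at $1.27 per million ingested?
price_per_million = 1.27   # USD per million events, as quoted upthread
monthly_budget = 30_000
events = monthly_budget / price_per_million * 1e6
print(f"{events / 1e9:.1f}B events/month")  # ~23.6B, matching the 24B estimate
```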


Indeed, 10k qps in a large production instance is actually quite typical. Datadog is expensive when it comes to logs.


I just checked, and we've got roughly half of the number you came up with (~11B). We store logs for 30 days instead of 7, which does increase the price.

Also, it's worth noting that not all of the logs are necessarily our application logs; they could also be audit logs of third-party services we use (Okta, GSuite, AWS, etc.) to detect anomalies or potential breach attempts. We have a pretty comprehensive alerting pipeline based on our logs, so we're unfortunately unable to pay that much for logging. I understand that this doesn't apply to everyone, but we're able to run a self-hosted logging pipeline for a fraction of that cost without a dedicated team running it (the Infrastructure team, which I'm on, currently maintains this pipeline alongside our other responsibilities).


At which conferences do you see Datadog booths?


Both AWS re:Invent and KubeCon had Datadog booths.


AWS re:Invent had a Datadog booth.


IMO, Stackdriver on GCP is too expensive too.



