From what I've seen at demo booths at conferences, Datadog's logging is impressive but also incredibly expensive. At the rate we produce logs we'd be paying over $30k/mo. They claim we can use log rehydration and not ingest all logs, but then we can't really alert on them, so what's the point of having them? Yes, I understand that you can look at the logs when things are going wrong, but you can also know _from_ the logs when things are going wrong.
10k qps is certainly not a dev/test instance, but if you were to log every RPC, including database lookups, you could easily get there with just 100-ish user-facing requests per second, since each request can fan out to many internal calls.
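As a rough back-of-envelope sketch (the ~100× fan-out per request is the implied factor here, not a measured number):

```python
# Back-of-envelope log-volume estimate with assumed numbers.
user_rps = 100          # user-facing requests per second (assumption)
rpcs_per_request = 100  # internal RPCs + DB lookups logged per request (assumption)

log_events_per_sec = user_rps * rpcs_per_request      # 10,000 events/sec
events_per_month = log_events_per_sec * 86_400 * 30   # ~25.9 billion events/month

print(f"{log_events_per_sec:,} log events/sec")
print(f"{events_per_month / 1e9:.1f}B log events/month")
```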
Indeed, I did the calculation myself as well. If you're logging every RPC, including database lookups, in production, you might have a problem with your logging principles (signal vs. noise, etc.), and if you really do need that log data for every request but can't afford $1.27 per million, you might have a product problem.
I just checked, and we've got roughly half the number you came up with (~11B). We store logs for 30 days instead of 7, which does increase the price.
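For context, plugging our volume into the per-million rate mentioned above gives roughly the following; this is only a sketch, and since the 30-day retention rate isn't quoted here, treat it as a lower bound:

```python
# Rough monthly-cost estimate using the figures from this thread:
# $1.27 per million events at 7-day retention, ~11B events/month.
events_per_month = 11e9
price_per_million_7d = 1.27  # USD, 7-day retention

monthly_cost_7d = events_per_month / 1e6 * price_per_million_7d
print(f"~${monthly_cost_7d:,.0f}/month at the 7-day rate")  # ~$13,970/month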
Also, it's worth noting that not all of the logs are necessarily our application logs; some are audit logs from third-party services we use (Okta, GSuite, AWS, etc.) to detect anomalies or potential breach attempts. We have a pretty comprehensive alerting pipeline built on our logs, so rehydration-only wouldn't work for us, and we're unfortunately unable to pay that much for logging. I understand that this doesn't apply to everyone, but we're able to run a self-hosted logging pipeline for a fraction of that cost without a dedicated team running it (the Infrastructure team, the team I'm on, currently maintains this pipeline alongside our other responsibilities).