10k qps is certainly not a dev/test instance, but if you log every RPC, including database lookups, you can easily get there with just ~100 user-facing requests per second.
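The fan-out arithmetic behind that claim, as a quick sketch (the per-request RPC count is an illustrative assumption, not a measured number):

```python
# Hypothetical back-of-the-envelope; numbers are illustrative only.
user_rps = 100          # user-facing requests per second
rpcs_per_request = 100  # internal RPCs (incl. DB lookups) logged per request

log_events_per_second = user_rps * rpcs_per_request
print(log_events_per_second)  # 10000 -> the "10k qps" figure
```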
Indeed, I did the calculation myself as well. If you're logging every RPC, including database lookups, in production, you might have a problem with your logging principles (signal vs. noise, etc.). And if you really need that log data for every request but can't afford $1.27 per million messages, you might have a product problem.
I just checked, and we've got roughly half the number you came up with (~11B messages per month). We store logs for 30 days instead of 7, which does increase the price.
Also, it's worth noting that not all of the logs are necessarily our application logs; some are audit logs from third-party services we use (Okta, GSuite, AWS, etc.) to detect anomalies or potential breach attempts. We have a pretty comprehensive alerting pipeline based on our logs, so we're unfortunately unable to pay that much for logging. I understand this doesn't apply to everyone, but we're able to run a self-hosted logging pipeline for a fraction of that cost without a dedicated team running it (the Infrastructure team, which I'm on, currently maintains this pipeline alongside our other responsibilities).
The process would be: log data -> add index filters -> go to live tail and create a metric on a filtered log event -> create monitor on metric.
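The "create a metric on a filtered log event" step can be sketched very roughly as counting events that match a filter; the log shape and field names below are made up for illustration, since the real platform does this in its UI/API:

```python
# Toy sketch of deriving a metric from filtered log events.
# The log format and filter fields here are hypothetical.
logs = [
    {"service": "auth", "status": "error"},
    {"service": "auth", "status": "ok"},
    {"service": "billing", "status": "error"},
]

def count_metric(events, **filters):
    """Count events matching all filter key/value pairs (the 'metric' value)."""
    return sum(all(e.get(k) == v for k, v in filters.items()) for e in events)

auth_errors = count_metric(logs, service="auth", status="error")
print(auth_errors)  # 1 -> a monitor would then alert when this crosses a threshold
```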
edit: also, you log 24 billion messages a month? I think that's roughly what it would take to cost $30k per month on their platform.
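That figure checks out, assuming the $1.27-per-million price quoted earlier in the thread:

```python
# Back-of-the-envelope check of the $30k/month figure.
price_per_million = 1.27    # $ per million messages (price quoted above)
messages_per_month = 24e9   # 24 billion messages

monthly_cost = messages_per_month / 1e6 * price_per_million
print(round(monthly_cost))  # 30480 -> roughly $30k/month
```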