I wonder if our brains do the same timestamping when we listen to someone sing a song. Our brain's database won't have many examples of those exact words being sung in that musical way. I guess that's why we need a little more focus to understand the lyrics of a song versus someone talking to us.
Can you explain how it works? I tried to figure it out from the website but drowned in noise. I get that it gives me insight into how I use third-party APIs, but how does it work? Does it wrap their SDKs with some language-specific replacements? Does it MITM access to them?
It's hard to compose a product description that makes sense to all readers, I honestly understand that. But shouldn't developer-oriented tools also try to offer a developer-focused, no-bullshit, "dev to dev" straight talk somewhere?