Yes, but that works. Taking in such JSON and then immediately spewing it back out doesn't change the underlying meaning. Transforms from such JSON to some other format (perhaps even another JSON format) must explicitly choose what to do with "comments", instead of accidentally just discarding them. Given that parsing directives are going to exist somewhere, this is the correct place for them.
(Better yet is to create an explicit place for metadata. I almost reflexively use {"metadata": null, "payload": ...} now instead of putting my payload right at the top level, because wouldn't you know it, sooner or later some metadata always seems to wander in. And if it doesn't... shrug. If you're in a place where you can afford JSON in the first place the extra cost is probably way below the noise threshold for you.)
But the point is that if you have comments in your JSON, the first time you do some sort of "for key in data" to transform the data and spit it back out, the comments are gone; you may never even realize they were there to start with.
If you do that with the metadata explicitly stored as a separate key-value pair in your blob, this doesn't happen; the metadata is never silently discarded when you, say, take all the key-value pairs in the JSON blob and send them down the wire to a client. If you want to strip the metadata, you have to do so explicitly.
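A minimal sketch of the point, using a hypothetical blob with a "metadata" envelope (the key names and contents here are made up for illustration). A naive "for key in data" round trip carries the metadata through, because it's just another key-value pair rather than syntax the parser throws away:

```python
import json

# Hypothetical blob: metadata stored as a regular key alongside the payload,
# instead of as comments the parser would need to special-case.
blob = '{"metadata": {"schema": 2}, "payload": {"a": 1, "b": 2}}'

data = json.loads(blob)

# A naive "for key in data" transform-and-re-emit. The metadata is ordinary
# data, so it survives automatically; nothing is silently discarded.
out = {key: value for key, value in data.items()}

assert out["metadata"] == {"schema": 2}
print(json.dumps(out))
```

Had the metadata been encoded as comments instead, this same loop would have emitted a blob with no trace of it.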
I know this isn't Python, but I think the Zen of Python is on point here: "Explicit is better than implicit."
> But the point is that if you have comments in your JSON, the first time you do some sort of "for key in data" to transform the data and spit it back out, the comments are gone; you may never even realize they were there to start with.
If you've stored comments as regular data, you haven't lost them; you've just transformed them along with everything else in the output.
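To make that concrete, here's a sketch using a "_comment" key, which is one common (but purely conventional, not standardized) way of storing comments as regular data. Because it's ordinary data, a parse/serialize round trip carries it through:

```python
import json

# Hypothetical convention: a comment stored as an ordinary "_comment" key
# rather than as syntax a parser would strip.
blob = '{"_comment": "retry count tuned after the 2019 outage", "retries": 5}'

data = json.loads(blob)
round_tripped = json.loads(json.dumps(data))

# The "comment" is regular data, so it survives the round trip intact.
assert round_tripped["_comment"] == "retry count tuned after the 2019 outage"
```

A transform that doesn't expect the "_comment" key will still propagate it; whether that's the right behavior is exactly the "incomplete understanding of the source data" question above.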
Your criticism appears to be based on a transform written with an incomplete understanding of the source data. I'd submit the problem lies in the incomplete understanding of the source data, not the fact the source data had comments. If your transform didn't "know" comments were possible, what else did it not "know"?
> I'd submit the problem lies in the incomplete understanding of the source data
I'd submit that an incomplete understanding of the source data is not necessarily a problem. It's often a design goal. Generic tools have a limited understanding of the source data by design. I don't want my JSON parser/formatter/minifier/etc. to know about some silly parsing rules you added as comments. I want my JSON parser to understand JSON as it's defined.
Your nonstandard comment directives are the problem, not the fact that I didn't write a custom tool.