Too easy to produce bloated files. People do dumb things like put field attributes and metadata on every row of the output, so you end up with files that can be several times the size of the raw data.
Parsing times are often horrible.
There’s no standard for tabular data. You invariably need some overly complicated XML map, because people can’t resist the temptation to over-engineer.
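To put a rough number on the first point, a minimal sketch of the per-row-metadata pattern; the record layout and field names here are invented purely for illustration:

```python
# Rough illustration of per-row attribute/metadata bloat; the field names
# and layout are made up, not taken from any real export format.
import xml.etree.ElementTree as ET

rows = [("2024-01-0%d" % i, "widget", str(i * 10)) for i in range(1, 6)]

# The raw values, comma-separated.
raw = "\n".join(",".join(r) for r in rows)

# The same values with field names, types, and metadata repeated on every row.
root = ET.Element("export", version="1.0", generator="example")
for date, item, qty in rows:
    row = ET.SubElement(root, "row", type="record", schema="sales-v2")
    ET.SubElement(row, "field", name="date", type="xsd:date").text = date
    ET.SubElement(row, "field", name="item", type="xsd:string").text = item
    ET.SubElement(row, "field", name="qty", type="xsd:integer").text = qty
bloated = ET.tostring(root, encoding="unicode")

print("raw: %d bytes, xml: %d bytes, ratio: %.1fx"
      % (len(raw), len(bloated), len(bloated) / len(raw)))
```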
I had anticipated the overhead of XML being brought up, hence the other suggestions there, namely anything + jsonschema. jsonschema was mostly for illustration; it's a lot more powerful than most use cases call for, and it pays for that with very verbose, long-winded syntax. It would certainly be an alternative with less overhead, though.
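To show what I mean by verbose, here's a minimal sketch of a schema for a single tabular row, checked with the Python jsonschema package; the column names are made up for illustration:

```python
# A minimal JSON Schema for one row of tabular data, validated with the
# jsonschema package (pip install jsonschema); column names are invented.
from jsonschema import ValidationError, validate

row_schema = {
    "type": "object",
    "properties": {
        "date": {"type": "string"},
        "item": {"type": "string", "minLength": 1},
        "qty": {"type": "integer", "minimum": 0},
    },
    "required": ["date", "item", "qty"],
    "additionalProperties": False,
}

try:
    validate({"date": "2024-01-01", "item": "widget", "qty": 10}, row_schema)
    print("row is valid")
except ValidationError as err:
    print("row rejected:", err.message)
```

Even that toy schema runs to a dozen-plus lines for three columns, which is the trade-off I'm getting at.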
I've not benchmarked XML parsing times in a very long time; I'd be interested in seeing the numbers now.
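If anyone feels like kicking the tires, a rough stdlib sketch along these lines would be a starting point; the document below is synthetic, so the numbers only mean so much:

```python
# Quick-and-dirty XML parse timing with the stdlib; the document is
# synthetic and the numbers vary a lot by machine and document shape.
import timeit
import xml.etree.ElementTree as ET

doc = "<rows>" + "".join(
    '<row id="%d"><name>item-%d</name><qty>%d</qty></row>' % (i, i, i)
    for i in range(10_000)
) + "</rows>"

avg = timeit.timeit(lambda: ET.fromstring(doc), number=10) / 10
print("avg parse: %.1f ms for %.0f KB" % (avg * 1000, len(doc) / 1024))
```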
Tabular data is barely data, honestly, hence my PS: I don't think spreadsheets in general are a very good way to store anything.
And yes, absolutely, over-engineering is bound to happen, which is unfortunate, but I'm not sure it can really be avoided while still keeping many of those upsides (especially the rigorous definitions).