I'm sorry, quite right - you did. Using the computer locale.
The trouble is that people transmit data from one computer to another and so from one locale to another. And sadly, they do not always set the character encoding header correctly, if they even know what that is.
I mean, take csvbase's case. It has to accept csv files from anyone. And christ preserve me, they aren't going to label them as "Windows-1252" or "UTF-16" or whatever.
There is no alternative but statistical detection. And there is good evidence that this solution is fairly satisfactory, because millions of computers are using it right now! csvbase uploads run into more problems with trailing commas than with character detection getting it wrong at this point, that is your "Schelling Point" I'm afraid!
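The shape of statistical detection can be sketched with a toy sniffer. This is not what csvbase (or any real detector like chardet) actually does — real detectors score byte-frequency statistics across many encodings — but it shows the core trick: UTF-8 is self-validating, so "try UTF-8 first, fall back to a legacy single-byte encoding" already gets you surprisingly far.

```python
import codecs

def sniff_encoding(data: bytes) -> str:
    """A toy encoding sniffer: BOM check, then UTF-8 validation,
    then a fallback to Windows-1252. Real detectors (e.g. chardet)
    use byte-frequency statistics instead of a simple fallback."""
    if data.startswith(codecs.BOM_UTF8):
        return "utf-8-sig"
    if data.startswith((codecs.BOM_UTF16_LE, codecs.BOM_UTF16_BE)):
        return "utf-16"
    try:
        # Arbitrary non-UTF-8 byte sequences almost never decode
        # cleanly as UTF-8, so a successful decode is strong evidence.
        data.decode("utf-8")
        return "utf-8"
    except UnicodeDecodeError:
        # cp1252 assigns a character to almost every byte, so this
        # is a guess of last resort, not a proof.
        return "cp1252"
```

For example, `"café".encode("cp1252")` produces the byte `0xe9`, which is invalid mid-stream UTF-8, so the sniffer falls through to `"cp1252"`.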
HTTP actually does quite a good job of providing headers carrying the MIME type and encoding. There is a little work to get the defaults right (e.g. HTML and XML have different ones), and to decide what happens when the encoding declared inside an XML payload differs from the one in the HTTP headers (e.g. perhaps XML parsers need a way to override the embedded declaration).
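Pulling the charset out of a Content-Type header is mechanical, since its parameters follow MIME syntax. A small sketch using Python's stdlib email parser (the function name and the utf-8 default are my own choices, not anything csvbase-specific):

```python
from email.message import Message

def charset_from_content_type(content_type: str, default: str = "utf-8") -> str:
    """Extract the charset parameter from an HTTP Content-Type value,
    e.g. 'text/csv; charset=windows-1252'. Content-Type parameters use
    MIME syntax, so the stdlib email machinery parses them correctly
    (including quoted values)."""
    msg = Message()
    msg["Content-Type"] = content_type
    return msg.get_param("charset", default)
```

So `charset_from_content_type("text/csv; charset=windows-1252")` yields `"windows-1252"`, and a bare `"text/csv"` falls back to the caller's chosen default — which is exactly where the per-format default rules (HTML vs XML) would plug in.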
So we end up at another plausible future-directed design decision: computer-computer communication should use HTTP. I think many systems have ended up there already, perhaps prompted by the issues we have discussed.
Moral: good specs attract usage; bad, incomprehensible or inconsistently implemented specs fade away.
Let me reframe it as a Schelling Point [1] - the problem of coordinating without communicating.
You arrange to meet a friend on a certain day in New York, but no place or time was mentioned. When and where will you go? It seems impossible.
But perhaps you go at noon, to the UTF-8 Building in midtown Manhattan. Are you there now?
[1] https://en.wikipedia.org/wiki/Focal_point_(game_theory)