In fact I use the A variants exclusively as far as possible, because anything else is really bad engineering, as I've explained. Even Microsoft has started to acknowledge this: whereas the A variants used to be implemented as wrappers around the W variants, I've heard they have started to reverse that and now recommend the A functions.
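What that looks like in practice (a minimal sketch; it assumes the process ANSI code page is set to UTF-8, e.g. via the activeCodePage manifest setting available since Windows 10 1903 - otherwise the A functions interpret strings in the legacy system code page):

    #include <windows.h>

    int main(void) {
        /* Call the -A variant explicitly; the string is plain char data.
           With the process code page set to UTF-8, non-ASCII text works too. */
        MessageBoxA(NULL, "Hello from the A variant \xC3\xA9", "Demo", MB_OK);

        /* Same idea for any other pair, e.g. CreateFileA vs. CreateFileW: */
        HANDLE h = CreateFileA("out.txt", GENERIC_WRITE, 0, NULL,
                               CREATE_ALWAYS, FILE_ATTRIBUTE_NORMAL, NULL);
        if (h != INVALID_HANDLE_VALUE) CloseHandle(h);
        return 0;
    }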
If you browse around various documentation, you'll see contradictory statements about which variants are recommended, which I take as another sign that making the distinction at the type level is simply a bad idea.
As for the macros, yes, if you spell out the A or W suffix explicitly, you don't need the macro setting that translates the generic names into the A or W variants. And as I said, the macros aren't a good idea anyway, as it's still extremely hard to properly abstract over the distinction (the types are different sizes!), so code is generally tied to a specific choice either way.
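For reference, the machinery is roughly this (a simplified sketch of what the headers do, not the literal definitions):

    /* Roughly what <windows.h>/<tchar.h> do under UNICODE (simplified): */
    #ifdef UNICODE
    typedef wchar_t TCHAR;          /* 2 bytes on Windows        */
    #define CreateFile CreateFileW  /* generic name -> W variant */
    #else
    typedef char TCHAR;             /* 1 byte                    */
    #define CreateFile CreateFileA  /* generic name -> A variant */
    #endif

    /* sizeof(TCHAR) differs between the two builds, so any code that
       indexes, allocates or serializes "TCHAR strings" is effectively
       written against one specific variant anyway. */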
> Unicode cannot be treated as just a byte stream as soon as you want to do something to the content (or just for very narrow circumstances).
It can be stored as a byte stream. Whenever you work with the data, you might need to operate on transformed representations - 32-bit codepoints, larger combinations, or even more complicated structures like words, sentences, paragraphs, tags, whatever it takes to get the task done. This is programming: you transform data to achieve things.
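For example, pulling 32-bit codepoints out of a UTF-8 byte stream is a small, local transformation you do only where a task needs it (minimal sketch, no validation of malformed input):

    #include <stdint.h>
    #include <stddef.h>

    /* Decode one UTF-8 sequence starting at s, write the codepoint to *cp,
       and return the number of bytes consumed. */
    static size_t utf8_decode(const unsigned char *s, uint32_t *cp) {
        if (s[0] < 0x80)           { *cp = s[0]; return 1; }
        if ((s[0] & 0xE0) == 0xC0) { *cp = ((uint32_t)(s[0] & 0x1F) << 6)
                                         |  (s[1] & 0x3F); return 2; }
        if ((s[0] & 0xF0) == 0xE0) { *cp = ((uint32_t)(s[0] & 0x0F) << 12)
                                         | ((uint32_t)(s[1] & 0x3F) << 6)
                                         |  (s[2] & 0x3F); return 3; }
        *cp = ((uint32_t)(s[0] & 0x07) << 18) | ((uint32_t)(s[1] & 0x3F) << 12)
            | ((uint32_t)(s[2] & 0x3F) << 6)  |  (s[3] & 0x3F);
        return 4;
    }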
What I'm saying is that it's stupid to make a type-level distinction between things that are exactly the same in memory and are going to be used for the same purposes. It's stupid because it needlessly creates incompatibilities between data, introduces unnecessary "conversions" / copies, and requires more code.
See here for example: https://docs.microsoft.com/en-us/windows/apps/design/globali...
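To make the "conversions / copies" point concrete, here's a sketch of the extra hop the wchar_t boundary forces on UTF-8 data (hypothetical helper; MultiByteToWideChar and MessageBoxW are the real APIs):

    #include <windows.h>
    #include <stdlib.h>

    /* The same text has to be measured, allocated and copied into a second
       buffer before a W function will accept it. */
    void show_utf8(const char *utf8) {
        int n = MultiByteToWideChar(CP_UTF8, 0, utf8, -1, NULL, 0);
        if (n <= 0) return;
        wchar_t *wide = malloc(n * sizeof(wchar_t));          /* extra allocation */
        if (!wide) return;
        MultiByteToWideChar(CP_UTF8, 0, utf8, -1, wide, n);   /* extra copy */
        MessageBoxW(NULL, wide, L"Demo", MB_OK);
        free(wide);
    }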