The key takeaway here is that you can't correctly process a string if you don't know what language it's in. That includes variants of the same language with different rules, e.g. en-US and en-GB, or es-MX and es-ES.
If you are handling multilingual text, the locale is mandatory metadata.
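Concretely, here is a minimal sketch of what that looks like using ICU's C++ API (assuming ICU is installed): the same uppercase input lowercases differently under an English locale and a Turkish one, because Turkish distinguishes dotted I from dotless I.

    #include <unicode/unistr.h>
    #include <unicode/locid.h>
    #include <iostream>
    #include <string>

    int main() {
        icu::UnicodeString en(u"ISTANBUL"), tr(u"ISTANBUL");

        en.toLower(icu::Locale::getEnglish());  // "istanbul"
        tr.toLower(icu::Locale("tr"));          // begins with dotless ı (U+0131)

        std::string a, b;
        en.toUTF8String(a);
        tr.toUTF8String(b);
        std::cout << a << '\n' << b << '\n';
    }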
I thought German deprecated the use of ß years ago, no? I learned German for a year and that's what the teacher told us, but maybe that's not the whole story.
Se fareblus oni, jam farintus oni. ("If it could be done, it would already have been done.") (It definitely won't happen on an echo-change day like today, either. ;))
Contra my comrade's comment, Esperanto orthography is firmly European, and so retains European-style casing distinctions; every sound thus still has two letters -- or at least two codepoints.
(There aren't any eszett-like digraphs, but that's not saying much.)
Language is just part of the problem. Unicode lets you store text as entered, but what you do with that text depends entirely on your problem domain. When you're writing software to validate that the name on someone's ID matches the name on a ticket, you're probably going to normalise both names to your (customer's) locale rather than render each name in the locale it was originally written in. As long as you keep your locale settings consistent and don't do bad things like iterating over characters and transforming them individually, you're probably fine, unless your problem domain calls for something else.
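One way to implement that consistent normalisation is Unicode case folding, which is deliberately locale-independent and exists precisely for matching. A sketch with ICU follows; namesMatch is a hypothetical helper for illustration, not any real API.

    #include <unicode/unistr.h>
    #include <iostream>

    // Hypothetical helper: compare two names after full Unicode case
    // folding, so "STRASSE" and "straße" compare equal ("ß" folds to "ss").
    bool namesMatch(const icu::UnicodeString& a, const icu::UnicodeString& b) {
        icu::UnicodeString fa(a), fb(b);
        fa.foldCase();
        fb.foldCase();
        return fa == fb;
    }

    int main() {
        std::cout << namesMatch(icu::UnicodeString(u"STRASSE"),
                                icu::UnicodeString(u"straße"))
                  << '\n';  // 1
    }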
If you're printing a name, you're probably printing the name for the current user, not for the person who entered it at some point. If you're going to try to convert back like that, you also need to store a timestamp with every string in case a language changes its rules (such as permitting ẞ instead of SS when capitalising ß). And even then, someone might intend to use the new spelling rules, or they might not, who knows!
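On the ß point: ICU's default German uppercasing still produces SS, even though ẞ (U+1E9E) has been officially permitted since 2017, which is exactly the kind of rule change that makes round-tripping fraught. A quick illustration (assuming ICU):

    #include <unicode/unistr.h>
    #include <unicode/locid.h>
    #include <iostream>
    #include <string>

    int main() {
        icu::UnicodeString name(u"Straße");
        name.toUpper(icu::Locale::getGerman());

        std::string out;
        name.toUTF8String(out);
        std::cout << out << '\n';  // "STRASSE": the default mapping, not "STRAẞE"
    }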
This article probably boils down to "programmers don't realise graphemes aren't characters and characters aren't bytes even though they usually are in US English". The core problem, "text processing looks easy as long as you only look at your own language", is one that doesn't just affect computers.
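The divergence is easy to demonstrate: one user-perceived character can be several code points and more bytes still. A small sketch with ICU's grapheme-cluster BreakIterator:

    #include <unicode/brkiter.h>
    #include <unicode/unistr.h>
    #include <unicode/locid.h>
    #include <iostream>
    #include <memory>
    #include <string>

    int main() {
        // "é" written as 'e' plus a combining acute accent.
        icu::UnicodeString s(u"e\u0301");

        UErrorCode status = U_ZERO_ERROR;
        std::unique_ptr<icu::BreakIterator> bi(
            icu::BreakIterator::createCharacterInstance(
                icu::Locale::getRoot(), status));
        if (U_FAILURE(status)) return 1;

        bi->setText(s);
        int32_t graphemes = 0;
        while (bi->next() != icu::BreakIterator::DONE) ++graphemes;

        std::string utf8;
        s.toUTF8String(utf8);
        std::cout << graphemes << " grapheme, "           // 1
                  << s.countChar32() << " code points, "  // 2
                  << utf8.size() << " UTF-8 bytes\n";     // 3
    }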
Your best bet is to avoid the entire problem by not processing input beyond basic sanitisation, such as trimming leading/trailing whitespace and maybe stripping out invalid Unicode so it can't be used as a weird stored attack.
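A sanitiser along those lines could be as small as the sketch below: ICU's fromUTF8 replaces malformed byte sequences with U+FFFD, and trim drops leading and trailing whitespace. No other processing is done.

    #include <unicode/unistr.h>
    #include <unicode/stringpiece.h>
    #include <iostream>
    #include <string>

    // Minimal sanitiser sketch: decode (malformed UTF-8 becomes U+FFFD),
    // trim surrounding whitespace, re-encode.
    std::string sanitize(const std::string& raw) {
        icu::UnicodeString u = icu::UnicodeString::fromUTF8(raw);
        u.trim();
        std::string out;
        u.toUTF8String(out);
        return out;
    }

    int main() {
        std::cout << sanitize("  caf\xC3\xA9  ") << '\n';  // "café"
    }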
islower is actually supposed to account for the user's "locale", which includes their language.
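Right, and C++ exposes that directly: <locale> has an islower overload that takes a locale object. But it classifies one character at a time, which is exactly the limitation under discussion. A sketch (locale names are platform dependent, so treat this as illustrative):

    #include <locale>
    #include <iostream>

    int main() {
        // "" requests the user's environment locale; whether a given
        // named locale exists is platform dependent.
        std::locale loc("");

        std::cout << std::islower('a', loc) << '\n';  // 1
        // The catch: classification is per code unit. In UTF-8, 'é' is
        // two bytes, and neither byte alone means anything to islower.
    }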
The key takeaway is that lowercasing a string needs to be done on the whole string, not on individual characters, even if std::string had a way to iterate over codepoints instead of bytes (or code units, in the case of wstring).
And there isn't a standard way to do that: you either need to use a platform-specific API, like the Windows function mentioned, or a library like ICU.
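Greek shows why the whole string matters even within one language: lowercase sigma depends on its position in the word, so a character-by-character loop can't get it right. With ICU (assuming it's installed):

    #include <unicode/unistr.h>
    #include <unicode/locid.h>
    #include <iostream>
    #include <string>

    int main() {
        // Word-final Σ lowercases to ς (U+03C2), not σ; ICU applies the
        // context-sensitive Final_Sigma rule across the whole string.
        icu::UnicodeString word(u"ΟΔΟΣ");
        word.toLower(icu::Locale::getRoot());

        std::string out;
        word.toUTF8String(out);
        std::cout << out << '\n';  // "οδος", where the last letter is final sigma
    }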