The biggest barrier, in my opinion, is the ugly punycode links. Most browsers still do not render them properly, and if they did, there would be false positives (i.e. there is no way to tell whether a string is intended to be punycode or not).
It's not so much a matter of "still not rendering them properly"; rather, they are doing it on purpose to avoid homoglyph attacks. Earlier browsers used to render them "properly" more often, I believe.
Of course, this only helps against foreign letters; you still have people squatting on, or putting phishing behind, common typos of pure-ASCII URLs, with basically the same result.
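To make the homoglyph point concrete, here's a minimal Python sketch (the lookalike hostname is my own illustrative example, and it uses the standard library's built-in IDNA 2003 codec) showing how a single Cyrillic letter produces a name that renders like apple.com but maps to an entirely different punycode domain:

    # A hostname that looks like "apple.com", except the first letter is
    # the Cyrillic "а" (U+0430) instead of the Latin "a" (U+0061).
    lookalike = "\u0430pple.com"

    # Once IDNA-encoded, it is a completely different name on the wire.
    print(lookalike.encode("idna"))    # b'xn--pple-43d.com'
    print("apple.com".encode("idna"))  # b'apple.com'

    # Rendered as Unicode, the two are visually indistinguishable,
    # which is why browsers fall back to showing the xn-- form.
    print(lookalike, "apple.com")

That's the trade-off browsers are making: the raw xn-- form is ugly, but the decoded form makes the two names above look identical in the URL bar.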
Meanwhile, almost all of humanity still can't get decent URLs in their native language :(
> Meanwhile, almost all of humanity still can't get decent URLs in their native language :(
Why not?
For example, http://xn--h1alffa9f.xn--p1ai/ renders the URL in Russian for me in the URL bar in all of Chrome, Firefox, and Safari (though Chrome converts to punycode if I copy the URL from the URL bar, unfortunately). [Edit: Also, it looks like HN's linkifier converts to punycode; what I wrote there is "россия.рф" and that's what HN has stored if I edit this comment.]
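For anyone who wants to see that mapping mechanically, here's a small Python sketch (a sketch only, using the standard library's "idna" codec, which implements the older IDNA 2003 rules; the third-party idna package covers IDNA 2008) converting that domain between its Unicode and punycode forms:

    # Unicode form of the domain mentioned above.
    unicode_host = "россия.рф"

    # ToASCII: each label is punycode-encoded and prefixed with "xn--".
    ace_host = unicode_host.encode("idna")
    print(ace_host)                 # b'xn--h1alffa9f.xn--p1ai'

    # ToUnicode: the reverse mapping, which is roughly what the URL bar
    # does when it decides a domain is safe to display.
    print(ace_host.decode("idna"))  # россия.рф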
In more detail, for Firefox (where I can find this sort of thing quickly in the code), there are the following things affecting the display:
1) The "network.IDN_show_punycode" preference. This defaults to false, so punycode is not forced across the board.
2) There are a bunch of preferences for which top-level domains are considered "safe" for non-ASCII characters no matter what. Those currently default to false as far as I can tell.
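So anyone who prefers the stricter behaviour can flip the preference from point 1 themselves; a one-line user.js sketch (standard Firefox prefs syntax, using the pref name above):

    // Force the raw punycode (xn--) form for all IDNs in the URL bar.
    user_pref("network.IDN_show_punycode", true);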