
They're not unrelated, though. You have to have a way to get from your input format to the finished product in a consistent way, and the glyph set you design has a large bearing on that. You can't solve it completely with AI, because then you just have an AI interpretation of human language, not human language. A language like Korean written in Hangul would need to build individual glyphs from smaller ones through the use of ligatures, but a similar approach couldn't be taken with Japanese, since many glyphs have multiple meanings depending on context. How should these be represented in Unicode? Yes, these are likely solved problems, but I'm sure there are other, less prominent languages that have similar problems, and nobody's put in the work to solve them because those languages aren't as popular online.

You need to be able to represent the language at all stages of authorship - i.e. Unicode needs to be able to represent a half-written Japanese word somehow (yes, Japanese is a bad example because it has phonetic syllabaries as well as logographic kanji).

Anyway, trying to figure out a single text encoding scheme capable of representing every language on Earth is not an easy task.




It's not an AI issue, just a small matter of having lots of rules. Moreover, this is not just an issue for non-Western languages: the character â (lower-case "a" with a circumflex) can be represented either as a single code point U+00E2 or as an "a" combined with a combining circumflex (U+0302). Furthermore, Unicode implementations are required to evaluate these two versions as being equal in string comparisons, so if you search for the combined version in a document, it should find the single-code-point instances as well.
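
A rough sketch of that equivalence in Python (standard library only; the strings are just illustrative):

    import unicodedata

    precomposed = "\u00e2"   # â as a single code point (U+00E2)
    decomposed = "a\u0302"   # "a" followed by U+0302 COMBINING CIRCUMFLEX ACCENT

    print(precomposed == decomposed)                  # False: raw code points differ
    print(unicodedata.normalize("NFC", precomposed) ==
          unicodedata.normalize("NFC", decomposed))   # True: canonically equivalent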


> Unicode implementations are required to evaluate these two versions as being equal in string comparisons

What do you mean by "required"? There are different forms of string equality. It's plausible to have string equality that compares the actual codepoint sequence, string equality that compares NFC or NFD forms, and string equality that compares NFKC or NFKD forms. And heck, there's also comparing strings while ignoring diacritics.

Any well-behaved software that's operating on user text should indeed do something other than just comparing the codepoints. In the case of searching a document, it's reasonable to do a diacritic-insensitive search, so if you search for "e" you could find "é" and "ê". But that's not appropriate in all cases.
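
To make those levels concrete, here's a rough Python sketch (standard library only; the diacritic-stripping helper is a crude fold for illustration, not a full Unicode-correct one):

    import unicodedata

    def canonical_equal(a, b):
        # NFD comparison: U+00E9 and "e" + U+0301 compare equal.
        return unicodedata.normalize("NFD", a) == unicodedata.normalize("NFD", b)

    def compatibility_equal(a, b):
        # NFKD comparison: also folds compatibility characters, e.g. "ﬁ" vs "fi".
        return unicodedata.normalize("NFKD", a) == unicodedata.normalize("NFKD", b)

    def strip_diacritics(s):
        # Decompose, then drop combining marks (category Mn).
        return "".join(c for c in unicodedata.normalize("NFD", s)
                       if unicodedata.category(c) != "Mn")

    print("é" == "e\u0301")                 # False: raw codepoint comparison
    print(canonical_equal("é", "e\u0301"))  # True
    print(compatibility_equal("ﬁ", "fi"))   # True
    print(strip_diacritics("ê") == "e")     # True: diacritic-insensitive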


It's part of the Unicode standard. See http://en.wikipedia.org/wiki/Unicode_equivalence for details.

(OK, so "required" might be overstating it; you are perfectly free to write a program that doesn't conform to the standard. But most people will consider that a bug unless there is a good reason for it)


Unicode defines equivalence relations, yes. But nowhere is a program that uses Unicode required to apply an equivalence relation whenever it wishes to compare two strings. It probably should use one, but there are various reasons why it might want strict equality for certain operations.


In some languages those accented characters would be different letters, sometimes appearing far away from each other in collation order. In other cases they are basically the same letter. And in Hungarian, 'dzs' is a single letter.


Different languages can define different collation rules even when they use the same graphemes. For example, in Swedish z < ö, but in German ö < z. Same graphemes, different collation.
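
A hedged sketch of what that looks like with locale-aware collation, assuming the third-party PyICU bindings are installed (the locale names and words are just examples):

    import icu  # PyICU: third-party Python bindings for ICU

    words = ["zebra", "öl"]

    swedish = icu.Collator.createInstance(icu.Locale("sv_SE"))
    german = icu.Collator.createInstance(icu.Locale("de_DE"))

    print(sorted(words, key=swedish.getSortKey))  # ['zebra', 'öl']: z < ö
    print(sorted(words, key=german.getSortKey))   # ['öl', 'zebra']: ö < z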


And we may even have more than one set of collation rules within the same language.

E.g. Norwegian had two common ways of collating æ, ø, å and their alternative forms ae, oe and aa. Phone books used to collate "ae" with æ, "oe" with ø and "aa" with å, while in other contexts "ae", "oe" and "aa" would often be collated based on their constituent parts. It's a lot less common these days for the pairs to be collated with æøå, but still not unheard of.

Of course, it truly becomes entertaining when you mix in "foreign" characters. E.g. I would be inclined to collate ö together with ø when collating predominantly Norwegian strings, since ö used to be fairly commonly used in Norway too, but these days you might also find it collated with "o".
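
Roughly, the old phone-book convention could be expressed as an ICU tailoring on top of the stock Norwegian rules. A sketch, again assuming PyICU; the rule string is an illustrative guess at that convention (lower case only, a real tailoring would also cover Aa, AA, etc.):

    import icu  # PyICU: third-party Python bindings for ICU

    # Extend the stock Norwegian rules so "ae", "oe" and "aa" sort as if they
    # were æ, ø and å.
    nb_rules = icu.Collator.createInstance(icu.Locale("nb_NO")).getRules()
    phonebook = icu.RuleBasedCollator(nb_rules + " &æ = ae &ø = oe &å = aa")

    words = ["aalesund", "ålesund", "olsen"]
    print(sorted(words, key=phonebook.getSortKey))
    # "aalesund" now sorts together with "ålesund" near the end of the
    # alphabet, rather than at the front with the other a-words.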


Why does Unicode need to represent a half-written Japanese word? If it's half-written, you're still in the process of writing it, and this is entirely the domain of your text input system.

Which is to say, there is absolutely no need for the text input system to represent all stages of input as Unicode. It is free to represent the input however it chooses to do so, and only produce Unicode when each written unit is "committed", so to speak. To demonstrate why this is true, take the example of a handwriting recognition input system. It's obviously impossible to represent a half-written character in Unicode. It's a drawing! When the text input system is confident it knows what character is being drawn, then it can convert that into text and "commit" it to the document (or the text field, or whatever you're typing in).

But there's nothing special about drawing. You can have fancy keyboard-based text input systems with intermediate input stages that represent half-written glyphs/words. In fact, that's basically what the OS X text input system does. I'm not a Japanese speaker, so I actually don't know whether all the intermediate forms that the Japanese input systems (there are multiple of them) pass through have Unicode representations, but the text input system certainly has a distinction between text that is being authored and text that has been "committed" to the document (which is to say, glyphs that are in their final form and will not be changed by subsequent typed letters). And I'm pretty sure the software that you're typing in doesn't know about the typed characters until they're "committed".
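
As a toy Python sketch of that committed/uncommitted split (not a real IME, just the shape of the idea; the romaji table is a tiny illustrative subset):

    # A composition buffer keeps raw keystrokes in its own representation and
    # only emits Unicode text when a unit "commits".
    ROMAJI = {"a": "あ", "ka": "か", "sa": "さ", "ta": "た", "na": "な"}

    class CompositionBuffer:
        def __init__(self):
            self.pending = ""    # uncommitted keystrokes, not yet text
            self.committed = ""  # finished Unicode text the application sees

        def key(self, ch):
            self.pending += ch
            if self.pending in ROMAJI:  # unit complete: commit it as Unicode
                self.committed += ROMAJI[self.pending]
                self.pending = ""

    buf = CompositionBuffer()
    for ch in "kana":
        buf.key(ch)
    print(buf.committed)  # かな - the application only ever sees committed text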

Edit: In fact, you can even see this input system at work in the US English keyboard. In OS X, with the US English keyboard, if you type Option-E, it draws the ACUTE ACCENT glyph (´) with a yellow background. This is a transitional input form, because it's waiting for the user to type another character. If the user types a vowel, it produces the appropriate combined character (e.g. if the user types "e" it produces "é"). If the user types something else, it produces U+00B4 ACUTE ACCENT followed by the typed character.
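
The difference shows up in the resulting text: a combining accent after the base letter is canonically equivalent to the precomposed character, while the standalone spacing accent never merges with the following letter. A small Python illustration (standard library only):

    import unicodedata

    combining = "e\u0301"   # "e" followed by U+0301 COMBINING ACUTE ACCENT
    spacing = "\u00b4x"     # U+00B4 ACUTE ACCENT (dead-key fallback) then "x"

    print(unicodedata.normalize("NFC", combining) == "\u00e9")  # True: composes to é
    print(unicodedata.normalize("NFC", spacing))                # "´x": nothing combines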


>Why does Unicode need to represent a half-written Japanese word? If it's half-written, you're still in the process of writing it, and this is entirely the domain of your text input system.

Drafts or word-processor documents (which are immensely simpler if just stored as Unicode). Then there's the fact that people occasionally do funky things with kanji, so you're doing everyone a favour by letting them half-write a word anyway.


Interestingly enough, one early Apple II-era input method for Chinese would generate the characters on the fly (since the hardware at the time couldn't handle storing, searching and rendering massive fonts), meaning it could generate partial Chinese characters or ones that didn't actually exist.

http://en.wikipedia.org/wiki/Cangjie_input_method#Early_Cang...

In the Wikipedia article it even shows an example of a rare character that's not encoded in Unicode but which can be represented using this method.



