
Yeah, the difficulty with glyph caching IMO is handling things like combining diacritics. Really, you'd need to do proper Unicode grapheme cluster segmentation [1] just to decide what a valid cache key even is, at least if you intend to support all major languages. But if you only want to support most languages, you could get by without it, or with Unicode normalization [2] alone.

[1]: https://unicode.org/reports/tr29/

[2]: https://unicode.org/reports/tr15/
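
To make this concrete, here's a minimal Rust sketch of keying a glyph cache on NFC-normalized extended grapheme clusters, using the unicode-segmentation and unicode-normalization crates. The cache layout and the rasterize() stub are invented for illustration, not taken from any particular renderer:

    use std::collections::HashMap;
    use unicode_normalization::UnicodeNormalization;
    use unicode_segmentation::UnicodeSegmentation;

    /// Stand-in for whatever a rasterized cluster actually holds.
    struct RasterizedCluster {
        coverage: Vec<u8>, // 8-bit per-pixel coverage, row-major
        width: usize,
    }

    struct GlyphCache {
        entries: HashMap<String, RasterizedCluster>,
    }

    impl GlyphCache {
        fn get_or_rasterize(&mut self, cluster: &str) -> &RasterizedCluster {
            // NFC-normalize the key so precomposed and decomposed
            // forms of the same cluster share one cache entry.
            let key: String = cluster.nfc().collect();
            self.entries.entry(key).or_insert_with(|| rasterize(cluster))
        }
    }

    fn rasterize(_cluster: &str) -> RasterizedCluster {
        // Stand-in for the real shaping + rasterization path.
        RasterizedCluster { coverage: Vec::new(), width: 0 }
    }

    fn draw_line(cache: &mut GlyphCache, text: &str) {
        // `true` selects extended grapheme clusters per UAX #29 [1].
        for cluster in text.graphemes(true) {
            let _glyph = cache.get_or_rasterize(cluster);
            // ... composite `_glyph` into the frame here ...
        }
    }

    fn main() {
        let mut cache = GlyphCache { entries: HashMap::new() };
        draw_line(&mut cache, "e\u{0301}"); // decomposed "é"
        draw_line(&mut cache, "\u{00E9}"); // precomposed "é": same entry
        assert_eq!(cache.entries.len(), 1);
    }

The segmentation decides where a cluster begins and ends; the normalization just keeps equivalent byte sequences from blowing up the cache.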



If you were short on CPU, you could handle "normal" combining diacritics like 0̩́ in a variety of ways, including just alpha-compositing several glyphs into the same pixels every time you redraw. And (except for emoji!) you could compute each scan line of a text layer as 8-bit-deep pixelwise coverage first, which opens up the possibility of compositing each pixel with a bytewise max() rather than alpha-compositing, before mapping those coverages onto pixel colors. But I think the high nibble of the above discussion is that there's quite a bit of performance headroom.
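
A self-contained sketch of that max()-compositing path, with all names and the pixel format invented for illustration (real code would also clip the glyph against the line bounds):

    /// Merge one glyph's coverage row into the scan line's coverage
    /// buffer with bytewise max(). `offset` is the glyph's starting
    /// column; zip() silently clips at the right edge.
    fn composite_max(line: &mut [u8], glyph_row: &[u8], offset: usize) {
        for (dst, &src) in line[offset..].iter_mut().zip(glyph_row) {
            *dst = (*dst).max(src);
        }
    }

    /// Map the finished coverage line onto pixels: a linear blend
    /// between background and foreground, packed as 0x00RRGGBB.
    fn shade(line: &[u8], fg: [u8; 3], bg: [u8; 3], out: &mut [u32]) {
        for (cov, px) in line.iter().zip(out.iter_mut()) {
            let a = *cov as u32;
            let mut rgb = [0u8; 3];
            for i in 0..3 {
                // (fg * a + bg * (255 - a)) / 255
                rgb[i] = ((fg[i] as u32 * a + bg[i] as u32 * (255 - a)) / 255) as u8;
            }
            *px = u32::from_be_bytes([0, rgb[0], rgb[1], rgb[2]]);
        }
    }

    fn main() {
        // A base glyph and a combining mark overlapping at column 3.
        let mut coverage = [0u8; 8];
        composite_max(&mut coverage, &[0, 128, 255, 128, 0], 0);
        composite_max(&mut coverage, &[200, 255, 200], 3); // diacritic
        assert_eq!(coverage, [0, 128, 255, 200, 255, 200, 0, 0]);

        let mut pixels = [0u32; 8];
        shade(&coverage, [255, 255, 255], [0, 0, 0], &mut pixels);
        assert_eq!(pixels[2], 0x00FF_FFFF); // full coverage = full fg
    }

The nice part is that max() is a single unconditional byte op per pixel, so the coverage pass should vectorize well; the color mapping (and any emoji, which need real alpha compositing) happens once per pixel afterwards.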
