Hacker News

One advantage of regular letters compared to this is that it's pretty easy to tell that even a fairly mangled A is still an A; a word written in dotsies loses a lot of its legibility as soon as it's not displayed in fairly large pixels across a very clear background. Viewing things at an angle? While it moves by? On a dirty or wet piece of paper? Forget about it.

It's a super fun idea, though. I would 100% put this in the background, or even the foreground, of some future scifi scape.




Yeah, it's a telling point that when we're asked to read passages of Dotsies, the font is sized up roughly threefold (36px to 42px), and so takes roughly eight to ten times the screen area, compared to the base 13px size used in the "How much better is it?" example.
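A quick back-of-the-envelope check on those sizes (taking the 13px base and the 36px/42px reading sizes at face value; area scales with the square of the linear size):

```python
base = 13                 # px size in the "How much better is it?" demo
for big in (36, 42):      # px sizes used for the Dotsies reading passages
    linear = big / base   # how much taller each glyph gets
    print(f"{big}px: {linear:.2f}x linear, {linear**2:.1f}x area")
# 36px: 2.77x linear, 7.7x area
# 42px: 3.23x linear, 10.4x area
```

So before Dotsies can claim a density win, its glyphs would need to shrink back by most of that factor while staying readable.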

You're going to need damn good eyes to read Dotsies at anything like 13px, and I suspect even the sharp-eyed are going to misread a lot. Size Dotsies text up to the point that people of average vision can read it as reliably as a normal font at a reasonable size, and I don't think you're going to get much, if any, greater information density.

I'm reminded of Speedtalk, a language from Robert Heinlein's story "Gulf", in which every combination of phoneme and delivery (pitch, length, etc.) represents a word in the Basic English vocabulary. So every spoken "word" in the language is actually a full sentence. But it takes intense futuristic training just to speak it even vaguely intelligibly, because all the normal imprecisions of speaking (much less speaking in a new language) turn any tiny error from "You used 'record' as a noun, but with a long 'e' and the stress on the second syllable" into "You have a hovercraft, and it's full of what???"

Now imagine trying to follow speech in such a language if the speaker's out of breath, alarmed, surrounded by background noise, etc. Just not enough redundancy for reliable communication.



That's the reference, yes.


Not to mention recognizing individual letters. The difference between "a" and "b" in the Latin alphabet is drastic. In Dotsies you could only tell them apart if nearby letters give you the height of the dots relative to some baseline.

Dotsies seems to make a tradeoff: it optimizes for higher information density at the cost of a much higher error rate (or a much higher requirement for precision and accuracy, if you will). I also definitely see your point about font sizes. Basically it's a valiant attempt at a reimagined alphabet, but I don't think it actually achieves higher info density in the real world.


Dotsies is even worse if your eyes are some variant of broken: nearsighted, cataracts, cross-eyed, etc.


Perfect for automatic readers though.


Nope: automatic readers rely heavily on redundancy. QR codes dedicate a large share of their capacity to error correction; at the highest level, around 30% of the symbol can be damaged and still recovered. OCR is still a difficult problem, despite the redundancy built into the alphabet. There's no way a machine reader could reliably tell apart characters that differ by one pixel.
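That last point can be made concrete. Dotsies renders each letter as a column of five dots, i.e. a 5-bit code, and 26 letters drawn from only 32 possible patterns leaves almost no slack. A small sketch (the letter-to-pattern assignment here is made up for illustration; the conclusion holds for any assignment):

```python
from itertools import combinations

# Hypothetical Dotsies-style encoding: each letter is one of 32
# possible 5-dot columns; just take the first 26 bit patterns.
codes = list(range(26))

def hamming(a, b):
    """Number of dots that differ between two 5-bit letter codes."""
    return bin(a ^ b).count("1")

min_dist = min(hamming(a, b) for a, b in combinations(codes, 2))
print(min_dist)  # 1: flipping a single dot turns one letter into another
```

In fact the largest 5-bit code with minimum distance 2 has only 16 codewords (the even-weight patterns), so with 26 letters some pairs must differ by a single dot no matter how you assign them. Compare the Latin alphabet, where no letter is one speck of dirt away from another.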


Too bad I'm not an automatic reader!


I've been an automated reader since I was probably, what, five years old?


Me too. I didn't even mean to read your comment, but as soon as my eyes saw it: boom, processed and read. Like a computer without a virus scanner or any security, to see the words was to run their executable on my wetware.


I actually remember having severe headaches when I was something like 4 or 5 and went through the transition from deliberate reading to automatic reading. It was much worse riding in a car: just looking outside and involuntarily reading all the signs also added an element of motion sickness to it...


I saw a great t-shirt at the 2006 HOPE conference that read, "Just by reading this you have given me control of a piece of your brain"



