
If your screenreader is any good, it'll be a piece of proprietary software that knows how to OCR the framebuffer and doesn't give a shit about "standard" UI.
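A minimal sketch of what "OCR the framebuffer" means in practice on Win32: grab the visible screen into a bitmap, then hand the pixels to an OCR engine. Only the capture step is shown; the OCR pass (e.g. Tesseract) is assumed, and capture_framebuffer is just an illustrative name:

    /* Sketch: capture the screen so an OCR pass can run over the pixels.
       Error handling omitted for brevity. */
    #include <windows.h>

    void capture_framebuffer(void) {
        int w = GetSystemMetrics(SM_CXSCREEN);
        int h = GetSystemMetrics(SM_CYSCREEN);

        HDC screen = GetDC(NULL);                 /* DC for the whole screen */
        HDC mem    = CreateCompatibleDC(screen);  /* off-screen DC */
        HBITMAP bmp = CreateCompatibleBitmap(screen, w, h);
        SelectObject(mem, bmp);

        /* Copy the visible framebuffer into our bitmap. */
        BitBlt(mem, 0, 0, w, h, screen, 0, 0, SRCCOPY);

        /* ...hand the bitmap's pixels to an OCR engine here... */

        DeleteObject(bmp);
        DeleteDC(mem);
        ReleaseDC(NULL, screen);
    }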


It's still very difficult to navigate a custom-drawn UI, even if the screen reader can read what the text on the form says.

Typically these crap apps have custom-drawn widgets and buttons with no tab order and no way to activate them without a mouse. The screen-reading bit is pretty easy (fall back to OCR, as you mentioned). But once the screen reader says "please click this button" and there are no (known) buttons on the form, you are stuck.


Synthetic mouse clicks.

These are pretty much solved problems. I know because Nuance solved them. Dragon NaturallySpeaking both OCRs the screen and sends synthetic mouse clicks to windows that use custom UIs, such as Microsoft Office, when you say something like "click OK".
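A rough sketch of the clicking half on Win32, assuming the OCR pass has already produced screen coordinates for the target. SendInput is the standard API for injecting input; click_at is an illustrative name, not anything from Dragon:

    /* Sketch: synthesize a left click at screen coordinates (x, y).
       MOUSEEVENTF_ABSOLUTE expects coordinates normalized to 0..65535. */
    #include <windows.h>

    void click_at(int x, int y) {
        int sw = GetSystemMetrics(SM_CXSCREEN);
        int sh = GetSystemMetrics(SM_CYSCREEN);

        INPUT in[3] = {0};

        /* Move the cursor to the target (absolute, normalized). */
        in[0].type = INPUT_MOUSE;
        in[0].mi.dx = (x * 65535) / sw;
        in[0].mi.dy = (y * 65535) / sh;
        in[0].mi.dwFlags = MOUSEEVENTF_MOVE | MOUSEEVENTF_ABSOLUTE;

        /* Press and release the left button. */
        in[1].type = INPUT_MOUSE;
        in[1].mi.dwFlags = MOUSEEVENTF_LEFTDOWN;
        in[2].type = INPUT_MOUSE;
        in[2].mi.dwFlags = MOUSEEVENTF_LEFTUP;

        SendInput(3, in, sizeof(INPUT));
    }

The hard part isn't the injection itself; it's mapping OCR'd text back to a reliable coordinate for the widget, which is where the tedious testing comes in.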

Implementing this in an accessibility suite for the blind is definitely doable. It would be tedious and take a lot of testing to get right, but that's why the best solutions are proprietary.



