
I wonder if it’d be faster if you imagined typing instead of writing (obviously would require the patient to be a proficient typist).


This was my first thought. Writing is incredibly slow; I can barely even operate a pen anymore. I don't see any reason why this mechanism couldn't be applied to other thought patterns. And if the subtle motions of typing aren't high-fidelity enough to be differentiated, I still wouldn't choose the standard letter shapes; I'd design a new motion alphabet that's much easier and faster to "write" by thought.

The article alludes to this:

> the researchers say that alphabetical letters are very different from one another in shape, so the AI can decode the user's intention more rapidly as the characters are drawn, compared to other BCI systems that don't make use of dozens of different inputs in the same way

The fact that it works so well on these complex motions means it can probably work better and faster if they use an alphabet with simpler--but still distinct--motions. Probably lots of lessons to be learned from shorthand and other rapid transcription techniques.
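To make the separability point concrete, here's a minimal sketch in Python. It uses synthetic 2-D stroke trajectories as a stand-in for neural data (the names make_trajectory and accuracy_for_alphabet are made up for illustration, not from the paper): a toy classifier distinguishes eight well-separated strokes far more reliably than eight nearly identical ones.

    import numpy as np
    from sklearn.neighbors import KNeighborsClassifier

    rng = np.random.default_rng(0)

    def make_trajectory(angle, n_points=20, noise=0.3):
        # A noisy straight stroke at a given angle, flattened to a feature vector.
        t = np.linspace(0, 1, n_points)
        x = t * np.cos(angle) + rng.normal(0, noise, n_points)
        y = t * np.sin(angle) + rng.normal(0, noise, n_points)
        return np.concatenate([x, y])

    def accuracy_for_alphabet(angles, n_per_class=50):
        # Train/test a simple classifier on strokes drawn from each angle.
        X = np.array([make_trajectory(a) for a in angles for _ in range(n_per_class)])
        y = np.repeat(np.arange(len(angles)), n_per_class)
        idx = rng.permutation(len(X))
        train, test = idx[: len(X) // 2], idx[len(X) // 2 :]
        clf = KNeighborsClassifier(n_neighbors=5).fit(X[train], y[train])
        return clf.score(X[test], y[test])

    similar = np.linspace(0, 0.3, 8)                          # 8 nearly identical motions
    distinct = np.linspace(0, 2 * np.pi, 8, endpoint=False)   # 8 well-separated motions
    print("similar strokes: ", accuracy_for_alphabet(similar))
    print("distinct strokes:", accuracy_for_alphabet(distinct))

Spreading the strokes evenly around the circle sends the toy classifier's accuracy way up; that's the same argument the researchers make about letter shapes being very different from one another.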

Losing the ability to communicate scares the hell out of everyone. This is amazing progress. And it'll have plenty of applications even for able-bodied people.


I don't think it works like that. Letters are shapes, but keys are just relative positions. The software is reading gestures, and specific keypress motions seem like much less data to work with.


I think it’s trained.

“Imagine writing an A”

Then they look at what fires and record it.

Instead you’d ask “imagine typing an A” and then do the same thing.

Eventually, once enough training happens to capture variation, you start to get visual feedback and can train faster.
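A minimal sketch of that prompted-calibration loop, under stated assumptions: record_neural_activity is a hypothetical placeholder (a real BCI would stream spike-band features from the electrode array), and the per-character offset just fakes a learnable signal.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    ALPHABET = "abcdefghijklmnopqrstuvwxyz"

    def record_neural_activity(char_index):
        # Placeholder for: prompt "imagine writing an A", then capture a
        # window of neural features while the user attempts the motion.
        template = np.zeros(192)
        template[char_index] = 3.0  # fake class-specific signal
        return template + rng.normal(size=192)

    # Collect labeled examples: prompt each character several times.
    X, y = [], []
    for rep in range(10):
        for label in range(len(ALPHABET)):
            X.append(record_neural_activity(label))
            y.append(label)

    decoder = LogisticRegression(max_iter=1000).fit(np.array(X), y)

    # After calibration, decode a fresh attempt and put it on screen;
    # that's the visual feedback that lets later training go faster.
    guess = decoder.predict([record_neural_activity(0)])[0]
    print("decoded:", ALPHABET[guess])

Swapping "imagine writing an A" for "imagine typing an A" only changes the prompt; the collect-label-fit loop stays the same.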



