
It only solves one side of man-machine interaction: the machine-to-man part. For many interactions, its solution for man-to-machine is voice, which is anything but unobtrusive in public.


One of my favorite pieces of fictional technology is subvocal speech recognition. (For examples, see the motes in A Deepness in the Sky and Jane's interface with Ender in the Speaker for the Dead series.)



I think the wearable tinkering community has plenty of input-device ideas to draw from, such as this little one-handed chorded keyboard: http://chordite.com/ . A touch panel (doing the same job as the one on the side of Glass, so you could operate it without reaching up to the side of your head) could be located pretty much anywhere and blend in quite well, on a belt or a watch or some such. Someone else mentioned Myo (https://getmyo.com/ ); that would be nice too.

As long as the interface keeps the number of choices on each "page" of the interface low, the input device can stay small and easy to hide.
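
To make that concrete, here is a minimal, entirely hypothetical sketch of the idea: a menu where each page offers only a handful of items, driven by just three buttons (next/select/back). The menu entries, button names, and class are invented for illustration and have nothing to do with Glass's actual software.

```python
# Hypothetical sketch: a paged menu driven by only three buttons
# ("next", "select", "back"). Because each page offers few choices,
# the physical input device can stay tiny. Menu entries are invented.

MENU = {
    "root": ["Messages", "Navigation", "Camera"],
    "Messages": ["Read latest", "Dictate reply"],
    "Navigation": ["Navigate home", "Navigate to work"],
    "Camera": ["Take photo", "Record video"],
}

class PagedMenu:
    def __init__(self, menu):
        self.menu = menu
        self.page = "root"
        self.cursor = 0

    def press(self, button):
        items = self.menu[self.page]
        if button == "next":                       # cycle the highlight
            self.cursor = (self.cursor + 1) % len(items)
        elif button == "back":                     # jump back to the top level
            self.page, self.cursor = "root", 0
        elif button == "select":
            choice = items[self.cursor]
            if choice in self.menu:                # descend into a submenu
                self.page, self.cursor = choice, 0
            else:                                  # leaf item: fire the action
                print("Activated:", choice)
        return self.menu[self.page][self.cursor]   # currently highlighted item

# Two "next" presses move the highlight to "Camera"; "select" opens that page.
m = PagedMenu(MENU)
m.press("next")
m.press("next")
m.press("select")
```

With three physical inputs and three or four items per page, any command is only a few presses away, which is why the device itself can be as small as a chord key or a belt-mounted touch strip.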


I wonder what the status of their patents on "eye motion controlled interfaces" (sic) is.


It seems like last year's demos emphasized the eye gestures, while this year they're gone. It must not have worked out somehow.


Combined with the Myo, if it performs as advertised, this would be fantastic.


I tried googling "Myo" and "Myo Google Glass" and couldn't find anything. What is Myo?


It's an EMG (electroMYOgraphy; "myo" means muscle) wristband that is being marketed as a gesture-recognition input device for computers [1].

1. https://getmyo.com/
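
For illustration only, here is a rough sketch of how gestures recognized by such a wristband might be mapped to commands on a head-mounted display. The gesture labels, command names, and handle_gesture function are all made up; this is not the Myo SDK.

```python
# Hypothetical sketch (not the actual Myo SDK): mapping gestures recognized
# from an EMG wristband to commands on a head-mounted display. The gesture
# labels and command names are invented for illustration.

GESTURE_TO_COMMAND = {
    "fist": "select",
    "wave_left": "previous_card",
    "wave_right": "next_card",
    "fingers_spread": "go_home",
}

def handle_gesture(gesture):
    """Translate a classified gesture into a UI command, ignoring unknowns."""
    command = GESTURE_TO_COMMAND.get(gesture)
    if command is not None:
        print("gesture %r -> command %r" % (gesture, command))
    return command

# Example event stream from the (hypothetical) gesture recognizer.
for g in ["wave_right", "wave_right", "fist", "shrug"]:
    handle_gesture(g)
```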



How about some buttons instead? Voice is not what I want for man-to-machine communication. IBM found out over 20 years ago that people don't want voice-to-text: http://vivondo.blogspot.com/2012/04/ibm-speech-to-text-exper....


There's a new type of microphone attachment for non-audible murmur recognition:

http://library.naist.jp/dspace/bitstream/10061/7966/1/EUROSP...

It's another gadget to wear, but better design and miniaturization could help.


What about using eye focus and movements as gestures? I'm thinking about a way to use it without words or hands: an advanced Human Interface Device for eyes only (optionally augmented with voice/hand gestures).
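
One common eyes-only technique is dwell selection: a target activates when the gaze rests on it for long enough. Below is a small hypothetical sketch of that idea; the sample rate, dwell threshold, target layout, and gaze samples are all invented for illustration and don't correspond to any real eye tracker's API.

```python
# Hypothetical sketch of eyes-only input via dwell selection: a target fires
# when the gaze point stays inside it for enough consecutive samples. The
# sample rate, threshold, targets, and gaze samples are invented.

DWELL_SAMPLES = 8    # ~0.8 s of dwell at an assumed 10 Hz eye tracker

def inside(target, x, y):
    """True if the gaze point (x, y) falls within a rectangular target."""
    left, top, right, bottom = target["rect"]
    return left <= x <= right and top <= y <= bottom

def dwell_select(targets, gaze_samples):
    """Yield a target name each time the gaze dwells on it long enough."""
    current, count = None, 0
    for x, y in gaze_samples:
        hit = next((t for t in targets if inside(t, x, y)), None)
        if hit is not None and hit is current:
            count += 1
            if count >= DWELL_SAMPLES:
                yield hit["name"]
                count = 0                # require a fresh dwell to repeat
        else:
            current, count = hit, 1      # gaze moved to a new (or no) target

targets = [
    {"name": "open_mail", "rect": (0, 0, 100, 50)},
    {"name": "take_photo", "rect": (0, 60, 100, 110)},
]
# Nine samples on the first target (~0.9 s) trigger it once; three samples
# on the second (~0.3 s) do not.
samples = [(50, 25)] * 9 + [(50, 80)] * 3
print(list(dwell_select(targets, samples)))
```

The usual trade-off with dwell is speed versus false activations ("Midas touch"), which is one reason to keep voice or a small touch panel as an optional confirmation channel.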



