It only solves one side of man-machine interaction: the machine-to-man part. For many interactions, its solution for man-to-machine is voice, which is anything but unobtrusive in public.
Some of my favorite fictional technology is subvocal speech recognition. (For examples, see the motes in A Deepness in the Sky, and Jane's interface with Ender in the Speaker for the Dead series.)
I think the wearable tinkering community has plenty of ideas for input devices to draw from, such as this little one-handed chording keyboard: http://chordite.com/ . A touch panel (doing the same job as the one on the side of Glass, so you could operate Glass without reaching up to the side of your head) could be located pretty much anywhere and blend in quite well, on a belt or watch or some such. Someone else mentioned Myo (https://getmyo.com/ ); that would be nice too.
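The chording idea is easy to sketch in code: each combination of keys held down together maps to one character, so even five switches give 31 distinct chords. Here's a minimal Python sketch; the chord table is invented for illustration and is not the Chordite's actual layout:

    # A chord is the set of keys held down together, encoded as a bitmask
    # over five finger switches. The chord-to-character table is made up;
    # it is not the Chordite's real layout.
    CHORD_MAP = {
        0b00001: "e",  # index finger alone
        0b00010: "t",  # middle finger alone
        0b00011: "a",  # index + middle together
        0b00100: "o",  # ring finger alone
        0b00101: " ",  # index + ring
    }

    def decode_chord(pressed_keys):
        """Map a set of key indices (0-4) pressed together to a character."""
        mask = 0
        for key in pressed_keys:
            mask |= 1 << key
        return CHORD_MAP.get(mask, "")  # unknown chords produce nothing

    print(decode_chord({0, 1}))  # -> "a"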
As long as the interface keeps the number of choices available at each "page" low, input devices can stay small and easy to hide.
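To make that concrete, here's a rough Python sketch of what I mean: a menu tree where every page offers only a handful of options, so three inputs (next, previous, select) are enough to drive everything. The menu contents and the read_input hook are made up:

    # Paged menu navigable with just three inputs: NEXT, PREV, SELECT.
    # Menu contents are invented for illustration.
    MENU = {
        "home": ["messages", "navigation", "camera"],
        "messages": ["read latest", "dictate reply", "back"],
        "navigation": ["directions home", "nearby places", "back"],
        "camera": ["take photo", "record video", "back"],
    }

    def run_page(page, read_input):
        """Cycle through a page's few choices; return the one selected."""
        options = MENU[page]
        cursor = 0
        while True:
            event = read_input()  # blocks until "NEXT", "PREV", or "SELECT"
            if event == "NEXT":
                cursor = (cursor + 1) % len(options)
            elif event == "PREV":
                cursor = (cursor - 1) % len(options)
            elif event == "SELECT":
                return options[cursor]

With only three distinct inputs needed, the physical device can shrink to a ring, a watch bezel, or a couple of buttons on a belt clip.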
What about using eye focus and movement to gesture with it? I'm thinking of a way to use it without words or hands: an advanced Human Interface Device for eyes only (optionally augmented with voice or hand gestures).
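One eyes-only technique that already exists is dwell selection: if the gaze holds still inside a small region for long enough, that counts as a click. A rough Python sketch, assuming the headset exposes a stream of (x, y, timestamp) gaze samples (the thresholds here are guesses):

    import math

    DWELL_RADIUS_PX = 40  # how still the gaze must stay
    DWELL_TIME_S = 0.8    # how long it must hold to count as a click

    def detect_dwells(samples):
        """samples: iterable of (x, y, t) gaze fixes in time order.
        Yields an (x, y) 'click' each time the gaze dwells in place."""
        anchor = None  # (x, y, t) where the current fixation started
        for x, y, t in samples:
            if anchor and math.hypot(x - anchor[0], y - anchor[1]) <= DWELL_RADIUS_PX:
                if t - anchor[2] >= DWELL_TIME_S:
                    yield (anchor[0], anchor[1])
                    anchor = None  # one fixation fires at most once
            else:
                anchor = (x, y, t)

Combine that with the low-branching pages above and you get a plausible hands-free, voice-free interface.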