I'm currently trying to solve my RSI problems by going farther than the normal progression of ergonomic peripherals. If that doesn't work, I'll have to give up and go to voice control.
I've designed and built my own chording keyboard, which should be roughly 10x less stressful to type on than a normal keyboard. I also have an eye tracker that I'm going to program to replace most mouse tasks.
I don't like the idea of inaccurate input that makes noise (me speaking) and could disturb others in an office. I have a feeling that being too noisy for an open-concept office might limit my career options in the future. But I would still rather live with that than live with RSIs.
I do really like the idea of running Windows in a VM and proxying the commands to Linux (rough sketch of that setup below). That was previously one of my qualms with voice control: that I'd have to switch to Windows.
My other idea is to switch to a gross-motor gesture and eye-tracking based system using an Oculus Rift with an IR camera mounted in one of the eye sockets. That might be an interesting and fun way of programming even if it weren't solving an RSI problem.
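For what it's worth, the usual shape of that Windows-VM setup is a speech engine in the guest that sends recognized commands over the network to a small listener on the Linux host, which injects them as keystrokes (roughly the approach projects like aenea take). A minimal sketch of the host-side listener, assuming xdotool is installed; the port number and plain-text message format here are just placeholders:

```python
# Minimal command receiver on the Linux host (illustrative sketch).
# The Windows VM runs the speech recognizer and sends plain-text
# commands over TCP; we inject them into the currently focused X11 window.
import socket
import subprocess

HOST, PORT = "0.0.0.0", 8765  # assumption: any free port reachable from the VM

def inject_text(text: str) -> None:
    # xdotool types the string into whichever window currently has focus
    subprocess.run(["xdotool", "type", "--delay", "0", text], check=True)

def main() -> None:
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind((HOST, PORT))
        srv.listen(1)
        while True:
            conn, _addr = srv.accept()
            with conn:
                data = conn.recv(4096)
                if data:
                    inject_text(data.decode("utf-8", errors="replace"))

if __name__ == "__main__":
    main()
```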
I'm fortunate that I've never had RSI despite my extreme computer usage, but I'm interested in the possibilities of eye tracking for interfaces. Is it accurate enough? One simple thing that would come in very handy is widget focus switching, maybe in combination with some key combo: "switch focus to where I'm looking". That would save me a lot of reaching for the mouse.
At least at this point it's accurate to about a palm-sized region, and there is a cool auto-scroll demo. But putting together a UI and making it usable is a pretty big project.
That's not great, but it would be acceptable for switching focus between screens (or big enough tiles in a tiling desktop). Together with some clever mechanisms one could define task/tab/window groups per area and make switching among large numbers of elements much more manageable, without having to surrender to "pick up the mouse", "select", "back to the keyboard".
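On the "switch focus to where I'm looking" idea: palm-sized accuracy is already enough to pick a monitor or a large tile, so a hotkey handler only has to map the gaze point to a region and activate whatever window group lives there. A rough sketch, where get_gaze_point() is a stand-in for whatever the tracker's SDK provides, and the region bounds and window titles are invented for illustration (uses wmctrl):

```python
# Rough sketch: on a hotkey press, look up which screen region the gaze
# falls in and activate the window group bound to that region.
import subprocess

REGIONS = [
    # (x_min, x_max, window title substring handed to wmctrl) -- example values
    (0,    1920, "emacs"),    # left monitor: editor
    (1920, 3840, "Firefox"),  # right monitor: browser/docs
]

def get_gaze_point() -> tuple[int, int]:
    """Placeholder: return the current gaze position in screen pixels."""
    raise NotImplementedError("wire this up to your eye tracker's API")

def focus_where_looking() -> None:
    x, _y = get_gaze_point()
    for x_min, x_max, title in REGIONS:
        if x_min <= x < x_max:
            # wmctrl -a switches to and raises the first window whose title matches
            subprocess.run(["wmctrl", "-a", title], check=True)
            return

if __name__ == "__main__":
    focus_where_looking()  # bind this script to a key combo in your WM
```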
With a nice microphone I have found that I can speak rather quietly and have it still pick up my commands accurately. My office mate has said it doesn't bother him much.