Yeah, I think having a display that isn't obviously part of your outfit will be a really big enabler of wearable technologies. That and flexible printed circuits that can survive dry cleaning or a washing machine.
One of the reasons Glass failed was that it was about as in-your-face as possible. Resistive screens also failed in the same sense because having to pull a stylus out to use your phone is aggressively technological.
In contrast, touch is a really casual interface, and though the touch UIs I've used all had warts, it was at least possible for anyone to grasp the basic mechanics. In my mind, a wearable device that is obviously a wearable device isn't going to be as successful as clothing that monitors your heart rate or SpO2, because the former makes you consciously interact with it. If you fill your gym bag with smart clothes, you just wear them like you would wear gym clothes.
Yeah. The way I think of this is, let's start from (my subjective) aesthetic first principles. Take an abstract human body, and look at its poses and movements: standing, sitting (and other forms of resting), walking, running.
A smartphone is a complete failure from that perspective: looking down and pecking at a screen? No way. Now, holding and looking at something is a natural enough movement, but chronically craning your neck down with such focus and duration? I doubt it. A smartwatch is a bit more natural, but to bring up subjective aesthetics again, I'm not too comfortable with the concept of adults wearing digital watches.
Google Glass was a step in the right direction in my opinion--you can interact with a UI while looking straight ahead--but as you said, the camera and other aspects weirded people out (there may be a future in contact lenses with similar tech?). Voice interfaces like Alexa are definitely in line with my view.
I don't think it's about the pose. It's about attention. To interact with a system we have to focus on it and that means we have to take our focus off of everything else. Computers are notoriously bad at understanding us if we aren't focused on them exclusively.
That's why so many people are trying to get voice interaction right. It's the most natural way to split our attention.
A lot of this IoT nonsense is trying to discover a few corner cases where computers can do sensible things without a human needing to specifically focus on them. Like automatically reordering cereal, or unlocking your phone when you get close, or adjusting the temperature when you leave the house.
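To make that concrete, here's a minimal sketch of the kind of attention-free automation I mean. The geofence callbacks and the Thermostat class are made up for illustration, not a real home-automation API: the point is just that a presence change drives the behavior and nobody ever has to look at a screen.

    # Hypothetical sketch, assuming made-up geofence events and a stand-in
    # Thermostat class; no real smart-home API is referenced.
    AWAY_TEMP_C = 16.0
    HOME_TEMP_C = 21.0

    class Thermostat:
        """Stand-in for a smart thermostat; just records its set point."""
        def __init__(self) -> None:
            self.set_point_c = HOME_TEMP_C

        def set_temperature(self, celsius: float) -> None:
            self.set_point_c = celsius
            print(f"thermostat set to {celsius:.1f} C")

    def on_geofence_exit(thermostat: Thermostat) -> None:
        # Phone left the home geofence: drop the heat without any user input.
        thermostat.set_temperature(AWAY_TEMP_C)

    def on_geofence_enter(thermostat: Thermostat) -> None:
        # Phone came back: warm the house up before anyone touches a screen.
        thermostat.set_temperature(HOME_TEMP_C)

    if __name__ == "__main__":
        t = Thermostat()
        on_geofence_exit(t)   # user leaves the house
        on_geofence_enter(t)  # user comes home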
> Resistive screens also failed in the same sense because having to pull a stylus out to use your phone is aggressively technological.
Resistive screens worked fine with fingertips, too. In fact they worked perfectly well with gloves, pens, etc., unlike modern capacitive screens. They failed because they were less sensitive and less accurate.