I can see the killer app being something practical, like being able to look at a recipe and watch a food-prep video while I'm actually doing the prep. Or being able to look at an engine bay, diagnose an issue, and see parts identified as I look and point at them.
I was thinking about that too. Imagine having image recognition within the headset or glasses that can draw boxes around items and match them against a global database. You could be walking around, see something interesting, then press a button or say a phrase to pull up information about it right in your glasses.
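For a rough sense of what the detection half could look like, here's a minimal sketch using a pretrained torchvision detector. The global-database lookup (`query_knowledge_base`) and the overlay rendering are hypothetical placeholders, not real APIs:

```python
# Minimal sketch of the "draw boxes and query" idea, assuming a frame
# from the headset camera is available as a PIL image. Uses a pretrained
# torchvision detector; the "global database" lookup is stubbed out.
import torch
from torchvision.models.detection import (
    fasterrcnn_resnet50_fpn,
    FasterRCNN_ResNet50_FPN_Weights,
)

weights = FasterRCNN_ResNet50_FPN_Weights.DEFAULT
model = fasterrcnn_resnet50_fpn(weights=weights).eval()
preprocess = weights.transforms()
categories = weights.meta["categories"]

def detect_objects(frame, score_threshold=0.8):
    """Return (label, box) pairs for confidently detected objects in a frame."""
    with torch.no_grad():
        prediction = model([preprocess(frame)])[0]
    results = []
    for label_idx, box, score in zip(
        prediction["labels"], prediction["boxes"], prediction["scores"]
    ):
        if score >= score_threshold:
            results.append((categories[label_idx], box.tolist()))
    return results

# On a button press or wake phrase, look up whatever the wearer sees.
# query_knowledge_base and render_overlay are hypothetical placeholders:
# for name, box in detect_objects(camera_frame):
#     info = query_knowledge_base(name)
#     render_overlay(box, info)
```

Running a model this size on glasses hardware is its own problem, of course; in practice the frames would probably be streamed to a phone or the cloud for inference.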
A lot of infrastructure and data would be needed before it reaches critical mass and mass adoption follows. The more people use it, the more data there is in the system, and the more people want to use it.
Another really great feature would be captions for conversations going on around you - a big help for the hearing impaired, and maybe even useful for everyone else to keep better track of the conversation flow.
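The speech-to-text side of that is already doable offline today. Here's a rough sketch of streaming captions from a microphone, assuming the Vosk recognizer and the sounddevice library are installed and an English Vosk model has been downloaded to `model/`; a real headset would render these lines in the display instead of printing them:

```python
# Rough sketch of live captioning from microphone audio using the
# offline Vosk recognizer. Assumes a Vosk model has been unpacked
# into "model/" and the sounddevice library can access a microphone.
import json
import queue

import sounddevice as sd
from vosk import Model, KaldiRecognizer

SAMPLE_RATE = 16000
audio_q = queue.Queue()

def on_audio(indata, frames, time_info, status):
    # Push raw 16-bit mono audio into the queue for the recognizer.
    audio_q.put(bytes(indata))

model = Model("model")  # path to a downloaded Vosk model
recognizer = KaldiRecognizer(model, SAMPLE_RATE)

with sd.RawInputStream(samplerate=SAMPLE_RATE, blocksize=8000,
                       dtype="int16", channels=1, callback=on_audio):
    while True:
        data = audio_q.get()
        if recognizer.AcceptWaveform(data):
            # A finished utterance - this would go into the caption history.
            text = json.loads(recognizer.Result()).get("text", "")
            if text:
                print(text)
        else:
            # An in-progress guess - this would be the live caption line.
            partial = json.loads(recognizer.PartialResult()).get("partial", "")
            print("\r" + partial, end="")
```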
The captions idea sounds really useful. I've used live captions during Google Meet calls, and having a second or so of visible history of the live conversation really does help with following the flow. Having that in real-life conversations would really be something.