ST Sees AR Glasses Replacing Smartphones (eetimes.eu)
27 points by jpindar on Aug 9, 2021 | 57 comments


One commonality between mouse+keyboard and smartphone is that small movements, mostly in fingers and wrists, are all that's needed for input. Contrast that with the Xbox Kinect (hype, then disappeared), the Nintendo Wii (hype, then disappeared), and now some VR input: in all of these systems, many of the experiences are designed around large body movements that go against the ingrained human instinct to be lazy. See "gorilla arms".

AR/VR needs micro-gesture input. The Logitech K830 keyboard is tracked in VR on the Oculus Quest 2, which is a step in the right direction. FB acquired CTRL-Labs, whose CTRL-kit allows for some elementary BCI, and Valve is investing heavily in BCI; Gabe Newell can't stop talking about how excited he is about it. And I'm sure Google and MS are trying as well.

But AR will need low-effort input before it replaces smartphones (unless it focuses only on high-read / low-write use cases as its killer apps). Hopefully Apple has an input system for its AR product that doesn't require gorilla arms. AR + smartphone, where the smartphone touchscreen acts as a trackpad or controller, would likely be a better system than any hand-wavy gesture or hand-tracking input.


The AR case could probably be solved with a ring-like controller accessory (ring in the jewelry sense of the word), with a little touch pad and a couple of buttons. Hopefully it could be done with a reasonable working time per battery charge.

I don't believe in hand tracking too much, for the first generation of AR headsets at least. People complained a lot about latency, and many use cases are simply impossible to reconstruct accurately with headset camera tracking. For example, changing direction in a laser-pointer mode by lazily shifting your controller between your thumb and your pointing finger is something neural hand tracking would be hard pressed to reproduce adequately.


Hand tracking is getting pretty good if you look at a HoloLens or some of the recent Quest 2 work. Trying to type with it is trash, but you get much more fidelity when manipulating something in 3D space than with any other input system.

The UX primitives are still getting worked out. Questions like "What is it like to scroll in 3D AR?", "What's the best way to input text?", "What gestures can be reliably tracked from a headset?", and many others are still open.

It will be interesting to see what shakes out. Maybe we'll end up with a lot of inter-connectivity with phones and PCs for better inputs.


As a Quest user I have to object. Even though it makes a very good first impression, pure hand tracking gets very annoying very quickly in actual use, as it still misinterprets gestures too often.

Eye tracking will be the future of "lazy" UI, but this is a hard one. Also, distinguishing actions, e.g. clicks or scrolls, is non-trivial.
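
A minimal sketch of one common approach, dwell-based selection: a gaze "click" fires only after the gaze stays within a small radius for a fixed time. The 0.3 s dwell and 2 degree radius below are illustrative assumptions, not values from any shipping eye tracker:

    # Hypothetical dwell-click detector over a stream of (t, x, y) gaze samples.
    # Thresholds are assumptions for illustration, not tuned values.
    import math

    DWELL_TIME = 0.3     # seconds the gaze must stay put to count as a "click"
    DWELL_RADIUS = 2.0   # degrees of visual angle

    class DwellClicker:
        def __init__(self):
            self.anchor = None   # (t, x, y) where the current dwell started

        def update(self, t, x, y):
            """Feed one gaze sample; returns (x, y) when a dwell-click fires."""
            if self.anchor is None:
                self.anchor = (t, x, y)
                return None
            t0, x0, y0 = self.anchor
            if math.hypot(x - x0, y - y0) > DWELL_RADIUS:
                self.anchor = (t, x, y)   # gaze moved away: restart the dwell
                return None
            if t - t0 >= DWELL_TIME:
                self.anchor = None        # fire once, then reset
                return (x0, y0)
            return None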


I don't think an eye input keyboard will ever be ergonomic for anything beyond 'lol'. Repetitive strain injury is bad enough in hands, I shudder to think how it would feel if you were using your eyes.


Yeah, not as a keyboard, but e.g. as a mouse? It would even make today's VR much better / less annoying, imo.


> One commonality between mouse+keyboard and smartphone is that small movements mostly in fingers and wrists are all that's needed for input

How about pairing the glasses with a smart watch? And the input comes from eye movement and touching the screen on the watch?

Voice can also act as "low effort input"


AKA gorilla arm syndrome.


I don't really want/need AR. I don't want a camera, I don't need object tracking, I do not need to augment what is in front of me.

What I would buy today is glasses that allowed an overlay. Give me a screen and an API and let me tether it to my phone and push pixels and I'd buy that immediately.
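
For what it's worth, a "push pixels over the tether" API wouldn't need to be complicated. A minimal sketch, with the protocol, class name, host, and frame format entirely made up (no such standard exists today):

    # Hypothetical tethered-overlay client: the phone renders, the glasses display.
    # Everything here (host, port, frame format) is an invented example.
    import socket
    import struct

    class GlassesOverlay:
        def __init__(self, host="192.168.2.1", port=9100, width=640, height=400):
            self.sock = socket.create_connection((host, port))
            self.width, self.height = width, height

        def push_frame(self, rgba_bytes: bytes) -> None:
            # Header: width, height, payload length -- then raw RGBA pixels.
            assert len(rgba_bytes) == self.width * self.height * 4
            header = struct.pack("!III", self.width, self.height, len(rgba_bytes))
            self.sock.sendall(header + rgba_bytes)

    # Usage: render notifications into a 640x400 RGBA buffer on the phone, push it.
    # overlay = GlassesOverlay()
    # overlay.push_frame(rendered_buffer)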


No deal: if you can't see unblockable ads, you don't get a $1000 pair of high-tech glasses.


I would buy AR glasses that censored ads though :)


> What I would buy today is glasses that allowed an overlay. Give me a screen and an API and let me tether it to my phone and push pixels and I'd buy that immediately.

An overlay was the google glass promise: https://www.youtube.com/watch?v=ErpNpR3XYUw


We've had the technology to deliver glasses with a screen overlay at reasonable cost and in an unobtrusive, lightweight form factor for a few years now. No idea why there's nothing like it on the market. I wonder if it's something that could be thrown together from off-the-shelf components at this point?


Because head-locked HUDs are terrible in real life. You really do want the camera for motion tracking and world-pinned objects.


That's only true if you want to display information relative to the world. If you only want to display information relative to the user, then you don't need those things. E.g. displaying map directions that align with world objects requires world-pinning; displaying notifications that are relative to the user, regardless of where they're looking, doesn't.

There's a strong argument in favour of 'version 0' of a HUD product being a simple dumb overlay with no fancy stuff, simply to get a high-resolution HUD product working. Other sensor inputs could be added later.
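
To make the distinction concrete, here's a minimal sketch of the pose math (plain matrices, not any particular AR SDK): a head-locked element is defined in view space and ignores head pose; a world-pinned element is defined in world space and is transformed by the inverse head pose every frame.

    # Illustrative only: head-locked vs world-pinned content.
    import numpy as np

    def yaw(deg):
        """4x4 rotation about the vertical axis (the user turning their head)."""
        r = np.radians(deg)
        c, s = np.cos(r), np.sin(r)
        m = np.eye(4)
        m[0, 0], m[0, 2], m[2, 0], m[2, 2] = c, s, -s, c
        return m

    def view_pos(head_pose, p_world):
        """Where a world-space point lands in view space for a given head pose."""
        return (np.linalg.inv(head_pose) @ np.append(p_world, 1.0))[:3]

    notification_view = np.array([0.2, 0.1, -1.0])  # head-locked: fixed in view space
    waypoint_world = np.array([0.0, 0.0, -5.0])     # world-pinned: fixed in world space

    for head in (yaw(0), yaw(45)):  # the user turns their head 45 degrees
        # The notification never moves on screen; the waypoint slides across the view.
        print("notification:", notification_view, "waypoint:", view_pos(head, waypoint_world))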


No it's not. Head-locked HUDs are simply very uncomfortable. You want to move your head to look at them and can't. It just doesn't work.


You can get that now with NReal and a few other glasses.

...But you really do want the camera and tracking so you have input and overlays that aren't head-locked. Head-locked overlays are very hard to read.


I feel very skeptical about this assertion. Today the "smart overlay" display market is enterprise-only: helping people understand a complex machine with an overlay explanation.

Doesn't this mean people use it only when they are "forced" to do it?


Given the price of these systems and the current value proposition, I think it's more likely that they only do it when someone else is paying for it. Think business class. You wouldn't exactly say you were "forced" into business class.


Also, using an AR device in a business context means you probably don't have to deal with the downsides like limited battery life.


Literally everybody believes that if AR glasses are possible, they will replace smartphones. However, the display technology to actually make them useful is absent, as is the input system. Is there anything on the horizon that actually fixes these two things?


I can't really see AR Glasses replacing Smartphones even if the tech is there.

Sure it's nice to have a display you can always see while keeping your hands free, but even when it perfectly recognizes what you say voice input is still slower and clunkier than using your hands. Gaze tracking is similarly clunky compared to multitouch.

Not to mention the privacy aspect (would you want everyone around you to know exactly what you're typing/searching for/interacting with?)

You could have some kind of physical input device you keep in your pocket - but at that point you're carrying around more stuff than if you just had a smart phone. (Edit: sensors on your fingers so you can type on an imaginary keyboard?)

Basically it just doesn't seem to make any sense to me, but hey maybe I'm missing something. Useful tech for certain use cases sure, but I don't think those use cases really overlap enough with what we use smartphones for to replace them.


Maybe you could wear some sleek data glove, with gesture recognition triggered by twitching your fingers 'just so', as described in 'Ancillary Justice' by Ann Leckie.

[1] https://www.reddit.com/r/sciencefiction/comments/2ige0f/im_a...

Or type on some sort of virtual

[2] https://en.wikipedia.org/wiki/Chorded_keyboard (Octima) combined with something like

[3] https://www.openstenoproject.org/plover/

combined with something like

[4] https://en.wikipedia.org/wiki/T9_(predictive_text)

or by 'grunting/subvocalizing' into some tiny pearl fitted to your neck/larynx (from the outside OFC).
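
For reference, a chorded keyboard maps combinations of keys pressed together to characters, so a handful of buttons on a glove or ring can cover the whole alphabet. A toy sketch (the chord table is invented, not Octima's or Plover's actual layout):

    # Toy chord decoder: each "keystroke" is the set of keys pressed simultaneously.
    CHORDS = {
        frozenset({"a"}): "e",
        frozenset({"s"}): "t",
        frozenset({"a", "s"}): "h",
        frozenset({"a", "d"}): "o",
        frozenset({"a", "s", "d"}): "n",
        frozenset({"a", "s", "d", "f"}): " ",
    }

    def decode(chord_sequence):
        """Unknown chords are simply skipped."""
        return "".join(CHORDS.get(frozenset(c), "") for c in chord_sequence)

    print(decode([{"a", "s"}, {"a"}, {"s"}]))  # -> "het"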


I don't require input devices to interact with the world around me (generally). Why should AR be that different?


Does the world around you include lots of text projected onto your retinas, with no physical support?


I guess they will have to use voice input in the next few years. But what if some brain-machine interface like Neuralink actually gets to general availability?


One downside of voice interfaces is they are bad for privacy. How do you enter your password, credit card details, or social security number?

Voice is also much more distracting than tapping thumbs, as anyone who works in an open floor plan office can attest.


Snapchat already has it: spectacles.com. Better approaches are on the way, kura.tech and the Magic Leap 2.0, and of course the technology in the article.

The real challenge with XR is the vergence-accommodation conflict; it's solved in AR but not VR [0].

AR input software is a commodity, open-source versions exist. They use gesture detection.

[0] https://www.mdpi.com/2076-3417/9/15/3147


What is the vergence-accommodation conflict in AR? Isn't AR just a 2D plane with a camera image superimposed with virtual objects (e.g. a tablet with an AR app)? Or do you mean it is solved because it doesn't occur in the first place?


It occurs when you have glasses with 3D objects produced by either stereoscopic, multi-planar or other means overlaid onto the real world.
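
A quick worked example of the mismatch (numbers are purely illustrative): the eyes converge on the virtual object's apparent distance, but the display optics focus at a fixed plane, and the conflict is usually quantified as the difference in dioptres.

    # Illustrative numbers only; distances in metres, 1 dioptre = 1 / distance.
    focal_plane = 2.0       # assumed fixed focal distance of the display optics
    virtual_object = 0.5    # apparent (stereoscopic) distance of a rendered object

    accommodation = 1.0 / focal_plane     # where the eyes must focus: 0.5 D
    vergence = 1.0 / virtual_object       # where the eyes converge: 2.0 D

    print(f"vergence-accommodation conflict: {abs(vergence - accommodation):.1f} D")  # 1.5 D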


Glasses that show objects in 3D overlaid onto the real world sounds like XR to me.

But if you consider this to be AR, how is it different to XR and how can it be solved for AR but not for XR?


XR includes AR and VR


Well, there's the display technology described in the article....


One social drawback of the smartphone is that I usually feel guilty checking notifications while I'm having a conversation with someone. Ever so occasionally I'll miss an important contact and not get back to someone in a timely manner because I'm having a real face-to-face conversation.

Smart-glasses (not even AR) could allow me the guilty pleasure of checking that notification without leaving the conversation. Though I suspect that someone could notice my gaze even briefly leaving eye contact.

But wasn't that among the very things that gave people the creeps about Google Glass? "Are you recording me?" "Are you engaging with a social network while we are spending time together?"


> But wasn't that among the very things that gave people the creeps about Google Glass?

What gives me the creeps is the notion that we're expected to be constantly tethered to our digital communicators and supposed to synchronously spend energy processing/answering every single inane message thrown our way.


>What gives me the creeps is the notion that we're expected to be constantly tethered to our digital communicators and supposed to synchronously spend energy processing/answering every single inane message thrown our way.

Never a free moment. Honestly sounds like slavery.


It will be fine. They will just paint over your eyes so it appears that you are maintaining eye contact. ‘Smoothing over social graces’ may be a critically important job to be done by the AR system.


They will just paint over your eyes so it appears that you are maintaining eye contact.

Already available from Microsoft.[1]

[1] https://www.pcmag.com/news/microsoft-uses-ai-to-make-our-eye...


> “We see a decline in phone innovation”

This definitely feels true (though an attempt at quantifying this statement would be an interesting read), but it is not specifically supportive of the case for AR as a successor platform. In fact, understanding the causes of this "lack of innovation" would serve AR enthusiasts well.

Imho, the mobile platform suffers from prolonged stagnation not because it has exhausted its intrinsic potential but due to prevailing business models: i) the user as data product rather than client, with the associated collapse of trust and limited options for value creation; ii) the oligopolistic market structure that motivates defensive choices (preserve existing "cash cows" rather than expand the market, with the associated risk of new entrants).

The range of high-value applications one could envisage running on these billions of interconnected, portable "supercomputers" is stunning. The fact they are not happening is one of the biggest market failures ever. As we push beyond sustainability boundaries and desperately need information technology to help reshape our behaviors towards more balanced, just, and sustainable societies, it may turn out that it is also one of the gravest political and social failures.

Who knows what incredible AR platforms could emerge in a moderately remote future. But in the near future, the toxic digital swamp of mobile "cr-apps" does not leave much room for new growth.


In my experience, using a smartphone is cumbersome or unwise when I shouldn't be looking at the screen in the first place. Like when driving, for example. And in cases where I could maybe put the phone somewhere, my hands may be covered in dough, and I can't touch the thing. So I'm not convinced seeing the output is the problem that needs solving.

Now, a good audio interface would be great IMO. Most of the stuff I want on the web is text, really, so if I could just ask the phone to read me articles, expand on topics I want to know more about, make reservations etc. etc., that'd be much more useful to me than having a screen mounted to my glasses. Yea, we have some voice controls now, but in my experience, they're not useful enough to bother with.

Edit: some gesture recognition with a device like the Myo would probably augment an audio interface nicely.


I have high expectations for advanced AR glasses in the coming years, potentially even going as far as replacing the heavy VR headsets.

But as far as getting rid of smartphones goes, I feel you need an input mechanism. Maybe it can be done with hand tracking or lightweight tracking gloves.


I suspect they will replace monitors first. You can still use keyboard and mouse for input, get a lot of screen real estate without clunky monitors or being tied to a desk / chair, and have to worry less about portability and battery life.


ImmersedVR on Oculus Quest 2. That's how I work every day.


I tried a similar setup but the resolution of the Quest 2 is still too low for me.


What work do you do?

Edit: Realized you posted your profile. Did you find it not high enough resolution for you to code in, or do you have work that requires higher resolution?


I code. The Q2 resolution was too low for me to do that comfortably.


For a portable Linux VR headset that allows this, see https://simulavr.com (release expected around early/mid next year).


AR glasses will indeed reach the masses soon, but I don't look forward to it. Millions of people walking around with glasses that can facially identify passers-by... it's going to be the privacy-abuse singularity.


Given the reception of Google Glass I hope it doesn't come to that. But combine authoritarian ambition with money and ingenuity and I guess one day we will all ... uh, I for one welcome our new overlords!


I was actually thinking about millions of unambitious, but bored and nosey, AR-glasses users. It's not so hard to download archives of leaked and scraped data. We may soon live in a world where showing your face in public is the same as giving everyone your life story.


Yeah, Google Glass's first go at it wasn't accepted socially. People were so very suspicious of it.

But I was implying that, between China wanting the authoritarian way and FAANG making the tech their way, all the money (and hence the smart people) is on their side... eventually people will have them and, as you say, it's all she wrote from there.


I'm extremely bullish on AR/VR, neurotech, and smart wearables over the next decade or two, but this article seems oddly focused on AR glasses replacing phones. We're definitely seeing the phone separate into pieces closer to the body (wireless earbuds and watches), with the phone serving as a hub. The Apple Watch is almost where the original iPhone was in terms of technical capability (and has already passed some of the iPods, minus a headphone jack), but it lacks the large screen, large battery, and large compute power of modern phones. This should sound familiar: this was the main difference between the iPhone and laptops at the time.

Compute power and battery sizes can maybe reach good-enough levels to replace the phone's role as a hub for most people within the next decade, but it's not clear anything can be done about the screen without expanding along or beyond the wrist. This, to me, is the value proposition of an AR HUD (which, maybe, absorbs or emerges from the growing smart earbud/headphone category).

When looking at it as a compute element, I don't know if putting the brains on one's head (ironically) will be compelling vs the flexibility of storing it in a pocket or purse or strapped to some other body part with better ability to bear the extra load. I doubt AR glasses will gain brains fast enough to displace phones before something else takes its place. Strictly viewed as a device for viewing/hearing things controlled by some other device the user owns, the biggest challenge for glasses replacing the phone screen will be competition from all the other displays we're putting everywhere we look (living rooms, cars, pockets, etc.). It's possible AR wins by some benefit of economics (fewer screens!), but it's also possible that it loses because of inertia.

I have a lot more to say on this, but half a century from now, I think we'll certainly look back at the phone as a weird transitional device with a lot of odd UX compromises, but as it was (and still is) for laptops, desktops, and mainframes before them, the phone will probably not go away in some niches for decades (if ever). It's still a huge question to me whether over-eye AR will be the leading form factor that replaces them, and I'm unconvinced that the mass market future is AR glasses alone.


but this article seems oddly focused on AR glasses replacing phones

Probably because that's what Zuckerberg has been talking about. You will be connected to Facebook for all your waking hours.

I assume everyone here has already seen Hyperreality.[1]

[1] https://youtu.be/YJg02ivYzSs


I just don’t see AR taking off with the current state of things. The minute the ad industry gets their hands on it, it’ll be as unusable as a modern news website, but it’ll block your vision while doing so. This will not drive consumer adoption. We’ll need a sea change in application design—the “killer app,” if you will—to get away from screens, but nobody has introduced it yet.


Inclined to agree, though I have no clue about the timeline. As others here have said, we need low-effort input in addition to the AR. Something that tracks finger movements would be good.


Right. Gorilla arms aren’t much of an issue if you can input with your arms at rest. Repetitive use injuries would likely increase.


Anyone have a good forum or other source for tracking AR developments?



