Not only swiveling your head around, but doing it with a couple pounds strapped to it. People's necks are going to be swole.
That being said, I've always wanted a wearable monitor so I can lie in bed (or stand, or lie in my hammock, or just have some variety). The chair is bad, and I've spent way too many years (literally) in it. I need options.
I'm a terminal nerd, though, so I don't care too much about all the 4k etc.
The ops folks at a company I used to work for tried a VR workspace that put all of their graphs and terminals in a big sphere around you. With 2K screens, the text was too pixelated to read comfortably. 4K should improve that somewhat, but I'm not sure it will be enough for a great text-based workflow.
Even at 4K per eye, if you imagine a screen at a typical viewing distance, the effective "dot pitch" of that virtual display is going to be massively worse than that of a good-quality, high-end monitor sitting on your desk.
We've been waiting something like ten years for that to change, since the Oculus dev-kit days, and it's still not solved today. Advances in pixel density in this space have been incredibly slow.
I think it could be a very long time before a headset can simulate a really great display well enough for me, but others' mileage may vary.
Even with "foveated rendering", the peak dot pitch (the highest pixel density it can accomplish) simply isn't going to be good enough for me - it can't be any sharper than the dot pitch of the panel in front of the eye.
A 5K iMac has 14.7 million pixels - the pixel density needed for VR to do this as well as a "real" display could be pretty massive.
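For a rough sense of the gap, here's a back-of-envelope sketch in Python - the 60 cm viewing distance, ~100 degree horizontal FOV, and ~3840 horizontal pixels per eye are my own assumptions, not spec-sheet numbers:

    import math

    # Back-of-envelope pixels-per-degree (PPD) comparison.
    # Assumed: 60 cm viewing distance, ~100 degree horizontal FOV,
    # and ~3840 horizontal pixels per eye for the headset.

    def monitor_ppd(h_pixels, width_cm, distance_cm):
        # Angular width of the panel as seen from the viewer,
        # then pixels divided by degrees.
        half_angle = math.degrees(math.atan((width_cm / 2) / distance_cm))
        return h_pixels / (2 * half_angle)

    # 27" 5K iMac: 5120 px across a ~59.7 cm wide panel.
    print(f"5K iMac at desk: {monitor_ppd(5120, 59.7, 60):.0f} PPD")
    # Headset: per-eye pixels smeared across the whole FOV.
    print(f"Headset:         {3840 / 100:.0f} PPD")
    # -> roughly 97 vs 38 PPD, i.e. the desk monitor is ~2.5x denser.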
I agree completely. A few months ago, I purchased a Meta Quest Pro. Relative to the Quest 2, the Pro’s resolution blew me away. And it’s still not even close to usable for real work on virtual monitors.
It's not 4K, though. They're not giving a lot of information, but "23M pixels" for two eyes is 11.5M pixels per eye. 4K is 8.2M, so this is 40% more pixels than 4K.
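Spelling that arithmetic out (taking 4K as 3840x2160):

    # "23M pixels" across two eyes vs. a single 4K frame.
    per_eye = 23e6 / 2            # 11.5M pixels per eye
    four_k = 3840 * 2160          # ~8.29M pixels
    print(f"{per_eye / four_k:.0%} of 4K")  # -> ~139%, i.e. ~40% more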
11.5M per eye is still far short of what would be needed to approximate the pixel pitch of many of Apple's "Retina" displays at a typical desk viewing distance, FWIW. This is a really hard problem with the tech we have today.
Whether it's 8M or 11M or even 15M pixels isn't the point with regard to using it to replace desktop monitors - the point is that the density necessary to compete with excellent real-life physical displays is really high.
Your VR monitor only ever really uses a subset of the total pixel count - it still has to spend many of those pixels to render the room around the display(s) too.
The display system boasts an impressive resolution, with 23 million pixels spread across two panels, surpassing the pixel count of a typical 4K TV for each eye.
That's still enormously lower than the dot pitch of a good 4/5/6K monitor in meatspace/real life today - remember, a virtual monitor only ever uses a subset of the total pixels in a VR headset, which is why the pixel count has to be sky-high to compete with real life.
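To put a rough number on "only a subset" - the FOV and the angular size of the virtual monitor below are guesses on my part:

    # Rough fraction of a headset's per-eye pixels that a single
    # virtual monitor actually gets. Assumed: ~100 x 90 degree
    # per-eye FOV, and a virtual monitor subtending ~50 x 30 degrees
    # (roughly a 27" display at arm's length).
    fov_h, fov_v = 100, 90
    mon_h, mon_v = 50, 30
    fraction = (mon_h / fov_h) * (mon_v / fov_v)
    pixels_per_eye = 11.5e6
    print(f"{fraction:.0%} of the panel -> "
          f"{fraction * pixels_per_eye / 1e6:.1f}M px for the monitor")
    # -> ~17%, so of 11.5M pixels per eye only ~1.9M render the
    # virtual screen itself; the rest draw the room around it.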
Yeah, with VR headsets you generally only get to count the pixels per eye, since stereo parallax means each eye only has that many degrees of freedom to produce a color.
Was this before the advent of VR headsets that do eye-tracking + foveated rendering? With the tech as it is these days, you're not looking at a rectangle of equally spaced little dots; almost all of "the pixels" are right in front of your pupil, showing you in detail whatever your pupil is trying to focus on.
For what it's worth, this was with an HTC Vive of some kind. However, the screen pixel densities don't change when you do foveated rendering; it's more of a performance trick - the GPU focuses most of its compute power on what you are looking at.
> the screen pixel densities don't change when you do foveated rendering
That's the limited kind of foveated rendering, yes.
Apple has a system of lenses on a gimbal inside this thing. Which is precisely what's required to do the (so-far hypothetical) "full" kind of foveated rendering — where you bend the light coming in from a regular-grid-of-pixels panel, to "pull in" 90% of the panel's pixels to where your pupil is, while "stretching out" the last 10% to fill your peripheral vision. Which gives you, perceptually, an irregular grid of pixels, where pixels close to the edge of the screen are very large, while pixels in the center of the screen are very small.
The downside to this technique is that, given the mechanical nature of "lenses on a gimbal", they would take a moment to respond to eye-tracking, so you wouldn't be able to resolve full textual detail right away after quickly moving your eyes. Everything would first re-paint with just "virtual" foveated rendering from the eye-tracking update, then gradually re-paint over a few hundred more frames in the time it takes the gimbal to get the center of the lens to where your pupil now is.
(Alternately, given that they mentioned that the pixels here are 1/8th the size in each dimension, they could have actually created a panel that is dense with tiny pixels in the center, and then sparse with fatter pixels around the edges. They did mention that the panel is "custom Apple silicon", after all. If they did this, they wouldn't have to move the lens, nor even the panel; they could just use a DLP mirror-array to re-orient the light of the chip to your eye, where the system-of-lenses exists to correct for the spherical aberration due to the reflected rays not coming in parallel to one-another.)
I'm not sure whether Apple have actually done this, mind you. I'm guessing they actually haven't, since if they had, they'd totally have bragged about it.
I'm guessing from this comment that you may not know much about optics or building hardware. Both of the solutions you have proposed here are incredibly bulky today, and would not fit in that form-factor.
> The custom micro‑OLED display system features 23 million pixels, delivering stunning resolution and colors. And a specially designed three‑element lens creates the feeling of a display that’s everywhere you look
They have advertised that there are 3 lenses per eye, which is about enough to magnify the screens and make them have a circular profile while correcting most distortion. That's it - no mention of gimbals or anything optically crazy.
I'm thinking there is confusion with the system used to set the PD (distance between eyes). Of course there are not many details, but it does look like there's a motorized system to move the optics and screens outwards to match the PD of the user.
I think the key would be an interface design that's a step beyond "a sphere of virtual monitors", where zooming is not just magnification but a nuanced, responsive reallocation of both visual space and contextual information relevant to the specific domain.
Therein lies another problem with workspace VR: you still need a keyboard if you're doing any meaningful typing. So you still need a desk, or some kind of ergonomic platform for a lounge chair.
It is a great alternative for gaming in that sense, however. Being able to game while standing up and moving around is great.
With screens detached from the input device, it should be perfectly possible to make a good keyboard + trackpad combo for use on your lap, on just about any chair/bed/beach.
With such a big terminal screen you might even recreate what a 720p screen can do, with 256 colors!
I never really understood why we like to hack character arrays into pixels when... we can just manipulate the pixels themselves? I mean, I like and actually prefer the CLI interface of many programs, but I can't ever imagine replacing a good IDE with vim.
I'm not mad about your IDE or anything. I've used some that I could like okay, with vim keystrokes. But vim lives where I live, in the terminal. I can't run your IDE in my environment. I can run vim anywhere.
I use a 32" QHD for a more limited but similar effect. With 32" 4K the text was too small, so the extra resolution just complicated everything, but 32" QHD and a tiling window manager is awesome - I don't use a second monitor anymore, after years of doing so.
I am probably an edge case, as I use a tiling WM on Linux; there is little UI to be scaled. The only metric I worry about is the maximum amount of text at a size I personally find readable. I could change the font sizes on a 4K monitor, but websites are the only non-text UI I interact with, and they don't care about your OS settings. Zooming is hit or miss on whether it breaks the layout. I don't doubt macOS would be better in general, but for me a 32" QHD is plug and play: most websites work well, and there's no settings faff or zooming.
It doesn't work great: elements are comically too big on 32" 4K, or just too big on 27" 4K; you need to scale it to 1080p, but then it's too small. macOS is made for 27" 5K monitors at high-DPI (Retina) resolutions, or for non-high-DPI 27" 2560x1440. The only high-DPI 4K screen that works great OOB is the 21.5" 4K Apple display.
32" 4K feels like the sweet spot now; 32" 8K would be a good future upgrade, but we need DisplayPort and HDMI to catch up. 120Hz is very nice for desktop usage, as is HDR. Now that my main rig is a 55" 4K 120Hz HDR OLED, most other monitors look bad. 14" is still the best MBP size, as sitting closer to the high-PPI screen works well to have text take up about the same amount of my FOV. 27" feels small, especially at 16:9. 16:10 was awesome, and I'm glad that it and 4:3 are coming back. 16:9 was made for watching movies; 16:10 lets 16:9 content fit with a menu bar + browser top bar + bottom bar, or just gives extra vertical space. Those ultrawide monitors, especially the curved ones, are just gimmicky. Just give me a gigantic 16:10 or 4:3 rectangle, tyvm.
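For anyone who wants the underlying numbers, the pixel densities are easy to compute (standard published resolutions; I'm assuming the 21.5" panel is the 4096x2304 LG UltraFine):

    import math

    # PPI = diagonal pixel count / diagonal size in inches.
    def ppi(h, v, diag_in):
        return math.hypot(h, v) / diag_in

    for name, h, v, d in [
        ('27" 5K',   5120, 2880, 27),
        ('27" QHD',  2560, 1440, 27),
        ('27" 4K',   3840, 2160, 27),
        ('32" 4K',   3840, 2160, 32),
        ('32" QHD',  2560, 1440, 32),
        ('21.5" 4K', 4096, 2304, 21.5),
    ]:
        print(f'{name}: {ppi(h, v, d):.0f} PPI')
    # 27" 5K lands at ~218 PPI, exactly 2x the ~109 PPI of 27" QHD,
    # which is why macOS 2x scaling maps onto it so cleanly; 4K at
    # 27" (~163) or 32" (~138) falls awkwardly in between.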