Argument: A modern VR stack is much more complex, and does much more, than just displaying images on two screens.

Counterargument: The 16 things that happen other than just displaying images on the screen aren't relevant, have been done before, or have equivalent complexity to other systems.

Well OK. I just can't argue with that.

"A modern CPU SOC is no more than a souped up 6502."

That's true, if you ignore the integrated video, complex cache management, integration of networking/sound/northbridge/southbridge, secure enclaves, and significantly higher performance characteristics that result in subtle changes driving unexpected complexity. All of those things have been done elsewhere.

So if that's your perspective then we'll just have to agree to disagree.

Though I will point out that all of those non-monitor components you described also require custom drivers, which in turn require their code to be signed, and that signing requirement was ultimately what the OP took issue with. I'm frankly surprised that, after acknowledging the amount of re-implementation VR requires across numerous non-monitor disciplines, with the data fused in 11 ms for a total motion-to-photon latency of 20 ms or less, you still feel this is "common and straightforward."
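To put those numbers in context, the arithmetic is roughly this (my own back-of-envelope in Python, not any official budget):

    # At a 90 Hz refresh, one frame interval is ~11.1 ms, which is where the
    # ~11 ms window for fusing sensor data and rendering comes from. Two full
    # intervals would already blow past 20 ms, which is why runtimes lean on
    # tricks like late latching and timewarp to keep motion-to-photon low.
    refresh_hz = 90
    frame_ms = 1000.0 / refresh_hz
    print("one frame interval: %.1f ms" % frame_ms)         # ~11.1 ms
    print("two frame intervals: %.1f ms" % (2 * frame_ms))  # ~22.2 ms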

But OK. I don't know your coding skill level, so this may be true.

And per this point:

> interpolation sounds a bit more complicated, I'll grant you that, but still pretty doable.

Valve has still not released an equivalent to Oculus's Asynchronous Spacewarp. If you feel it is "pretty doable", you would do the SteamVR community a huge service by implementing it and providing the code to Valve.

See https://developer.oculus.com/blog/asynchronous-spacewarp/ for details.
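
For anyone curious, the core idea, stripped of everything that actually makes it hard, is extrapolating a synthetic frame by pushing the last rendered frame forward along estimated per-pixel motion vectors. Here is a toy Python/NumPy sketch of just that warp step, with the motion field assumed to be handed to you (estimating it robustly is the part ASW really solves):

    import numpy as np

    def extrapolate_frame(frame, motion, scale=0.5):
        # Toy forward warp: push each pixel of the last real frame along its
        # estimated motion vector, scaled by how far into the synthesized
        # frame interval we are. frame: HxWx3, motion: HxWx2 (dx, dy) pixels.
        h, w = frame.shape[:2]
        ys, xs = np.mgrid[0:h, 0:w]
        dst_x = np.clip(np.rint(xs + scale * motion[..., 0]).astype(int), 0, w - 1)
        dst_y = np.clip(np.rint(ys + scale * motion[..., 1]).astype(int), 0, h - 1)
        out = np.zeros_like(frame)
        out[dst_y, dst_x] = frame[ys, xs]  # forward splat; disocclusions stay black
        return out

The hard parts are estimating that motion field from color (and depth) in real time, filling the disocclusions sensibly, and doing all of it asynchronously on the GPU without stealing time from the app, which is exactly why it hasn't been trivially cloned.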



I would like to apologize for my previous post; I feel it is unnecessarily long and a bit inaccurate/exaggerated.

Let me be clear: I pretty much agree with everything you said. It was only your original statement that felt like a bit of a stretch to me:

> The "monitor you wear on your face" trope is simply inaccurate, and essentially a misunderstanding of the state of VR today

After reading a bit more into it, I feel that Oculus took the correct software approach to bring up its hardware on Windows. What happened appears to have been more of an oversight, one that most people probably could have made themselves.

Custom (in-kernel) drivers are indeed probably a necessity to achieve the best possible experience, with the lowest attainable latency. However, they are not actually needed for basic support [1], which is where I think our misunderstanding comes from.

I realize that a tremendous amount of work has gone into making VR as realistic as it can get, and I am not trying to diminish that at all, which is what I think you wanted to point out with your original remark.

As much as I would like to have a go at implementing that kind of feature (and experiment with VR headsets in general), I don't really have the hardware nor the time to do so, unfortunately :)

--

[1] I don't know the latency involved with userspace USB libraries, but it seems to be low enough that Valve is using them to support the Vive, at least on Linux (and for now).
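
For anyone who does have the hardware, something like this (hidapi's Python bindings; the vendor/product IDs below are placeholders to replace with the actual device's) would give a rough idea of report-to-report jitter from userspace:

    import time
    import hid  # hidapi bindings: pip install hidapi

    VENDOR_ID, PRODUCT_ID = 0x28de, 0x2000  # placeholders, not verified

    dev = hid.device()
    dev.open(VENDOR_ID, PRODUCT_ID)
    dev.set_nonblocking(False)

    # Read a few hundred HID reports and measure the gaps between them to
    # get a feel for userspace polling latency and jitter.
    gaps, last = [], time.perf_counter()
    for _ in range(500):
        dev.read(64)                 # blocks until the next report arrives
        now = time.perf_counter()
        gaps.append(now - last)
        last = now
    dev.close()
    print("mean gap: %.3f ms, max gap: %.3f ms"
          % (1000 * sum(gaps) / len(gaps), 1000 * max(gaps)))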


Thanks, no apologies needed. I didn't mean to come off snarky either. And I obviously am not averse to unnecessarily long messages.

As an aside, Valve's tracking solution is much less USB-intensive than Oculus's.

In Valve's Lighthouse system, sensors on the HMD and controllers use the angle of a laser sweep to calculate their absolute position in the room, providing the absolute reference needed to correct IMU dead-reckoning drift. As a result, the only data sent over USB is the stream of sensor data and positions (I believe sensor fusion still occurs in the SDK, not on the device).
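
Concretely, the per-sensor math is small: the base station emits a sync flash, then sweeps a laser across the room at a known rate, and the time between the flash and the laser crossing a photodiode encodes an angle. A rough Python sketch, with the 60 Hz sweep rate and 48 MHz timestamp clock taken as assumptions about the first-gen hardware, and a deliberately simplified ray model:

    import math

    SWEEP_HZ = 60.0               # assumed: each axis sweeps the room 60x/s
    TICKS_PER_SEC = 48_000_000.0  # assumed: 48 MHz sensor timestamp clock
    TICKS_PER_SWEEP = TICKS_PER_SEC / SWEEP_HZ

    def sweep_angle(ticks_since_sync):
        # Rotor angle when the laser crossed the photodiode, measured from
        # the sync flash; a full rotation maps to 2*pi radians.
        return 2.0 * math.pi * ticks_since_sync / TICKS_PER_SWEEP

    def ray_from_angles(az, el):
        # Simplified model: re-center so mid-sweep (pi/2) points straight out
        # of the base station, then treat the two angles like pinhole image
        # coordinates. The real solver also corrects for rotor tilt and
        # per-unit factory calibration.
        x = math.tan(az - math.pi / 2.0)
        y = math.tan(el - math.pi / 2.0)
        n = math.sqrt(x * x + y * y + 1.0)
        return (x / n, y / n, 1.0 / n)

With several photodiodes at known positions on the HMD, those rays pin down the pose, and the SDK fuses that with the IMU.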

Oculus's Constellation system uses IR cameras synchronized to an IR LED array on the HMD and controllers. The entire 1080p (or 720p, if used over USB 2) video stream from each camera (2 to 4 cameras, depending on configuration) is sent via USB to the PC, in addition to the IMU data coming from the controllers. The SDK performs image processing to recognize the LEDs in the images, triangulate their positions, perform sensor fusion, and produce an absolute position.
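
The PC-side work is, at its core, classic computer vision: find the bright LED blobs in each IR frame, figure out which LED is which, and triangulate. A stripped-down NumPy/SciPy sketch of just the blob and triangulation steps (this is not Oculus's actual code, which isn't public):

    import numpy as np
    from scipy import ndimage

    def led_centroids(ir_frame, threshold=200):
        # Bright spots in the IR frame are the LEDs: label connected regions
        # above the threshold and return their centroids (row, col). The real
        # SDK also decodes each LED's blink pattern to identify it.
        mask = ir_frame >= threshold
        labels, n = ndimage.label(mask)
        return ndimage.center_of_mass(mask, labels, range(1, n + 1))

    def triangulate(P1, P2, uv1, uv2):
        # Linear (DLT) triangulation of one matched LED seen by two cameras
        # with known 3x4 projection matrices P1 and P2.
        (u1, v1), (u2, v2) = uv1, uv2
        A = np.stack([u1 * P1[2] - P1[0],
                      v1 * P1[2] - P1[1],
                      u2 * P2[2] - P2[0],
                      v2 * P2[2] - P2[1]])
        X = np.linalg.svd(A)[2][-1]  # right singular vector for smallest value
        return X[:3] / X[3]          # homogeneous -> 3D point

From the triangulated LED positions (or a per-camera PnP solve) you get the headset pose, which then gets fused with the IMU data just as on the Valve side.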

The net result is roughly equivalent tracking between the two systems, but the USB and CPU overhead for the Rift is greater (it's estimated that 1%-2% of CPU time goes to image processing per sensor, though the Oculus SDK appears to have some performance advantages that allow equivalent app performance despite this overhead).

There is great debate over which is the more "advanced" solution. Lighthouse is wickedly clever, allowing a performant solution over larger tracking volumes with fewer cables and sensors.

Constellation is pretty brute-force, but it requires highly accurate image recognition algorithms that (some say) give Oculus a leg up in next-generation tracking with no external sensors (see the Santa Cruz prototype[1], a PC-free device that uses 4 cameras on the HMD and on-board image processing to determine absolute position using only real-world cues). It also opens the door to full-body tracking using similar outside-in sensors.

But overall, the Valve solution definitely lends itself to a Linux implementation better than Oculus's, simply due to the lower I/O requirements. It also helps that Valve has published the Lighthouse calculations (which are just basic math), while Oculus has kept its image recognition algorithms as trade secrets.

[1] https://arstechnica.com/gaming/2017/10/wireless-oculus-vr-gr...



