
Hmmm... Kinect creates a depth map. Not sure why they didn't use the same technology.



It does; you can see it in the first video. It appears to be considerably more than a depth map: it actually extracts geometry from it, which fits with what the SDK news has suggested.
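A minimal sketch of what "extracting geometry from a depth map" means at the lowest level: back-projecting each depth pixel through a pinhole camera model to get a 3D point cloud, which is the raw input a mesh reconstruction would build on. The function name and intrinsics (`fx`, `fy`, `cx`, `cy`) are illustrative assumptions, not part of any real Kinect SDK.

```python
# Hypothetical sketch: turn a depth map into a 3D point cloud via a
# pinhole camera model. Intrinsics here are made up for illustration.

def depth_to_points(depth, fx, fy, cx, cy):
    """Convert a 2D depth map (meters) into a list of (x, y, z) points.

    Pixels with zero or negative depth (no sensor reading) are skipped.
    """
    points = []
    for v, row in enumerate(depth):
        for u, z in enumerate(row):
            if z <= 0:
                continue  # sensor returned no depth at this pixel
            # Back-project pixel (u, v) at depth z into camera space.
            x = (u - cx) * z / fx
            y = (v - cy) * z / fy
            points.append((x, y, z))
    return points

# Tiny 2x2 depth map: 1 m everywhere except one missing pixel.
cloud = depth_to_points([[1.0, 1.0], [0.0, 1.0]],
                        fx=1.0, fy=1.0, cx=0.5, cy=0.5)
# One of the four pixels had no reading, so three points come back.
```

A real pipeline would then fuse many such clouds over time (e.g. via a truncated signed distance field) to produce the watertight room mesh the comments below are discussing.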

Pure speculation, but I would bet that it generates the room geometry at startup and then doesn't change it later. The device has limited processing power and needs to hit a high frame rate, so doing that makes sense.

A person walking into the scene wouldn't be accounted for in that case. It also depends on the environment scans and the quality of the geometry reconstruction, so something like a gap between objects that was never seen from a good angle could easily trip it up.


This is incorrect -- it does constantly rescan the room and will detect someone moving through the environment. It doesn't do that very quickly, though, and it can struggle to capture the geometry of a moving person accurately.




