Show HN: fast visual-inertial odometry/SLAM for AR/VR/Robotics (github.com/ucla-vision)
102 points by xfei91 on Sept 4, 2019 | 22 comments


This is part of my research as a graduate student at UCLA Vision Lab. The SLAM system is Extended Kalman Filter (EKF) based, keeps features (landmarks) in the state, and jointly estimates the pose of the camera and the locations of the landmarks. It runs at 140 Hz on a PC and is much faster than (some, if not all) existing open-source VIO systems.
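
Roughly, the filter state looks something like this (a simplified sketch with made-up names, not the actual code in the repo):

    #include <Eigen/Dense>
    #include <vector>

    struct Landmark { Eigen::Vector3d position; };  // a tracked feature kept in the state

    struct FilterState {
      Eigen::Quaterniond orientation;        // body/camera orientation
      Eigen::Vector3d    position;           // body/camera position
      Eigen::Vector3d    velocity;
      Eigen::Vector3d    gyro_bias, accel_bias;
      std::vector<Landmark> landmarks;       // landmarks live in the state, so camera
                                             // pose and map are estimated jointly
      Eigen::MatrixXd    covariance;         // grows/shrinks as landmarks come and go
    };

    // Per frame: propagate the state with IMU samples (prediction), then run an
    // EKF measurement update with the tracked features.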


Awesome; this is great work!

One little nit, however: it's important to note that this is not an open-source VIO system. It's licensed for research purposes only; anything else requires a commercial license from UCLA.


Thanks for pointing that out. This is my first time doing "open source" (well, it seems it's not really open source by the modern definition). I'd like to use a more permissive license, but it's up to UCLA.


A middle ground might be to license as open source the code that is needed to replicate your scientific findings, while UCLA keeps the tooling required for commercial applications proprietary.


No worries, you're sharing the code, and that's the most important thing!


It is open source, but it is not free and open source [1].

[1] https://en.wikipedia.org/wiki/Free_and_open-source_software

EDIT: nevermind, I guess it's just source-available software.



"Open source" has always meant "source code is available" to most people. The need to differentiate between "open source software" and "free software" (in the Stallman sense) is the very reason that the term "free software" was coined.

The fact that the Open Source Initiative published a document controlling the use of a certification logo doesn't mean they own the term.


This shit is awesome. Thanks for sharing!


This is terrific, cheers!


My team and I worked with Eagle and his team from RealityCap back in 2015 on the monocular-SLAM iOS implementation of this for our AR application. Great people, and we were glad to see them get picked up by the RealSense team.

Glad to see they were able to push some of the code open source.


That looks awesome! I have a RealSense D435i and would love to test it out. I'll have an in-depth look at it later, but I'd love to share it in my mostly open-source list: https://github.com/msadowski/awesome-weekly-robotics


That will be great!


I am very interested in using this for one of my projects. I am just curious why you are using the D435: is it because it has an IMU on board? You are not using the depth information from the sensor, right? That would be important for my use case.


The original D435 does not have an IMU, but the D435i version does. We use it for our other projects, which require the dense depth. The SLAM system itself should work with only RGB and IMU after some calibration and parameter tuning.
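
By "calibration and parameter tuning" I mean roughly this kind of information; this is just an illustrative struct, not our actual config format:

    struct SensorParams {
      // camera intrinsics plus a distortion model
      double fx, fy, cx, cy;
      double distortion[4];
      // IMU noise parameters, usually from the datasheet or an Allan-variance plot
      double gyro_noise_density, accel_noise_density;
      double gyro_bias_random_walk, accel_bias_random_walk;
      // rough initial guess of the camera-IMU extrinsics and time offset;
      // the auto-calibration discussed further down can refine these online
      double T_cam_imu[16];   // 4x4 homogeneous transform, row-major
      double time_offset_s;   // camera vs. IMU clock offset
    };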


Ah ok. I've got a T265, which I hope will work, as it has an IMU as well and can output the image frames as far as I know. And while the T265 does do the tracking already, I need a global reference frame, as I would like to drive on a predefined track.


I love all things sensor fusion; thanks for sharing this.

What are the pros & cons of using ROS as the basis of systems like this?


ROS makes the inter-process communication much easier if the SLAM system is incorporated as one component of a much bigger system. But you don't have to use ROS for that. We actually provide the ability to run it without ROS. Also, with ROS, it's easier to communicate with sensors given that the sensor drivers have been wrapped into ROS nodes.
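
To give an idea, a minimal ROS (ROS1, C++) wrapper around a VIO estimator looks roughly like this; the topic names and callbacks are placeholders, not our actual node:

    #include <ros/ros.h>
    #include <sensor_msgs/Image.h>
    #include <sensor_msgs/Imu.h>

    void imuCallback(const sensor_msgs::Imu::ConstPtr& msg) {
      // feed gyro/accel samples to the filter's prediction step
    }

    void imageCallback(const sensor_msgs::Image::ConstPtr& msg) {
      // track features, run the measurement update, then publish the estimated
      // pose (e.g. as a geometry_msgs/PoseStamped)
    }

    int main(int argc, char** argv) {
      ros::init(argc, argv, "vio_node");
      ros::NodeHandle nh;
      ros::Subscriber imu_sub = nh.subscribe("/imu0", 200, imuCallback);
      ros::Subscriber img_sub = nh.subscribe("/cam0/image_raw", 10, imageCallback);
      ros::spin();  // ROS handles the plumbing to the sensor driver nodes
      return 0;
    }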


What happens to the autocalibration process when the IMU & video sources don't agree, i.e. if they get misaligned?


That's the whole point of the autocalibration process: it figures out how the IMU & video sources are aligned.
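
Concretely, in a filter-based system the alignment (camera-IMU extrinsics, and often a time offset) typically sits in the state vector and gets refined online along with everything else. A simplified sketch with hypothetical names:

    #include <Eigen/Dense>

    struct OnlineCalibState {
      Eigen::Quaterniond q_cam_imu;  // rotation from the IMU frame to the camera frame
      Eigen::Vector3d    t_cam_imu;  // camera position expressed in the IMU frame
      double             td;         // camera-IMU time offset (seconds)
    };
    // These enter the measurement model, so every feature update also corrects them.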


or if the IMU breaks, for example, and starts outputting bad data -- will the autocalibration algo know to discard it?


The auto-calibration simply finds the spatial alignment between the camera and the IMU. If bad data are present, one needs some outlier rejection mechanism to filter them out. Auto-calibration alone does not provide that ability.
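
A common mechanism for that is a chi-square (Mahalanobis) gating test on each measurement before it is fused; a generic sketch, not necessarily what this system does:

    #include <Eigen/Dense>

    bool PassesGatingTest(const Eigen::VectorXd& innovation,  // z - h(x)
                          const Eigen::MatrixXd& S,           // innovation covariance
                          double chi2_threshold) {            // e.g. 95th percentile of chi^2
      double mahalanobis_sq = innovation.dot(S.ldlt().solve(innovation));
      return mahalanobis_sq < chi2_threshold;  // otherwise the measurement is rejected
    }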



