This is part of my research as a graduate student at the UCLA Vision Lab. The SLAM system is Extended Kalman Filter (EKF) based, keeps features (landmarks) in the state, and jointly estimates the pose of the camera and the locations of the landmarks. It runs at 140 Hz on a PC, which is much faster than some, if not all, existing open-source VIO systems.
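To make "features in the state" concrete, here is a toy 2-D sketch of an EKF that stacks a landmark position alongside the pose and updates both jointly. It is purely illustrative (my own simplification, not the actual system): the real filter tracks full 6-DoF camera pose with IMU-driven dynamics and many landmarks.

```python
import numpy as np

# State: [x, y, lx, ly] -- robot position plus one landmark position.
# Hypothetical toy example; not code from the actual SLAM system.

def predict(x, P, u, Q):
    """Prediction step: pose shifts by odometry/IMU increment u; landmarks are static."""
    F = np.eye(4)                 # identity Jacobian for this simple motion model
    x = x.copy()
    x[:2] += u                    # move the robot only
    P = F @ P @ F.T + Q           # process noise inflates uncertainty
    return x, P

def update(x, P, z, R):
    """Update step: measurement is the landmark offset seen from the robot, z = l - p."""
    H = np.array([[-1., 0., 1., 0.],
                  [ 0., -1., 0., 1.]])
    y = z - H @ x                 # innovation
    S = H @ P @ H.T + R           # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ y                 # pose AND landmark corrected together
    P = (np.eye(4) - K @ H) @ P
    return x, P

x = np.zeros(4)                   # robot at origin, landmark initialized at origin
P = np.eye(4)
x, P = predict(x, P, u=np.array([1.0, 0.0]), Q=0.01 * np.eye(4))
x, P = update(x, P, z=np.array([2.0, 0.0]), R=0.1 * np.eye(2))
```

Because the landmark lives in the state, the same Kalman gain corrects the pose and the landmark estimate in one update, which is the "joint estimation" described above.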
One little nit, however: it's important to note that this is not an open-source VIO system. It's licensed for research purposes only; anything else requires a commercial license from UCLA.
Thanks for pointing that out. This is my first time doing "open source" (well, it seems it's not really open source according to the modern definition). I'd like to use a more permissive license, but it's up to UCLA.
A middle ground might be to license as open source the code needed to replicate any of your scientific findings, while UCLA keeps the tooling required for commercial applications proprietary.
"Open source" has always meant "source code is available" to most people. The need to differentiate between "open source software" and "free software" (in the Stallman sense) is the very reason that the term "free software" was coined.
The fact that the Open Source Initiative published a document controlling the use of a certification logo doesn't mean they own the term.
My team and I worked with Eagle and his team from RealityCap back in 2015 on the monocular-SLAM iOS implementation of this for our AR application. Great people, and we were glad when they were picked up by the RealSense team.
Glad to see they were able to release some of the code as open source.
That looks awesome! I have a RealSense D435i and would love to test it out. I'll have an in-depth look at it later, but I'd love to share it in my mostly open-source list: https://github.com/msadowski/awesome-weekly-robotics
I am very interested in using this for one of my projects. I am just curious why you are using the D435. Is it because it has an IMU on board? You are not using the depth information from the sensor, right? That would be important for my use case.
The original D435 does not have an IMU, but the D435i version does. We use it for our other projects, which require dense depth. The SLAM system itself, however, should work with only RGB and IMU after some calibration and parameter tuning.
Ah, OK. I've got a T265, which I hope will work, as it has an IMU as well and can output the image frames as far as I know. And while the T265 does do the tracking already, I need a global reference frame, as I would like to drive on a predefined track.
ROS makes the inter-process communication much easier if the SLAM system is incorporated as one component of a much bigger system, but you don't have to use ROS for that; we actually provide the ability to run it without ROS. Also, with ROS it's easier to communicate with sensors, given that the sensor drivers have already been wrapped into ROS nodes.
The auto-calibration simply finds the spatial alignment between the camera and the IMU. If bad data are present, one needs some outlier rejection mechanism to filter them out; auto-calibration alone does not provide that ability.
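A common form of the outlier rejection mentioned here is chi-square gating on the Mahalanobis distance of each measurement's innovation. The sketch below is a generic illustration of that idea (names and thresholds are my own, not from the actual system):

```python
import numpy as np

# Hypothetical gating sketch: accept a measurement only if its innovation
# is statistically plausible under the filter's innovation covariance S.

CHI2_GATE_2DOF = 5.991   # ~95th percentile of chi-square with 2 DoF

def gate(z, z_pred, S):
    """Return True if measurement z passes the Mahalanobis gate."""
    r = z - z_pred                     # innovation
    d2 = r @ np.linalg.solve(S, r)     # squared Mahalanobis distance r^T S^-1 r
    return d2 <= CHI2_GATE_2DOF

S = np.diag([0.1, 0.1])                # innovation covariance
z_pred = np.array([1.0, 1.0])
print(gate(np.array([1.1, 0.9]), z_pred, S))   # True: small innovation, inlier
print(gate(np.array([3.0, 1.0]), z_pred, S))   # False: gross outlier, rejected
```

Rejected measurements simply never reach the EKF update, so bad feature tracks or glitched sensor data don't corrupt the state, whereas auto-calibration by itself would try to fit them.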