This project is based on the REVO work by Fabian Schenk and Friedrich Fraundorfer:
- Combining Edge Images and Depth Maps for Robust Visual Odometry, Fabian Schenk and Friedrich Fraundorfer, BMVC 2017
- Robust Edge-based Visual Odometry using Machine-Learned Edges, Fabian Schenk and Friedrich Fraundorfer, IROS 2017
In this work, a robust edge-based visual odometry (REVO) system is presented for RGBD sensors. Edges are more stable under varying lighting conditions than raw intensity values, which leads to higher accuracy and robustness in scenes where feature- or photoconsistency-based approaches often fail. The results show that this method performs best in terms of trajectory accuracy for most of the sequences, indicating that edges are suitable for a multitude of scenes.
REVO is licensed under the GNU General Public License Version 3 (GPLv3).
This framework has been built and tested with the following setup; a sketch for installing the dependencies follows the list.
- Ubuntu
- OpenCV > 4
- Eigen > 3.3
- Sophus (included in this repository under /thirdparty/Sophus)
- Pangolin (for graphical viewer)
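On Ubuntu, the non-bundled dependencies can typically be installed as sketched below. The package names are the standard Ubuntu ones and may differ between releases; Pangolin is built from source as described in its own README.

sudo apt install build-essential cmake libopencv-dev libeigen3-dev
git clone https://github.com/stevenlovegrove/Pangolin.git
cd Pangolin
mkdir build
cd build
cmake ..
make -j
sudo make install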
Set the optional packages in the cmake-gui
- Intel RealSense ZR300 (see below)
- Orbbec Astra Pro (see below)
git clone REVO
cd REVO
mkdir build
cd build
cmake ..
make -j
This project works on recorded RGB-D input and has been tested on the TUM RGB-D dataset, which was recorded with Microsoft Kinect RGB-D cameras. Download the sequence you want to test and generate an "associate.txt" file to align the RGB and depth images. To generate the "associate.txt" file, first download the "associate.py" script from the TUM RGBD Tools (it is already included in the "dataset" directory) and then run
python associate.py DATASET_XXX/rgb.txt DATASET_XXX/depth.txt > associate.txt
in the dataset directory.
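As an illustration, preparing one sequence could look like the following; the download URL follows the usual TUM naming scheme and should be verified on the dataset page.

cd dataset
wget https://vision.in.tum.de/rgbd/dataset/freiburg1/rgbd_dataset_freiburg1_xyz.tgz
tar xzf rgbd_dataset_freiburg1_xyz.tgz
python associate.py rgbd_dataset_freiburg1_xyz/rgb.txt rgbd_dataset_freiburg1_xyz/depth.txt > associate.txt
mv associate.txt rgbd_dataset_freiburg1_xyz/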
After generating the "associate.txt" file, move it to your dataset directory. Then specify the paths to the dataset and to associate.txt in the dataset_tumX.yaml settings file (in /config). To run on the dataset, execute the following from the "REVO" directory:
build/REVO config/revo_settings.yaml config/dataset_tum1.yaml
There are many settings in these two .yaml files.
- In "/config/revo_settings.yaml", "DO_GENERATE_DENSE_PCL" is for image pyrimid to improve sparse area and "DO_GAUSSIAN_SMOOTHING_BEFORE_CANNY" is for Gaussian blur before canny edge detection. They have been all set to 1, and you can change their value to 0 to compare the result.
- In "/config/data_tum1.yaml", you must modify your path to the mainfolder, subfolder of dataset and the associate.txt. Also, you should check the type of camera(like freiburg1, observed by the dataset name) and modify the camera parameters if necessary. The camera parameters can be searched on the above TUM website.
For evaluation of the absolute trajectory error (ATE) and the relative pose error (RPE), download the corresponding scripts from the TUM RGBD Tools.
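Assuming REVO writes the estimated trajectory in the TUM trajectory format (the output file name below is a placeholder; check the settings file for the actual output location), the scripts are invoked like this:

python evaluate_ate.py rgbd_dataset_freiburg1_xyz/groundtruth.txt estimated_trajectory.txt --plot ate.png --verbose
python evaluate_rpe.py rgbd_dataset_freiburg1_xyz/groundtruth.txt estimated_trajectory.txt --fixed_delta --verbose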
REVO supports three different sensors at the moment:
For the Intel sensor set "WITH_REALSENSE", for the Orbbec Astra Pro set "WITH_ORBBEC_FFMPEG" (recommended) or "WITH_ORBBEC_UVC" (not recommended, requires third party tools) and for the non-pro Orbbec Astra set "WITH_ORBBEC_OPENNI"! Note: Make sure that you set the USB rules in a way that the sensor is accessible for every user (default is root only).
REVO can be compiled for all three sensors only if WITH_REALSENSE, WITH_ORBBEC_FFMPEG and WITH_ORBBEC_OPENNI are set. If WITH_ORBBEC_UVC is set, there is a conflict with librealsense! To solve this issue, use WITH_ORBBEC_FFMPEG! An example CMake invocation is given below.
The sensor to be used is determined by the INPUT_TYPE set in the second config file: for the Orbbec Astra Pro INPUT_TYPE: 1, for the Intel RealSense INPUT_TYPE: 2, and for the Orbbec Astra INPUT_TYPE: 3.
Example config files for all three sensors can be found in the config directory!
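For example, a build with support for all live sensors could be configured from the build directory as sketched below (enable only the flags for the sensors you actually have):

cmake .. -DWITH_REALSENSE=ON -DWITH_ORBBEC_FFMPEG=ON -DWITH_ORBBEC_OPENNI=ON
make -j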
Install librealsense and set the intrinsic parameters in the config file. This framework was tested with the Intel RealSense ZR300.
The (non-pro) Orbbec Astra sensor can be fully accessed through Orbbec's OpenNI driver. First download the OpenNI driver and choose the *.zip file that matches your architecture, e.g. OpenNI-Linux_x64-2.3.zip. Extract it and copy libOpenNI2.so and the "Include" and "OpenNI2" folders to REVO_FOLDER/orbbec_astra_pro/drivers.
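A sketch of these steps; the exact layout inside the archive may differ, so adjust the paths accordingly:

unzip OpenNI-Linux_x64-2.3.zip
cd OpenNI-Linux_x64-2.3
cp libOpenNI2.so REVO_FOLDER/orbbec_astra_pro/drivers/
cp -r Include OpenNI2 REVO_FOLDER/orbbec_astra_pro/drivers/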
The standard OpenNI driver can only access the depth stream of the Orbbec Astra Pro sensor, so the color stream has to be accessed via FFMPEG. Install the newest FFMPEG version with
sudo apt install ffmpeg
or download it from the FFMPEG GitHub repository.
The standard OpenNI driver can only access the depth stream of the Orbbec Astra Pro Sensor, thus we have to access the color stream like a common webcam. Note: We use libuvc because the standard webcam interface of OpenCV buffers the images and doesn't always return the newest image.
First download the OpenNI driver and choose the *.zip file that matches your architecture, e.g. OpenNI-Linux_x64-2.3.zip. Extract it and copy libOpenNI2.so and the "Include" and "OpenNI2" folders to REVO_FOLDER/orbbec_astra_pro/drivers.
Then install Olaf Kaehler's fork of libuvc by performing the following steps in the main directory.
cd ThirdParty
git clone https://github.com/olafkaehler/libuvc
cd libuvc
mkdir build
cd build
cmake ..
make -j
make install
There was a problem with the old REVO version and a new Sophus version that introduced orthogonality checks for rotation matrices. If you face such an error, simply check out the current version of REVO.
If WITH_ORBBEC_UVC is set, there is a conflict with librealsense! To solve this issue, use WITH_ORBBEC_FFMPEG!