
pySLAM v2.3.0. Big file reorganization and refactoring.
luigifreda committed Dec 20, 2024
1 parent a5d2b4f commit e4737c5
Showing 157 changed files with 234 additions and 642 deletions.
2 changes: 1 addition & 1 deletion .gitignore
@@ -22,7 +22,7 @@ matches.txt
map.png
.vscode
.project
-videos/webcam
+data/videos/webcam

kf_info.log
local_mapping.log
98 changes: 49 additions & 49 deletions README.md
@@ -1,47 +1,47 @@
-# pySLAM v2.2.6
+# pySLAM v2.3.0

Author: **[Luigi Freda](https://www.luigifreda.com)**

<!-- TOC -->

-- [pySLAM v2.2.6](#pyslam-v226)
-  - [1. Install](#1-install)
-    - [1.1. Main requirements](#11-main-requirements)
-    - [1.2. Ubuntu](#12-ubuntu)
-    - [1.3. MacOS](#13-macos)
-    - [1.4. Docker](#14-docker)
-    - [1.5. How to install non-free OpenCV modules](#15-how-to-install-non-free-opencv-modules)
-    - [1.6. Troubleshooting and performance issues](#16-troubleshooting-and-performance-issues)
-  - [2. Usage](#2-usage)
-    - [2.1. Feature tracking](#21-feature-tracking)
-    - [2.2. Loop closing](#22-loop-closing)
-      - [2.2.1. Vocabulary management](#221-vocabulary-management)
-      - [2.2.2. Vocabulary-free loop closing](#222-vocabulary-free-loop-closing)
-    - [2.3. Volumetric reconstruction pipeline](#23-volumetric-reconstruction-pipeline)
-    - [2.4. Depth prediction](#24-depth-prediction)
-    - [2.5. Save and reload a map](#25-save-and-reload-a-map)
-    - [2.6. Relocalization in a loaded map](#26-relocalization-in-a-loaded-map)
-    - [2.7. Trajectory saving](#27-trajectory-saving)
-    - [2.8. SLAM GUI](#28-slam-gui)
-    - [2.9. Monitor the logs for tracking, local mapping, and loop closing simultaneously](#29-monitor-the-logs-for-tracking-local-mapping-and-loop-closing-simultaneously)
-  - [3. Supported components and models](#3-supported-components-and-models)
-    - [3.1. Supported local features](#31-supported-local-features)
-    - [3.2. Supported matchers](#32-supported-matchers)
-    - [3.3. Supported global descriptors and local descriptor aggregation methods](#33-supported-global-descriptors-and-local-descriptor-aggregation-methods)
-      - [3.3.1. Local descriptor aggregation methods](#331-local-descriptor-aggregation-methods)
-      - [3.3.2. Global descriptors](#332-global-descriptors)
-    - [3.4. Supported depth prediction models](#34-supported-depth-prediction-models)
-  - [4. Datasets](#4-datasets)
-    - [4.1. KITTI Datasets](#41-kitti-datasets)
-    - [4.2. TUM Datasets](#42-tum-datasets)
-    - [4.3. EuRoC Datasets](#43-euroc-datasets)
-    - [4.4. Replica Datasets](#44-replica-datasets)
-  - [5. Camera Settings](#5-camera-settings)
-  - [6. Comparison pySLAM vs ORB-SLAM3](#6-comparison-pyslam-vs-orb-slam3)
-  - [7. Contributing to pySLAM](#7-contributing-to-pyslam)
-  - [8. References](#8-references)
-  - [9. Credits](#9-credits)
-  - [10. TODOs](#10-todos)
+- [pySLAM v2.3.0](#pyslam-v230)
+  - [Install](#install)
+    - [Main requirements](#main-requirements)
+    - [Ubuntu](#ubuntu)
+    - [MacOS](#macos)
+    - [Docker](#docker)
+    - [How to install non-free OpenCV modules](#how-to-install-non-free-opencv-modules)
+    - [Troubleshooting and performance issues](#troubleshooting-and-performance-issues)
+  - [Usage](#usage)
+    - [Feature tracking](#feature-tracking)
+    - [Loop closing](#loop-closing)
+      - [Vocabulary management](#vocabulary-management)
+      - [Vocabulary-free loop closing](#vocabulary-free-loop-closing)
+    - [Volumetric reconstruction pipeline](#volumetric-reconstruction-pipeline)
+    - [Depth prediction](#depth-prediction)
+    - [Save and reload a map](#save-and-reload-a-map)
+    - [Relocalization in a loaded map](#relocalization-in-a-loaded-map)
+    - [Trajectory saving](#trajectory-saving)
+    - [SLAM GUI](#slam-gui)
+    - [Monitor the logs for tracking, local mapping, and loop closing simultaneously](#monitor-the-logs-for-tracking-local-mapping-and-loop-closing-simultaneously)
+  - [Supported components and models](#supported-components-and-models)
+    - [Supported local features](#supported-local-features)
+    - [Supported matchers](#supported-matchers)
+    - [Supported global descriptors and local descriptor aggregation methods](#supported-global-descriptors-and-local-descriptor-aggregation-methods)
+      - [Local descriptor aggregation methods](#local-descriptor-aggregation-methods)
+      - [Global descriptors](#global-descriptors)
+    - [Supported depth prediction models](#supported-depth-prediction-models)
+  - [Datasets](#datasets)
+    - [KITTI Datasets](#kitti-datasets)
+    - [TUM Datasets](#tum-datasets)
+    - [EuRoC Datasets](#euroc-datasets)
+    - [Replica Datasets](#replica-datasets)
+  - [Camera Settings](#camera-settings)
+  - [Comparison pySLAM vs ORB-SLAM3](#comparison-pyslam-vs-orb-slam3)
+  - [Contributing to pySLAM](#contributing-to-pyslam)
+  - [References](#references)
+  - [Credits](#credits)
+  - [TODOs](#todos)

<!-- /TOC -->

@@ -104,19 +104,19 @@ Then, use the available specific install procedure according to your OS. The pro
* Kornia 0.7.3
* Rerun

-If you encounter any issues or performance problems, refer to the [TROUBLESHOOTING](./TROUBLESHOOTING.md) file for assistance.
+If you encounter any issues or performance problems, refer to the [TROUBLESHOOTING](./docs/TROUBLESHOOTING.md) file for assistance.


### Ubuntu

-Follow the instructions reported [here](./PYTHON-VIRTUAL-ENVS.md) for creating a new virtual environment `pyslam` with **venv**. The procedure has been tested on *Ubuntu 18.04*, *20.04*, *22.04* and *24.04*.
+Follow the instructions reported [here](./docs/PYTHON-VIRTUAL-ENVS.md) for creating a new virtual environment `pyslam` with **venv**. The procedure has been tested on *Ubuntu 18.04*, *20.04*, *22.04* and *24.04*.

-If you prefer **conda**, run the scripts described in this other [file](./CONDA.md).
+If you prefer **conda**, run the scripts described in this other [file](./docs/CONDA.md).


### MacOS

-Follow the instructions in this [file](./MAC.md). The reported procedure was tested under *Sequoia 15.1.1* and *Xcode 16.1*.
+Follow the instructions in this [file](./docs/MAC.md). The reported procedure was tested under *Sequoia 15.1.1* and *Xcode 16.1*.


### Docker
@@ -130,7 +130,7 @@ If you prefer docker or you have an OS that is not supported yet, you can use [r

The provided install scripts will install a recent opencv version (>=**4.10**) with non-free modules enabled (see the provided scripts [install_pip3_packages.sh](./install_pip3_packages.sh) and [install_opencv_python.sh](./install_opencv_python.sh)). To quickly verify your installed opencv version run:
`$ . pyenv-activate.sh `
-`$ ./opencv_check.py`
+`$ ./scripts/opencv_check.py`
or use the following command:
`$ python3 -c "import cv2; print(cv2.__version__)"`
How to check if you have non-free OpenCV module support (no errors imply success):
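The concrete check command is collapsed in the diff above; as an illustrative sketch (assuming an `opencv-contrib` build that exposes the non-free `xfeatures2d` module, which is what the install scripts enable):

```python
import cv2

# SURF and the other xfeatures2d detectors exist only if OpenCV was built with
# OPENCV_ENABLE_NONFREE=ON; an AttributeError or cv2.error means they are missing.
surf = cv2.xfeatures2d.SURF_create()
print(f"OpenCV {cv2.__version__}: non-free modules available")
```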
@@ -139,7 +139,7 @@ How to check if you have non-free OpenCV module support (no errors imply success

### Troubleshooting and performance issues

-If you run into issues or errors during the installation process or at run-time, please, check the [TROUBLESHOOTING.md](./TROUBLESHOOTING.md) file.
+If you run into issues or errors during the installation process or at run-time, please check the [docs/TROUBLESHOOTING.md](./docs/TROUBLESHOOTING.md) file.

---
## Usage
@@ -149,22 +149,22 @@ Once you have run the script `install_all_venv.sh` (follow the instructions abov
$ . pyenv-activate.sh # Activate pyslam python virtual environment. This is only needed once in a new terminal.
$ ./main_vo.py
```
-This will process a default [KITTI](http://www.cvlibs.net/datasets/kitti/eval_odometry.php) video (available in the folder `videos`) by using its corresponding camera calibration file (available in the folder `settings`), and its groundtruth (available in the same `videos` folder). If matplotlib windows are used, you can stop `main_vo.py` by focusing/clicking on one of them and pressing the key 'Q'.
+This will process a default [KITTI](http://www.cvlibs.net/datasets/kitti/eval_odometry.php) video (available in the folder `data/videos`) by using its corresponding camera calibration file (available in the folder `settings`), and its groundtruth (available in the same `data/videos` folder). If matplotlib windows are used, you can stop `main_vo.py` by focusing/clicking on one of them and pressing the key 'Q'.
**Note**: As explained above, the basic script `main_vo.py` **strictly requires a ground truth**.

In order to process a different **dataset**, you need to set the file `config.yaml`:
* Select your dataset `type` in the section `DATASET` (see the section *[Datasets](#datasets)* below for further details). This identifies a corresponding dataset section (e.g. `KITTI_DATASET`, `TUM_DATASET`, etc.).
* Select the `sensor_type` (`mono`, `stereo`, `rgbd`) in the chosen dataset section.
* Select the camera `settings` file in the dataset section (further details in the section *[Camera Settings](#camera-settings)* below).
-* The `groudtruth_file` accordingly (further details in the section *[Datasets](#datasets)* below and check the files `ground_truth.py` and `convert_groundtruth.py`).
+* Set the `groudtruth_file` accordingly (further details in the section *[Datasets](#datasets)* below; check the files `io/ground_truth.py` and `io/convert_groundtruth.py`). A minimal loading sketch is shown below.
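As a rough illustration of how these keys fit together, here is a minimal loading sketch (assumptions: PyYAML and the key names shown in the bundled `config.yaml`; the real parsing lives in `config.py` and may differ):

```python
import yaml

with open("config.yaml") as f:
    cfg = yaml.safe_load(f)

dataset_type = cfg["DATASET"]["type"]   # e.g. 'KITTI_DATASET'
dataset_cfg = cfg[dataset_type]         # the matching dataset section
print(dataset_type,
      dataset_cfg["sensor_type"],       # 'mono', 'stereo' or 'rgbd'
      dataset_cfg["settings"])          # camera settings file
```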

Similarly, you can test `main_slam.py` by running:
```bash
$ . pyenv-activate.sh # Activate pyslam python virtual environment. This is only needed once in a new terminal.
$ ./main_slam.py
```

-This will process a default [KITTI](http://www.cvlibs.net/datasets/kitti/eval_odometry.php) video (available in the folder `videos`) by using its corresponding camera calibration file (available in the folder `settings`). You can stop it by focusing/clicking on one of the opened matplotlib windows and pressing the key 'Q'.
+This will process a default [KITTI](http://www.cvlibs.net/datasets/kitti/eval_odometry.php) video (available in the folder `data/videos`) by using its corresponding camera calibration file (available in the folder `settings`). You can stop it by focusing/clicking on one of the opened matplotlib windows and pressing the key 'Q'.
**Note**: Due to information loss in video compression, `main_slam.py` tracking may perform worse with the available KITTI videos than with the original KITTI image sequences. The available videos are intended to be used for a first quick test. Please download and use the original KITTI image sequences as explained [below](#datasets).

### Feature tracking
@@ -441,7 +441,7 @@ $ python associate.py PATH_TO_SEQUENCE/rgb.txt PATH_TO_SEQUENCE/depth.txt > asso
### EuRoC Datasets

1. Download a sequence (ASL format) from http://projects.asl.ethz.ch/datasets/doku.php?id=kmavvisualinertialdatasets (check this direct [link](http://robotics.ethz.ch/~asl-datasets/ijrr_euroc_mav_dataset/))
-2. Use the script `groundtruth/generate_euroc_groundtruths_as_tum.sh` to generate the TUM-like groundtruth files `path + '/' + name + '/mav0/state_groundtruth_estimate0/data.tum'` that are required by the `EurocGroundTruth` class.
+2. Use the script `io/generate_euroc_groundtruths_as_tum.sh` to generate the TUM-like groundtruth files `path + '/' + name + '/mav0/state_groundtruth_estimate0/data.tum'` that are required by the `EurocGroundTruth` class (format sketched below).
3. Select the corresponding calibration settings file (parameter `EUROC_DATASET: cam_settings:` in the file `config.yaml`).
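The generated `data.tum` files follow the standard TUM trajectory layout (`timestamp tx ty tz qx qy qz qw`, space-separated, with `#` comment lines). A minimal reader sketch under that assumption:

```python
import numpy as np

def load_tum_trajectory(path):
    # Each row: timestamp tx ty tz qx qy qz qw; lines starting with '#' are skipped.
    data = np.loadtxt(path, comments="#")
    return data[:, 0], data[:, 1:4], data[:, 4:8]  # timestamps, positions, quaternions

timestamps, positions, quaternions = load_tum_trajectory(
    "mav0/state_groundtruth_estimate0/data.tum")
```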


2 changes: 1 addition & 1 deletion config.py
@@ -25,7 +25,7 @@
import os
import yaml
import numpy as np
-from utils_sys import Printer, locally_configure_qt_environment
+from utilities.utils_sys import Printer, locally_configure_qt_environment
import math


18 changes: 13 additions & 5 deletions config.yaml
@@ -7,6 +7,14 @@ CORE_LIB_PATHS:
orb_features: thirdparty/orbslam2_features/lib
pyslam_utils: cpp/utils/lib
thirdparty: thirdparty # considering the folders in thirdparty as modules
+utilities: utilities
+depth_estimation: depth_estimation
+local_features: local_features
+loop_closing: loop_closing
+slam: slam
+viz: viz
+io: io
+dense: dense

LIB_PATHS:
# The following libs are explicitly imported on demand by using, for instance:
@@ -47,10 +55,10 @@
DATASET:
# select your dataset (decomment only one of the following lines)
#type: EUROC_DATASET
-#type: KITTI_DATASET
+type: KITTI_DATASET
#type: TUM_DATASET
#type: REPLICA_DATASET
-type: VIDEO_DATASET
+#type: VIDEO_DATASET
#type: FOLDER_DATASET
#type: LIVE_DATASET # Not recommended for current development stage

@@ -121,15 +129,15 @@ VIDEO_DATASET:
type: video
sensor_type: mono # Here, 'sensor_type' can be only 'mono'
#
-#base_path: ./videos/kitti00
+#base_path: ./data/videos/kitti00
#settings: settings/KITTI00-02.yaml
#name: video.mp4
#
-base_path: ./videos/kitti06
+base_path: ./data/videos/kitti06
settings: settings/KITTI04-12.yaml
name: video_color.mp4
#
-#base_path: ./videos/webcam
+#base_path: ./data/videos/webcam
#settings: settings/WEBCAM.yaml
#name: video.mp4
#
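The new `CORE_LIB_PATHS` entries (`utilities`, `slam`, `io`, `dense`, etc.) mirror the reorganized folder layout. Presumably they are prepended to `sys.path` at startup so that flat imports such as `from timer import TimerFps` keep working from any subfolder; a sketch of that mechanism (the actual code lives in `config.py` and may differ):

```python
import os
import sys
import yaml

root = os.path.dirname(os.path.abspath(__file__))
with open(os.path.join(root, "config.yaml")) as f:
    cfg = yaml.safe_load(f)

# Put every configured core-library folder on the import path so that modules
# inside the new subfolders (utilities/, slam/, io/, ...) resolve by bare name.
for rel_path in cfg["CORE_LIB_PATHS"].values():
    sys.path.insert(0, os.path.join(root, rel_path))
```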
4 changes: 2 additions & 2 deletions parameters.py → config_parameters.py
@@ -171,7 +171,7 @@ class Parameters:
kGBAUseRobustKernel = True

# Volume Integration
-kUseVolumetricIntegration = False # To enable/disable volumetric integration (dense mapping)
+kUseVolumetricIntegration = True # To enable/disable volumetric integration (dense mapping)
kVolumetricIntegrationDebugAndPrintToFile = True
kVolumetricIntegrationExtractMesh = False # Extract mesh or point cloud as output
kVolumetricIntegrationVoxelLength = 0.015 # [m]
@@ -180,7 +180,7 @@
kVolumetricIntegrationDepthTruncOutdoor = 10.0 # [m]
kVolumetricIntegrationMinNumLBATimes = 1 # We integrate only the keyframes that have been processed by LBA at least kVolumetricIntegrationMinNumLBATimes times.
kVolumetricIntegrationOutputTimeInterval = 1.0 # [s]
-kVolumetricIntegrationUseDepthEstimator = False # Use depth estimator for volumetric integration in the back-end.
+kVolumetricIntegrationUseDepthEstimator = True # Use depth estimator for volumetric integration in the back-end.
# Since the depth inference time is above 1 second, this is very slow.
# NOTE: the depth estimator estimates a metric depth (with an absolute scale). You can't combine it with MONOCULAR SLAM since the SLAM map scale will not be consistent.
kVolumetricIntegrationDepthEstimatorType = "DEPTH_RAFT_STEREO" # "DEPTH_PRO","DEPTH_ANYTHING_V2, "DEPTH_SGBM", "DEPTH_RAFT_STEREO", "DEPTH_CRESTEREO_PYTORCH" (see depth_estimator_factory.py)
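With the rename from `parameters.py` to `config_parameters.py`, run-time overrides use the new module name. A small sketch using only the flags shown above (they are plain class attributes, so assignment is enough):

```python
from config_parameters import Parameters

# Enable dense mapping with a back-end depth estimator.
Parameters.kUseVolumetricIntegration = True
Parameters.kVolumetricIntegrationUseDepthEstimator = True
# NOTE: metric depth estimators are not scale-consistent with monocular SLAM.
Parameters.kVolumetricIntegrationDepthEstimatorType = "DEPTH_RAFT_STEREO"
```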
9 files renamed without changes.
6 changes: 3 additions & 3 deletions volumetric_integrator.py → dense/volumetric_integrator.py
@@ -38,7 +38,7 @@

from timer import TimerFps

-from parameters import Parameters
+from config_parameters import Parameters

import traceback

@@ -66,8 +66,8 @@
kScriptPath = os.path.realpath(__file__)
kScriptFolder = os.path.dirname(kScriptPath)
kRootFolder = kScriptFolder
-kDataFolder = kRootFolder + '/data'
-kLogsFolder = kRootFolder + '/logs'
+kDataFolder = kRootFolder + '/../data'
+kLogsFolder = kRootFolder + '/../logs'


kVolumetricIntegratorProcessName = 'VolumetricIntegratorProcess'
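The same pattern repeats in the depth-estimation modules below: files moved one level down now reach the repository root via `'/..'`. An equivalent, slightly more robust formulation (a sketch, not the committed code):

```python
import os

kScriptPath = os.path.realpath(__file__)
kScriptFolder = os.path.dirname(kScriptPath)
# Normalize '<repo>/dense/..' to '<repo>' instead of carrying the '..' segment.
kRootFolder = os.path.abspath(os.path.join(kScriptFolder, os.pardir))
kDataFolder = os.path.join(kRootFolder, "data")
kLogsFolder = os.path.join(kRootFolder, "logs")
```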
@@ -32,7 +32,7 @@

kScriptPath = os.path.realpath(__file__)
kScriptFolder = os.path.dirname(kScriptPath)
-kRootFolder = kScriptFolder
+kRootFolder = kScriptFolder + '/..'


# Base class for depth estimators via inference.
@@ -38,7 +38,7 @@

kScriptPath = os.path.realpath(__file__)
kScriptFolder = os.path.dirname(kScriptPath)
-kRootFolder = kScriptFolder
+kRootFolder = kScriptFolder + '/..'


def enforce_megengine_linking():
@@ -41,7 +41,7 @@

kScriptPath = os.path.realpath(__file__)
kScriptFolder = os.path.dirname(kScriptPath)
-kRootFolder = kScriptFolder
+kRootFolder = kScriptFolder + '/..'


# Stereo depth prediction using the Crestereo model with pytorch.
@@ -43,7 +43,7 @@

kScriptPath = os.path.realpath(__file__)
kScriptFolder = os.path.dirname(kScriptPath)
-kRootFolder = kScriptFolder
+kRootFolder = kScriptFolder + '/..'


# Monocular depth estimator using the DepthAnythingV2 model.
@@ -41,7 +41,7 @@

kScriptPath = os.path.realpath(__file__)
kScriptFolder = os.path.dirname(kScriptPath)
-kRootFolder = kScriptFolder
+kRootFolder = kScriptFolder + '/..'


# Monocular depth estimator using the DepthPro model.
@@ -41,7 +41,7 @@

kScriptPath = os.path.realpath(__file__)
kScriptFolder = os.path.dirname(kScriptPath)
-kRootFolder = kScriptFolder
+kRootFolder = kScriptFolder + '/..'


@register_class
@@ -43,7 +43,7 @@

kScriptPath = os.path.realpath(__file__)
kScriptFolder = os.path.dirname(kScriptPath)
-kRootFolder = kScriptFolder
+kRootFolder = kScriptFolder + '/..'


class DepthEstimatorRaftStereoConfiguration:
2 files renamed without changes.
4 changes: 2 additions & 2 deletions PYTHON-VIRTUAL-ENVS.md → docs/PYTHON-VIRTUAL-ENVS.md
@@ -1,8 +1,8 @@
-# pySLAM2 Virtual Environment
+# Install pyslam virtual environment

<!-- TOC -->

-- [pySLAM2 Virtual Environment](#pyslam2-virtual-environment)
+- [Install pyslam virtual environment](#install-pyslam-virtual-environment)
- [Installation](#installation)
- [Usage](#usage)
- [Create a `pyslam` python virtual environment](#create-a-pyslam-python-virtual-environment)