
VPR-methods-evaluation

This repo allows you to easily test almost any SOTA VPR model within a minute. The architecture code and weights are from the respective authors of the papers, ensuring reliability.

Basic use on an unlabelled dataset

Simply run the following to try a method on a (unlabelled) toy dataset contained in assets. This is super lightweight and will take only a few seconds, even when running on a laptop CPU.

git clone --recursive https://github.com/gmberton/VPR-methods-evaluation
cd VPR-methods-evaluation
python3 main.py --method=cosplace --backbone=ResNet18 --descriptors_dimension=512 \
    --database_folder=toy_dataset/database --queries_folder=toy_dataset/queries \
    --no_labels --image_size 200 200 --num_preds_to_save 3 \
    --log_dir toy_experiment

which will generate, within logs/toy_experiment, images with the visual predictions for each query.

You can also use this on your own dataset (or on any two directories of images) simply by changing the path parameters.
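For example, assuming your images are in two arbitrary folders (the paths and the log_dir name below are placeholders to replace with your own), a run could look like this:

python3 main.py --method=cosplace --backbone=ResNet18 --descriptors_dimension=512 \
    --database_folder=/path/to/your/database --queries_folder=/path/to/your/queries \
    --no_labels --image_size 200 200 --num_preds_to_save 3 \
    --log_dir my_own_experiment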

Basic use on a labelled dataset

The code is designed to be readily used with our VPR-datasets-downloader repo, so that with a few simple commands you can download a dataset and test any model on it. The VPR-datasets-downloader code allows you to download multiple VPR datasets, which are automatically formatted in the layout used by this repo.

To compute recalls, this repo needs each image's filename to embed its coordinates in the format path/to/image/@UTM_east@UTM_north@...@.jpg
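For illustration only, here is a minimal sketch (not part of the repo) of how the coordinates can be read back from a filename that follows this convention:

from pathlib import Path

def parse_utm(image_path):
    # Fields are separated by '@'; with the leading '@', the first two
    # non-empty fields are UTM east and UTM north.
    fields = Path(image_path).name.split("@")
    return float(fields[1]), float(fields[2])

The coordinates parsed this way are what allows the recalls to be computed against the queries' positions.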

mkdir VPR-codebase
cd VPR-codebase

git clone https://github.com/gmberton/VPR-datasets-downloader
cd VPR-datasets-downloader
python3 download_st_lucia.py

cd ..

git clone --recursive https://github.com/gmberton/VPR-methods-evaluation
cd VPR-methods-evaluation
python3 main.py --method=cosplace --backbone=ResNet18 --descriptors_dimension=512 \
    --database_folder=../VPR-datasets-downloader/datasets/st_lucia/images/test/database \
    --queries_folder=../VPR-datasets-downloader/datasets/st_lucia/images/test/queries

This should produce as output R@1: 98.8, R@5: 99.7, R@10: 99.9, R@20: 100.0, which will also be saved in a log file under ./logs/

You can easily change the paths to evaluate different datasets, and you can use any supported method. Note that each method has weights only for certain architectures: for example, NetVLAD only has weights for VGG16, with descriptors_dimension 32768 or 4096 (with PCA).
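For instance, a NetVLAD run on the same dataset might look like the following (treat the exact method and backbone strings as assumptions, and check parser.py for the accepted values):

python3 main.py --method=netvlad --backbone=VGG16 --descriptors_dimension=4096 \
    --database_folder=../VPR-datasets-downloader/datasets/st_lucia/images/test/database \
    --queries_folder=../VPR-datasets-downloader/datasets/st_lucia/images/test/queries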

NB: make sure to clone with git clone --recursive, otherwise some third-party models (like AP-GeM) can't be used.

Visualize predictions

Predictions can be easily visualized through the num_preds_to_save parameter. For example, running this

python3 main.py --method=cosplace --backbone=ResNet18 --descriptors_dimension=512 \
    --num_preds_to_save=3 --log_dir=cosplace_on_stlucia \
    --database_folder=../VPR-datasets-downloader/datasets/st_lucia/images/test/database \
    --queries_folder=../VPR-datasets-downloader/datasets/st_lucia/images/test/queries

will generate, under the path ./logs/cosplace_on_stlucia/*/preds, images with the visual predictions for each query.

Given that saving predictions for each query might take a while, you can also pass the parameter --save_only_wrong_preds, which will save predictions only for wrongly predicted queries (i.e. those where the first prediction is wrong).
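As an example, the visualization command above becomes the following when only the wrong predictions should be saved (here --save_only_wrong_preds is assumed to be a flag that takes no value):

python3 main.py --method=cosplace --backbone=ResNet18 --descriptors_dimension=512 \
    --num_preds_to_save=3 --save_only_wrong_preds --log_dir=cosplace_on_stlucia \
    --database_folder=../VPR-datasets-downloader/datasets/st_lucia/images/test/database \
    --queries_folder=../VPR-datasets-downloader/datasets/st_lucia/images/test/queries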

Supported models

NetVLAD, AP-GeM, SFRS, CosPlace, Conv-AP, MixVPR, EigenPlaces, AnyLoc, SALAD, EigenPlaces-indoor, SALAD-indoor, CricaVPR, CliqueMining.

Unsupported models / contributing

There are some models, namely VLAD-BuFF, Bag-of-Queries and DINO-Mix, that we tried to add but couldn't get to work, mostly due to issues in their codebases. We'd gladly accept PRs from anyone who can get them to work.

To get a model to work, simply add it to parser.py and to vpr_models/__init__.py: if the model is easy to download (e.g. through torch.hub.load), adding a few (4-5) lines of code should be enough to make it work. There is no need to test it on a VPR dataset; just make it run on the toy dataset, and we'll then test it on the VPR datasets ourselves to ensure correctness.
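As a purely illustrative sketch (the function, file structure and names below are hypothetical, not the repo's actual API), registering a new model that is downloadable through torch.hub could boil down to something like this in vpr_models/__init__.py:

import torch

def get_method(method, backbone, descriptors_dimension):
    # Hypothetical dispatch; the real file's structure may differ.
    if method == "my_new_method":
        # Load the authors' published weights through torch.hub; the repo
        # and entry-point names here are placeholders.
        return torch.hub.load("author_org/model_repo", "model_entrypoint",
                              backbone=backbone, fc_output_dim=descriptors_dimension)
    raise ValueError(f"Unknown method: {method}")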

Acknowledgements / Cite / BibTeX

If you use this repository, please cite our ICCV 2023 EigenPlaces paper, for which we started this repo to ensure fair comparisons between VPR models:

@inproceedings{Berton_2023_EigenPlaces,
  title={EigenPlaces: Training Viewpoint Robust Models for Visual Place Recognition},
  author={Berton, Gabriele and Trivigno, Gabriele and Caputo, Barbara and Masone, Carlo},
  booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
  year={2023},
  month={October},
  pages={11080-11090}
}

Kudos to the authors of NetVLAD, AP-GeM, SFRS, CosPlace, Conv-AP, MixVPR, EigenPlaces, AnyLoc, SALAD, EigenPlaces-indoor and SALAD-indoor for open sourcing their models' weights. The code for each model has been taken from their respective repositories, except for the code for NetVLAD which has been taken from hloc. Make sure to cite them if you use each model's code.
