What's new in this repo?

  • Changed the loss computation to a batched operation, roughly 3× faster than the original (see the sketch below).
  • Modified the custom op to support PyTorch 1.8.
  • Optimized the dataloader logic.

Training time on a single A100 with batch_size 128:

(Screenshot: training-time comparison, 2022-10-20.)
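As a rough illustration of the batched-loss change listed above, the minimal sketch below contrasts a per-sample loop with a single vectorized call. The tensor shapes and the smooth-L1 loss are stand-ins for illustration, not the repository's actual loss code.

import torch
import torch.nn.functional as F

def loss_per_sample(pred, target):
    # Original style: loop over the batch and average one loss per sample.
    total = 0.0
    for i in range(pred.shape[0]):
        total = total + F.smooth_l1_loss(pred[i], target[i])
    return total / pred.shape[0]

def loss_batched(pred, target):
    # Batched style: one vectorized call over the whole batch.
    return F.smooth_l1_loss(pred, target)

pred = torch.randn(128, 6)    # hypothetical predicted 6-DoF calibration offsets
target = torch.randn(128, 6)
assert torch.allclose(loss_per_sample(pred, target), loss_batched(pred, target), atol=1e-6)

The batched version avoids Python-level loop overhead and per-sample kernel launches, which is where a speed-up of this kind would come from.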

LCCNet

Official PyTorch implementation of the paper “LCCNet: Lidar and Camera Self-Calibration Using Cost Volume Network”. A demonstration video of the method can be found at https://www.youtube.com/watch?v=UAAGjYT708A

Table of Contents

  • Requirements
  • Pre-trained model
  • Evaluation
  • Train
  • Citation
  • Acknowledgments

Requirements

  • Python 3.6 (Anaconda is recommended; see the example setup after this list)
  • PyTorch==1.0.1.post2
  • Torchvision==0.2.2
  • Install requirements and dependencies:
pip install -r requirements.txt
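If you use Anaconda as recommended, a setup along these lines should work; the environment name lccnet is just an illustrative choice.

conda create -n lccnet python=3.6
conda activate lccnet
pip install -r requirements.txt   # plus PyTorch/Torchvision at the versions listed above, if not pinned there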

Pre-trained model

Pre-trained models can be downloaded from Google Drive.

Evaluation

  1. Download the KITTI odometry dataset (the expected folder layout is sketched after this list).
  2. Change the path to the dataset in evaluate_calib.py.
data_folder = '/path/to/the/KITTI/odometry_color/'
  3. Create a folder named pretrained in the root path to store the pre-trained models.
  4. Download the pre-trained models and modify the weights path in evaluate_calib.py.
weights = [
   './pretrained/kitti_iter1.tar',
   './pretrained/kitti_iter2.tar',
   './pretrained/kitti_iter3.tar',
   './pretrained/kitti_iter4.tar',
   './pretrained/kitti_iter5.tar',
]
  5. Run the evaluation.
python evaluate_calib.py
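For reference, a typical KITTI odometry (color) download unpacks to roughly the layout sketched below; whether evaluate_calib.py expects exactly this structure is an assumption, so point data_folder at whatever level matches your copy.

/path/to/the/KITTI/odometry_color/
└── sequences/
    ├── 00/
    │   ├── image_2/      # left color images
    │   ├── image_3/      # right color images
    │   ├── velodyne/     # LiDAR scans (.bin), from the odometry velodyne download
    │   ├── calib.txt
    │   └── times.txt
    ├── 01/
    └── ...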

Train

python train_with_sacred.py
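train_with_sacred.py appears to be a Sacred experiment, so Sacred's standard command-line interface should apply; the config keys below (batch_size, max_epochs) are guesses for illustration, so check the script's config section for the real names.

python train_with_sacred.py print_config                        # list the experiment's actual config entries
python train_with_sacred.py with batch_size=128 max_epochs=120  # override config entries at launch (hypothetical keys)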

Citation

If you use any of this code or the datasets, please cite our paper.

@article{lv2020lidar,
  title={Lidar and Camera Self-Calibration using Cost Volume Network},
  author={Lv, Xudong and Wang, Boya and Ye, Dong and Wang, Shuo},
  journal={arXiv preprint arXiv:2012.13901},
  year={2020}
}

Acknowledgments

We are grateful to Daniele Cattaneo for his CMRNet GitHub repository, which we used as our initial code base.