# CUDE-MonoDepthCL

We propose CUDE, a framework for Continual Unsupervised Depth Estimation across multiple tasks with domain and depth-range shifts, spanning varied weather and lighting conditions as well as sim-to-real, indoor-to-outdoor, and outdoor-to-indoor scenarios. We also propose MonoDepthCL, a method to mitigate catastrophic forgetting in CUDE.

## Install

MonoDepthCL was trained on a Tesla V100 GPU for 20 epochs per task with the AdamW optimizer, at a resolution of 192 x 640 and a batch size of 12. The Docker environment used can be set up as follows:

```bash
git clone https://github.com/NeurAI-Lab/MIMDepth.git
cd MIMDepth
make docker-build
```

## Training

MonoDepthCL is trained in a self-supervised manner from videos. To train, pass either a `.yaml` config file or a `.ckpt` model checkpoint to `train.py`:

```bash
python train.py <config_file.yaml or model_checkpoint.ckpt>
```
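The exact schema of the config file is defined by the repository; as a rough illustration only, a training config for the setup described above (20 epochs, AdamW, 192 x 640, batch size 12) might look like the following. All key names here are hypothetical, not taken from the actual codebase:

```yaml
# Hypothetical config sketch -- key names are illustrative only,
# not the actual schema used by MonoDepthCL.
model:
  name: MonoDepthCL
optimizer:
  type: AdamW
training:
  epochs_per_task: 20
  batch_size: 12
  resolution: [192, 640]
```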

The splits used in the CUDE benchmark can be found in the `splits` folder.
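Split files in monocular-depth repositories commonly list one sample per line, e.g. a sequence folder, a frame index, and a camera side. As a hedged sketch (the file layout below is an assumption, not the documented CUDE format), such a file could be parsed like this:

```python
# Minimal sketch: parsing a KITTI-style split file into (folder, frame, side)
# tuples. The line layout is an assumption, not taken from the CUDE repository.
from pathlib import Path
import tempfile


def read_split(path):
    """Return one (folder, frame_index, side) tuple per non-empty line."""
    entries = []
    for line in Path(path).read_text().splitlines():
        if not line.strip():
            continue
        folder, frame, side = line.split()
        entries.append((folder, int(frame), side))
    return entries


# Usage with a small example split written to a temporary file.
with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as f:
    f.write("2011_09_26/2011_09_26_drive_0001_sync 42 l\n")
    f.write("2011_09_26/2011_09_26_drive_0001_sync 43 r\n")
    split_path = f.name

entries = read_split(split_path)
print(entries)
```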

## Cite Our Work

If you find the code useful in your research, please consider citing our paper:

```bibtex
@inproceedings{cude2024wacv,
	author={H. {Chawla} and A. {Varma} and E. {Arani} and B. {Zonooz}},
	booktitle={Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision},
	title={Continual Learning of Unsupervised Monocular Depth from Videos},
	year={2023}
}
```

## License

This project is licensed under the terms of the MIT license.