This is the official code for the ICRA 2023 paper "Image Masking for Robust Self-Supervised Monocular Depth Estimation" by Hemang Chawla, Kishaan Jeeveswaran, Elahe Arani, and Bahram Zonooz.
We propose MIMDepth, a method that adapts masked image modeling (MIM) for self-supervised monocular depth estimation.
MIMDepth was trained on a Tesla V100 GPU for 20 epochs with the AdamW optimizer at a resolution of 192 x 640 and a batch size of 12. The docker environment used can be set up as follows:
git clone https://github.com/NeurAI-Lab/MIMDepth.git
cd MIMDepth
make docker-build
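After building the image, a quick way to confirm that the environment inside the container can see the GPU is the short check below. This is a minimal sketch and assumes the docker image ships PyTorch with CUDA support; it is not part of the repository's tooling.

# Minimal environment sanity check (assumes the docker image provides PyTorch with CUDA).
import torch

print("PyTorch version:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    # Training in the paper used a Tesla V100; any CUDA-capable GPU should be detected here.
    print("Device:", torch.cuda.get_device_name(0))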
MIMDepth is trained in a self-supervised manner from videos.
For training, use a .yaml config file or a .ckpt model checkpoint file with scripts/train.py:
python scripts/train.py <config_file.yaml or model_checkpoint.ckpt>
An example config file for training MIMDepth can be found in the configs folder.
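As a sketch of how such a config might be inspected or tweaked before launching training, the snippet below loads a YAML file with PyYAML. The file name and printed keys are placeholders and are not guaranteed to match the repository's config schema.

# Hedged sketch: inspect a training config before passing it to scripts/train.py.
import yaml

config_path = "configs/example_config.yaml"  # hypothetical name; use an actual file from configs
with open(config_path) as f:
    config = yaml.safe_load(f)

# List the top-level sections so dataset, model, and training settings can be reviewed.
for key, value in config.items():
    print(key, ":", type(value).__name__)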
A trained model can be evaluated by providing a .ckpt model checkpoint:
python scripts/eval.py --checkpoint <model_checkpoint.ckpt>
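The evaluation script reports the standard monocular depth metrics. Independent of the repository's implementation, the sketch below shows how the common abs_rel, RMSE, and threshold-accuracy metrics are typically computed from predicted and ground-truth depth maps (NumPy only; invalid ground-truth pixels are assumed to have been masked out already).

# Standard monocular depth metrics (sketch, not the repository's evaluation code).
import numpy as np

def depth_metrics(gt: np.ndarray, pred: np.ndarray) -> dict:
    """Compute common depth metrics over valid (already masked) pixels."""
    thresh = np.maximum(gt / pred, pred / gt)
    return {
        "abs_rel": float(np.mean(np.abs(gt - pred) / gt)),
        "sq_rel": float(np.mean(((gt - pred) ** 2) / gt)),
        "rmse": float(np.sqrt(np.mean((gt - pred) ** 2))),
        "rmse_log": float(np.sqrt(np.mean((np.log(gt) - np.log(pred)) ** 2))),
        "a1": float(np.mean(thresh < 1.25)),
        "a2": float(np.mean(thresh < 1.25 ** 2)),
        "a3": float(np.mean(thresh < 1.25 ** 3)),
    }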
To run inference on a single image or a folder of images:
python scripts/infer.py --checkpoint <checkpoint.ckpt> --input <image or folder> --output <image or folder> [--image_shape <input shape (h,w)>]
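To process several folders in one go, the documented CLI above can be driven from a small Python script. The folder and checkpoint names below are placeholders; the optional --image_shape flag can be appended as shown in the usage line above.

# Hedged sketch: run the documented inference CLI over multiple folders.
import subprocess

checkpoint = "mimdepth.ckpt"              # hypothetical checkpoint path
folders = ["data/seq_01", "data/seq_02"]  # hypothetical input folders

for folder in folders:
    subprocess.run(
        [
            "python", "scripts/infer.py",
            "--checkpoint", checkpoint,
            "--input", folder,
            "--output", folder + "_depth",
        ],
        check=True,
    )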
Pretrained models for MT-SfMLearner and MIMDepth can be found here.
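A quick way to sanity-check a downloaded checkpoint is to open it with PyTorch and list its top-level keys. This sketch assumes a standard PyTorch/PyTorch Lightning style .ckpt file and does not depend on the repository's model classes; the file name is a placeholder.

# Hedged sketch: inspect a downloaded .ckpt file without instantiating the model.
import torch

ckpt = torch.load("mimdepth.ckpt", map_location="cpu")  # hypothetical file name
print("Top-level keys:", list(ckpt.keys()))

# If a state_dict is present, list a few parameter names to see which networks it contains.
state_dict = ckpt.get("state_dict", ckpt)
for name in list(state_dict.keys())[:10]:
    print(name)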
If you find the code useful in your research, please consider citing our paper:
@inproceedings{chawla2023image,
  author={H. {Chawla} and K. {Jeeveswaran} and E. {Arani} and B. {Zonooz}},
  booktitle={2023 IEEE International Conference on Robotics and Automation (ICRA)},
  title={Image Masking for Robust Self-Supervised Monocular Depth Estimation},
  location={London, UK},
  publisher={IEEE},
  year={2023}
}
This project is licensed under the terms of the MIT license.