mkdir nyu_depth_v2
wget http://horatio.cs.nyu.edu/mit/silberman/nyu_depth_v2/nyu_depth_v2_labeled.mat
python extract_official_train_test_set_from_mat.py nyu_depth_v2_labeled.mat splits.mat ./nyu_depth_v2/official_splits/
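Before running the extraction script, it can help to confirm that the labeled .mat file downloaded completely. A minimal sanity-check sketch, assuming the file is stored in MATLAB v7.3 (HDF5) format and that h5py is available:

```python
# Quick sanity check of the downloaded labeled .mat file (a sketch, not part of
# the official pipeline). MATLAB v7.3 files are HDF5, so h5py can open them.
import h5py

MAT_PATH = "nyu_depth_v2_labeled.mat"  # same file fetched by the wget command above

with h5py.File(MAT_PATH, "r") as f:
    # Print every top-level entry and its shape so a truncated or corrupted
    # download is caught before running the extraction script.
    for key in f.keys():
        item = f[key]
        shape = getattr(item, "shape", None)  # groups have no shape, datasets do
        print(f"{key}: shape={shape}")
```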
Download sync.zip provided by the authors of BTS from this URL and unzip it in the ./nyu_depth_v2 folder.
Your dataset directory should be:

│nyu_depth_v2/
├──official_splits/
│  ├── test
│  ├── train
├──sync/

│kitti/
├──data_depth_annotated/
├──raw_data/
├──val_selection_cropped/
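The layout above can be verified with a small sketch; the paths listed below simply mirror the tree, and the dataset root is assumed to be the current working directory:

```python
# Optional check that both datasets ended up in the expected locations.
from pathlib import Path

EXPECTED = [
    "nyu_depth_v2/official_splits/test",
    "nyu_depth_v2/official_splits/train",
    "nyu_depth_v2/sync",
    "kitti/data_depth_annotated",
    "kitti/raw_data",
    "kitti/val_selection_cropped",
]

root = Path(".")  # adjust to wherever the datasets live
for rel in EXPECTED:
    status = "ok" if (root / rel).is_dir() else "MISSING"
    print(f"{status:7s} {rel}")
```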
| NYUv2 | RMSE | d1 | d2 | d3 | REL | Fine-tuned Model |
|---|---|---|---|---|---|---|
| MetaPrompts | 0.223 | 0.976 | 0.997 | 0.999 | 0.061 | Google Drive |

| KITTI | RMSE | d1 | d2 | d3 | REL | Fine-tuned Model |
|---|---|---|---|---|---|---|
| MetaPrompts | 1.928 | 0.981 | 0.998 | 1.000 | 0.047 | Google Drive |
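The columns above are the standard monocular depth estimation metrics. A hedged sketch of how they are typically computed (the authors' test scripts may additionally apply evaluation crops and depth caps):

```python
# Reference sketch of RMSE, REL, and the delta accuracies d1/d2/d3.
import numpy as np

def depth_metrics(pred: np.ndarray, gt: np.ndarray, min_depth: float = 1e-3):
    """Compute RMSE, REL, and d1/d2/d3 on valid (gt > min_depth) pixels."""
    mask = gt > min_depth
    pred, gt = pred[mask], gt[mask]

    # dK = fraction of pixels whose ratio error is below 1.25**K
    thresh = np.maximum(pred / gt, gt / pred)
    d1 = float(np.mean(thresh < 1.25))
    d2 = float(np.mean(thresh < 1.25 ** 2))
    d3 = float(np.mean(thresh < 1.25 ** 3))

    rmse = float(np.sqrt(np.mean((pred - gt) ** 2)))   # root mean squared error
    rel = float(np.mean(np.abs(pred - gt) / gt))       # absolute relative error
    return {"RMSE": rmse, "REL": rel, "d1": d1, "d2": d2, "d3": d3}
```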
Run the following instructions to train the MetaPrompts-Depth model. We recommend using 8 NVIDIA V100 GPUs to train the model with a total batch size of 24 (i.e., 3 samples per GPU when the batch is split evenly).
Train the MetaPrompts-Depth model with 8 NVIDIA V100 GPUs on the NYUv2 dataset:
bash train.sh <LOG_DIR>
Train the MetaPrompts-Depth model with 8 NVIDIA V100 GPUs on the KITTI dataset:
bash train_kitti.sh <LOG_DIR>
Command format (use test.sh for NYUv2 and test_kitti.sh for KITTI):
bash test.sh <CHECKPOINT_PATH>
bash test_kitti.sh <CHECKPOINT_PATH>
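Before evaluation it can be useful to peek inside a downloaded fine-tuned checkpoint. This is only a sketch that assumes a standard PyTorch checkpoint file; the actual key layout of the MetaPrompts checkpoints may differ:

```python
# Inspect a checkpoint file before passing it to test.sh / test_kitti.sh.
import sys
import torch

ckpt_path = sys.argv[1]  # same <CHECKPOINT_PATH> used in the test commands above
ckpt = torch.load(ckpt_path, map_location="cpu")

if isinstance(ckpt, dict):
    print("top-level keys:", list(ckpt.keys()))
    # Many checkpoints nest weights under "state_dict" or "model"; fall back to
    # the dict itself if neither key is present.
    state = ckpt.get("state_dict", ckpt.get("model", ckpt))
    if isinstance(state, dict):
        print("number of saved tensors:", len(state))
else:
    print("checkpoint object type:", type(ckpt))
```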