
Depth

  • (arXiv 2020.11) Revisiting Stereo Depth Estimation From a Sequence-to-Sequence Perspective with Transformers, [Paper], [Code]
  • (arXiv 2021.03) Vision Transformers for Dense Prediction, [Paper], [Code]
  • (arXiv 2021.03) Transformers Solve the Limited Receptive Field for Monocular Depth Prediction, [Paper], [Code]
  • (arXiv 2021.09) Improving 360 Monocular Depth Estimation via Non-local Dense Prediction Transformer and Joint Supervised and Self-supervised Learning, [Paper]
  • (arXiv 2022.02) GLPanoDepth: Global-to-Local Panoramic Depth Estimation, [Paper]
  • (arXiv 2022.02) Transformers in Self-Supervised Monocular Depth Estimation with Unknown Camera Intrinsics, [Paper]
  • (arXiv 2022.03) OmniFusion: 360 Monocular Depth Estimation via Geometry-Aware Fusion, [Paper]
  • (arXiv 2022.03) PanoFormer: Panorama Transformer for Indoor 360 Depth Estimation, [Paper]
  • (arXiv 2022.03) DepthGAN: GAN-based Depth Generation of Indoor Scenes from Semantic Layouts, [Paper]
  • (arXiv 2022.03) DepthFormer: Exploiting Long-Range Correlation and Local Information for Accurate Monocular Depth Estimation, [Paper], [Code]
  • (arXiv 2022.04) BinsFormer: Revisiting Adaptive Bins for Monocular Depth Estimation, [Paper], [Code]
  • (arXiv 2022.04) SurroundDepth: Entangling Surrounding Views for Self-Supervised Multi-Camera Depth Estimation, [Paper], [Project]
  • (arXiv 2022.04) Multi-Frame Self-Supervised Depth with Transformers, [Paper], [Project]
  • (arXiv 2022.05) SideRT: A Real-time Pure Transformer Architecture for Single Image Depth Estimation, [Paper]
  • (arXiv 2022.05) Depth Estimation with Simplified Transformer, [Paper]
  • (arXiv 2022.05) MonoFormer: Towards Generalization of self-supervised monocular depth estimation with Transformers, [Paper]
  • (arXiv 2022.06) SparseFormer: Attention-based Depth Completion Network, [Paper]
  • (arXiv 2022.06) Forecasting of depth and ego-motion with transformers and self-supervision, [Paper]
  • (arXiv 2022.07) Depthformer: Multiscale Vision Transformer For Monocular Depth Estimation With Local Global Information Fusion, [Paper], [Code]
  • (arXiv 2022.08) MonoViT: Self-Supervised Monocular Depth Estimation with a Vision Transformer, [Paper], [Code]
  • (arXiv 2022.09) TODE-Trans: Transparent Object Depth Estimation with Transformer, [Paper], [Code]
  • (arXiv 2022.10) Context-Enhanced Stereo Transformer, [Paper], [Code]
  • (arXiv 2022.11) Hybrid Transformer Based Feature Fusion for Self-Supervised Monocular Depth Estimation, [Paper]
  • (arXiv 2022.11) Lite-Mono: A Lightweight CNN and Transformer Architecture for Self-Supervised Monocular Depth Estimation, [Paper], [Code]
  • (arXiv 2022.12) Event-based Monocular Dense Depth Estimation with Recurrent Transformers, [Paper]
  • (arXiv 2022.12) ROIFormer: Semantic-Aware Region of Interest Transformer for Efficient Self-Supervised Monocular Depth Estimation, [Paper]
  • (arXiv 2023.01) Dyna-DepthFormer: Multi-frame Transformer for Self-Supervised Depth Estimation in Dynamic Scenes, [Paper]
  • (arXiv 2023.01) SwinDepth: Unsupervised Depth Estimation using Monocular Sequences via Swin Transformer and Densely Cascaded Network, [Paper]
  • (arXiv 2023.02) URCDC-Depth: Uncertainty Rectified Cross-Distillation with CutFlip for Monocular Depth Estimation, [Paper], [Code]
  • (arXiv 2023.03) STDepthFormer: Predicting Spatio-temporal Depth from Video with a Self-supervised Transformer Model, [Paper], [Code]
  • (arXiv 2023.03) DwinFormer: Dual Window Transformers for End-to-End Monocular Depth Estimation, [Paper]
  • (arXiv 2023.03) DEHRFormer: Real-time Transformer for Depth Estimation and Haze Removal from Varicolored Haze Scenes, [Paper]
  • (arXiv 2023.03) Channel-Aware Distillation Transformer for Depth Estimation on Nano Drones, [Paper]
  • (arXiv 2023.04) EGformer: Equirectangular Geometry-biased Transformer for 360 Depth Estimation, [Paper]
  • (arXiv 2023.04) CompletionFormer: Depth Completion with Convolutions and Vision Transformers, [Paper], [Code]
  • (arXiv 2023.08) Improving Depth Gradient Continuity in Transformers: A Comparative Study on Monocular Depth Estimation with CNN, [Paper]
  • (arXiv 2023.08) Semi-Supervised Semantic Depth Estimation using Symbiotic Transformer and NearFarMix Augmentation, [Paper]
  • (arXiv 2023.09) SQLdepth: Generalizable Self-Supervised Fine-Structured Monocular Depth Estimation, [Paper], [Code]
  • (arXiv 2023.10) GSDC Transformer: An Efficient and Effective Cue Fusion for Monocular Multi-Frame Depth Estimation, [Paper]
  • (arXiv 2023.10) FocDepthFormer: Transformer with LSTM for Depth Estimation from Focus, [Paper]
  • (arXiv 2023.10) Metrically Scaled Monocular Depth Estimation through Sparse Priors for Underwater Robots, [Paper], [Code]
  • (arXiv 2023.12) Transformers in Unsupervised Structure-from-Motion, [Paper], [Code]
  • (arXiv 2024.02) CLIP Can Understand Depth, [Paper]
  • (arXiv 2024.03) Depth Estimation Algorithm Based on Transformer-Encoder and Feature Fusion, [Paper]
  • (arXiv 2024.03) METER: a mobile vision transformer architecture for monocular depth estimation, [Paper]
  • (arXiv 2024.03) SSAP: A Shape-Sensitive Adversarial Patch for Comprehensive Disruption of Monocular Depth Estimation in Autonomous Navigation Applications, [Paper]
  • (arXiv 2024.03) UniDepth: Universal Monocular Metric Depth Estimation, [Paper], [Code]
  • (arXiv 2024.04) WorDepth: Variational Language Prior for Monocular Depth Estimation, [Paper], [Code]
  • (arXiv 2024.04) SGFormer: Spherical Geometry Transformer for 360 Depth Estimation, [Paper]
  • (arXiv 2024.06) ToSA: Token Selective Attention for Efficient Vision Transformers, [Paper]
  • (arXiv 2024.07) Towards Scale-Aware Full Surround Monodepth with Transformers, [Paper]
  • (arXiv 2024.07) UMono: Physical Model Informed Hybrid CNN-Transformer Framework for Underwater Monocular Depth Estimation, [Paper]
  • (arXiv 2024.09) SDformer: Efficient End-to-End Transformer for Depth Completion, [Paper], [Code]
  • (arXiv 2024.09) Depth Matters: Exploring Deep Interactions of RGB-D for Semantic Segmentation in Traffic Scenes, [Paper]