
MAVOS

Efficient Video Object Segmentation via Modulated Cross-Attention Memory


Mohamed bin Zayed University of AI, ETH Zurich, University of California - Merced, Yonsei University, Google Research, Linköping University

[Paper] [Video]

Latest Updates

  • 2024/09/02: MAVOS checkpoints, training, and evaluation code are now available.
  • 2024/08/30: MAVOS has been accepted at WACV 2025! 🎊
  • 2024/03/27: Our technical report on MAVOS has been published on arXiv.

Abstract

Recently, transformer-based approaches have shown promising results for semi-supervised video object segmentation. However, these approaches typically struggle on long videos due to increased GPU memory demands, as they frequently expand the memory bank every few frames. We propose a transformer-based approach, named MAVOS, that introduces an optimized and dynamic long-term modulated cross-attention (MCA) memory to model temporal smoothness without requiring frequent memory expansion. The proposed MCA effectively encodes both local and global features at various levels of granularity while efficiently maintaining consistent speed regardless of the video length. Extensive experiments on multiple benchmarks, LVOS, Long-Time Video, and DAVIS 2017, demonstrate the effectiveness of our proposed contributions leading to real-time inference and markedly reduced memory demands without any degradation in segmentation accuracy on long videos. Compared to the best existing transformer-based approach, our MAVOS increases the speed by 7.6x, while significantly reducing the GPU memory by 87% with comparable segmentation performance on short and long video datasets. Notably on the LVOS dataset, our MAVOS achieves a J&F score of 63.3% while operating at 37 frames per second (FPS) on a single V100 GPU. Our code and models will be publicly released.

Intro

  • MAVOS is a transformer-based VOS method that achieves real-time FPS and reduced GPU memory for long videos.

  • MAVOS increases the speed by 7.6x over the baseline DeAOT, while significantly reducing the GPU memory by 87% on long videos with comparable segmentation performance on short and long video datasets.
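The key idea behind the MCA memory described above is that the long-term memory keeps a fixed size and is updated in place, rather than being expanded every few frames. Below is a minimal, illustrative PyTorch sketch of that fixed-size cross-attention memory idea; it is not the actual MAVOS implementation, and all names (FixedSizeCrossAttentionMemory, the gated write, the slot count) are hypothetical assumptions made for illustration only.

```python
# Illustrative sketch only -- not the official MAVOS implementation.
# Assumptions (hypothetical): a fixed number of memory slots updated in place
# with a gated cross-attention write, and a standard cross-attention read.
from typing import Optional

import torch
import torch.nn as nn


class FixedSizeCrossAttentionMemory(nn.Module):
    def __init__(self, dim: int = 256, num_slots: int = 64, num_heads: int = 8):
        super().__init__()
        # Fixed-size memory: its shape never depends on the number of frames.
        self.init_memory = nn.Parameter(torch.zeros(1, num_slots, dim))
        self.read = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.write = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.gate = nn.Linear(dim, dim)

    def forward(self, frame_tokens: torch.Tensor, memory: Optional[torch.Tensor] = None):
        # frame_tokens: (B, N, C) features of the current frame.
        if memory is None:
            memory = self.init_memory.expand(frame_tokens.size(0), -1, -1)

        # Read: the current frame attends to the long-term memory.
        out, _ = self.read(frame_tokens, memory, memory)

        # Write: modulate the existing slots instead of appending new ones,
        # so the memory bank never grows with video length.
        update, _ = self.write(memory, frame_tokens, frame_tokens)
        memory = memory + torch.sigmoid(self.gate(update)) * update

        return out, memory


if __name__ == "__main__":
    mca = FixedSizeCrossAttentionMemory()
    memory = None
    with torch.no_grad():  # demo only: process many frames at constant cost
        for _ in range(1000):
            frame = torch.randn(1, 24 * 24, 256)  # dummy per-frame tokens
            out, memory = mca(frame, memory)
    print(out.shape, memory.shape)  # (1, 576, 256) and (1, 64, 256)
```

Because the memory tensor has a constant number of slots, both the per-frame compute and the GPU memory stay flat as the video gets longer, which is the property highlighted in the abstract.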

Examples

basket_players.mp4
dancing_girls.mp4

Requirements

  • Python3
  • pytorch >= 1.7.0 and torchvision
  • opencv-python
  • Pillow
  • Pytorch Correlation. We recommend installing it from source instead of using pip:
    git clone https://github.com/ClementPinard/Pytorch-Correlation-extension.git
    cd Pytorch-Correlation-extension
    python setup.py install
    cd -
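As a quick way to confirm the environment is set up, the following snippet imports each requirement and runs the correlation op once on a dummy tensor. This is a hedged check, not part of the official repo; spatial_correlation_sampler is the module installed by Pytorch-Correlation-extension, and depending on how it was built you may need to move the tensors to the GPU.

```python
# Hedged sanity check for the requirements above; adjust to your setup.
import torch
import torchvision
import cv2
import PIL
from spatial_correlation_sampler import SpatialCorrelationSampler  # installed by Pytorch-Correlation-extension

print("torch:", torch.__version__, "| CUDA available:", torch.cuda.is_available())
print("torchvision:", torchvision.__version__, "| opencv:", cv2.__version__, "| Pillow:", PIL.__version__)

# Run the correlation op once on a dummy tensor; use .cuda() tensors if your build is CUDA-only.
corr = SpatialCorrelationSampler(kernel_size=1, patch_size=3, stride=1, padding=0, dilation=1)
x = torch.randn(1, 8, 16, 16)
print("correlation output shape:", corr(x, x).shape)
```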

Model Zoo

Pre-trained models of our project can be found in MODEL_ZOO.md.

Getting Started

  1. Prepare a valid environment following the requirements above.

  2. We use the pre-trained weights of DeAOT-L on static images as the baseline (recommended), so no additional pre-training is needed. If you want to pre-train MAVOS from scratch, prepare the datasets as follows:

    1. Prepare datasets:

      Please follow the instructions below to prepare each dataset in its corresponding folder.

      • Static

        datasets/Static: pre-training dataset with static images. Guidance can be found in AFB-URR, which we followed in our pre-training implementation.

      • YouTube-VOS

        A commonly-used large-scale VOS dataset.

        datasets/YTB/2019: the 2019 version (see download link). The train split is required for training.

      • DAVIS

        A commonly-used small-scale VOS dataset.

        datasets/DAVIS: TrainVal (480p) contains both the training and validation split. Test-Dev (480p) contains the Test-dev split. The full-resolution version is also supported for training and evaluation but not required.

    2. Prepare ImageNet pre-trained encoders

      Select and download the checkpoints below into pretrain_models:

  3. Training: the training script will fine-tune the pre-trained models using 4 GPUs on both YouTube-VOS 2019 train and DAVIS-2017 train, resulting in a model that can generalize to different domains.

  4. Evaluation: the evaluation script will evaluate the models on LVOS, DAVIS, and LTV. The results will be packed into zip files. To compute scores, please use the official LVOS toolkit (for Val) and the DAVIS toolkit (for Val). For the Long-Time Video dataset, use the same DAVIS toolkit and point --davis_path to the long_video frames with the corresponding annotations.
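Before launching training or evaluation, it can help to verify the directory layout. The following sketch only checks for the folders referenced in this README (datasets/Static, datasets/YTB/2019, datasets/DAVIS, and pretrain_models); the exact structure the scripts expect may differ, so treat it as an assumption-laden convenience check rather than part of the official pipeline.

```python
# Illustrative layout check based on the folders referenced in this README.
# The exact structure expected by the training/evaluation scripts may differ.
from pathlib import Path

expected = [
    "datasets/Static",          # pre-training images (only needed when pre-training from scratch)
    "datasets/YTB/2019/train",  # YouTube-VOS 2019 training split
    "datasets/DAVIS",           # DAVIS 2017 TrainVal / Test-Dev (480p)
    "pretrain_models",          # ImageNet pre-trained encoder checkpoints
]

for rel in expected:
    path = Path(rel)
    status = "ok" if path.exists() else "MISSING"
    print(f"{status:>7}  {path}")
```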

Results on Long Video Benchmarks

LVOS val set

LTV

Acknowledgment

Our code base is based on the AOT repository. We thank the authors for their open-source implementation.

The computations were enabled by resources provided by the National Academic Infrastructure for Supercomputing in Sweden (NAISS) at Alvis partially funded by the Swedish Research Council through grant agreement no. 2022-06725, the LUMI supercomputer hosted by CSC (Finland) and the LUMI consortium, and by the Berzelius resource provided by the Knut and Alice Wallenberg Foundation at the National Supercomputer Centre.

Citations

Please consider citing our paper in your publications if it helps your research.


@article{Shaker2024MAVOS,
  title={Efficient Video Object Segmentation via Modulated Cross-Attention Memory},
  author={Shaker, Abdelrahman and Wasim, Syed Talal and Danelljan, Martin and Khan, Salman and Yang, Ming-Hsuan and Khan, Fahad Shahbaz},
  journal={arXiv:2403.17937},
  year={2024}
}

License

This project is released under the BSD-3-Clause license. See LICENSE for additional details.