feelvos

No Maintenance Intended · TensorFlow Requirement: 1.x · TensorFlow 2 Not Supported

FEELVOS: Fast End-to-End Embedding Learning for Video Object Segmentation

FEELVOS is a fast model for video object segmentation that does not rely on fine-tuning on the first frame.

For details, please refer to our paper. If you find the code useful, please also consider citing it.

  • FEELVOS:
@inproceedings{feelvos2019,
    title={FEELVOS: Fast End-to-End Embedding Learning for Video Object Segmentation},
    author={Paul Voigtlaender and Yuning Chai and Florian Schroff and Hartwig Adam and Bastian Leibe and Liang-Chieh Chen},
    booktitle={CVPR},
    year={2019}
}

Dependencies

FEELVOS requires a GPU with around 12 GB of memory and depends on the following libraries:

  • TensorFlow
  • Pillow
  • Numpy
  • Scipy
  • scikit-image
  • tf Slim (which is included in the "tensorflow/models/research/" checkout)
  • DeepLab (which is included in the "tensorflow/models/research/" checkout)
  • correlation_cost (optional, see below)

For detailed steps to install TensorFlow, follow the TensorFlow installation instructions. Note that FEELVOS requires TensorFlow 1.x and does not support TensorFlow 2. A typical user can install a compatible TensorFlow using the following command:

pip install 'tensorflow-gpu==1.*'

The remaining libraries can also be installed with pip using:

pip install pillow scipy scikit-image

Dependency on correlation_cost

For fast cross-correlation, we use correlation_cost as an external dependency. By default, FEELVOS falls back to a slow and memory-hungry implementation that does not need correlation_cost. If you care about performance, you should set up correlation_cost by following the instructions in correlation_cost/README and afterwards set USE_CORRELATION_COST = True in utils/embedding_utils.py.
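To see why the fallback is memory-hungry, consider what the cross-correlation computes: distances between every pixel embedding in one frame and every pixel embedding in another. A minimal NumPy sketch (the function name and shapes are illustrative, not the repository's actual API) of this all-pairs computation:

```python
import numpy as np

def pairwise_sq_dist(emb_a, emb_b):
    """All-pairs squared L2 distances between two sets of pixel embeddings.

    emb_a: (N, C) and emb_b: (M, C) arrays of flattened pixel embeddings.
    Returns an (N, M) distance matrix. Memory grows as N * M, i.e.
    quadratically in the number of pixels, which is what correlation_cost
    avoids by restricting the match to a local window around each pixel.
    """
    # Expand ||a - b||^2 = ||a||^2 + ||b||^2 - 2 * a.b
    a_sq = np.sum(emb_a ** 2, axis=1, keepdims=True)    # (N, 1)
    b_sq = np.sum(emb_b ** 2, axis=1, keepdims=True).T  # (1, M)
    cross = emb_a @ emb_b.T                             # (N, M)
    # Clamp tiny negative values caused by floating-point rounding.
    return np.maximum(a_sq + b_sq - 2.0 * cross, 0.0)
```

For a 480p frame, N and M are on the order of 10^5, so the full N × M matrix is prohibitively large in float32; the correlation_cost op sidesteps this by only correlating within a local neighborhood.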

Pre-trained Models

We provide two pre-trained FEELVOS models, both based on Xception-65:

Additionally, we provide a DeepLab checkpoint for Xception-65 pre-trained on ImageNet and COCO, which can be used as an initialization for training FEELVOS.

Pre-computed Segmentation Masks

We provide pre-computed segmentation masks for FEELVOS both for training with and without YouTube-VOS data for the following datasets:

  • DAVIS 2017 validation set
  • DAVIS 2017 test-dev set
  • YouTube-Objects dataset

Local Inference

For a demo of local inference on DAVIS 2017, run

# From tensorflow/models/research/feelvos
sh eval.sh

Local Training

For a demo of local training on DAVIS 2017, run

# From tensorflow/models/research/feelvos
sh train.sh

Contacts (Maintainers)

License

All code in the feelvos folder is covered by the LICENSE under tensorflow/models. Please refer to the LICENSE for details.