# TANet

## Introduction

```BibTeX
@article{liu2020tam,
  title={TAM: Temporal Adaptive Module for Video Recognition},
  author={Liu, Zhaoyang and Wang, Limin and Wu, Wayne and Qian, Chen and Lu, Tong},
  journal={arXiv preprint arXiv:2005.06803},
  year={2020}
}
```

## Model Zoo

### Kinetics-400

| config | resolution | gpus | backbone | pretrain | top1 acc | top5 acc | reference top1 acc | reference top5 acc | inference_time (video/s) | gpu_mem (M) | ckpt | log | json |
| :-: | :-: | :-: | :-: | :-: | :-: | :-: | :-: | :-: | :-: | :-: | :-: | :-: | :-: |
| tanet_r50_dense_1x1x8_100e_kinetics400_rgb | short-side 320 | 8 | TANet | ImageNet | 76.28 | 92.60 | 76.22 | 92.53 | x | 7124 | ckpt | log | json |

### Something-Something V1

| config | resolution | gpus | backbone | pretrain | top1 acc (efficient/accurate) | top5 acc (efficient/accurate) | gpu_mem (M) | ckpt | log | json |
| :-: | :-: | :-: | :-: | :-: | :-: | :-: | :-: | :-: | :-: | :-: |
| tanet_r50_1x1x8_50e_sthv1_rgb | height 100 | 8 | TANet | ImageNet | 47.45/49.69 | 76.00/77.62 | 7127 | ckpt | log | json |
| tanet_r50_1x1x16_50e_sthv1_rgb | height 100 | 8 | TANet | ImageNet | 47.73/50.41 | 77.31/78.47 | 7127 | ckpt | log | json |

Notes:

1. The **gpus** column indicates the number of GPUs we used to obtain the checkpoint. Note that the provided configs assume 8 GPUs by default. According to the Linear Scaling Rule, you should set the learning rate proportional to the batch size if you use a different number of GPUs or videos per GPU, e.g., lr=0.01 for 8 GPUs x 8 videos/GPU and lr=0.04 for 16 GPUs x 16 videos/GPU (see the sketch after this list).
2. The **inference_time** is measured with this benchmark script, using the frame-sampling strategy of the test setting. Only the model inference time is counted, excluding IO and pre-processing. For each setting, we use 1 GPU and set the batch size (videos per GPU) to 1.
3. The values in the columns named "reference" are the results obtained by testing on our dataset with the checkpoints provided by the author under the same model settings. The checkpoints for the reference repo can be downloaded here.
4. The validation set of Kinetics-400 we used consists of 19796 videos. These videos are available at Kinetics400-Validation. The corresponding data list (each line is in the format 'video_id, num_frames, label_index') and the label map are also available.
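As a minimal sketch of the Linear Scaling Rule from note 1 (the `scaled_lr` helper and its defaults are illustrative, not part of the repo):

```python
# Minimal sketch of the Linear Scaling Rule in note 1.
# Assumption: the base setting is 8 GPUs x 8 videos/GPU with lr=0.01.
def scaled_lr(num_gpus, videos_per_gpu,
              base_lr=0.01, base_batch_size=8 * 8):
    """Scale the learning rate proportionally to the total batch size."""
    total_batch_size = num_gpus * videos_per_gpu
    return base_lr * total_batch_size / base_batch_size

print(scaled_lr(8, 8))    # 0.01, the default setting
print(scaled_lr(16, 16))  # 0.04, matching the example in note 1
```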

For more details on data preparation, you can refer to the corresponding parts in Data Preparation.

## Train

You can use the following command to train a model.

```shell
python tools/train.py ${CONFIG_FILE} [optional arguments]
```

Example: train the TANet model on the Kinetics-400 dataset with deterministic behavior and periodic validation.

```shell
python tools/train.py configs/recognition/tanet/tanet_r50_dense_1x1x8_100e_kinetics400_rgb.py \
    --work-dir work_dirs/tanet_r50_dense_1x1x8_100e_kinetics400_rgb \
    --validate --seed 0 --deterministic
```
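Before launching a long run, it can be handy to sanity-check that the config actually builds. A minimal sketch, assuming the mmcv `Config` and mmaction2 `build_model` APIs of the 0.x series:

```python
# Sketch: sanity-check a config before launching a long training run.
# Assumes the mmcv `Config` and mmaction2 `build_model` APIs (0.x series).
from mmcv import Config
from mmaction.models import build_model

cfg = Config.fromfile(
    'configs/recognition/tanet/tanet_r50_dense_1x1x8_100e_kinetics400_rgb.py')
model = build_model(cfg.model, train_cfg=cfg.get('train_cfg'),
                    test_cfg=cfg.get('test_cfg'))
print(model.__class__.__name__)  # the recognizer wrapping the TANet backbone
```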

For more details, you can refer to the Training setting part in getting_started.

## Test

You can use the following command to test a model.

```shell
python tools/test.py ${CONFIG_FILE} ${CHECKPOINT_FILE} [optional arguments]
```

Example: test the TANet model on the Kinetics-400 dataset and dump the result to a JSON file.

```shell
python tools/test.py configs/recognition/tanet/tanet_r50_dense_1x1x8_100e_kinetics400_rgb.py \
    checkpoints/SOME_CHECKPOINT.pth --eval top_k_accuracy mean_class_accuracy \
    --out result.json
```
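To inspect the dumped results afterwards, something like the following sketch works, assuming `result.json` holds one list of class scores per test video, in the order of the test annotation file:

```python
# Sketch: inspect the dumped test results. Assumes result.json holds one
# list of class scores per video, ordered as in the test annotation file.
import json

import numpy as np

with open('result.json') as f:
    scores = np.array(json.load(f))    # shape: (num_videos, num_classes)

pred = scores.argmax(axis=1)           # top-1 prediction per video
top5 = scores.argsort(axis=1)[:, -5:]  # top-5 class indices per video
print(pred[:10])
```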

For more details, you can refer to the Test a dataset part in getting_started.