[ALGORITHM]
```BibTeX
@inproceedings{lin2019tsm,
  title={TSM: Temporal Shift Module for Efficient Video Understanding},
  author={Lin, Ji and Gan, Chuang and Han, Song},
  booktitle={Proceedings of the IEEE International Conference on Computer Vision},
  year={2019}
}
```
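TSM makes a 2D backbone temporally aware by shifting a fraction of the feature channels along the time axis between neighboring frames, adding temporal modeling at zero extra FLOPs. Below is a minimal PyTorch sketch of that shift operation (the `fold_div=8` split follows the paper; the function itself is illustrative, not the exact mmaction2 implementation):

```python
import torch

def temporal_shift(x, num_segments, fold_div=8):
    """Shift part of the channels along the temporal dimension.

    x: (N * num_segments, C, H, W) features from a 2D CNN.
    1/fold_div of the channels is shifted one step in each temporal
    direction; the remaining channels are left in place.
    """
    nt, c, h, w = x.size()
    n = nt // num_segments
    x = x.view(n, num_segments, c, h, w)
    fold = c // fold_div
    out = torch.zeros_like(x)
    out[:, :-1, :fold] = x[:, 1:, :fold]                  # frame t receives channels from frame t+1
    out[:, 1:, fold:2 * fold] = x[:, :-1, fold:2 * fold]  # frame t receives channels from frame t-1
    out[:, :, 2 * fold:] = x[:, :, 2 * fold:]             # unshifted channels
    return out.view(nt, c, h, w)
```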
[BACKBONE]
```BibTeX
@inproceedings{NonLocal2018,
  author = {Xiaolong Wang and Ross Girshick and Abhinav Gupta and Kaiming He},
  title = {Non-local Neural Networks},
  booktitle = {Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition},
  year = {2018}
}
```
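The non-local variants in the Kinetics-400 table below differ only in the pairwise similarity function (embedded Gaussian, Gaussian, or dot product) inside the non-local block. A simplified sketch of the embedded-Gaussian form, just to fix ideas (class and attribute names are illustrative, not mmaction2's NonLocal3d module):

```python
import torch
import torch.nn as nn

class NonLocalBlock(nn.Module):
    """Embedded-Gaussian non-local block (simplified sketch)."""

    def __init__(self, in_channels, reduction=2):
        super().__init__()
        inter = in_channels // reduction
        self.theta = nn.Conv3d(in_channels, inter, kernel_size=1)  # query embedding
        self.phi = nn.Conv3d(in_channels, inter, kernel_size=1)    # key embedding
        self.g = nn.Conv3d(in_channels, inter, kernel_size=1)      # value embedding
        self.out = nn.Conv3d(inter, in_channels, kernel_size=1)

    def forward(self, x):
        # x: (N, C, T, H, W); attention is computed over all T*H*W positions
        n, c, t, h, w = x.shape
        q = self.theta(x).flatten(2).transpose(1, 2)  # (N, THW, C')
        k = self.phi(x).flatten(2)                    # (N, C', THW)
        v = self.g(x).flatten(2).transpose(1, 2)      # (N, THW, C')
        attn = torch.softmax(q @ k, dim=-1)           # embedded-Gaussian similarity
        y = (attn @ v).transpose(1, 2).reshape(n, -1, t, h, w)
        return x + self.out(y)                        # residual connection
```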
Kinetics-400:

config | resolution | gpus | backbone | pretrain | top1 acc | top5 acc | reference top1 acc | reference top5 acc | inference_time(video/s) | gpu_mem(M) | ckpt | log | json |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
tsm_r50_1x1x8_50e_kinetics400_rgb | 340x256 | 8 | ResNet50 | ImageNet | 70.24 | 89.56 | 70.36 | 89.49 | 74.0 (8x1 frames) | 7079 | ckpt | log | json |
tsm_r50_1x1x8_50e_kinetics400_rgb | short-side 256 | 8 | ResNet50 | ImageNet | 70.59 | 89.52 | x | x | x | 7079 | ckpt | log | json |
tsm_r50_gpu_normalize_1x1x8_50e_kinetics400_rgb | short-side 256 | 8 | ResNet50 | ImageNet | 70.48 | 89.40 | x | x | x | 7076 | ckpt | log | json |
tsm_r50_video_1x1x8_50e_kinetics400_rgb | short-side 256 | 8 | ResNet50 | ImageNet | 70.25 | 89.66 | 70.36 | 89.49 | 74.0 (8x1 frames) | 7077 | ckpt | log | json |
tsm_r50_dense_1x1x8_100e_kinetics400_rgb | 340x256 | 8x4 | ResNet50 | ImageNet | 72.9 | 90.44 | 72.22 | 90.37 | 11.5 (8x10 frames) | 7079 | ckpt | log | json |
tsm_r50_dense_1x1x8_100e_kinetics400_rgb | short-side 256 | 8 | ResNet50 | ImageNet | 73.38 | 91.02 | x | x | x | 7079 | ckpt | log | json |
tsm_r50_1x1x16_50e_kinetics400_rgb | 340x256 | 8 | ResNet50 | ImageNet | 72.09 | 90.37 | 70.67 | 89.98 | 47.0 (16x1 frames) | 10404 | ckpt | log | json |
tsm_r50_1x1x16_50e_kinetics400_rgb | short-side 256 | 8x4 | ResNet50 | ImageNet | 71.89 | 90.73 | x | x | x | 10398 | ckpt | log | json |
tsm_nl_embedded_gaussian_r50_1x1x8_50e_kinetics400_rgb | short-side 320 | 8x4 | ResNet50 | ImageNet | 72.03 | 90.25 | 71.81 | 90.36 | x | 8931 | ckpt | log | json |
tsm_nl_gaussian_r50_1x1x8_50e_kinetics400_rgb | short-side 320 | 8x4 | ResNet50 | ImageNet | 70.70 | 89.90 | x | x | x | 10125 | ckpt | log | json |
tsm_nl_dot_product_r50_1x1x8_50e_kinetics400_rgb | short-side 320 | 8x4 | ResNet50 | ImageNet | 71.60 | 90.34 | x | x | x | 8358 | ckpt | log | json |
tsm_mobilenetv2_dense_1x1x8_100e_kinetics400_rgb | short-side 320 | 8 | MobileNetV2 | ImageNet | 68.46 | 88.64 | x | x | x | 3385 | ckpt | log | json |
Something-Something V1:

config | resolution | gpus | backbone | pretrain | top1 acc (efficient/accurate) | top5 acc (efficient/accurate) | reference top1 acc (efficient/accurate) | reference top5 acc (efficient/accurate) | gpu_mem(M) | ckpt | log | json |
---|---|---|---|---|---|---|---|---|---|---|---|---|
tsm_r50_1x1x8_50e_sthv1_rgb | height 100 | 8 | ResNet50 | ImageNet | 45.58 / 47.70 | 75.02 / 76.12 | 45.50 / 47.33 | 74.34 / 76.60 | 7077 | ckpt | log | json |
tsm_r50_flip_1x1x8_50e_sthv1_rgb | height 100 | 8 | ResNet50 | ImageNet | 47.10 / 48.51 | 76.02 / 77.56 | 45.50 / 47.33 | 74.34 / 76.60 | 7077 | ckpt | log | json |
tsm_r50_1x1x16_50e_sthv1_rgb | height 100 | 8 | ResNet50 | ImageNet | 47.62 / 49.28 | 76.63 / 77.82 | 47.05 / 48.61 | 76.40 / 77.96 | 10390 | ckpt | log | json |
tsm_r101_1x1x8_50e_sthv1_rgb | height 100 | 8 | ResNet101 | ImageNet | 45.72 / 48.43 | 74.67 / 76.72 | 46.64 / 48.13 | 75.40 / 77.31 | 9800 | ckpt | log | json |
Something-Something V2:

config | resolution | gpus | backbone | pretrain | top1 acc (efficient/accurate) | top5 acc (efficient/accurate) | reference top1 acc (efficient/accurate) | reference top5 acc (efficient/accurate) | gpu_mem(M) | ckpt | log | json |
---|---|---|---|---|---|---|---|---|---|---|---|---|
tsm_r50_1x1x8_50e_sthv2_rgb | height 240 | 8 | ResNet50 | ImageNet | 57.86 / 61.12 | 84.67 / 86.26 | 57.98 / 60.69 | 84.57 / 86.28 | 7069 | ckpt | log | json |
tsm_r50_1x1x16_50e_sthv2_rgb | height 240 | 8 | ResNet50 | ImageNet | 59.93 / 62.04 | 86.10 / 87.35 | 58.90 / 60.98 | 85.29 / 86.60 | 10400 | ckpt | log | json |
tsm_r101_1x1x8_50e_sthv2_rgb | height 240 | 8 | ResNet101 | ImageNet | 58.59 / 61.51 | 85.07 / 86.90 | 58.89 / 61.36 | 85.14 / 87.00 | 9784 | ckpt | log | json |
Notes:
- The gpus column indicates the number of GPUs used to train the checkpoint. Note that the provided configs assume 8 GPUs by default. According to the Linear Scaling Rule, you may set the learning rate proportional to the total batch size if you use a different number of GPUs or videos per GPU, e.g., lr=0.01 for 4 GPUs x 2 videos/gpu and lr=0.08 for 16 GPUs x 4 videos/gpu (a short sketch of this scaling appears after the config snippet below).
- The inference_time is obtained with the benchmark script, which uses the frame-sampling strategy of the test setting and measures only model inference time, excluding IO and pre-processing time. For each setting, we use 1 GPU with a batch size (videos per GPU) of 1.
- The values in the columns named "reference" are the results obtained by training with the original repo under the same model settings. The checkpoints for the reference repo can be downloaded here.
- There are two test settings for the Something-Something datasets: the efficient setting (center crop x 1 clip) and the accurate setting (three crops x 2 clips), following the original repo. The config files use the efficient setting by default; it can be switched to the accurate setting as follows:
```python
...
test_cfg = dict(average_clips='prob')
...
test_pipeline = [
    dict(
        type='SampleFrames',
        clip_len=1,
        frame_interval=1,
        num_clips=16,        # `num_clips=8` when using 8 segments
        twice_sample=True,   # set `twice_sample=True` for the accurate setting
        test_mode=True),
    dict(type='RawFrameDecode'),
    dict(type='Resize', scale=(-1, 256)),
    # dict(type='CenterCrop', crop_size=224),  # used in the efficient setting
    dict(type='ThreeCrop', crop_size=256),     # used in the accurate setting
    dict(type='Normalize', **img_norm_cfg),
    dict(type='FormatShape', input_format='NCHW'),
    dict(type='Collect', keys=['imgs', 'label'], meta_keys=[]),
    dict(type='ToTensor', keys=['imgs'])
]
```
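As a quick illustration of the Linear Scaling Rule mentioned in the notes above, here is a hedged sketch of the learning-rate scaling (the helper name is made up for illustration and is not part of mmaction2; the base values are taken from the example in the note):

```python
def scaled_lr(num_gpus, videos_per_gpu, base_lr=0.01, base_total_batch=8):
    """Linear Scaling Rule: lr grows proportionally with the total batch size.

    Illustrative helper only. base_lr=0.01 corresponds to
    4 GPUs x 2 videos/gpu (total batch 8), as in the note above.
    """
    return base_lr * (num_gpus * videos_per_gpu) / base_total_batch

print(scaled_lr(4, 2))    # 0.01, the reference setting
print(scaled_lr(16, 4))   # 0.08, matching the 16 GPUs x 4 videos/gpu example
```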
For more details on data preparation, you can refer to the Kinetics400, Something-Something V1 and Something-Something V2 sections in Data Preparation.
You can use the following command to train a model.
```shell
python tools/train.py ${CONFIG_FILE} [optional arguments]
```
Example: train the TSM model on the Kinetics-400 dataset deterministically, with periodic validation.
```shell
python tools/train.py configs/recognition/tsm/tsm_r50_1x1x8_50e_kinetics400_rgb.py \
    --work-dir work_dirs/tsm_r50_1x1x8_50e_kinetics400_rgb \
    --validate --seed 0 --deterministic
```
For more details, you can refer to the Training setting part in getting_started.
You can use the following command to test a model.
```shell
python tools/test.py ${CONFIG_FILE} ${CHECKPOINT_FILE} [optional arguments]
```
Example: test the TSM model on the Kinetics-400 dataset and dump the result to a JSON file.
```shell
python tools/test.py configs/recognition/tsm/tsm_r50_1x1x8_50e_kinetics400_rgb.py \
    checkpoints/SOME_CHECKPOINT.pth --eval top_k_accuracy mean_class_accuracy \
    --out result.json
```
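For reference, the two metrics requested with `--eval` are standard: top-k accuracy counts a sample as correct if the ground-truth label is among the k highest scores, and mean class accuracy averages the per-class top-1 accuracies. A small NumPy sketch of both (illustrative, not mmaction2's implementation):

```python
import numpy as np

def top_k_accuracy(scores, labels, k=1):
    """Fraction of samples whose true label is among the k highest scores."""
    topk = np.argsort(scores, axis=1)[:, -k:]
    return np.mean([label in row for label, row in zip(labels, topk)])

def mean_class_accuracy(scores, labels):
    """Average of the per-class top-1 accuracies."""
    preds = np.argmax(scores, axis=1)
    classes = np.unique(labels)
    return np.mean([np.mean(preds[labels == c] == c) for c in classes])

# Toy check: 3 samples, 4 classes
scores = np.array([[0.1, 0.6, 0.2, 0.1],
                   [0.3, 0.1, 0.5, 0.1],
                   [0.2, 0.2, 0.2, 0.4]])
labels = np.array([1, 2, 0])
print(top_k_accuracy(scores, labels, k=1))  # 2/3: the third sample is wrong
print(mean_class_accuracy(scores, labels))  # (0 + 1 + 1) / 3
```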
For more details, you can refer to the Test a dataset part in getting_started.