
Action Recognition Models

C3D

Learning Spatiotemporal Features with 3D Convolutional Networks

Abstract

We propose a simple, yet effective approach for spatiotemporal feature learning using deep 3-dimensional convolutional networks (3D ConvNets) trained on a large scale supervised video dataset. Our findings are three-fold: 1) 3D ConvNets are more suitable for spatiotemporal feature learning compared to 2D ConvNets; 2) A homogeneous architecture with small 3x3x3 convolution kernels in all layers is among the best performing architectures for 3D ConvNets; and 3) Our learned features, namely C3D (Convolutional 3D), with a simple linear classifier outperform state-of-the-art methods on 4 different benchmarks and are comparable with current best methods on the other 2 benchmarks. In addition, the features are compact: achieving 52.8% accuracy on UCF101 dataset with only 10 dimensions and also very efficient to compute due to the fast inference of ConvNets. Finally, they are conceptually very simple and easy to train and use.
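
The key architectural finding above is a homogeneous stack of small 3x3x3 kernels. Below is a minimal PyTorch sketch of one such convolution stage; the layer sizes are illustrative and this is not the exact C3D definition used in this repo.

# Minimal sketch of a C3D-style homogeneous 3x3x3 convolution stage.
# Layer sizes are illustrative, not the exact configuration of this repo.
import torch
import torch.nn as nn

class C3DBlock(nn.Module):
    """One Conv3d(3x3x3) + ReLU + spatiotemporal pooling stage."""
    def __init__(self, in_channels, out_channels, pool=(2, 2, 2)):
        super().__init__()
        self.conv = nn.Conv3d(in_channels, out_channels,
                              kernel_size=(3, 3, 3), padding=(1, 1, 1))
        self.relu = nn.ReLU(inplace=True)
        self.pool = nn.MaxPool3d(kernel_size=pool, stride=pool)

    def forward(self, x):  # x: (N, C, T, H, W)
        return self.pool(self.relu(self.conv(x)))

# A 16-frame RGB clip at 112x112, the input size used by C3D.
clip = torch.randn(1, 3, 16, 112, 112)
feat = C3DBlock(3, 64, pool=(1, 2, 2))(clip)  # (1, 64, 16, 56, 56)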

Results and Models

UCF-101

config resolution gpus backbone pretrain top1 acc top5 acc testing protocol inference_time(video/s) gpu_mem(M) ckpt log json
c3d_sports1m_16x1x1_45e_ucf101_rgb.py 128x171 8 c3d sports1m 83.27 95.90 10 clips x 1 crop x 6053 ckpt log json

Note

  1. The author of C3D normalized UCF-101 with volume mean and used SVM to classify videos, while we normalized the dataset with RGB mean value and used a linear classifier.

  2. The gpus column indicates the number of GPUs (32G V100) we used to obtain the checkpoint. Note that the provided configs assume 8 GPUs by default. According to the Linear Scaling Rule, you may set the learning rate proportional to the total batch size if you use a different number of GPUs or videos per GPU, e.g., lr=0.01 for 4 GPUs x 2 videos/GPU and lr=0.08 for 16 GPUs x 4 videos/GPU. A small helper that applies this rule is sketched after these notes.

  3. The inference_time is measured with this benchmark script, using the frame-sampling strategy of the test setting and counting only model inference time, excluding IO and pre-processing time. For each setting, we use 1 GPU with a batch size (videos per GPU) of 1.
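
As referenced in note 2, the Linear Scaling Rule can be applied mechanically. A minimal helper is sketched below; the base values are assumptions that must be read from the config you start from, and the function only rescales them proportionally. It is an illustration, not part of the MMAction2 tooling.

# Minimal helper for the Linear Scaling Rule mentioned in the notes above.
def scale_lr(base_lr, base_gpus, base_videos_per_gpu, gpus, videos_per_gpu):
    base_batch = base_gpus * base_videos_per_gpu
    new_batch = gpus * videos_per_gpu
    return base_lr * new_batch / base_batch

# Consistent with the example in the note: lr=0.01 at 4 GPUs x 2 videos/GPU
# scales to lr=0.08 at 16 GPUs x 4 videos/GPU.
print(scale_lr(0.01, 4, 2, 16, 4))  # 0.08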

For more details on data preparation, you can refer to UCF-101 in Data Preparation.

Train

You can use the following command to train a model.

python tools/train.py ${CONFIG_FILE} [optional arguments]

Example: train the C3D model on the UCF-101 dataset with the deterministic option and periodic validation.

python tools/train.py configs/recognition/c3d/c3d_sports1m_16x1x1_45e_ucf101_rgb.py \
    --validate --seed 0 --deterministic

For more details, you can refer to Training setting part in getting_started.

Test

You can use the following command to test a model.

python tools/test.py ${CONFIG_FILE} ${CHECKPOINT_FILE} [optional arguments]

Example: test C3D model on UCF-101 dataset and dump the result to a json file.

python tools/test.py configs/recognition/c3d/c3d_sports1m_16x1x1_45e_ucf101_rgb.py \
    checkpoints/SOME_CHECKPOINT.pth --eval top_k_accuracy \
    --out result.json

For more details, you can refer to Test a dataset part in getting_started.

Citation

@ARTICLE{2014arXiv1412.0767T,
author = {Tran, Du and Bourdev, Lubomir and Fergus, Rob and Torresani, Lorenzo and Paluri, Manohar},
title = {Learning Spatiotemporal Features with 3D Convolutional Networks},
keywords = {Computer Science - Computer Vision and Pattern Recognition},
year = 2014,
month = dec,
eid = {arXiv:1412.0767}
}

CSN

Video Classification With Channel-Separated Convolutional Networks

Abstract

Group convolution has been shown to offer great computational savings in various 2D convolutional architectures for image classification. It is natural to ask: 1) if group convolution can help to alleviate the high computational cost of video classification networks; 2) what factors matter the most in 3D group convolutional networks; and 3) what are good computation/accuracy trade-offs with 3D group convolutional networks. This paper studies the effects of different design choices in 3D group convolutional networks for video classification. We empirically demonstrate that the amount of channel interactions plays an important role in the accuracy of 3D group convolutional networks. Our experiments suggest two main findings. First, it is a good practice to factorize 3D convolutions by separating channel interactions and spatiotemporal interactions as this leads to improved accuracy and lower computational cost. Second, 3D channel-separated convolutions provide a form of regularization, yielding lower training accuracy but higher test accuracy compared to 3D convolutions. These two empirical findings lead us to design an architecture – Channel-Separated Convolutional Network (CSN) – which is simple, efficient, yet accurate. On Sports1M, Kinetics, and Something-Something, our CSNs are comparable with or better than the state-of-the-art while being 2-3 times more efficient.
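
The factorization described above separates channel interactions from spatiotemporal interactions. Below is a minimal PyTorch sketch of a channel-separated 3D convolution; it is illustrative only and not the ir-/ip-CSN bottleneck implementation of this repo.

# Minimal sketch of a channel-separated 3D convolution: channel interactions
# via a pointwise 1x1x1 conv, spatiotemporal interactions via a depthwise
# (grouped) 3x3x3 conv. Illustrative only.
import torch
import torch.nn as nn

def channel_separated_conv3d(in_channels, out_channels):
    return nn.Sequential(
        # pointwise conv: mixes channels, no spatiotemporal extent
        nn.Conv3d(in_channels, out_channels, kernel_size=1, bias=False),
        # depthwise conv: spatiotemporal filtering, no channel interaction
        nn.Conv3d(out_channels, out_channels, kernel_size=3, padding=1,
                  groups=out_channels, bias=False),
    )

x = torch.randn(1, 64, 8, 56, 56)          # (N, C, T, H, W)
y = channel_separated_conv3d(64, 128)(x)   # (1, 128, 8, 56, 56)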

Results and Models

Kinetics-400

config resolution gpus backbone pretrain top1 acc top5 acc inference_time(video/s) gpu_mem(M) ckpt log json
ircsn_bnfrozen_r50_32x2x1_180e_kinetics400_rgb short-side 320 x ResNet50 None 73.6 91.3 x x ckpt log json
ircsn_ig65m_pretrained_bnfrozen_r50_32x2x1_58e_kinetics400_rgb short-side 320 x ResNet50 IG65M 79.0 94.2 x x infer_ckpt x x
ircsn_bnfrozen_r152_32x2x1_180e_kinetics400_rgb short-side 320 x ResNet152 None 76.5 92.1 x x infer_ckpt x x
ircsn_sports1m_pretrained_bnfrozen_r152_32x2x1_58e_kinetics400_rgb short-side 320 x ResNet152 Sports1M 78.2 93.0 x x infer_ckpt x x
ircsn_ig65m_pretrained_bnfrozen_r152_32x2x1_58e_kinetics400_rgb.py short-side 320 8x4 ResNet152 IG65M 82.76/82.6 95.68/95.3 x 8516 ckpt/infer_ckpt log json
ipcsn_bnfrozen_r152_32x2x1_180e_kinetics400_rgb short-side 320 x ResNet152 None 77.8 92.8 x x infer_ckpt x x
ipcsn_sports1m_pretrained_bnfrozen_r152_32x2x1_58e_kinetics400_rgb short-side 320 x ResNet152 Sports1M 78.8 93.5 x x infer_ckpt x x
ipcsn_ig65m_pretrained_bnfrozen_r152_32x2x1_58e_kinetics400_rgb short-side 320 x ResNet152 IG65M 82.5 95.3 x x infer_ckpt x x
ircsn_ig65m_pretrained_r152_32x2x1_58e_kinetics400_rgb.py short-side 320 8x4 ResNet152 IG65M 80.14 94.93 x 8517 ckpt log json

Note

  1. The gpus column indicates the number of GPUs (32G V100) we used to obtain the checkpoint. Note that the provided configs assume 8 GPUs by default. According to the Linear Scaling Rule, you may set the learning rate proportional to the total batch size if you use a different number of GPUs or videos per GPU, e.g., lr=0.01 for 4 GPUs x 2 videos/GPU and lr=0.08 for 16 GPUs x 4 videos/GPU.

  2. The inference_time is measured with this benchmark script, using the frame-sampling strategy of the test setting and counting only model inference time, excluding IO and pre-processing time. For each setting, we use 1 GPU with a batch size (videos per GPU) of 1.

  3. The validation set of Kinetics400 we used consists of 19796 videos. These videos are available at Kinetics400-Validation. The corresponding data list (each line is of the format ‘video_id, num_frames, label_index’) and the label map are also available.

  4. The infer_ckpt label indicates that the corresponding checkpoint is ported from VMZ.

For more details on data preparation, you can refer to Kinetics400 in Data Preparation.

Train

You can use the following command to train a model.

python tools/train.py ${CONFIG_FILE} [optional arguments]

Example: train the CSN model on the Kinetics-400 dataset with the deterministic option and periodic validation.

python tools/train.py configs/recognition/csn/ircsn_ig65m_pretrained_r152_32x2x1_58e_kinetics400_rgb.py \
    --work-dir work_dirs/ircsn_ig65m_pretrained_r152_32x2x1_58e_kinetics400_rgb \
    --validate --seed 0 --deterministic

For more details, you can refer to Training setting part in getting_started.

Test

You can use the following command to test a model.

python tools/test.py ${CONFIG_FILE} ${CHECKPOINT_FILE} [optional arguments]

Example: test CSN model on Kinetics-400 dataset and dump the result to a json file.

python tools/test.py configs/recognition/csn/ircsn_ig65m_pretrained_r152_32x2x1_58e_kinetics400_rgb.py \
    checkpoints/SOME_CHECKPOINT.pth --eval top_k_accuracy mean_class_accuracy \
    --out result.json --average-clips prob
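
The --average-clips prob option aggregates per-clip class probabilities into one video-level prediction. Below is a minimal sketch of that aggregation; the function name is illustrative and this is not the MMAction2 implementation.

# Conceptual sketch of `--average-clips prob`: clip-level logits are turned
# into probabilities and averaged into a single video-level prediction.
import torch
import torch.nn.functional as F

def average_clips_prob(clip_scores):  # (num_clips, num_classes) logits
    probs = F.softmax(clip_scores, dim=-1)
    video_prob = probs.mean(dim=0)    # average over all test clips/crops
    return video_prob.argmax().item(), video_prob

scores = torch.randn(10 * 3, 400)     # e.g. 10 clips x 3 crops, 400 classes
pred_label, pred_prob = average_clips_prob(scores)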

For more details, you can refer to Test a dataset part in getting_started.

Citation

@inproceedings{inproceedings,
author = {Wang, Heng and Feiszli, Matt and Torresani, Lorenzo},
year = {2019},
month = {10},
pages = {5551-5560},
title = {Video Classification With Channel-Separated Convolutional Networks},
doi = {10.1109/ICCV.2019.00565}
}
@inproceedings{ghadiyaram2019large,
  title={Large-scale weakly-supervised pre-training for video action recognition},
  author={Ghadiyaram, Deepti and Tran, Du and Mahajan, Dhruv},
  booktitle={Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition},
  pages={12046--12055},
  year={2019}
}

I3D

Quo Vadis, Action Recognition? A New Model and the Kinetics Dataset

Non-local Neural Networks

Abstract

The paucity of videos in current action classification datasets (UCF-101 and HMDB-51) has made it difficult to identify good video architectures, as most methods obtain similar performance on existing small-scale benchmarks. This paper re-evaluates state-of-the-art architectures in light of the new Kinetics Human Action Video dataset. Kinetics has two orders of magnitude more data, with 400 human action classes and over 400 clips per class, and is collected from realistic, challenging YouTube videos. We provide an analysis on how current architectures fare on the task of action classification on this dataset and how much performance improves on the smaller benchmark datasets after pre-training on Kinetics. We also introduce a new Two-Stream Inflated 3D ConvNet (I3D) that is based on 2D ConvNet inflation: filters and pooling kernels of very deep image classification ConvNets are expanded into 3D, making it possible to learn seamless spatio-temporal feature extractors from video while leveraging successful ImageNet architecture designs and even their parameters. We show that, after pre-training on Kinetics, I3D models considerably improve upon the state-of-the-art in action classification, reaching 80.9% on HMDB-51 and 98.0% on UCF-101.
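
The inflation idea described above turns pretrained 2D filters into 3D filters by repeating them over time. Below is a minimal PyTorch sketch; it is illustrative and not how this repo builds its I3D backbone.

# Minimal sketch of 2D-to-3D "inflation": a pretrained 2D kernel is stacked
# along a new temporal axis and rescaled, so a temporally constant video
# produces the same activations as the original image model.
import torch
import torchvision

def inflate_conv2d_weight(weight_2d, time_dim):
    # weight_2d: (out_c, in_c, kH, kW) -> (out_c, in_c, T, kH, kW)
    weight_3d = weight_2d.unsqueeze(2).repeat(1, 1, time_dim, 1, 1)
    return weight_3d / time_dim

resnet = torchvision.models.resnet50()
w2d = resnet.conv1.weight.data                 # (64, 3, 7, 7)
w3d = inflate_conv2d_weight(w2d, time_dim=5)   # (64, 3, 5, 7, 7)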

Results and Models

Kinetics-400

config resolution gpus backbone pretrain top1 acc top5 acc inference_time(video/s) gpu_mem(M) ckpt log json
i3d_r50_32x2x1_100e_kinetics400_rgb 340x256 8 ResNet50 ImageNet 72.68 90.78 1.7 (320x3 frames) 5170 ckpt log json
i3d_r50_32x2x1_100e_kinetics400_rgb short-side 256 8 ResNet50 ImageNet 73.27 90.92 x 5170 ckpt log json
i3d_r50_video_32x2x1_100e_kinetics400_rgb short-side 256p 8 ResNet50 ImageNet 72.85 90.75 x 5170 ckpt log json
i3d_r50_dense_32x2x1_100e_kinetics400_rgb 340x256 8x2 ResNet50 ImageNet 72.77 90.57 1.7 (320x3 frames) 5170 ckpt log json
i3d_r50_dense_32x2x1_100e_kinetics400_rgb short-side 256 8 ResNet50 ImageNet 73.48 91.00 x 5170 ckpt log json
i3d_r50_lazy_32x2x1_100e_kinetics400_rgb 340x256 8 ResNet50 ImageNet 72.32 90.72 1.8 (320x3 frames) 5170 ckpt log json
i3d_r50_lazy_32x2x1_100e_kinetics400_rgb short-side 256 8 ResNet50 ImageNet 73.24 90.99 x 5170 ckpt log json
i3d_nl_embedded_gaussian_r50_32x2x1_100e_kinetics400_rgb short-side 256p 8x4 ResNet50 ImageNet 74.71 91.81 x 6438 ckpt log json
i3d_nl_gaussian_r50_32x2x1_100e_kinetics400_rgb short-side 256p 8x4 ResNet50 ImageNet 73.37 91.26 x 4944 ckpt log json
i3d_nl_dot_product_r50_32x2x1_100e_kinetics400_rgb short-side 256p 8x4 ResNet50 ImageNet 73.92 91.59 x 4832 ckpt log json

Note

  1. The gpus column indicates the number of GPUs we used to obtain the checkpoint. Note that the provided configs assume 8 GPUs by default. According to the Linear Scaling Rule, you may set the learning rate proportional to the total batch size if you use a different number of GPUs or videos per GPU, e.g., lr=0.01 for 4 GPUs x 2 videos/GPU and lr=0.08 for 16 GPUs x 4 videos/GPU.

  2. The inference_time is measured with this benchmark script, using the frame-sampling strategy of the test setting and counting only model inference time, excluding IO and pre-processing time. For each setting, we use 1 GPU with a batch size (videos per GPU) of 1; a minimal timing sketch illustrating this setup is given after these notes.

  3. The validation set of Kinetics400 we used consists of 19796 videos. These videos are available at Kinetics400-Validation. The corresponding data list (each line is of the format ‘video_id, num_frames, label_index’) and the label map are also available.
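
As referenced in note 2, the following is a minimal timing sketch (single GPU, batch size 1, IO and pre-processing excluded); it is an illustration, not the actual benchmark script.

# Illustrative timing loop: 1 GPU, batch size 1, forward pass only.
import time
import torch

@torch.no_grad()
def measure_inference_speed(model, dummy_clip, warmup=10, iters=100):
    model.eval().cuda()
    dummy_clip = dummy_clip.cuda()
    for _ in range(warmup):               # warm up CUDA kernels
        model(dummy_clip)
    torch.cuda.synchronize()
    start = time.time()
    for _ in range(iters):
        model(dummy_clip)
    torch.cuda.synchronize()
    return iters / (time.time() - start)  # videos per second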

For more details on data preparation, you can refer to Kinetics400 in Data Preparation.

Train

You can use the following command to train a model.

python tools/train.py ${CONFIG_FILE} [optional arguments]

Example: train the I3D model on the Kinetics-400 dataset with the deterministic option and periodic validation.

python tools/train.py configs/recognition/i3d/i3d_r50_32x2x1_100e_kinetics400_rgb.py \
    --work-dir work_dirs/i3d_r50_32x2x1_100e_kinetics400_rgb \
    --validate --seed 0 --deterministic

For more details, you can refer to Training setting part in getting_started.

Test

You can use the following command to test a model.

python tools/test.py ${CONFIG_FILE} ${CHECKPOINT_FILE} [optional arguments]

Example: test I3D model on Kinetics-400 dataset and dump the result to a json file.

python tools/test.py configs/recognition/i3d/i3d_r50_32x2x1_100e_kinetics400_rgb.py \
    checkpoints/SOME_CHECKPOINT.pth --eval top_k_accuracy mean_class_accuracy \
    --out result.json --average-clips prob

For more details, you can refer to Test a dataset part in getting_started.

Citation

@inproceedings{inproceedings,
  author = {Carreira, J. and Zisserman, Andrew},
  year = {2017},
  month = {07},
  pages = {4724-4733},
  title = {Quo Vadis, Action Recognition? A New Model and the Kinetics Dataset},
  doi = {10.1109/CVPR.2017.502}
}
@article{NonLocal2018,
  author =   {Xiaolong Wang and Ross Girshick and Abhinav Gupta and Kaiming He},
  title =    {Non-local Neural Networks},
  journal =  {CVPR},
  year =     {2018}
}

Omni-sourced Webly-supervised Learning for Video Recognition

Omni-sourced Webly-supervised Learning for Video Recognition

Abstract

We introduce OmniSource, a novel framework for leveraging web data to train video recognition models. OmniSource overcomes the barriers between data formats, such as images, short videos, and long untrimmed videos for webly-supervised learning. First, data samples with multiple formats, curated by task-specific data collection and automatically filtered by a teacher model, are transformed into a unified form. Then a joint-training strategy is proposed to deal with the domain gaps between multiple data sources and formats in webly-supervised learning. Several good practices, including data balancing, resampling, and cross-dataset mixup are adopted in joint training. Experiments show that by utilizing data from multiple sources and formats, OmniSource is more data-efficient in training. With only 3.5M images and 800K minutes of videos crawled from the internet without human labeling (less than 2% of prior works), our models learned with OmniSource improve the Top-1 accuracy of 2D- and 3D-ConvNet baseline models by 3.0% and 3.9%, respectively, on the Kinetics-400 benchmark. With OmniSource, we establish new records with different pretraining strategies for video recognition. Our best models achieve 80.4%, 80.5%, and 83.6% Top-1 accuracy on the Kinetics-400 benchmark respectively for training-from-scratch, ImageNet pre-training and IG-65M pre-training.
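
Among the joint-training practices listed above, cross-dataset mixup is the most self-contained. Below is a generic mixup sketch on a pair of clips from different sources; it is illustrative and not the OmniSource training code.

# Generic mixup of two clips and their labels; lam is drawn from a Beta
# distribution. Illustrative only.
import torch
import torch.nn.functional as F

def mixup(clip_a, label_a, clip_b, label_b, num_classes, alpha=0.2):
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    mixed_clip = lam * clip_a + (1.0 - lam) * clip_b
    mixed_label = lam * F.one_hot(label_a, num_classes).float() \
        + (1.0 - lam) * F.one_hot(label_b, num_classes).float()
    return mixed_clip, mixed_label

clip_a = torch.randn(3, 8, 224, 224)   # (C, T, H, W) clip from one source
clip_b = torch.randn(3, 8, 224, 224)   # clip from another source
mixed, target = mixup(clip_a, torch.tensor(3), clip_b, torch.tensor(7), 400)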

Results and Models

Kinetics-400 Model Release

We have currently released 4 models trained with the OmniSource framework, covering both 2D and 3D architectures. The following table compares the performance of models trained with and without OmniSource.

Model Modality Pretrained Backbone Input Resolution Top-1 (Baseline / OmniSource (Delta)) Top-5 (Baseline / OmniSource (Delta))) Download
TSN RGB ImageNet ResNet50 3seg 340x256 70.6 / 73.6 (+ 3.0) 89.4 / 91.0 (+ 1.6) Baseline / OmniSource
TSN RGB IG-1B ResNet50 3seg short-side 320 73.1 / 75.7 (+ 2.6) 90.4 / 91.9 (+ 1.5) Baseline / OmniSource
SlowOnly RGB Scratch ResNet50 4x16 short-side 320 72.9 / 76.8 (+ 3.9) 90.9 / 92.5 (+ 1.6) Baseline / OmniSource
SlowOnly RGB Scratch ResNet101 8x8 short-side 320 76.5 / 80.4 (+ 3.9) 92.7 / 94.4 (+ 1.7) Baseline / OmniSource
  1. The validation set of Kinetics400 we used consists of 19796 videos. These videos are available at Kinetics400-Validation. The corresponding data list (each line is of the format ‘video_id, num_frames, label_index’) and the label map are also available.

Benchmark on Mini-Kinetics

We release a subset of the web dataset used in the OmniSource paper. Specifically, we release the web data for the 200 classes of Mini-Kinetics. The statistics of those datasets are detailed in preparing_omnisource. To obtain the data, you need to fill in a data request form. After we receive your request, the download link will be sent to you. For more details on the released OmniSource web dataset, please refer to preparing_omnisource.

We benchmark the OmniSource framework on the released subset; the results are listed in the following tables (we report Top-1 and Top-5 accuracy on the Mini-Kinetics validation set). The benchmark can be used as a baseline for video recognition with web data.

TSN-8seg-ResNet50

Model Modality Pretrained Backbone Input Resolution top1 acc top5 acc ckpt json log
tsn_r50_1x1x8_100e_minikinetics_rgb RGB ImageNet ResNet50 3seg short-side 320 77.4 93.6 ckpt json log
tsn_r50_1x1x8_100e_minikinetics_googleimage_rgb RGB ImageNet ResNet50 3seg short-side 320 78.0 93.6 ckpt json log
tsn_r50_1x1x8_100e_minikinetics_webimage_rgb RGB ImageNet ResNet50 3seg short-side 320 78.6 93.6 ckpt json log
tsn_r50_1x1x8_100e_minikinetics_insvideo_rgb RGB ImageNet ResNet50 3seg short-side 320 80.6 95.0 ckpt json log
tsn_r50_1x1x8_100e_minikinetics_kineticsraw_rgb RGB ImageNet ResNet50 3seg short-side 320 78.6 93.2 ckpt json log
tsn_r50_1x1x8_100e_minikinetics_omnisource_rgb RGB ImageNet ResNet50 3seg short-side 320 81.3 94.8 ckpt json log

SlowOnly-8x8-ResNet50

Model Modality Pretrained Backbone Input Resolution top1 acc top5 acc ckpt json log
slowonly_r50_8x8x1_256e_minikinetics_rgb RGB None ResNet50 8x8 short-side 320 78.6 93.9 ckpt json log
slowonly_r50_8x8x1_256e_minikinetics_googleimage_rgb RGB None ResNet50 8x8 short-side 320 80.8 95.0 ckpt json log
slowonly_r50_8x8x1_256e_minikinetics_webimage_rgb RGB None ResNet50 8x8 short-side 320 81.3 95.2 ckpt json log
slowonly_r50_8x8x1_256e_minikinetics_insvideo_rgb RGB None ResNet50 8x8 short-side 320 82.4 95.6 ckpt json log
slowonly_r50_8x8x1_256e_minikinetics_kineticsraw_rgb RGB None ResNet50 8x8 short-side 320 80.3 94.5 ckpt json log
slowonly_r50_8x8x1_256e_minikinetics_omnisource_rgb RGB None ResNet50 8x8 short-side 320 82.9 95.8 ckpt json log

For comparison, we also list the benchmark from the original paper, which was run on Kinetics-400:

Model Baseline +GG-img +[GG-IG]-img +IG-vid +KRaw OmniSource
TSN-3seg-ResNet50 70.6 / 89.4 71.5 / 89.5 72.0 / 90.0 72.0 / 90.3 71.7 / 89.6 73.6 / 91.0
SlowOnly-4x16-ResNet50 73.8 / 90.9 74.5 / 91.4 75.2 / 91.6 75.2 / 91.7 74.5 / 91.1 76.6 / 92.5

Citation

@article{duan2020omni,
  title={Omni-sourced Webly-supervised Learning for Video Recognition},
  author={Duan, Haodong and Zhao, Yue and Xiong, Yuanjun and Liu, Wentao and Lin, Dahua},
  journal={arXiv preprint arXiv:2003.13042},
  year={2020}
}

R2plus1D

A closer look at spatiotemporal convolutions for action recognition

Abstract

In this paper we discuss several forms of spatiotemporal convolutions for video analysis and study their effects on action recognition. Our motivation stems from the observation that 2D CNNs applied to individual frames of the video have remained solid performers in action recognition. In this work we empirically demonstrate the accuracy advantages of 3D CNNs over 2D CNNs within the framework of residual learning. Furthermore, we show that factorizing the 3D convolutional filters into separate spatial and temporal components yields significant advantages in accuracy. Our empirical study leads to the design of a new spatiotemporal convolutional block “R(2+1)D” which gives rise to CNNs that achieve results comparable or superior to the state-of-the-art on Sports-1M, Kinetics, UCF101 and HMDB51.
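
Below is a minimal PyTorch sketch of the (2+1)D factorization described above: a 1xdxd spatial convolution followed by a tx1x1 temporal convolution. The intermediate width mid_c is a free choice here; the paper selects it so the parameter count matches the full 3D convolution. This is not the R(2+1)D block definition used in this repo.

# Minimal (2+1)D convolution: spatial 2D conv, then temporal 1D conv.
import torch
import torch.nn as nn

def r2plus1d_conv(in_c, out_c, mid_c, t=3, d=3):
    return nn.Sequential(
        nn.Conv3d(in_c, mid_c, kernel_size=(1, d, d),
                  padding=(0, d // 2, d // 2), bias=False),
        nn.BatchNorm3d(mid_c),
        nn.ReLU(inplace=True),
        nn.Conv3d(mid_c, out_c, kernel_size=(t, 1, 1),
                  padding=(t // 2, 0, 0), bias=False),
    )

x = torch.randn(1, 64, 8, 56, 56)          # (N, C, T, H, W)
y = r2plus1d_conv(64, 64, mid_c=144)(x)    # (1, 64, 8, 56, 56)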

Results and Models

Kinetics-400

config resolution gpus backbone pretrain top1 acc top5 acc inference_time(video/s) gpu_mem(M) ckpt log json
r2plus1d_r34_8x8x1_180e_kinetics400_rgb short-side 256 8x4 ResNet34 None 67.30 87.65 x 5019 ckpt log json
r2plus1d_r34_video_8x8x1_180e_kinetics400_rgb short-side 256 8 ResNet34 None 67.3 87.8 x 5019 ckpt log json
r2plus1d_r34_8x8x1_180e_kinetics400_rgb short-side 320 8x2 ResNet34 None 68.68 88.36 1.6 (80x3 frames) 5019 ckpt log json
r2plus1d_r34_32x2x1_180e_kinetics400_rgb short-side 320 8x2 ResNet34 None 74.60 91.59 0.5 (320x3 frames) 12975 ckpt log json

Note

  1. The gpus column indicates the number of GPUs we used to obtain the checkpoint. Note that the provided configs assume 8 GPUs by default. According to the Linear Scaling Rule, you may set the learning rate proportional to the total batch size if you use a different number of GPUs or videos per GPU, e.g., lr=0.01 for 4 GPUs x 2 videos/GPU and lr=0.08 for 16 GPUs x 4 videos/GPU.

  2. The inference_time is measured with this benchmark script, using the frame-sampling strategy of the test setting and counting only model inference time, excluding IO and pre-processing time. For each setting, we use 1 GPU with a batch size (videos per GPU) of 1.

  3. The validation set of Kinetics400 we used consists of 19796 videos. These videos are available at Kinetics400-Validation. The corresponding data list (each line is of the format ‘video_id, num_frames, label_index’) and the label map are also available.

For more details on data preparation, you can refer to Kinetics400 in Data Preparation.

Train

You can use the following command to train a model.

python tools/train.py ${CONFIG_FILE} [optional arguments]

Example: train the R(2+1)D model on the Kinetics-400 dataset with the deterministic option and periodic validation.

python tools/train.py configs/recognition/r2plus1d/r2plus1d_r34_8x8x1_180e_kinetics400_rgb.py \
    --work-dir work_dirs/r2plus1d_r34_3d_8x8x1_180e_kinetics400_rgb \
    --validate --seed 0 --deterministic

For more details, you can refer to Training setting part in getting_started.

Test

You can use the following command to test a model.

python tools/test.py ${CONFIG_FILE} ${CHECKPOINT_FILE} [optional arguments]

Example: test R(2+1)D model on Kinetics-400 dataset and dump the result to a json file.

python tools/test.py configs/recognition/r2plus1d/r2plus1d_r34_8x8x1_180e_kinetics400_rgb.py \
    checkpoints/SOME_CHECKPOINT.pth --eval top_k_accuracy mean_class_accuracy \
    --out result.json --average-clips=prob

For more details, you can refer to Test a dataset part in getting_started.

Citation

@inproceedings{tran2018closer,
  title={A closer look at spatiotemporal convolutions for action recognition},
  author={Tran, Du and Wang, Heng and Torresani, Lorenzo and Ray, Jamie and LeCun, Yann and Paluri, Manohar},
  booktitle={Proceedings of the IEEE conference on Computer Vision and Pattern Recognition},
  pages={6450--6459},
  year={2018}
}

SlowFast

SlowFast Networks for Video Recognition

Abstract

We present SlowFast networks for video recognition. Our model involves (i) a Slow pathway, operating at low frame rate, to capture spatial semantics, and (ii) a Fast pathway, operating at high frame rate, to capture motion at fine temporal resolution. The Fast pathway can be made very lightweight by reducing its channel capacity, yet can learn useful temporal information for video recognition. Our models achieve strong performance for both action classification and detection in video, and large improvements are pin-pointed as contributions by our SlowFast concept. We report state-of-the-art accuracy on major video recognition benchmarks, Kinetics, Charades and AVA.
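
Below is a minimal sketch of the two-pathway sampling described above: from the same clip, the Fast pathway keeps all frames while the Slow pathway subsamples them by a factor alpha. Channel widths and the lateral connections of the real SlowFast network are omitted.

# Split one clip into Slow and Fast pathway inputs. Illustrative only.
import torch

def split_pathways(frames, alpha=8):
    # frames: (N, C, T, H, W), sampled at the Fast pathway's frame rate
    fast = frames
    slow = frames[:, :, ::alpha]   # keep every alpha-th frame
    return slow, fast

clip = torch.randn(1, 3, 32, 224, 224)
slow, fast = split_pathways(clip)  # slow: (1, 3, 4, ...), fast: (1, 3, 32, ...)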

Results and Models

Kinetics-400

config resolution gpus backbone pretrain top1 acc top5 acc inference_time(video/s) gpu_mem(M) ckpt log json
slowfast_r50_4x16x1_256e_kinetics400_rgb short-side 256 8x4 ResNet50 None 74.75 91.73 x 6203 ckpt log json
slowfast_r50_video_4x16x1_256e_kinetics400_rgb short-side 256 8 ResNet50 None 73.95 91.50 x 6203 ckpt log json
slowfast_r50_4x16x1_256e_kinetics400_rgb short-side 320 8x2 ResNet50 None 76.0 92.54 1.6 ((32+4)x10x3 frames) 6203 ckpt log json
slowfast_prebn_r50_4x16x1_256e_kinetics400_rgb short-side 320 8x2 ResNet50 None 76.34 92.67 x 6203 ckpt log json
slowfast_r50_8x8x1_256e_kinetics400_rgb short-side 320 8x3 ResNet50 None 76.94 92.8 1.3 ((32+8)x10x3 frames) 9062 ckpt log json
slowfast_r50_8x8x1_256e_kinetics400_rgb_steplr short-side 320 8x4 ResNet50 None 76.34 92.61 9062 ckpt log json
slowfast_multigrid_r50_8x8x1_358e_kinetics400_rgb short-side 320 8x2 ResNet50 None 76.07 92.21 x 9062 ckpt log json
slowfast_prebn_r50_8x8x1_256e_kinetics400_rgb_steplr short-side 320 8x4 ResNet50 None 76.58 92.85 9062 ckpt log json
slowfast_r101_r50_4x16x1_256e_kinetics400_rgb short-side 256 8x1 ResNet101 + ResNet50 None 76.69 93.07 16628 ckpt log json
slowfast_r101_8x8x1_256e_kinetics400_rgb short-side 256 8x4 ResNet101 None 77.90 93.51 25994 ckpt log json
slowfast_r152_r50_4x16x1_256e_kinetics400_rgb short-side 256 8x1 ResNet152 + ResNet50 None 77.13 93.20 10077 ckpt log json

Something-Something V1

config resolution gpus backbone pretrain top1 acc top5 acc inference_time(video/s) gpu_mem(M) ckpt log json
slowfast_r50_16x8x1_22e_sthv1_rgb height 100 8 ResNet50 Kinetics400 49.67 79.00 x 9293 ckpt log json

Note

  1. The gpus column indicates the number of GPUs we used to obtain the checkpoint. Note that the provided configs assume 8 GPUs by default. According to the Linear Scaling Rule, you may set the learning rate proportional to the total batch size if you use a different number of GPUs or videos per GPU, e.g., lr=0.01 for 4 GPUs x 2 videos/GPU and lr=0.08 for 16 GPUs x 4 videos/GPU.

  2. The inference_time is measured with this benchmark script, using the frame-sampling strategy of the test setting and counting only model inference time, excluding IO and pre-processing time. For each setting, we use 1 GPU with a batch size (videos per GPU) of 1.

  3. The validation set of Kinetics400 we used consists of 19796 videos. These videos are available at Kinetics400-Validation. The corresponding data list (each line is of the format ‘video_id, num_frames, label_index’) and the label map are also available.

For more details on data preparation, you can refer to Kinetics400 in Data Preparation.

Train

You can use the following command to train a model.

python tools/train.py ${CONFIG_FILE} [optional arguments]

Example: train the SlowFast model on the Kinetics-400 dataset with the deterministic option and periodic validation.

python tools/train.py configs/recognition/slowfast/slowfast_r50_4x16x1_256e_kinetics400_rgb.py \
    --work-dir work_dirs/slowfast_r50_4x16x1_256e_kinetics400_rgb \
    --validate --seed 0 --deterministic

For more details, you can refer to Training setting part in getting_started.

Test

You can use the following command to test a model.

python tools/test.py ${CONFIG_FILE} ${CHECKPOINT_FILE} [optional arguments]

Example: test SlowFast model on Kinetics-400 dataset and dump the result to a json file.

python tools/test.py configs/recognition/slowfast/slowfast_r50_4x16x1_256e_kinetics400_rgb.py \
    checkpoints/SOME_CHECKPOINT.pth --eval top_k_accuracy mean_class_accuracy \
    --out result.json --average-clips=prob

For more details, you can refer to Test a dataset part in getting_started.

Citation

@inproceedings{feichtenhofer2019slowfast,
  title={Slowfast networks for video recognition},
  author={Feichtenhofer, Christoph and Fan, Haoqi and Malik, Jitendra and He, Kaiming},
  booktitle={Proceedings of the IEEE international conference on computer vision},
  pages={6202--6211},
  year={2019}
}

SlowOnly

Slowfast networks for video recognition

Abstract

We present SlowFast networks for video recognition. Our model involves (i) a Slow pathway, operating at low frame rate, to capture spatial semantics, and (ii) a Fast pathway, operating at high frame rate, to capture motion at fine temporal resolution. The Fast pathway can be made very lightweight by reducing its channel capacity, yet can learn useful temporal information for video recognition. Our models achieve strong performance for both action classification and detection in video, and large improvements are pin-pointed as contributions by our SlowFast concept. We report state-of-the-art accuracy on major video recognition benchmarks, Kinetics, Charades and AVA.

Results and Models

Kinetics-400

config resolution gpus backbone pretrain top1 acc top5 acc inference_time(video/s) gpu_mem(M) ckpt log json
slowonly_r50_4x16x1_256e_kinetics400_rgb short-side 256 8x4 ResNet50 None 72.76 90.51 x 3168 ckpt log json
slowonly_r50_video_4x16x1_256e_kinetics400_rgb short-side 320 8x2 ResNet50 None 72.90 90.82 x 8472 ckpt log json
slowonly_r50_8x8x1_256e_kinetics400_rgb short-side 256 8x4 ResNet50 None 74.42 91.49 x 5820 ckpt log json
slowonly_r50_4x16x1_256e_kinetics400_rgb short-side 320 8x2 ResNet50 None 73.02 90.77 4.0 (40x3 frames) 3168 ckpt log json
slowonly_r50_8x8x1_256e_kinetics400_rgb short-side 320 8x3 ResNet50 None 74.93 91.92 2.3 (80x3 frames) 5820 ckpt log json
slowonly_imagenet_pretrained_r50_4x16x1_150e_kinetics400_rgb short-side 320 8x2 ResNet50 ImageNet 73.39 91.12 x 3168 ckpt log json
slowonly_imagenet_pretrained_r50_8x8x1_150e_kinetics400_rgb short-side 320 8x4 ResNet50 ImageNet 75.55 92.04 x 5820 ckpt log json
slowonly_nl_embedded_gaussian_r50_4x16x1_150e_kinetics400_rgb short-side 320 8x2 ResNet50 ImageNet 74.54 91.73 x 4435 ckpt log json
slowonly_nl_embedded_gaussian_r50_8x8x1_150e_kinetics400_rgb short-side 320 8x4 ResNet50 ImageNet 76.07 92.42 x 8895 ckpt log json
slowonly_r50_4x16x1_256e_kinetics400_flow short-side 320 8x2 ResNet50 ImageNet 61.79 83.62 x 8450 ckpt log json
slowonly_r50_8x8x1_196e_kinetics400_flow short-side 320 8x4 ResNet50 ImageNet 65.76 86.25 x 8455 ckpt log json

Kinetics-400 Data Benchmark

In the data benchmark, we compare three different data preprocessing methods: (1) resizing videos to 340x256, (2) resizing the short edge of videos to 320px, and (3) resizing the short edge of videos to 256px.

config resolution gpus backbone Input pretrain top1 acc top5 acc testing protocol ckpt log json
slowonly_r50_randomresizedcrop_340x256_4x16x1_256e_kinetics400_rgb 340x256 8x2 ResNet50 4x16 None 71.61 90.05 10 clips x 3 crops ckpt log json
slowonly_r50_randomresizedcrop_320p_4x16x1_256e_kinetics400_rgb short-side 320 8x2 ResNet50 4x16 None 73.02 90.77 10 clips x 3 crops ckpt log json
slowonly_r50_randomresizedcrop_256p_4x16x1_256e_kinetics400_rgb short-side 256 8x4 ResNet50 4x16 None 72.76 90.51 10 clips x 3 crops ckpt log json

Kinetics-400 OmniSource Experiments

config resolution backbone pretrain w. OmniSource top1 acc top5 acc ckpt log json
slowonly_r50_4x16x1_256e_kinetics400_rgb short-side 320 ResNet50 None :x: 73.0 90.8 ckpt log json
x x ResNet50 None :heavy_check_mark: 76.8 92.5 ckpt x x
slowonly_r101_8x8x1_196e_kinetics400_rgb x ResNet101 None :x: 76.5 92.7 ckpt x x
x x ResNet101 None :heavy_check_mark: 80.4 94.4 ckpt x x

Kinetics-600

config resolution gpus backbone pretrain top1 acc top5 acc ckpt log json
slowonly_r50_video_8x8x1_256e_kinetics600_rgb short-side 256 8x4 ResNet50 None 77.5 93.7 ckpt log json

Kinetics-700

config resolution gpus backbone pretrain top1 acc top5 acc ckpt log json
slowonly_r50_video_8x8x1_256e_kinetics700_rgb short-side 256 8x4 ResNet50 None 65.0 86.1 ckpt log json

GYM99

config resolution gpus backbone pretrain top1 acc mean class acc ckpt log json
slowonly_imagenet_pretrained_r50_4x16x1_120e_gym99_rgb short-side 256 8x2 ResNet50 ImageNet 79.3 70.2 ckpt log json
slowonly_k400_pretrained_r50_4x16x1_120e_gym99_flow short-side 256 8x2 ResNet50 Kinetics 80.3 71.0 ckpt log json
1: 1 Fusion 83.7 74.8
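
The last row above reports a 1:1 fusion of the RGB and Flow models. Below is a minimal sketch of such score-level late fusion; it is illustrative and not the evaluation code used to produce that number.

# Equal-weight late fusion of RGB and Flow predictions. Illustrative only.
import torch
import torch.nn.functional as F

def late_fusion(rgb_scores, flow_scores, w_rgb=0.5, w_flow=0.5):
    # scores: (num_videos, num_classes) logits from each modality
    fused = w_rgb * F.softmax(rgb_scores, dim=-1) \
        + w_flow * F.softmax(flow_scores, dim=-1)
    return fused.argmax(dim=-1)

rgb = torch.randn(4, 99)    # 4 videos, 99 GYM99 classes
flow = torch.randn(4, 99)
pred = late_fusion(rgb, flow)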

Jester

config resolution gpus backbone pretrain top1 acc ckpt log json
slowonly_imagenet_pretrained_r50_8x8x1_64e_jester_rgb height 100 8 ResNet50 ImageNet 97.2 ckpt log json

HMDB51

config gpus backbone pretrain top1 acc top5 acc gpu_mem(M) ckpt log json
slowonly_imagenet_pretrained_r50_8x4x1_64e_hmdb51_rgb 8 ResNet50 ImageNet 37.52 71.50 5812 ckpt log json
slowonly_k400_pretrained_r50_8x4x1_40e_hmdb51_rgb 8 ResNet50 Kinetics400 65.95 91.05 5812 ckpt log json

UCF101

config gpus backbone pretrain top1 acc top5 acc gpu_mem(M) ckpt log json
slowonly_imagenet_pretrained_r50_8x4x1_64e_ucf101_rgb 8 ResNet50 ImageNet 71.35 89.35 5812 ckpt log json
slowonly_k400_pretrained_r50_8x4x1_40e_ucf101_rgb 8 ResNet50 Kinetics400 92.78 99.42 5812 ckpt log json

Something-Something V1

config gpus backbone pretrain top1 acc top5 acc gpu_mem(M) ckpt log json
slowonly_imagenet_pretrained_r50_8x4x1_64e_sthv1_rgb 8 ResNet50 ImageNet 47.76 77.49 7759 ckpt log json

Note

  1. The gpus column indicates the number of GPUs we used to obtain the checkpoint. Note that the provided configs assume 8 GPUs by default. According to the Linear Scaling Rule, you may set the learning rate proportional to the total batch size if you use a different number of GPUs or videos per GPU, e.g., lr=0.01 for 4 GPUs x 2 videos/GPU and lr=0.08 for 16 GPUs x 4 videos/GPU.

  2. The inference_time is measured with this benchmark script, using the frame-sampling strategy of the test setting and counting only model inference time, excluding IO and pre-processing time. For each setting, we use 1 GPU with a batch size (videos per GPU) of 1.

  3. The validation set of Kinetics400 we used consists of 19796 videos. These videos are available at Kinetics400-Validation. The corresponding data list (each line is of the format ‘video_id, num_frames, label_index’) and the label map are also available.

For more details on data preparation, you can refer to corresponding parts in Data Preparation.

Train

You can use the following command to train a model.

python tools/train.py ${CONFIG_FILE} [optional arguments]

Example: train the SlowOnly model on the Kinetics-400 dataset with the deterministic option and periodic validation.

python tools/train.py configs/recognition/slowonly/slowonly_r50_4x16x1_256e_kinetics400_rgb.py \
    --work-dir work_dirs/slowonly_r50_4x16x1_256e_kinetics400_rgb \
    --validate --seed 0 --deterministic

For more details, you can refer to Training setting part in getting_started.

Test

You can use the following command to test a model.

python tools/test.py ${CONFIG_FILE} ${CHECKPOINT_FILE} [optional arguments]

Example: test SlowOnly model on Kinetics-400 dataset and dump the result to a json file.

python tools/test.py configs/recognition/slowonly/slowonly_r50_4x16x1_256e_kinetics400_rgb.py \
    checkpoints/SOME_CHECKPOINT.pth --eval top_k_accuracy mean_class_accuracy \
    --out result.json --average-clips=prob

For more details, you can refer to Test a dataset part in getting_started.

Citation

@inproceedings{feichtenhofer2019slowfast,
  title={Slowfast networks for video recognition},
  author={Feichtenhofer, Christoph and Fan, Haoqi and Malik, Jitendra and He, Kaiming},
  booktitle={Proceedings of the IEEE international conference on computer vision},
  pages={6202--6211},
  year={2019}
}

TANet

TAM: Temporal Adaptive Module for Video Recognition

Abstract

Video data has complex temporal dynamics due to various factors such as camera motion, speed variation, and different activities. To effectively capture this diverse motion pattern, this paper presents a new temporal adaptive module (TAM) to generate video-specific temporal kernels based on its own feature map. TAM proposes a unique two-level adaptive modeling scheme by decoupling the dynamic kernel into a location-sensitive importance map and a location-invariant aggregation weight. The importance map is learned in a local temporal window to capture short-term information, while the aggregation weight is generated from a global view with a focus on long-term structure. TAM is a modular block and can be integrated into 2D CNNs to yield a powerful video architecture (TANet) with a very small extra computational cost. The extensive experiments on Kinetics-400 and Something-Something datasets demonstrate that our TAM outperforms other temporal modeling methods consistently and achieves state-of-the-art performance under similar complexity.
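
Below is a heavily simplified sketch of the global ("location invariant") branch described above: each video produces its own temporal aggregation kernel, which is then applied to its features as a depthwise temporal convolution. The real TAM also contains the local importance-map branch and richer per-channel modeling; everything here is illustrative.

# Simplified video-specific temporal kernel, applied as a depthwise 1D conv.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GlobalTemporalKernel(nn.Module):
    def __init__(self, t, kernel_size=3):
        super().__init__()
        self.k = kernel_size
        self.fc = nn.Sequential(
            nn.Linear(t, t), nn.ReLU(inplace=True),
            nn.Linear(t, kernel_size), nn.Softmax(dim=-1))

    def forward(self, x):                # x: (N, C, T, H, W)
        n, c, t, h, w = x.shape
        ctx = x.mean(dim=(1, 3, 4))      # (N, T): global temporal context
        kernels = self.fc(ctx)           # (N, K): one kernel per video
        outs = []
        for i in range(n):               # apply each video's own kernel
            k = kernels[i].view(1, 1, self.k).expand(c, 1, self.k).contiguous()
            xi = x[i].permute(2, 3, 0, 1).reshape(h * w, c, t)       # (HW, C, T)
            yi = F.conv1d(xi, k, padding=self.k // 2, groups=c)      # (HW, C, T)
            outs.append(yi.reshape(h, w, c, t).permute(2, 3, 0, 1))  # (C, T, H, W)
        return torch.stack(outs)

x = torch.randn(2, 64, 8, 14, 14)
y = GlobalTemporalKernel(t=8)(x)         # (2, 64, 8, 14, 14)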

Results and Models

Kinetics-400

config resolution gpus backbone pretrain top1 acc top5 acc reference top1 acc reference top5 acc inference_time(video/s) gpu_mem(M) ckpt log json
tanet_r50_dense_1x1x8_100e_kinetics400_rgb short-side 320 8 TANet ImageNet 76.28 92.60 76.22 92.53 x 7124 ckpt log json

Something-Something V1

config resolution gpus backbone pretrain top1 acc (efficient/accurate) top5 acc (efficient/accurate) gpu_mem(M) ckpt log json
tanet_r50_1x1x8_50e_sthv1_rgb height 100 8 TANet ImageNet 47.34/49.58 75.72/77.31 7127 ckpt log ckpt
tanet_r50_1x1x16_50e_sthv1_rgb height 100 8 TANet ImageNet 49.05/50.91 77.90/79.13 7127 ckpt log ckpt

Note

  1. The gpus column indicates the number of GPUs we used to obtain the checkpoint. Note that the provided configs assume 8 GPUs by default. According to the Linear Scaling Rule, you may set the learning rate proportional to the total batch size if you use a different number of GPUs or videos per GPU, e.g., lr=0.01 for 8 GPUs x 8 videos/GPU and lr=0.04 for 16 GPUs x 16 videos/GPU.

  2. The inference_time is measured with this benchmark script, using the frame-sampling strategy of the test setting and counting only model inference time, excluding IO and pre-processing time. For each setting, we use 1 GPU with a batch size (videos per GPU) of 1.

  3. The values in the columns named “reference” are the results obtained by testing the checkpoints provided by the author, with the same model settings, on our dataset. The checkpoints of the reference repo can be downloaded here.

  4. The validation set of Kinetics400 we used consists of 19796 videos. These videos are available at Kinetics400-Validation. The corresponding data list (each line is of the format ‘video_id, num_frames, label_index’) and the label map are also available.

For more details on data preparation, you can refer to corresponding parts in Data Preparation.

Train

You can use the following command to train a model.

python tools/train.py ${CONFIG_FILE} [optional arguments]

Example: train the TANet model on the Kinetics-400 dataset with the deterministic option and periodic validation.

python tools/train.py configs/recognition/tanet/tanet_r50_dense_1x1x8_100e_kinetics400_rgb.py \
    --work-dir work_dirs/tanet_r50_dense_1x1x8_100e_kinetics400_rgb \
    --validate --seed 0 --deterministic

For more details, you can refer to Training setting part in getting_started.

Test

You can use the following command to test a model.

python tools/test.py ${CONFIG_FILE} ${CHECKPOINT_FILE} [optional arguments]

Example: test TANet model on Kinetics-400 dataset and dump the result to a json file.

python tools/test.py configs/recognition/tanet/tanet_r50_dense_1x1x8_100e_kinetics400_rgb.py \
    checkpoints/SOME_CHECKPOINT.pth --eval top_k_accuracy mean_class_accuracy \
    --out result.json

For more details, you can refer to Test a dataset part in getting_started.

Citation

@article{liu2020tam,
  title={TAM: Temporal Adaptive Module for Video Recognition},
  author={Liu, Zhaoyang and Wang, Limin and Wu, Wayne and Qian, Chen and Lu, Tong},
  journal={arXiv preprint arXiv:2005.06803},
  year={2020}
}

TimeSformer

Is Space-Time Attention All You Need for Video Understanding?

Abstract

We present a convolution-free approach to video classification built exclusively on self-attention over space and time. Our method, named “TimeSformer,” adapts the standard Transformer architecture to video by enabling spatiotemporal feature learning directly from a sequence of frame-level patches. Our experimental study compares different self-attention schemes and suggests that “divided attention,” where temporal attention and spatial attention are separately applied within each block, leads to the best video classification accuracy among the design choices considered. Despite the radically new design, TimeSformer achieves state-of-the-art results on several action recognition benchmarks, including the best reported accuracy on Kinetics-400 and Kinetics-600. Finally, compared to 3D convolutional networks, our model is faster to train, it can achieve dramatically higher test efficiency (at a small drop in accuracy), and it can also be applied to much longer video clips (over one minute long).
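
Below is a minimal sketch of the divided attention scheme described above: temporal self-attention across frames at the same spatial location, followed by spatial self-attention within each frame. Residual connections, the class token and the MLP of the real TimeSformer block are omitted; this is not the implementation used in this repo.

# Divided space-time self-attention over patch tokens. Illustrative only.
import torch
import torch.nn as nn

class DividedSpaceTimeAttention(nn.Module):
    def __init__(self, dim, heads=8):
        super().__init__()
        self.temporal = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.spatial = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, x):                # x: (N, T, P, D) patch tokens
        n, t, p, d = x.shape
        xt = x.permute(0, 2, 1, 3).reshape(n * p, t, d)  # attend over time
        xt = self.temporal(xt, xt, xt)[0]
        xs = xt.reshape(n, p, t, d).permute(0, 2, 1, 3).reshape(n * t, p, d)
        xs = self.spatial(xs, xs, xs)[0]                 # attend over space
        return xs.reshape(n, t, p, d)

tokens = torch.randn(2, 8, 196, 768)          # 8 frames, 14x14 patches, dim 768
out = DividedSpaceTimeAttention(768)(tokens)  # (2, 8, 196, 768)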

Results and Models

Kinetics-400

config resolution gpus backbone pretrain top1 acc top5 acc inference_time(video/s) gpu_mem(M) ckpt log json
timesformer_divST_8x32x1_15e_kinetics400_rgb short-side 320 8 TimeSformer ImageNet-21K 77.92 93.29 x 17874 ckpt log json
timesformer_jointST_8x32x1_15e_kinetics400_rgb short-side 320 8 TimeSformer ImageNet-21K 77.01 93.08 x 25658 ckpt log json
timesformer_spaceOnly_8x32x1_15e_kinetics400_rgb short-side 320 8 TimeSformer ImageNet-21K 76.93 92.90 x 12750 ckpt log json

Note

  1. The gpus column indicates the number of GPUs (32G V100) we used to obtain the checkpoint. Note that the provided configs assume 8 GPUs by default. According to the Linear Scaling Rule, you may set the learning rate proportional to the total batch size if you use a different number of GPUs or videos per GPU, e.g., lr=0.005 for 8 GPUs x 8 videos/GPU and lr=0.00375 for 8 GPUs x 6 videos/GPU.

  2. We keep the same test setting as the original repo (three crops x 1 clip).

  3. The pretrained model vit_base_patch16_224.pth used by TimeSformer was converted from vision_transformer.

For more details on data preparation, you can refer to Kinetics400 in Data Preparation.

Train

You can use the following command to train a model.

python tools/train.py ${CONFIG_FILE} [optional arguments]

Example: train the TimeSformer model on the Kinetics-400 dataset with the deterministic option and periodic validation.

python tools/train.py configs/recognition/timesformer/timesformer_divST_8x32x1_15e_kinetics400_rgb.py \
    --work-dir work_dirs/timesformer_divST_8x32x1_15e_kinetics400_rgb.py \
    --validate --seed 0 --deterministic

For more details, you can refer to Training setting part in getting_started.

Test

You can use the following command to test a model.

python tools/test.py ${CONFIG_FILE} ${CHECKPOINT_FILE} [optional arguments]

Example: test TimeSformer model on Kinetics-400 dataset and dump the result to a json file.

python tools/test.py configs/recognition/timesformer/timesformer_divST_8x32x1_15e_kinetics400_rgb.py \
    checkpoints/SOME_CHECKPOINT.pth --eval top_k_accuracy mean_class_accuracy \
    --out result.json

For more details, you can refer to Test a dataset part in getting_started.

Citation

@misc{bertasius2021spacetime,
    title   = {Is Space-Time Attention All You Need for Video Understanding?},
    author  = {Gedas Bertasius and Heng Wang and Lorenzo Torresani},
    year    = {2021},
    eprint  = {2102.05095},
    archivePrefix = {arXiv},
    primaryClass = {cs.CV}
}

TIN

Temporal Interlacing Network

Abstract

For a long time, the vision community has tried to learn spatio-temporal representations by combining convolutional neural networks with various temporal models, such as the families of Markov chains, optical flow, RNNs and temporal convolution. However, these pipelines consume enormous computing resources due to the alternate learning process for spatial and temporal information. One natural question is whether we can embed the temporal information into the spatial one so that the information in the two domains can be jointly learned only once. In this work, we answer this question by presenting a simple yet powerful operator – the temporal interlacing network (TIN). Instead of learning the temporal features, TIN fuses the two kinds of information by interlacing spatial representations from the past to the future, and vice versa. A differentiable interlacing target can be learned to control the interlacing process. In this way, a heavy temporal model is replaced by a simple interlacing operator. We theoretically prove that with a learnable interlacing target, TIN performs equivalently to the regularized temporal convolution network (r-TCN), but gains 4% more accuracy with 6x less latency on 6 challenging benchmarks. These results push the state-of-the-art performance of video understanding by a considerable margin. Not surprisingly, the ensemble model of the proposed TIN won 1st place in the ICCV19 - Multi Moments in Time challenge.

Results and Models

Something-Something V1

config resolution gpus backbone pretrain top1 acc top5 acc reference top1 acc reference top5 acc gpu_mem(M) ckpt log json
tin_r50_1x1x8_40e_sthv1_rgb height 100 8x4 ResNet50 ImageNet 44.25 73.94 44.04 72.72 6181 ckpt log json

Something-Something V2

config resolution gpus backbone pretrain top1 acc top5 acc reference top1 acc reference top5 acc gpu_mem(M) ckpt log json
tin_r50_1x1x8_40e_sthv2_rgb height 240 8x4 ResNet50 ImageNet 56.70 83.62 56.48 83.45 6185 ckpt log json

Kinetics-400

config resolution gpus backbone pretrain top1 acc top5 acc gpu_mem(M) ckpt log json
tin_tsm_finetune_r50_1x1x8_50e_kinetics400_rgb short-side 256 8x4 ResNet50 TSM-Kinetics400 70.89 89.89 6187 ckpt log json

Here, finetune indicates that we use the TSM model trained on Kinetics-400 to finetune the TIN model on Kinetics-400.

Note

  1. The reference top-k accuracies are obtained by training with the original repo (#1aacd0c) after fixing the AverageMeter issue, which otherwise leads to incorrect performance.

  2. The gpus column indicates the number of GPUs we used to obtain the checkpoint. Note that the provided configs assume 8 GPUs by default. According to the Linear Scaling Rule, you may set the learning rate proportional to the total batch size if you use a different number of GPUs or videos per GPU, e.g., lr=0.01 for 4 GPUs x 2 videos/GPU and lr=0.08 for 16 GPUs x 4 videos/GPU.

  3. The inference_time is measured with this benchmark script, using the frame-sampling strategy of the test setting and counting only model inference time, excluding IO and pre-processing time. For each setting, we use 1 GPU with a batch size (videos per GPU) of 1.

  4. The values in the columns named “reference” are the results obtained by training with the original repo, using the same model settings.

  5. The validation set of Kinetics400 we used consists of 19796 videos. These videos are available at Kinetics400-Validation. The corresponding data list (each line is of the format ‘video_id, num_frames, label_index’) and the label map are also available.

For more details on data preparation, you can refer to Kinetics400, Something-Something V1 and Something-Something V2 in Data Preparation.

Train

You can use the following command to train a model.

python tools/train.py ${CONFIG_FILE} [optional arguments]

Example: train the TIN model on the Something-Something V1 dataset with the deterministic option and periodic validation.

python tools/train.py configs/recognition/tin/tin_r50_1x1x8_40e_sthv1_rgb.py \
    --work-dir work_dirs/tin_r50_1x1x8_40e_sthv1_rgb \
    --validate --seed 0 --deterministic

For more details, you can refer to Training setting part in getting_started.

Test

You can use the following command to test a model.

python tools/test.py ${CONFIG_FILE} ${CHECKPOINT_FILE} [optional arguments]

Example: test TIN model on Something-Something V1 dataset and dump the result to a json file.

python tools/test.py configs/recognition/tin/tin_r50_1x1x8_40e_sthv1_rgb.py \
    checkpoints/SOME_CHECKPOINT.pth --eval top_k_accuracy mean_class_accuracy \
    --out result.json

For more details, you can refer to Test a dataset part in getting_started.

Citation

@article{shao2020temporal,
    title={Temporal Interlacing Network},
    author={Hao Shao and Shengju Qian and Yu Liu},
    year={2020},
    journal={AAAI},
}

TPN

Temporal Pyramid Network for Action Recognition

Abstract

Visual tempo characterizes the dynamics and the temporal scale of an action. Modeling such visual tempos of different actions facilitates their recognition. Previous works often capture the visual tempo through sampling raw videos at multiple rates and constructing an input-level frame pyramid, which usually requires a costly multi-branch network to handle. In this work we propose a generic Temporal Pyramid Network (TPN) at the feature-level, which can be flexibly integrated into 2D or 3D backbone networks in a plug-and-play manner. Two essential components of TPN, the source of features and the fusion of features, form a feature hierarchy for the backbone so that it can capture action instances at various tempos. TPN also shows consistent improvements over other challenging baselines on several action recognition datasets. Specifically, when equipped with TPN, the 3D ResNet-50 with dense sampling obtains a 2% gain on the validation set of Kinetics-400. A further analysis also reveals that TPN gains most of its improvements on action classes that have large variances in their visual tempos, validating the effectiveness of TPN.

Results and Models

Kinetics-400

config resolution gpus backbone pretrain top1 acc top5 acc reference top1 acc reference top5 acc inference_time(video/s) gpu_mem(M) ckpt log json
tpn_slowonly_r50_8x8x1_150e_kinetics_rgb short-side 320 8x2 ResNet50 None 73.58 91.35 x x x 6916 ckpt log json
tpn_imagenet_pretrained_slowonly_r50_8x8x1_150e_kinetics_rgb short-side 320 8 ResNet50 ImageNet 76.59 92.72 75.49 92.05 x 6916 ckpt log json

Something-Something V1

config resolution gpus backbone pretrain top1 acc top5 acc gpu_mem(M) ckpt log json
tpn_tsm_r50_1x1x8_150e_sthv1_rgb height 100 8x6 ResNet50 TSM 51.50 79.15 8828 ckpt log json

Note

  1. The gpus column indicates the number of GPUs we used to obtain the checkpoint. Note that the provided configs assume 8 GPUs by default. According to the Linear Scaling Rule, you may set the learning rate proportional to the total batch size if you use a different number of GPUs or videos per GPU, e.g., lr=0.01 for 4 GPUs x 2 videos/GPU and lr=0.08 for 16 GPUs x 4 videos/GPU.

  2. The inference_time is measured with this benchmark script, using the frame-sampling strategy of the test setting and counting only model inference time, excluding IO and pre-processing time. For each setting, we use 1 GPU with a batch size (videos per GPU) of 1.

  3. The values in the columns named “reference” are the results obtained by testing the checkpoint and code released by the original repo, using the same dataset as ours.

  4. The validation set of Kinetics400 we used consists of 19796 videos. These videos are available at Kinetics400-Validation. The corresponding data list (each line is of the format ‘video_id, num_frames, label_index’) and the label map are also available.

For more details on data preparation, you can refer to Kinetics400, Something-Something V1 and Something-Something V2 in Data Preparation.

Train

You can use the following command to train a model.

python tools/train.py ${CONFIG_FILE} [optional arguments]

Example: train the TPN model on the Kinetics-400 dataset with the deterministic option and periodic validation.

python tools/train.py configs/recognition/tpn/tpn_slowonly_r50_8x8x1_150e_kinetics_rgb.py \
    --work-dir work_dirs/tpn_slowonly_r50_8x8x1_150e_kinetics_rgb [--validate --seed 0 --deterministic]

For more details, you can refer to Training setting part in getting_started.

Test

You can use the following command to test a model.

python tools/test.py ${CONFIG_FILE} ${CHECKPOINT_FILE} [optional arguments]

Example: test TPN model on Kinetics-400 dataset and dump the result to a json file.

python tools/test.py configs/recognition/tpn/tpn_slowonly_r50_8x8x1_150e_kinetics_rgb.py \
    checkpoints/SOME_CHECKPOINT.pth --eval top_k_accuracy mean_class_accuracy \
    --out result.json --average-clips prob

For more details, you can refer to Test a dataset part in getting_started.

Citation

@inproceedings{yang2020tpn,
  title={Temporal Pyramid Network for Action Recognition},
  author={Yang, Ceyuan and Xu, Yinghao and Shi, Jianping and Dai, Bo and Zhou, Bolei},
  booktitle={Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
  year={2020},
}

TRN

Temporal Relational Reasoning in Videos

Abstract

Temporal relational reasoning, the ability to link meaningful transformations of objects or entities over time, is a fundamental property of intelligent species. In this paper, we introduce an effective and interpretable network module, the Temporal Relation Network (TRN), designed to learn and reason about temporal dependencies between video frames at multiple time scales. We evaluate TRN-equipped networks on activity recognition tasks using three recent video datasets - Something-Something, Jester, and Charades - which fundamentally depend on temporal relational reasoning. Our results demonstrate that the proposed TRN gives convolutional neural networks a remarkable capacity to discover temporal relations in videos. Through only sparsely sampled video frames, TRN-equipped networks can accurately predict human-object interactions in the Something-Something dataset and identify various human gestures on the Jester dataset with very competitive performance. TRN-equipped networks also outperform two-stream networks and 3D convolution networks in recognizing daily activities in the Charades dataset. Further analyses show that the models learn intuitive and interpretable visual common sense knowledge in videos.
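
Below is a minimal sketch of a 2-frame temporal relation as described above: ordered pairs of frame-level features are passed through a shared MLP g, summed over all pairs, and classified by h. The full TRN combines such relations over multiple time scales; this sketch is illustrative only.

# 2-frame temporal relation module over sparsely sampled frame features.
import itertools
import torch
import torch.nn as nn

class TwoFrameRelation(nn.Module):
    def __init__(self, feat_dim, hidden_dim, num_classes):
        super().__init__()
        self.g = nn.Sequential(nn.Linear(2 * feat_dim, hidden_dim), nn.ReLU())
        self.h = nn.Linear(hidden_dim, num_classes)

    def forward(self, frame_feats):      # (N, T, feat_dim)
        n, t, d = frame_feats.shape
        rel = 0
        for i, j in itertools.combinations(range(t), 2):  # i < j keeps order
            pair = torch.cat([frame_feats[:, i], frame_feats[:, j]], dim=-1)
            rel = rel + self.g(pair)
        return self.h(rel)               # (N, num_classes)

feats = torch.randn(4, 8, 2048)          # 8 sampled frames per video
logits = TwoFrameRelation(2048, 256, 174)(feats)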

Results and Models

Something-Something V1

config resolution gpus backbone pretrain top1 acc (efficient/accurate) top5 acc (efficient/accurate) gpu_mem(M) ckpt log json
trn_r50_1x1x8_50e_sthv1_rgb height 100 8 ResNet50 ImageNet 31.62 / 33.88 60.01 / 62.12 11010 ckpt log json

Something-Something V2

config resolution gpus backbone pretrain top1 acc (efficient/accurate) top5 acc (efficient/accurate) gpu_mem(M) ckpt log json
trn_r50_1x1x8_50e_sthv2_rgb height 256 8 ResNet50 ImageNet 48.39 / 51.28 76.58 / 78.65 11010 ckpt log json

Note

  1. The gpus column indicates the number of GPUs we used to obtain the checkpoint. Note that the provided configs assume 8 GPUs by default. According to the Linear Scaling Rule, you may set the learning rate proportional to the total batch size if you use a different number of GPUs or videos per GPU, e.g., lr=0.01 for 4 GPUs x 2 videos/GPU and lr=0.08 for 16 GPUs x 4 videos/GPU.

  2. There are two kinds of test settings for the Something-Something dataset: the efficient setting (center crop x 1 clip) and the accurate setting (three crops x 2 clips).

  3. In the original repository, the authors augment data with random flipping on the Something-Something dataset, but this augmentation is problematic for direction-sensitive actions such as "push left to right". We therefore replace random flipping with flipping plus label mapping, and change the test-time augmentation from TenCrop (which contains five flipped crops) to Twice Sample & ThreeCrop.

  4. We use ResNet50 instead of BNInception as the backbone of TRN. When training TRN-ResNet50 on the sthv1 dataset with the original repository, we get top1 (top5) accuracy 30.542 (58.627), vs. 31.62 (60.01) for ours.

For more details on data preparation, you can refer to Something-Something V1 and Something-Something V2 in Data Preparation.

Train

You can use the following command to train a model.

python tools/train.py ${CONFIG_FILE} [optional arguments]

Example: train the TRN model on the sthv1 dataset with the deterministic option and periodic validation.

python tools/train.py configs/recognition/trn/trn_r50_1x1x8_50e_sthv1_rgb.py \
    --work-dir work_dirs/trn_r50_1x1x8_50e_sthv1_rgb \
    --validate --seed 0 --deterministic

For more details, you can refer to Training setting part in getting_started.

Test

You can use the following command to test a model.

python tools/test.py ${CONFIG_FILE} ${CHECKPOINT_FILE} [optional arguments]

Example: test TRN model on sthv1 dataset and dump the result to a json file.

python tools/test.py configs/recognition/trn/trn_r50_1x1x8_50e_sthv1_rgb.py \
    checkpoints/SOME_CHECKPOINT.pth --eval top_k_accuracy mean_class_accuracy \
    --out result.json

For more details, you can refer to Test a dataset part in getting_started.

Citation

@inproceedings{zhou2017temporalrelation,
    title = {Temporal Relational Reasoning in Videos},
    author = {Zhou, Bolei and Andonian, Alex and Oliva, Aude and Torralba, Antonio},
    booktitle = {European Conference on Computer Vision},
    year = {2018}
}

TSM

TSM: Temporal Shift Module for Efficient Video Understanding

Abstract

The explosive growth in video streaming gives rise to challenges in performing video understanding at high accuracy and low computation cost. Conventional 2D CNNs are computationally cheap but cannot capture temporal relationships; 3D CNN based methods can achieve good performance but are computationally intensive, making them expensive to deploy. In this paper, we propose a generic and effective Temporal Shift Module (TSM) that enjoys both high efficiency and high performance. Specifically, it can achieve the performance of 3D CNNs while maintaining 2D CNN complexity. TSM shifts part of the channels along the temporal dimension, thus facilitating information exchange among neighboring frames. It can be inserted into 2D CNNs to achieve temporal modeling at zero computation and zero parameters. We also extend TSM to the online setting, which enables real-time low-latency online video recognition and video object detection. TSM is accurate and efficient: it ranked first on the Something-Something leaderboard upon publication; on Jetson Nano and Galaxy Note8, it achieves low latencies of 13 ms and 35 ms for online video recognition.
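
To illustrate the shift operation described above, here is a minimal PyTorch-style sketch of a zero-parameter temporal shift; the (N*T, C, H, W) layout and the 1/8 shift fraction follow common TSM usage and are assumptions for this sketch rather than details taken from the configs.

import torch

def temporal_shift(x, num_segments, shift_div=8):
    """Shift a fraction of channels along the temporal dimension.

    x: tensor of shape (N * T, C, H, W), where T == num_segments.
    A 1/shift_div slice of channels is shifted one step towards earlier frames,
    another 1/shift_div slice one step towards later frames; the rest stay in place.
    """
    nt, c, h, w = x.size()
    n = nt // num_segments
    x = x.view(n, num_segments, c, h, w)
    fold = c // shift_div

    out = torch.zeros_like(x)
    out[:, :-1, :fold] = x[:, 1:, :fold]                   # shift towards earlier frames
    out[:, 1:, fold:2 * fold] = x[:, :-1, fold:2 * fold]   # shift towards later frames
    out[:, :, 2 * fold:] = x[:, :, 2 * fold:]              # remaining channels stay in place
    return out.view(nt, c, h, w)

Because the shift only moves existing activations between neighboring frames, it adds no parameters and negligible computation on top of the 2D backbone.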

Results and Models

Kinetics-400

config resolution gpus backbone pretrain top1 acc top5 acc reference top1 acc reference top5 acc inference_time(video/s) gpu_mem(M) ckpt log json
tsm_r50_1x1x8_50e_kinetics400_rgb 340x256 8 ResNet50 ImageNet 70.24 89.56 70.36 89.49 74.0 (8x1 frames) 7079 ckpt log json
tsm_r50_1x1x8_50e_kinetics400_rgb short-side 256 8 ResNet50 ImageNet 70.59 89.52 x x x 7079 ckpt log json
tsm_r50_1x1x8_50e_kinetics400_rgb short-side 320 8 ResNet50 ImageNet 70.73 89.81 x x x 7079 ckpt log json
tsm_r50_1x1x8_100e_kinetics400_rgb short-side 320 8 ResNet50 ImageNet 71.90 90.03 x x x 7079 ckpt log json
tsm_r50_gpu_normalize_1x1x8_50e_kinetics400_rgb.py short-side 256 8 ResNet50 ImageNet 70.48 89.40 x x x 7076 ckpt log json
tsm_r50_video_1x1x8_50e_kinetics400_rgb short-side 256 8 ResNet50 ImageNet 70.25 89.66 70.36 89.49 74.0 (8x1 frames) 7077 ckpt log json
tsm_r50_dense_1x1x8_50e_kinetics400_rgb short-side 320 8 ResNet50 ImageNet 73.46 90.84 x x x 7079 ckpt log json
tsm_r50_dense_1x1x8_100e_kinetics400_rgb short-side 320 8 ResNet50 ImageNet 74.55 91.74 x x x 7079 ckpt log json
tsm_r50_1x1x16_50e_kinetics400_rgb 340x256 8 ResNet50 ImageNet 72.09 90.37 70.67 89.98 47.0 (16x1 frames) 10404 ckpt log json
tsm_r50_1x1x16_50e_kinetics400_rgb short-side 256 8x4 ResNet50 ImageNet 71.89 90.73 x x x 10398 ckpt log json
tsm_r50_1x1x16_100e_kinetics400_rgb short-side 320 8 ResNet50 ImageNet 72.80 90.75 x x x 10398 ckpt log json
tsm_nl_embedded_gaussian_r50_1x1x8_50e_kinetics400_rgb short-side 320 8x4 ResNet50 ImageNet 72.03 90.25 71.81 90.36 x 8931 ckpt log json
tsm_nl_gaussian_r50_1x1x8_50e_kinetics400_rgb short-side 320 8x4 ResNet50 ImageNet 70.70 89.90 x x x 10125 ckpt log json
tsm_nl_dot_product_r50_1x1x8_50e_kinetics400_rgb short-side 320 8x4 ResNet50 ImageNet 71.60 90.34 x x x 8358 ckpt log json
tsm_mobilenetv2_dense_1x1x8_100e_kinetics400_rgb short-side 320 8 MobileNetV2 ImageNet 68.46 88.64 x x x 3385 ckpt log json
tsm_mobilenetv2_dense_1x1x8_kinetics400_rgb_port short-side 320 8 MobileNetV2 ImageNet 69.89 89.01 x x x 3385 infer_ckpt x x

Diving48

config gpus backbone pretrain top1 acc top5 acc gpu_mem(M) ckpt log json
tsm_r50_video_1x1x8_50e_diving48_rgb 8 ResNet50 ImageNet 75.99 97.16 7070 ckpt log json
tsm_r50_video_1x1x16_50e_diving48_rgb 8 ResNet50 ImageNet 81.62 97.66 7070 ckpt log json

Something-Something V1

config resolution gpus backbone pretrain top1 acc (efficient/accurate) top5 acc (efficient/accurate) reference top1 acc (efficient/accurate) reference top5 acc (efficient/accurate) gpu_mem(M) ckpt log json
tsm_r50_1x1x8_50e_sthv1_rgb height 100 8 ResNet50 ImageNet 45.58 / 47.70 75.02 / 76.12 45.50 / 47.33 74.34 / 76.60 7077 ckpt log json
tsm_r50_flip_1x1x8_50e_sthv1_rgb height 100 8 ResNet50 ImageNet 47.10 / 48.51 76.02 / 77.56 45.50 / 47.33 74.34 / 76.60 7077 ckpt log json
tsm_r50_randaugment_1x1x8_50e_sthv1_rgb height 100 8 ResNet50 ImageNet 47.16 / 48.90 76.07 / 77.92 45.50 / 47.33 74.34 / 76.60 7077 ckpt log json
tsm_r50_ptv_randaugment_1x1x8_50e_sthv1_rgb height 100 8 ResNet50 ImageNet 47.65 / 48.66 76.67 / 77.41 45.50 / 47.33 74.34 / 76.60 7077 ckpt log json
tsm_r50_ptv_augmix_1x1x8_50e_sthv1_rgb height 100 8 ResNet50 ImageNet 46.26 / 47.68 75.92 / 76.49 45.50 / 47.33 74.34 / 76.60 7077 ckpt log json
tsm_r50_flip_randaugment_1x1x8_50e_sthv1_rgb height 100 8 ResNet50 ImageNet 47.85 / 50.31 76.78 / 78.18 45.50 / 47.33 74.34 / 76.60 7077 ckpt log json
tsm_r50_1x1x16_50e_sthv1_rgb height 100 8 ResNet50 ImageNet 47.77 / 49.03 76.82 / 77.83 47.05 / 48.61 76.40 / 77.96 10390 ckpt log json
tsm_r101_1x1x8_50e_sthv1_rgb height 100 8 ResNet101 ImageNet 46.09 / 48.59 75.41 / 77.10 46.64 / 48.13 75.40 / 77.31 9800 ckpt log json

Something-Something V2

config resolution gpus backbone pretrain top1 acc (efficient/accurate) top5 acc (efficient/accurate) reference top1 acc (efficient/accurate) reference top5 acc (efficient/accurate) gpu_mem(M) ckpt log json
tsm_r50_1x1x8_50e_sthv2_rgb height 256 8 ResNet50 ImageNet 59.11 / 61.82 85.39 / 86.80 xx / 61.2 xx / xx 7069 ckpt log json
tsm_r50_1x1x16_50e_sthv2_rgb height 256 8 ResNet50 ImageNet 61.06 / 63.19 86.66 / 87.93 xx / 63.1 xx / xx 10400 ckpt log json
tsm_r101_1x1x8_50e_sthv2_rgb height 256 8 ResNet101 ImageNet 60.88 / 63.84 86.56 / 88.30 xx / 63.3 xx / xx 9727 ckpt log json

MixUp & CutMix on Something-Something V1

config resolution gpus backbone pretrain top1 acc (efficient/accurate) top5 acc (efficient/accurate) delta top1 acc (efficient/accurate) delta top5 acc (efficient/accurate) ckpt log json
tsm_r50_mixup_1x1x8_50e_sthv1_rgb height 100 8 ResNet50 ImageNet 46.35 / 48.49 75.07 / 76.88 +0.77 / +0.79 +0.05 / +0.70 ckpt log json
tsm_r50_cutmix_1x1x8_50e_sthv1_rgb height 100 8 ResNet50 ImageNet 45.92 / 47.46 75.23 / 76.71 +0.34 / -0.24 +0.21 / +0.59 ckpt log json

Jester

config resolution gpus backbone pretrain top1 acc (efficient/accurate) ckpt log json
tsm_r50_1x1x8_50e_jester_rgb height 100 8 ResNet50 ImageNet 96.5 / 97.2 ckpt log json

HMDB51

config gpus backbone pretrain top1 acc top5 acc gpu_mem(M) ckpt log json
tsm_k400_pretrained_r50_1x1x8_25e_hmdb51_rgb 8 ResNet50 Kinetics400 72.68 92.03 10388 ckpt log json
tsm_k400_pretrained_r50_1x1x16_25e_hmdb51_rgb 8 ResNet50 Kinetics400 74.77 93.86 10388 ckpt log json

UCF101

config gpus backbone pretrain top1 acc top5 acc gpu_mem(M) ckpt log json
tsm_k400_pretrained_r50_1x1x8_25e_ucf101_rgb 8 ResNet50 Kinetics400 94.50 99.58 10389 ckpt log json
tsm_k400_pretrained_r50_1x1x16_25e_ucf101_rgb 8 ResNet50 Kinetics400 94.58 99.37 10389 ckpt log json

Note

  1. The gpus column indicates the number of GPUs we used to get the checkpoint. Note that the configs we provide are for 8 GPUs by default. According to the Linear Scaling Rule, you may set the learning rate proportional to the batch size if you use a different number of GPUs or videos per GPU, e.g., lr=0.01 for 4 GPUs x 2 videos/gpu and lr=0.08 for 16 GPUs x 4 videos/gpu.

  2. The inference_time is obtained with this benchmark script, where we use the frame-sampling strategy of the test setting and only measure the model inference time, excluding the IO and pre-processing time. For each setting, we use 1 GPU and set the batch size (videos per gpu) to 1 when calculating the inference time.

  3. The values in the columns named “reference” are the results obtained by training with the original repo, using the same model settings. The checkpoints for the reference repo can be downloaded here.

  4. There are two kinds of test settings for the Something-Something dataset: the efficient setting (center crop x 1 clip) and the accurate setting (three crops x 2 clips), following the original repo. We use the efficient setting as the default in the config files; it can be changed to the accurate setting as follows:

...
test_pipeline = [
    dict(
        type='SampleFrames',
        clip_len=1,
        frame_interval=1,
        num_clips=16,   # `num_clips=8` when using 8 segments
        twice_sample=True,    # set `twice_sample=True` for twice sampling in the accurate setting
        test_mode=True),
    dict(type='RawFrameDecode'),
    dict(type='Resize', scale=(-1, 256)),
    # dict(type='CenterCrop', crop_size=224),  # used for the efficient setting
    dict(type='ThreeCrop', crop_size=256),  # used for the accurate setting
    dict(type='Normalize', **img_norm_cfg),
    dict(type='FormatShape', input_format='NCHW'),
    dict(type='Collect', keys=['imgs', 'label'], meta_keys=[]),
    dict(type='ToTensor', keys=['imgs'])
]
  5. When applying Mixup and CutMix, we use the hyper-parameter alpha=0.2. A rough config sketch is given after this list.

  6. The validation set of Kinetics400 we used consists of 19796 videos. These videos are available at Kinetics400-Validation. The corresponding data list (each line is of the format ‘video_id, num_frames, label_index’) and the label map are also available.

  7. The infer_ckpt marks checkpoints that are ported from the original TSM repo.
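
As a rough illustration of note 5, the snippet below shows how Mixup-style blending with alpha=0.2 could be attached to the recognizer's training config; the MixupBlending type name and the field layout are assumptions about the config schema, not values taken from this page. A CutMix variant would follow the same pattern with its own blending type.

# Sketch (assumed fields): blend pairs of training clips and their labels with alpha=0.2.
model = dict(
    type='Recognizer2D',
    # backbone and cls_head omitted for brevity
    train_cfg=dict(
        blending=dict(type='MixupBlending', num_classes=174, alpha=0.2)))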

For more details on data preparation, you can refer to corresponding parts in Data Preparation.

Train

You can use the following command to train a model.

python tools/train.py ${CONFIG_FILE} [optional arguments]

Example: train TSM model on Kinetics-400 dataset in a deterministic option with periodic validation.

python tools/train.py configs/recognition/tsm/tsm_r50_1x1x8_50e_kinetics400_rgb.py \
    --work-dir work_dirs/tsm_r50_1x1x8_50e_kinetics400_rgb \
    --validate --seed 0 --deterministic

For more details, you can refer to Training setting part in getting_started.

Test

You can use the following command to test a model.

python tools/test.py ${CONFIG_FILE} ${CHECKPOINT_FILE} [optional arguments]

Example: test TSM model on Kinetics-400 dataset and dump the result to a json file.

python tools/test.py configs/recognition/tsm/tsm_r50_1x1x8_50e_kinetics400_rgb.py \
    checkpoints/SOME_CHECKPOINT.pth --eval top_k_accuracy mean_class_accuracy \
    --out result.json

For more details, you can refer to Test a dataset part in getting_started.

Citation

@inproceedings{lin2019tsm,
  title={TSM: Temporal Shift Module for Efficient Video Understanding},
  author={Lin, Ji and Gan, Chuang and Han, Song},
  booktitle={Proceedings of the IEEE International Conference on Computer Vision},
  year={2019}
}
@inproceedings{NonLocal2018,
  author =   {Xiaolong Wang and Ross Girshick and Abhinav Gupta and Kaiming He},
  title =    {Non-local Neural Networks},
  booktitle = {Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition},
  year =     {2018}
}

TSN

Temporal segment networks: Towards good practices for deep action recognition

Abstract

Deep convolutional networks have achieved great success for visual recognition in still images. However, for action recognition in videos, the advantage over traditional methods is not so evident. This paper aims to discover the principles for designing effective ConvNet architectures for action recognition in videos and to learn these models given limited training samples. Our first contribution is the temporal segment network (TSN), a novel framework for video-based action recognition, which is based on the idea of long-range temporal structure modeling. It combines a sparse temporal sampling strategy and video-level supervision to enable efficient and effective learning using the whole action video. The other contribution is our study of a series of good practices for learning ConvNets on video data with the help of the temporal segment network. Our approach obtains state-of-the-art performance on the datasets HMDB51 (69.4%) and UCF101 (94.2%). We also visualize the learned ConvNet models, which qualitatively demonstrates the effectiveness of the temporal segment network and the proposed good practices.
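
The sparse temporal sampling strategy mentioned above boils down to splitting a video into K equal segments and drawing one snippet from each; the sketch below is a minimal illustration of that idea, not the exact sampler used in the configs.

import random

def sample_segment_indices(num_frames, num_segments, test_mode=False):
    """TSN-style sparse sampling: one frame index per equal-length segment."""
    seg_len = num_frames / num_segments
    if test_mode:
        # deterministic: take the centre frame of each segment
        return [int(seg_len * i + seg_len / 2) for i in range(num_segments)]
    # training: a random offset inside each segment
    return [int(seg_len * i + random.random() * seg_len) for i in range(num_segments)]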

Results and Models

UCF-101

config gpus backbone pretrain top1 acc top5 acc gpu_mem(M) ckpt log json
tsn_r50_1x1x3_75e_ucf101_rgb [1] 8 ResNet50 ImageNet 83.03 96.78 8332 ckpt log json

[1] We report the performance on UCF-101 split1.

Diving48

config gpus backbone pretrain top1 acc top5 acc gpu_mem(M) ckpt log json
tsn_r50_video_1x1x8_100e_diving48_rgb 8 ResNet50 ImageNet 71.27 95.74 5699 ckpt log json
tsn_r50_video_1x1x16_100e_diving48_rgb 8 ResNet50 ImageNet 76.75 96.95 5705 ckpt log json

HMDB51

config gpus backbone pretrain top1 acc top5 acc gpu_mem(M) ckpt log json
tsn_r50_1x1x8_50e_hmdb51_imagenet_rgb 8 ResNet50 ImageNet 48.95 80.19 21535 ckpt log json
tsn_r50_1x1x8_50e_hmdb51_kinetics400_rgb 8 ResNet50 Kinetics400 56.08 84.31 21535 ckpt log json
tsn_r50_1x1x8_50e_hmdb51_mit_rgb 8 ResNet50 Moments 54.25 83.86 21535 ckpt log json

Kinetics-400

config resolution gpus backbone pretrain top1 acc top5 acc reference top1 acc reference top5 acc inference_time(video/s) gpu_mem(M) ckpt log json
tsn_r50_1x1x3_100e_kinetics400_rgb 340x256 8 ResNet50 ImageNet 70.60 89.26 x x 4.3 (25x10 frames) 8344 ckpt log json
tsn_r50_1x1x3_100e_kinetics400_rgb short-side 256 8 ResNet50 ImageNet 70.42 89.03 x x x 8343 ckpt log json
tsn_r50_dense_1x1x5_50e_kinetics400_rgb 340x256 8x3 ResNet50 ImageNet 70.18 89.10 69.15 88.56 12.7 (8x10 frames) 7028 ckpt log json
tsn_r50_320p_1x1x3_100e_kinetics400_rgb short-side 320 8x2 ResNet50 ImageNet 70.91 89.51 x x 10.7 (25x3 frames) 8344 ckpt log json
tsn_r50_320p_1x1x3_110e_kinetics400_flow short-side 320 8x2 ResNet50 ImageNet 55.70 79.85 x x x 8471 ckpt log json
tsn_r50_320p_1x1x3_kinetics400_twostream [1: 1]* x x ResNet50 ImageNet 72.76 90.52 x x x x x x x
tsn_r50_1x1x8_100e_kinetics400_rgb short-side 256 8 ResNet50 ImageNet 71.80 90.17 x x x 8343 ckpt log json
tsn_r50_320p_1x1x8_100e_kinetics400_rgb short-side 320 8x3 ResNet50 ImageNet 72.41 90.55 x x 11.1 (25x3 frames) 8344 ckpt log json
tsn_r50_320p_1x1x8_110e_kinetics400_flow short-side 320 8x4 ResNet50 ImageNet 57.76 80.99 x x x 8473 ckpt log json
tsn_r50_320p_1x1x8_kinetics400_twostream [1: 1]* x x ResNet50 ImageNet 74.64 91.77 x x x x x x x
tsn_r50_video_320p_1x1x3_100e_kinetics400_rgb short-side 320 8 ResNet50 ImageNet 71.11 90.04 x x x 8343 ckpt log json
tsn_r50_dense_1x1x8_100e_kinetics400_rgb 340x256 8 ResNet50 ImageNet 70.77 89.3 68.75 88.42 12.2 (8x10 frames) 8344 ckpt log json
tsn_r50_video_1x1x8_100e_kinetics400_rgb short-side 256 8 ResNet50 ImageNet 71.14 89.63 x x x 21558 ckpt log json
tsn_r50_video_dense_1x1x8_100e_kinetics400_rgb short-side 256 8 ResNet50 ImageNet 70.40 89.12 x x x 21553 ckpt log json

Here, we use [1: 1] to indicate that we combine the rgb and flow scores with coefficients 1:1 to get the two-stream prediction (without applying softmax); a minimal sketch is given below.
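
Concretely, the 1:1 combination is just a weighted sum of the per-class scores from the two streams before taking the argmax; in the following sketch the array shapes and variable names are assumptions.

import numpy as np

def two_stream_predict(rgb_scores, flow_scores, coeffs=(1.0, 1.0)):
    """Fuse raw (pre-softmax) per-class scores from the rgb and flow streams."""
    # rgb_scores, flow_scores: arrays of shape (num_videos, num_classes)
    fused = coeffs[0] * np.asarray(rgb_scores) + coeffs[1] * np.asarray(flow_scores)
    return fused.argmax(axis=1)  # predicted class index per video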

Using backbones from 3rd-party in TSN

It’s possible and convenient to use a 3rd-party backbone for TSN under the framework of MMAction2. Here we provide some examples:

config resolution gpus backbone pretrain top1 acc top5 acc ckpt log json
tsn_rn101_32x4d_320p_1x1x3_100e_kinetics400_rgb short-side 320 8x2 ResNeXt101-32x4d [MMCls] ImageNet 73.43 91.01 ckpt log json
tsn_dense161_320p_1x1x3_100e_kinetics400_rgb short-side 320 8x2 Densenet-161 [TorchVision] ImageNet 72.78 90.75 ckpt log json
tsn_swin_transformer_video_320p_1x1x3_100e_kinetics400_rgb short-side 320 8 Swin Transformer Base [timm] ImageNet 77.51 92.92 ckpt log json
  1. Note that some backbones in TIMM are not supported for various reasons. Please refer to PR #880 for details.

Kinetics-400 Data Benchmark (8-gpus, ResNet50, ImageNet pretrain; 3 segments)

In data benchmark, we compare:

  1. Different data preprocessing methods: (1) Resize video to 340x256, (2) Resize the short edge of video to 320px, (3) Resize the short edge of video to 256px;

  2. Different data augmentation methods: (1) MultiScaleCrop, (2) RandomResizedCrop (a rough pipeline sketch for the two augmentations is given after this list);

  3. Different testing protocols: (1) 25 frames x 10 crops, (2) 25 frames x 3 crops.
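
For reference, the two augmentation variants differ in only one pipeline step; the fragment below is a sketch with assumed transform arguments, not the literal benchmark configs.

# Sketch (assumed arguments): variant (1) uses MultiScaleCrop, variant (2) RandomResizedCrop.
multiscalecrop_steps = [
    dict(type='MultiScaleCrop', input_size=224, scales=(1, 0.875, 0.75, 0.66)),
    dict(type='Resize', scale=(224, 224), keep_ratio=False),
]
randomresizedcrop_steps = [
    dict(type='RandomResizedCrop'),
    dict(type='Resize', scale=(224, 224), keep_ratio=False),
]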

config resolution training augmentation testing protocol top1 acc top5 acc ckpt log json
tsn_r50_multiscalecrop_340x256_1x1x3_100e_kinetics400_rgb 340x256 MultiScaleCrop 25x10 frames 70.60 89.26 ckpt log json
x 340x256 MultiScaleCrop 25x3 frames 70.52 89.39 x x x
tsn_r50_randomresizedcrop_340x256_1x1x3_100e_kinetics400_rgb 340x256 RandomResizedCrop 25x10 frames 70.11 89.01 ckpt log json
x 340x256 RandomResizedCrop 25x3 frames 69.95 89.02 x x x
tsn_r50_multiscalecrop_320p_1x1x3_100e_kinetics400_rgb short-side 320 MultiScaleCrop 25x10 frames 70.32 89.25 ckpt log json
x short-side 320 MultiScaleCrop 25x3 frames 70.54 89.39 x x x
tsn_r50_randomresizedcrop_320p_1x1x3_100e_kinetics400_rgb short-side 320 RandomResizedCrop 25x10 frames 70.44 89.23 ckpt log json
x short-side 320 RandomResizedCrop 25x3 frames 70.91 89.51 x x x
tsn_r50_multiscalecrop_256p_1x1x3_100e_kinetics400_rgb short-side 256 MultiScaleCrop 25x10 frames 70.42 89.03 ckpt log json
x short-side 256 MultiScaleCrop 25x3 frames 70.79 89.42 x x x
tsn_r50_randomresizedcrop_256p_1x1x3_100e_kinetics400_rgb short-side 256 RandomResizedCrop 25x10 frames 69.80 89.06 ckpt log json
x short-side 256 RandomResizedCrop 25x3 frames 70.48 89.89 x x x

Kinetics-400 OmniSource Experiments

config resolution backbone pretrain w. OmniSource top1 acc top5 acc inference_time(video/s) gpu_mem(M) ckpt log json
tsn_r50_1x1x3_100e_kinetics400_rgb 340x256 ResNet50 ImageNet ✗ 70.6 89.3 4.3 (25x10 frames) 8344 ckpt log json
x 340x256 ResNet50 ImageNet ✓ 73.6 91.0 x 8344 ckpt x x
x short-side 320 ResNet50 IG-1B [1] ✗ 73.1 90.4 x 8344 ckpt x x
x short-side 320 ResNet50 IG-1B [1] ✓ 75.7 91.9 x 8344 ckpt x x

[1] We obtain the pre-trained model from torch-hub; the pretrained model we used is resnet50_swsl.

Kinetics-600

config resolution gpus backbone pretrain top1 acc top5 acc inference_time(video/s) gpu_mem(M) ckpt log json
tsn_r50_video_1x1x8_100e_kinetics600_rgb short-side 256 8x2 ResNet50 ImageNet 74.8 92.3 11.1 (25x3 frames) 8344 ckpt log json

Kinetics-700

config resolution gpus backbone pretrain top1 acc top5 acc inference_time(video/s) gpu_mem(M) ckpt log json
tsn_r50_video_1x1x8_100e_kinetics700_rgb short-side 256 8x2 ResNet50 ImageNet 61.7 83.6 11.1 (25x3 frames) 8344 ckpt log json

Something-Something V1

config resolution gpus backbone pretrain top1 acc top5 acc reference top1 acc reference top5 acc gpu_mem(M) ckpt log json
tsn_r50_1x1x8_50e_sthv1_rgb height 100 8 ResNet50 ImageNet 18.55 44.80 17.53 44.29 10978 ckpt log json
tsn_r50_1x1x16_50e_sthv1_rgb height 100 8 ResNet50 ImageNet 15.77 39.85 13.33 35.58 5691 ckpt log json

Something-Something V2

config resolution gpus backbone pretrain top1 acc top5 acc reference top1 acc reference top5 acc gpu_mem(M) ckpt log json
tsn_r50_1x1x8_50e_sthv2_rgb height 256 8 ResNet50 ImageNet 28.59 59.56 x x 10966 ckpt log json
tsn_r50_1x1x16_50e_sthv2_rgb height 256 8 ResNet50 ImageNet 20.89 49.16 x x 8337 ckpt log json

Moments in Time

config resolution gpus backbone pretrain top1 acc top5 acc gpu_mem(M) ckpt log json
tsn_r50_1x1x6_100e_mit_rgb short-side 256 8x2 ResNet50 ImageNet 26.84 51.6 8339 ckpt log json

Multi-Moments in Time

config resolution gpus backbone pretrain mAP gpu_mem(M) ckpt log json
tsn_r101_1x1x5_50e_mmit_rgb short-side 256 8x2 ResNet101 ImageNet 61.09 10467 ckpt log json

ActivityNet v1.3

config resolution gpus backbone pretrain top1 acc top5 acc gpu_mem(M) ckpt log json
tsn_r50_320p_1x1x8_50e_activitynet_video_rgb short-side 320 8x1 ResNet50 Kinetics400 73.93 93.44 5692 ckpt log json
tsn_r50_320p_1x1x8_50e_activitynet_clip_rgb short-side 320 8x1 ResNet50 Kinetics400 76.90 94.47 5692 ckpt log json
tsn_r50_320p_1x1x8_150e_activitynet_video_flow 340x256 8x2 ResNet50 Kinetics400 57.51 83.02 5780 ckpt log json
tsn_r50_320p_1x1x8_150e_activitynet_clip_flow 340x256 8x2 ResNet50 Kinetics400 59.51 82.69 5780 ckpt log json

HVU

config[1] tag category resolution gpus backbone pretrain mAP HATNet[2] HATNet-multi[2] ckpt log json
tsn_r18_1x1x8_100e_hvu_action_rgb action short-side 256 8x2 ResNet18 ImageNet 57.5 51.8 53.5 ckpt log json
tsn_r18_1x1x8_100e_hvu_scene_rgb scene short-side 256 8 ResNet18 ImageNet 55.2 55.8 57.2 ckpt log json
tsn_r18_1x1x8_100e_hvu_object_rgb object short-side 256 8 ResNet18 ImageNet 45.7 34.2 35.1 ckpt log json
tsn_r18_1x1x8_100e_hvu_event_rgb event short-side 256 8 ResNet18 ImageNet 63.7 38.5 39.8 ckpt log json
tsn_r18_1x1x8_100e_hvu_concept_rgb concept short-side 256 8 ResNet18 ImageNet 47.5 26.1 27.3 ckpt log json
tsn_r18_1x1x8_100e_hvu_attribute_rgb attribute short-side 256 8 ResNet18 ImageNet 46.1 33.6 34.9 ckpt log json
- Overall short-side 256 - ResNet18 ImageNet 52.6 40.0 41.3 - - -

[1] For simplicity, we train a separate model for each tag category as the baselines for HVU.

[2] The performance numbers of HATNet and HATNet-multi are taken from the paper Large Scale Holistic Video Understanding. HATNet is a two-branch convolutional network (one 2D branch, one 3D branch) that shares the same backbone (ResNet18) with us. Its inputs are 16- or 32-frame video clips (much longer than ours), while its input resolution is coarser (112 instead of 224). HATNet is trained on each individual task (each tag category), while HATNet-multi is trained on multiple tasks. Since there are no released codes or models for HATNet, we simply include the performance reported in the original paper.

Note

  1. The gpus column indicates the number of GPUs we used to get the checkpoint. Note that the configs we provide are for 8 GPUs by default. According to the Linear Scaling Rule, you may set the learning rate proportional to the batch size if you use a different number of GPUs or videos per GPU, e.g., lr=0.01 for 4 GPUs x 2 videos/gpu and lr=0.08 for 16 GPUs x 4 video/gpu.

  2. The inference_time is obtained with this benchmark script, where we use the frame-sampling strategy of the test setting and only measure the model inference time, excluding the IO and pre-processing time. For each setting, we use 1 GPU and set the batch size (videos per gpu) to 1 when calculating the inference time.

  3. The values in the columns named “reference” are the results obtained by training with the original repo, using the same model settings.

  4. The validation set of Kinetics400 we used consists of 19796 videos. These videos are available at Kinetics400-Validation. The corresponding data list (each line is of the format ‘video_id, num_frames, label_index’) and the label map are also available.

For more details on data preparation, you can refer to the corresponding parts in Data Preparation.

Train

You can use the following command to train a model.

python tools/train.py ${CONFIG_FILE} [optional arguments]

Example: train TSN model on Kinetics-400 dataset in a deterministic option with periodic validation.

python tools/train.py configs/recognition/tsn/tsn_r50_1x1x3_100e_kinetics400_rgb.py \
    --work-dir work_dirs/tsn_r50_1x1x3_100e_kinetics400_rgb \
    --validate --seed 0 --deterministic

For more details, you can refer to Training setting part in getting_started.

Test

You can use the following command to test a model.

python tools/test.py ${CONFIG_FILE} ${CHECKPOINT_FILE} [optional arguments]

Example: test TSN model on Kinetics-400 dataset and dump the result to a json file.

python tools/test.py configs/recognition/tsn/tsn_r50_1x1x3_100e_kinetics400_rgb.py \
    checkpoints/SOME_CHECKPOINT.pth --eval top_k_accuracy mean_class_accuracy \
    --out result.json

For more details, you can refer to Test a dataset part in getting_started.

Citation

@inproceedings{wang2016temporal,
  title={Temporal segment networks: Towards good practices for deep action recognition},
  author={Wang, Limin and Xiong, Yuanjun and Wang, Zhe and Qiao, Yu and Lin, Dahua and Tang, Xiaoou and Van Gool, Luc},
  booktitle={European conference on computer vision},
  pages={20--36},
  year={2016},
  organization={Springer}
}

X3D

X3D: Expanding Architectures for Efficient Video Recognition

Abstract

This paper presents X3D, a family of efficient video networks that progressively expand a tiny 2D image classification architecture along multiple network axes, in space, time, width and depth. Inspired by feature selection methods in machine learning, a simple stepwise network expansion approach is employed that expands a single axis in each step, such that good accuracy to complexity trade-off is achieved. To expand X3D to a specific target complexity, we perform progressive forward expansion followed by backward contraction. X3D achieves state-of-the-art performance while requiring 4.8x and 5.5x fewer multiply-adds and parameters for similar accuracy as previous work. Our most surprising finding is that networks with high spatiotemporal resolution can perform well, while being extremely light in terms of network width and parameters. We report competitive accuracy at unprecedented efficiency on video classification and detection benchmarks.

Results and Models

Kinetics-400

config resolution backbone top1 10-view top1 30-view reference top1 10-view reference top1 30-view ckpt
x3d_s_13x6x1_facebook_kinetics400_rgb short-side 320 X3D_S 72.7 73.2 73.1 [SlowFast] 73.5 [SlowFast] ckpt[1]
x3d_m_16x5x1_facebook_kinetics400_rgb short-side 320 X3D_M 75.0 75.6 75.1 [SlowFast] 76.2 [SlowFast] ckpt[1]

[1] The models are ported from the repo SlowFast and tested on our data. Currently, we only support the testing of X3D models; training will be available soon.

Note

  1. The values in the columns named “reference” are the results obtained by testing the checkpoint and code released in the original repo, using the same dataset as ours.

  2. The validation set of Kinetics400 we used consists of 19796 videos. These videos are available at Kinetics400-Validation. The corresponding data list (each line is of the format ‘video_id, num_frames, label_index’) and the label map are also available.

For more details on data preparation, you can refer to Kinetics400 in Data Preparation.

Test

You can use the following command to test a model.

python tools/test.py ${CONFIG_FILE} ${CHECKPOINT_FILE} [optional arguments]

Example: test X3D model on Kinetics-400 dataset and dump the result to a json file.

python tools/test.py configs/recognition/x3d/x3d_s_13x6x1_facebook_kinetics400_rgb.py \
    checkpoints/SOME_CHECKPOINT.pth --eval top_k_accuracy mean_class_accuracy \
    --out result.json --average-clips prob

For more details, you can refer to Test a dataset part in getting_started.

Citation

@misc{feichtenhofer2020x3d,
      title={X3D: Expanding Architectures for Efficient Video Recognition},
      author={Christoph Feichtenhofer},
      year={2020},
      eprint={2004.04730},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}

ResNet for Audio

Audiovisual SlowFast Networks for Video Recognition

Abstract

We present Audiovisual SlowFast Networks, an architecture for integrated audiovisual perception. AVSlowFast has Slow and Fast visual pathways that are deeply integrated with a Faster Audio pathway to model vision and sound in a unified representation. We fuse audio and visual features at multiple layers, enabling audio to contribute to the formation of hierarchical audiovisual concepts. To overcome training difficulties that arise from different learning dynamics for audio and visual modalities, we introduce DropPathway, which randomly drops the Audio pathway during training as an effective regularization technique. Inspired by prior studies in neuroscience, we perform hierarchical audiovisual synchronization to learn joint audiovisual features. We report state-of-the-art results on six video action classification and detection datasets, perform detailed ablation studies, and show the generalization of AVSlowFast to learn self-supervised audiovisual features. Code will be made available at: https://github.com/facebookresearch/SlowFast.
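
The DropPathway regularization mentioned above can be pictured as randomly silencing the audio pathway for an entire training iteration; the sketch below is a schematic illustration with an assumed drop probability, not the AVSlowFast implementation.

import torch

def drop_pathway(audio_feat, drop_prob=0.5, training=True):
    """Schematic DropPathway: zero the whole audio pathway with probability drop_prob."""
    if training and torch.rand(1).item() < drop_prob:
        return torch.zeros_like(audio_feat)
    return audio_feat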

Results and Models

Kinetics-400

config n_fft gpus backbone pretrain top1 acc/delta top5 acc/delta inference_time(video/s) gpu_mem(M) ckpt log json
tsn_r18_64x1x1_100e_kinetics400_audio_feature 1024 8 ResNet18 None 19.7 35.75 x 1897 ckpt log json
tsn_r18_64x1x1_100e_kinetics400_audio_feature + tsn_r50_video_320p_1x1x3_100e_kinetics400_rgb 1024 8 ResNet(18+50) None 71.50(+0.39) 90.18(+0.14) x x x x x

Note

  1. The gpus column indicates the number of GPUs we used to get the checkpoint. Note that the configs we provide are for 8 GPUs by default. According to the Linear Scaling Rule, you may set the learning rate proportional to the batch size if you use a different number of GPUs or videos per GPU, e.g., lr=0.01 for 4 GPUs x 2 videos/gpu and lr=0.08 for 16 GPUs x 4 videos/gpu.

  2. The inference_time is obtained with this benchmark script, where we use the frame-sampling strategy of the test setting and only measure the model inference time, excluding the IO and pre-processing time. For each setting, we use 1 GPU and set the batch size (videos per gpu) to 1 when calculating the inference time.

  3. The validation set of Kinetics400 we used consists of 19796 videos. These videos are available at Kinetics400-Validation. The corresponding data list (each line is of the format ‘video_id, num_frames, label_index’) and the label map are also available.

For more details on data preparation, you can refer to Prepare audio in Data Preparation.

Train

You can use the following command to train a model.

python tools/train.py ${CONFIG_FILE} [optional arguments]

Example: train ResNet model on Kinetics-400 audio dataset in a deterministic option with periodic validation.

python tools/train.py configs/audio_recognition/tsn_r18_64x1x1_100e_kinetics400_audio_feature.py \
    --work-dir work_dirs/tsn_r18_64x1x1_100e_kinetics400_audio_feature \
    --validate --seed 0 --deterministic

For more details, you can refer to Training setting part in getting_started.

Test

You can use the following command to test a model.

python tools/test.py ${CONFIG_FILE} ${CHECKPOINT_FILE} [optional arguments]

Example: test ResNet model on Kinetics-400 audio dataset and dump the result to a json file.

python tools/test.py configs/audio_recognition/tsn_r18_64x1x1_100e_kinetics400_audio_feature.py \
    checkpoints/SOME_CHECKPOINT.pth --eval top_k_accuracy mean_class_accuracy \
    --out result.json

For more details, you can refer to Test a dataset part in getting_started.

Fusion

For multi-modality fusion, you can use this simple script; the standard usage is:

python tools/analysis/report_accuracy.py --scores ${AUDIO_RESULT_PKL} ${VISUAL_RESULT_PKL} --datalist data/kinetics400/kinetics400_val_list_rawframes.txt --coefficient 1 1
  • AUDIO_RESULT_PKL: The output file of the audio model saved by tools/test.py with the --out argument.

  • VISUAL_RESULT_PKL: The output file of the visual model saved by tools/test.py with the --out argument. A rough sketch of the fusion computation is given below.
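
For orientation, the late fusion performed by the command above amounts to a weighted sum of the two score files followed by an accuracy computation; the sketch below makes assumptions about the pickle layout (a per-video array of class scores) and the datalist format, so treat it as an approximation of the script rather than the script itself.

import pickle

import numpy as np

def fuse_and_report(audio_pkl, visual_pkl, datalist, coeffs=(1.0, 1.0)):
    """Weighted late fusion of two per-video score files, then top-1 accuracy."""
    with open(audio_pkl, 'rb') as f:
        audio_scores = np.asarray(pickle.load(f))   # assumed shape: (num_videos, num_classes)
    with open(visual_pkl, 'rb') as f:
        visual_scores = np.asarray(pickle.load(f))  # assumed shape: (num_videos, num_classes)
    # assume each datalist line ends with the label index
    labels = np.array([int(line.replace(',', ' ').split()[-1]) for line in open(datalist)])
    fused = coeffs[0] * audio_scores + coeffs[1] * visual_scores
    top1 = float((fused.argmax(axis=1) == labels).mean())
    print(f'fused top-1 accuracy: {top1:.4f}')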

Citation

@article{xiao2020audiovisual,
  title={Audiovisual SlowFast Networks for Video Recognition},
  author={Xiao, Fanyi and Lee, Yong Jae and Grauman, Kristen and Malik, Jitendra and Feichtenhofer, Christoph},
  journal={arXiv preprint arXiv:2001.08740},
  year={2020}
}