AttentionNAS: Spatiotemporal Attention Cell Search for Video Classification
Abstract
Convolutional operations have two limitations: (1) they do not explicitly model where to focus, since the same filter is applied at every position, and (2) they are ill-suited to modeling long-range dependencies, since each filter operates only on a small neighborhood. While attention operations can alleviate both limitations, many design choices must still be resolved in order to use attention, especially when applying it to videos. Toward a principled way of applying attention to videos, we address the task of spatiotemporal attention cell search. We propose a novel search space for spatiotemporal attention cells, which allows the search algorithm to flexibly explore a wide range of design choices within the cell. The discovered attention cells can be seamlessly inserted into existing backbone networks, e.g., I3D or S3D, and improve video classification accuracy by more than 2% on both the Kinetics-600 and MiT datasets. The discovered attention cells outperform non-local blocks on both datasets and generalize well across different modalities, backbones, and datasets. Inserting our attention cells into I3D-R50 yields state-of-the-art performance on both datasets.
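For intuition, the sketch below shows the kind of spatiotemporal attention the abstract refers to: a non-local-style self-attention block (the baseline the searched cells are compared against) inserted residually into a 3D-CNN feature map. This is a minimal PyTorch sketch, not the searched AttentionNAS cell; the class name NonLocalBlock3D, the reduction factor, and the insertion pattern are illustrative assumptions.

# Minimal sketch (PyTorch) of a non-local-style spatiotemporal attention
# block. NOT the searched AttentionNAS cell; names and shapes are
# illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class NonLocalBlock3D(nn.Module):
    """Self-attention over all (T, H, W) positions of a video feature map."""

    def __init__(self, channels: int, reduction: int = 2):
        super().__init__()
        inner = channels // reduction
        # 1x1x1 convolutions project features to query/key/value spaces.
        self.query = nn.Conv3d(channels, inner, kernel_size=1)
        self.key = nn.Conv3d(channels, inner, kernel_size=1)
        self.value = nn.Conv3d(channels, inner, kernel_size=1)
        self.out = nn.Conv3d(inner, channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, C, T, H, W) feature map from a backbone such as I3D/S3D.
        b, c, t, h, w = x.shape
        q = self.query(x).flatten(2).transpose(1, 2)   # (B, N, C'), N = T*H*W
        k = self.key(x).flatten(2)                     # (B, C', N)
        v = self.value(x).flatten(2).transpose(1, 2)   # (B, N, C')
        # Every position attends to every other position, capturing the
        # long-range dependencies that small convolutional kernels miss.
        attn = F.softmax(q @ k / (q.shape[-1] ** 0.5), dim=-1)  # (B, N, N)
        y = (attn @ v).transpose(1, 2).reshape(b, -1, t, h, w)
        # Residual connection lets the block drop into a pretrained
        # backbone without disturbing its initial behavior.
        return x + self.out(y)

# Example: attend over an intermediate I3D-style feature map.
feats = torch.randn(2, 64, 8, 14, 14)    # (batch, channels, T, H, W)
print(NonLocalBlock3D(64)(feats).shape)  # torch.Size([2, 64, 8, 14, 14])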
BibTeX
@conference{Wang-2020-126796,
author = {Xiaofang Wang and Xuehan Xiong and Maxim Neumann and A. J. Piergiovanni and Michael S. Ryoo and Anelia Angelova and Kris M. Kitani and Wei Hua},
title = {AttentionNAS: Spatiotemporal Attention Cell Search for Video Classification},
booktitle = {Proceedings of (ECCV) European Conference on Computer Vision},
year = {2020},
month = {August},
pages = {449--465},
}