Depth-wise Decomposition for Accelerating Separable Convolutions in Efficient Convolutional Neural Networks

Yihui He, Jianing Qian, and Jianren Wang
Workshop paper, CVPR '19 Efficient Deep Learning for Computer Vision Workshop, June 2019

Abstract

Very deep convolutional neural networks (CNNs) have been firmly established as the primary methods for many computer vision tasks. However, most state-of-the-art CNNs are large, which results in high inference latency. Recently, depth-wise separable convolution has been proposed for image recognition tasks on computationally limited platforms such as robotics and self-driving cars. Though it is much faster than its counterpart, the regular convolution, it sacrifices accuracy. In this paper, we propose a novel decomposition approach based on SVD, namely depth-wise decomposition, for expanding regular convolutions into depth-wise separable convolutions while maintaining high accuracy. We show that our approach can be further generalized to the multi-channel and multi-layer cases, based on Generalized Singular Value Decomposition (GSVD) [59]. We conduct thorough experiments with the latest ShuffleNet V2 model [47] on both a randomly synthesized dataset and a large-scale image recognition dataset, ImageNet [10]. Our approach outperforms channel decomposition [73] on all datasets. More importantly, our approach improves the Top-1 accuracy of ShuffleNet V2 by 2%.
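To make the core idea concrete, below is a minimal NumPy sketch of the simplest single-layer case: a regular convolution kernel W of shape (C_out, C_in, k, k) is approximated by a depth-wise kernel D and a pointwise kernel P via a per-input-channel rank-1 SVD of the weights. The function name depthwise_decompose and the weight-space formulation are illustrative assumptions; the paper's full method generalizes this with GSVD to the multi-channel and multi-layer cases rather than using this plain per-channel SVD.

import numpy as np

def depthwise_decompose(W):
    """Illustrative sketch (not the paper's exact algorithm): approximate a
    regular conv kernel W (C_out, C_in, k, k) by a depth-wise kernel
    D (C_in, k, k) and a pointwise kernel P (C_out, C_in), so that
    W[n, c] ~= P[n, c] * D[c].  Each input channel's slice W[:, c] is
    reduced to rank one with an SVD."""
    C_out, C_in, k, _ = W.shape
    D = np.zeros((C_in, k, k))
    P = np.zeros((C_out, C_in))
    for c in range(C_in):
        slice_c = W[:, c].reshape(C_out, k * k)           # (C_out, k*k)
        U, s, Vt = np.linalg.svd(slice_c, full_matrices=False)
        P[:, c] = s[0] * U[:, 0]                          # pointwise (1x1) weights
        D[c] = Vt[0].reshape(k, k)                        # shared spatial filter
    return D, P

# Example: relative error of the rank-1 approximation on a random kernel.
W = np.random.randn(64, 32, 3, 3)
D, P = depthwise_decompose(W)
W_approx = P[:, :, None, None] * D[None, :, :, :]
rel_err = np.linalg.norm(W - W_approx) / np.linalg.norm(W)

A depth-wise convolution with kernels D followed by a 1x1 convolution with weights P is equivalent to a regular convolution with the composed kernel W_approx, which is why the per-channel rank-1 factorization yields a depth-wise separable replacement for the original layer.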

BibTeX

@inproceedings{He-2019-126886,
author = {Yihui He and Jianing Qian and Jianren Wang},
title = {Depth-wise Decomposition for Accelerating Separable Convolutions in Efficient Convolutional Neural Networks},
booktitle = {Proceedings of CVPR '19 Efficient Deep Learning for Computer Vision Workshop},
year = {2019},
month = {June},
}