HydraNets: Specialized dynamic architectures for efficient inference
Abstract
There is growing interest in improving the design of deep network architectures to be both accurate and low cost. This paper explores semantic specialization as a mechanism for improving the computational efficiency (accuracy-per-unit-cost) of inference in the context of image classification. Specifically, we propose a network architecture template called HydraNet, which enables state-of-the-art architectures for image classification to be transformed into dynamic architectures that exploit conditional execution for efficient inference. HydraNets are wide networks containing distinct components specialized to compute features for visually similar classes, but they retain efficiency by dynamically selecting only a small number of components to evaluate for any one input image. This design is made possible by a soft gating mechanism that encourages component specialization during training and accurately performs component selection during inference. We evaluate the HydraNet approach on both the CIFAR-100 and ImageNet classification tasks. On CIFAR, applying the HydraNet template to the ResNet and DenseNet families of models reduces inference cost by 2-4x while retaining the accuracy of the baseline architectures. On ImageNet, applying the HydraNet template improves accuracy by up to 2.5% when compared to an efficient baseline architecture with similar inference cost.
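To make the template concrete, the following is a minimal PyTorch sketch of how the pieces described above (a shared stem, specialized branch components, a gate that selects the top-k branches per input, and a combiner) might fit together. All module names, layer sizes, the gate design, and the averaging combiner are illustrative assumptions for exposition, not the paper's exact configuration.

    # Minimal sketch of a HydraNet-style dynamic architecture.
    # Sizes, the gate design, and the mean combiner are assumptions,
    # not the authors' exact configuration.
    import torch
    import torch.nn as nn

    class HydraNet(nn.Module):
        def __init__(self, num_classes=100, num_branches=16, k=4, width=64):
            super().__init__()
            # Shared stem: features common to all branches.
            self.stem = nn.Sequential(
                nn.Conv2d(3, width, 3, padding=1), nn.BatchNorm2d(width),
                nn.ReLU(), nn.AdaptiveAvgPool2d(8),
            )
            # Branches: components specialized (during training) to
            # subsets of visually similar classes.
            self.branches = nn.ModuleList(
                nn.Sequential(
                    nn.Conv2d(width, width, 3, padding=1),
                    nn.BatchNorm2d(width), nn.ReLU(),
                    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                )
                for _ in range(num_branches)
            )
            # Gate: scores branches from stem features; only the
            # top-k branches are evaluated for a given input.
            self.gate = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                      nn.Linear(width, num_branches))
            self.head = nn.Linear(width, num_classes)
            self.k = k

        def forward(self, x):
            s = self.stem(x)
            scores = self.gate(s)                      # (B, num_branches)
            topk = scores.topk(self.k, dim=1).indices  # (B, k) selected components
            # Evaluated per example for clarity; a batched gather is
            # faster in practice.
            feats = []
            for i in range(x.size(0)):
                branch_feats = torch.stack(
                    [self.branches[j](s[i:i+1]) for j in topk[i].tolist()], dim=0)
                feats.append(branch_feats.mean(dim=0))  # combine selected features
            return self.head(torch.cat(feats, dim=0))

At inference, only k of the num_branches branch subnetworks execute per image, which is where the cost savings come from; the paper's soft gating additionally encourages each branch to specialize during training, a detail this sketch does not implement.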
BibTeX
@inproceedings{Mullapudi-2018-122642,
  author    = {Ravi Teja Mullapudi and William R. Mark and Noam Shazeer and Kayvon Fatahalian},
  title     = {HydraNets: Specialized Dynamic Architectures for Efficient Inference},
  booktitle = {Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
  year      = {2018},
  month     = {June},
  pages     = {8080--8089},
}