The Statistical Recurrent Unit
Conference Paper, Proceedings of (ICML) International Conference on Machine Learning, pp. 2671-2680, August 2017
Abstract
Sophisticated gated recurrent neural network architectures like LSTMs and GRUs have been shown to be highly effective in a myriad of applications. We develop an un-gated unit, the statistical recurrent unit (SRU), that is able to learn long-term dependencies in data by keeping only moving averages of statistics. The SRU's architecture is simple and un-gated, and it contains a number of parameters comparable to that of LSTMs; yet SRUs compare favorably to the more sophisticated LSTM and GRU alternatives, often outperforming one or both on various tasks. We demonstrate the efficacy of SRUs relative to LSTMs and GRUs in an unbiased manner by optimizing each architecture's hyperparameters on both synthetic and real-world tasks.
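The abstract only outlines the mechanism, so below is a minimal NumPy sketch of a single SRU step under the assumption that the unit follows the paper's general design: summary statistics are fed back from the moving averages, new statistics are computed from the input and that feedback, and exponential moving averages are kept at several decay scales. All names (W_r, W_phi, alphas, the ReLU nonlinearity, and the tiny dimensions in the example) are illustrative assumptions for exposition, not the authors' released code.

import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def sru_step(x_t, mu_prev, params, alphas):
    """One SRU update: feed back summary statistics from the averages,
    then refresh the moving averages kept at each decay scale in `alphas`."""
    W_r, b_r, W_phi, U_phi, b_phi, W_o, b_o = params
    r_t = relu(W_r @ mu_prev + b_r)                   # recurrent stats from averages
    phi_t = relu(W_phi @ r_t + U_phi @ x_t + b_phi)   # new statistics this step
    # Un-gated memory: one exponential moving average of phi_t per decay rate.
    mu_t = np.concatenate([a * mu + (1 - a) * phi_t
                           for a, mu in zip(alphas,
                                            np.split(mu_prev, len(alphas)))])
    o_t = relu(W_o @ mu_t + b_o)                      # output features
    return o_t, mu_t

# Example with tiny, made-up dimensions: 3 decay scales, 4-dim statistics.
alphas = [0.0, 0.5, 0.99]
d_x, d_r, d_phi, d_o = 2, 5, 4, 3
rng = np.random.default_rng(0)
params = (rng.normal(size=(d_r, len(alphas) * d_phi)), np.zeros(d_r),
          rng.normal(size=(d_phi, d_r)), rng.normal(size=(d_phi, d_x)),
          np.zeros(d_phi),
          rng.normal(size=(d_o, len(alphas) * d_phi)), np.zeros(d_o))
mu = np.zeros(len(alphas) * d_phi)
for x_t in rng.normal(size=(10, d_x)):                # run over a short sequence
    o_t, mu = sru_step(x_t, mu, params, alphas)

Because the averages at slow decay rates (alpha near 1) change very little per step, they can retain information over long horizons without any gating, which is the intuition the abstract appeals to.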
BibTeX
@conference{Oliva-2017-119743,
  author    = {J. Oliva and B. Poczos and J. Schneider},
  title     = {The Statistical Recurrent Unit},
  booktitle = {Proceedings of (ICML) International Conference on Machine Learning},
  year      = {2017},
  month     = {August},
  pages     = {2671--2680},
}
Copyright notice: This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. These works may not be reposted without the explicit permission of the copyright holder.