10:00 am to 11:00 am
Gates-Hillman 6115
Abstract: Learning to generate future frames of a video sequence is a challenging research problem with great relevance to reinforcement learning, planning and robotics. Existing approaches either fail to capture the full distribution of outcomes, yield blurry generations, or both. In this talk I will address two important aspects of video generation: (i) what is the right representation space in which to perform the prediction task, and (ii) how can we model the inherent uncertainty in video sequences?
With respect to the representation question, I will present a model that leverages the temporal coherence of video and a novel adversarial loss to learn a representation that factorizes each frame into a stationary part and a temporally varying component. Given this representation, future frame prediction reduces to the simpler problem of predicting the time-varying components of the learned representation. We show that by applying a standard LSTM model to the time-varying features, we are able to generate convincing long-range frame predictions.
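For a concrete picture of the factorization idea, the PyTorch sketch below splits each frame into a content (stationary) code and a pose (time-varying) code, and trains an LSTM to predict future pose codes. All module names, dimensions, and the prediction loss are illustrative assumptions for this sketch, not the speaker's implementation; in particular, the adversarial loss that enforces the factorization and the frame decoder are omitted.

```python
# Minimal sketch: factorized content/pose encoders + LSTM over pose codes.
# Architectures and sizes are assumptions, not the method described in the talk.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Maps a 64x64 RGB frame to a flat feature vector (content or pose)."""
    def __init__(self, out_dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1),   # 64 -> 32
            nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1),  # 32 -> 16
            nn.ReLU(),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), # 16 -> 8
            nn.ReLU(),
            nn.Flatten(),
            nn.Linear(128 * 8 * 8, out_dim),
        )

    def forward(self, x):
        return self.net(x)

content_enc = Encoder(out_dim=128)   # stationary part, shared across the clip
pose_enc = Encoder(out_dim=16)       # temporally varying part, one per frame
pose_lstm = nn.LSTM(input_size=16, hidden_size=256, batch_first=True)
pose_out = nn.Linear(256, 16)        # next-step pose prediction head

frames = torch.randn(4, 10, 3, 64, 64)               # (batch, time, C, H, W)
B, T = frames.shape[:2]
poses = torch.stack([pose_enc(frames[:, t]) for t in range(T)], dim=1)
content = content_enc(frames[:, 0])                   # one content code per clip

# Predict the pose at t+1 from poses up to t; a decoder (not shown) would
# combine `content` with each predicted pose to render the future frames.
h, _ = pose_lstm(poses[:, :-1])
pred_next_poses = pose_out(h)                         # (B, T-1, 16)
loss = nn.functional.mse_loss(pred_next_poses, poses[:, 1:])
```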
In order to address the inherent uncertainty in the dynamics of the world, I will present a new stochastic video generation model that combines a deterministic frame predictor with time-dependent stochastic latent variables sampled from a learned prior. The approach is simple and easily trained end-to-end using variational inference. I will present a variety of video generation results on different datasets and show how the learned prior can be interpreted as a predictive model of uncertainty.
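The sketch below illustrates the general shape of such a model under stated assumptions: a deterministic predictor conditioned on a per-step latent z_t, an inference network q(z_t | x_1:t) used during training, and a learned prior p(z_t | x_1:t-1) fit to it via a KL term. Every architecture choice, dimension, and weight here is an illustrative assumption, not the speaker's implementation.

```python
# Hedged sketch of stochastic video generation with a learned prior.
# Linear encoder/decoder stand in for real conv networks; all sizes are toy.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GaussianLSTM(nn.Module):
    """LSTM cell that emits the mean and log-variance of a Gaussian over z."""
    def __init__(self, in_dim, hidden, z_dim):
        super().__init__()
        self.cell = nn.LSTMCell(in_dim, hidden)
        self.mu = nn.Linear(hidden, z_dim)
        self.logvar = nn.Linear(hidden, z_dim)

    def forward(self, x, state):
        h, c = self.cell(x, state)
        return self.mu(h), self.logvar(h), (h, c)

feat_dim, z_dim, hidden = 128, 10, 256
encoder = nn.Linear(3 * 64 * 64, feat_dim)         # stand-in frame encoder
decoder = nn.Linear(hidden, 3 * 64 * 64)           # stand-in frame decoder
prior = GaussianLSTM(feat_dim, hidden, z_dim)      # sees frames up to t-1
posterior = GaussianLSTM(feat_dim, hidden, z_dim)  # also sees the target frame
predictor = nn.LSTMCell(feat_dim + z_dim, hidden)  # deterministic core

def init_state(cell, batch):
    return (torch.zeros(batch, cell.hidden_size),
            torch.zeros(batch, cell.hidden_size))

frames = torch.randn(4, 10, 3 * 64 * 64)           # flattened toy frames
B, T = frames.shape[:2]
pr_s, po_s = init_state(prior.cell, B), init_state(posterior.cell, B)
pd_s = init_state(predictor, B)
recon_loss, kld = 0.0, 0.0

for t in range(1, T):
    h_prev = encoder(frames[:, t - 1])
    h_t = encoder(frames[:, t])
    mu_q, lv_q, po_s = posterior(h_t, po_s)        # posterior sees frame t
    mu_p, lv_p, pr_s = prior(h_prev, pr_s)         # prior only sees the past
    z = mu_q + torch.randn_like(mu_q) * (0.5 * lv_q).exp()  # reparameterize
    h, c = predictor(torch.cat([h_prev, z], dim=1), pd_s)
    pd_s = (h, c)
    recon_loss = recon_loss + F.mse_loss(decoder(h), frames[:, t])
    # KL(q || p) between two diagonal Gaussians, in log-variance form.
    kld = kld + 0.5 * (lv_p - lv_q - 1 +
                       (lv_q.exp() + (mu_q - mu_p) ** 2) / lv_p.exp()
                       ).sum(1).mean()

loss = recon_loss + 1e-4 * kld   # KL weight is an arbitrary example value
```

At generation time the posterior is dropped and z_t is sampled from the learned prior instead, which is what lets the prior act as the predictive model of uncertainty mentioned above.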
Bio: Emily Denton is a PhD student at the Courant Institute of Mathematical Sciences at New York University. She is supported by a Google Fellowship and previously enjoyed support from the Natural Sciences and Engineering Research Council of Canada (NSERC). Her research focuses on unsupervised learning and video prediction. More broadly, she is interested in integrating predictive models of the world into reinforcement learning, as well as in the fairness and interpretability of machine learning models. Emily has previously interned at DeepMind and Facebook AI Research.
Homepage: www.cs.nyu.edu/~denton