3:00 pm to 4:00 pm
GHC 6501
Abstract: This talk tells two tales about image-classification systems, both of which are motivated by the real-world deployment of such systems.
The first tale introduces a new convolutional neural network architecture, called multi-scale DenseNets, with the ability to adapt dynamically to computational resource limits at inference time. The network combines progressively growing multi-scale convolutions, dense connectivity, and a series of classifiers attached at intermediate layers. At inference time, it spends less computation on “easy” images and uses the surplus computation to obtain higher accuracy on “hard” images.
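The following is a minimal sketch of the early-exit idea described above: intermediate classifiers let a prediction stop as soon as it is confident enough, so easy images use less computation. The class name, the confidence threshold, and the batch-size-one assumption are illustrative choices, not the authors' implementation.

```python
import torch
import torch.nn as nn

class EarlyExitNet(nn.Module):
    """Illustrative network with a classifier after every feature block."""

    def __init__(self, blocks, classifiers):
        super().__init__()
        self.blocks = nn.ModuleList(blocks)            # feature-extraction stages
        self.classifiers = nn.ModuleList(classifiers)  # one classifier per stage

    @torch.no_grad()
    def anytime_predict(self, x, threshold=0.9):
        """Return the prediction of the first intermediate classifier whose
        top-class confidence exceeds `threshold` (assumes a single image,
        i.e. batch size 1). Hard images fall through to later, more
        expensive classifiers."""
        features = x
        for block, clf in zip(self.blocks, self.classifiers):
            features = block(features)
            probs = clf(features).softmax(dim=1)
            confidence, prediction = probs.max(dim=1)
            if confidence.item() >= threshold:
                return prediction   # early exit: cheap prediction
        return prediction           # last classifier's output
```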
The second tale introduces a practical defense method against adversarial examples. Unlike prior work that seeks robustness via regularization, we obtain robustness via input transformation. Our defense successfully counters 60% of white-box attacks and 90% of black-box attacks by all popular attack methods. Moreover, our defense is difficult to attack with current methods, in particular because it is non-differentiable and randomized.
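As a hedged illustration of the input-transformation idea, the sketch below applies a randomized, non-differentiable preprocessing step (random rescaling followed by bit-depth reduction) to each image before classification. The specific transforms, the 224x224 target size, and the parameter values are assumptions for illustration, not the authors' exact defense.

```python
import random
import torch
import torch.nn.functional as F

def transform_input(x, low=200, high=256, bits=3):
    """x: image batch of shape (N, C, H, W) with values in [0, 1]."""
    # Randomized rescaling: resize to a random resolution, then back to 224x224.
    size = random.randint(low, high)
    x = F.interpolate(x, size=(size, size), mode="bilinear", align_corners=False)
    x = F.interpolate(x, size=(224, 224), mode="bilinear", align_corners=False)
    # Bit-depth reduction: quantize pixel values (non-differentiable step).
    levels = 2 ** bits
    return torch.round(x * (levels - 1)) / (levels - 1)

def defended_predict(model, x):
    """Classify the transformed input instead of the raw (possibly adversarial) input."""
    with torch.no_grad():
        return model(transform_input(x)).argmax(dim=1)
```

Because the transformation is resampled at every call and includes a quantization step, gradient-based white-box attacks cannot straightforwardly backpropagate through the defended pipeline, which is the property the abstract highlights.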
Bio: Laurens van der Maaten is a Research Scientist at Facebook AI Research in New York. Before that, he was an Assistant Professor at Delft University of Technology (The Netherlands) and a postdoctoral researcher at the University of California, San Diego. He received his PhD from Tilburg University (The Netherlands) in 2009. Laurens is interested in a variety of topics in machine learning and computer vision. His specific research topics include learning embeddings for visualization, large-scale learning, visual reasoning, and cost-sensitive learning.
Homepage: http://lvdmaaten.github.io/