Wen-Hsuan Chu - MSR Thesis Talk - Robotics Institute Carnegie Mellon University

MSR Speaking Qualifier

Wen-Hsuan Chu
PhD Student, Robotics Institute, Carnegie Mellon University
Tuesday, April 28, 3:00 pm to 4:00 pm
Location: TBA
Wen-Hsuan Chu – MSR Thesis Talk

ZOOM Link: https://cmu.zoom.us/j/4417558334

Title: Neural Batch Sampling with Reinforcement Learning for Semi-Supervised Anomaly Detection

Abstract:

We are interested in the detection and segmentation of anomalies in images where the anomalies are typically small (e.g., a small tear in woven fabric or a broken pin on an IC chip). From a statistical learning point of view, anomalies have low occurrence probability and do not come from the main modes of the data distribution. Learning a generative model of anomalous data from a natural distribution of data can be difficult because the data distribution is heavily skewed towards a large amount of non-anomalous data. When training a generative model on such imbalanced data using an iterative learning algorithm like stochastic gradient descent (SGD), we observe an expected yet interesting trend in the loss values (a measure of the learned model's performance) after each gradient update across data samples. Naturally, as the model sees more non-anomalous data during training, the loss values on non-anomalous data samples decrease, while the loss values on anomalous data samples fluctuate.
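The trend described above can be reproduced on toy data. The sketch below is a minimal illustration, not the talk's method: it uses synthetic 8-dimensional data, a tiny linear autoencoder, and (as a simplification) trains only on the non-anomalous portion of the skewed dataset, recording each sample's reconstruction loss after every epoch. All names and shapes here are assumptions chosen for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic skewed data: 95 "normal" samples from one mode, 5 outliers.
# (Illustrative stand-in for the imbalanced distribution described above.)
normal = rng.normal(0.0, 1.0, size=(95, 8))
anomalous = rng.normal(5.0, 1.0, size=(5, 8))
data = np.vstack([normal, anomalous])

# Tiny linear autoencoder: 8 -> 2 -> 8. Not the talk's architecture.
W_enc = rng.normal(0, 0.1, size=(8, 2))
W_dec = rng.normal(0, 0.1, size=(2, 8))
lr = 0.01
epochs = 50

def recon_loss(x):
    """Mean squared reconstruction error for one sample."""
    x_hat = x @ W_enc @ W_dec
    return float(np.mean((x - x_hat) ** 2))

# Per-sample loss recorded after every epoch of SGD.
profiles = np.zeros((len(data), epochs))
for epoch in range(epochs):
    # Simplification: SGD over the normal samples only; the talk trains
    # on the full skewed mix, where the same qualitative trend appears.
    for x in rng.permutation(normal):
        z = x @ W_enc
        err = z @ W_dec - x
        # Gradients of the per-sample MSE w.r.t. the two weight matrices.
        g_dec = np.outer(z, err) * 2 / len(x)
        g_enc = np.outer(x, err @ W_dec.T) * 2 / len(x)
        W_dec -= lr * g_dec
        W_enc -= lr * g_enc
    for i, x in enumerate(data):
        profiles[i, epoch] = recon_loss(x)

# Normal samples' losses shrink with training, while the rarely-modeled
# anomalies remain poorly reconstructed.
print(profiles[:95, -1].mean() < profiles[95:, -1].mean())  # True
```

Each row of `profiles` is one sample's loss profile; the separation between the two groups of rows is the signal the hypothesis below builds on.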


In this work, our key hypothesis is that this change in loss values during training can be used as a feature to identify anomalous data. In particular, we propose a novel semi-supervised learning algorithm for anomaly detection and segmentation using an anomaly classifier that uses as input the loss profile of a data sample processed through an autoencoder. The loss profile is defined as a sequence of reconstruction loss values produced during iterative training. To amplify the difference in loss profiles between anomalous and non-anomalous data, we also introduce a Reinforcement Learning based meta-algorithm, which we call the neural batch sampler, to strategically sample training batches during autoencoder training. Experimental results on multiple datasets with a high diversity of textures and objects, often with multiple modes of defects within them, demonstrate the capabilities and effectiveness of our method when compared with existing state-of-the-art baselines.
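To make the classification step concrete, here is a deliberately simple stand-in: it summarizes a loss profile by its final value and average slope, then applies a hypothetical threshold rule. The talk's method instead feeds the full profile into a learned classifier and shapes the profiles with the RL-based neural batch sampler; the feature names and threshold below are assumptions for illustration only.

```python
import numpy as np

def profile_features(profile):
    """Summarize a loss profile: final loss and average per-step slope.
    (Illustrative hand-crafted features; the actual method learns from
    the full profile instead.)"""
    profile = np.asarray(profile, dtype=float)
    final = profile[-1]
    slope = (profile[-1] - profile[0]) / (len(profile) - 1)
    return final, slope

def classify(profile, loss_threshold=1.0):
    """Flag a sample as anomalous if its loss stays high at the end of
    training. (Hypothetical threshold rule, not the learned classifier.)"""
    final, _ = profile_features(profile)
    return bool(final > loss_threshold)

# A decreasing profile (normal-looking) vs. a flat, high one (anomalous-looking).
normal_profile = [1.0, 0.6, 0.35, 0.2, 0.12]
anomalous_profile = [5.2, 5.0, 5.1, 4.9, 5.0]

print(classify(normal_profile))     # False
print(classify(anomalous_profile))  # True
```

The value of the neural batch sampler in this picture is that it widens the gap between the two kinds of profiles, making even simple downstream decision rules more reliable.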

Committee:
Kris M. Kitani (advisor)
Sebastian Scherer
Xiaofang Wang