Automatic state discovery for unstructured audio scene classification

Julian Ramos, Sajid Siddiqi, Artur Dubrawski, Geoffrey Gordon, and Abhishek Sharma
Conference Paper, Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP '10), pp. 2154–2157, March 2010

Abstract

In this paper we present a novel scheme for unstructured audio scene classification with three highly desirable properties: autonomy, scalability, and robustness. Our scheme is based on our recently introduced machine learning algorithm, Simultaneous Temporal And Contextual Splitting (STACS), which discovers the appropriate number of states and efficiently learns accurate Hidden Markov Model (HMM) parameters for the given data. STACS-based algorithms train HMMs up to five times faster than Baum-Welch and avoid the overfitting commonly encountered when learning large state-space HMMs with Expectation Maximization (EM) methods such as Baum-Welch, while achieving superior classification results on a very diverse dataset with minimal pre-processing. Furthermore, our scheme has proven highly effective in real-world applications and has been integrated into a commercial surveillance system as an event detection component.
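To make the state-discovery idea concrete, the Python sketch below is a minimal, hypothetical illustration and is not the authors' STACS code. Instead of STACS's simultaneous split-and-refine procedure, it uses the simpler baseline of fitting Gaussian HMMs with increasing state counts via the third-party hmmlearn library and selecting the state count by BIC; the function names and toy data are assumptions made for the example.

import numpy as np
from hmmlearn.hmm import GaussianHMM

def n_free_params(k, d):
    # Free parameters of a k-state Gaussian HMM with diagonal covariances:
    # initial distribution (k - 1), transition rows (k * (k - 1)),
    # means (k * d), diagonal variances (k * d).
    return (k - 1) + k * (k - 1) + 2 * k * d

def select_hmm_by_bic(X, max_states=10, n_iter=50, seed=0):
    # Fit HMMs with 1..max_states states and keep the lowest-BIC model,
    # a crude stand-in for discovering the "appropriate number of states".
    n, d = X.shape
    best_model, best_bic = None, np.inf
    for k in range(1, max_states + 1):
        model = GaussianHMM(n_components=k, covariance_type="diag",
                            n_iter=n_iter, random_state=seed)
        model.fit(X)
        log_likelihood = model.score(X)
        bic = -2.0 * log_likelihood + n_free_params(k, d) * np.log(n)
        if bic < best_bic:
            best_model, best_bic = model, bic
    return best_model, best_bic

if __name__ == "__main__":
    # Toy data: frames drawn from two well-separated regimes, standing in
    # for acoustic feature vectors (e.g., MFCCs) of an audio scene.
    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(0.0, 1.0, size=(200, 4)),
                   rng.normal(5.0, 1.0, size=(200, 4))])
    model, bic = select_hmm_by_bic(X, max_states=5)
    print(f"selected {model.n_components} states, BIC = {bic:.1f}")

In a classification setting like the paper's, one such model would be trained per audio scene class, and a new recording assigned to the class whose HMM yields the highest likelihood.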

BibTeX

@conference{Ramos-2010-121927,
author = {Julian Ramos and Sajid Siddiqi and Artur Dubrawski and Geoffrey Gordon and Abhishek Sharma},
title = {Automatic state discovery for unstructured audio scene classification},
booktitle = {Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP '10)},
year = {2010},
month = {March},
pages = {2154--2157},
}