Algorithms for cooperative multisensor surveillance
Abstract
The Video Surveillance and Monitoring (VSAM) team at Carnegie Mellon University (CMU) has developed an end-to-end, multicamera surveillance system that allows a single human operator to monitor activities in a cluttered environment using a distributed network of active video sensors. Video understanding algorithms have been developed to automatically detect people and vehicles, seamlessly track them using a network of cooperating active sensors, determine their three-dimensional locations with respect to a geospatial site model, and present this information to a human operator who controls the system through a graphical user interface. The goal is to automatically collect and disseminate real-time information to improve the situational awareness of security providers and decision makers. The feasibility of real-time video surveillance has been demonstrated within a multicamera testbed system developed on the campus of CMU. This paper presents an overview of the issues and algorithms involved in creating this semiautonomous, multicamera surveillance system.
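As a rough illustration of the per-sensor detection step mentioned in the abstract, the sketch below combines frame differencing with a slowly adapting background model to flag moving pixels. This is a minimal, generic illustration rather than the authors' actual algorithm; the function name `detect_motion` and all parameter values are hypothetical choices for the example.

```python
import numpy as np

def detect_motion(frames, alpha=0.05, diff_thresh=15, bg_thresh=25):
    """Yield a boolean foreground mask for each grayscale frame.

    `frames` is assumed to be a sequence of 2-D numpy arrays (H x W).
    Parameters are illustrative, not values from the paper.
    """
    background = frames[0].astype(np.float64)
    prev = frames[0].astype(np.float64)
    for frame in frames[1:]:
        cur = frame.astype(np.float64)
        # Frame differencing: pixels that changed since the previous frame.
        moving = np.abs(cur - prev) > diff_thresh
        # Background subtraction: pixels far from the adaptive background model.
        foreground = np.abs(cur - background) > bg_thresh
        # Adapt the background only where the scene appears stationary,
        # so recently stopped objects are not absorbed immediately.
        stationary = ~moving
        background[stationary] = ((1 - alpha) * background[stationary]
                                  + alpha * cur[stationary])
        prev = cur
        yield moving & foreground
```

In a full system, the resulting masks would be cleaned with morphological filtering and grouped into connected regions before being handed to the tracking and classification stages.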
BibTeX
@article{Collins-2001-8326,
  author  = {Robert Collins and Alan Lipton and Hironobu Fujiyoshi and Takeo Kanade},
  title   = {Algorithms for cooperative multisensor surveillance},
  journal = {Proceedings of the IEEE},
  year    = {2001},
  month   = {October},
  volume  = {89},
  number  = {10},
  pages   = {1456--1477},
}