Activity Understanding of Scripted Performances - Robotics Institute Carnegie Mellon University

VASC Seminar

Robert Collins, Associate Professor, Penn State University
Wednesday, December 8
10:50 am to 11:50 am
Activity Understanding of Scripted Performances

Abstract: The PSU Taichi for Smart Health project has been doing a deep dive into vision-based analysis of 24-form Yang-style Taichi (TaijiQuan). A key property of Taichi, shared by martial arts katas and prearranged form exercises in other sports, is practice of a scripted routine to build both mental and physical competence. The scripted nature of routines yields strong expectations on pose and motion that can be leveraged to perform "interpretation by alignment": treating one performance as a reference with annotated labels that can be transferred to new routines by spatio-temporal alignment, analogous to the way a brain atlas is used to guide spatial interpretation of medical images.

I'll discuss some research aspects of this project, including:

1) Collection of a large, multi-modal dataset of 100 sequences of 24-form Yang-style Taichi, performed by 10 subjects with varying levels of experience. Each performance includes simultaneously recorded Vicon 3D marker and joint data, two synchronized and calibrated video views, and foot pressure maps measured by Tekscan F-scan insole sensors.

2) Unsupervised learning of an encoder that maps short temporal windows of 3D body pose+motion into vectors suitable for comparison by cosine similarity/distance, enabling efficient temporal alignment of routines by dynamic time warping.

3) Learning regression functions to predict foot pressure maps (dynamics) from body pose (kinematics), and, more generally, developing methods to estimate biomechanical information like center of pressure and base of support, two key components for assessing postural and gait stability, from vision-only sensors.

This is joint work with Yanxi Liu (CSE/EE), John Challis (Kinesiology), and student members of the PSU Lab for Perception, Action and Cognition.
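The "interpretation by alignment" idea described above can be sketched as follows. This is a minimal illustration, not the project's actual implementation: it assumes each performance has already been encoded into a sequence of embedding vectors (one per short temporal window), and it uses a basic dynamic time warping recurrence with cosine distance to align a new performance against a labeled reference, then copies per-window labels across the alignment path.

```python
import numpy as np

def cosine_distance(a, b):
    # 1 - cosine similarity between two embedding vectors.
    return 1.0 - np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)

def dtw_align(ref, new):
    """Dynamic time warping between two sequences of embedding vectors.

    Returns the alignment path as a list of (ref_index, new_index) pairs,
    ordered from the start of both sequences to the end.
    """
    n, m = len(ref), len(new)
    # cost[i, j] = minimal cumulative distance aligning ref[:i] with new[:j].
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = cosine_distance(ref[i - 1], new[j - 1])
            cost[i, j] = d + min(cost[i - 1, j - 1],   # match
                                 cost[i - 1, j],        # skip a ref window
                                 cost[i, j - 1])        # repeat a ref window
    # Backtrack from the end to recover the alignment path.
    path, i, j = [], n, m
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = int(np.argmin([cost[i - 1, j - 1], cost[i - 1, j], cost[i, j - 1]]))
        if step == 0:
            i, j = i - 1, j - 1
        elif step == 1:
            i -= 1
        else:
            j -= 1
    return path[::-1]

def transfer_labels(ref_labels, path, new_len):
    """Copy per-window annotations from the reference onto the new performance."""
    new_labels = [None] * new_len
    for i, j in path:
        new_labels[j] = ref_labels[i]
    return new_labels
```

Because a new performance may dwell longer on some forms than the reference does, the warping path can map several consecutive new windows onto one reference window (and vice versa); the label transfer simply follows that path, so timing differences between performers are absorbed by the alignment rather than by the labels.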


Bio: Dr. Collins co-directs the Laboratory for Perception, Action, and Cognition (LPAC) in the CSE department at Penn State University (PSU). His research area is computer vision, with an emphasis on video scene understanding, automated surveillance, human activity modeling, and real-time tracking. He has been PI/co-PI of numerous NSF grants on topics including persistent object tracking, detection and tracking of pedestrians, human pose estimation, and analysis of social behavior in crowds. He received his PhD in Computer Science from the University of Massachusetts at Amherst. Dr. Collins was an associate research professor at CMU RI from 1996 to 2005. Highlights of his time at CMU include work on the DARPA Video Surveillance and Monitoring (VSAM) project, serving as co-PI of the DARPA HumanID project, and developing visualization tools and calibration routines for the 30-camera EyeVision system. His work on appearance-based tracking of moving objects from moving camera platforms has been used to track ground vehicles from UAVs and to navigate an unmanned water vehicle on "the Mon" river.


Homepage: http://www.cse.psu.edu/~rtc12/


Sponsored in part by: Facebook Reality Labs Pittsburgh