Title: Physics-based object-aware ego-centric human pose estimation
Abstract:
We investigate the roles of body kinematics, dynamics, and scene objects in 3D human pose estimation from a head-mounted camera. Kinematic models encode the natural range of human motion, while dynamics models can react to the spatial arrangement between the human and objects in the scene. We therefore propose a method for object-aware 3D egocentric pose estimation that tightly integrates kinematics modeling, dynamics modeling, and scene object information. Unlike prior kinematics- or dynamics-based approaches, where the two components are used disjointly, we synergize them via dynamics-regulated training. At each timestep, a kinematic model provides a target pose from video evidence and the current simulation state. A pre-learned dynamics model then attempts to mimic this kinematic pose in a physics simulator, which computes the new simulation state. The misalignment between the pose instructed by the kinematic model and the pose produced by the dynamics model is used to further improve the kinematic model. By factoring in the 6DoF pose of objects (e.g., chairs, boxes) in the scene, we demonstrate, for the first time, the ability to estimate physically plausible 3D human-object interactions from a single wearable camera. We evaluate our method in both controlled laboratory settings and real-world scenarios.
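The dynamics-regulated training loop described in the abstract can be sketched as follows. Everything below is an illustrative assumption, not the authors' implementation: the kinematic model is a toy linear map, the "physics simulator" is a one-line mixing rule, and all dimensions and names are made up for the sketch.

```python
import numpy as np

POSE_DIM = 6  # toy pose dimensionality, an assumption for illustration

def kinematic_model(video_feature, sim_state, weights):
    """Propose a target pose from video evidence and the simulation state."""
    inp = np.concatenate([video_feature, sim_state])
    return inp @ weights  # linear stand-in for a learned kinematic network

def dynamics_step(sim_state, target_pose):
    """Toy simulator: mimic the target pose imperfectly and return the new
    simulation state together with the physically realized pose."""
    realized = 0.9 * target_pose + 0.1 * sim_state  # imperfect tracking
    return realized, realized  # here the new state is the realized pose

def dynamics_regulated_step(video_feature, sim_state, weights, lr=1e-2):
    """One timestep: propose a pose, mimic it in simulation, then use the
    misalignment between instructed and realized pose to update the
    kinematic model (the dynamics model stays fixed, as in the abstract)."""
    target = kinematic_model(video_feature, sim_state, weights)
    new_state, realized = dynamics_step(sim_state, target)
    misalign = target - realized
    inp = np.concatenate([video_feature, sim_state])
    # Gradient of 0.5*||misalign||^2 w.r.t. the linear weights,
    # treating the realized pose as a constant.
    weights = weights - lr * np.outer(inp, misalign)
    return new_state, weights, float(np.linalg.norm(misalign))
```

Iterating `dynamics_regulated_step` over a video drives the kinematic model's proposals toward poses the dynamics model can actually realize, which is the intended regulating effect.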
Committee:
Prof. Kris Kitani,
Prof. Fernando De La Torre Frade,
Ye Yuan
Zoom Link: https://cmu.zoom.us/j/98722574771?pwd=SFpwclAzL1UxVlpQTTFVRGV2M2JTdz09
Meeting ID: 987 2257 4771
Passcode: 241904