3:30 pm to 12:00 am
Event Location: NSH 3305
Bio: David Hsu is an associate professor of computer science at the National University of Singapore and a member of the NUS Graduate School for Integrative Sciences & Engineering (NGS). His research spans robotics, computational biology, and geometric computation. His current interests include robot motion planning under uncertainty.
He received a B.Sc. in computer science & mathematics from the University of British Columbia, Canada, and a Ph.D. in computer science from Stanford University, USA. After leaving Stanford, he worked at Compaq Computer Corp.’s Cambridge Research Laboratory and at the University of North Carolina at Chapel Hill. At the National University of Singapore, he held the Sung Kah Kay Assistant Professorship and was a Fellow of the Singapore-MIT Alliance.
Abstract: Motion planning with partially observable, i.e., imperfect, state information is an essential capability for autonomous robots operating reliably in uncertain and dynamic environments. Partially observable Markov decision processes (POMDPs) provide a principled, general framework for such planning tasks. They have been applied successfully to several non-trivial robotic tasks, including coastal navigation, grasping, and target tracking, but a major challenge is to scale up POMDP algorithms to more complex robotic systems.
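For readers unfamiliar with the formalism: a POMDP agent does not know its state exactly; it maintains a belief, i.e., a probability distribution over states, and updates it by Bayes' rule after each action and observation. Below is a minimal sketch of this standard belief update for a discrete POMDP; the array names and shapes are illustrative assumptions, not taken from the talk.

    import numpy as np

    def belief_update(b, a, o, T, Z):
        """Bayes-rule belief update for a discrete POMDP.

        b: current belief over states, shape (S,)
        a: action index
        o: observation index
        T: transition model, T[a, s, s2] = P(s2 | s, a), shape (A, S, S)
        Z: observation model, Z[a, s2, o] = P(o | s2, a), shape (A, S, O)
        """
        predicted = b @ T[a]             # predict: push the belief through the dynamics
        unnorm = Z[a, :, o] * predicted  # correct: weight by the observation likelihood
        return unnorm / unnorm.sum()     # renormalize to a probability distribution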
One key obstacle is the “curse of dimensionality”: in a POMDP that models a robotic task, the dimensionality of the robot’s belief space grows with the number of states. However, complex robotic systems often have mixed observability: even when a robot’s state is not fully observable, some components of the state may still be fully observable. This leads to structural properties that we can exploit to reduce the dimensionality of the belief space drastically, resulting in a much faster POMDP algorithm. I will show experimental results (in simulation) on several robotic tasks with large state spaces, including one that involves multiple robots.
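To make the dimensionality reduction concrete: under mixed observability the state factors as s = (x, y), where x is observed exactly and only y is uncertain, so the belief needs to cover only the hidden component (dimension |Y|) rather than the full state space (dimension |X|·|Y|). The following is a hedged sketch of such a factored belief update in the spirit of the approach above; the factored model layout (Tx, Ty, Z) is my own assumption, not the talk's notation.

    import numpy as np

    def momdp_belief_update(x, b_y, a, x_next, o, Tx, Ty, Z):
        """Belief update when the state factors as s = (x, y), with x observed.

        The belief is carried only over y, so its dimension is |Y|, not |X|*|Y|.

        x, x_next: observed state component before and after the action
        b_y: belief over the hidden component, shape (Y,)
        Tx: Tx[a, x, y, x2]     = P(x2 | x, y, a)
        Ty: Ty[a, x, y, x2, y2] = P(y2 | x, y, a, x2)
        Z:  Z[a, x2, y2, o]     = P(o | x2, y2, a)
        """
        # Propagate the hidden-component belief, conditioning on the observed x_next.
        predicted = np.einsum('y,y,yz->z',
                              b_y, Tx[a, x, :, x_next], Ty[a, x, :, x_next, :])
        unnorm = Z[a, x_next, :, o] * predicted
        return unnorm / unnorm.sum()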
Furthermore, to better understand the computational complexity of POMDP planning, we use the notion of the “covering number” and show that an approximately optimal POMDP solution can be computed in time polynomial in the covering number of a belief space. The covering number highlights several general properties, including not only fully observed state variables but also beliefs with sparse support, smooth beliefs, and so on, all of which reduce the covering number and thus the complexity of POMDP planning.
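For intuition, the covering number of a belief set is the minimum number of balls of a given radius needed to cover it; properties like those above shrink the set of reachable beliefs and hence this number. As an illustration only (not the algorithm from the talk), a greedy procedure can upper-bound the covering number of a sampled belief set:

    import numpy as np

    def greedy_cover_size(beliefs, eps):
        """Upper-bound the eps-covering number of a set of sampled beliefs.

        beliefs: belief vectors, shape (N, S)
        eps: covering radius in L1 distance (a common metric on beliefs)

        Every point not kept as a center is within eps of some existing
        center, so the retained centers form a valid eps-cover and their
        count bounds the covering number of the sampled set from above.
        """
        centers = []
        for b in beliefs:
            if all(np.abs(b - c).sum() > eps for c in centers):
                centers.append(b)
        return len(centers)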