2:00 pm to 3:30 pm
Newell-Simon Hall 3305
Areas of Interest:
Human-centric 3D scene analysis, scene synthesis for 3D content creation and learning through simulation, data visualization
Abstract:
Creating 3D environments is hard. Experts spend considerable time and effort using complex software to create virtual 3D interiors. This 3D content creation bottleneck limits the use of virtual environments in entertainment, education, research, and design. I address this bottleneck by leveraging the insight that real indoor environments are designed by people, for people to inhabit.
In this talk, I will discuss a human-centric representation of the structure and semantics of 3D environments, learned from observations of people acting in the real world. First, I will demonstrate how we can use this embodied representation to analyze 3D environments and predict how likely they are to support specific human actions. Then I will show how we can use the same representation to generate 3D environments and human poses depicting common actions. Finally, I will describe my work on using virtual environments to build a simulation platform for research on intelligent embodied agents. With this platform, we can leverage computer graphics to generate 3D environments with controlled variation, enabling systematic learning for computer vision, robotics, NLP, and AI.
Bio:
Manolis Savva is a postdoc at Princeton University. He completed his PhD in the Stanford graphics lab, advised by Pat Hanrahan. His research focuses on human-centric 3D scene analysis and generation, and on simulation of 3D interior environments. He has also worked on data visualization, grounding natural language to 3D content, and establishing several large-scale 3D datasets: ShapeNet, SUNCG, ScanNet, and Matterport3D. More details at: http://graphics.stanford.edu/~msavva
Host: Keenan Crane