Acquisition and Reconstruction of Multidimensional Visual Information - Robotics Institute Carnegie Mellon University

VASC Seminar

Prof. Yebin Liu Associate Professor Tsinghua University
Thursday, July 27
3:00 pm to 4:00 pm
Acquisition and Reconstruction of Multidimensional Visual Information

Event Location: NSH 1507
Bio: Yebin Liu is currently an associate professor at Tsinghua University. He received the B.E. degree from the Beijing University of Posts and Telecommunications, China, in 2002, and the Ph.D. degree from the Automation Department, Tsinghua University, Beijing, China, in 2009. In 2010 he was a postdoctoral researcher in the Computer Graphics Group of the Max Planck Institute for Informatics, Germany. His research areas include computer vision, computer graphics, and computational photography, and he has published more than 40 papers in top conferences and journals including SIGGRAPH, CVPR, ICCV, ECCV, ICCP, and TPAMI. He won the First Prize of the National Technology Invention Award in 2012 and is an NSFC Excellent Young Scientist.

Abstract: The acquisition and reconstruction of multidimensional visual information (the plenoptic function) lie at the intersection of computer vision, computer graphics, and computational photography. This talk introduces two key technologies in this field: the multiscale hybrid camera array and deep-learning-based plenoptic reconstruction. We start from the demand for capturing high-spatial-resolution video, i.e., gigapixel video acquisition (Ref1). Specifically, we propose a novel multiscale camera array, together with a cross-scale feature matching and image warping approach, to synthesize gigapixel video of outdoor scenes. We further emphasize that the hybrid camera array is also a key element in the latest light field camera designs, which brings forward another topic of plenoptic reconstruction: light field acquisition and reconstruction. In this part, we introduce our work on view interpolation and reference-based super-resolution of light fields from multiscale image inputs (Ref2, Ref3, etc.) via a deep-learning-based super-resolution approach. Finally, based on acquisition along the view dimension, depth information of the scene can also be obtained, enabling our recent work on real-time reconstruction of dynamic non-rigid scenes using a single depth camera (Ref4, Ref5).
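For context on the view-interpolation topic mentioned in the abstract: the simplest baseline for synthesizing a novel view from a sampled light field is to bilinearly blend the nearest captured views in the angular dimensions. The sketch below illustrates that classic baseline only — it is not the speaker's deep-learning method — and the array layout (`lf[s, t, H, W]`) and function name are illustrative assumptions.

```python
import numpy as np

def interpolate_view(lf, s, t):
    """Synthesize a novel view at fractional angular position (s, t)
    by bilinearly blending the four nearest captured views.

    lf   : array of shape (S, T, H, W) -- a 2D grid of grayscale views,
           i.e. the angular dimensions of a 4D light field (assumed layout).
    s, t : fractional angular coordinates, 0 <= s <= S-1, 0 <= t <= T-1.
    """
    # Indices of the four surrounding views on the camera grid.
    s0, t0 = int(np.floor(s)), int(np.floor(t))
    s1 = min(s0 + 1, lf.shape[0] - 1)
    t1 = min(t0 + 1, lf.shape[1] - 1)
    # Fractional offsets become the bilinear blending weights.
    ds, dt = s - s0, t - t0
    return ((1 - ds) * (1 - dt) * lf[s0, t0] +
            (1 - ds) * dt       * lf[s0, t1] +
            ds       * (1 - dt) * lf[s1, t0] +
            ds       * dt       * lf[s1, t1])

# Toy example: a 2x2 grid of constant 4x4 views valued 0, 1, 2, 3.
lf = np.stack([np.stack([np.full((4, 4), float(2 * i + j))
                         for j in range(2)]) for i in range(2)])
mid = interpolate_view(lf, 0.5, 0.5)  # equal-weight average of all four views
```

Pure blending like this ghosts whenever scene disparity between views is non-negligible, which is precisely why learned, disparity-aware interpolation of the kind discussed in the talk improves on it.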