
VASC Seminar

Chris Sweeney, PhD Candidate, University of California, Santa Barbara
Monday, September 14
3:00 pm to 4:00 pm
Removing Common Assumptions from Large Scale Structure-from-Motion

Event Location: NSH 1507
Bio: Chris Sweeney is currently a PhD candidate in the 4 Eyes Lab at the University of California, Santa Barbara. His research interests include multi-view geometry, structure from motion, and augmented reality, with a focus on using unorganized photo collections from the internet to create a complete and up-to-date 3D representation of the physical world. Prior to joining UCSB, Chris received bachelor's degrees in Computer Science and Mathematics from the University of Virginia. He has received an NSF Graduate Research Fellowship, a UCSB Graduate Opportunity Fellowship, a Google Outstanding Research Scholarship, and a Best Paper award at the International Symposium on Mixed and Augmented Reality (ISMAR) 2012, in addition to publications at top-tier venues such as CVPR, ECCV, ISMAR, 3DV, and TVCG. His Theia open-source library was named a finalist in the 2015 ACM Open Source Software Competition.

Abstract: Structure-from-Motion (SfM) is a powerful tool for computing 3D reconstructions from images of a scene and has wide applications in computer vision, scene recognition, and augmented and virtual reality. Standard SfM pipelines make strict assumptions about the capture devices in order to simplify the process of estimating camera geometry and 3D structure. Specifically, most methods require monocular cameras with known focal length calibration. When considering large-scale SfM from internet photo collections, EXIF calibration data cannot be used reliably. Further, the requirement of single-camera systems limits the scalability of SfM. In this presentation, I show how to remove these constraints with generalized mathematical formulations that do not require calibration and are not restricted to single-camera systems. First, I provide a mathematical representation of the “distributed” camera that extends single-camera frameworks to multi-camera scenarios. This formulation can be used to provide full generalizations to the absolute camera pose and relative camera pose problems. These generalizations are more expressive and extend the traditional single-camera problems to multiple (i.e., distributed) cameras, allowing for greater scalability. Second, I provide two efficient methods for estimating camera focal lengths when calibration is not available. Finally, I show how removing these constraints enables a simpler, more scalable SfM pipeline that is capable of handling uncalibrated cameras at scale.
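
For readers unfamiliar with the generalized (non-central) camera model behind the “distributed” camera idea, the following is a minimal sketch of the generalized absolute pose constraint in standard notation, not a formulation taken from the talk itself. Each observation is a ray with origin $c_i$ and unit direction $d_i$ expressed in the rig frame, and the goal is to find the rotation $R$ and translation $t$ that place every world point $X_i$ on its corresponding ray:

$$ R X_i + t = c_i + \lambda_i d_i, \qquad \lambda_i > 0, \quad i = 1, \dots, n. $$

When all ray origins coincide ($c_i = 0$), this reduces to the familiar single-camera absolute pose (PnP) problem; allowing distinct origins is what lets the same formulation cover multi-camera rigs and, more generally, a collection of cameras treated as one distributed camera.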