Structure from motion blur in low light
Abstract
In theory, the precision of structure-from-motion estimation is known to increase as camera motion increases. In practice, larger camera motions induce motion blur, particularly in low light, where longer exposures are needed. If the camera center moves during exposure, the trajectory traces in a motion-blurred image encode the underlying 3D structure of points and the motion of the camera. In this paper, we propose an algorithm to explicitly estimate the 3D structure of point light sources and the camera motion from a single motion-blurred image of a low-light scene containing point light sources. The algorithm identifies extremal points of the traces mapped out by the point sources in the image and classifies them into start and end sets. Each trace is charted out incrementally using local curvature, providing correspondences between start and end points. We use these correspondences to obtain an initial estimate of the epipolar geometry embedded in the motion-blurred image. The reconstruction and the 2D traces are then used to estimate the motion of the camera during the interval of capture, and multiple view bundle adjustment is applied to refine the estimates.
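The abstract's key step, recovering epipolar geometry from matched start/end points of the blur traces, can be illustrated with a standard normalized 8-point algorithm. This is only a sketch of that generic step, not the paper's implementation: the correspondences here are synthetic, and the function name `eight_point` is our own.

```python
import numpy as np

def eight_point(p1, p2):
    """Estimate the fundamental matrix F (x2^T F x1 = 0) from >= 8
    point correspondences using the normalized 8-point algorithm.
    p1, p2: (N, 2) arrays of matched image points (e.g. trace start
    points and trace end points from a motion-blurred image)."""
    def normalize(p):
        # Translate centroid to origin, scale mean distance to sqrt(2).
        c = p.mean(axis=0)
        s = np.sqrt(2.0) / np.mean(np.linalg.norm(p - c, axis=1))
        T = np.array([[s, 0.0, -s * c[0]],
                      [0.0, s, -s * c[1]],
                      [0.0, 0.0, 1.0]])
        ph = np.column_stack([p, np.ones(len(p))]) @ T.T
        return ph, T

    x1, T1 = normalize(p1)
    x2, T2 = normalize(p2)
    # Each correspondence gives one linear constraint on the 9 entries
    # of F: sum_{i,j} F_ij * x2_i * x1_j = 0 (row-major ordering).
    A = np.column_stack([x2[:, i] * x1[:, j]
                         for i in range(3) for j in range(3)])
    _, _, Vt = np.linalg.svd(A)
    F = Vt[-1].reshape(3, 3)
    # Enforce the rank-2 constraint on F.
    U, S, Vt = np.linalg.svd(F)
    S[2] = 0.0
    F = U @ np.diag(S) @ Vt
    # Undo the normalizing transforms.
    return T2.T @ F @ T1
```

In the paper's setting, `p1` and `p2` would come from the classified start and end sets of the traces; the resulting F seeds the reconstruction that bundle adjustment later refines.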
BibTeX
@conference{Zheng-2011-122198,
  author    = {Yali Zheng and Shohei Nobuhara and Yaser Sheikh},
  title     = {Structure from motion blur in low light},
  booktitle = {Proceedings of (CVPR) Computer Vision and Pattern Recognition},
  year      = {2011},
  month     = {June},
  pages     = {2569--2576},
}