Photo-Sketching: Inferring Contour Drawings from Images
Abstract
Edges, boundaries and contours are important subjects of study in both computer graphics and computer vision. On one hand, they are the 2D elements that convey 3D shapes; on the other hand, they are indicative of occlusion events and thus of the separation of objects or semantic concepts. In this paper, we aim to generate contour drawings: boundary-like drawings that capture the outline of the visual scene. Prior art often casts this problem as boundary detection. However, the visual cues present in boundary-detection output differ from those in contour drawings, and the artistic style is ignored. We address these issues by collecting a new dataset of contour drawings and proposing a learning-based method that resolves diversity in the annotation and, unlike boundary detectors, can work with imperfect alignment between the annotation and the actual ground truth. Our method surpasses previous methods quantitatively and qualitatively. Surprisingly, when our model is fine-tuned on BSDS500, it achieves state-of-the-art performance in salient boundary detection, suggesting that contour drawing might be a scalable alternative to boundary annotation, one that is also easier and more interesting for annotators to produce.
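The abstract mentions a learning-based method that "resolves diversity in the annotation", i.e. each photo may come with several equally valid human drawings. Below is a minimal, hedged sketch (in PyTorch) of one plausible way to handle this: penalize the generated contour map only against its closest human annotation. This is an illustrative assumption, not necessarily the authors' exact loss; the names `generator`, `min_over_annotations_loss`, and the tensor shapes are hypothetical.

```python
import torch
import torch.nn.functional as F

def min_over_annotations_loss(pred, sketches):
    """Loss tolerant to annotation diversity (illustrative sketch).

    pred:     (B, 1, H, W) generated contour map.
    sketches: (B, K, 1, H, W) K human contour drawings per image.
    """
    # Per-annotation L1 distance, averaged over pixels -> (B, K).
    per_sketch = F.l1_loss(
        pred.unsqueeze(1).expand_as(sketches), sketches, reduction="none"
    ).flatten(2).mean(dim=2)
    # Only the closest annotation is penalized, so the model may commit
    # to any one plausible drawing style for each image.
    return per_sketch.min(dim=1).values.mean()

# Hypothetical usage with an image-to-image generator:
# fake_sketch = generator(photo)                              # (B, 1, H, W)
# loss = min_over_annotations_loss(fake_sketch, gt_sketches)  # gt: (B, K, 1, H, W)
```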
BibTeX
@conference{Li-2019-110418,
  author    = {Mengtian Li and Zhe Lin and Radomír Měch and Ersin Yumer and Deva Ramanan},
  title     = {Photo-Sketching: Inferring Contour Drawings from Images},
  booktitle = {Proceedings of IEEE Winter Conference on Applications of Computer Vision (WACV '19)},
  year      = {2019},
  month     = {January},
  pages     = {1403--1412},
  keywords  = {Contour drawing, sketch generation, boundary detection, scalable data collection},
}