Mirror Mirror: Crowdsourcing Better Portraits

Jun-Yan Zhu, Aseem Agarwala, Alexei A. Efros, Eli Shechtman, and Jue Wang
Journal Article, ACM Transactions on Graphics (TOG), Vol. 33, No. 6, November 2014

Abstract

We describe a method for providing feedback on portrait expressions and for selecting the most attractive expressions from large video/photo collections. We capture a video of a subject's face while they are engaged in a task designed to elicit a range of positive emotions. We then use crowdsourcing to score the captured expressions for their attractiveness. We use these scores to train a model that can automatically predict the attractiveness of different expressions of a given person. We also train a cross-subject model that evaluates the portrait attractiveness of novel subjects, and show how it can be used to automatically mine attractive photos from personal photo collections. Furthermore, we show how, with a small amount ($5 worth) of extra crowdsourcing, we can substantially improve the cross-subject model by "fine-tuning" it to a new individual using active learning. Finally, we demonstrate a training app that helps people learn how to mimic their best expressions.
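
As a rough illustration only (not the authors' implementation), the pipeline outlined in the abstract, i.e. crowdsourced attractiveness scores, a per-subject predictor, and a cross-subject model adapted to a new person with a small active-learning budget, could be prototyped along the following lines. The Ridge regressor, the synthetic 64-D expression features, the simulated crowd scores, and the ensemble-disagreement query rule are all placeholder assumptions rather than details taken from the paper.

import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

def crowd_scores(X, w):
    # Stand-in for crowdsourced attractiveness scores in [0, 1].
    return 1.0 / (1.0 + np.exp(-(X @ w) / 8.0))

# Per-subject model: 500 frames of one person, 64-D expression features (all synthetic).
w_subject = rng.normal(size=64)
X = rng.normal(size=(500, 64))
y = crowd_scores(X, w_subject)
per_subject = Ridge(alpha=1.0).fit(X[:400], y[:400])
print("per-subject held-out R^2:", per_subject.score(X[400:], y[400:]))

# Cross-subject model trained on pooled data from other people.
w_generic = w_subject + rng.normal(scale=0.8, size=64)
X_pool = rng.normal(size=(2000, 64))
y_pool = crowd_scores(X_pool, w_generic)
cross_subject = Ridge(alpha=1.0).fit(X_pool, y_pool)

# "Fine-tune" to a new subject with a small labeling budget, choosing which frames
# to send to the crowd by ensemble disagreement (an uncertainty proxy).
X_new = rng.normal(size=(300, 64))           # frames of the new subject
y_new = crowd_scores(X_new, w_subject)       # labels we could buy from the crowd
labeled = list(range(5))                     # a handful of seed labels
for _ in range(5):                           # a few active-learning rounds
    preds = []
    for _ in range(5):
        idx = rng.choice(labeled, size=len(labeled), replace=True)
        m = Ridge(alpha=1.0).fit(np.vstack([X_pool, X_new[idx]]),
                                 np.concatenate([y_pool, y_new[idx]]))
        preds.append(m.predict(X_new))
    var = np.var(preds, axis=0)
    var[labeled] = -1.0                      # never re-query an already-labeled frame
    labeled.extend(np.argsort(var)[-5:].tolist())

weights = np.concatenate([np.ones(len(y_pool)), np.full(len(labeled), 25.0)])
fine_tuned = Ridge(alpha=1.0).fit(np.vstack([X_pool, X_new[labeled]]),
                                  np.concatenate([y_pool, y_new[labeled]]),
                                  sample_weight=weights)
print("cross-subject R^2 on new subject:", round(cross_subject.score(X_new, y_new), 3))
print("fine-tuned R^2 with", len(labeled), "crowd labels:",
      round(fine_tuned.score(X_new, y_new), 3))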

BibTeX

@article{Zhu-2014-125699,
  author  = {Jun-Yan Zhu and Aseem Agarwala and Alexei A. Efros and Eli Shechtman and Jue Wang},
  title   = {Mirror Mirror: Crowdsourcing Better Portraits},
  journal = {ACM Transactions on Graphics (TOG)},
  year    = {2014},
  month   = {November},
  volume  = {33},
  number  = {6},
}