FERA 2017 – Addressing Head Pose in the Third Facial Expression Recognition and Analysis Challenge

Michel F. Valstar, Enrique Sánchez-Lozano, Jeffrey F. Cohn, Laszlo A. Jeni, Jeffrey M. Girard, Zheng Zhang, Lijun Yin, and Maja Pantic
Conference Paper, Proceedings of 12th IEEE International Conference on Automatic Face & Gesture Recognition (FG '17), pp. 839 - 847, May, 2017

Abstract

The field of Automatic Facial Expression Analysis has grown rapidly in recent years. However, despite progress in new approaches as well as benchmarking efforts, most evaluations still focus on either posed expressions, near-frontal recordings, or both. This makes it hard to tell how existing expression recognition approaches perform under conditions where faces appear in a wide range of poses (or camera views) while displaying ecologically valid expressions. The main obstacle to assessing this is the lack of suitable data, and the challenge proposed here addresses this limitation. The FG 2017 Facial Expression Recognition and Analysis challenge (FERA 2017) extends FERA 2015 to the estimation of Action Unit occurrence and intensity under different camera views. In this paper we present the third challenge in automatic recognition of facial expressions, to be held in conjunction with the 12th IEEE International Conference on Automatic Face and Gesture Recognition, May 2017, in Washington, D.C., United States. Two sub-challenges are defined: the detection of AU occurrence, and the estimation of AU intensity. In this work we outline the evaluation protocol, the data used, and the results of a baseline method for both sub-challenges.
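As a rough point of reference for the two sub-challenges, the sketch below computes the kind of per-AU scores commonly reported in FERA evaluations: F1 for binary occurrence detection and ICC(3,1) for intensity estimation. The choice of metrics and the helper functions (`f1_binary`, `icc_3_1`) are illustrative assumptions, not the official challenge scripts; the paper itself defines the evaluation protocol.

```python
# Minimal sketch of per-AU scoring in the spirit of the FERA sub-challenges.
# Assumptions: F1 for occurrence, ICC(3,1) for intensity; the official
# metrics and evaluation code are defined by the challenge organisers.
import numpy as np


def f1_binary(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    """F1 score for binary AU occurrence labels (1 = AU present)."""
    tp = np.sum((y_true == 1) & (y_pred == 1))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    denom = 2 * tp + fp + fn
    return 2 * tp / denom if denom > 0 else 0.0


def icc_3_1(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    """ICC(3,1) between ground-truth and predicted AU intensities (0-5)."""
    data = np.stack([y_true, y_pred], axis=1).astype(float)  # n frames x 2 "raters"
    n, k = data.shape
    grand = data.mean()
    row_means = data.mean(axis=1)
    col_means = data.mean(axis=0)
    ss_total = ((data - grand) ** 2).sum()
    ss_rows = k * ((row_means - grand) ** 2).sum()   # between frames
    ss_cols = n * ((col_means - grand) ** 2).sum()   # between raters
    ss_err = ss_total - ss_rows - ss_cols
    ms_rows = ss_rows / (n - 1)
    ms_err = ss_err / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Synthetic labels for one hypothetical AU, purely for illustration.
    occ_true = rng.integers(0, 2, size=1000)
    occ_pred = np.where(rng.random(1000) < 0.9, occ_true, 1 - occ_true)
    int_true = rng.integers(0, 6, size=1000)
    int_pred = np.clip(int_true + rng.integers(-1, 2, size=1000), 0, 5)
    print(f"Occurrence F1: {f1_binary(occ_true, occ_pred):.3f}")
    print(f"Intensity ICC: {icc_3_1(int_true, int_pred):.3f}")
```

In practice such scores would be computed per AU and per camera view and then averaged, but the aggregation scheme used for the official ranking is specified in the paper, not here.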

BibTeX

@conference{Valstar-2017-119662,
author = {Michel F. Valstar and Enrique Sánchez-Lozano and Jeffrey F. Cohn and Laszlo A. Jeni and Jeffrey M. Girard and Zheng Zhang and Lijun Yin and Maja Pantic},
title = {FERA 2017 - Addressing Head Pose in the Third Facial Expression Recognition and Analysis Challenge},
booktitle = {Proceedings of 12th IEEE International Conference on Automatic Face \& Gesture Recognition (FG '17)},
year = {2017},
month = {May},
pages = {839--847},
}