VASC Seminar

Dr. Michel Valstar, Research Associate, Imperial College London
Tuesday, October 18
3:00 pm
The Next Generation of Facial Expression Recognition Systems

Event Location: NSH 1507
Bio: Dr. Michel F. Valstar is currently a Visiting Researcher at MIT’s Media Lab and a Research Associate in the Intelligent Behaviour Understanding Group (iBUG) at Imperial College London. He received his master’s degree in Electrical Engineering from Delft University of Technology in 2005 and his PhD in Computer Science from Imperial College London in 2008. He works in the fields of computer vision and pattern recognition; his main interest is the automatic recognition of human behaviour, with a specialisation in the analysis of facial expressions. In 2011 he was the main organiser of the first facial expression recognition challenge, FERA2011, and the first Audio-Visual Emotion Recognition Challenge, AVEC2011. In 2007 he won the BCS British Machine Intelligence Prize for part of his PhD work. He has published technical papers in leading venues including CVPR, ICCV and SMC-B, and his work has received popular press coverage in New Scientist and on BBC Radio.

Abstract: The past decade has seen a large number of publications on Automatic Facial Expression Recognition Systems (AFERS). Recently, the first AFERS programmes have also been made available, either publicly by academics or for sale by companies, making it clear to everyone what works and what does not. The first facial expression recognition challenge (FERA2011) further serves to shed light on the efforts in this field by comparing many state-of-the-art approaches on the same challenging dataset. What we now see is the advent of a second generation of AFERS. Building upon the successes of the first generation, these new systems attempt to tackle the open challenges in this field by combining different approaches, as well as by integrating sources of information other than the face (e.g. head actions). In this talk I will describe two recent developments towards such a second generation of AFERS, namely the novel facial point detection algorithm Local Evidence Aggregation Regressors (LEAR) and a novel dynamic appearance descriptor called LPQ-TOP.
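
As a rough illustration of what a dynamic appearance descriptor such as LPQ-TOP computes, the sketch below builds Local Phase Quantization (LPQ) histograms on the XY, XT and YT planes of a video volume and concatenates them, in the spirit of LBP-TOP. This is a minimal sketch, not the speaker’s implementation: the window size, frequency choice and per-slice histogram averaging are illustrative defaults rather than details taken from the talk.

```python
# Minimal LPQ-TOP-style descriptor sketch (illustrative, not the talk's code).
import numpy as np
from scipy.signal import convolve2d


def lpq(img, win=7):
    """Return a normalized 256-bin LPQ histogram for a 2-D image."""
    img = np.asarray(img, dtype=np.float64)
    a = 1.0 / win                           # lowest non-zero frequency
    x = np.arange(win) - (win - 1) / 2.0

    w0 = np.ones_like(x)                    # DC filter
    w1 = np.exp(-2j * np.pi * a * x)        # filter at frequency a
    w2 = np.conj(w1)

    def filt(row, col):
        # Separable 2-D convolution: 'row' along axis 0, 'col' along axis 1.
        return convolve2d(convolve2d(img, row[:, None], mode='valid'),
                          col[None, :], mode='valid')

    # Short-term Fourier responses at the four low frequencies used by LPQ.
    F = [filt(w0, w1), filt(w1, w0), filt(w1, w1), filt(w1, w2)]

    # Quantize the signs of the real and imaginary parts into an 8-bit code.
    code = np.zeros(F[0].shape, dtype=np.int32)
    for i, f in enumerate(F):
        code += (f.real >= 0).astype(np.int32) << (2 * i)
        code += (f.imag >= 0).astype(np.int32) << (2 * i + 1)

    hist, _ = np.histogram(code, bins=256, range=(0, 256))
    return hist / max(hist.sum(), 1)


def lpq_top(volume, win=7):
    """Concatenate mean LPQ histograms from the XY, XT and YT planes
    of a (T, H, W) video volume."""
    T, H, W = volume.shape
    xy = np.mean([lpq(volume[t], win) for t in range(T)], axis=0)
    xt = np.mean([lpq(volume[:, y, :], win) for y in range(H)], axis=0)
    yt = np.mean([lpq(volume[:, :, x], win) for x in range(W)], axis=0)
    return np.concatenate([xy, xt, yt])      # 3 * 256 = 768-D descriptor


if __name__ == "__main__":
    clip = np.random.rand(16, 64, 64)        # toy video volume
    print(lpq_top(clip).shape)               # (768,)
```

The resulting 768-dimensional vector would typically be fed to a classifier (e.g. an SVM) for expression recognition; the phase-based codes are what give LPQ-style descriptors their robustness to image blur compared with plain LBP.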