3:00 pm to 4:00 pm
Event Location: NSH 1507
Bio: Roozbeh Mottaghi is a PhD candidate in the Department of Computer Science at the University of California, Los Angeles, working with Alan Yuille. He received his B.Sc. degree in Computer Engineering from Sharif University of Technology. He holds a Master's degree in Engineering Science (Electrical & Computer Engineering) from Simon Fraser University and another Master's degree in Computer Science from the Georgia Institute of Technology. His work focuses on computer vision and machine learning; more specifically, he is interested in object representation models and in efficient learning and inference techniques for them.
Abstract: Recent trends in semantic image segmentation have pushed for holistic scene understanding models that also reason about complementary tasks such as scene classification and object detection. In the first part of the talk, I will describe a hybrid human-machine scene understanding model. In this work, we are interested in understanding the roles that different cues play in aiding semantic segmentation. Towards this goal, we "plug in" human subjects for each of the various components in the model to show how much "head room" there is to improve semantic segmentation.
The second part of the talk will cover the models we have developed to better capture deformations of articulated objects. The performance of current object detectors usually degrades for highly flexible objects. I will explain how we overcome this shortcoming to achieve state-of-the-art performance on difficult object detection benchmarks such as PASCAL VOC.