Introducing Generative Models to Facilitate Multi-Task Visual Learning

Master's Thesis, Tech. Report CMU-RI-TR-21-15, Robotics Institute, Carnegie Mellon University, May 2021

Abstract

Generative modeling has recently shown great promise in computer vision, but it has mostly focused on synthesizing visually realistic images. Motivated by multi-task learning of shareable feature representations, we consider a novel problem: learning a shared generative model that facilitates multi-task visual learning.

We start with a simple problem setting: learning a generative model for the joint task of few-shot recognition and novel-view synthesis. Given only one or a few images of a novel object from arbitrary views, annotated only with its category, we aim to simultaneously learn an object classifier and generate images of that object category from new viewpoints. We focus on the interaction and cooperation between a generative model and a discriminative model, so that knowledge flows across the two tasks in complementary directions. To this end, we propose bowtie networks that jointly learn 3D geometric and semantic representations with a feedback loop. Experimental evaluation on challenging fine-grained recognition datasets demonstrates that our synthesized images are realistic across multiple viewpoints and, when used for data augmentation, significantly improve recognition performance, especially in the low-data regime. A minimal sketch of this coupling is given below.

We then extend the bowtie network into a general multi-task oriented generative modeling (MGM) framework, which couples a discriminative multi-task network with a generative network. Because synthesizing both RGB images and pixel-level annotations is challenging in multi-task scenarios, our framework instead uses synthesized images paired with only weak annotations (i.e., image-level scene labels) to facilitate multiple visual tasks. Experimental evaluation on challenging multi-task benchmarks, including NYUv2 and Taskonomy, demonstrates that our MGM framework improves performance on all tasks by large margins, consistently outperforming state-of-the-art multi-task approaches. A sketch of how weakly labeled synthesized images enter the multi-task loss follows.
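The sketch below illustrates, under stated assumptions, how a multi-task network can consume synthesized images that carry only an image-level scene label: real images supervise all task heads, while generated images supervise only an auxiliary scene-classification head. The specific tasks (segmentation and depth), class counts, and layer sizes are hypothetical placeholders, not the configuration used in the thesis.

import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiTaskNet(nn.Module):
    """Hypothetical multi-task network: shared encoder, per-task decoders,
    and an auxiliary image-level scene classifier for weakly labeled images."""
    def __init__(self, num_scene_classes=27, num_seg_classes=13):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.seg_head = nn.Conv2d(128, num_seg_classes, 1)   # semantic segmentation
        self.depth_head = nn.Conv2d(128, 1, 1)               # monocular depth
        self.scene_head = nn.Sequential(                     # image-level scene label
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(128, num_scene_classes)
        )

    def forward(self, x):
        feats = self.encoder(x)
        return {
            "seg": self.seg_head(feats),
            "depth": self.depth_head(feats),
            "scene": self.scene_head(feats),
        }

def mgm_loss(net, real_images, seg_gt, depth_gt, scene_gt,
             fake_images, fake_scene_gt):
    """Real images supervise all task heads; synthesized images contribute
    only through their weak image-level scene labels."""
    out_real = net(real_images)
    seg_loss = F.cross_entropy(
        F.interpolate(out_real["seg"], size=seg_gt.shape[-2:],
                      mode="bilinear", align_corners=False), seg_gt)
    depth_loss = F.l1_loss(
        F.interpolate(out_real["depth"], size=depth_gt.shape[-2:],
                      mode="bilinear", align_corners=False), depth_gt)
    scene_loss = F.cross_entropy(out_real["scene"], scene_gt)

    out_fake = net(fake_images)                        # images from the generator
    weak_loss = F.cross_entropy(out_fake["scene"], fake_scene_gt)

    return seg_loss + depth_loss + scene_loss + weak_loss

Here the generator is treated as a fixed source of weakly labeled images; in the full MGM framework the generative and discriminative networks are trained jointly, as in the bowtie setting above.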

BibTeX

@mastersthesis{Bao-2021-127399,
author = {Zhipeng Bao},
title = {Introducing Generative Models to Facilitate Multi-Task Visual Learning},
year = {2021},
month = {May},
school = {Carnegie Mellon University},
address = {Pittsburgh, PA},
number = {CMU-RI-TR-21-15},
}