VASC Seminar

Richard Zemel, Professor, University of Toronto
Wednesday, July 18
3:00 pm to 4:00 pm
Learning and Inference to Exploit High-Order Potentials

Event Location: NSH 1305
Bio: Richard Zemel received the B.Sc. degree in History & Science from Harvard University in 1984, and the M.S. and Ph.D. degrees in Computer Science from the University of Toronto in 1989 and 1993, respectively. He is currently Professor of Computer Science at the University of Toronto, where he has been a faculty member since 2000. From 1996 to 2000, he was an Assistant Professor of Computer Science at the University of Arizona, where he also held a joint appointment in the Center for Cognitive Science. He was a Postdoctoral Fellow at the Salk Institute from 1993-1995, and from 1993-1994 at Carnegie Mellon University. Prior to his academic career, he worked at the artificial intelligence company Carnegie Group from 1984 to 1987. He has received several awards and honors, including a Young Investigator Award from the Office of Naval Research and three Dean's Excellence Awards at the University of Toronto, and was recently appointed a Fellow of the Canadian Institute for Advanced Research. His research interests include topics in machine learning, vision, and neural coding. His recent research focuses on ranking, active learning, structured output models, and fairness.

Abstract: An important challenge is to develop models that can capture a variety of structure in the real world, and can effectively represent our prior conceptions of this structure, as well as our objectives for the model. Many such structures are high-order, in that they involve interactions between many variables, and as such cannot be efficiently utilized in standard graphical models. We develop methods that make learning and inference with some forms of high-order structures practical. These rely on a common underlying formulation, a factor graph, with a form of max-product for inference. This method is simple, modular, and flexible; it is easy to define new factors, and experimental results are good on a range of problems of various sizes. I will discuss a few projects within this framework, including image labeling, multi-object and top-k classification, and learning using high-order potentials.
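As background for the framework described in the abstract, the sketch below illustrates the general max-product idea on a toy factor graph. This is not the speaker's method or code: the two binary variables, the unary potentials `phi1`/`phi2`, and the pairwise potential `psi` are all invented for illustration, and the MAP decoding is checked against brute-force enumeration.

```python
import numpy as np

# Illustrative toy example (not the speaker's code): max-product message
# passing on a chain factor graph with two binary variables x1, x2.
# The unary potentials phi1, phi2 and the pairwise potential psi are
# made-up numbers chosen only to demonstrate the mechanics.
phi1 = np.array([0.7, 0.3])           # unary potential over x1
phi2 = np.array([0.4, 0.6])           # unary potential over x2
psi = np.array([[1.0, 0.2],           # pairwise potential psi(x1, x2)
                [0.2, 1.0]])

# Message from x1 through the pairwise factor to x2:
# for each value of x2, maximize phi1(x1) * psi(x1, x2) over x1.
m12 = np.max(phi1[:, None] * psi, axis=0)

# Max-marginal belief at x2, then decode the MAP assignment by
# picking the best x2 and backtracking to the best x1.
belief2 = phi2 * m12
x2_star = int(np.argmax(belief2))
x1_star = int(np.argmax(phi1 * psi[:, x2_star]))

# Sanity check: brute-force enumeration of the joint potential.
joint = phi1[:, None] * phi2[None, :] * psi
bf = np.unravel_index(np.argmax(joint), joint.shape)
print((x1_star, x2_star), tuple(int(i) for i in bf))
```

Here the max-product decoding `(x1_star, x2_star)` matches the brute-force MAP assignment; the talk's contribution concerns making this kind of inference practical when the factors are high-order, i.e. touch many variables at once, rather than pairwise as in this toy.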