Time: 4:15 am to 12:00 am
Event Location: McConomy Auditorium, First Floor University Center
Bio: Daphne Koller is the Rajeev Motwani Professor in the Computer Science Department at Stanford University. Her main research interest is in developing and using machine learning and probabilistic methods to model and analyze complex domains. Her current research projects include models in computational biology, computational medicine, and in extracting semantic meaning from sensor data of the physical world. Daphne Koller is the author of over 180 refereed publications, which have appeared in venues spanning Science, Nature Genetics, the journal Games and Economic Behavior, and a variety of conferences and journals in AI and Computer Science. She has received 9 best paper or best student paper awards at conferences whose areas span computer vision (ECCV), artificial intelligence (IJCAI), natural language (EMNLP), machine learning (NIPS and UAI), and computational biology (ISMB). She has given keynote talks at over 10 different major conferences, also spanning a variety of areas. She was the program co-chair of the NIPS 2007 and UAI 2001 conferences, and has served on numerous program committees and as associate editor of the Journal of Artificial Intelligence Research, the Machine Learning Journal, and the Journal of Machine Learning Research. She was awarded the Arthur Samuel Thesis Award in 1994, the Sloan Foundation Faculty Fellowship in 1996, the ONR Young Investigator Award in 1998, the Presidential Early Career Award for Scientists and Engineers (PECASE) in 1999, the IJCAI Computers and Thought Award in 2001, the Cox Medal for excellence in fostering undergraduate research at Stanford in 2003, and the MacArthur Foundation Fellowship in 2004; she received the ACM/Infosys award in 2008 and was inducted into the National Academy of Engineering in 2011.
Abstract: The solution to many complex problems requires that we build up a representation that spans multiple levels of abstraction. For example, to obtain a semantic scene understanding from an image, we need to detect and identify objects and assign pixels to objects, understand scene geometry, derive object pose, and reconstruct the relationships between different objects. Fully annotated data for learning richly structured models can only be obtained in very limited quantities; hence, for such applications and many others, we need to learn models from data where many of the relevant variables are unobserved. I will describe novel machine learning methods that can train models using weakly labeled data, thereby making use of much larger amounts of available data, with diverse levels of annotation. These models are inspired by ideas from human learning, in which the complexity of the learned models and the difficulty of the training instances tackled change over the course of the learning process. We will demonstrate the applicability of these ideas to various problems, focusing on the problem of holistic computer vision.
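The curriculum-style idea mentioned in the abstract — letting the difficulty of the training instances grow over the course of learning — can be illustrated with a minimal sketch. This is not the speaker's actual method; it is a toy self-paced variant of logistic regression in which each epoch trains only on examples whose current loss falls below a threshold, and the threshold is loosened over time so harder examples are gradually admitted. All function names and hyperparameters here are illustrative assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def self_paced_logreg(X, y, epochs=50, lr=0.1, k_start=1.0, k_growth=1.3):
    """Toy self-paced logistic regression (illustrative, not the talk's method).

    Each epoch trains only on examples whose current log-loss is below the
    threshold k; k grows by k_growth per epoch, so harder examples are
    admitted as training progresses.
    """
    w = np.zeros(X.shape[1])
    k = k_start
    for _ in range(epochs):
        p = sigmoid(X @ w)
        # Per-example log-loss under the current model.
        losses = -(y * np.log(p + 1e-12) + (1 - y) * np.log(1 - p + 1e-12))
        easy = losses < k  # select the currently "easy" examples
        if easy.any():
            grad = X[easy].T @ (p[easy] - y[easy]) / easy.sum()
            w -= lr * grad
        k *= k_growth  # loosen the threshold: the curriculum gets harder
    return w

# Toy usage: two well-separated 2-D clusters plus a bias column.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-2, 1, (50, 2)), rng.normal(2, 1, (50, 2))])
X = np.hstack([X, np.ones((100, 1))])
y = np.array([0] * 50 + [1] * 50)
w = self_paced_logreg(X, y)
acc = ((sigmoid(X @ w) > 0.5) == y).mean()
```

On this separable toy data the model fits easily; the point of the sketch is only the selection step, where the set of training examples used per epoch is tied to a difficulty threshold that relaxes as learning proceeds.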