Data-Driven Geometric Scene Understanding
Event Location: NSH 3305
Abstract: In this thesis proposal, we describe a data-driven approach that leverages repositories of 3D models for scene understanding. Our ability to relate what we see in an image to a large collection of 3D models allows us to transfer information from these models, creating a rich understanding of the scene. We [...]
Animal models for robotic design: neuromuscular and biomechanical studies of terrestrial and aerial locomotion
Bio: Andrew A. Biewener received his BS degree in Zoology from Duke University, NC, USA in 1974 and his MA and PhD in Biology from Harvard University, MA, USA in 1982. His academic appointments include Instructor (1982-84), Assistant Professor (1984-90), and Professor (1990-1998) at the University of Chicago, where he also served as [...]
Robust Natural Language Direction Following through Unknown Environments
Event Location: GHC 4405
Abstract: Commanding robots through unconstrained natural language directions is intuitive, flexible, and does not require specialized interfaces or training. Providing this capability would enable effortless coordination in human-robot teams that operate in non-specialized environments. However, natural language direction following through unknown environments requires understanding the structure of language, mapping verbs and [...]
Interactive Perception for Autonomous Manipulation
Event Location: NSH 1305
Bio: Dov Katz is a postdoctoral fellow with the National Robotics Engineering Center at Carnegie Mellon University. His research interests include autonomous manipulation, computer vision, and machine learning. He received his MS in 2008 and Ph.D. in 2011 from the University of Massachusetts Amherst, and his BS in 2004 from Tel-Aviv University, [...]
Revealing the invisible
Event Location: NSH 1305
Bio: Frédo Durand is an associate professor in Electrical Engineering and Computer Science at the Massachusetts Institute of Technology, and a member of the Computer Science and Artificial Intelligence Laboratory (CSAIL). He received his PhD from Grenoble University, France, in 1999, supervised by Claude Puech and George Drettakis. From 1999 to 2002, [...]
Northwestern University
Force Feedback for Fingertips
Event Location: NSH 1305
Bio: Ed Colgate is the Breed University Professor of Design at Northwestern University. His research interests lie in the areas of haptic interfaces, telemanipulation, prosthetics, and physical human-robot interaction. With his colleague Michael Peshkin, Colgate is the inventor of a class of collaborative robots known as “cobots.” He is the founding Editor-in-Chief [...]
Exploring the semantic understanding of abstract scenes
Bio: C. Lawrence Zitnick received the PhD degree in robotics from Carnegie Mellon University in 2003. His thesis focused on a maximum entropy approach to efficient inference. Previously, his work centered on stereo vision, including the development of a commercial portable 3D camera. Currently, he is a senior researcher at the Interactive Visual Media group [...]
Putting the Pieces Together: Assembling Puzzles and Shredded Documents
Event Location: NSH 1305
Bio: Andrew Gallagher is a Visiting Research Scientist at Cornell University's School of Electrical and Computer Engineering, beginning in June 2012. Andrew earned the Ph.D. degree in electrical and computer engineering from Carnegie Mellon University in 2009, advised by Prof. Tsuhan Chen. Before that, Andrew received an M.S. degree from Rochester Institute [...]
Human action recognition: recent progress, open questions and future challenges
Event Location: NSH 1507
Bio: Ivan Laptev is a full-time researcher in the WILLOW team at INRIA Paris and Ecole Normale Superieure. He received his PhD in Computer Science from the Royal Institute of Technology (KTH) in 2004 and his Master of Science degree from the same institute in 1997. He has been a research [...]
Model Recommendation for Action Recognition and Other Applications
Event Location: GHC 4405
Abstract: The typical approach to learning-based vision has been that, for each individual application, classifiers or detectors are learned anew from annotated training data for each specific task. However, the classifiers trained in this way tend to be brittle and highly specialized to the datasets from which they are derived, making [...]