Next-Generation Robot Perception: Hierarchical Representations, Certifiable Algorithms, and Self-Supervised Learning
Spatial perception, the robot’s ability to sense and understand the surrounding environment, is a key enabler for robot navigation, manipulation, and human-robot interaction. Recent advances in perception algorithms and systems have enabled robots to create large-scale geometric maps of unknown environments and detect objects of interest. Despite these advances, a large gap still separates robot [...]
Autonomous mobility in Mars exploration: recent achievements and future prospects
Abstract: This talk will summarize key recent advances in autonomous surface and aerial mobility for Mars exploration, then discuss potential future missions and technology needs for Mars and other planetary bodies. Among recent advances, the Perseverance rover now operating on Mars includes a new autonomous navigation capability that dramatically increases its traverse speed over [...]
Passive Coupling in Robot Swarms
Abstract: In unstructured environments, ant colonies demonstrate a remarkable ability to adaptively form functional structures in response to obstacles such as stairs, gaps, and holes. Drawing inspiration from these creatures, robot swarms can collectively exhibit complex behaviors and accomplish tasks that individual robots cannot. Existing modular robot platforms that employ dynamic coupling and decoupling [...]
RI Faculty Business Meeting
Meeting for RI Faculty. Discussions include various department topics, policies, and procedures. Generally meets weekly.
Structures and Environments for Generalist Agents
Abstract: We are entering an era of highly general AI, enabled by supervised models of the Internet. However, it remains an open question how intelligence emerged in the first place, before there was an Internet to imitate. Understanding the emergence of skillful behavior, without expert data to imitate, has been a longstanding goal of reinforcement [...]
Learning novel objects during robot exploration via human-informed few-shot detection
Abstract: Autonomous mobile robots exploring unfamiliar environments often need to detect target objects during exploration. The most prevalent approach is to use conventional object detection models, training the detector on a large image-annotation dataset with a fixed, predefined set of object categories before robot deployment. However, this approach lacks the capability [...]
Learning to Perceive and Predict Everyday Interactions
Abstract: This thesis aims to develop a computer vision system that can understand everyday human interactions with rich spatial information. Such systems can benefit VR/AR by perceiving reality and modifying its virtual twin, and robotics by learning manipulation from watching humans. Previous methods have been limited to constrained lab environments or pre-selected objects with [...]
Faculty Candidate: Wenshan Wang
Title: Towards General Autonomy: Learning from Simulation, Interaction, and Demonstration
Abstract: Today's autonomous systems are still brittle in challenging environments or rely on designers to anticipate all possible scenarios to respond appropriately. On the other hand, by leveraging machine learning techniques, robot systems can be trained in simulation or in the real world for various tasks. Due to [...]
From Videos to 4D Worlds and Beyond
Abstract: The world underlying images and videos is 3-dimensional and dynamic, i.e. 4D, with people interacting with each other, objects, and the underlying scene. Even in videos of a static scene, there is always the camera moving about in the 4D world. Accurately recovering this information is essential for building systems that can reason [...]