MSR Thesis Defense
Reinforcement Learning with Spatial Reasoning for Dexterous Robotic Manipulation
Abstract: Robotic manipulation in unstructured environments requires adaptability and the ability to handle a wide variety of objects and tasks. This thesis presents novel approaches for learning robotic manipulation skills using reinforcement learning (RL) with spatially-grounded action spaces, addressing the challenges of high-dimensional, continuous action spaces and reducing the need for extensive training data. Our [...]
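The abstract does not spell out the action parameterization, so the following is only an illustrative sketch of what "spatially-grounded" commonly means in manipulation RL: predicting a value for every pixel of a top-down observation, so the action is "act at pixel (u, v)" rather than a raw continuous joint command. All names and shapes below (e.g. SpatialQNetwork, a 64x64 RGB-D heightmap) are assumptions, not the thesis's implementation.

```python
# Illustrative sketch of a spatially-grounded action space: a fully
# convolutional network outputs an H x W value map, and the argmax pixel
# indexes a physical location (e.g. a grasp point on a tabletop heightmap).
# Names and shapes are assumptions, not the thesis's method.
import torch
import torch.nn as nn

class SpatialQNetwork(nn.Module):
    """Maps an H x W observation to an H x W map of action values."""
    def __init__(self, in_channels: int = 4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 1),  # one value per pixel
        )

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        return self.net(obs).squeeze(1)  # (B, H, W)

q_net = SpatialQNetwork()
obs = torch.randn(1, 4, 64, 64)            # hypothetical RGB-D heightmap
q_map = q_net(obs)                          # (1, 64, 64)
flat_idx = torch.argmax(q_map.view(1, -1), dim=1)
u, v = divmod(int(flat_idx.item()), 64)     # pixel -> workspace coordinate
print(f"act at pixel ({u}, {v})")
```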
Leveraging Vision, Force Sensing, and Language Feedback for Deformable Object Manipulation
Abstract: Deformable object manipulation is a significant challenge in robotics due to complex dynamics, the lack of low-dimensional state representations, and severe self-occlusions. This challenge is particularly critical in assistive tasks, where safe and effective manipulation of various deformable materials can significantly improve the quality of life for individuals with disabilities and address the growing needs [...]
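The abstract names three feedback modalities but is cut off before the method, so the following is a minimal, purely illustrative fusion pattern (not the thesis's architecture): encode vision, force, and language feedback separately, concatenate the embeddings, and feed a policy head. All module names and dimensions are assumptions.

```python
# Illustrative multimodal fusion sketch (not the thesis's architecture):
# separate encoders for camera, force-torque, and language feedback,
# concatenated before a policy head. Shapes are assumptions.
import torch
import torch.nn as nn

class MultimodalPolicy(nn.Module):
    def __init__(self, action_dim: int = 7):
        super().__init__()
        self.vision_enc = nn.Sequential(
            nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 64))
        self.force_enc = nn.Sequential(nn.Linear(6, 32), nn.ReLU())    # F/T wrench
        self.lang_enc = nn.Sequential(nn.Linear(384, 64), nn.ReLU())   # text embedding
        self.head = nn.Sequential(nn.Linear(64 + 32 + 64, 128), nn.ReLU(),
                                  nn.Linear(128, action_dim))

    def forward(self, rgb, wrench, lang_emb):
        z = torch.cat([self.vision_enc(rgb),
                       self.force_enc(wrench),
                       self.lang_enc(lang_emb)], dim=-1)
        return self.head(z)

policy = MultimodalPolicy()
action = policy(torch.randn(1, 3, 64, 64),   # camera image
                torch.randn(1, 6),           # force-torque reading
                torch.randn(1, 384))         # embedded language feedback
print(action.shape)                          # torch.Size([1, 7])
```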
CBGT-Net: A Neuromimetic Architecture for Robust Classification of Streaming Data
Abstract: This research introduces CBGT-Net, a neural network model inspired by the cortico-basal ganglia-thalamic (CBGT) circuits in mammalian brains, which are crucial for critical thinking and decision-making. Unlike traditional neural network models that generate an output for each input or after a fixed sequence of inputs, CBGT-Net learns to produce an output once sufficient evidence [...]
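The behavior the abstract describes, emitting a decision only once sufficient evidence has accumulated rather than after a fixed number of inputs, can be illustrated with a toy accumulator; the sketch below is only that conceptual pattern, not the CBGT-Net architecture itself.

```python
# Toy threshold-gated evidence accumulator: classify a streaming input only
# once some class's accumulated evidence crosses a threshold. Conceptual
# illustration only, not the CBGT-Net model.
import numpy as np

def stream_classify(per_step_logits, threshold: float = 3.0):
    """Return (class, steps_used) once accumulated evidence for any class
    reaches `threshold`, else (None, T) after the stream ends."""
    evidence = np.zeros(per_step_logits.shape[1])
    for t, logits in enumerate(per_step_logits):
        evidence += logits                       # accumulate evidence each step
        if evidence.max() >= threshold:          # decision gate
            return int(evidence.argmax()), t + 1
    return None, len(per_step_logits)

# Noisy stream weakly favoring class 2: the accumulator waits until the
# evidence is strong enough before committing to an output.
rng = np.random.default_rng(0)
stream = rng.normal(0.0, 1.0, size=(50, 3))
stream[:, 2] += 0.4
label, steps_used = stream_classify(stream)
print(label, steps_used)
```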
Enhancing Robot Perception and Interaction Through Structured Domain Knowledge
Abstract: Despite the advancements in deep learning driven by increased computational power and large datasets, significant challenges remain: difficulty in handling novel entities, limited mechanisms for human experts to update a model's knowledge, and a lack of interpretability, all of which matter for human-centric applications like assistive robotics. To address these issues, we propose leveraging [...]
Towards Universal Place Recognition
Abstract: Place recognition is essential for achieving robust robot localization. However, current state-of-the-art systems remain environment- or domain-specific and fragile. By leveraging insights from vision foundation models, we present AnyLoc, a universal VPR solution that performs across diverse environments without retraining or fine-tuning, significantly outperforming supervised baselines. We further introduce MultiLoc, and enable [...]
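The truncated abstract credits vision foundation models but does not detail the pipeline, so the sketch below shows only the generic retrieval pattern behind VPR: describe each place with a frozen feature extractor (used without retraining or fine-tuning) and localize a query by nearest-neighbor matching. The extractor here is a stand-in so the example runs end to end; it is not AnyLoc's descriptor.

```python
# Retrieval-style place recognition sketch: frozen descriptors per map image,
# cosine-similarity nearest neighbor for a query. The feature extractor is a
# placeholder for a frozen foundation-model backbone.
import numpy as np

def embed(image: np.ndarray) -> np.ndarray:
    """Placeholder for a frozen feature extractor. In practice this would be
    a vision foundation model used without fine-tuning; here it is a
    deterministic random projection so the example runs."""
    rng = np.random.default_rng(abs(hash(image.tobytes())) % (2**32))
    return rng.normal(size=128)

def build_map(reference_images):
    descs = np.stack([embed(img) for img in reference_images])
    return descs / np.linalg.norm(descs, axis=1, keepdims=True)

def localize(query_image, map_descs):
    q = embed(query_image)
    q = q / np.linalg.norm(q)
    sims = map_descs @ q                  # cosine similarity to every map place
    return int(np.argmax(sims)), float(sims.max())

reference = [np.random.rand(224, 224, 3) for _ in range(10)]   # map of 10 places
map_descs = build_map(reference)
place_id, score = localize(reference[3], map_descs)
print(place_id, score)                    # recovers place 3 with similarity ~1.0
```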
GNSS-denied Ground Vehicle Localization for Off-road Environments with Bird’s-eye-view Synthesis
Abstract: Global localization is essential for the smooth navigation of autonomous vehicles. To obtain accurate vehicle states, on-board localization systems typically rely on Global Navigation Satellite System (GNSS) modules for consistent and reliable global positioning. However, GNSS signals can be obstructed by natural or artificial barriers, leading to temporary system failures and degraded state estimation. On the [...]
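The abstract is truncated before the method; a common generic pattern for map-based localization without GNSS is to synthesize a bird's-eye-view image from onboard sensors and correlate it against a georeferenced overhead map. The sketch below shows only that correlation step, with all array shapes assumed; it is not the thesis's pipeline.

```python
# Generic BEV-to-map matching step via template correlation. Array shapes and
# the synthetic data are assumptions for illustration only.
import numpy as np
from scipy.signal import correlate2d

def match_bev_to_map(bev: np.ndarray, overhead_map: np.ndarray):
    """Slide a synthesized BEV patch over an overhead map and return the
    (row, col) offset with the highest correlation score."""
    bev = (bev - bev.mean()) / (bev.std() + 1e-8)
    m = (overhead_map - overhead_map.mean()) / (overhead_map.std() + 1e-8)
    score = correlate2d(m, bev, mode="valid")
    r, c = np.unravel_index(np.argmax(score), score.shape)
    return (int(r), int(c)), float(score.max())

# Toy check: embed the "BEV" patch at a known map location and recover it.
rng = np.random.default_rng(0)
overhead = rng.normal(size=(200, 200))
bev_patch = overhead[80:112, 50:82].copy()        # 32 x 32 synthesized view
offset, score = match_bev_to_map(bev_patch, overhead)
print(offset)                                     # expected near (80, 50)
```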
Scaling up Robot Skill Learning with Generative Simulation
Abstract: Generalist robots need to learn a wide variety of skills to perform diverse tasks across multiple environments. Current robot training pipelines rely on humans to either provide kinesthetic demonstrations or program simulation environments with manually-designed reward functions for reinforcement learning. Such human involvement is a major bottleneck to scaling up robot learning across diverse [...]
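The truncated abstract identifies manual demonstrations and reward design as the bottleneck; one generic "generative simulation" pattern, shown below purely as an illustration and not as the thesis's system, is to have a language model propose a programmatic reward for a described task, which is then compiled and handed to an RL loop. The model call is stubbed out here.

```python
# Sketch of the generative-simulation pattern: generated reward code is
# compiled into a callable for RL. The language-model call is a stub; nothing
# here is the thesis's actual system.
import numpy as np

def propose_reward_code(task_description: str) -> str:
    # Stand-in for a language-model call that writes reward code from text.
    return (
        "def reward(state):\n"
        "    # negative distance between gripper and target from `state`\n"
        "    gripper, target = state['gripper_pos'], state['target_pos']\n"
        "    return -float(np.linalg.norm(gripper - target))\n"
    )

def compile_reward(source: str):
    scope = {"np": np}
    exec(source, scope)            # turn the generated code into a callable
    return scope["reward"]

reward_fn = compile_reward(propose_reward_code("push the cube to the goal"))
state = {"gripper_pos": np.array([0.1, 0.0, 0.2]),
         "target_pos": np.array([0.4, 0.1, 0.2])}
print(reward_fn(state))            # negative distance; an RL agent maximizes this
```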
Simulation as a Tool for Conspicuity Measurement
Abstract: The use of unmanned aerial vehicles (UAVs) for time-critical tasks is becoming increasingly popular. Operators are expected to use information from these UAV swarms to make real-time, informed decisions. Consequently, detecting and recognizing targets from video is pivotal to the success of these systems. At greater altitudes or with more vehicles, this [...]
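The abstract is cut off before defining the measure; one simple, standard proxy for conspicuity is the local luminance contrast (Weber contrast) between a target region and its immediate surround, which a simulator could score on rendered frames at different altitudes. The sketch below computes only that generic proxy, not the measure proposed in the thesis.

```python
# Weber-contrast conspicuity proxy for a target region in a rendered frame.
# Generic illustration only.
import numpy as np

def weber_contrast(frame: np.ndarray, target_mask: np.ndarray, pad: int = 10) -> float:
    """frame: HxW grayscale in [0, 1]; target_mask: boolean HxW target pixels."""
    ys, xs = np.where(target_mask)
    y0, y1 = max(ys.min() - pad, 0), min(ys.max() + pad + 1, frame.shape[0])
    x0, x1 = max(xs.min() - pad, 0), min(xs.max() + pad + 1, frame.shape[1])
    surround_mask = np.zeros_like(target_mask)
    surround_mask[y0:y1, x0:x1] = True
    surround_mask &= ~target_mask                 # ring of background around target
    lum_t = frame[target_mask].mean()
    lum_b = frame[surround_mask].mean()
    return float((lum_t - lum_b) / (lum_b + 1e-8))

# Toy frame: bright 8x8 target on a mid-gray background.
frame = np.full((120, 160), 0.4)
mask = np.zeros_like(frame, dtype=bool)
mask[56:64, 76:84] = True
frame[mask] = 0.9
print(weber_contrast(frame, mask))   # ~1.25: target stands out from its surround
```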
VP4D: View Planning for 3D and 4D Scene Understanding
Abstract: View planning plays a critical role in scene understanding by gathering the views that best support scene reconstruction. Such reconstruction has played an important part in virtual production and computer animation, where a 3D map of the film set and motion capture of actors lead to an immersive experience. Current methods use uncertainty estimation in neural rendering of view [...]
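The abstract mentions uncertainty estimation but truncates before the planner; a generic next-best-view rule is to score each candidate camera pose by the predicted uncertainty of the rendering it would produce and greedily pick the highest-scoring pose. The uncertainty model below is a placeholder, and the sketch is not the thesis's planner.

```python
# Greedy next-best-view selection driven by a per-view uncertainty score.
# The uncertainty function is a placeholder so the example runs.
import numpy as np

def predicted_view_uncertainty(pose: np.ndarray, visited: list) -> float:
    """Placeholder for per-view uncertainty from a neural rendering model
    (e.g. variance of predicted color/density along sampled rays). Here we
    simply reward poses far from already-captured ones."""
    if not visited:
        return 1.0
    return float(min(np.linalg.norm(pose - v) for v in visited))

def next_best_view(candidates: np.ndarray, visited: list) -> int:
    scores = [predicted_view_uncertainty(c, visited) for c in candidates]
    return int(np.argmax(scores))

# Candidate camera centers on a circle around the scene.
angles = np.linspace(0, 2 * np.pi, 12, endpoint=False)
candidates = np.stack([np.cos(angles), np.sin(angles),
                       np.full_like(angles, 0.5)], axis=1)
visited = [candidates[0]]
for _ in range(3):                       # greedily gather three more views
    idx = next_best_view(candidates, visited)
    visited.append(candidates[idx])
print(len(visited), "views selected")
```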
Automating Annotation Pipelines by Leveraging Multi-Modal Data
Abstract: The era of vision-language models (VLMs) trained on large web-scale datasets challenges conventional formulations of "open-world" perception. In this work, we revisit the task of few-shot object detection (FSOD) in the context of recent foundational VLMs. First, we point out that zero-shot VLMs such as GroundingDINO significantly outperform state-of-the-art few-shot detectors (48 vs. 33 AP) [...]
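The abstract compares a zero-shot VLM detector against few-shot detectors by AP; the snippet below shows only the generic evaluation pattern behind such a comparison: prompt a detector with class names, then match predictions to ground truth at an IoU threshold. The `zero_shot_detect` wrapper is a hypothetical stand-in, not GroundingDINO's API, and the toy boxes are illustrative only.

```python
# Generic zero-shot detection evaluation sketch: prompt a detector with class
# names, match predictions to ground truth at IoU >= 0.5. The detector is a
# hypothetical stub, not a real VLM API.
import numpy as np

def zero_shot_detect(image, class_names):
    """Stand-in for a zero-shot VLM detector prompted with class names.
    Returns (box_xyxy, class_name, score) triples."""
    return [((30, 40, 90, 120), class_names[0], 0.82)]

def iou(a, b):
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter + 1e-8)

def recall_at_50(preds, gts):
    hits = sum(any(p[1] == g[1] and iou(p[0], g[0]) >= 0.5 for p in preds)
               for g in gts)
    return hits / max(len(gts), 1)

image = np.zeros((240, 320, 3), dtype=np.uint8)          # placeholder image
gts = [((28, 44, 92, 118), "mug")]                        # toy ground truth
preds = zero_shot_detect(image, ["mug", "plate"])
print(recall_at_50(preds, gts))                           # 1.0 on this toy case
```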