Automating Annotation Pipelines by leveraging Multi-Modal Data

Rashid Auditorium - 4401 Gates and Hillman Centers

Abstract: The era of vision-language models (VLMs) trained on large web-scale datasets challenges conventional formulations of "open-world" perception. In this work, we revisit the task of few-shot object detection (FSOD) in the context of recent foundational VLMs. First, we point out that zero-shot VLMs such as GroundingDINO significantly outperform state-of-the-art few-shot detectors (48 vs. 33 AP) [...]

Leveraging Affordances for Accelerating Online RL

3305 Newell-Simon Hall

Abstract: The inability to explore environments efficiently makes online RL sample-inefficient. Most existing works tackle this problem in a setting devoid of prior information. However, additional affordances may often be cheaply available at the time of training. These affordances include small quantities of demonstration data, simulators that can reset to arbitrary states, and domain-specific [...]