PhD Speaking Qualifier
PhD Speaking Qualifier: Controllable Visual-Tactile Synthesis
Abstract: Deep generative models have many content-creation applications, such as graphic design, e-commerce, and virtual try-on. However, current work mainly focuses on synthesizing realistic visual outputs, often ignoring other sensory modalities such as touch, which limits physical interaction with users. The main challenges for multi-modal synthesis lie in the significant scale discrepancy between vision [...]
PhD Speaking Qualifier: Perceiving Particles Inside a Container using Dynamic Touch Sensing
Abstract: Dynamic touch sensing has shown potential for multiple tasks. In this talk, I will present how we use dynamic touch sensing to perceive particles inside a container through two tasks: classifying the particles and estimating their properties. First, we try to recognize what is inside [...]
PhD Speaking Qualifier: Examining the Role of Adaptation in Human-Robot Collaboration
Abstract: Human and AI partners increasingly need to work together to perform tasks as a team. To act effectively as teammates, collaborative AI agents should reason about how their behaviors interplay with the strategies and skills of human team members as they coordinate on achieving joint goals. This talk will discuss a formalism for [...]
PhD Speaking Qualifier: A Multi-view Synthetic and Real-world Human Activity Recognition Dataset
Abstract: Advancements in Human Activity Recognition (HAR) partially rely on the creation of datasets that cover a broad range of activities under various conditions. Unfortunately, obtaining and labeling datasets containing human activity is complex, laborious, and costly. One way to mitigate these difficulties with sufficient generality to provide robust activity recognition on unseen data is [...]
PhD Speaking Qualifier: Dense 3D Representation Learning for Geometric Reasoning in Manipulation Tasks
Abstract: When solving a manipulation task like "put away the groceries" in real environments, robots must understand what *can* happen in these environments, as well as what *should* happen in order to accomplish the task. This knowledge can enable downstream robot policies to directly reason about which actions they should execute, and rule out behaviors [...]