Events for March 2023 › Student Talks › PhD Speaking Qualifier – Robotics Institute Carnegie Mellon University
Abstract: Deep generative models have various content creation applications such as graphic design, e-commerce, and virtual try-on. However, current work mainly focuses on synthesizing realistic visual outputs, often ignoring other sensory modalities, such as touch, which limits physical interaction with users. The main challenges for multi-modal synthesis lie in the significant scale discrepancy between vision [...]
Abstract: Dynamic touch sensing has shown potential for multiple tasks. In this talk, I will present how we utilize dynamic touch sensing to perceive particles inside a container through two tasks: classifying the particles and estimating their properties. First, we try to recognize what is inside [...]
Abstract: Human and AI partners increasingly need to work together to perform tasks as a team. In order to act effectively as teammates, collaborative AI agents should reason about how their behaviors interplay with the strategies and skills of human team members as they coordinate on achieving joint goals. This talk will discuss a formalism for [...]
Abstract: Advancements in Human Activity Recognition (HAR) partially rely on the creation of datasets that cover a broad range of activities under various conditions. Unfortunately, obtaining and labeling datasets containing human activity is complex, laborious, and costly. One way to mitigate these difficulties with sufficient generality to provide robust activity recognition on unseen data is [...]
Abstract: When solving a manipulation task like "put away the groceries" in real environments, robots must understand what *can* happen in these environments, as well as what *should* happen in order to accomplish the task. This knowledge can enable downstream robot policies to directly reason about which actions they should execute, and rule out behaviors [...]