Eye Gaze for Intelligent Driving
Abstract: Intelligent vehicles have been proposed as one path to increasing traffic safety and reducing on-road crashes. Driving “intelligence” today takes many forms, ranging from simple blind-spot occupancy and forward-collision warnings, to distance-aware adaptive cruise control, all the way to full driving autonomy in certain situations. Primarily, these methods are outward-facing and operate on [...]
AI-CARING
AI-CARING is an NSF-sponsored institute, led by Georgia Tech, whose mission is to investigate, develop, and evaluate AI technologies that help older adults live independently. The Institute focuses on providing reminders to older adults and alerting caregivers when necessary, assisting older adults with tasks such as meal preparation, motivating them to exercise, providing conversational [...]
Learning to Perceive and Predict Everyday Interactions
Abstract: This thesis aims to build computer systems that understand everyday hand-object interactions in the physical world – both perceiving ongoing interactions in 3D space and predicting possible interactions. This ability is crucial for applications such as virtual reality, robotic manipulation, and augmented reality. The problem is inherently ill-posed due to the challenges of one-to-many [...]
Sensorized Soft Material Systems with Integrated Electronics and Computing
Abstract: The integration of soft and multifunctional materials in emerging technologies is becoming more widespread due to their ability to enhance or improve functionality in ways not possible using typical rigid alternatives. This trend is evident in various fields. For example, wearable technologies are increasingly designed using soft materials to improve modulus compatibility with biological [...]
Deep Learning for Tactile Sensing: Development to Deployment
Abstract: Sensing is widely acknowledged as essential for robots interacting with the physical environment. However, few contemporary sensors have gained widespread use among roboticists. This thesis proposes a framework for incorporating sensors into a robot learning paradigm, from development to deployment, through the lens of ReSkin -- a versatile and scalable magnetic tactile sensor. [...]
Learning and Translating Temporal Abstractions of Behaviour across Humans and Robots
Abstract: Humans are remarkably adept at learning to perform tasks by imitating other people demonstrating these tasks. Key to this is our ability to reason abstractly about the high-level strategy of the task at hand (such as the recipe of cooking a dish) and the behaviours needed to solve this task (such as the behaviour [...]
Towards Underwater 3D Visual Perception
Abstract: With modern robotic technologies, seafloor imagery has become more accessible to both researchers and the public. This thesis leverages deep learning and 3D vision techniques to deliver valuable information from seafloor image observations. Despite the widespread use of deep learning and 3D vision algorithms across various fields, underwater imaging presents unique challenges, such as [...]
Assistive value alignment using in-situ naturalistic human behaviors
Abstract: As collaborative robots are increasingly deployed in personal environments, such as the home, it is critical they take actions to complete tasks consistent with personal preferences. Determining personal preferences for completing household chores, however, is challenging. Many household chores, such as setting a table or loading a dishwasher, are sequential and open-vocabulary, creating a [...]
Ice Cream Social
Join RISO at the Ice Cream Social in the robolounge, 5-7 on Wednesday, September 4th. Free entry.
Sampling and Signal-Processing for High-Dimensional Visual Appearance in Computer Graphics and Vision
Abstract: Many problems in computer graphics and vision, such as acquiring images of a scene to enable synthesis of novel views from many directions for virtual reality, computing realistic images by integrating lighting from many different incident directions across a range of scene pixels and viewing angles, or acquiring and modeling the appearance of realistic materials [...]
Teaching Robots to Drive: Scalable Policy Improvement via Human Feedback
Abstract: A long-standing problem in autonomous driving is grappling with the long-tail of rare scenarios for which little or no data is available. Although learning-based methods scale with data, it is unclear that simply ramping up data collection will eventually make this problem go away. Approaches which rely on simulation or world modeling offer some [...]
Exploration for Continually Improving Robots
Abstract: Data-driven learning is a powerful paradigm for enabling robots to learn skills. Current prominent approaches involve collecting large datasets of robot behavior via teleoperation or simulation, to then train policies. For these policies to generalize to diverse tasks and scenes, there is a large burden placed on constructing a rich initial dataset, which is [...]
Unlocking Magic: Personalization of Diffusion Models for Novel Applications
Abstract: Since the recent advent of text-to-image diffusion models for high-quality realistic image generation, a plethora of creative applications have suddenly become within reach. I will present my work at Google where I have attempted to unlock magical applications by proposing simple techniques that act on these large text-to-image diffusion models. Particularly, a large class of [...]
Domesticating Soft Robotics Research and Development with Accessible Biomaterials
Abstract: Current trends in robotics design and engineering are typically focused on high value applications where high performance, precision, and robustness take precedence over cost, accessibility, and environmental impact. In this paradigm, the capability landscape of robotics is largely shaped by access to capital and the promise of economic return. This thesis explores an alternative [...]
Understanding and acting in the 4D world
Abstract: As humans, we are constantly interacting with and observing a three-dimensional dynamic world, where objects around us change state as they move or are moved, and we ourselves move for navigation and exploration. Such an interaction between a dynamic environment and a dynamic ego-agent is complex to model, as an ego-agent's perception of the [...]
Using mechanical intelligence to create adaptable robots
Abstract: Currently deployed robots are primarily rigid machines that perform repetitive, controlled tasks in highly constrained or open environments such as factory floors, warehouses, or fields. There is an increasing demand for more adaptable, mobile, and flexible robots that can manipulate or move through unstructured and dynamic environments. My vision is to create robots that [...]
Instant Visual 3D Worlds Through Split-Lohmann Displays
Abstract: Split-Lohmann displays provide a novel approach to creating instant visual 3D worlds that support realistic eye accommodation. Unlike commercially available VR headsets that show content at a fixed depth, the proposed display can optically place each pixel region to a different depth, instantly creating eye-tracking-free 3D worlds without using time-multiplexing. This enables real-time streaming [...]
Remote Rendering and 3D Streaming for Resource-Constrained XR Devices
Abstract: An overview of the motivation and challenges for remote rendering and real-time 3D video streaming on XR headsets.
Bio: Edward is a third-year PhD student in the ECE department interested in computer systems for VR/AR devices.
Homepage: https://users.ece.cmu.edu/~elu2/
Sponsored in part by: Meta Reality Labs Pittsburgh
Vectorizing Raster Signals for Spatial Intelligence
Abstract: This seminar will focus on how vectorized representations can be generated from raster signals to enhance spatial intelligence. I will discuss the core methodology behind this transformation, with a focus on applications in AR/VR and robotics. The seminar will also briefly cover follow-up work that explores rigging and re-animating objects from casual single videos [...]