Seminar
Creating Robust Deep Learning Models Involves Effectively Managing Nuisance Variables
Abstract: Over the past decade, we have witnessed significant advances in the capabilities of deep neural network models in vision and machine learning. However, issues related to bias, discrimination, and fairness in general have received a great deal of negative attention (e.g., mistakes in surveillance and animal-human confusion by vision models). But bias in AI models [...]
Reduced-Gravity Flights and Field Testing for Lunar and Planetary Rovers
Abstract: As humanity returns to the Moon and develops outposts and related infrastructure, we need to understand how robots and work machines will behave in this harsh environment. It is challenging to find representative testing environments on Earth for lunar and planetary rovers. To investigate the effects of reduced gravity on interactions with granular terrains, [...]
Shedding Light on 3D Cameras
Abstract: The advent (and commoditization) of low-cost 3D cameras is revolutionizing many application domains, including robotics, autonomous navigation, human-computer interfaces, and recently even consumer devices such as cell phones. Most modern 3D cameras (e.g., LiDAR) are active; they consist of a light source that emits coded light into the scene, i.e., its intensity is modulated over [...]
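To make the active-sensing principle above concrete, here is a minimal sketch (not from the talk) of how an amplitude-modulated continuous-wave time-of-flight camera could recover depth: the sensor correlates the returning, intensity-modulated light against phase-shifted references, and the recovered phase offset encodes the round-trip distance. The four-bucket sampling scheme, the 20 MHz modulation frequency, and the function names are illustrative assumptions, not the specific designs discussed in the talk.

```python
# Illustrative sketch: depth recovery for an amplitude-modulated
# continuous-wave (AMCW) time-of-flight camera. The sensor correlates the
# returning signal against phase-shifted copies of the emitted modulation;
# the phase offset encodes the round-trip distance.
import numpy as np

C = 3e8           # speed of light, m/s
F_MOD = 20e6      # assumed modulation frequency, Hz

def depth_from_correlations(c0, c1, c2, c3):
    """Estimate depth from four correlation samples taken at 0, 90, 180,
    and 270 degree reference phase shifts (the common "four-bucket" scheme)."""
    phase = np.arctan2(c3 - c1, c0 - c2)      # wrapped phase in [-pi, pi]
    phase = np.mod(phase, 2 * np.pi)          # map to [0, 2*pi)
    return C * phase / (4 * np.pi * F_MOD)    # factor 4*pi halves the round trip

# Example: correlations implying a quarter-cycle phase delay correspond to
# roughly 1.9 m at a 20 MHz modulation frequency.
print(depth_from_correlations(0.0, -1.0, 0.0, 1.0))
```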
Where’s RobotGPT?
Abstract: Recent years have seen astonishing progress in the capabilities of generative AI techniques, particularly in the areas of language and visual understanding and generation. Key to the success of these models is the use of image and text datasets of unprecedented scale, along with models that are able to digest such large [...]
Neural Field Representations of Mobile Computational Photography
Abstract: Burst imaging pipelines allow cellphones to compensate for less-than-ideal optical and sensor hardware by computationally merging multiple lower-quality images into a single high-quality output. The main challenge for these pipelines is compensating for pixel motion, estimating how to align and merge measurements across time while the user's natural hand tremor involuntarily shakes the camera. In [...]
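As a rough illustration of the align-and-merge idea described above (not the actual pipeline from the talk), the sketch below estimates a single global translation per frame via phase correlation and averages the aligned stack; production burst pipelines instead use per-tile alignment and robust merging. The function names and parameters are assumptions for illustration only.

```python
# Illustrative sketch: a minimal burst merge that aligns each frame to the
# first via a global translation estimated with phase correlation, then
# averages the aligned stack to reduce noise.
import numpy as np

def phase_correlation_shift(ref, img):
    """Estimate the integer (dy, dx) translation that aligns img to ref."""
    R = np.fft.fft2(ref) * np.conj(np.fft.fft2(img))
    R /= np.abs(R) + 1e-8                      # normalized cross-power spectrum
    corr = np.abs(np.fft.ifft2(R))
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = ref.shape
    dy = dy - h if dy > h // 2 else dy         # map peak index to signed shift
    dx = dx - w if dx > w // 2 else dx
    return dy, dx

def merge_burst(frames):
    """Align every frame to the first and average (noise drops ~1/sqrt(N))."""
    ref = frames[0]
    aligned = [ref]
    for f in frames[1:]:
        dy, dx = phase_correlation_shift(ref, f)
        aligned.append(np.roll(f, shift=(dy, dx), axis=(0, 1)))
    return np.mean(aligned, axis=0)

# Usage: noisy, slightly shifted copies of the same scene merge into a
# cleaner estimate than any single frame.
rng = np.random.default_rng(0)
scene = rng.random((64, 64))
burst = [np.roll(scene, (i, -i), axis=(0, 1)) + 0.1 * rng.standard_normal((64, 64))
         for i in range(4)]
print(merge_burst(burst).shape)
```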
Robot Learning by Understanding Egocentric Videos
Abstract: The true gains of machine learning in AI sub-fields such as computer vision and natural language processing have come from the use of large-scale, diverse datasets for learning. In this talk, I will discuss how we can leverage large-scale, diverse data in the form of egocentric videos (first-person videos of humans conducting different tasks) [...]
Special Seminar
Speaker: Abhisesh Silwal
Title: Robotics and AI for Sustainable Agriculture
Abstract: Production agriculture plays a critical role in our lives, providing food security and enabling sustainability. Despite its immense importance, it currently faces many challenges, including a shortage of farmworkers, increasing production costs, and excessive use of herbicides, to name a few. Robotics and artificial intelligence-based [...]
Passive Ultra-Wideband Single-Photon Imaging
Abstract: High-speed light sources, fast cameras, and depth sensors have made it possible to image dynamic phenomena occurring in ever-smaller time intervals, with the help of actively controlled light sources and synchronization. Unfortunately, while these techniques do capture ultrafast events, they cannot simultaneously capture slower ones. I will discuss our recent work on passive ultra-wideband [...]
Simulation-Driven Soft Robotics
Abstract: Soft-bodied robots present a compelling solution for navigating tight spaces and interacting with unknown obstacles, with potential applications in inspection, medicine, and AR/VR. Yet, even after a decade, soft robots remain largely in the prototype phase without scaling to the tasks where they show the most promise. These systems are difficult to design and [...]