11/18/2024    Mallory Lindahl

The framework allows robots to create photorealistic scene representations in low-light environments.

Robots lift heavy loads in warehouses, deliver meals to diners, and even tackle housework. However, a lot of robotic work goes unseen – quite literally. 

Robots and autonomous vehicles are routinely deployed for critical tasks such as exploration, inspection, transportation, and search and rescue missions that the general public does not witness. The environments where these tasks occur are unpredictable, hazardous, and often extremely dark. A team at the Carnegie Mellon Robotics Institute recognized the potential of equipping robots with a perception system that can map poorly illuminated environments, accurately relight the resulting map, and help them navigate safely.

Robotics Institute Ph.D. student Tianyi Zhang, postdoc William (Weiming) Zhi, graduate student Kaining Huang and Director Matthew Johnson-Roberson joined forces to reinvent scene construction and help robots create accurate internal representations of the environment under a moving light source.

The team identified “illumination inconsistency” as the key challenge in creating realistic scene models from images taken with a moving light source. Put simply, when a robot operates in a poorly lit environment, the light moving with it causes images of the same areas to appear different.
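To make the effect concrete, here is a minimal numeric sketch, assuming a simple inverse-square falloff as a stand-in for the light (a deliberately simplified model, not the team's data-driven one): the same surface patch, lit by a robot-mounted light at different distances, produces very different pixel brightness.

    def observed_brightness(albedo, distance, light_power=1.0):
        # Toy inverse-square model; illustrative only. NeLiS/DarkGS learn the
        # light's actual pattern from data rather than assuming one.
        return albedo * light_power / distance ** 2

    albedo = 0.8  # hypothetical surface reflectance
    for d in (0.5, 1.0, 2.0):  # robot (and its light) at 0.5 m, 1 m, 2 m
        print(f"distance {d:.1f} m -> brightness {observed_brightness(albedo, d):.3f}")

The same patch of wall reads 3.200, 0.800 and 0.200: identical geometry, very different pixel values, which is exactly the inconsistency the pipeline has to resolve.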

They introduced three key contributions to address this problem: a pipeline for handling scenes with inconsistent lighting, spanning light source modeling, camera-light calibration, 3D Gaussian construction and relighting; Neural Light Simulators (NeLiS), a data-driven model and software package for accurately modeling and calibrating light sources; and Dark Gaussian Splatting (DarkGS), an adaptation of the 3D Gaussian Splatting (3DGS) model that creates photorealistic scene representations in low-light conditions using the calibrated light source.
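As a rough, hypothetical sketch of what a data-driven light model in the spirit of NeLiS could look like, the snippet below fits a small neural network that maps the angle off the light's beam axis and the distance to a surface point to an attenuation factor. The architecture, the synthetic training data and all names here are assumptions for illustration; the actual NeLiS model and calibration procedure are the ones released by the authors.

    import torch
    import torch.nn as nn

    # Hypothetical stand-in for a data-driven light model: maps (angle, distance)
    # to an attenuation factor in [0, 1]. NeLiS is calibrated from real
    # camera-light data; here synthetic samples stand in to show the idea.
    light_model = nn.Sequential(
        nn.Linear(2, 64), nn.ReLU(),
        nn.Linear(64, 64), nn.ReLU(),
        nn.Linear(64, 1), nn.Sigmoid(),
    )

    # Synthetic "calibration" samples: a spotlight-like pattern with radial falloff.
    angles = torch.rand(2048, 1) * torch.pi / 2  # angle from beam axis (rad)
    dists = torch.rand(2048, 1) * 4.0 + 0.5      # distance to surface (m)
    target = torch.cos(angles) / dists ** 2      # assumed "true" attenuation
    target = target / target.max()               # normalize to [0, 1]

    inputs = torch.cat([angles, dists], dim=1)
    optimizer = torch.optim.Adam(light_model.parameters(), lr=1e-3)
    for step in range(2000):
        optimizer.zero_grad()
        loss = nn.functional.mse_loss(light_model(inputs), target)
        loss.backward()
        optimizer.step()

    # The fitted model predicts how strongly the light reaches any point,
    # which is the ingredient the relighting stage needs.
    print("final calibration loss:", float(loss))

In the full pipeline, a calibrated model of this kind is tied to the camera through camera-light calibration, so the renderer knows where the light sits at every frame.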

“While existing methods use over-simplified lighting models, our data-driven lighting model can adapt to arbitrary lighting patterns,” said Tianyi Zhang. “This has been the missing building block for robot mapping in dark environments.”

The team uses DarkGS with the calibrated light model to tackle this inconsistency, modeling the scene’s lighting properly and relighting the scene so that images stay consistent, even in low light.

“We build a map with millions of 3D Gaussian primitives, and each of them is illuminated with a virtual light source that moves with the robot,” said William (Weiming) Zhi. “With accurate lighting modeling, people will feel like they are walking in the environment while viewing this map.”
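A minimal sketch of that idea, assuming a headlight co-located with the camera and a hypothetical cosine-over-distance-squared falloff in place of the calibrated model: each Gaussian's stored base color is scaled by how strongly the moving light reaches it. The function and variable names are illustrative, not the actual DarkGS renderer.

    import numpy as np

    def relight_gaussians(centers, albedos, cam_pos, cam_forward, light_power=2.0):
        """Scale each Gaussian's base color by a toy light-falloff model.

        centers:     (N, 3) Gaussian means in world space
        albedos:     (N, 3) base RGB colors recovered for the scene
        cam_pos:     (3,) camera/light position (the light rides with the robot)
        cam_forward: (3,) unit vector along the light's beam axis
        """
        to_gauss = centers - cam_pos
        dist = np.linalg.norm(to_gauss, axis=1, keepdims=True)
        direction = to_gauss / np.clip(dist, 1e-6, None)
        # Brighter near the beam axis, zero behind the light, dimmer with distance.
        cos_angle = np.clip(direction @ cam_forward, 0.0, None)[:, None]
        attenuation = light_power * cos_angle / np.clip(dist ** 2, 1e-6, None)
        return np.clip(albedos * attenuation, 0.0, 1.0)

    # Made-up Gaussians one and three meters straight ahead of the robot.
    centers = np.array([[0.0, 0.0, 1.0], [0.0, 0.0, 3.0]])
    albedos = np.array([[0.6, 0.5, 0.4], [0.6, 0.5, 0.4]])
    lit = relight_gaussians(centers, albedos, cam_pos=np.zeros(3),
                            cam_forward=np.array([0.0, 0.0, 1.0]))
    print(lit)  # the nearer Gaussian comes out much brighter than the far one

Swapping the toy falloff for a calibrated model is what lets the map be relit consistently, which is what gives viewers the walk-through feel Zhi describes.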

When deployed in real-world settings, the researchers’ system successfully reconstructed scenes with photorealistic quality and learned illumination. Although calibration and testing were not carried out in total darkness, the team significantly improved the robot’s performance across different lighting conditions.

This research shows great potential for crucial real-world applications such as deep-sea mapping, subterranean exploration, and search and rescue. All of the research components can be found on the project website.

For More Information: Aaron Aupperlee | 412-268-9068 | aaupperlee@cmu.edu