Carnegie Mellon University
Meeting ID: 950 9088 4062
Passcode: 411959
This thesis contributes methods for using 3D imaging radar in robot navigation and mapping. We propose a learning-based method that regresses radar measurements into cylindrical depth maps using LiDAR supervision. A limitation of the regression formulation is that directions the radar beam cannot reach still produce a valid-looking depth; our method therefore additionally learns a 3D filter to remove those pixels. Experimental results show that our system generates visually accurate depth estimates.
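The filtering step described above can be sketched as a simple post-processing pass: a per-pixel validity score from the learned filter suppresses depth values in directions the radar beam could not reach. The function name, threshold convention, and use of NaN as the invalid marker are illustrative assumptions, not the thesis's actual implementation.

```python
import numpy as np

def mask_depth(depth, validity_logits, threshold=0.0):
    """Suppress depth pixels that the learned filter marks as unreachable.

    depth           : (H, W) cylindrical depth map regressed from radar.
    validity_logits : (H, W) per-pixel scores from a filter head (assumed
                      convention: logit <= threshold means invalid).
    Returns a copy of the depth map with invalid pixels set to NaN.
    """
    out = depth.astype(float).copy()
    out[np.asarray(validity_logits) <= threshold] = np.nan
    return out

# Tiny illustration: two of four pixels are flagged invalid.
depth = np.array([[1.0, 2.0],
                  [3.0, 4.0]])
logits = np.array([[1.5, -0.7],
                   [-2.0, 0.3]])
masked = mask_depth(depth, logits)
```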
We confirm the overall effectiveness of this learned frontend by applying it to common downstream robotics tasks. We show that the learned depth map can be used to retrieve Doppler velocity measurements and infer a sensible radar-frame velocity. Feeding the depth map into probabilistic occupancy mapping with a ground-truth trajectory produces point cloud maps that are visually consistent with LiDAR maps. Lastly, we show that by explicitly detecting large 3D planes in the learned depth map and modeling structural constraints, it is possible to perform indoor SLAM with a noisy odometry source.
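As background for the velocity task above: for returns from static scenery, each Doppler measurement constrains the sensor's ego-velocity along that return's ray, so the radar-frame velocity can be recovered by least squares. This is a generic sketch of that standard formulation under an assumed sign convention, not the specific pipeline used in the thesis.

```python
import numpy as np

def radar_ego_velocity(directions, radial_speeds):
    """Least-squares radar-frame ego-velocity from Doppler returns.

    A static-world return along unit ray d_i observes radial speed
    r_i = -d_i . v, where v is the sensor velocity in the radar frame
    (sign convention assumed; flip it if closing speed is positive).
    Solves -D v ≈ r in the least-squares sense.
    """
    D = np.asarray(directions, dtype=float)     # (N, 3) unit ray directions
    r = np.asarray(radial_speeds, dtype=float)  # (N,) measured radial speeds
    v, *_ = np.linalg.lstsq(-D, r, rcond=None)
    return v

# Synthetic check: simulate static returns from a known velocity.
rng = np.random.default_rng(0)
dirs = rng.normal(size=(50, 3))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
v_true = np.array([1.0, -0.5, 0.2])
speeds = -dirs @ v_true
v_est = radar_ego_velocity(dirs, speeds)
```

In practice outlier rejection (e.g. RANSAC) is needed first, since returns from moving objects violate the static-world assumption.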