3D Segmentation Learning from Sparse Annotations and Hierarchical Descriptors

Peng Yin, Lingyun Xu, Jianmin Ji, Sebastian Scherer, and Howie Choset
Journal Article, IEEE Robotics and Automation Letters, Vol. 6, No. 3, pp. 5953–5960, July 2021

Abstract

One of the main obstacles to 3D semantic segmentation is the significant effort required to produce expensive point-wise annotations for fully supervised training. To reduce manual effort, we propose GIDSeg, a novel approach that learns segmentation from sparse annotations by jointly reasoning about global-regional structures and individual-vicinal properties. GIDSeg models global and individual relations via a dynamic edge convolution network coupled with a kernelized identity descriptor. The ensemble effect is obtained by endowing a low-resolution voxelized map with a fine-grained receptive field. GIDSeg also includes an adversarial learning module designed to further enforce the conditional constraint of identity descriptors within the joint feature distribution. Despite its apparent simplicity, the proposed approach achieves superior performance over the state of the art for inferring dense 3D segmentation from only sparse annotations. In particular, with annotations for only 5% of the raw data, GIDSeg outperforms other 3D segmentation methods.
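The abstract names a dynamic edge convolution network as the backbone for relating points to their neighborhoods. The sketch below is a minimal illustration of that general technique (edge convolution in the style of DGCNN), not the authors' released implementation; the names `EdgeConv` and `knn_graph`, the choice of `k = 16`, and the plain MLP are all assumptions made for this example.

```python
import torch
import torch.nn as nn


def knn_graph(x, k):
    # x: (B, N, C) point features; returns indices of the k nearest
    # neighbors of every point, shape (B, N, k).
    dist = torch.cdist(x, x)                                  # (B, N, N) pairwise distances
    return dist.topk(k + 1, largest=False).indices[:, :, 1:]  # drop the self-match


class EdgeConv(nn.Module):
    """One dynamic edge-convolution layer (illustrative sketch).

    For each point i, the features of its k nearest neighbors j are
    combined as h([x_i, x_j - x_i]) and max-pooled over the neighborhood.
    Rebuilding the k-NN graph in feature space at every layer is what
    makes the edge convolution "dynamic".
    """

    def __init__(self, in_dim, out_dim, k=16):
        super().__init__()
        self.k = k
        self.mlp = nn.Sequential(nn.Linear(2 * in_dim, out_dim), nn.ReLU())

    def forward(self, x):
        # x: (B, N, C) -> (B, N, out_dim)
        B, N, C = x.shape
        idx = knn_graph(x, self.k)                            # (B, N, k)
        neighbors = torch.gather(
            x.unsqueeze(1).expand(B, N, N, C), 2,
            idx.unsqueeze(-1).expand(B, N, self.k, C))        # (B, N, k, C)
        center = x.unsqueeze(2).expand_as(neighbors)
        edge = torch.cat([center, neighbors - center], dim=-1)  # (B, N, k, 2C)
        return self.mlp(edge).max(dim=2).values               # pool over neighbors


# Example usage on a toy batch of point clouds.
pts = torch.rand(2, 1024, 3)                 # 2 clouds, 1024 points, xyz only
feats = EdgeConv(in_dim=3, out_dim=64)(pts)  # (2, 1024, 64) per-point features
```

In GIDSeg this kind of per-point feature extractor is coupled with the kernelized identity descriptor and an adversarial module described above; those components are specific to the paper and are not reproduced here.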

BibTeX

@article{Yin-2021-127902,
author = {Peng Yin and Lingyun Xu and Jianmin Ji and Sebastian Scherer and Howie Choset},
title = {3D Segmentation Learning from Sparse Annotations and Hierarchical Descriptors},
journal = {IEEE Robotics and Automation Letters},
year = {2021},
month = {July},
volume = {6},
number = {3},
pages = {5953--5960},
keywords = {3D Segmentation, Sparse Annotation},
}