Analogy-Forming Transformers for Few-Shot 3D Parsing
Abstract:
How do we build agents that can quickly generalize to novel scenarios given only a single example? In this talk, I will present analogy-forming transformers: semi-parametric models that segment 3D object scenes by retrieving related memories and predicting analogous part structures for the input. This enables a single neural network to continually learn to parse instances of novel object categories simply by expanding its memory, without any weight updates. We show that analogy-forming transformers outperform parametric transformers when only a few examples are available and perform on par with them when many training examples exist.
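For intuition, here is a minimal sketch of the semi-parametric retrieve-then-parse loop the abstract describes. The class and module names (AnalogyParser, encoder, analogy_net) and the cosine-similarity retrieval are illustrative assumptions for this sketch, not the actual implementation presented in the talk.

import torch
import torch.nn.functional as F

class AnalogyParser:
    """Hypothetical sketch: semi-parametric 3D part segmentation via memory retrieval."""

    def __init__(self, encoder, analogy_net):
        self.encoder = encoder          # stand-in: point cloud -> (scene embedding, point features)
        self.analogy_net = analogy_net  # stand-in: input features + retrieved examples -> part labels
        self.memory = []                # (scene embedding, point features, part labels) triples

    def add_memory(self, points, part_labels):
        # Continual learning by memory expansion alone: no weight updates.
        with torch.no_grad():
            emb, feats = self.encoder(points)
        self.memory.append((emb, feats, part_labels))

    def segment(self, points, k=1):
        emb, feats = self.encoder(points)
        # Retrieve the k most similar labeled scenes by cosine similarity.
        sims = torch.stack([F.cosine_similarity(emb, m, dim=-1)
                            for m, _, _ in self.memory])
        retrieved = [self.memory[i] for i in sims.topk(k).indices]
        # Predict part labels for the input analogous to the retrieved structure.
        return self.analogy_net(feats, retrieved)

The point the sketch illustrates is that add_memory touches no model parameters, so a novel object category can be handled simply by appending a labeled example to the memory.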
Committee:
Katerina Fragkiadaki
Tom Mitchell
Jean Oh
Mohit Sharma