Learning Shape Representations for Person Re-Identification under Clothing Change
Abstract
Person re-identification (re-ID) aims to recognize instances of the same person across images taken by different cameras. Existing re-ID methods tend to rely heavily on the assumption that both query and gallery images of the same person show the same clothing. Unfortunately, this assumption may not hold for datasets captured over long periods of time. To tackle re-ID under clothing change, we propose a novel representation learning method that generates a shape-based feature representation invariant to clothing. We call our model the Clothing Agnostic Shape Extraction Network (CASE-Net). CASE-Net learns a representation of a person that depends primarily on body shape via adversarial learning and feature disentanglement. Quantitative and qualitative results on five datasets (Div-Market, Market1501, and three large-scale datasets with clothing changes) show that our approach achieves significant improvements over prior state-of-the-art methods.
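The abstract names the two ingredients (adversarial learning and feature disentanglement) without detailing the architecture. Below is a minimal sketch of the general idea of learning a clothing-agnostic feature with an adversarial clothing classifier and gradient reversal; it is not the paper's CASE-Net design, and all module names, dimensions, label counts, and the use of a gradient-reversal layer are illustrative assumptions.

# Sketch only: generic adversarial feature disentanglement, not CASE-Net itself.
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity on the forward pass; flips the gradient sign on the backward pass."""
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None

class ShapeEncoder(nn.Module):
    """Toy CNN mapping an image to a feature intended to capture body shape."""
    def __init__(self, feat_dim=256):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, feat_dim),
        )

    def forward(self, x):
        return self.backbone(x)

class Heads(nn.Module):
    """Identity classifier (main task) plus clothing classifier (adversary)."""
    def __init__(self, feat_dim=256, num_ids=751, num_clothes=10):
        super().__init__()
        self.id_head = nn.Linear(feat_dim, num_ids)
        self.cloth_head = nn.Linear(feat_dim, num_clothes)

    def forward(self, feat, lambd=1.0):
        id_logits = self.id_head(feat)
        # Gradient reversal: the clothing head learns to predict clothing while
        # the encoder is pushed to remove clothing cues from the feature.
        cloth_logits = self.cloth_head(GradReverse.apply(feat, lambd))
        return id_logits, cloth_logits

# Toy training step with random tensors standing in for a re-ID batch.
encoder, heads = ShapeEncoder(), Heads()
opt = torch.optim.Adam(list(encoder.parameters()) + list(heads.parameters()), lr=1e-4)
images = torch.randn(8, 3, 128, 64)
id_labels = torch.randint(0, 751, (8,))
cloth_labels = torch.randint(0, 10, (8,))

feat = encoder(images)
id_logits, cloth_logits = heads(feat)
loss = nn.functional.cross_entropy(id_logits, id_labels) \
     + nn.functional.cross_entropy(cloth_logits, cloth_labels)
opt.zero_grad()
loss.backward()
opt.step()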
BibTeX
@conference{Li-2021-125592,
  author    = {Yu-Jhe Li and Xinshuo Weng and Kris Kitani},
  title     = {Learning Shape Representations for Person Re-Identification under Clothing Change},
  booktitle = {Proceedings of IEEE Winter Conference on Applications of Computer Vision (WACV '21)},
  year      = {2021},
  month     = {January},
  pages     = {2432--2441},
}