Dense Human Pose Estimation From WiFi
Abstract
Advances in computer vision and machine learning techniques have led to significant progress in 2D and 3D human pose estimation from RGB cameras, LiDAR, and radar. However, human pose estimation from images is adversely affected by occlusion and lighting, which are common in many scenarios of interest. Radar and LiDAR technologies, on the other hand, require specialized hardware that is expensive and power-intensive. Furthermore, placing these sensors in non-public areas raises significant privacy concerns.
To address these limitations, this work expands on the use of WiFi signals, in combination with deep learning architectures commonly used in computer vision, to estimate dense human pose correspondence. We developed a deep neural network that maps the phase and amplitude of WiFi signals to UV coordinates within 24 human body regions. The results of the study reveal that our model can estimate the dense pose of multiple subjects, with visual performance comparable to image-based approaches, using WiFi signals as the only input. This paves the way for low-cost, broadly accessible, and privacy-preserving algorithms for human sensing.
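The mapping described above, from CSI phase and amplitude to UV coordinates over 24 body regions, can be made concrete with a small sketch. The following PyTorch snippet is a minimal, hypothetical illustration, not the architecture developed in the thesis: the antenna layout (3x3 transmitter-receiver pairs), subcarrier count (30), time window (5 samples), layer sizes, output resolution, and the name CSIToDensePose are all assumptions introduced here for the example.

# Illustrative sketch only (assumed shapes and layer sizes, not the thesis model):
# map CSI amplitude and phase to DensePose-style outputs, i.e. a 25-way part
# segmentation (background + 24 body regions) and per-region UV maps.
import torch
import torch.nn as nn

class CSIToDensePose(nn.Module):
    def __init__(self, n_subcarriers=30, n_samples=5, n_parts=24):
        super().__init__()
        in_ch = 2 * 3 * 3 * n_samples  # amplitude + phase, 3x3 antenna pairs, time window
        # Encode the CSI measurements along the subcarrier axis.
        self.encoder = nn.Sequential(
            nn.Conv1d(in_ch, 128, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv1d(128, 256, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        # Project the CSI embedding into a coarse spatial grid, then upsample
        # to an image-plane feature map.
        self.to_spatial = nn.Linear(256, 64 * 7 * 7)
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 64, 4, stride=2, padding=1), nn.ReLU(),
        )
        # Two heads: body-part classification and per-part (U, V) regression.
        self.part_head = nn.Conv2d(64, n_parts + 1, kernel_size=1)
        self.uv_head = nn.Conv2d(64, 2 * n_parts, kernel_size=1)

    def forward(self, amplitude, phase):
        # amplitude, phase: [batch, 3, 3, n_samples, n_subcarriers]
        x = torch.cat([amplitude, phase], dim=1)      # stack the two CSI components
        x = x.flatten(1, 3)                           # [batch, in_ch, n_subcarriers]
        z = self.encoder(x).squeeze(-1)               # [batch, 256]
        feat = self.to_spatial(z).view(-1, 64, 7, 7)  # coarse spatial grid
        feat = self.decoder(feat)                     # [batch, 64, 112, 112]
        return self.part_head(feat), self.uv_head(feat)

if __name__ == "__main__":
    net = CSIToDensePose()
    amp = torch.randn(2, 3, 3, 5, 30)  # fake CSI amplitude
    pha = torch.randn(2, 3, 3, 5, 30)  # fake (sanitized) CSI phase
    parts, uv = net(amp, pha)
    print(parts.shape, uv.shape)       # [2, 25, 112, 112] and [2, 48, 112, 112]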
BibTeX
@mastersthesis{Geng-2022-133231,
author = {Jiaqi Geng},
title = {Dense Human Pose Estimation From WiFi},
year = {2022},
month = {August},
school = {Carnegie Mellon University},
address = {Pittsburgh, PA},
number = {CMU-RI-TR-22-59},
keywords = {Dense Body Pose Estimation, WiFi Signals, UV Coordinates, Channel State Information},
}