Person-in-WiFi: Fine-grained Person Perception using WiFi
Abstract
Fine-grained person perception such as body segmentation and pose estimation has been achieved with many 2D and 3D sensors such as RGB/depth cameras, radars (e.g., RF-Pose), and LiDARs. These solutions require 2D images, depth maps, or 3D point clouds of person bodies as input. In this paper, we take one step forward to show that fine-grained person perception is possible even with 1D sensors: WiFi antennas. Specifically, we use two sets of WiFi antennas to acquire signals, one transmitter set and one receiver set. Each set contains three antennas lined up horizontally, as in a regular household WiFi router. The WiFi signal generated by a transmitter antenna penetrates through and reflects off human bodies, furniture, and walls, and then superposes at a receiver antenna as 1D signal samples. We develop a deep learning approach that uses annotations on 2D images, takes the received 1D WiFi signals as input, and performs body segmentation and pose estimation in an end-to-end manner. To our knowledge, ours is the first work based on off-the-shelf WiFi antennas and standard IEEE 802.11n WiFi signals. While demonstrating results comparable to image-based solutions, our WiFi-based person perception solution is cheaper and more ubiquitous than radars and LiDARs, is invariant to illumination, and raises fewer privacy concerns than cameras.
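The pipeline described above can be pictured as a small network that encodes the 1D samples from the 3x3 transmitter-receiver antenna pairs and decodes them into 2D body masks and joint heatmaps. Below is a minimal PyTorch sketch of that idea; the channel counts, sample length, number of joints, and output resolution are illustrative assumptions, not the architecture or hyperparameters from the paper.

# Minimal sketch (not the authors' architecture): map 1D WiFi samples from a
# 3x3 transmitter-receiver antenna array to a 2D person mask and keypoint
# heatmaps. All tensor shapes and layer choices below are illustrative
# assumptions, not values taken from the paper.
import torch
import torch.nn as nn

class WiFiPersonNet(nn.Module):
    def __init__(self, n_pairs=9, n_keypoints=17, out_size=48):
        super().__init__()
        self.out_size = out_size
        # Encode the 1D WiFi samples (one channel per Tx-Rx antenna pair).
        self.encoder = nn.Sequential(
            nn.Conv1d(n_pairs, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(64, 128, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(out_size * out_size // 16),  # coarse spatial budget
        )
        # Lift the 1D encoding to a small 2D grid, then upsample to the output size.
        self.decoder = nn.Sequential(
            nn.Conv2d(128, 128, 3, padding=1), nn.ReLU(),
            nn.Upsample(scale_factor=4, mode="bilinear", align_corners=False),
            nn.Conv2d(128, 64, 3, padding=1), nn.ReLU(),
        )
        self.seg_head = nn.Conv2d(64, 1, 1)             # binary body-mask logits
        self.pose_head = nn.Conv2d(64, n_keypoints, 1)  # per-joint heatmaps

    def forward(self, x):                        # x: (batch, n_pairs, n_samples)
        h = self.encoder(x)                      # (batch, 128, out_size^2 / 16)
        side = self.out_size // 4
        h = h.view(x.size(0), 128, side, side)   # reshape 1D encoding into a 2D grid
        h = self.decoder(h)                      # (batch, 64, out_size, out_size)
        return self.seg_head(h), self.pose_head(h)

if __name__ == "__main__":
    net = WiFiPersonNet()
    csi = torch.randn(2, 9, 150)                 # fake 1D WiFi samples: 9 antenna pairs
    mask_logits, heatmaps = net(csi)
    print(mask_logits.shape, heatmaps.shape)     # (2, 1, 48, 48), (2, 17, 48, 48)

Running the script pushes a batch of random "WiFi samples" through the network and prints the shapes of the segmentation and pose outputs; in the paper, such outputs are supervised with annotations obtained from synchronized 2D images.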
BibTeX
@conference{Wang-Zhou-Panev-Han-Huang-2019-117795,
  author    = {Fei Wang and Sanping Zhou and Stanislav Panev and Jinsong Han and Dong Huang},
  title     = {Person-in-WiFi: Fine-grained Person Perception using WiFi},
  booktitle = {Proceedings of (ICCV) International Conference on Computer Vision},
  year      = {2019},
  month     = {October},
  pages     = {5451--5460},
}