Including Sensor Bias in Shape from Motion Calibration and Multisensor Fusion

Richard Voyles, James Morrow, and Pradeep Khosla
Conference Paper, Proceedings of the IEEE/SICE/RSJ International Conference on Multisensor Fusion and Integration for Intelligent Systems (MFI '96), pp. 93-99, December 1996

Abstract

Shape from Motion data fusion brings a greater degree of autonomy and sensor integration to intelligent systems in which fusion by constant linear transformations is appropriate. To illustrate this, we apply Shape from Motion techniques to applications involving both similar and disparate sensory information vectors. First, nearly autonomous force/torque sensor calibration is demonstrated through fusion of the individual channels of raw strain gauge data. From the raw sensor signals alone, the trajectory of the force vector (the "motion") and the calibration matrix (the "shape") are simultaneously extracted by singular value decomposition. This calibration example is provided simply to explain the mathematics. Disparate sensory information is fused in a "primordial learning" mobile robot through a similar eigenspace representation. This paper summarizes these Shape from Motion applications and presents an extension for simultaneously extracting sensor bias.
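
To make the factorization concrete, the following is a minimal sketch of the idea described in the abstract. It is not taken from the paper: the variable names, the use of NumPy, and the synthetic data are all our own assumptions. It shows how a matrix of raw gauge readings factors into a "motion" and a "shape" via singular value decomposition, and how a per-channel bias can be folded in by augmenting the motion with a constant column.

```python
import numpy as np

# Hypothetical sketch of the rank factorization underlying Shape from Motion
# calibration; the notation and synthetic data are illustrative, not the
# paper's. Each row of R holds one sample of raw strain-gauge readings.
rng = np.random.default_rng(0)
n_samples, n_gauges, rank = 500, 8, 3          # e.g. a 3-axis force-only sensor

F_true = rng.normal(size=(n_samples, rank))    # unknown applied loads ("motion")
S_true = rng.normal(size=(rank, n_gauges))     # unknown gauge response ("shape")
R = F_true @ S_true                            # noise-free raw gauge data

# A rank-r truncated SVD splits R into a motion factor and a shape factor.
U, s, Vt = np.linalg.svd(R, full_matrices=False)
F_hat = U[:, :rank] * np.sqrt(s[:rank])        # recovered motion (up to A)
S_hat = np.sqrt(s[:rank])[:, None] * Vt[:rank] # recovered shape (up to A^-1)
assert np.allclose(F_hat @ S_hat, R)           # the product reproduces the data

# Bias extension, in the spirit of the paper's contribution (our notation):
# with a per-channel offset b, R_b = F S + 1 b^T = [F | 1] [S ; b^T], so the
# augmented data matrix has rank (r + 1) and the same SVD machinery applies.
b_true = rng.normal(size=n_gauges)
R_b = F_true @ S_true + b_true
U2, s2, Vt2 = np.linalg.svd(R_b, full_matrices=False)
k = rank + 1
assert np.allclose(U2[:, :k] @ np.diag(s2[:k]) @ Vt2[:k], R_b)
```

Note that each factor is determined only up to an invertible linear ambiguity A, since (F A)(A^-1 S) fits the data equally well; in practice that ambiguity would be resolved with known physical constraints (for example, loads of known magnitude), which this sketch does not attempt.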

BibTeX

@conference{Voyles-1996-14265,
  author = {Richard Voyles and James Morrow and Pradeep Khosla},
  title = {Including Sensor Bias in Shape from Motion Calibration and Multisensor Fusion},
  booktitle = {Proceedings of the IEEE/SICE/RSJ International Conference on Multisensor Fusion and Integration for Intelligent Systems (MFI '96)},
  year = {1996},
  month = {December},
  pages = {93--99},
}