
The Tolerance for Visual Feedback Distortions in a Virtual Environment

Yoky Matsuoka, Sonya Allin, and Roberta Klatzky
Journal Article, Physiology & Behavior, Vol. 77, No. 5, pp. 651 - 655, December, 2002

Abstract

We are interested in using a virtual environment with a robotic device to extend the strength and mobility of people recovering from strokes by steering them beyond what they believe they are capable of doing. Previously, we identified just noticeable differences (JNDs) for a finger's force production and positional displacement in a virtual environment. In this paper, we extend that investigation by identifying people's tolerance for distortions in the visual representation of force production and positional displacement in a virtual environment. We determined that subjects cannot reliably detect inaccuracies in the visual representation until the distortion reaches 36%. This discrepancy between actual and perceived movements is significantly larger than the JNDs reported previously, indicating that a virtual robotic environment could be a valuable tool for steering actual movements further away from perceived movements. We believe this distorted condition may allow people recovering from strokes, even those with perceptual or cognitive deficits, to rehabilitate with greater ease.
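
As a rough illustration of the idea, the Python sketch below scales the force displayed to a subject by a gain kept within the 36% tolerance reported above. The function name, parameter names, and clamping scheme are our own assumptions for illustration; this is not the apparatus or code used in the study.

TOLERANCE = 0.36  # maximum visual distortion reported as undetectable

def displayed_force(actual_force: float, distortion: float = -0.30) -> float:
    """Return the force value to render in the virtual environment.

    `distortion` is the fractional gain applied to the visual feedback;
    it is clamped to stay within the reported 36% tolerance so the
    discrepancy remains below the detection threshold.
    """
    distortion = max(-TOLERANCE, min(TOLERANCE, distortion))
    # Under-reporting the produced force (negative distortion) would lead a
    # subject to push harder to reach a visual target; over-reporting does
    # the opposite. The sign would depend on the therapeutic goal.
    return actual_force * (1.0 + distortion)

if __name__ == "__main__":
    for f in (1.0, 2.5, 5.0):  # example produced forces, in newtons
        print(f, "->", displayed_force(f))

A positional displacement could be distorted analogously, by applying a similarly bounded gain to the displayed displacement rather than to the force.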

BibTeX

@article{Matsuoka-2002-8605,
author = {Yoky Matsuoka and Sonya Allin and Roberta Klatzky},
title = {The Tolerance for Visual Feedback Distortions in a Virtual Environment},
journal = {Physiology \& Behavior},
year = {2002},
month = {December},
volume = {77},
number = {5},
pages = {651--655},
}