Sasayaki: augmented voice web browsing experience

Daisuke Sato, Shaojian Zhu, Masatomo Kobayashi, Hironobu Takagi, and Chieko Asakawa
Conference Paper, Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI '11), pp. 2769–2778, May 2011

Abstract

Auditory user interfaces have great Web-access potential for billions of people: those with visual impairments, those with limited literacy, those who are driving, and anyone else unable to use a visual interface. However, a sequential speech-based representation can only convey a limited amount of information. In addition, typical auditory user interfaces lose visual cues such as text styles and page structures, and lack effective feedback about the current focus. To address these limitations, we created Sasayaki (from the Japanese word for whisper), which augments the primary voice output with a secondary whisper of contextually relevant information, delivered automatically or in response to user requests. It also offers new ways to jump to semantically meaningful locations. A prototype was implemented as a plug-in for an auditory Web browser. Our experimental results show that Sasayaki can reduce task completion times for finding elements in webpages and increase user satisfaction and confidence.

BibTeX

@conference{Sato-2011-126518,
author = {Daisuke Sato and Shaojian Zhu and Masatomo Kobayashi and Hironobu Takagi and Chieko Asakawa},
title = {Sasayaki: augmented voice web browsing experience},
booktitle = {Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI '11)},
year = {2011},
month = {May},
pages = {2769--2778},
}