Sasayaki: an augmented voice-based web browsing experience

Shaojian Zhu, Daisuke Sato, Hironobu Takagi, and Chieko Asakawa
Conference Paper, Proceedings of the 12th International ACM SIGACCESS Conference on Computers and Accessibility (ASSETS '10), pp. 279-280, October 2010

Abstract

While the usability of voice-based Web navigation has been steadily improving, it is still not as easy for users with visual impairments as it is for sighted users. One reason is that sequential voice representation can only convey a limited amount of information at a time. Another challenge comes from the fact that current voice browsers omit various visual cues such as text styles and page structures, and lack meaningful feedback about the current focus. To address these issues, we created Sasayaki, an intelligent voice-based user agent that augments the primary voice output of a voice browser with a secondary voice that whispers contextually relevant information as appropriate or in response to user requests. A prototype has been implemented as a plug-in for a voice browser. The results from a pilot study show that our Sasayaki agent is able to improve users' information search task time and their overall confidence level. We believe that our intelligent voice-based agent has great potential to enrich the Web browsing experiences of users with visual impairments.
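To make the core idea concrete, here is a minimal sketch (not the authors' implementation; all class, method, and field names are hypothetical) of an agent that pairs a primary utterance with secondary "whisper" annotations built from contextual cues about the focused page element:

```python
# Illustrative sketch of a Sasayaki-style whisper agent.
# Assumption: page elements arrive as simple dicts with role/text fields;
# the real system is a plug-in for a voice browser and works on live DOM focus.

class WhisperAgent:
    """Pairs each primary utterance with optional contextual whispers."""

    def __init__(self):
        # Each handler is a (predicate, message-builder) pair.
        self.handlers = []

    def register(self, predicate, build_message):
        """Add a rule that whispers build_message(el) when predicate(el) holds."""
        self.handlers.append((predicate, build_message))

    def annotate(self, element):
        """Return (primary_text, whispers) for a focused page element."""
        primary = element.get("text", "")
        whispers = [build(element)
                    for pred, build in self.handlers if pred(element)]
        return primary, whispers


agent = WhisperAgent()
# Whisper structural context when focus enters a heading.
agent.register(
    lambda el: el.get("role") == "heading",
    lambda el: f"heading level {el.get('level', 1)}",
)
# Whisper a link count when focus enters a navigation region.
agent.register(
    lambda el: el.get("role") == "nav",
    lambda el: f"navigation, {el.get('links', 0)} links",
)

primary, whispers = agent.annotate(
    {"role": "heading", "level": 2, "text": "Search results"}
)
# primary is read by the main voice; whispers go to the secondary voice.
```

In the paper's design the secondary voice is rendered concurrently and unobtrusively (a "whisper") rather than interleaved into the main reading stream; this sketch only shows the rule-based pairing of content with context.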

BibTeX

@conference{Zhu-2010-126520,
author = {Shaojian Zhu and Daisuke Sato and Hironobu Takagi and Chieko Asakawa},
title = {Sasayaki: an augmented voice-based web browsing experience},
booktitle = {Proceedings of the 12th International ACM SIGACCESS Conference on Computers and Accessibility (ASSETS '10)},
year = {2010},
month = {October},
pages = {279--280},
}