The RADAR Test Methodology: Evaluating a Multi-Task Machine Learning System with Humans in the Loop

Aaron Steinfeld, Rachael Bennett, Kyle Cunningham, Matt Lahut, Pablo-Alejandro Quinones, Django Wexler, Daniel Siewiorek, Paul Cohen, Julie Fitzgerald, Othar Hansson, Jordan Hayes, Mike Pool, and Mark Drummond
Tech. Report CMU-CS-06-125, Computer Science Department, Carnegie Mellon University, May 2006

Abstract

The RADAR project involves a collection of machine learning research thrusts that are integrated into a cognitive personal assistant. Progress is examined with a test developed to measure the impact of learning when used by a human user. Three conditions (conventional tools, Radar without learning, and Radar with learning) are evaluated in a large-scale, between-subjects study. This paper describes the activities of the RADAR Test with a focus on test design, test harness development, experiment execution, and analysis. Results for the 1.1 version of Radar illustrate the measurement and diagnostic capability of the test. General lessons on such efforts are also discussed.

BibTeX

@techreport{Steinfeld-2006-9480,
author = {Aaron Steinfeld and Rachael Bennett and Kyle Cunningham and Matt Lahut and Pablo-Alejandro Quinones and Django Wexler and Daniel Siewiorek and Paul Cohen and Julie Fitzgerald and Othar Hansson and Jordan Hayes and Mike Pool and Mark Drummond},
title = {The RADAR Test Methodology: Evaluating a Multi-Task Machine Learning System with Humans in the Loop},
year = {2006},
month = {May},
institution = {Carnegie Mellon University},
address = {Pittsburgh, PA},
number = {CMU-CS-06-125},
keywords = {Machine Learning, human-computer interaction, artificial intelligence, multi-agent systems, evaluation, human subject experiments},
}