Using Automated Within-Subject Invisible Experiments to Test the Effectiveness of Automated Vocabulary Assistance

Gregory Aist and Jack Mostow
Workshop Paper, ITS '00 Workshop on Applying Machine Learning to ITS Design/Construction, June 2000

Abstract

Machine learning offers the potential to allow an intelligent tutoring system to learn effective tutoring strategies. A necessary prerequisite to learning an effective strategy is being able to automatically test a strategy's effectiveness. We conducted an automated, within-subject "invisible experiment" to test the effectiveness of a particular form of vocabulary instruction in a Reading Tutor that listens. Both conditions occurred in the context of assisted oral reading with the computer. In the control condition, the student simply encountered the word in a story. In the experimental condition, the student first read a short, automatically generated "factoid" about the word, such as "Cheetah can be a kind of cat. Is it here?", and then read the sentence from the story containing the target word. The initial analysis found no significant difference between the conditions. Further inspection, however, showed that students sometimes benefited from receiving help on "hard" or infrequent words. Designing, implementing, and analyzing this experiment shed light not only on the particular vocabulary help tested, but also on the machine-learning-inspired methodology we used to test the effectiveness of this tutorial action.
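
To make the experimental design concrete, the following is a minimal, hypothetical sketch (in Python) of how such a within-subject invisible experiment could be automated: each target-word encounter is silently randomized to the control or factoid condition, an outcome is logged, and outcomes are later compared within each student. The names used here (run_session, per_student_differences, fake_outcome) are illustrative assumptions, not the Reading Tutor's actual code or data.

    import random
    from collections import defaultdict
    from statistics import mean

    CONTROL, FACTOID = "control", "factoid"

    def run_session(student_id, target_words, outcome_fn, rng):
        """One tutoring session: each target-word encounter is silently
        randomized to a condition, and an outcome measure is logged."""
        records = []
        for word in target_words:
            condition = rng.choice([CONTROL, FACTOID])
            if condition == FACTOID:
                # Experimental condition: present an automatically generated
                # factoid about the word before the story sentence.
                pass
            # Both conditions then involve assisted oral reading of the
            # story sentence containing the target word.
            records.append((student_id, word, condition,
                            outcome_fn(word, condition)))
        return records

    def per_student_differences(records):
        """Within-subject analysis: for each student, compare mean outcomes
        in the factoid condition against the control condition."""
        scores = defaultdict(list)
        for student, _word, condition, outcome in records:
            scores[(student, condition)].append(outcome)
        diffs = {}
        for (student, condition) in list(scores):
            if condition == FACTOID and (student, CONTROL) in scores:
                diffs[student] = (mean(scores[(student, FACTOID)])
                                  - mean(scores[(student, CONTROL)]))
        return diffs

    if __name__ == "__main__":
        rng = random.Random(0)
        # Fake outcome for illustration only; real outcomes would come from
        # the tutor's logs (e.g., whether the word was later read fluently).
        fake_outcome = lambda word, cond: int(rng.random() < 0.5)
        records = []
        for student in ["s1", "s2", "s3"]:
            records += run_session(student,
                                   ["cheetah", "gazelle", "savanna"],
                                   fake_outcome, rng)
        print(per_student_differences(records))

Because every student contributes trials to both conditions, the per-student differences can be analyzed with a paired (within-subject) test rather than a between-groups comparison.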

BibTeX

@inproceedings{Aist-2000-8063,
author = {Gregory Aist and Jack Mostow},
title = {Using Automated Within-Subject Invisible Experiments to Test the Effectiveness of Automated Vocabulary Assistance},
booktitle = {Proceedings of ITS '00 Workshop on Applying Machine Learning to ITS Design/Construction},
year = {2000},
month = {June},
}