Seminar

William Swartout, Director of Technology, Institute for Creative Technologies, and Research Associate Professor of Computer Science, University of Southern California
Thursday, April 14
3:30 pm
What Have We Learned From Virtual Humans?

Event Location: Rashid Auditorium – Gates and Hillman Centers 4401. Open to the public.
Bio: William Swartout is Director of Technology for USC’s Institute for Creative Technologies (ICT) and a research professor of computer science at USC. His research interests include virtual humans, explanation and text generation, knowledge acquisition, knowledge representation, intelligent computer-based education, and the development of new AI architectures. In 2009, Swartout received the Robert Engelmore Award from the Association for the Advancement of Artificial Intelligence (AAAI) for contributions to knowledge-based systems and explanation, groundbreaking research on virtual human technologies and their applications, and service to the artificial intelligence community. He is a Fellow of the AAAI, has served on its Board of Councilors, and is past chair of the Special Interest Group on Artificial Intelligence (SIGART) of the Association for Computing Machinery (ACM). He has served as a member of the Air Force Scientific Advisory Board, the Board on Army Science and Technology of the National Academies, and the JFCOM Transformation Advisory Group. He received his Ph.D. and M.S. in computer science from MIT and his bachelor’s degree from Stanford University.

Abstract: For a little over a decade, we have been building virtual humans (computer-generated characters) at the USC Institute for Creative Technologies. In this talk I will outline some of the lessons we have learned from building these characters. Ultimately, our vision is to create virtual humans that look and behave just like real people. They will think on their own, model and exhibit emotions, and interact using natural language along with the full repertoire of verbal and non-verbal communication techniques that people use. Although the realization of that goal is still in the future, making steps toward it has required us to weave together different threads of AI research, such as computer vision, natural language understanding, and emotion modeling, that are often treated as independent areas of investigation. Interestingly, this is not just an exercise in systems integration; it has revealed synergies across areas that have allowed us to address problems that are difficult to solve from any one perspective alone. I will illustrate some of these synergies in this talk. I will also discuss the role of story in our work and show how embedding virtual humans in a compelling story or scenario can both make them more feasible to implement and suggest new areas of research. Finally, I will point to future areas of research in virtual humans and suggest what might be possible in the not-too-distant future.