Carnegie Mellon University
1:30 pm to 2:30 pm
NSH 1305
Abstract:
Brain-computer interfaces are in the process of moving from the laboratory to the clinic. These devices act by reading neural activity and using it to directly control a device, such as a cursor on a computer screen. Over the past two decades, much attention has been devoted to the decoding problem: how should recorded neural activity be translated into movement of the device in order to achieve the most proficient control? This question is complicated by the fact that learning, especially the long-term skill learning that accompanies weeks of practice, can allow subjects to improve performance over time. Typical approaches to this problem attempt to maximize the biomimetic properties of the device in order to limit the need for extensive training. However, it is unclear whether this approach would ultimately yield better performance than a non-biomimetic device once the subject has engaged in extended practice and learned how to use it. In this thesis, I first recast the decoder design problem from a physical control system perspective, and investigate how various classes of decoders lead to different types of physical systems for the subject to control. This framework leads to new interpretations of why certain types of decoders have been shown to perform better than others. Based on this framework, I present a formal definition of the usability of a device under the assumption that the brain acts as an optimal controller. Using ideas from optimal control theory, it can be shown that the optimal, post-learning mapping can be written as the solution of a constrained optimization problem that maximizes this usability. I then derive the optimal mappings for particular cases common to most brain-computer interfaces. The results suggest that the common approach of creating biomimetic interfaces may not be optimal once learning is taken into account. More broadly, this method provides a blueprint for optimal device design in general control-theoretic contexts.
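The optimal-control framing above can be illustrated with a toy example. The following sketch (my own illustration, not code from the thesis) models the subject as a discrete-time LQR controller of a linear decoder with dynamics x_{t+1} = A x_t + B u_t and quadratic cost; the optimal achievable cost under such a controller is one way a "usability" score for a candidate mapping (A, B) could be evaluated. The matrices and cost weights here are invented for demonstration.

```python
import numpy as np

def lqr_gain(A, B, Q, R, iters=500):
    """Iterate the discrete-time Riccati recursion to obtain the
    optimal state-feedback gain K for u_t = -K x_t."""
    P = Q.copy()
    for _ in range(iters):
        # K minimizes the one-step quadratic cost-to-go
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
    return K, P

# Hypothetical 1-D cursor: position integrates a damped velocity state.
A = np.array([[1.0, 0.1],
              [0.0, 0.9]])
B = np.array([[0.0],
              [0.1]])
Q = np.eye(2)            # penalize deviation from the target state
R = np.array([[0.01]])   # penalize control effort ("neural" input)

K, P = lqr_gain(A, B, Q, R)
# Under the optimal controller the closed loop A - B K is stable.
rho = max(abs(np.linalg.eigvals(A - B @ K)))
print(rho < 1.0)  # → True
```

Comparing the optimal cost (x0' P x0) across candidate (A, B) pairs is the kind of quantity a usability-maximizing design procedure could rank, under the abstract's assumption that the brain behaves as an optimal controller.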
Given the optimal, post-learning mapping, successful implementation of such a brain-computer interface depends critically on the subject’s ability to learn how to modulate the neurons controlling the device. However, the subject’s learning process is probably the least understood aspect of the control loop. An effective training schedule should manipulate the difficulty of the task to provide enough information to guide improvement without overwhelming the subject. In this thesis, I introduce a Bayesian framework for modeling the closed-loop BCI learning process that treats the subject as a bandwidth-limited communication channel. I then develop an adaptive algorithm to find the optimal difficulty schedule for performance improvement. Simulation results demonstrate that this algorithm yields faster learning rates than several heuristic training schedules, and it provides insight into the factors that might affect the learning process.
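The intuition behind adaptive difficulty scheduling can be shown with a toy simulation (an assumed illustration, not the thesis algorithm). Here the subject's latent skill improves in proportion to the information carried by each trial, p(1-p), where p = sigmoid(skill - difficulty) is the success probability; this term peaks when the task is neither trivial nor overwhelming (p = 0.5), so a schedule matched to current skill outpaces a fixed, too-hard one.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def simulate(schedule, trials=200, rate=0.05):
    """Run a deterministic toy learning loop; `schedule` maps the
    current skill and trial index to a task difficulty."""
    skill = 0.0
    for t in range(trials):
        d = schedule(skill, t)
        p = sigmoid(skill - d)          # probability of a successful trial
        skill += rate * p * (1.0 - p)   # informative trials drive improvement
    return skill

# Adaptive schedule: keep difficulty matched to the current skill estimate.
adaptive = simulate(lambda skill, t: skill)
# Fixed schedule: a constant, overwhelming difficulty.
fixed_hard = simulate(lambda skill, t: 5.0)

print(adaptive > fixed_hard)  # → True
```

In the thesis, the difficulty choice is driven by a Bayesian model of the subject as a bandwidth-limited channel rather than this hand-coded heuristic, but the qualitative conclusion sketched here matches the reported simulation result: adapting difficulty to the learner yields faster improvement than fixed schedules.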
Thesis Committee Members:
Steven M. Chase, Chair
Robert E. Kass
J. Andrew Bagnell
Patrick J. Loughlin, University of Pittsburgh