Researchers at Carnegie Mellon University are building a computer system called Gabriel that, like its angelic namesake, will seemingly look over a person’s shoulder and whisper instructions for tasks as varied as repairing industrial equipment, resuscitating a patient or assembling IKEA furniture.
The National Science Foundation has awarded CMU a four-year, $2.8 million grant to further develop the wearable cognitive assistance system. Gabriel uses a wearable vision system, such as Google Glass, and taps into the ubiquitous power of cloud computing via a CMU innovation called a “cloudlet.”
“Ten years ago, people thought of this as science fiction,” said Mahadev Satyanarayanan, professor of computer science and the principal investigator for Gabriel. “But now it’s on the verge of reality.”
In just the past year, Satyanarayanan and his colleagues have built proof-of-concept implementations that guide the assembly of LEGO models, teach freehand sketching and coach a neophyte Ping-Pong player.
“The experience is much like a driver using a GPS navigation system,” Satyanarayanan said. “It gives you instructions when you need them, corrects you when you make a mistake and, most of the time, shuts up so it doesn’t bug you.”
In many respects, it’s like a robot in its use of sensing and task planning; the big difference is that the actuation is performed by a person instead of a machine.
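The article does not detail Gabriel’s software, but the GPS-like behavior Satyanarayanan describes can be pictured as a simple sense-compare-respond loop: watch the scene, check it against the expected step, and speak only when the user finishes a step or makes a mistake. The Python sketch below is a hypothetical illustration of that loop, not the project’s code; `recognize_step`, `speak` and the task script are invented stand-ins.

```python
import time

# Hypothetical task script: the ordered steps the assistant expects to see.
TASK_STEPS = ["attach the side panel", "insert the dowels", "mount the shelf"]

def recognize_step(frame):
    """Stand-in for a computer-vision model. Assumed interface: return the
    step the frame shows, the string "mistake" for a recognizable error,
    or None when the scene is ambiguous."""
    return None

def speak(message):
    """Stand-in for the wearable's audio or heads-up-display output."""
    print(message)

def guidance_loop(capture_frame):
    current = 0
    speak(f"First: {TASK_STEPS[current]}")
    while current < len(TASK_STEPS):
        observed = recognize_step(capture_frame())
        if observed == TASK_STEPS[current]:
            current += 1                                   # step completed
            if current < len(TASK_STEPS):
                speak(f"Next: {TASK_STEPS[current]}")      # instruct when needed
        elif observed == "mistake":
            speak(f"That looks wrong. Try: {TASK_STEPS[current]}")  # correct
        # otherwise stay silent, like a GPS between turns
        time.sleep(0.1)
```

In the real system, the recognition step is far too heavy to run on the wearable device itself, which is where the offloading and latency work described below comes in.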
Wearable cognitive assistants are only now becoming possible because of recent advances in several key areas of hardware, software and computation. Satyanarayanan said rapid progress in computer vision is making it possible for computers to recognize objects and understand the context of scenes. Cognitive algorithms, such as those behind IBM’s Watson, make it possible for computers to direct tasks, and cloud computing supplies the intensive computation needed to run such algorithms.
The speed and agility necessary for these applications are made possible by cloudlets – data centers that provide some of the computational power of the cloud and that support multiple mobile users. Conceived by Satyanarayanan and first implemented by his research group, cloudlets are situated close to users, such as on cell towers or in office buildings, so that they are just one wireless “hop” away.
By “bringing the cloud closer,” cloudlets reduce the round-trip time of communications from the 70 milliseconds typical of cloud computing to a few tens of milliseconds or less. That enables Gabriel to work in real time, supporting applications such as a Ping-Pong coach that instructs a player to hit each return to the right or the left, depending on the location of the opposing player.
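To see why those numbers matter, it helps to count the time budget of a single feedback cycle: capture a frame, send it over the network, recognize what it shows and deliver the cue. In the sketch below, only the 70-millisecond cloud round trip and the “few tens of milliseconds” cloudlet figure come from the article; the budget and processing times are assumptions for the sake of the example.

```python
# Illustrative per-cycle latency budget for real-time coaching.
# FRAME_BUDGET_MS, RECOGNITION_MS and CUE_MS are assumed values,
# not measurements from the Gabriel project.
FRAME_BUDGET_MS = 100   # assume a cue is only useful within ~100 ms
RECOGNITION_MS = 40     # assumed server-side vision processing time
CUE_MS = 10             # assumed time to synthesize and deliver the cue

def cycle_time(network_rtt_ms):
    """Total time for one sense -> offload -> recognize -> respond cycle."""
    return network_rtt_ms + RECOGNITION_MS + CUE_MS

for label, rtt in [("distant cloud", 70), ("nearby cloudlet", 15)]:
    total = cycle_time(rtt)
    verdict = "fits" if total <= FRAME_BUDGET_MS else "misses"
    print(f"{label}: {total} ms end-to-end, {verdict} the {FRAME_BUDGET_MS} ms budget")
```

Under these assumptions, the distant cloud spends 120 milliseconds per cycle and misses the budget, while the cloudlet’s 65 milliseconds leaves room to spare; shaving the network hop is what turns the same recognition pipeline into a usable real-time coach.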
In addition to Satyanarayanan, the research team includes Martial Hebert, director of the CMU Robotics Institute and an expert in computer vision; Daniel Siewiorek, a professor in the Human-Computer Interaction Institute and a pioneer in wearable computing; and Roberta Klatzky, a professor of psychology and human-computer interaction who specializes in human assistance technologies.
The researchers will focus on the fundamental technologies needed to make wearable cognitive assistants practical, such as further improvements in computer vision and the incorporation of audio and location sensing. The initial applications will be for tasks that require specialized knowledge or skills, but ultimately cognitive assistance could be applied to virtually all facets of everyday life.