Human Telesupervision of Very Heterogeneous Planetary Robot Teams
Abstract
Lunar and planetary surfaces are the most hostile working environments into which humans can be sent. The protective spacesuit is massive and cumbersome, and extravehicular activity (EVA) mission time is limited by both the suit's resources and the astronaut's stamina. Maintaining a human presence on the Moon, and expanding it to Mars, requires enormous investments in transportation and life support for each human. Therefore, successful and sustainable space exploration and operations must maximize the efficiency of every astronaut and keep them "as safe as reasonably achievable". Towards this goal, tasks for which current robotic autonomy technologies are effective should be offloaded from the astronauts. However, whenever the limits of autonomy are reached, a human will need to intervene, preferably by telesupervising the robotic assets and thus reducing EVAs.

Employing an effective telesupervision architecture to augment the ingenuity of a human supervisor with state-of-the-art autonomous systems yields a manifold increase in the human's performance and a significant improvement in safety. This completely changes the risk profile of a mission and allows astronauts to perform substantial amounts of hazardous work from a well-supplied operations base, such as an orbital station, a Crew Exploration Vehicle (CEV), or a Lunar or Martian habitat. Telesupervised robotic systems have been identified as a key technology by the NASA Exploration Systems Mission Directorate and are crucial to the success of the Vision for Space Exploration. However, very little applicable work has been done on the design of telesupervised system architectures, on the appropriate mix of autonomy and remote control and the context switching between them, or on the testing and deployment of such systems.

This paper focuses on the development of an advanced telesupervision system architecture that provides a highly efficient approach to human-robot interaction while allowing very heterogeneous robotic assets to be deployed. These assets include exploration rovers and climbers; large autonomous miners and transporters; stationary in-situ resource utilization (ISRU) processing plants, materials fabricators, and power stations; and construction and maintenance robots. We argue that acquiring the state of each varied robot and its environment involves not only telemetry and high-fidelity telepresence (including proprioceptive cues), but also a sensorial "playback" of the recent history of autonomous operation that reveals the issues leading to the crisis that now requires assistance. Providing the framework within which this history and context are acquired and reproduced is crucial to a viable telesupervision architecture.

Our philosophy of maximizing the efficiency and safety of humans through telesupervision of autonomous robotic systems is applicable across all anticipated operational phases of the Vision for Space Exploration, including telesupervising Lunar robots from Earth, Lunar orbit, or the Lunar surface, and telesupervising Martian robots from Mars orbit or the Martian surface. The described architecture also applies to on-orbit assembly, inspection, and maintenance operations. Finally, the telesupervision architecture is relevant to Earth science applications such as ecological forecasting, water management, carbon management, disaster management, coastal management, and homeland security.
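To make the history "playback" idea above concrete, the following Python sketch shows one way a robot-side recorder could keep a rolling window of telemetry and bundle it into an assistance request that the telesupervisor can replay. This is a minimal illustration under stated assumptions, not the paper's implementation; all names (TelemetrySample, AssistanceRequest, HistoryRecorder, the sensor fields) are hypothetical.

# Illustrative sketch only -- not the paper's interfaces or data formats.
from collections import deque
from dataclasses import dataclass
import time

@dataclass
class TelemetrySample:
    """One timestamped snapshot of robot state and sensor readings."""
    timestamp: float
    pose: tuple            # e.g., (x, y, heading)
    sensors: dict          # raw or summarized sensor values
    autonomy_mode: str     # e.g., "autonomous", "teleoperated"

@dataclass
class AssistanceRequest:
    """Package sent to the telesupervisor when autonomy needs help."""
    robot_id: str
    reason: str
    history: list          # recent TelemetrySample objects, oldest first

class HistoryRecorder:
    """Rolling buffer of recent telemetry for post-hoc playback."""

    def __init__(self, robot_id: str, horizon_s: float = 120.0):
        self.robot_id = robot_id
        self.horizon_s = horizon_s
        self._buffer = deque()

    def record(self, sample: TelemetrySample) -> None:
        """Append a sample and drop anything older than the horizon."""
        self._buffer.append(sample)
        cutoff = sample.timestamp - self.horizon_s
        while self._buffer and self._buffer[0].timestamp < cutoff:
            self._buffer.popleft()

    def request_assistance(self, reason: str) -> AssistanceRequest:
        """Bundle the recent history so the supervisor can replay it."""
        return AssistanceRequest(robot_id=self.robot_id, reason=reason,
                                 history=list(self._buffer))

# Example: a rover logs telemetry, then hands the last two minutes of
# context to the telesupervisor when it detects excessive wheel slip.
recorder = HistoryRecorder("prospecting-rover-1")
recorder.record(TelemetrySample(time.time(), (10.0, 4.2, 1.57),
                                {"wheel_slip": 0.8}, "autonomous"))
request = recorder.request_assistance("wheel slip above safe threshold")
print(request.robot_id, request.reason, len(request.history), "samples")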
We describe two applications of our Multilevel-Autonomy Robot Telesupervision Architecture: to planetary mineral prospecting using multiple semi-autonomous rovers, based on work conducted under a past project funded by the NASA Exploration Systems Mission Directorate; and to Harmful Algal Bloom detection and characterization by multiple semi-autonomous ocean vessels, based on work conducted under an ongoing project funded by the NASA Earth Science Technology Office. By addressing the real problem of human-robot cooperative effectiveness, the system described in this paper is responsive to the goals of NASA's Global Exploration Strategy and the Lunar Exploration Program Architecture. The Lunar activities to which this architecture is applicable are an essential testbed for refining the technology for subsequent deployment to Mars. Applying this telesupervision architecture will save thousands of hours of astronaut time, as well as thousands of tons of mass, because fewer astronauts will need to be supported to achieve the Lunar and Martian objectives.
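The one-supervisor, many-heterogeneous-assets pattern common to both applications can be sketched as follows: each asset (rover, ocean vessel, ISRU plant, etc.) runs autonomously until it raises an assistance request, at which point the telesupervisor switches that single asset to remote control while the rest of the team keeps working. This Python sketch is an assumption-laden illustration of that pattern; the class names, control modes, and methods are hypothetical and not the paper's design.

# Illustrative sketch only -- mode names and interfaces are hypothetical.
from enum import Enum, auto

class ControlMode(Enum):
    AUTONOMOUS = auto()      # asset runs under its own autonomy
    SAFEGUARDED = auto()     # paused in a safe state, awaiting attention
    TELEOPERATED = auto()    # under direct remote control by the supervisor

class Asset:
    """One robotic asset in the heterogeneous team."""

    def __init__(self, asset_id: str, kind: str):
        self.asset_id = asset_id
        self.kind = kind                    # "rover", "ocean_vessel", ...
        self.mode = ControlMode.AUTONOMOUS
        self.pending_reason = None

    def request_assistance(self, reason: str) -> None:
        """Autonomy hit its limits: park safely and flag the supervisor."""
        self.mode = ControlMode.SAFEGUARDED
        self.pending_reason = reason

class Telesupervisor:
    """A single human workstation overseeing the whole team."""

    def __init__(self, assets):
        self.assets = {a.asset_id: a for a in assets}

    def next_asset_needing_help(self):
        """Return the first asset waiting in a safeguarded state, if any."""
        for asset in self.assets.values():
            if asset.mode is ControlMode.SAFEGUARDED:
                return asset
        return None

    def take_control(self, asset: Asset) -> None:
        asset.mode = ControlMode.TELEOPERATED

    def release_control(self, asset: Asset) -> None:
        asset.mode = ControlMode.AUTONOMOUS
        asset.pending_reason = None

# Example: two prospecting rovers and one algal-bloom survey vessel.
team = [Asset("rover-1", "rover"), Asset("rover-2", "rover"),
        Asset("vessel-1", "ocean_vessel")]
supervisor = Telesupervisor(team)

team[2].request_assistance("uncertain bloom classification")
asset = supervisor.next_asset_needing_help()
supervisor.take_control(asset)          # human resolves the situation
supervisor.release_control(asset)       # asset resumes autonomous work
print(asset.asset_id, asset.mode.name)  # vessel-1 AUTONOMOUS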
BibTeX
@conference{Podnar-2007-9813,
author = {Gregg Podnar and John M. Dolan and Alberto Elfes and Marcel Bergerman},
title = {Human Telesupervision of Very Heterogeneous Planetary Robot Teams},
booktitle = {Proceedings of 45th AIAA Aerospace Sciences Meeting (AIAA '07)},
year = {2007},
month = {September},
}