Decorative painting of large, complex surfaces with highly detailed patterns is a challenging operation in aerospace manufacturing. It requires a tedious masking-and-painting process that still relies heavily on manual labor and is extremely time-consuming. This slow, manual process also poses health risks to workers: full-body protective suits and respirators are mandatory at all times during painting, and the suits themselves hamper performance and add further time. One potential solution is to combine industrial-grade inkjet applicators with large-scale articulated robotic arms, so that future airplanes and other large structures can be painted more efficiently and safely by a team of autonomous robots, producing a photorealistic painted surface.
Toward this goal, this research aims to provide two key solutions for the robotic painting of large aircraft:
- Heterogeneous coverage planning for multi-robot and multi-scale painting systems;
- Sensor-based online dynamic path replanning with edge computing for improved positioning accuracy and obstacle avoidance capability.
Heterogeneous coverage planning
The process of determining optimal paths for robotic painting end-effectors to paint the exterior of an airplane is called a “coverage planning problem.” The problem seeks the shortest painting time while requiring the end-effectors to cover the target surface completely and uniformly. When the coverage planning problem involves multiple, heterogeneous types of robots with different capabilities and limitations, it presents two additional challenges: 1) the time-optimal paths must coordinate multiple distinct systems to guarantee complete coverage of the target surfaces, and 2) the plans must be safe, in that no robots can collide with one another while operating in a shared workspace.
Prior works: Conventional methods can be categorized into primitive-based and sampling-based approaches, for 2D and 3D target surfaces respectively. In the 2D case, primitive-based approaches, some originating from Choset’s prior work on coverage, use cell decomposition methods to separate the surface into multiple areas that are covered separately. Specifically, Choset et al. proposed Boustrophedon decomposition, which produces simplified cells in which shorter paths that guarantee complete coverage can be found. These methods can also be extended to multi-robot scenarios [6]. They are well suited to agriculture and cleaning robots, e.g., robot vacuum cleaners and lawnmowers, and can be extended to the painting of simple surfaces, e.g., automotive surface patches.
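As a point of reference, the boustrophedon pattern reduces coverage of a convex cell to a simple back-and-forth sweep. The sketch below generates such a sweep over an axis-aligned rectangular cell, with lane spacing set by an assumed nozzle footprint width; the cell bounds and `footprint` value are illustrative, not parameters from the cited work.

```python
# Minimal sketch: boustrophedon (back-and-forth) sweep over one rectangular
# cell produced by a cell decomposition. Cell bounds and nozzle footprint
# are illustrative values, not parameters from the cited work.
from typing import List, Tuple

def boustrophedon_path(x_min: float, x_max: float,
                       y_min: float, y_max: float,
                       footprint: float) -> List[Tuple[float, float]]:
    """Return waypoints that sweep the cell in lanes spaced one footprint apart."""
    waypoints = []
    y = y_min + footprint / 2.0          # center the first lane inside the cell
    left_to_right = True
    while y <= y_max:
        if left_to_right:
            waypoints += [(x_min, y), (x_max, y)]
        else:
            waypoints += [(x_max, y), (x_min, y)]
        left_to_right = not left_to_right
        y += footprint                   # adjacent lanes abut, giving full coverage
    return waypoints

# Example: a 2 m x 1 m cell painted with a 0.1 m wide spray footprint.
path = boustrophedon_path(0.0, 2.0, 0.0, 1.0, footprint=0.1)
```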
This work seeks to develop a hierarchical framework that accounts for the heterogeneity of robots with different nozzle footprints, velocity/acceleration limits, and other painting capabilities. At the local planning level, our method produces a uniform coverage trajectory by utilizing pre-defined path patterns and navigates around local obstacles with the cell decomposition method. The proposed method optimizes for efficiency by minimizing the overall painting time and energy cost of the multi-robot system, while seeking feasible solutions that guarantee complete and uniform coverage to satisfy the painting objectives.
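To make the hierarchical idea concrete, the sketch below illustrates a possible global allocation layer: it greedily assigns surface patches to heterogeneous robots, estimating each robot's paint time from its own footprint and speed and balancing the overall makespan. The robot parameters, the time model, and the greedy rule are illustrative assumptions, not the final optimizer.

```python
# Minimal sketch of a global allocation layer: assign surface patches to
# heterogeneous robots so that the busiest robot (the makespan) stays small.
# Robot footprints/speeds and the greedy rule are illustrative assumptions.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Robot:
    name: str
    footprint: float      # spray width in meters
    speed: float          # end-effector speed in m/s
    load: float = 0.0     # accumulated paint time in seconds
    patches: List[int] = field(default_factory=list)

def paint_time(robot: Robot, patch_area: float) -> float:
    """Estimate time to sweep a patch: path length ~ area / footprint."""
    return (patch_area / robot.footprint) / robot.speed

def allocate(patch_areas: List[float], robots: List[Robot]) -> List[Robot]:
    """Greedy makespan balancing: give each patch (largest first) to the
    robot that would finish it the earliest."""
    for idx in sorted(range(len(patch_areas)),
                      key=lambda i: patch_areas[i], reverse=True):
        best = min(robots, key=lambda r: r.load + paint_time(r, patch_areas[idx]))
        best.load += paint_time(best, patch_areas[idx])
        best.patches.append(idx)
    return robots

# Example: two robots with different nozzles share five fuselage patches.
robots = [Robot("wide-nozzle", footprint=0.20, speed=0.5),
          Robot("fine-detail", footprint=0.05, speed=0.3)]
allocate([4.0, 2.5, 1.0, 0.8, 0.3], robots)
```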
Sensor-based online dynamic path replanning
Given an initial pre-planned coverage trajectory for the inkjet painting of an airplane body, multiple close-range sensors can be deployed to help the robot arms follow the given trajectory and check ink quality. Issues that may arise during trajectory execution include 1) deviations from the CAD model, for example an airplane body curvature that is slightly deformed, or unmodeled rivets on the wing surface; and 2) structural deformation and vibration of the large-scale robotic arms, which introduce positioning error and result in misalignment during the multi-path painting process.
Prior works: Current methods for sensor-based online dynamic path replanning consist of two steps: deploy sensors to build an understanding of the world, then use a dynamic replanning algorithm to navigate the robot arm along the desired trajectory. For sensory understanding, Choset’s group has developed a confined-space Simultaneous Localization and Mapping (SLAM) technique capable of scanning and mapping in confined spaces using a custom-designed sensor called “Blaser,” which generates information that aids trajectory execution.
As this project is new, we propose to further improve confined-space SLAM via the following six steps (a minimal sketch of the resulting loop follows the list):
1. Initialize the global map using geometry and motion constraints, and update the painting objective functions.
2. Receive RGB-Depth sensory data with the close-range Blaser sensor(s).
3. Localize the printhead’s position and orientation within the global map using collected sensory data.
4. Update the global map using the captured sensor data and the solved position + orientation.
5. Project the coverage trajectory into the global map, and run the dynamic path replanning and obstacle avoidance algorithm to navigate toward the next waypoint.
6. Repeat from [Step 2] until painting is completed.
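The sketch below shows one way these six steps could compose into a single sensing-and-replanning loop. The callables passed in stand for the perception, localization, mapping, and planning modules described above; they are hypothetical placeholders, not existing APIs.

```python
# Structural sketch of the online sensing/replanning loop (Steps 1-6).
# Each callable is a hypothetical placeholder for a module described in
# the text; none of these are existing library calls.
from typing import Callable, Any

def paint_with_online_slam(init_map: Callable[[], Any],
                           read_sensor: Callable[[], Any],
                           localize: Callable[[Any, Any], Any],
                           fuse: Callable[[Any, Any, Any], Any],
                           replan_step: Callable[[Any, Any], bool]) -> None:
    """Run Steps 1-6: build a map, then sense -> localize -> update -> replan
    until replan_step reports that the coverage trajectory is finished."""
    global_map = init_map()                         # Step 1: initialize global map
    done = False
    while not done:                                 # Step 6: repeat until complete
        frame = read_sensor()                       # Step 2: close-range RGB-D frame
        pose = localize(frame, global_map)          # Step 3: printhead pose in the map
        global_map = fuse(global_map, frame, pose)  # Step 4: map update
        done = replan_step(pose, global_map)        # Step 5: project path, replan, move
```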
Compared to a conventional plan-then-act process without online sensing, our approach would allow multiple robot arms to be co-registered into a unified coordinate frame, aligning the painted image during multi-robot, multi-path motion execution. Additionally, our confined-space close-range sensor enables high-resolution surface measurement that not only creates a detailed inspection map, but also provides critical situational awareness for motion replanning during obstacle avoidance maneuvers.
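One way to co-register two arms is to estimate the rigid transform between their base frames from a handful of surface points observed by both close-range sensors. The sketch below uses the standard SVD-based (Kabsch) rigid-alignment solution in NumPy; the matched point correspondences are assumed to be given, which is a simplifying assumption rather than part of the proposed pipeline.

```python
# Minimal sketch: co-register two robots' coordinate frames from matched
# 3D points that both close-range sensors have observed on the same surface.
# Standard SVD (Kabsch) rigid alignment; the matched points are assumed given.
import numpy as np

def register_frames(pts_a: np.ndarray, pts_b: np.ndarray):
    """Return rotation R and translation t such that R @ p_b + t ≈ p_a.
    pts_a, pts_b: (N, 3) arrays of corresponding points in each robot's frame."""
    ca, cb = pts_a.mean(axis=0), pts_b.mean(axis=0)   # centroids
    H = (pts_b - cb).T @ (pts_a - ca)                 # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                          # guard against reflections
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = ca - R @ cb
    return R, t
```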
Additionally, dynamic local path replanning methods will be studied and developed. We propose using a geometry-based method to project the given coverage trajectory and obstacle information onto the airplane surface defined in the global map. The robot arm then navigates between the coverage trajectory’s waypoints using a local planner that emphasizes a smooth, straight trajectory in Cartesian space via an inverse-kinematics approach. The geometry-based approach reduces computational cost and yields a rapid replanning cycle, so motion adjustments can be controlled and executed in real time with the help of sensory feedback.
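As a concrete instance of the geometry-based idea, the sketch below projects the next waypoint onto a locally planar approximation of the surface patch and samples a straight Cartesian segment toward it. The plane approximation (a point and a normal) and the omitted inverse-kinematics call are simplifying assumptions for illustration.

```python
# Minimal sketch of the geometry-based local step: project the next waypoint
# onto a (locally planar) surface patch from the global map, then sample a
# straight Cartesian segment toward it. The plane approximation and the
# omitted inverse-kinematics call are simplifying assumptions.
import numpy as np

def project_to_surface(point: np.ndarray, surf_point: np.ndarray,
                       surf_normal: np.ndarray) -> np.ndarray:
    """Orthogonally project a 3D point onto the tangent plane of the surface."""
    n = surf_normal / np.linalg.norm(surf_normal)
    return point - np.dot(point - surf_point, n) * n

def straight_segment(start: np.ndarray, goal: np.ndarray,
                     step: float = 0.01) -> np.ndarray:
    """Sample a straight Cartesian path from start to goal at `step` spacing;
    each sample would then be handed to the arm's inverse-kinematics solver."""
    n_steps = max(2, int(np.ceil(np.linalg.norm(goal - start) / step)) + 1)
    return np.linspace(start, goal, n_steps)

# Example: move toward a coverage waypoint projected onto a patch whose local
# plane passes through the origin with normal +z (illustrative values).
waypoint = np.array([0.40, 0.10, 0.03])
goal = project_to_surface(waypoint, np.zeros(3), np.array([0.0, 0.0, 1.0]))
segment = straight_segment(np.array([0.35, 0.05, 0.0]), goal, step=0.005)
```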