Abstract
The first commercial use of robotics in manufacturing was for welding operations. Since then, industrial robots have become more diverse and are now common for automating manufacturing processes. Even though robotic design and control have progressed rapidly over the past half-century, many welding operations are still performed manually. Current industrial robots have yet to mimic the skills that trained professionals bring to complex welding scenarios. Unlike a skilled human welder, existing robotic control cannot adaptively adjust its operation in response to a dynamic welding environment. Sophisticated, adaptive robotic control requires three elements: perception, prediction, and reaction. Perception (e.g., of weld pool dynamics and welder operations) can be readily realized through in-situ high-speed cameras, but real-time welding quality prediction (e.g., of penetration and back-side bead width) and process control (e.g., adjustment of welding speed and current) to stabilize and maximize welding quality are more difficult. Accurate prediction and real-time reaction rely on effective and efficient processing of the perception data and characterization of this highly dynamic system. Emerging machine learning and deep learning techniques have the potential to realize adaptive robotic control that mirrors human capabilities.
This research presents a preliminary study on developing machine learning techniques for real-time welding quality prediction and adaptive welding speed adjustment in TIG welding at constant current. To collect the data needed to train the machine learning models, two cameras monitored the welding process: one camera (available in practical robotic welding) recorded the top-side weld pool dynamics, and a second camera (unavailable in practical robotic welding, but applicable for training purposes) recorded the back-side bead formation. Given these two data sets, correlations can be discovered through a convolutional neural network (CNN), which is well suited to image characterization. With the trained CNN, top-side weld pool images can be analyzed to predict the back-side bead width during active welding control. Furthermore, the monitoring process was applied to multiple experimental trials at varying speeds, allowing the effect of welding speed on bead width to be modeled with a standard perceptron. On top of the trained perceptron, a computationally efficient gradient descent algorithm was developed to adjust the travel speed to achieve an optimal bead width with full material penetration. Owing to the nature of gradient descent, the robot changes speed quickly when the quality is far from the goal and fine-tunes the speed as it approaches.
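The speed-adjustment loop described above can be sketched as follows. This is a minimal illustration only: the linear perceptron weights, learning rate, step count, and target bead width are placeholder assumptions for a single-input model, not the paper's actual trained parameters.

```python
# Placeholder perceptron parameters: predicted back-side bead width (mm)
# as a linear function of travel speed (mm/s). Width is assumed to
# decrease as speed increases; real values would come from training.
W, B = -0.8, 8.0

def predict_width(speed):
    """Perceptron forward pass: predicted bead width at a given travel speed."""
    return W * speed + B

def adjust_speed(speed, target_width, lr=0.05, steps=100):
    """Gradient descent on the squared width error with respect to speed.

    A large error produces a large speed update; as the predicted width
    nears the target, the gradient shrinks and the speed is fine-tuned.
    """
    for _ in range(steps):
        error = predict_width(speed) - target_width
        grad = 2.0 * error * W  # d(error**2)/d(speed) through the perceptron
        speed -= lr * grad
    return speed

# Example: starting from 2.0 mm/s, converge toward the speed that
# the placeholder model predicts yields a 4 mm bead width (5.0 mm/s).
final_speed = adjust_speed(speed=2.0, target_width=4.0)
```

With these placeholder weights the update is a contraction, so the loop converges to the same speed regardless of the initial value, which matches the behavior reported for the robot.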
Preliminary experiments have shown promising results in both the bead width prediction based on in-situ weld pool imaging and the correlation between travel speed and bead width. These algorithms were successfully programmed on a UR-5 robot, which adaptively adjusted its speed to achieve the optimum bead width regardless of the initial welding speed. This framework for welding speed adjustment will be extended to changes in current, path, and orientation, which are more complex, in order to apply robotic solutions to the most difficult welding scenarios.