Abstract
Specialized skills such as the adaptive judgment and decision making required for sophisticated welding operations take many years of practice to develop, yet there is a growing shortage of skilled welders. Robotic welding can help address this shortage, but robotic systems must then possess some of the adaptive judgment and decision-making capabilities that skilled welders have. Establishing such capabilities requires automatically extracting the weld pool scene and its features from images in real time. Because the welding process is highly complex, many factors generated during welding, such as arc radiation, spatter, and oscillation of the weld pool, influence the captured images. It is therefore challenging to design a static feature pattern that consistently detects the weld pool scene, including but not limited to the pool edge and position, in real time. This work develops a deep learning network to solve this problem effectively and accurately. The network’s architecture consists of a contracting path that captures context and a symmetric expanding path that enables precise localization. The network can be trained end-to-end from a few images and delivers high-quality segmentation of the weld pool scene for the features of interest. It is also fast enough to satisfy the real-time requirements of welding process monitoring and control. With this network, a robot can follow a human welder, capture the moving weld scene, and obtain weld information such as the weld pool, weld arc, and weld seam in real time. Such information captures how an experienced human welder reacts in different welding circumstances, for example how the welding torch is adjusted in response to different weld pools; in other words, the welder’s “weld pool/arc condition – torch manipulation” relationships. Based on this information, we can provide the robot with a model that enables it to react to the weld pool like a skilled human welder.
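As a rough illustration of the contracting/expanding architecture described above, the following is a minimal, shape-level sketch in numpy. It is not the paper's implementation: all function names are hypothetical, 1x1 random projections stand in for trained 3x3 convolutions, and the input size (64x64 grayscale) is an assumption. It only demonstrates how the contracting path halves resolution while deepening channels, and how the expanding path restores resolution using skip connections from the contracting side.

```python
import numpy as np

def conv_block(x, out_ch, rng):
    # Stand-in for a trained 3x3 convolution: a random 1x1 channel
    # projection followed by ReLU. This sketches data flow, not learning.
    h, w, c = x.shape
    W = rng.standard_normal((c, out_ch)) / np.sqrt(c)
    return np.maximum(x @ W, 0.0)

def down(x):
    # 2x2 max pooling: halves spatial resolution (contracting path).
    h, w, c = x.shape
    return x.reshape(h // 2, 2, w // 2, 2, c).max(axis=(1, 3))

def up(x):
    # Nearest-neighbour upsampling: doubles resolution (expanding path).
    return x.repeat(2, axis=0).repeat(2, axis=1)

def unet_like(x, rng):
    # Contracting path: capture context at progressively lower resolution.
    e1 = conv_block(x, 16, rng)          # H   x W   x 16
    e2 = conv_block(down(e1), 32, rng)   # H/2 x W/2 x 32
    b  = conv_block(down(e2), 64, rng)   # H/4 x W/4 x 64 (bottleneck)
    # Expanding path: recover precise localization; skip connections
    # concatenate the matching contracting-path features.
    d2 = conv_block(np.concatenate([up(b), e2], axis=-1), 32, rng)
    d1 = conv_block(np.concatenate([up(d2), e1], axis=-1), 16, rng)
    # Per-pixel scores for two classes (e.g. weld pool vs. background).
    return d1 @ rng.standard_normal((16, 2))

rng = np.random.default_rng(0)
img = rng.standard_normal((64, 64, 1))  # hypothetical grayscale weld image
out = unet_like(img, rng)
print(out.shape)  # a per-pixel class score map at full input resolution
```

A real implementation would replace the random projections with trained convolution blocks and add a softmax plus a segmentation loss, but the symmetric shape arithmetic, which is what makes precise localization possible, is the same.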