Abstract
Effective monitoring and control of weld penetration are crucial for intelligent robotic welding. However, because fusion occurs beneath the workpiece and cannot be observed directly, weld penetration must be inferred from complex surface phenomena, which has posed a long-standing challenge for researchers. Recent breakthroughs in deep learning have advanced this inference, but training such models requires large numbers of weld penetration labels, which are not readily available. An important observation is that welding images contain abundant visual features, yet only those causally related to the welding parameters reflect penetration. Based on this observation, this work proposes an unsupervised and robust framework for weld penetration monitoring built on a physically guided variational autoencoder (VAE). The model incorporates temporal welding parameters as causal guidance to structure the latent space without relying on penetration labels: the VAE learns a latent representation of the welding images, guided by the penetration predicted from the welding parameters. This guidance forces part of the latent space to encode penetration-relevant features, while the remaining part captures unrelated visual variations. The proposed method is validated with experimental data and shown to be robust to various imaging disturbances. In addition, the causal guidance model enables dynamic process modeling in an unsupervised manner.
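For concreteness, the guided latent-space idea can be sketched roughly as follows. This is a minimal, hypothetical sketch, not the authors' implementation: an image encoder produces a latent code whose first few dimensions are pulled toward a proxy penetration signal predicted from the welding-parameter sequence, while the remaining dimensions absorb unrelated visual variation. All names, dimensions, network choices, and loss weights (GuidedVAE, z_p_dim, gamma, the 64x64 input size, the GRU guidance model) are illustrative assumptions.

# Illustrative sketch (assumed details, not the paper's code) of a VAE whose
# latent space splits into a penetration-relevant part z_p and a nuisance part
# z_n, with z_p guided by a proxy derived from time-series welding parameters.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GuidedVAE(nn.Module):
    def __init__(self, z_p_dim=4, z_n_dim=12, param_dim=3):
        super().__init__()
        z_dim = z_p_dim + z_n_dim
        self.z_p_dim = z_p_dim
        # Image encoder: assumed 64x64 single-channel weld-pool image -> latent statistics.
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 4, 2, 1), nn.ReLU(),   # 64 -> 32
            nn.Conv2d(16, 32, 4, 2, 1), nn.ReLU(),  # 32 -> 16
            nn.Conv2d(32, 64, 4, 2, 1), nn.ReLU(),  # 16 -> 8
            nn.Flatten(),
        )
        self.fc_mu = nn.Linear(64 * 8 * 8, z_dim)
        self.fc_logvar = nn.Linear(64 * 8 * 8, z_dim)
        # Decoder: latent code -> reconstructed image.
        self.fc_dec = nn.Linear(z_dim, 64 * 8 * 8)
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, 2, 1), nn.ReLU(),
            nn.ConvTranspose2d(32, 16, 4, 2, 1), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 4, 2, 1), nn.Sigmoid(),
        )
        # Causal-guidance model: a window of welding parameters (e.g. current,
        # voltage, travel speed) -> proxy for the penetration-relevant code.
        self.guidance = nn.GRU(param_dim, 32, batch_first=True)
        self.guidance_head = nn.Linear(32, z_p_dim)

    def forward(self, image, params_seq):
        h = self.encoder(image)
        mu, logvar = self.fc_mu(h), self.fc_logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization
        recon = self.decoder(self.fc_dec(z).view(-1, 64, 8, 8))
        _, g = self.guidance(params_seq)          # final GRU hidden state
        proxy = self.guidance_head(g.squeeze(0))  # parameter-predicted penetration code
        return recon, mu, logvar, z, proxy

def loss_fn(recon, image, mu, logvar, z, proxy, z_p_dim=4, beta=1.0, gamma=10.0):
    rec = F.mse_loss(recon, image, reduction="mean")
    kld = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    # Guidance term: align the first z_p_dim latent dimensions with the
    # parameter-derived proxy, leaving the rest free for unrelated variations.
    guide = F.mse_loss(z[:, :z_p_dim], proxy, reduction="mean")
    return rec + beta * kld + gamma * guide

In such a setup, training uses only the reconstruction, KL, and guidance terms; no penetration labels appear in the loss, which mirrors the unsupervised character of the proposed framework.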