This work aims to automatically track the weld pool boundary in order to monitor the process and to collect operation and process data for analysis and modeling of the operator. Strong arc radiation and the less predictable manual operation add challenges. It is difficult to design a static feature pattern that consistently detects the weld pool boundary/position under different welding conditions. A deep learning network, such as a convolutional neural network (CNN), could provide a solution. However, its training requires a large amount of labeled data. In a classification problem where each image corresponds to one label, an automated method may be available to compute the needed labels. Unfortunately, for the image segmentation problem addressed in this study, i.e., detecting the weld pool, the labels cannot be generated automatically, or the problem would already have been solvable by conventional image processing. An effective deep learning approach that does not require large datasets is therefore needed. As such, we propose to use a U-Net architecture, which extracts contextual information by combining low-level feature maps with high-level ones. It not only can be trained end-to-end from a few images to perform well but is also known for its real-time capability. To train the network, various welding experiments were conducted by randomly changing the welding current and welding speed, generating a variety of weld pool boundaries/shapes under different welding conditions. The training data is thus representative, better ensuring the reliability and robustness of the trained network. Experimental results verified that the trained network accurately detects the weld pool boundary under various welding currents, welding speeds, and weld pool shapes.
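The skip-connection idea behind U-Net, combining a low-level (high-resolution) feature map with an upsampled high-level (coarse) one, can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: plain max-pooling and nearest-neighbor upsampling stand in for the learned convolutions, and all array shapes are illustrative assumptions.

```python
import numpy as np

def max_pool2x2(x):
    # Coarsen an (H, W, C) feature map by taking the max over 2x2 windows,
    # mimicking the encoder's downsampling path.
    h, w, c = x.shape
    return x[:h // 2 * 2, :w // 2 * 2].reshape(h // 2, 2, w // 2, 2, c).max(axis=(1, 3))

def upsample2x(x):
    # Nearest-neighbor upsampling back to the finer resolution,
    # standing in for the decoder's learned up-convolution.
    return x.repeat(2, axis=0).repeat(2, axis=1)

def unet_skip_demo(low_level):
    # Encoder: derive a coarse, high-level representation.
    high_level = max_pool2x2(low_level)
    # Decoder: upsample the high-level map to the low-level resolution.
    upsampled = upsample2x(high_level)
    # U-Net skip connection: concatenate the low-level map with the
    # upsampled high-level map along the channel axis, so the decoder
    # sees both fine boundary detail and coarse context.
    return np.concatenate([low_level, upsampled], axis=-1)

features = np.random.rand(64, 64, 3)   # hypothetical 64x64, 3-channel feature map
fused = unet_skip_demo(features)
# fused has shape (64, 64, 6): original channels plus the upsampled context channels
```

In the actual network, convolutions follow each concatenation, but the shape bookkeeping (halve spatial size going down, double it coming up, concatenate channels at matching resolutions) is exactly what this sketch shows.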