Electron microscopy is widely used to explore defects in crystal structures, but human tracking of defects is time-consuming, error-prone, and difficult to reproduce, and it does not scale to large numbers of images or to real-time analysis. In this work we discuss the application of deep learning machine-vision approaches to find the location and geometry of different defect clusters in irradiated steels. We assess both a bounding-box approach, the Faster Region-based Convolutional Neural Network (Faster R-CNN), and a pixel-level segmentation approach, Mask R-CNN, and demonstrate that the two perform similarly. We also consider the You Only Look Once (YOLO) approach and demonstrate the capability for real-time analysis of electron microscopy videos, including tracking of defect counts, growth, and diffusion. In all cases we achieve performance comparable to human labeling, suggesting that these technologies can enable orders-of-magnitude gains in our ability to analyze defects.