ICME 2023: AI/ML: Microstructure II
Program Organizers: Charles Ward, AFRL/RXM; Heather Murdoch, U.S. Army Research Laboratory

Tuesday 10:10 AM
May 23, 2023
Room: Caribbean VI & VII
Location: Caribe Royale

Session Chair: Austin Mann, ATI Materials


10:10 AM  
Microstructural Analysis of Stainless Steel SEM Images by Combining EBSD Data and Deep Learning: Julia Nguyen1; Jenna Pope1; Christina Doty1; Marissa Gomez Hernandez1; 1PNNL
    Microstructural features influence material properties, thus characterization of these features is essential to understanding and predicting material performance. One powerful tool for microstructural characterization is electron backscatter diffraction (EBSD). EBSD is a scanning electron microscopy (SEM) based technique that provides information about crystal structure and orientation, allowing quantification of important features such as shape, size and orientation of grains. While EBSD can provide detailed and accurate information about microstructures, this technique is time-consuming and expensive, limiting its utility for high-throughput microstructural analysis. Here, we describe deep convolutional neural networks that take backscatter electron SEM images, which are easier and faster to collect, and perform semantic segmentation to identify grains and grain boundaries, allowing for microstructural analysis typically accessible through EBSD. We demonstrate the utility and accuracy of our models by performing grain size and shape analysis for stainless steel.
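
As a rough illustration of the grain-size analysis such a segmentation enables, the sketch below computes per-grain area and equivalent circle diameter from a labeled grain map using numpy. The labeling convention (0 = grain boundary) and the pixel size are assumptions for illustration, not details of the authors' pipeline.

```python
import numpy as np

def grain_statistics(label_map, pixel_size_um=1.0):
    """Per-grain size metrics from a labeled segmentation map.

    Assumes each grain carries a unique positive integer label and
    0 marks grain-boundary pixels (an illustrative convention).
    """
    stats = {}
    for label in np.unique(label_map):
        if label == 0:          # skip boundary pixels
            continue
        area_um2 = float(np.sum(label_map == label)) * pixel_size_um ** 2
        # Equivalent circle diameter, a common grain-size descriptor.
        ecd_um = 2.0 * np.sqrt(area_um2 / np.pi)
        stats[int(label)] = {"area_um2": area_um2, "ecd_um": ecd_um}
    return stats

# Toy 4x4 map: two grains separated by a one-pixel boundary column.
toy = np.array([[1, 1, 0, 2],
                [1, 1, 0, 2],
                [1, 1, 0, 2],
                [1, 1, 0, 2]])
stats = grain_statistics(toy)
```

The same per-grain loop extends naturally to shape descriptors (aspect ratio, orientation) once the label map is available.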

10:30 AM  
Using Unsupervised Learning to Identify Small Crack Characteristics and Link to Fatigue Life: Katelyn Jones1; Reji John2; Paul Shade2; William Musinski3; Elizabeth Holm1; Anthony Rollett1; 1Carnegie Mellon University; 2Air Force Research Laboratory; 3University of Wisconsin Milwaukee
    This work seeks to create a dataset of secondary electron images of Ti-6Al-4V fatigue fracture surfaces and apply transfer-learned convolutional neural networks (CNNs) to the dataset to make connections between the fracture surfaces and the fatigue life, with the goal of performing fractography by computer. The images cover the crack initiation site, the short-crack region, steady crack growth, and the instantaneous failure region on multiple samples with varying loading conditions and fatigue lifetimes. Unsupervised learning, consisting of dimensionality reduction and clustering, is used to determine which subset of the data, i.e., which crack growth region or magnification, provides the most physically meaningful information. Additionally, the features of those images deemed most important provide information that can tie the fracture surface to the fatigue life and microstructure. The images taken, the algorithms used, the identified fatigue properties, and the fracture characteristics will be presented.
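
The unsupervised step described here, dimensionality reduction followed by clustering, can be sketched in plain numpy: PCA via SVD, then a minimal k-means on synthetic stand-in features. The feature values and cluster count below are invented for illustration; the actual embeddings in this work come from transfer-learned CNNs.

```python
import numpy as np

def pca_reduce(X, n_components=2):
    """Project feature vectors onto their top principal components."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T

def kmeans(X, k, n_iter=50):
    """Minimal Lloyd's k-means with deterministic, spread-out init."""
    centers = X[np.linspace(0, len(X) - 1, k).astype(int)].copy()
    for _ in range(n_iter):
        dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels

# Two well-separated blobs standing in for CNN image embeddings.
rng = np.random.default_rng(1)
feats = np.vstack([rng.normal(0.0, 0.3, (20, 8)),
                   rng.normal(3.0, 0.3, (20, 8))])
emb = pca_reduce(feats)      # dimensionality reduction
labels = kmeans(emb, k=2)    # clustering
```

In practice the cluster assignments would be inspected against known crack growth regions to judge which subset is most physically meaningful.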

10:50 AM  
Spatiotemporal Feature Extraction Using Deep Learning for Stress Corrosion Cracking in X-ray Computed Tomography Scans of Al-Mg Alloys: Thomas Ciardi1; Pawan Tripathi1; John Lewandowski1; Roger French1; 1Case Western Reserve University
    Spatiotemporal studies of material degradation have rapidly improved with the high-resolution imaging capabilities of X-ray computed tomography (XCT). Materials science, however, lacks the tooling to analyze data at the scale produced. As a result, analysis is limited to manually segmented features and subsets of the data, which is time-consuming and results in large information loss. We propose leveraging computer vision and deep learning to develop automated frameworks for full-scale feature extraction and analysis. Slow strain rate tension tests were conducted with collaborators at the Diamond Light Source on field-retrieved Al-Mg plate material removed after 42 years of service exposure. One sample at 50% RH and one sample in dry air were tested to determine the effects of long-term service on stress corrosion cracking. We developed an automated deep learning pipeline that segments features of interest, quantifies their properties, and builds a complete spatiotemporal microstructural degradation profile of the dataset.
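
The profile-building step can be illustrated by summarizing a time series of binary crack masks, a minimal sketch that assumes segmentation has already been done by an upstream model; the mask convention and units are assumptions, not details of the authors' pipeline.

```python
import numpy as np

def degradation_profile(masks, pixel_size_um=1.0):
    """Crack area and frame-to-frame growth from a (time, H, W) stack.

    Nonzero pixels are segmented crack; the segmentation itself is
    assumed to come from an upstream deep learning model.
    """
    areas = np.array([m.astype(bool).sum() for m in masks],
                     dtype=float) * pixel_size_um ** 2
    growth = np.diff(areas, prepend=areas[0])   # change per frame
    return {"area_um2": areas, "growth_um2": growth}

# Toy sequence: a crack that lengthens by one pixel per frame.
masks = np.zeros((4, 5, 5), dtype=np.uint8)
for i in range(4):
    masks[i, 2, : i + 1] = 1
profile = degradation_profile(masks)
```

A full pipeline would compute many such per-feature quantities (length, orientation, connectivity) over every scan rather than the single area metric shown here.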

11:10 AM  
Vapor Depression Segmentation and Absorptivity Prediction from Synchrotron X-ray Images Using Deep Neural Networks: Runbo Jiang1; John Smith1; Yu-Tsen Yi1; Brian Simonds2; Tao Sun3; Anthony Rollett1; 1Carnegie Mellon University; 2National Institute of Standards and Technology; 3University of Virginia
    The quantification of the amount of absorbed light is essential for understanding laser-material interactions and melt pool dynamics in additive manufacturing processes. The geometry of a vapor depression, also known as a keyhole, formed in the melt pool during laser melting is closely related to laser absorptivity. This relationship has been observed by state-of-the-art in situ high-speed synchrotron X-ray visualization and integrating-sphere radiometry. Together, these two techniques create a temporally resolved dataset consisting of keyhole images and the corresponding laser absorptivity. In this work, we propose two different pipelines to predict laser absorptivity. The end-to-end approach uses deep convolutional neural networks to interpret an unprocessed X-ray image and predict the amount of light absorbed. The two-stage approach uses a fine-tuned image segmentation model to extract geometric features, from which absorptivity is predicted by regression. Though the two approaches have different advantages and limitations, both achieved a mean absolute error (MAE) of less than 6.9%.
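
The two-stage approach can be sketched as geometric descriptors extracted from a binary keyhole mask followed by a least-squares regression from geometry to absorptivity. The particular descriptors (area, depth, width, aspect ratio) and the toy linear relation below are illustrative assumptions, not the paper's exact feature set or regression model.

```python
import numpy as np

def keyhole_features(mask):
    """Simple geometric descriptors of a binary keyhole mask."""
    ys, xs = np.nonzero(mask)
    depth = float(ys.max() - ys.min() + 1)   # vertical extent (rows)
    width = float(xs.max() - xs.min() + 1)   # horizontal extent (cols)
    return np.array([float(len(ys)), depth, width, depth / width])

def fit_absorptivity(features, absorptivity):
    """Least-squares linear map from geometry to absorptivity."""
    A = np.column_stack([features, np.ones(len(features))])
    coef, *_ = np.linalg.lstsq(A, absorptivity, rcond=None)
    return coef

def predict(coef, feats):
    return float(np.append(feats, 1.0) @ coef)

# Toy masks of increasing keyhole depth, with absorptivity that
# (by construction) rises linearly with depth.
masks = []
for d in range(1, 7):
    m = np.zeros((10, 5), dtype=np.uint8)
    m[:d, 1:4] = 1
    masks.append(m)
X = np.array([keyhole_features(m) for m in masks])
y = 0.3 + 0.05 * X[:, 1]
coef = fit_absorptivity(X, y)
```

The end-to-end alternative would replace both functions with a single CNN mapping raw X-ray frames to absorptivity.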

11:30 AM  
Towards Deep Learning of Dislocations from TEM Images: The Problem of “Never Enough Training Data”: Kishan Govind1; Marc Legros2; Stefan Sandfeld1; Daniela Oliveros2; 1Institute for Advanced Simulation; 2CEMES-CNRS
    Transmission electron microscopy (TEM) data, generated in the form of bright-field images of the microstructure, can be used directly to study dislocations in great detail but is limited by the difficulty of identifying and extracting these individual defects. We attempt to automate the study of dislocations in TEM images using a deep learning network, U-Net, which can segment dislocations. Here we present a parametric model for generating synthetic training data for the supervised machine learning task of dislocation segmentation. Experimentation with different synthetic datasets, which vary in dislocation microstructure as well as in the background of the synthetic images, suggests that general synthetic data requiring very little input from real data can give better results. This study shows the importance of synthetic training data in helping to study large, real TEM datasets in great detail.
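
The general idea of a parametric generator for synthetic image/mask training pairs can be sketched as follows: random quadratic curves rasterized as dark lines on a noisy bright background. The curve family, contrast values, and noise level are all illustrative assumptions, not the authors' actual generator.

```python
import numpy as np

def synthetic_dislocation_pair(size=64, n_lines=3, noise=0.1, seed=0):
    """One (image, mask) training pair from a toy parametric model.

    Each 'dislocation' is a random quadratic curve drawn as dark
    pixels on a noisy bright background; both the curve family and
    the contrast values are illustrative assumptions.
    """
    rng = np.random.default_rng(seed)
    image = rng.normal(0.8, noise, (size, size))   # bright background
    mask = np.zeros((size, size), dtype=np.uint8)
    xs = np.arange(size)
    for _ in range(n_lines):
        a = rng.uniform(-0.002, 0.002)
        b = rng.uniform(-1.0, 1.0)
        c = rng.uniform(0.0, size - 1)
        ys = (a * xs ** 2 + b * xs + c).astype(int)
        valid = (ys >= 0) & (ys < size)
        image[ys[valid], xs[valid]] = 0.1          # dark line contrast
        mask[ys[valid], xs[valid]] = 1
    return np.clip(image, 0.0, 1.0), mask

img, mask = synthetic_dislocation_pair()
```

Pairs produced this way would supervise a segmentation network such as U-Net, sidestepping the cost of pixel-level annotation of real TEM images.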