AI for Big Data Problems in Advanced Imaging, Materials Modeling and Automated Synthesis: Artificial Intelligence for Automated Synthesis and Characterization
Sponsored by: TMS: Computational Materials Science and Engineering Committee
Program Organizers: Mathew Cherukara, Argonne National Laboratory; Badri Narayanan, University of Louisville; Subramanian Sankaranarayanan, University of Illinois (Chicago)

Monday 2:00 PM
October 18, 2021
Room: A124
Location: Greater Columbus Convention Center

Session Chair: Devang Bhagat, University of Louisville


2:00 PM  Cancelled
Improving EBM NIR Image Analysis for Component Qualification: A Statistical Learning Approach: Michael Sprayberry1; John Ledford1; Michael Kirka1; 1Oak Ridge National Laboratory
    Additive manufacturing using electron beam melting (EBM) has successfully reduced the manufacturing lead time of complex geometric structures made from materials that are nearly impossible to process with conventional techniques. However, certifying component quality can be challenging. Because material is deposited in successive layers, components can be examined quantitatively and qualitatively without destructive testing; in-situ monitoring, however, is complicated by the unique processing environment associated with EBM metal powder. This work describes a solution to one of the challenges of using near-infrared (NIR) imaging as a component qualification process. Here, correlating in-process backscatter data with the NIR images improves the prediction of anomalies during the manufacturing process. Results are presented on in-situ process monitoring and on how this technique improves mechanical property prediction and the reliability of the process.

2:20 PM  
A Deep Generative Model for Parametric EBSD Pattern Simulation: Zihao Ding1; Marc De Graef1; 1Carnegie Mellon University
    Currently, the mainstream approach for electron backscatter diffraction (EBSD) pattern simulation is a physics-based forward model, which calculates backscattered electron trajectories using Monte Carlo and dynamical scattering simulations and then generates patterns using a gnomonic projection. For each simulation, the result is specific to the given set of parameters. It has been shown that a deep neural network is able to extract features from EBSD patterns and predict various characteristics. We propose a deep generative model that combines a conditional variational autoencoder with a generative adversarial network to realize analytic and parametric EBSD pattern simulation. Compared with the conventional forward model, the deep generative model learns a distribution over multiple parameters. The accuracy and quality of the generated patterns can be assessed with established indexing methods.
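
For illustration only, a minimal sketch of the conditional encoder/decoder core of such a model is given below, written in PyTorch. The pattern size, latent dimension, and number of conditioning parameters are assumptions, and the adversarial component described in the abstract is only indicated in a comment; this is not the authors' implementation.

# Minimal conditional VAE sketch for parametric pattern generation (PyTorch).
# All sizes (60x60 patterns, 5 conditioning parameters, 32-D latent space) are
# illustrative assumptions, not values from the talk.
import torch
import torch.nn as nn

PATTERN = 60 * 60   # flattened EBSD pattern size (assumed)
N_PARAMS = 5        # conditioning parameters, e.g. beam energy, tilt (assumed)
LATENT = 32         # latent dimension (assumed)

class ConditionalVAE(nn.Module):
    def __init__(self):
        super().__init__()
        # Encoder sees the pattern together with its simulation parameters.
        self.encoder = nn.Sequential(
            nn.Linear(PATTERN + N_PARAMS, 512), nn.ReLU(),
            nn.Linear(512, 2 * LATENT),          # mean and log-variance
        )
        # Decoder generates a pattern from a latent code plus parameters.
        self.decoder = nn.Sequential(
            nn.Linear(LATENT + N_PARAMS, 512), nn.ReLU(),
            nn.Linear(512, PATTERN), nn.Sigmoid(),
        )

    def forward(self, pattern, params):
        stats = self.encoder(torch.cat([pattern, params], dim=1))
        mu, logvar = stats.chunk(2, dim=1)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization
        recon = self.decoder(torch.cat([z, params], dim=1))
        return recon, mu, logvar

model = ConditionalVAE()
patterns = torch.rand(8, PATTERN)      # dummy batch of patterns
params = torch.rand(8, N_PARAMS)       # dummy conditioning parameters
recon, mu, logvar = model(patterns, params)
# Training would minimize reconstruction error plus the KL term, with an
# adversarial (GAN) loss added on top, as described in the abstract.
loss = nn.functional.mse_loss(recon, patterns) \
       - 0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())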

2:40 PM  Invited
Now On-Demand Only: Non-iterative Deep Learning for High-fidelity Microscopic Tomography: Singanallur Venkatakrishnan1; Amir Koushyar Ziabari1; Jacob Hinkle1; Michael Kirka1; Jeffrey Warren1; Hassina Bilheux1; Vincent Paquit1; Ryan DeHoff1; 1Oak Ridge National Laboratory
    Microscopic computed tomography (CT) is a ubiquitous technique for 3D non-destructive characterization of samples at the nanometer to micron length scales. However, obtaining high-quality 3D reconstructions can be challenging because of the low signal-to-noise ratio, the sparse angular sampling dictated by realistic experiment times, and the complex physics associated with the measurements. In this talk, we will present the development of non-iterative deep-learning-based reconstruction algorithms for microscopic CT data. We will discuss how different network architectures and sources of limited training data (reference samples, a single step from a time-resolved scan, and computer-aided design models) can be leveraged to obtain high-quality reconstructions from X-ray and neutron CT instruments. The improvements in image quality offered by the deep-learning approach lead to more efficient use of instrument time, enabling faster scans and higher spatio-temporal resolution for time-resolved experiments.
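
As a rough illustration of the non-iterative idea (a fast analytical reconstruction followed by a learned correction), the sketch below uses scikit-image's filtered back-projection and a small residual CNN in PyTorch. The tiny network, the phantom, and the noise model are all assumptions for demonstration, not the authors' method.

# Sketch of a non-iterative deep-learning reconstruction pipeline:
# analytical filtered back-projection followed by a learned residual correction.
# The small CNN and the use of skimage's radon/iradon are illustrative assumptions.
import numpy as np
import torch
import torch.nn as nn
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon, resize

# Simulate a sparse-angle, noisy scan of a test phantom.
phantom = resize(shepp_logan_phantom(), (128, 128))
angles = np.linspace(0.0, 180.0, 45, endpoint=False)   # sparse angular sampling
sinogram = radon(phantom, theta=angles)
sinogram += 0.05 * sinogram.max() * np.random.randn(*sinogram.shape)  # detector noise

# Step 1: fast analytical reconstruction (no iterations).
fbp = iradon(sinogram, theta=angles)

# Step 2: a small residual CNN maps the artifact-laden FBP image to a cleaner one.
denoiser = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, 3, padding=1),
)
x = torch.from_numpy(fbp).float()[None, None]          # shape (1, 1, H, W)
refined = x + denoiser(x)                              # residual correction
# In practice the denoiser would be trained on pairs of (FBP, reference)
# reconstructions derived from reference samples or CAD models, as in the talk.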

3:00 PM  
Optimizing the Training of Convolutional Neural Networks for Image Segmentation: Benjamin Provencher1; Aly Badran1; Jonathan Kroll1; Mike Marsh2; 1University of Colorado; 2Object Research Systems
    Recent advances in artificial intelligence (AI) have fully automated image segmentation of scientific images for many important materials samples, a task that was often laborious and painstaking with previous methods. However, optimizing AI usage has been elusive because of the numerous parameters associated with training, in particular the question of how much training data is required. We examine training parameters for convolutional neural networks designed to segment x-ray microCT images of three composite samples with 2, 3, and 5 material phases, respectively. Analysis of the Dice performance of models trained in replicate shows that segmentation accuracy increases with the volume of provided ground-truth training data, but that the returns quickly diminish. For all three samples, Dice scores of 0.95 are achieved when the volume of training data exceeds 4 megapixels. Importantly, synthetically augmented training data largely compensates for a shortage of available natural ground-truth training data.
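
For reference, the Dice overlap metric used as the accuracy measure here, together with the kind of simple synthetic augmentation (flips and rotations) that can stretch a limited amount of ground truth, can be sketched as follows. The specific augmentation set and the toy example are assumptions, not the authors' protocol.

# Sketch of the Dice metric and simple synthetic augmentation of a labeled tile.
# The choice of flips/rotations as augmentations is an assumption for illustration.
import numpy as np

def dice_score(pred, truth):
    """Dice coefficient between two binary masks (1.0 = perfect overlap)."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    overlap = np.logical_and(pred, truth).sum()
    return 2.0 * overlap / (pred.sum() + truth.sum())

def augment(image, mask):
    """Yield flipped/rotated copies of an (image, mask) training pair."""
    for k in range(4):                                 # 0, 90, 180, 270 degree rotations
        rot_img, rot_msk = np.rot90(image, k), np.rot90(mask, k)
        yield rot_img, rot_msk
        yield np.fliplr(rot_img), np.fliplr(rot_msk)   # mirrored copy

# Dummy 2-phase example: a noisy disk and its ground-truth mask.
yy, xx = np.mgrid[:64, :64]
truth = ((yy - 32) ** 2 + (xx - 32) ** 2) < 400
image = truth + 0.2 * np.random.randn(64, 64)
pred = image > 0.5                                     # stand-in for a CNN prediction
print(f"Dice = {dice_score(pred, truth):.3f}")
print(f"{sum(1 for _ in augment(image, truth))} augmented pairs from one tile")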

3:20 PM Break

3:40 PM  
Semantic Segmentation of Porosity in In-situ X-ray Tomography Data Using FCNs: Pradyumna Elavarthi1; Arun Bhattacharjee1; Anca Ralescu1; Ashley Paz y Puente1; 1University of Cincinnati
    X-ray tomography is used extensively in materials science for nondestructive detection of phases and porosity in 3D, and in-situ synchrotron tomography makes it possible to track the evolution of porosity in real time. Two different types of pores were observed during in-situ x-ray tomography of pack-titanized Ni wires, but they are difficult to quantify separately because they have the same intensity and varying shapes. Hence, a series of classical computer vision techniques was used to create initial masks for training a deep learning model. A fully convolutional neural network based on the U-Net architecture was designed and trained on the created masks, and various domain-specific data-augmentation techniques were used during training to improve the generalizability of the model. F1 scores of 0.96 and 0.95 were achieved for pore types I and II, respectively.
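
A hedged sketch of the kind of classical pipeline that can bootstrap initial training masks (thresholding followed by simple morphological cleanup, here with scikit-image) is shown below. The particular filters, the size cutoff, and the synthetic test slice are assumptions, not the authors' exact recipe.

# Sketch of bootstrapping segmentation masks with classical computer vision
# (Otsu thresholding + morphological cleanup) before training a U-Net-style FCN.
# The filter choices and the 64-pixel size cutoff are illustrative assumptions.
import numpy as np
from skimage.filters import threshold_otsu, gaussian
from skimage.morphology import remove_small_objects, binary_closing, disk

def initial_pore_mask(slice_2d, min_size=64):
    """Rough pore mask for one tomography slice: pores are darker than the metal."""
    smoothed = gaussian(slice_2d, sigma=1.0)           # suppress noise
    pores = smoothed < threshold_otsu(smoothed)        # dark pixels = candidate pores
    pores = binary_closing(pores, disk(2))             # bridge small gaps
    return remove_small_objects(pores, min_size=min_size)

# Dummy slice: bright matrix with a few dark circular "pores".
rng = np.random.default_rng(0)
slice_2d = 0.8 + 0.05 * rng.standard_normal((128, 128))
yy, xx = np.mgrid[:128, :128]
for cy, cx, r in [(40, 40, 6), (90, 70, 9)]:
    slice_2d[(yy - cy) ** 2 + (xx - cx) ** 2 < r ** 2] = 0.2

mask = initial_pore_mask(slice_2d)
# Masks like this (separated by shape into the two pore types) would then serve
# as training labels for the U-Net, with domain-specific augmentation on top.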

4:00 PM  Cancelled
Machine-learning Based Algorithms for 4D X-ray Microtomographic Analysis: Hamidreza T-Sarraf1; Sridhar Niverty1; Nikhilesh Chawla1; 1Purdue University
    Time-dependent (4D) x-ray tomography is an excellent approach for understanding material behavior. The quality of the x-ray projections improves with x-ray exposure time, and image modalities such as phase contrast and diffraction contrast can be used to highlight different microstructural features. These factors extend the scan time and limit the number of scans in time-evolved tomography experiments. Moreover, image processing and segmentation are extremely time-intensive for 4D tomographic data. Thus, there is a need to establish robust workflows and algorithms that can process time-dependent x-ray datasets accurately and efficiently. In this study, we discuss the utility and efficiency of different deep convolutional neural network (DCNN) architectures and generative adversarial network (GAN) algorithms for quality enhancement and automated segmentation of x-ray tomography datasets obtained with different modalities. These developments demonstrate the ability to drastically reduce x-ray data acquisition times, thereby opening the door to efficient 4D experiments.
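
Since this talk was cancelled, no implementation details are available; a generic sketch of the adversarial setup it describes (a generator that enhances a low-quality slice, trained against a discriminator) could look roughly like the following in PyTorch. The architectures, loss weighting, and paired-data assumption are all illustrative.

# Generic sketch of GAN-based quality enhancement for tomography slices (PyTorch).
# Generator and discriminator sizes, and the loss weighting, are assumptions.
import torch
import torch.nn as nn

generator = nn.Sequential(                 # maps a noisy slice to an enhanced slice
    nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 1, 3, padding=1),
)
discriminator = nn.Sequential(             # scores a slice as real (clean) or fake
    nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
    nn.Flatten(), nn.Linear(32 * 32 * 32, 1),          # for 64x64 input slices
)
opt_g = torch.optim.Adam(generator.parameters(), lr=1e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-4)
bce = nn.BCEWithLogitsLoss()

noisy = torch.rand(4, 1, 64, 64)           # dummy low-exposure slices
clean = torch.rand(4, 1, 64, 64)           # dummy high-quality references

# One adversarial training step (paired setting, for illustration only).
fake = generator(noisy)
d_loss = bce(discriminator(clean), torch.ones(4, 1)) + \
         bce(discriminator(fake.detach()), torch.zeros(4, 1))
opt_d.zero_grad(); d_loss.backward(); opt_d.step()

g_loss = bce(discriminator(fake), torch.ones(4, 1)) + \
         10.0 * nn.functional.l1_loss(fake, clean)    # adversarial + fidelity terms
opt_g.zero_grad(); g_loss.backward(); opt_g.step()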