AI/Data Informatics: Computational Model Development, Validation, and Uncertainty Quantification: Material Design IV
Sponsored by: TMS Materials Processing and Manufacturing Division, TMS: Computational Materials Science and Engineering Committee
Program Organizers: Saurabh Puri, VulcanForms Inc; Dennis Dimiduk, BlueQuartz Software LLC; Darren Pagan, Pennsylvania State University; Anthony Rollett, Carnegie Mellon University; Francesca Tavazza, National Institute of Standards and Technology; Christopher Woodward, Air Force Research Laboratory

Thursday 2:00 PM
March 3, 2022
Room: 256A
Location: Anaheim Convention Center

Session Chair: Jason Gibson, University of Florida


2:00 PM  
Coping with Materials Variance Using Transfer Learning: Ali Riza Durmaz1; Aurèle Goetz1; Martin Müller2; Akhil Thomas1; Pierre Kerfrieden3; 1Fraunhofer IWM; 2Universität des Saarlandes; 3Mines ParisTech (PSL University)
     Materials' microstructures exist in pronounced variety, as they are signatures of alloying composition and processing route. As materials become increasingly intricate and their development is accelerated, deep learning (DL) becomes relevant for automated and objective quantification of microstructure constituents. While DL frequently outperforms classical techniques by a large margin, its shortcomings are poor data efficiency and poor inter-domain generalizability, which conflict with expensive data annotation and materials diversity. To alleviate this issue, we propose applying a sub-class of transfer-learning methods called unsupervised domain adaptation (UDA). This class of learning algorithms addresses the task of finding domain-invariant features when supplied with annotated source data and unannotated target data, such that performance on the latter distribution is optimized despite the absence of annotations. This study addresses different surface etchings and imaging modalities in a complex-phase steel segmentation task. The UDA approach surpasses the generalizability of models trained with conventional supervision by a large margin.
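
The abstract does not name the specific UDA algorithm used; as a minimal illustration of the underlying idea — aligning source and target feature distributions without target labels — here is a sketch of CORAL (correlation alignment), a classical UDA baseline, on synthetic feature vectors standing in for micrograph features (the data and dimensions are illustrative assumptions, not the authors' setup):

```python
import numpy as np

def _mat_pow(m, p):
    """Symmetric matrix power via eigendecomposition."""
    w, v = np.linalg.eigh(m)
    w = np.clip(w, 1e-12, None)
    return (v * w ** p) @ v.T

def coral_align(source, target, eps=1e-8):
    """Re-color source features so their covariance matches the target's."""
    d = source.shape[1]
    cs = np.cov(source, rowvar=False) + eps * np.eye(d)  # source covariance
    ct = np.cov(target, rowvar=False) + eps * np.eye(d)  # target covariance
    # whiten with the source statistics, then re-color with the target's
    return source @ _mat_pow(cs, -0.5) @ _mat_pow(ct, 0.5)

rng = np.random.default_rng(0)
# annotated "source" domain features vs. unannotated "target" domain features
src = rng.normal(size=(500, 4)) @ rng.normal(size=(4, 4))
tgt = rng.normal(size=(500, 4)) @ rng.normal(size=(4, 4)) + 1.0
aligned = coral_align(src, tgt)
```

After alignment, a classifier trained on the aligned source features sees second-order statistics matching the unannotated target domain.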

2:20 PM  
Comparison of Human, Machine Learning, and Common Optimization Approaches on Grain Boundary Networks: Christopher Adair1; Oliver Johnson1; Emily Beatty1; Hayley Evans1; Seth Holladay1; Derek Hansen1; 1Brigham Young University
    Among the many high-dimensional problems in materials models, the macroscopic properties of mesoscopic grain boundary defect networks present a difficult model to optimize with current methods. Copious local minima lower the reliability of common numerical techniques, and global methods sacrifice efficiency to recover that reliability. Machine learning applications are appealing in this case but need either long unsupervised training times or a training set of gold-standard data. We have created and utilized a video game to obtain both insights into human intuition about the optimization and training sets for neural networks. In this presentation we compare the performance of common global optimization methods, direct human input, and a human-trained neural network on a grain boundary network optimization problem.
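
As a toy stand-in for the rugged landscape described above, the sketch below implements two common global optimization baselines (random search and simulated annealing) on the Rastrigin function, a standard many-local-minima benchmark; the objective and settings are illustrative assumptions, not the authors' actual grain-boundary-network property model:

```python
import math
import random

def rastrigin(x):
    """Classic many-local-minima benchmark, standing in for a GB-network objective."""
    return 10 * len(x) + sum(xi * xi - 10 * math.cos(2 * math.pi * xi) for xi in x)

def random_search(f, dim, iters, seed=0):
    rng = random.Random(seed)
    best = (math.inf, None)
    for _ in range(iters):
        x = [rng.uniform(-5.12, 5.12) for _ in range(dim)]
        fx = f(x)
        if fx < best[0]:
            best = (fx, x)
    return best

def simulated_annealing(f, dim, iters, seed=0):
    rng = random.Random(seed)
    x = [rng.uniform(-5.12, 5.12) for _ in range(dim)]
    fx = f(x)
    best = (fx, x[:])
    for k in range(iters):
        t = 5.0 * max(1e-3, 1.0 - k / iters)        # linear cooling schedule
        y = [xi + rng.gauss(0.0, 0.5) for xi in x]  # local perturbation
        fy = f(y)
        # accept downhill moves always, uphill moves with Boltzmann probability
        if fy < fx or rng.random() < math.exp((fx - fy) / t):
            x, fx = y, fy
            if fx < best[0]:
                best = (fx, x[:])
    return best
```

Both return a (value, point) pair for the same evaluation budget; the relative performance of such methods against human and human-trained solvers on the real objective is the subject of the talk's comparison.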

2:40 PM  
Design of a Scalable Interatomic Potential for GST+C Device Modeling: Zachary McClure1; Alejandro Strachan1; Robert Appleton1; David Adams2; 1Purdue University; 2Sandia National Laboratories
    As the complexity of our data structures advances, so too must the hardware and material structure advance to accommodate change. Optimization of GST-based phase-change memory (PCM) devices has been studied extensively, but much of the underlying physics of switching behavior, dopant effects on power density, and microstructural evolution of grain boundaries remains uncharacterized. Ab initio studies have successfully characterized the stability of ground-state and metastable structures but are limited by the time and length scales needed for switching simulations. To appropriately scale, we design a workflow to generate first-principles structure information for GST and GST+C and use the trajectory and energy data as training data for a neural network interatomic potential. Scaling ab initio data to neural network molecular dynamics (NNMD) allows us to bridge the gap in time and length scales, achieving ab initio-level accuracy at molecular dynamics scales.
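
The training step of such a workflow — regressing energies from structural descriptors with a small neural network — can be sketched as follows; the descriptors, network size, and toy data are illustrative assumptions, not the authors' actual GST dataset or architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for first-principles training data: each row is a structural
# descriptor vector (e.g. symmetry functions); y is the corresponding energy.
X = rng.normal(size=(200, 8))
w_true = rng.normal(size=8)
y = np.tanh(X @ w_true) + 0.01 * rng.normal(size=200)

# One-hidden-layer regression network trained by full-batch gradient descent.
W1 = rng.normal(scale=0.5, size=(8, 16)); b1 = np.zeros(16)
W2 = rng.normal(scale=0.5, size=16);      b2 = 0.0
lr = 0.02

for _ in range(3000):
    h = np.tanh(X @ W1 + b1)           # hidden activations
    pred = h @ W2 + b2                 # predicted energies
    err = pred - y
    gW2 = h.T @ err / len(y)
    gb2 = err.mean()
    gh = np.outer(err, W2) * (1.0 - h ** 2)   # backprop through tanh
    gW1 = X.T @ gh / len(y)
    gb1 = gh.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

mse = float(np.mean((np.tanh(X @ W1 + b1) @ W2 + b2 - y) ** 2))
```

Once trained, such a potential evaluates in microseconds per configuration, which is what lets NNMD reach the time and length scales the abstract targets.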

3:00 PM  
Combined Clustering and Regression for Predicting Melting Temperatures of Solids: Vahe Gharakhanyan1; José Garrido Torres1; Ethan Eisenberg1; Snigdhansu Chatterjee2; Dallas Trinkle3; Alexander Urban1; 1Columbia University; 2University of Minnesota; 3University of Illinois at Urbana-Champaign
     Melting temperature is important for materials design because it determines the temperature stability of solids. The use of empirical and computational melting-point estimation techniques is limited by scope and computational feasibility, respectively. Machine learning (ML) has previously been used to predict melting temperatures for a small number of binary compounds and certain material classes. Using a database of melting points of 600 crystalline binary compounds, with compound features constructed from elemental properties and zero-Kelvin DFT calculations as model input, we first evaluated a direct supervised-learning strategy for melting-temperature prediction. We find that the fidelity of predictions can be further improved by introducing an additional unsupervised-learning step that classifies the materials before melting-point regression. Not only does this two-step model exhibit improved accuracy, but the approach also provides additional insights into different types of melting that depend on the unique atomic interactions inside a material.
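
The two-step idea — unsupervised clustering followed by per-cluster melting-point regression — can be sketched on synthetic data with two hidden "melting types"; the single feature, cluster count, and linear laws are illustrative assumptions, not the authors' descriptors or models:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic data: one feature x, and a different linear melting law per type.
x = rng.uniform(0.0, 1.0, size=(300, 1))
hidden = x[:, 0] > 0.5
Tm = np.where(hidden, 2000.0 - 600.0 * x[:, 0], 800.0 + 400.0 * x[:, 0])
Tm = Tm + 20.0 * rng.normal(size=300)   # measurement noise

def linfit(X, y):
    """Least-squares linear model with intercept; returns a predictor."""
    A = np.c_[X, np.ones(len(X))]
    w, *_ = np.linalg.lstsq(A, y, rcond=None)
    return lambda Z: np.c_[Z, np.ones(len(Z))] @ w

# One-step baseline: a single global regression over all materials.
f_global = linfit(x, Tm)
mse_global = float(np.mean((f_global(x) - Tm) ** 2))

# Step 1: unsupervised clustering (2-means on the feature).
centers = np.array([0.2, 0.8])
for _ in range(20):
    labels = np.argmin(np.abs(x - centers), axis=1)
    centers = np.array([x[labels == k, 0].mean() for k in (0, 1)])

# Step 2: a separate regression inside each cluster.
sse = 0.0
for k in (0, 1):
    fk = linfit(x[labels == k], Tm[labels == k])
    sse += float(np.sum((fk(x[labels == k]) - Tm[labels == k]) ** 2))
mse_two_step = sse / len(Tm)
```

When the clusters recover the hidden types, the per-cluster fits reduce the error dramatically relative to the single global model, mirroring the accuracy gain the abstract reports.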

3:20 PM Break

3:40 PM  
NOW ON-DEMAND ONLY - Ultra-fast and Interpretable Machine-learning Potentials with Application to Structure Prediction: Stephen Xie1; Matthias Rupp2; Richard Hennig1; 1University of Florida; 2University of Konstanz
     Although ab initio methods are indispensable tools for predicting properties of materials and simulating chemical processes, the tradeoff between computational efficiency and predictive accuracy limits their application to large systems and long simulation times. To address this challenge, we combine effective two- and three-body potentials in a cubic B-spline basis with regularized linear regression to obtain machine-learning potentials that are as fast as traditional empirical potentials, sufficiently accurate for applications, and physically interpretable. We benchmark using a bcc tungsten dataset, including melting-point calculations with thousands of atoms. Finally, we discuss applying the introduced potentials to accelerate structure prediction. By coupling the Genetic Algorithm for Structure Prediction (GASP) to our machine-learning approach, we train a potential on the fly using configurations from the structure search. As the potential learns the energy landscape, we use it as a surrogate model to filter candidate structures, reducing the number of ab initio calculations required.

4:00 PM  
Mining Structure-property Linkage in Nanoporous Materials Using an Interpretative Deep Learning Approach: Haomin Liu1; Niaz Abdolrahim1; 1University of Rochester
    In this study, a 3-D convolutional neural network (CNN) is designed and applied to simulated nanoporous (NP) metallic materials to investigate their structure-property relationship. We demonstrate that our approach predicts the effective stiffness of NP structures across a wide range of microstructures with high accuracy and low computational cost. We also developed a unique interpretation method, based on our well-trained CNN model, that provides meaningful insights into the structure-property linkage of nanoporous materials. Using this method, it is revealed that the CNN identifies relative density and surface curvature as the two most important features that strongly impact stiffness. While the effect of relative density is already known from previous theoretical models, verifying the predictive ability of the CNN model, the interpretation method also suggests that anomalously low stiffness could be related to saddle-shaped surfaces of the NP structure.
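
The relative-density feature the CNN identifies is straightforward to compute from a voxelized structure; the sketch below pairs it with a Gibson-Ashby-type quadratic scaling, a standard model for open-cell porous solids that is assumed here for illustration and is not the authors' CNN:

```python
import numpy as np

rng = np.random.default_rng(3)

def relative_density(voxels):
    """Fraction of solid voxels in a binary 3-D structure."""
    return float(voxels.mean())

# Toy voxelized NP structures spanning a range of solid fractions
targets = np.linspace(0.2, 0.6, 9)
structures = [rng.random((16, 16, 16)) < p for p in targets]
rho = np.array([relative_density(s) for s in structures])

# Gibson-Ashby-type scaling for open-cell porous solids: E/E_s ~ C * rho^2
E_rel = rho ** 2
```

Such closed-form scaling captures the dominant density effect; the abstract's point is that the interpreted CNN additionally flags surface curvature, which this simple model cannot see.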