Thursday 2:00 PM

March 3, 2022

Room: 256A

Location: Anaheim Convention Center

Materials' microstructures exist in pronounced variety, as they are signatures of alloying composition and processing route. As materials become increasingly intricate and their development is accelerated, deep learning (DL) becomes relevant for automated and objective quantification of microstructure constituents. While DL frequently outperforms classical techniques by a large margin, its shortcomings are poor data efficiency and limited inter-domain generalizability, which conflict with expensive data annotation and the diversity of materials. To alleviate this issue, we propose to apply a sub-class of transfer-learning methods called unsupervised domain adaptation (UDA). This class of learning algorithms addresses the task of finding domain-invariant features when supplied with annotated source data and unannotated target data, such that performance on the latter distribution is optimized despite the absence of annotations. This study addresses different surface etchings and imaging modalities in a complex-phase steel segmentation task. The UDA approach surpasses the generalizability of conventionally supervised models to a large extent.
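The abstract does not specify which UDA algorithm is used; as a minimal illustration of the domain-invariant-feature idea, the sketch below implements CORAL-style correlation alignment, one simple UDA technique: source features are re-colored so their second-order statistics match the unannotated target domain. All data here are synthetic stand-ins for per-pixel segmentation features.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for feature vectors extracted from two imaging domains.
# The target domain is a rescaled, shifted version of the source domain.
Xs = rng.normal(size=(500, 8))                       # annotated source features
Xt = Xs @ np.diag(rng.uniform(0.5, 2.0, 8)) + 1.0    # unannotated target features

def coral_align(Xs, Xt, eps=1e-3):
    """Whiten the source covariance, then re-color with the target covariance."""
    Cs = np.cov(Xs, rowvar=False) + eps * np.eye(Xs.shape[1])
    Ct = np.cov(Xt, rowvar=False) + eps * np.eye(Xt.shape[1])

    def mat_pow(C, p):
        # Matrix power of a symmetric positive-definite matrix via eigh.
        w, V = np.linalg.eigh(C)
        return V @ np.diag(w ** p) @ V.T

    Xc = Xs - Xs.mean(axis=0)
    return Xc @ mat_pow(Cs, -0.5) @ mat_pow(Ct, 0.5) + Xt.mean(axis=0)

Xs_aligned = coral_align(Xs, Xt)

# After alignment, the source covariance should closely match the target's.
gap_before = np.linalg.norm(np.cov(Xs, rowvar=False) - np.cov(Xt, rowvar=False))
gap_after = np.linalg.norm(np.cov(Xs_aligned, rowvar=False) - np.cov(Xt, rowvar=False))
```

A segmentation model trained on the aligned source features then sees a feature distribution much closer to the target domain, which is the mechanism UDA exploits.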

Of the many high-dimensional problems in materials modeling, the macroscopic properties of mesoscopic grain-boundary defect networks present a model that is difficult to optimize with current methods. Copious local minima lower the reliability of common numerical techniques, and global methods sacrifice efficiency to recover that reliability. Machine-learning approaches are appealing in this case but need either long unsupervised training times or a training set of gold-standard data. We have created and deployed a video game to obtain both insight into human intuition about the optimization and training sets for neural networks. In this presentation we compare the performance of common global optimization methods, direct human input, and a human-trained neural network on a grain boundary network optimization problem.
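The local-minima problem the abstract describes can be sketched on a toy landscape; the code below (not the authors' benchmark, and the energy function is invented for illustration) contrasts a single local descent, which gets trapped, with a multi-start global search, which pays for reliability with many more function evaluations.

```python
import numpy as np

rng = np.random.default_rng(1)

def energy(x):
    """Toy rugged landscape: many local minima, global minimum near x = -0.3."""
    return 0.1 * x**2 + np.sin(5.0 * x)

def local_descent(x, step=0.01, iters=2000):
    """Naive gradient descent with a finite-difference gradient."""
    for _ in range(iters):
        g = (energy(x + 1e-5) - energy(x - 1e-5)) / 2e-5
        x -= step * g
    return x

# A single local run from a poor starting point gets stuck in a high basin.
x_local = local_descent(6.0)

# A multi-start "global" strategy: many local runs, keep the best result.
starts = rng.uniform(-8, 8, size=30)
x_global = min((local_descent(s) for s in starts), key=energy)
```

The multi-start search needs 30x the work of the single run, which is exactly the efficiency-for-reliability trade the abstract notes; human intuition or a human-trained network aims to pick good starting basins without that brute-force cost.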

As the complexity of our data structures advances, so too must the hardware and material structures advance to accommodate the change. Optimization of GST-based phase-change memory (PCM) devices has been studied extensively, but much of the underlying physics of switching behavior, dopant effects on power density, and the microstructural evolution of grain boundaries remains uncharacterized. Ab initio studies have successfully characterized the stability of ground-state and metastable structures, but they are limited by the time and length scales needed for switching simulations. To scale appropriately, we design a workflow that generates first-principles structure information for GST and GST+C and uses the trajectory and energy data to train a neural-network interatomic potential. Scaling ab initio data to neural-network molecular dynamics (NNMD) allows us to bridge the gap in time and length scales, achieving ab initio accuracy at molecular-dynamics scales.
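The core of the workflow, fitting a fast surrogate to expensive reference energies, can be sketched in miniature. The example below is not the authors' neural-network potential: it substitutes a linear fit on inverse-power pair descriptors, with a Lennard-Jones form standing in for the ab initio reference, purely to show the "reference data in, surrogate potential out" training step.

```python
import numpy as np

rng = np.random.default_rng(2)

# Stand-in "ab initio" reference: pair energies from a Lennard-Jones form.
def reference_energy(r):
    return 4.0 * ((1.0 / r) ** 12 - (1.0 / r) ** 6)

# Training data: pair distances as if sampled from first-principles trajectories.
r_train = rng.uniform(0.9, 2.5, size=200)
E_train = reference_energy(r_train)

# Descriptor basis: inverse powers of the distance (a crude pair descriptor).
def features(r):
    return np.column_stack([r ** -p for p in range(1, 13)])

# "Train the potential": least squares on the descriptor basis.
coef, *_ = np.linalg.lstsq(features(r_train), E_train, rcond=None)

# The surrogate now evaluates at negligible cost on unseen geometries.
r_test = np.linspace(1.0, 2.0, 50)
E_pred = features(r_test) @ coef
rmse = np.sqrt(np.mean((E_pred - reference_energy(r_test)) ** 2))
```

In the real workflow the descriptors are many-body, the model is a neural network, and the reference is DFT, but the data flow is the same: trajectory snapshots and energies become training pairs for a potential that then runs at MD scales.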

Melting temperature is important for materials design because it determines the temperature stability of solids. The use of empirical and computational melting-point estimation techniques is limited by scope and computational feasibility, respectively. Machine learning (ML) has previously been used to predict melting temperatures for a small number of binary compounds and certain material classes. Using a database of melting points for 600 crystalline binary compounds, with compound features constructed from elemental properties and zero-Kelvin DFT calculations as model input, we first evaluated a direct supervised-learning strategy for melting-temperature prediction. We find that the fidelity of the predictions can be further improved by introducing an additional unsupervised-learning step that classifies the materials before the melting-point regression. Not only does this two-step model exhibit improved accuracy, but the approach also provides additional insight into different types of melting that depend on the unique atomic interactions inside a material.
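The two-step idea, cluster first without labels, then regress within each cluster, can be shown on synthetic data. The sketch below is not the authors' model: it uses invented 2-D features, a hand-rolled 2-means clustering, and linear regression, but it reproduces the claimed effect, i.e. per-cluster models beat a single global model when distinct "melting types" follow different feature-to-temperature mappings.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy features: stand-ins for elemental/DFT-derived compound descriptors,
# forming two blobs that correspond to two hidden "melting mechanisms".
group = rng.integers(0, 2, 600)
X = rng.normal(0.0, 0.3, size=(600, 2))
X[:, 0] += np.where(group == 0, -1.0, 1.0)

# The feature-to-Tm mapping differs by mechanism (plus measurement noise).
Tm = np.where(group == 0, 800 + 900 * X[:, 1], 2600 - 700 * X[:, 1])
Tm = Tm + rng.normal(0, 20, size=600)

def fit_linear(X, y):
    A = np.column_stack([X, np.ones(len(X))])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return lambda Xq: np.column_stack([Xq, np.ones(len(Xq))]) @ coef

# One-step: a single supervised regression over all compounds.
err_one = np.sqrt(np.mean((fit_linear(X, Tm)(X) - Tm) ** 2))

# Two-step: unsupervised 2-means clustering, then one regression per cluster.
centers = np.array([X[0], X[np.argmax(((X - X[0]) ** 2).sum(axis=1))]])
for _ in range(20):
    labels = np.argmin(((X[:, None, :] - centers) ** 2).sum(axis=-1), axis=1)
    for k in range(2):
        pts = X[labels == k]
        if len(pts):
            centers[k] = pts.mean(axis=0)

pred = np.empty(len(X))
for k in range(2):
    mask = labels == k
    pred[mask] = fit_linear(X[mask], Tm[mask])(X[mask])
err_two = np.sqrt(np.mean((pred - Tm) ** 2))
```

The cluster assignments are also interpretable by themselves, which mirrors the abstract's point that the unsupervised step exposes different types of melting rather than just lowering the error.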

Although ab initio methods are indispensable tools for predicting properties of materials and simulating chemical processes, the tradeoff between computational efficiency and predictive accuracy limits their application to large systems and long simulation times. To address this challenge, we combine effective two- and three-body potentials in a cubic B-spline basis with regularized linear regression to obtain machine-learning potentials that are as fast as traditional empirical potentials, sufficiently accurate for applications, and physically interpretable. We benchmark using a bcc tungsten dataset, including melting-point calculations with thousands of atoms. Finally, we discuss applying these potentials to accelerate structure prediction. By coupling the Genetic Algorithm for Structure Prediction (GASP) to our machine-learning approach, we train a potential on the fly using configurations from the structure search. As the potential learns the energy landscape, we use it as a surrogate model to filter candidate structures, reducing the number of ab initio calculations required.
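The two-body part of this construction can be sketched directly: expand a pair energy in a cubic B-spline basis and fit the coefficients by ridge (regularized linear) regression. The code below is a simplified illustration, not the authors' implementation; the Morse-like reference function and all parameter values are invented stand-ins for the tungsten reference data, and the three-body terms are omitted.

```python
import numpy as np

rng = np.random.default_rng(4)

def bspline_basis(x, knots, degree=3):
    """Evaluate all clamped B-spline basis functions at x (Cox-de Boor)."""
    t = np.concatenate([np.full(degree, knots[0]), knots, np.full(degree, knots[-1])])
    # Degree 0: indicator functions of the knot spans.
    B = ((t[:-1] <= x[:, None]) & (x[:, None] < t[1:])).astype(float)
    for d in range(1, degree + 1):
        Bn = np.zeros((len(x), B.shape[1] - 1))
        for i in range(Bn.shape[1]):
            d1 = t[i + d] - t[i]
            d2 = t[i + d + 1] - t[i + 1]
            if d1 > 0:
                Bn[:, i] += (x - t[i]) / d1 * B[:, i]
            if d2 > 0:
                Bn[:, i] += (t[i + d + 1] - x) / d2 * B[:, i + 1]
        B = Bn
    return B

# Stand-in reference pair energy (Morse-like), playing the role of DFT data.
def pair_energy(r):
    return (1.0 - np.exp(-3.0 * (r - 1.2))) ** 2 - 1.0

r_train = rng.uniform(0.9, 2.5, size=400)
knots = np.linspace(0.9, 2.5, 12)
Phi = bspline_basis(r_train, knots)

# Ridge regression: linear in the coefficients, so fitting is a linear solve.
lam = 1e-6
coef = np.linalg.solve(Phi.T @ Phi + lam * np.eye(Phi.shape[1]),
                       Phi.T @ pair_energy(r_train))

r_test = np.linspace(1.0, 2.4, 80)
E_pred = bspline_basis(r_test, knots) @ coef
rmse = np.sqrt(np.mean((E_pred - pair_energy(r_test)) ** 2))
```

Because each coefficient scales one local, compactly supported spline bump, the fitted curve can be read off directly, which is the interpretability advantage claimed over black-box potentials, and evaluation is as cheap as a tabulated empirical potential.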

In this study, a 3-D convolutional neural network (CNN) is designed and applied to simulated nanoporous (NP) metallic materials to investigate their structure-property relationships. We demonstrate that our approach can predict the effective stiffness of NP structures over a wide range of microstructures with high accuracy and low computational cost. We also developed a unique interpretation method that provides meaningful insight into the structure-property linkage of nanoporous materials learned by the trained CNN model. Using this method, we find that the CNN identifies relative density and surface curvature as the two features that most strongly impact stiffness. While the effect of relative density is already known from previous theoretical models, verifying the predictive ability of the CNN model, the interpretation method also suggests that anomalously low stiffness may be related to saddle-shaped surfaces in the NP structure.
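The building block of such a model, a 3-D convolution sliding over a voxelized microstructure, can be shown in a few lines. The sketch below is not the authors' network: it uses a random binary voxel grid as a stand-in structure and a fixed averaging kernel instead of learned filters, but it illustrates how a 3-D filter turns a structure into a feature map, here one responding to local solid fraction, i.e. local relative density.

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy voxelized nanoporous structure: 1 = solid, 0 = pore.
voxels = (rng.random((12, 12, 12)) > 0.5).astype(float)

# Relative density: the feature the CNN reportedly ranks as most important.
rel_density = voxels.mean()

def conv3d(vol, kernel):
    """Minimal 'valid' 3-D convolution (single channel, stride 1)."""
    k = kernel.shape[0]
    out = np.zeros(tuple(s - k + 1 for s in vol.shape))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            for l in range(out.shape[2]):
                out[i, j, l] = np.sum(vol[i:i + k, j:j + k, l:l + k] * kernel)
    return out

# A 3x3x3 averaging kernel: the resulting feature map is the local solid
# fraction at each position, a hand-built stand-in for a learned filter.
kernel = np.full((3, 3, 3), 1.0 / 27.0)
fmap = conv3d(voxels, kernel)
```

In a trained CNN, stacks of such filters (with learned weights, nonlinearities, and pooling) feed a regression head that outputs the effective stiffness; interpretation methods then ask which input patterns, such as density or saddle-shaped surfaces, most move that output.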