Tuesday 8:00 AM

March 1, 2022

Room: 255C

Location: Anaheim Convention Center

We report large anharmonicity of optical phonons in cuprite (Cu₂O).

A description of the potential energy surface as a function of atomic configuration is required for atomistic simulations of materials. Descriptions based on density functional theory (DFT) are accurate but computationally expensive. Machine-learning interatomic potentials, which emerged in the past decade, can now achieve DFT-like accuracy at a much lower computational cost. We present a framework that uses Gaussian process regression to predict the energy of a crystalline solid when its atoms are displaced from their equilibrium positions, based on how similar the atomic configurations are to one another; similarity is computed with the marginalized graph kernel. The prediction error is small enough to extract the finite-temperature lattice dynamics from as few as 300 training configurations, which are selected via active learning. Forces and force constants are obtained directly from the potential by automatic differentiation. We demonstrate the framework by computing the high-temperature lattice dynamics of bcc Zr.
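The regression-and-differentiation idea can be sketched in a few lines. The toy below fits a Gaussian process to energies of 1D "configurations" drawn from an anharmonic (quadratic-plus-quartic) potential and evaluates forces from the closed-form kernel gradient, which plays the role that automatic differentiation plays in the actual framework; the squared-exponential kernel is a simple stand-in for the marginalized graph kernel, and all data and parameters are illustrative.

```python
import numpy as np

def rbf_kernel(X1, X2, ell):
    # Squared-exponential kernel between flattened configuration vectors;
    # a simple stand-in for the marginalized graph kernel.
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ell**2)

# Toy training set: 1D displacements on an anharmonic (quadratic + quartic)
# potential energy surface.
rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(30, 1))
y = 0.5 * X[:, 0]**2 + 0.1 * X[:, 0]**4

ell, jitter = 0.5, 1e-6
K = rbf_kernel(X, X, ell) + jitter * np.eye(len(X))
alpha = np.linalg.solve(K, y)

def energy(x):
    # GP posterior-mean energy at configuration x.
    k = rbf_kernel(x[None, :], X, ell)[0]
    return k @ alpha

def force(x):
    # Force = -dE/dx. The kernel gradient is known in closed form here,
    # standing in for the automatic differentiation used in the framework.
    k = rbf_kernel(x[None, :], X, ell)[0]
    dk = -(x[None, :] - X) / ell**2 * k[:, None]  # d k(x, X_i) / dx
    return -(dk.T @ alpha)
```

In the real framework the kernel acts on graphs rather than coordinate vectors, but the structure is the same: one linear solve at training time, then energies and their derivatives from kernel evaluations.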

Monte Carlo methods are robust computational techniques for studying the thermodynamics of matter. When combined with high-fidelity first-principles calculations such as density functional theory, they can yield finite-temperature materials properties, such as phase transitions and stabilities, with practical accuracy. In practice, however, this combination is computationally inefficient for general applications because a large fraction of Monte Carlo proposals is rejected. To make full use of the information obtained from first-principles calculations, we instead construct the Hamiltonian of a physical system by training accurate neural-network surrogate models on first-principles data. The resulting neural-network model Hamiltonians allow us to perform Monte Carlo simulations far more efficiently while retaining high accuracy in the thermodynamics. This work combines high-performance computing and machine-learning techniques to accelerate physics simulations, demonstrated through applications to selected spin and alloy systems.
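As a minimal illustration of Monte Carlo sampling driven by a surrogate Hamiltonian, the sketch below runs Metropolis updates on a 1D Ising chain; the hand-written energy function stands in for a trained neural network, and all parameters are illustrative rather than taken from the work described.

```python
import numpy as np

rng = np.random.default_rng(1)

def surrogate_energy(spins, J=1.0):
    # Stand-in for a trained neural-network Hamiltonian: here, a 1D
    # nearest-neighbour Ising energy with periodic boundaries.
    return -J * float(np.sum(spins * np.roll(spins, 1)))

def metropolis(spins, beta, steps):
    E = surrogate_energy(spins)
    for _ in range(steps):
        i = rng.integers(len(spins))
        spins[i] *= -1                       # propose a single spin flip
        E_new = surrogate_energy(spins)
        dE = E_new - E
        if dE <= 0 or rng.random() < np.exp(-beta * dE):
            E = E_new                        # accept
        else:
            spins[i] *= -1                   # reject: restore the spin
    return spins, E

spins = rng.choice([-1, 1], size=64)
spins, E = metropolis(spins, beta=2.0, steps=5000)
```

The efficiency gain in the actual work comes from replacing this cheap toy energy with a neural network that is orders of magnitude faster to evaluate than the first-principles calculation it was trained on.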

Graph neural networks (GNNs) have been shown to substantially outperform descriptor-based machine-learning models for representing and modeling atomistic materials. Most existing GNN models for atomistic predictions are based on interatomic distances and do not explicitly incorporate bond angles, which are critical for distinguishing many atomic structures; moreover, many material properties are known to be sensitive to slight changes in bond angle. We develop the Atomistic Line Graph Neural Network (ALIGNN), a GNN architecture that performs message passing on both the interatomic bond graph and its line graph, whose edges correspond to bond angles. We demonstrate that angle information can be included explicitly and efficiently, yielding markedly improved performance. We train 44 models to predict solid-state material properties available in the JARVIS-DFT and Materials Project databases. ALIGNN outperforms some previously reported GNN models by up to 43.8%.
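The bond-graph/line-graph relationship at the heart of this architecture can be shown in a few lines of plain Python (an illustrative sketch, not the ALIGNN implementation): nodes of the line graph are the bonds of the original graph, and each line-graph edge joins two bonds that share an atom, i.e., it corresponds to one bond angle.

```python
def line_graph(bonds):
    # Line-graph nodes are the bonds of the original bond graph; two bonds
    # are joined by a line-graph edge when they share an atom, so each
    # line-graph edge corresponds to one bond angle.
    edges = []
    for i in range(len(bonds)):
        for j in range(i + 1, len(bonds)):
            if set(bonds[i]) & set(bonds[j]):
                edges.append((bonds[i], bonds[j]))
    return edges

# A water-like triatomic molecule: atom 0 (O) bonded to atoms 1 and 2 (H).
bonds = [(0, 1), (0, 2)]
angles = line_graph(bonds)
```

Message passing on this second graph is what lets the network update angle features alongside the distance features carried by the bond graph.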

Adaptive learning methods, such as active learning and Bayesian optimization, are becoming more common in experimental and computational materials science research. These methods provide a rational means to efficiently navigate the vast search space, which is otherwise difficult to survey using brute-force approaches. One of the expected outcomes from an adaptive learning iterative loop is an improved black-box machine learning or surrogate model that is believed to capture the complexity of the structure-property relationships with sufficient accuracy. More recently, our group has been exploring novel post hoc model interpretability methods to peek inside the trained black-box models and explain the predictions for each observation in the training data. In this talk, I will focus on specific examples that showcase the value of integrating model interpretability methods into the machine learning framework to provide explanations by example.
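A generic adaptive-learning loop of the kind described above can be sketched as follows, assuming a simple Gaussian process surrogate and a maximum-uncertainty acquisition rule; the target function, pool, and all parameters are hypothetical stand-ins for a real structure-property search space.

```python
import numpy as np

rng = np.random.default_rng(2)

def target(x):
    # Hypothetical structure-property relationship the loop tries to learn.
    return np.sin(3.0 * x) + 0.5 * x

def gp_posterior(Xt, yt, Xq, ell=0.3, noise=1e-6):
    # Gaussian process posterior mean and variance with an RBF kernel.
    def k(A, B):
        return np.exp(-0.5 * (A[:, None] - B[None, :])**2 / ell**2)
    K = k(Xt, Xt) + noise * np.eye(len(Xt))
    Ks = k(Xq, Xt)
    mean = Ks @ np.linalg.solve(K, yt)
    var = 1.0 - np.einsum('ij,ji->i', Ks, np.linalg.solve(K, Ks.T))
    return mean, var

Xpool = np.linspace(-2.0, 2.0, 200)   # candidate "experiments"
Xt = np.array([-1.5, 1.5])            # two seed observations
yt = target(Xt)
for _ in range(10):
    _, var = gp_posterior(Xt, yt, Xpool)
    x_new = Xpool[np.argmax(var)]     # query where the model is least certain
    Xt = np.append(Xt, x_new)
    yt = np.append(yt, target(x_new))
```

Each iteration queries the candidate the surrogate is least certain about, so a handful of evaluations covers the search space far better than the same number of brute-force samples; post hoc interpretability methods would then be applied to the trained surrogate itself.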

A fundamental understanding of the energetics and kinetics of Cr atoms in Fe-Ni-Cr alloys is crucially important because they determine the mechanical and corrosion-resistance performance of Cr-Ni-based austenitic stainless steels. Here we investigate the energy and activation-barrier distributions of Cr atoms in austenitic alloys over a multiplicity of model samples spanning a wide range of chemical (e.g., solid solutions vs. segregated states) and microstructural (e.g., bulk vs. grain boundaries) environments. Aided by an energy-landscape-sampling algorithm and physics-based machine learning, the thermodynamic and kinetic behaviors of Cr atoms are reliably predicted from their local electronegativity and the free volume of their local atomic packing. In general, stability is more sensitive to local electronegativity, while mobility is more responsive to local free volume. We further establish the corresponding predictive maps in this parameter space and discuss insights for the design of austenitic alloys with desired properties.
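The descriptor-to-property mapping described above can be caricatured with synthetic data: fit linear models for a site energy (stability) and an activation barrier (mobility) from two hypothetical local descriptors, with the generating coefficients chosen so that stability tracks electronegativity and mobility tracks free volume, mirroring the reported trend. All data, coefficients, and names here are synthetic and illustrative, not taken from the work.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 200
# Hypothetical local descriptors for each Cr site.
chi = rng.normal(0.0, 1.0, n)   # local electronegativity
vf = rng.normal(0.0, 1.0, n)    # free volume of local atomic packing
# Synthetic targets built so stability is dominated by chi and mobility
# by vf, mirroring the trend reported in the abstract.
E_site = -0.8 * chi + 0.1 * vf + 0.05 * rng.normal(size=n)
E_barrier = 0.1 * chi - 0.7 * vf + 0.05 * rng.normal(size=n)

# Least-squares fits recover which descriptor dominates each property.
X = np.column_stack([chi, vf, np.ones(n)])
coef_site, *_ = np.linalg.lstsq(X, E_site, rcond=None)
coef_barrier, *_ = np.linalg.lstsq(X, E_barrier, rcond=None)
```

The fitted coefficients expose the sensitivity structure directly, which is the essence of the predictive maps: once the dominant descriptor for each property is known, new sites can be screened without sampling their full energy landscape.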