About this Abstract

Meeting: 2022 TMS Annual Meeting & Exhibition
Symposium: Computational Thermodynamics and Kinetics
Presentation Title: Integrating Model Interpretability Methods into Machine Learning with Implications in Materials Discovery
Author(s): Prasanna V. Balachandran
On-Site Speaker (Planned): Prasanna V. Balachandran

Abstract Scope:
Adaptive learning methods, such as active learning and Bayesian optimization, are becoming more common in experimental and computational materials science research. These methods provide a rational means to efficiently navigate vast search spaces that are otherwise difficult to survey using brute-force approaches. One of the expected outcomes of an iterative adaptive learning loop is an improved black-box machine learning (surrogate) model that is believed to capture the complexity of structure-property relationships with sufficient accuracy. More recently, our group has been exploring novel post hoc model interpretability methods to peek inside the trained black-box models and explain the predictions for each observation in the training data. In this talk, I will focus on specific examples that showcase the value of integrating model interpretability methods into the machine learning framework to provide explanations by example. (An illustrative sketch of such a workflow follows this listing.)
Proceedings Inclusion? Planned:
Keywords: Machine Learning, Modeling and Simulation, ICME
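
The abstract describes an iterative adaptive learning loop that yields a trained surrogate model, followed by post hoc interpretability applied to that model. The sketch below is a generic illustration of that workflow, not the speaker's code: it runs one round of Bayesian-optimization-style candidate selection with a Gaussian-process surrogate and expected improvement, then uses scikit-learn's permutation importance as a simple stand-in for the post hoc interpretability step (the abstract does not name a specific method). The dataset and descriptors are hypothetical placeholders.

```python
# Minimal sketch of one adaptive learning round plus a post hoc look at the
# trained surrogate. Data, descriptors, and hyperparameters are hypothetical.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)

# Hypothetical labeled data: 3 material descriptors -> target property.
X_train = rng.uniform(0.0, 1.0, size=(20, 3))
y_train = np.sin(3.0 * X_train[:, 0]) + 0.5 * X_train[:, 1] ** 2

# Surrogate model fitted to the observations gathered so far.
kernel = ConstantKernel(1.0) * RBF(length_scale=0.3)
surrogate = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
surrogate.fit(X_train, y_train)

# Candidate pool standing in for the unexplored search space.
X_pool = rng.uniform(0.0, 1.0, size=(500, 3))
mu, sigma = surrogate.predict(X_pool, return_std=True)

# Expected improvement balances exploiting high predicted values and
# exploring uncertain regions; its maximizer is the next suggested experiment.
best_so_far = y_train.max()
improvement = mu - best_so_far
z = np.where(sigma > 0, improvement / sigma, 0.0)
ei = improvement * norm.cdf(z) + sigma * norm.pdf(z)
ei = np.where(sigma > 0, ei, 0.0)
print("Next candidate to evaluate:", X_pool[np.argmax(ei)])

# Post hoc interpretability stand-in: permutation importance scores how much
# shuffling each descriptor degrades the trained surrogate's predictions.
result = permutation_importance(surrogate, X_train, y_train,
                                n_repeats=20, random_state=0)
for i, score in enumerate(result.importances_mean):
    print(f"descriptor_{i}: permutation importance = {score:.3f}")
```

In practice the suggested candidate would be evaluated by experiment or simulation, appended to the training set, and the loop repeated; per-observation explanation methods (as described in the abstract) would replace the global importance scores shown here.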