About this Abstract

Meeting
2023 TMS Annual Meeting & Exhibition

Symposium
AI/Data Informatics: Computational Model Development, Validation, and Uncertainty Quantification

Presentation Title
What Does a Computer Vision Model Trained to Classify Material Microstructure Images Actually Understand?

Author(s)
Colby Wight, Henry Kvinge, Davis Brown, Keerti Kappagantula

On-Site Speaker (Planned)
Davis Brown

Abstract Scope
Deep learning-based computer vision models are increasingly being incorporated into research pipelines designed to explore and analyze microstructure images. In other domains, state-of-the-art explainability techniques have begun to illuminate the "reasons" behind model predictions. This understanding is critical in high-consequence areas where domain experts need to have confidence in their models. In this work, we apply interpretability methods to classification models for SEM images of AA7075 tubes manufactured by shear assisted processing and extrusion (ShAPE). We explore how these models respond to some of the features (e.g., grain size distribution, precipitate morphology, void topology) that are used when analyzing microstructure images in a typical research process flow. Through this effort, we gain insight into which features the models are sensitive to and identify new features used by the classification models. For example, feature visualization applied to temper classification models reveals that they behave in peculiar ways only somewhat aligned with human intuition.
Proceedings Inclusion?
Planned:

Keywords
Machine Learning, Characterization