About this Abstract
Meeting |
MS&T25: Materials Science & Technology
Symposium |
Materials Informatics for Images and Multi-Dimensional Datasets
Presentation Title |
Microstructure representation with foundational vision models for efficient learning of microstructure–property relationships
Author(s) |
Sheila Whitman, Marat I. Latypov |
On-Site Speaker (Planned) |
Marat I. Latypov |
Abstract Scope |
Many materials informatics efforts at the mesoscale focus on developing task-specific models for individual material classes and their individual properties. In this work, we demonstrate the use of foundational vision models for quantitative microstructure representation, which can then serve as input to lightweight machine learning of specific properties. We showcase our approach in two case studies: stiffness of synthetic two-phase microstructures learned from simulation data and Vickers hardness of superalloys learned from experimental data. Our results show that pre-trained vision transformers can successfully extract microstructure features from images for efficient machine learning of microstructure–property relationships without the need for expensive task-specific training or fine-tuning. We further present and discuss extensions of this approach that incorporate additional information (e.g., compositions) beyond the microstructure for multimodal materials representation and modeling.
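The general workflow described in the abstract (a frozen pre-trained vision model used as a microstructure feature extractor, followed by a lightweight property regressor) can be sketched as below. This is a minimal illustration under stated assumptions, not the authors' implementation: the ImageNet-pretrained ViT-B/16 from torchvision, the ridge regressor, and the placeholder variables train_images, y_train, and test_images are all assumptions introduced for the example.

```python
# Minimal sketch: frozen pre-trained vision transformer as a microstructure
# feature extractor, followed by a lightweight regressor for a target property
# (e.g., Vickers hardness). Model choice and data variables are illustrative
# assumptions, not the specific pipeline reported in the abstract.
import torch
from torchvision.models import vit_b_16, ViT_B_16_Weights
from sklearn.linear_model import Ridge

weights = ViT_B_16_Weights.IMAGENET1K_V1
vit = vit_b_16(weights=weights)
vit.heads = torch.nn.Identity()    # drop the classifier to expose the 768-d embedding
vit.eval()                         # the transformer stays frozen; no fine-tuning
preprocess = weights.transforms()  # resizing/normalization expected by the model

@torch.no_grad()
def embed(images):
    """Map a list of PIL microstructure images to feature vectors."""
    batch = torch.stack([preprocess(img) for img in images])
    return vit(batch).numpy()

# Hypothetical data: microstructure images paired with measured properties.
features = embed(train_images)                        # shape (n_samples, 768)
regressor = Ridge(alpha=1.0).fit(features, y_train)   # lightweight, task-specific model
predicted = regressor.predict(embed(test_images))
```

Because the vision transformer stays frozen, only the small regressor is fit to the property data, which is what makes the learning step lightweight relative to training or fine-tuning a task-specific deep model.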