| Title | Surrogate ML/AI Model Benchmarking for FAIR Principles' Conformance |
| Publication Type | Conference Paper |
| Year of Publication | 2022 |
| Authors | Luszczek, P., and C. Brown |
| Conference Name | 2022 IEEE High Performance Extreme Computing Conference (HPEC) |
| Keywords | Analytical models, Benchmark testing, Cloud computing, Computational modeling, Data models, Measurement, Satellites |
We present a benchmarking platform for surrogate ML/AI models that enables the essential properties of open science, allowing the models to be findable, accessible, interoperable, and reusable. We also present a use case of cloud cover modeling, analysis, and experimental testing based on a large dataset of multi-spectral satellite sensor data. We use this evaluation to highlight the many choices that must be resolved across the life cycle of supporting scientific workflows with data-driven models, which must first be trained to satisfactory accuracy and later monitored during field use to provide proper feedback into both computational results and future data-model improvements. Unlike traditional testing, performance, or analysis efforts, we focus exclusively on science-oriented metrics as the relevant figures of merit.