Additive-feature-attribution methods: A review on explainable artificial intelligence for fluid dynamics and heat transfer
KTH, Skolan för teknikvetenskap (SCI), Teknisk mekanik, Strömningsmekanik. KTH, Skolan för teknikvetenskap (SCI), Centra, Linné Flow Center, FLOW. ORCID iD: 0000-0002-7052-4913
Instituto Universitario de Matemática Pura y Aplicada, Universitat Politècnica de València, Valencia, 46022, Spain.
KTH, Skolan för teknikvetenskap (SCI), Centra, Linné Flow Center, FLOW. KTH, Skolan för teknikvetenskap (SCI), Teknisk mekanik, Strömningsmekanik. ORCID iD: 0000-0001-6570-5499
2025 (English) In: International Journal of Heat and Fluid Flow, ISSN 0142-727X, E-ISSN 1879-2278, Vol. 112, article id 109662. Article, research review (Refereed) Published
Abstract [en]

The use of data-driven methods in fluid mechanics has surged dramatically in recent years due to their capacity to adapt to the complex and multi-scale nature of turbulent flows, as well as to detect patterns in large-scale simulations or experimental tests. In order to interpret the relationships generated in the models during the training process, numerical attributions need to be assigned to the input features. One important example is the family of additive-feature-attribution methods. These explainability methods link the input features with the model prediction, providing an interpretation based on a linear formulation of the models. The Shapley additive explanations (SHAP values) are formulated as the only interpretation of this kind that offers a unique solution for understanding the model. In this manuscript, the additive-feature-attribution methods are presented, showing four common implementations in the literature: kernel SHAP, tree SHAP, gradient SHAP, and deep SHAP. Then, the main applications of the additive-feature-attribution methods are introduced, dividing them into three main groups: turbulence modeling, fluid-mechanics fundamentals, and applied problems in fluid dynamics and heat transfer. This review shows that explainability techniques, and in particular additive-feature-attribution methods, are crucial for implementing interpretable and physics-compliant deep-learning models in the fluid-mechanics field.
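As background for the methods the abstract names: Shapley values attribute a model's prediction additively to its input features by averaging each feature's marginal contribution over all coalitions of the other features. The sketch below is a minimal, exact brute-force implementation (exponential in the number of features; the kernel/tree/gradient/deep SHAP variants surveyed in the paper are approximations or model-specific accelerations of this computation). The `model`, `x`, and `background` names are illustrative, not taken from the paper; absent features are marginalized over a background sample, as in kernel SHAP.

```python
from itertools import combinations
from math import factorial

def coalition_value(model, x, background, subset):
    """Expected model output with features in `subset` fixed to x,
    and the remaining features averaged over the background samples."""
    total = 0.0
    for b in background:
        z = [x[i] if i in subset else b[i] for i in range(len(x))]
        total += model(z)
    return total / len(background)

def shapley_values(model, x, background):
    """Exact Shapley values by enumerating all feature coalitions."""
    n = len(x)
    phi = [0.0] * n
    others = lambda i: [j for j in range(n) if j != i]
    for i in range(n):
        for k in range(n):  # coalition sizes 0 .. n-1
            for S in combinations(others(i), k):
                # Shapley weight |S|! (n - |S| - 1)! / n!
                w = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
                gain = (coalition_value(model, x, background, set(S) | {i})
                        - coalition_value(model, x, background, set(S)))
                phi[i] += w * gain
    return phi
```

For a linear model with independent features this recovers the closed-form attribution w_i (x_i - E[x_i]), and in general the values satisfy the additivity (local accuracy) property: the attributions sum to the prediction minus the expected prediction over the background, which is the defining feature of additive-feature-attribution methods.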

Place, publisher, year, edition, pages
Elsevier BV, 2025. Vol. 112, article id 109662
Keywords [en]
Deep learning, Explainable artificial intelligence, Fluid mechanics, SHAP, Shapley values
HSV category
Identifiers
URN: urn:nbn:se:kth:diva-357893
DOI: 10.1016/j.ijheatfluidflow.2024.109662
ISI: 001433920200001
Scopus ID: 2-s2.0-85211198681
OAI: oai:DiVA.org:kth-357893
DiVA id: diva2:1922600
Note

QC 20250317

Available from: 2024-12-19 Created: 2024-12-19 Last updated: 2025-03-17 Bibliographically approved

Open Access in DiVA

Full text is not available in DiVA

Other links

Publisher's full text | Scopus

Person

Cremades, Andrés; Vinuesa, Ricardo
