kth.se Publications KTH
Additive-feature-attribution methods: A review on explainable artificial intelligence for fluid dynamics and heat transfer
KTH, School of Engineering Sciences (SCI), Engineering Mechanics, Fluid Mechanics. KTH, School of Engineering Sciences (SCI), Centres, Linné Flow Center, FLOW. ORCID iD: 0000-0002-7052-4913
Instituto Universitario de Matemática Pura y Aplicada, Universitat Politècnica de València, Valencia, 46022, Spain.
KTH, School of Engineering Sciences (SCI), Centres, Linné Flow Center, FLOW. KTH, School of Engineering Sciences (SCI), Engineering Mechanics, Fluid Mechanics. ORCID iD: 0000-0001-6570-5499
2025 (English). In: International Journal of Heat and Fluid Flow, ISSN 0142-727X, E-ISSN 1879-2278, Vol. 112, article id 109662. Article, review article (Refereed). Published
Abstract [en]

The use of data-driven methods in fluid mechanics has surged dramatically in recent years due to their capacity to adapt to the complex and multi-scale nature of turbulent flows, as well as to detect patterns in large-scale simulations or experimental tests. In order to interpret the relationships generated in the models during the training process, numerical attributions need to be assigned to the input features. One important example is the family of additive-feature-attribution methods. These explainability methods link the input features with the model prediction, providing an interpretation based on a linear formulation of the models. The Shapley additive explanations (SHAP values) are formulated as the only possible interpretation that offers a unique solution for understanding the model. In this manuscript, the additive-feature-attribution methods are presented, showing four common implementations in the literature: kernel SHAP, tree SHAP, gradient SHAP, and deep SHAP. Then, the main applications of the additive-feature-attribution methods are introduced, dividing them into three main groups: turbulence modeling, fluid-mechanics fundamentals, and applied problems in fluid dynamics and heat transfer. This review shows that explainability techniques, and in particular additive-feature-attribution methods, are crucial for implementing interpretable and physics-compliant deep-learning models in the fluid-mechanics field.
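The additive-feature-attribution idea described above can be illustrated with a minimal sketch of exact Shapley values, computed by enumerating all feature coalitions. This is a generic illustration, not code from the reviewed paper; the toy model, function names, and baseline choice are hypothetical, and the exponential enumeration is only practical for a handful of features (the kernel/tree/gradient/deep SHAP variants named in the abstract exist precisely to approximate this efficiently).

```python
from itertools import combinations
from math import factorial

def shapley_values(f, x, baseline):
    """Exact Shapley values for model f at input x.

    Features outside a coalition are replaced by their baseline value.
    Exponential in the number of features; for illustration only.
    """
    n = len(x)
    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for size in range(n):
            # Shapley kernel weight for a coalition of this size
            weight = factorial(size) * factorial(n - size - 1) / factorial(n)
            for S in combinations(others, size):
                # Evaluate the model with coalition S present, feature i absent...
                z_without = [x[j] if j in S else baseline[j] for j in range(n)]
                # ...and with feature i added to the coalition
                z_with = list(z_without)
                z_with[i] = x[i]
                phi[i] += weight * (f(z_with) - f(z_without))
    return phi

# Hypothetical toy model with an interaction term (not from the paper)
f = lambda z: 2.0 * z[0] + 3.0 * z[1] + z[0] * z[1]
x = [1.0, 2.0]
baseline = [0.0, 0.0]
phi = shapley_values(f, x, baseline)

# Efficiency (additivity): attributions sum to f(x) - f(baseline)
assert abs(sum(phi) - (f(x) - f(baseline))) < 1e-9
print(phi)  # → [3.0, 7.0]
```

The efficiency check at the end is the "additive" part of additive feature attribution: the per-feature attributions decompose the model output exactly relative to the baseline prediction.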

Place, publisher, year, edition, pages
Elsevier BV, 2025. Vol. 112, article id 109662
Keywords [en]
Deep learning, Explainable artificial intelligence, Fluid mechanics, SHAP, Shapley values
National subject category
Computer Sciences
Identifiers
URN: urn:nbn:se:kth:diva-357893
DOI: 10.1016/j.ijheatfluidflow.2024.109662
ISI: 001433920200001
Scopus ID: 2-s2.0-85211198681
OAI: oai:DiVA.org:kth-357893
DiVA id: diva2:1922600
Note

QC 20250317

Available from: 2024-12-19. Created: 2024-12-19. Last updated: 2025-03-17. Bibliographically approved

Open Access in DiVA

Full text not available in DiVA

Other links

Publisher's full text | Scopus

Person

Cremades, Andrés; Vinuesa, Ricardo
