Additive-feature-attribution methods: A review on explainable artificial intelligence for fluid dynamics and heat transfer
KTH, School of Engineering Sciences (SCI), Engineering Mechanics, Fluid Mechanics. KTH, School of Engineering Sciences (SCI), Centres, Linné Flow Center, FLOW. ORCID iD: 0000-0002-7052-4913
Instituto Universitario de Matemática Pura y Aplicada, Universitat Politècnica de València, Valencia, 46022, Spain.
KTH, School of Engineering Sciences (SCI), Centres, Linné Flow Center, FLOW. KTH, School of Engineering Sciences (SCI), Engineering Mechanics, Fluid Mechanics. ORCID iD: 0000-0001-6570-5499
2025 (English). In: International Journal of Heat and Fluid Flow, ISSN 0142-727X, E-ISSN 1879-2278, Vol. 112, article id 109662. Article, review/survey (Refereed). Published.
Abstract [en]

The use of data-driven methods in fluid mechanics has surged dramatically in recent years due to their capacity to adapt to the complex, multi-scale nature of turbulent flows and to detect patterns in large-scale simulations or experimental tests. To interpret the relationships a model learns during training, numerical attributions need to be assigned to the input features. One important example is the family of additive-feature-attribution methods. These explainability methods link the input features to the model prediction, providing an interpretation based on a linear formulation of the model. Shapley additive explanations (SHAP values) are formulated as the only such interpretation that offers a unique solution for understanding the model. In this review, the additive-feature-attribution methods are presented together with four common implementations from the literature: kernel SHAP, tree SHAP, gradient SHAP, and deep SHAP. The main applications of these methods are then introduced, divided into three groups: turbulence modeling, fluid-mechanics fundamentals, and applied problems in fluid dynamics and heat transfer. The review shows that explainability techniques, and additive-feature-attribution methods in particular, are crucial for implementing interpretable and physics-compliant deep-learning models in the fluid-mechanics field.
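Background note (not part of the DiVA record): every additive-feature-attribution method explains a prediction $f(x)$ through a linear surrogate $g(z') = \phi_0 + \sum_{i=1}^{M} \phi_i z'_i$, where $z'_i \in \{0, 1\}$ encodes the presence of input feature $i$ and $\phi_i$ is its attribution. SHAP takes the $\phi_i$ to be the Shapley values
$$\phi_i = \sum_{S \subseteq F \setminus \{i\}} \frac{|S|!\,(|F| - |S| - 1)!}{|F|!} \left[ f_{S \cup \{i\}}(x_{S \cup \{i\}}) - f_S(x_S) \right],$$
the unique attribution satisfying local accuracy, missingness, and consistency (Lundberg and Lee, 2017). The sketch below is a minimal illustration, assuming the open-source Python packages shap and scikit-learn (neither is named in this record); the data and model are hypothetical stand-ins, not from the paper. It uses tree SHAP, one of the four implementations listed in the abstract, and checks the additive (local-accuracy) property numerically.

import numpy as np
import shap
from sklearn.ensemble import RandomForestRegressor

# Hypothetical stand-in data: three "flow features" and a nonlinear target.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
y = X[:, 0] + 2.0 * X[:, 1] ** 2 + 0.1 * rng.normal(size=500)

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

# Tree SHAP computes exact Shapley values efficiently for tree ensembles.
explainer = shap.TreeExplainer(model)
phi = explainer.shap_values(X)  # shape: (n_samples, n_features)

# Local accuracy: base value + sum of attributions equals the prediction.
i = 0
print(model.predict(X[i:i + 1])[0])
print(explainer.expected_value + phi[i].sum())  # matches the line above

Kernel SHAP (shap.KernelExplainer), gradient SHAP (shap.GradientExplainer), and deep SHAP (shap.DeepExplainer) expose the same interface for black-box and deep-learning models.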

Place, publisher, year, edition, pages
Elsevier BV, 2025. Vol. 112, article id 109662
Keywords [en]
Deep learning, Explainable artificial intelligence, Fluid mechanics, SHAP, Shapley values
National Category
Computer Sciences
Identifiers
URN: urn:nbn:se:kth:diva-357893
DOI: 10.1016/j.ijheatfluidflow.2024.109662
ISI: 001433920200001
Scopus ID: 2-s2.0-85211198681
OAI: oai:DiVA.org:kth-357893
DiVA, id: diva2:1922600
Note

QC 20250317

Available from: 2024-12-19. Created: 2024-12-19. Last updated: 2025-03-17. Bibliographically approved.

Open Access in DiVA

No full text in DiVA

Other links

Publisher's full text
Scopus

Authority records

Cremades, Andrés; Vinuesa, Ricardo
