Deep Reinforcement Learning for the Management of the Wall Regeneration Cycle in Wall-Bounded Turbulent Flows
Department of Engineering, City St George’s, University of London, Northampton Square, EC1V 0HB, London, UK.
KTH, School of Engineering Sciences (SCI), Engineering Mechanics, Fluid Mechanics; School of Computation, Information and Technology, TU Munich, Boltzmannstr. 3, 85748, Garching, Germany. ORCID iD: 0000-0002-8589-1572
KTH, School of Engineering Sciences (SCI), Engineering Mechanics, Fluid Mechanics. ORCID iD: 0000-0001-6570-5499
Department of Engineering, City St George’s, University of London, Northampton Square, EC1V 0HB, London, UK.
2024 (English). In: Flow, Turbulence and Combustion, ISSN 1386-6184, E-ISSN 1573-1987. Article in journal (Refereed). Epub ahead of print.
Abstract [en]

The wall cycle in wall-bounded turbulent flows is a complex turbulence regeneration mechanism that is not yet fully understood. This study explores the potential of deep reinforcement learning (DRL) for managing the wall regeneration cycle to achieve desired flow dynamics. To create a robust framework for DRL-based flow control, we have integrated the Stable-Baselines3 DRL library with the open-source direct numerical simulation (DNS) solver CaNS. The DRL agent interacts with the DNS environment, learning policies that modify wall boundary conditions to optimise objectives such as the reduction of the skin-friction coefficient or the enhancement of certain features of the coherent structures. The implementation makes use of Message Passing Interface (MPI) wrappers for efficient communication between the Python-based DRL agent and the DNS solver, ensuring scalability on high-performance computing architectures. Initial experiments demonstrate the capability of DRL to achieve drag-reduction rates comparable with those achieved via traditional methods, although limited to short time intervals. We also propose a strategy to enhance the coherence of velocity streaks, assuming that maintaining straight streaks can inhibit instability and further reduce skin friction. Our results highlight the promise of DRL in flow-control applications and underscore the need for more advanced control laws and objective functions. Future work will focus on optimising actuation intervals and exploring new computational architectures to extend the applicability and the efficiency of DRL in turbulent-flow management.
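
The abstract describes a concrete coupling: a Python-based DRL agent (Stable-Baselines3) driving the DNS solver CaNS through MPI wrappers, with wall boundary conditions as actions and skin-friction reduction as the objective. The sketch below shows one plausible shape of such a coupling, using Gymnasium for the environment interface and mpi4py for communication. It is not the authors' released code: the ChannelFlowEnv class, the ./cans_drl executable name, the message tags and layouts, and the reward definition are all illustrative assumptions.

```python
import numpy as np
import gymnasium as gym
from gymnasium import spaces
from mpi4py import MPI
from stable_baselines3 import PPO


class ChannelFlowEnv(gym.Env):
    """Hypothetical Gymnasium wrapper around an MPI-connected DNS solver.

    Actions set wall blowing/suction amplitudes; the reward is the negative
    of the measured skin-friction coefficient.
    """

    def __init__(self, solver_comm, n_points=64):
        super().__init__()
        self.comm = solver_comm  # intercommunicator to the DNS ranks
        self.action_space = spaces.Box(-1.0, 1.0, shape=(n_points,), dtype=np.float32)
        self.observation_space = spaces.Box(-np.inf, np.inf, shape=(n_points,), dtype=np.float32)

    def reset(self, *, seed=None, options=None):
        super().reset(seed=seed)
        # Ask the solver to restart from a stored baseline flow field.
        self.comm.send("reset", dest=0, tag=0)
        obs = np.empty(self.observation_space.shape, dtype=np.float32)
        self.comm.Recv(obs, source=0, tag=1)
        return obs, {}

    def step(self, action):
        # Send the new wall boundary condition, let the DNS advance one
        # actuation interval, then collect the observation and drag measure.
        self.comm.send("step", dest=0, tag=0)
        self.comm.Send(np.asarray(action, dtype=np.float32), dest=0, tag=2)
        obs = np.empty(self.observation_space.shape, dtype=np.float32)
        self.comm.Recv(obs, source=0, tag=1)
        cf = self.comm.recv(source=0, tag=3)  # skin-friction coefficient
        reward = -float(cf)  # lower drag -> higher reward
        return obs, reward, False, False, {}


if __name__ == "__main__":
    # Spawn the solver as a separate MPI job; "./cans_drl" is a placeholder
    # for a DNS executable built against the same message protocol.
    comm = MPI.COMM_SELF.Spawn("./cans_drl", maxprocs=4)
    env = ChannelFlowEnv(comm)
    model = PPO("MlpPolicy", env, verbose=1)
    model.learn(total_timesteps=10_000)
    comm.send("quit", dest=0, tag=0)
```

On the solver side, a matching receive loop inside the time-stepping routine would consume these messages and apply the received amplitudes as wall boundary conditions; the paper's actual MPI interface to CaNS is not reproduced here.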

Place, publisher, year, edition, pages
Springer Nature, 2024.
Keywords [en]
Deep reinforcement learning, Direct numerical simulation, Drag reduction, Flow control
National Category
Fluid Mechanics
Identifiers
URN: urn:nbn:se:kth:diva-367348
DOI: 10.1007/s10494-024-00609-4
ISI: 001355278300001
Scopus ID: 2-s2.0-85209070244
OAI: oai:DiVA.org:kth-367348
DiVA id: diva2:1984662
Note

QC 20250717

Available from: 2025-07-17. Created: 2025-07-17. Last updated: 2025-07-17. Bibliographically approved.

Open Access in DiVA

No full text in DiVA

Other links

Publisher's full text
Scopus

Authority records

Guastoni, Luca; Vinuesa, Ricardo
