Deep Reinforcement Learning for the Management of the Wall Regeneration Cycle in Wall-Bounded Turbulent Flows
Department of Engineering, City St George's, University of London, Northampton Square, EC1V 0HB, London, UK.
KTH, School of Engineering Sciences (SCI), Engineering Mechanics, Fluid Mechanics; School of Computation, Information and Technology, TU Munich, Boltzmannstr. 3, 85748, Garching, Germany. ORCID iD: 0000-0002-8589-1572
KTH, School of Engineering Sciences (SCI), Engineering Mechanics, Fluid Mechanics. ORCID iD: 0000-0001-6570-5499
Department of Engineering, City St George's, University of London, Northampton Square, EC1V 0HB, London, UK.
2024 (English). In: Flow, Turbulence and Combustion, ISSN 1386-6184, E-ISSN 1573-1987. Journal article (Refereed). Epub ahead of print.
Abstract [en]

The wall cycle in wall-bounded turbulent flows is a complex turbulence regeneration mechanism that remains not fully understood. This study explores the potential of deep reinforcement learning (DRL) for managing the wall regeneration cycle to achieve desired flow dynamics. To create a robust framework for DRL-based flow control, we have integrated the Stable-Baselines3 DRL library with the open-source direct numerical simulation (DNS) solver CaNS. The DRL agent interacts with the DNS environment, learning policies that modify wall boundary conditions to optimise objectives such as the reduction of the skin-friction coefficient or the enhancement of certain coherent structures’ features. The implementation makes use of message-passing interface (MPI) wrappers for efficient communication between the Python-based DRL agent and the DNS solver, ensuring scalability on high-performance computing architectures. Initial experiments demonstrate the capability of DRL to achieve drag-reduction rates comparable with those achieved via traditional methods, although limited to short time intervals. We also propose a strategy to enhance the coherence of velocity streaks, assuming that maintaining straight streaks can inhibit instability and further reduce skin friction. Our results highlight the promise of DRL in flow-control applications and underscore the need for more advanced control laws and objective functions. Future work will focus on optimising actuation intervals and exploring new computational architectures to extend the applicability and the efficiency of DRL in turbulent flow management.
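The coupling described in the abstract — a DRL agent that observes the flow and applies wall boundary conditions as actions — can be sketched as a Gym-style environment. The sketch below is illustrative only: the class name `ChannelFlowEnv`, the sensor/actuator counts, and the toy surrogate dynamics inside `step()` are hypothetical stand-ins, not the paper's actual CaNS coupling; in the real framework, `step()` would advance the DNS solver over one actuation interval via MPI, and a trained Stable-Baselines3 policy would supply the actions.

```python
import numpy as np


class ChannelFlowEnv:
    """Gym-style sketch of the DRL/DNS coupling (hypothetical).

    In the actual framework, step() would advance the CaNS DNS solver
    over one actuation interval (communicating via MPI); here a toy
    linear surrogate stands in so the control loop is runnable.
    """

    def __init__(self, n_sensors=16, n_actuators=8, seed=0):
        self.rng = np.random.default_rng(seed)
        self.n_sensors = n_sensors
        self.n_actuators = n_actuators
        self.state = None

    def reset(self):
        # Observation: e.g. wall-shear-stress samples at sensor locations.
        self.state = self.rng.standard_normal(self.n_sensors)
        return self.state

    def step(self, action):
        # Action: blowing/suction amplitudes at the wall actuators
        # (applied as boundary conditions in the real DNS).
        action = np.clip(np.asarray(action, dtype=float), -1.0, 1.0)
        # Toy surrogate dynamics: actuation damps observed fluctuations,
        # with a small stochastic forcing mimicking turbulence.
        coupling = np.resize(action, self.n_sensors)
        self.state = (0.9 * self.state - 0.1 * coupling
                      + 0.05 * self.rng.standard_normal(self.n_sensors))
        # Reward: negative surrogate for the skin-friction coefficient,
        # approximated here by the mean-square sensor reading.
        reward = -float(np.mean(self.state ** 2))
        done = False  # episodes are truncated externally by the trainer
        return self.state, reward, done, {}


env = ChannelFlowEnv()
obs = env.reset()
for _ in range(10):
    action = np.zeros(env.n_actuators)  # a trained policy would act here
    obs, reward, done, info = env.step(action)
```

Because the environment follows the standard reset/step interface, swapping the surrogate dynamics for MPI calls into the DNS solver would leave the agent-side training loop unchanged.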

Place, publisher, year, edition, pages
Springer Nature, 2024.
Keywords [en]
Deep reinforcement learning, Direct numerical simulation, Drag reduction, Flow control
National subject category
Fluid Mechanics
Identifiers
URN: urn:nbn:se:kth:diva-367348
DOI: 10.1007/s10494-024-00609-4
ISI: 001355278300001
Scopus ID: 2-s2.0-85209070244
OAI: oai:DiVA.org:kth-367348
DiVA, id: diva2:1984662
Note

QC 20250717

Available from: 2025-07-17 Created: 2025-07-17 Last updated: 2025-07-17 Bibliographically approved

Open Access in DiVA

Full text is not available in DiVA

Other links

Publisher's full text
Scopus

Authors

Guastoni, Luca; Vinuesa, Ricardo
