kth.se Publications
Active flow control for drag reduction through multi-agent reinforcement learning on a turbulent cylinder at Re_D = 3900
Affiliations
  • KTH, School of Engineering Sciences (SCI), Centres, Linné Flow Center, FLOW; KTH, School of Engineering Sciences (SCI), Engineering Mechanics.
  • KTH, School of Engineering Sciences (SCI), Engineering Mechanics; KTH, School of Engineering Sciences (SCI), Centres, Linné Flow Center, FLOW.
  • Barcelona Supercomputing Center (BSC-CNS).
  • Independent researcher, Oslo.
(English) Manuscript (preprint) (Other academic)
Abstract [en]

This study presents novel active-flow-control (AFC) strategies aimed at achieving drag reduction for a three-dimensional cylinder immersed in a flow at a Reynolds number, based on the freestream velocity and cylinder diameter, of Re_D = 3900. The cylinder in this subcritical flow regime has been extensively studied in the literature and is considered a classic case of turbulent flow arising from a bluff body. The strategies presented are explored through deep reinforcement learning. The cylinder is equipped with 10 independent pairs of zero-net-mass-flux jets, distributed on the top and bottom surfaces, which define the AFC setup. The method is based on the coupling between a computational-fluid-dynamics solver and a multi-agent reinforcement-learning (MARL) framework using the proximal-policy-optimization algorithm. Thanks to the accelerated training enabled by exploiting the local invariants of the flow with MARL, a drag reduction of 8% was achieved, with a mass-cost efficiency two orders of magnitude lower than that of the classical controls reported in the literature. This development represents a significant advancement in active flow control, particularly in the turbulent regimes critical to industrial applications.
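As a concrete illustration of the MARL setup described above — one agent per jet pair, with every agent evaluating the same policy on its local observations to exploit the spanwise invariance of the flow — the following minimal sketch shows the shared-policy structure and the zero-net-mass-flux constraint. All names, dimensions, and the linear stand-in "policy" are illustrative assumptions, not the actual PPO network or the Alya-coupled framework:

```python
import numpy as np

rng = np.random.default_rng(0)

N_AGENTS = 10   # one agent per zero-net-mass-flux jet pair, as in the setup above
OBS_DIM = 4     # number of local pressure probes per agent (hypothetical)

def shared_policy(obs, weights):
    """Linear stand-in for the shared policy network.

    Every agent evaluates the SAME weights on its LOCAL observation,
    which is how MARL exploits the spanwise invariance of the problem."""
    return np.tanh(obs @ weights)  # bounded action in [-1, 1]

def jet_pair_flow(action, q_max=1.0):
    """Zero-net-mass-flux pair: when the top jet blows +q, the bottom
    jet sucks -q, so the pair injects no net mass into the domain."""
    q = q_max * action
    return q, -q

weights = 0.1 * rng.normal(size=OBS_DIM)    # one parameter set shared by all agents
obs = rng.normal(size=(N_AGENTS, OBS_DIM))  # local observations, one row per agent

actions = shared_policy(obs, weights)       # one action per jet pair
flows = [jet_pair_flow(a) for a in actions]

# Net mass flux over all jets is zero by construction
net = sum(q_top + q_bot for q_top, q_bot in flows)
print(abs(net) < 1e-12)  # True
```

In the actual framework each "environment step" would advance the CFD solver and return a local reward; the sketch only captures the weight sharing and the actuation constraint.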

Keywords [en]
Machine learning, active flow control, deep reinforcement learning, fluid mechanics
National Category
Engineering and Technology
Identifiers
URN: urn:nbn:se:kth:diva-356279
DOI: 10.48550/arXiv.2405.17655
OAI: oai:DiVA.org:kth-356279
DiVA, id: diva2:1912846
Note

QC 20241115

Available from: 2024-11-13. Created: 2024-11-13. Last updated: 2025-03-31. Bibliographically approved.
In thesis
1. Multi-agent reinforcement learning for enhanced turbulence control in bluff bodies
2024 (English) Licentiate thesis, comprehensive summary (Other academic)
Alternative title[sv]
Multi-agent förstärkningsinlärning för förbättrad turbulensreglering i bluffkroppar
Abstract [en]

This licentiate thesis explores the application of deep reinforcement learning (DRL) to flow control in bluff bodies, focusing on reducing drag forces on infinite cylinders. The research spans a range of flow conditions, from laminar to fully turbulent, aiming to advance the state of the art in DRL by exploring novel scenarios not yet covered in the fluid-mechanics literature. Our focus is on the flow around cylinders in two and three dimensions, over a range of Reynolds numbers Re_D based on the freestream velocity U and cylinder diameter D. We first consider a single-agent reinforcement-learning (SARL) approach using the proximal-policy-optimization (PPO) algorithm, coupled with the Alya numerical solver. This approach led to significant drag reductions of 20% and 17.7% for Re_D = 1000 and 2000, respectively, in a two-dimensional (2D) setting. The framework was designed for deployment on high-performance computers, enabling large-scale training with synchronized numerical simulations.

Next, we focused on three-dimensional (3D) cylinders, where spanwise instabilities emerge for Re_D > 250. Drawing inspiration from studies such as Williamson (1996) and findings from Tang et al. (2020), we explored strategies for Re_D = 100 to 400 with a multi-agent reinforcement learning (MARL) framework. This approach focused on local invariants, using multiple jets across the top and bottom surfaces. The MARL framework successfully reduced drag by 21% and 16.5% for Re_D = 300 and 400, respectively, outperforming periodic-control strategies by 10 percentage points and doubling efficiency.

Finally, the framework was tested in a fully turbulent environment at Re_D = 3900, a well-established case in the literature. Despite the significant computational challenges and complex flow structures, the MARL approach delivered significant results, achieving an 8.3% drag reduction while reducing the mass flow used in the actuation by two orders of magnitude compared with Kim & Choi (2005). Across these studies, the drag-reduction mechanisms learned by the agents involve altering the wake topology to attenuate the maxima of the Reynolds stresses and move their location upstream, enlarging the recirculation bubble and reducing pressure drag.
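The proximal-policy-optimization algorithm used throughout the thesis trains the policy with a clipped surrogate objective. The small sketch below illustrates that objective in isolation — a generic PPO formula, not the thesis implementation; the sample ratios and advantages are made up:

```python
import numpy as np

def ppo_clip_loss(ratio, advantage, eps=0.2):
    """PPO clipped surrogate: L = -mean(min(r*A, clip(r, 1-eps, 1+eps)*A)).

    `ratio` is pi_new(a|s) / pi_old(a|s); clipping removes the incentive
    to move the policy far from the one that collected the data."""
    unclipped = ratio * advantage
    clipped = np.clip(ratio, 1.0 - eps, 1.0 + eps) * advantage
    return -np.mean(np.minimum(unclipped, clipped))

# Three sample transitions, all with advantage A = 1:
ratio = np.array([0.5, 1.0, 1.5])  # probability ratios pi_new / pi_old
adv = np.array([1.0, 1.0, 1.0])
loss = ppo_clip_loss(ratio, adv)   # terms: min(0.5, 0.8), min(1, 1), min(1.5, 1.2)
print(round(float(loss), 6))       # -0.9
```

With a positive advantage the third ratio (1.5) is clipped to 1.2, so pushing the policy further in that direction yields no extra objective improvement — the mechanism that keeps each policy update small.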

Abstract [sv]

This licentiate thesis explores the use of deep reinforcement learning (DRL) for flow control around bluff bodies, with particular focus on reducing the drag forces on infinite cylinders. A range of flow conditions is investigated, from laminar to fully developed turbulent flow. The goal is to build on the latest developments in DRL by exploring new flow conditions that have not yet been addressed in fluid mechanics. Our focus is on the flow around cylinders in two and three dimensions, over a range of Reynolds numbers Re_D based on the freestream velocity U and the cylinder diameter D. In the first part of the thesis, a single-agent reinforcement-learning method with proximal-policy optimization, coupled to the numerical solver Alya, was developed. This method led to significant drag reductions of 20% and 17.7% for Re_D = 1000 and 2000 in a two-dimensional (2D) setting. This framework for control, numerical simulation and analysis was designed to run on high-performance computers, enabling large-scale training of the network with synchronized numerical simulations. Next, we focused on three-dimensional (3D) cylinders, where instabilities along the cylinder axis appear at Re_D > 250. Inspired by studies such as Williamson (1996) and results from Tang et al. (2020), we investigated strategies for Re_D = 100 to 400 with a multi-agent reinforcement-learning (MARL) framework. This method focused on local invariance principles and used multiple jets over the upper and lower surfaces of the cylinder to control the flow. The MARL framework reduced drag by 21% and 16.5% for Re_D = 300 and 400, respectively, outperforming periodic-control strategies by 10 percentage points and doubling the efficiency. Finally, the framework was tested on turbulent flow at Re_D = 3900, a well-known case in the literature.
Despite the computational challenges and complex flow structures, MARL reduced drag by 8.3% and the actuation mass flow by two orders of magnitude compared with Kim & Choi (2005). Common to our studies is that the drag-reduction mechanisms learned by the agents involve altering the topology of the wake to attenuate the maxima of the Reynolds stresses and move them upstream. This leads to an enlarged recirculation bubble and reduced pressure drag.

Place, publisher, year, edition, pages
Stockholm: KTH Royal Institute of Technology, 2024. p. 124
Series
TRITA-SCI-FOU ; 2024:53
Keywords
Machine learning, active flow control, deep reinforcement learning, fluid mechanics, turbulence, Fluidmekanik, laminär-turbulent övergång, aktiv flödeskontroll, dragreduktion, maskininlärning, djup förstärkningsinlärning
National Category
Fluid Mechanics; Engineering and Technology
Research subject
Engineering Mechanics
Identifiers
urn:nbn:se:kth:diva-356281 (URN)
978-91-8106-103-1 (ISBN)
Presentation
2024-12-05, F2, Kungliga Tekniska Högskolan, Lindstedtsvägen 26 & 28, Stockholm, 13:00 (English)
Funder
EU, European Research Council, grant no. 2021-CoG-101043998, DEEPCONTROL
Note

QC 241114

Available from: 2024-11-14. Created: 2024-11-13. Last updated: 2025-02-05. Bibliographically approved.

Open Access in DiVA

fulltext (FULLTEXT01.pdf, 20330 kB)
Checksum SHA-512: 379cca39bdfaaa4f641b55acc28aa4f7efcccf3eda86de4a0235379eb0ecdb070bedc4818e6adcd37b3307267f1112fb4c81666d1bcc107dc082ce1a92b34aec
Type: fulltext. Mimetype: application/pdf

Authority records

Suarez, Pol; Alcantara-Avila, Francisco; Vinuesa, Ricardo
