Multi-agent reinforcement learning for enhanced turbulence control in bluff bodies
KTH, School of Engineering Sciences (SCI), Engineering Mechanics.
2024 (English) Licentiate thesis, comprehensive summary (Other academic)
Sustainable development
SDG 9: Industry, innovation and infrastructure
Alternative title
Multi-agent förstärkningsinlärning för förbättrad turbulensreglering i bluffkroppar (Swedish)
Abstract [en]

This licentiate thesis explores the application of deep reinforcement learning (DRL) to flow control in bluff bodies, focusing on reducing drag forces on infinite cylinders. The research spans a range of flow conditions, from laminar to fully turbulent, aiming to advance the state-of-the-art in DRL by exploring novel scenarios not yet covered in the fluid-mechanics literature. Our focus is on the flow around cylinders in two and three dimensions, over a range of Reynolds numbers Re_D based on the freestream velocity U and the cylinder diameter D. We first consider a single-agent reinforcement learning (SARL) approach using the proximal-policy optimization (PPO) algorithm, coupled with the Alya numerical solver. This approach led to significant drag reductions of 20% and 17.7% for Re_D = 1000 and 2000, respectively, in a two-dimensional (2D) setting. The framework was designed for deployment on high-performance computers, enabling large-scale training with synchronized numerical simulations.

Next, we focused on three-dimensional (3D) cylinders, where spanwise instabilities emerge for Re_D > 250. Drawing inspiration from studies such as Williamson (1996) and findings from Tang et al. (2020), we explored strategies for Re_D = 100 to 400 with a multi-agent reinforcement learning (MARL) framework. This approach focused on local invariants, using multiple jets across the top and bottom surfaces. The MARL framework successfully reduced drag by 21% and 16.5% for Re_D = 300 and 400, respectively, outperforming periodic-control strategies by 10 percentage points and doubling efficiency.

Finally, the framework was tested in a fully turbulent environment at Re_D = 3900, a well-established case in the literature. Despite the significant computational challenges and complex flow structures, the MARL approach delivered an 8.3% drag reduction while using two orders of magnitude less actuation mass flow than Kim & Choi (2005). Across these studies, the drag-reduction mechanisms learned by the agents involve altering the wake topology to attenuate the Reynolds stresses and move the location of their maxima upstream, thereby enlarging the recirculation bubble and reducing the pressure drag.
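
The mechanism summarised above can be pictured directly in terms of the Reynolds stresses. As a rough illustration only (this is not the thesis's post-processing code; the file names, grid layout and snapshot format are assumptions), the sketch below computes the Reynolds shear stress from velocity snapshots and locates its peak, the quantity the agents are reported to attenuate and shift.

```python
# Illustrative sketch: compute the Reynolds shear stress <u'v'> from velocity
# snapshots and locate its peak magnitude in the wake. Data layout is assumed.
import numpy as np

# Hypothetical snapshot arrays with shape (n_snapshots, ny, nx) on a 2D slice.
u = np.load("u_snapshots.npy")   # streamwise velocity
v = np.load("v_snapshots.npy")   # cross-stream velocity
x = np.load("x_grid.npy")        # (ny, nx) streamwise coordinate in diameters

# Reynolds decomposition: subtract the time mean to obtain the fluctuations.
u_fluc = u - u.mean(axis=0)
v_fluc = v - v.mean(axis=0)

# Reynolds shear stress field <u'v'> (per unit density).
uv = (u_fluc * v_fluc).mean(axis=0)

# The drag-reduction mechanism described in the abstract is reflected in how
# this peak weakens and shifts while the recirculation bubble grows.
j, i = np.unravel_index(np.abs(uv).argmax(), uv.shape)
print(f"max |<u'v'>| = {np.abs(uv).max():.4f} at x/D = {x[j, i]:.2f}")
```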

Abstract [sv]

Denna licentiatavhandling utforskar möjligheten att använda förstärkningsinlärning (DRL) för strömningskontroll kring trubbiga kroppar, där speciellt fokus ligger på att minska motståndskrafterna på oändliga cylindrar. En rad strömningsförhållanden undersöks, från laminär till fullt utvecklad turbulent strömning. Målet är att bygga vidare på den senaste utvecklingen inom DRL genom att utforska nya strömningsförhållanden som ännu inte har behandlats inom strömningsmekaniken. Vårt fokus ligger på strömning runt cylindrar i två och tre dimensioner samt över ett spektrum av Reynolds-tal Re_D baserat på friströmshastigheten U och cylinderns diameter D. I den första delen av avhandlingen utvecklades en enskild agent-förstärkningsinlärningsmetod med proximal policy optimization kopplad till den numeriska lösaren Alya. Denna metod ledde till betydande minskningar i motståndskrafterna på 20% och 17,7% för Re_D = 1000 och 2000 i en tvådimensionell (2D) miljö. Detta ramverk för kontroll, numerisk simulering och analys utformades för att köras på högpresterande datorer, vilket möjliggjorde storskalig träning av nätverket med synkroniserade numeriska simuleringar.

Därefter fokuserade vi på tredimensionella (3D) cylindrar, där instabiliteter längs cylinderaxeln uppträder vid Re_D > 250. Inspirerade av studier som Williamson (1996) och resultat från Tang et al. (2020) undersökte vi strategier för Re_D = 100 till 400 med ett multiagent-förstärkningsinlärningsramverk (MARL). Denna metod fokuserade på lokala invariansprinciper och använde flera jetstrålar över cylinderns övre och nedre ytor för kontroll av strömningen. MARL-ramverket minskade motståndet med 21% respektive 16,5% för Re_D = 300 och 400 och överträffade periodiska kontrollstrategier med 10 procentenheter samtidigt som effektiviteten fördubblades.

Slutligen testades ramverket på turbulent strömning vid Re_D = 3900, ett välkänt fall i litteraturen. Trots beräkningsutmaningar och komplexa flödesstrukturer minskade MARL motståndet med 8,3% och massflödet med två storleksordningar jämfört med Kim & Choi (2005). Gemensamt för våra studier är att de motståndsminskande mekanismer som lärts av agenterna involverar att förändra strömningsvakens topologi för att dämpa och flytta Reynolds-stressernas maximala värden uppströms. Detta leder till att återcirkulationsbubblan förstoras och att tryckmotståndet minskar.

Place, publisher, year, edition, pages
Stockholm: KTH Royal Institute of Technology, 2024, p. 124
Series
TRITA-SCI-FOU ; 2024:53
Keywords [en]
Machine learning, active flow control, deep reinforcement learning, fluid mechanics, turbulence
Keywords [sv]
Fluidmekanik, laminär-turbulent övergång, aktiv flödeskontroll, dragreduktion, maskininlärning, djup förstärkningsinlärning
National subject category
Fluid Mechanics; Engineering and Technology
Research subject
Engineering Mechanics
Identifiers
URN: urn:nbn:se:kth:diva-356281
ISBN: 978-91-8106-103-1 (print)
OAI: oai:DiVA.org:kth-356281
DiVA, id: diva2:1912939
Presentation
2024-12-05, F2, Kungliga Tekniska Högskolan, Lindstedtsvägen 26 & 28, Stockholm, 13:00 (English)
Opponent
Supervisors
Research funder
EU, European Research Council, grant no. 2021-CoG-101043998, DEEPCONTROL
Note

QC 241114

Available from: 2024-11-14 Created: 2024-11-13 Last updated: 2025-02-05 Bibliographically approved
List of papers
1. Deep Reinforcement Learning for Flow Control Exploits Different Physics for Increasing Reynolds Number Regimes
2022 (English) In: Actuators, E-ISSN 2076-0825, Vol. 11, no. 12, article id 359. Article in journal (Refereed) Published
Abstract [en]

The increase in emissions associated with aviation requires deeper research into novel sensing and flow-control strategies to obtain improved aerodynamic performance. In this context, data-driven methods are suitable for exploring new approaches to control the flow and develop more efficient strategies. Deep artificial neural networks (ANNs) used together with reinforcement learning, i.e., deep reinforcement learning (DRL), are receiving more attention due to their ability to control complex problems in multiple areas. In particular, these techniques have recently been used to solve problems related to flow control. In this work, an ANN trained through a DRL agent, coupled with the numerical solver Alya, is used to perform active flow control. The Tensorforce library was used to apply DRL to the simulated flow. Two-dimensional simulations of the flow around a cylinder were conducted, and an active control based on two jets located on the walls of the cylinder was considered. By gathering information from the flow surrounding the cylinder, the ANN agent is able to learn, through proximal-policy optimization (PPO), effective control strategies for the jets, leading to a significant drag reduction. Furthermore, the agent needs to account for the coupled effects of the friction- and pressure-drag components, as well as the interaction between the two boundary layers on both sides of the cylinder and the wake. In the present work, a Reynolds-number range beyond those previously considered was studied and compared with results obtained using classical flow-control methods. Control strategies of a significantly different nature were identified by the DRL as the Reynolds number Re increased. On the one hand, for Re ≤ 1000, the classical control strategy based on opposition control relative to the wake oscillation was obtained. On the other hand, for Re = 2000, the new strategy consisted of an energization of the boundary layers and the separation area, which modulated the flow separation and reduced the drag in a fashion similar to that of the drag crisis, through a high-frequency actuation. A cross-application of agents was performed for a flow at Re = 2000, obtaining similar drag reductions with the agents trained at Re = 1000 and Re = 2000. The fact that two different strategies yielded the same performance made us question whether this Reynolds-number regime (Re = 2000) belongs to a transition towards a different flow nature, which would only admit a high-frequency actuation strategy to obtain the drag reduction. At the same time, this finding allows for the application of ANNs trained at lower but comparable Reynolds numbers, saving computational resources.
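
To make the setup concrete, the following is a minimal, self-contained sketch of how a PPO agent from the Tensorforce library (the library named above) can be coupled to a two-jet cylinder environment. The solver calls are stubbed with placeholders, and the probe count, network sizes, reward shape and hyperparameters are illustrative assumptions, not the values or interfaces used in the paper.

```python
# Hypothetical sketch of a Tensorforce PPO setup for two-jet cylinder control.
# The CFD coupling is stubbed out; all numbers below are illustrative placeholders.
import numpy as np
from tensorforce import Agent, Environment

class CylinderEnv(Environment):
    """Wraps one 2D cylinder simulation; the solver calls are stubbed here."""

    N_PROBES = 151   # pressure probes around the cylinder and in the wake (assumed)
    N_STEPS = 400    # actions per training episode (assumed)

    def states(self):
        return dict(type='float', shape=(self.N_PROBES,))

    def actions(self):
        # One degree of freedom: the second jet gets -Q so that the net mass
        # flux is zero (a common choice, assumed here).
        return dict(type='float', shape=(1,), min_value=-1.0, max_value=1.0)

    def reset(self):
        self.t = 0
        return np.zeros(self.N_PROBES)       # placeholder probe readings

    def execute(self, actions):
        self.t += 1
        # Placeholder for: set the jets, advance the CFD solver, read probes/forces.
        next_state = np.random.normal(size=self.N_PROBES)
        cd, cl = 1.0 - 0.1 * float(actions[0]) ** 2, 0.0   # dummy force coefficients
        reward = -cd - 0.2 * abs(cl)         # one common reward: penalise drag and lift
        return next_state, self.t >= self.N_STEPS, reward

environment = Environment.create(environment=CylinderEnv,
                                 max_episode_timesteps=CylinderEnv.N_STEPS)
agent = Agent.create(
    agent='ppo', environment=environment,
    batch_size=20, learning_rate=1e-3,
    network=[dict(type='dense', size=512), dict(type='dense', size=512)],
)

for episode in range(10):                    # toy loop; real trainings run far longer
    states, terminal = environment.reset(), False
    while not terminal:
        actions = agent.act(states=states)
        states, terminal, reward = environment.execute(actions=actions)
        agent.observe(terminal=terminal, reward=reward)
```

In the study itself the environment advances a synchronized Alya simulation for each action rather than the stub above; only the interface shape is kept here.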

Place, publisher, year, edition, pages
MDPI AG, 2022
Keywords
numerical simulation, wake dynamics, flow control, machine learning, deep reinforcement learning
National subject category
Fluid Mechanics
Identifiers
urn:nbn:se:kth:diva-356269 (URN)
10.3390/act11120359 (DOI)
000900414000001 ()
2-s2.0-85144726353 (Scopus ID)
Note

QC 20241217

Available from: 2024-11-13 Created: 2024-11-13 Last updated: 2025-03-31 Bibliographically approved
2. Flow control of three-dimensional cylinders transitioning to turbulence via multi-agent reinforcement learning
(English) Manuscript (preprint) (Other academic)
Abstract [en]

Designing active-flow-control (AFC) strategies for three-dimensional (3D) bluff bodies is a challenging task with critical industrial implications. In this study, we explore the potential of discovering novel control strategies for drag reduction using deep reinforcement learning. We introduce a high-dimensional AFC setup on a 3D cylinder, considering Reynolds numbers Re_D from 100 to 400, a range that includes the transition to 3D wake instabilities. The setup involves multiple zero-net-mass-flux jets positioned on the top and bottom surfaces, aligned into two slots. The method relies on coupling the computational-fluid-dynamics solver with a multi-agent reinforcement-learning (MARL) framework based on the proximal-policy-optimization algorithm. MARL offers several advantages: it exploits local invariance, provides control that adapts across geometries, facilitates transfer learning and cross-application of agents, and results in a significant training speedup. For instance, our results demonstrate a 21% drag reduction for Re_D = 300, outperforming classical periodic control, which yields up to a 6% reduction. To the authors' knowledge, the present MARL-based framework represents the first time that training has been conducted on 3D cylinders. This breakthrough paves the way for conducting AFC on progressively more complex turbulent-flow configurations.
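
The local-invariance idea can be pictured as follows (an illustrative sketch under assumed interfaces, not the framework's actual code): each spanwise jet segment acts as a pseudo-agent that observes only its local probes, every pseudo-agent evaluates the same shared policy, and all their transitions are pooled into a single PPO update, which is where the training speedup comes from.

```python
# Minimal sketch of the multi-agent arrangement: N spanwise jet segments share
# one policy and each contributes its own (state, action, reward) sample per
# CFD step. Policy, probe data and reward design are placeholders/assumptions.
import numpy as np

N_AGENTS = 10        # spanwise jet segments acting as pseudo-agents (assumed)
LOCAL_OBS = 24       # probes seen by each segment (assumed)

def shared_policy(local_obs, rng):
    """Stand-in for the shared PPO actor: the same network for every segment."""
    return float(np.tanh(0.1 * local_obs.mean() + rng.normal(scale=0.1)))

rng = np.random.default_rng(0)
local_states = rng.normal(size=(N_AGENTS, LOCAL_OBS))    # placeholder probe data

# Every segment queries the same policy on its own local observation.
actions = np.array([shared_policy(s, rng) for s in local_states])

# Zero-net-mass-flux actuation per segment: opposite signs on the two slots.
q_top, q_bottom = actions, -actions

# One possible reward design (an assumption here): blend a global drag term
# with a local per-segment term; all N_AGENTS transitions then feed a single
# PPO update of the shared weights, multiplying the samples per CFD step.
global_reward = -1.0                                      # e.g. a drag penalty (placeholder)
local_rewards = rng.normal(scale=0.01, size=N_AGENTS)     # placeholder
rewards = 0.8 * global_reward + 0.2 * local_rewards
print(actions.shape, rewards.shape)                       # (10,) (10,)
```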

Keywords
Machine learning, active flow control, deep reinforcement learning, fluid mechanics
National subject category
Fluid Mechanics; Engineering and Technology
Research subject
Aerospace Engineering
Identifiers
urn:nbn:se:kth:diva-356270 (URN)
10.48550/arXiv.2405.17210 (DOI)
Note

Under review in Nature Engineering Communications

QC 20241113

Available from: 2024-11-13 Created: 2024-11-13 Last updated: 2025-03-31 Bibliographically approved
3. Active flow control for drag reduction through multi-agent reinforcement learning on a turbulent cylinder at ReD=3900
(English) Manuscript (preprint) (Other academic)
Abstract [en]

This study presents novel active-flow-control (AFC) strategies aimed at achieving drag reduction for a three-dimensional cylinder immersed in a flow at a Reynolds number, based on freestream velocity and cylinder diameter, of Re_D = 3900. The cylinder in this subcritical flow regime has been extensively studied in the literature and is considered a classic case of turbulent flow arising from a bluff body. The strategies presented are explored through the use of deep reinforcement learning. The cylinder is equipped with 10 independent zero-net-mass-flux jet pairs, distributed on the top and bottom surfaces, which define the AFC setup. The method is based on the coupling between a computational-fluid-dynamics solver and a multi-agent reinforcement-learning (MARL) framework using the proximal-policy-optimization algorithm. Thanks to the acceleration in training facilitated by exploiting the local invariants with MARL, a drag reduction of 8% was achieved, with a mass-cost efficiency two orders of magnitude lower than that of the existing classical controls in the literature. This development represents a significant advancement in active flow control, particularly in turbulent regimes critical to industrial applications.
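
As context for the mass-cost figure, one common way to express actuation cost is the ratio of the total jet mass-flow rate to the freestream mass flux through the frontal area of the cylinder; the sketch below evaluates such a ratio for made-up numbers and is not the paper's exact definition or data.

```python
# Illustrative (assumed) actuation-cost metric: jet mass flow normalised by the
# freestream mass flux through the cylinder frontal area. All numbers are made
# up for illustration and are not results from the paper.
import numpy as np

rho = 1.0            # fluid density (nondimensional)
U_inf = 1.0          # freestream velocity
D = 1.0              # cylinder diameter
L_z = np.pi          # spanwise length of the simulated cylinder (assumed)

# Hypothetical actuation: 10 jet pairs, each with peak velocity ratio 0.02
# and slot area 0.01 * D * (L_z / 10).
n_pairs = 10
jet_velocity = 0.02 * U_inf
slot_area = 0.01 * D * (L_z / n_pairs)

m_dot_jets = n_pairs * rho * jet_velocity * slot_area   # total injected mass flow
m_dot_ref = rho * U_inf * D * L_z                       # freestream reference

cost_ratio = m_dot_jets / m_dot_ref
print(f"actuation mass-flow ratio: {cost_ratio:.2e}")   # ~2.0e-04 for these numbers
```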

Keywords
Machine learning, active flow control, deep reinforcement learning, fluid mechanics
National subject category
Engineering and Technology
Identifiers
urn:nbn:se:kth:diva-356279 (URN)
10.48550/arXiv.2405.17655 (DOI)
Note

QC 20241115

Available from: 2024-11-13 Created: 2024-11-13 Last updated: 2025-03-31 Bibliographically approved

Open Access in DiVA

summary_licentiate_SUAREZ (18387 kB), 155 downloads
File information
File name: FULLTEXT01.pdf, File size: 18387 kB, Checksum: SHA-512
aef177419fb5e212baeeb32094d18ad211830f5768cb94f96d0e27f9d4953ceaadbcb343581d490be41da1395c69bd068e2fd185b17adecf7b902ba09ca57c17
Type: summary, Mimetype: application/pdf

Person

Suárez Morales, Pol
