kth.se Publications

Alcantara-Avila, Francisco (ORCID iD: orcid.org/0000-0003-0704-6100)

Publications (9 of 9)
Suárez Morales, P., Alcantara-Avila, F., Miro, A., Rabault, J., Font, B., Lehmkuhl, O. & Vinuesa, R. (2025). Active Flow Control for Drag Reduction Through Multi-agent Reinforcement Learning on a Turbulent Cylinder at ReD=3900. Flow, Turbulence and Combustion.
2025 (English). In: Flow, Turbulence and Combustion, ISSN 1386-6184, E-ISSN 1573-1987. Article in journal (Refereed). Published.
Abstract [en]

This study presents novel drag reduction active-flow-control (AFC) strategies for a three-dimensional cylinder immersed in a flow at a Reynolds number based on freestream velocity and cylinder diameter of ReD=3900. The cylinder in this subcritical flow regime has been extensively studied in the literature and is considered a classic case of turbulent flow arising from a bluff body. The strategies presented are explored through the use of deep reinforcement learning. The cylinder is equipped with 10 independent zero-net-mass-flux jet pairs, distributed on the top and bottom surfaces, which define the AFC setup. The method is based on the coupling between a computational-fluid-dynamics solver and a multi-agent reinforcement-learning (MARL) framework using the proximal-policy-optimization algorithm. This work introduces a multi-stage training approach to expand the exploration space and enhance drag reduction stabilization. By accelerating training through the exploitation of local invariants with MARL, a drag reduction of approximately 9% is achieved. The cooperative closed-loop strategy developed by the agents is sophisticated, as it utilizes a wide bandwidth of mass-flow-rate frequencies, which classical control methods are unable to match. Notably, the mass cost efficiency is demonstrated to be two orders of magnitude lower than that of classical control methods reported in the literature. These developments represent a significant advancement in active flow control in turbulent regimes, critical for industrial applications.
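Several entries in this list rest on the same core idea: each jet pair is treated as a local agent, and all agents share a single policy, so experience gathered anywhere on the cylinder trains the same network. The following is a minimal illustrative sketch of that sharing idea only, not the authors' code: the names (`Policy`, `control_step`) are hypothetical, and a trivial linear map stands in for the PPO-trained neural network used in the paper.

```python
class Policy:
    """One policy shared by all agents; a linear map stands in for
    the PPO-trained network of the actual study (an assumption)."""
    def __init__(self, weight=0.5):
        self.weight = weight

    def act(self, local_obs):
        # Map a local observation (e.g. a pressure reading near one
        # jet pair) to a mass-flow-rate action for that pair.
        return self.weight * local_obs

def control_step(policy, local_observations):
    # Every jet pair queries the SAME policy on its own local state;
    # this locality/invariance is what accelerates MARL training.
    actions = [policy.act(obs) for obs in local_observations]
    # One simple way to impose zero net mass flux: remove the mean.
    # (In the paper, each jet is instead paired with an opposite-sign
    # counterpart; mean removal is just an illustration.)
    mean_action = sum(actions) / len(actions)
    return [a - mean_action for a in actions]

policy = Policy()
actions = control_step(policy, [0.2, -0.1, 0.3, 0.0])
print(actions)
```

Because the policy is shared, adding more jet pairs adds training data rather than action dimensions, which is why MARL avoids the curse of dimensionality that a single-agent formulation would face.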

Place, publisher, year, edition, pages
Springer Nature, 2025
Keywords
Fluid mechanics, Drag reduction, Deep learning, Active flow control, Multi-agent reinforcement learning
National Category
Other Engineering and Technologies
Identifiers
urn:nbn:se:kth:diva-361632 (URN), 10.1007/s10494-025-00642-x (DOI), 001437491200001 (ISI), 2-s2.0-86000319598 (Scopus ID)
Note

QC 20250324

Available from: 2025-03-24. Created: 2025-03-24. Last updated: 2025-03-24. Bibliographically approved.
Font, B., Alcantara-Avila, F., Rabault, J., Vinuesa, R. & Lehmkuhl, O. (2025). Deep reinforcement learning for active flow control in a turbulent separation bubble. Nature Communications, 16(1), Article ID 1422.
2025 (English). In: Nature Communications, E-ISSN 2041-1723, Vol. 16, no 1, article id 1422. Article in journal (Refereed). Published.
Abstract [en]

The control efficacy of deep reinforcement learning (DRL) compared with classical periodic forcing is numerically assessed for a turbulent separation bubble (TSB). We show that a control strategy learned on a coarse grid works on a fine grid as long as the coarse grid captures the main flow features. This makes it possible to significantly reduce the computational cost of DRL training in a turbulent-flow environment. On the fine grid, the periodic control is able to reduce the TSB area by 6.8%, while the DRL-based control achieves a 9.0% reduction. Furthermore, the DRL agent provides a smoother control strategy while conserving momentum instantaneously. The physical analysis of the DRL control strategy reveals the production of large-scale counter-rotating vortices by adjacent actuator pairs. It is shown that the DRL agent acts on a wide range of frequencies to sustain these vortices in time. Lastly, we introduce our computational-fluid-dynamics and DRL open-source framework, suited for the next generation of exascale computing machines.

Place, publisher, year, edition, pages
Springer Nature, 2025
National Category
Fluid Mechanics
Identifiers
urn:nbn:se:kth:diva-360388 (URN), 10.1038/s41467-025-56408-6 (DOI), 001416000300004 (ISI), 39915442 (PubMedID), 2-s2.0-85218216283 (Scopus ID)
Note

Correction in DOI 10.1038/s41467-025-57534-x.

QC 20250507

Available from: 2025-02-26. Created: 2025-02-26. Last updated: 2025-05-07. Bibliographically approved.
Font, B., Alcantara-Avila, F., Rabault, J., Vinuesa, R. & Lehmkuhl, O. (2024). Active flow control of a turbulent separation bubble through deep reinforcement learning. In: 5th Madrid Turbulence Workshop, 29/05/2023-30/06/2023, Madrid, Spain. Paper presented at the 5th Madrid Summer School on Turbulence Workshop, Madrid, Spain, May 29 2023 - Jun 30 2023. IOP Publishing, 2753, Article ID 012022.
2024 (English). In: 5th Madrid Turbulence Workshop, 29/05/2023-30/06/2023, Madrid, Spain, IOP Publishing, 2024, Vol. 2753, article id 012022. Conference paper, Published paper (Refereed).
Abstract [en]

The control efficacy of classical periodic forcing and deep reinforcement learning (DRL) is assessed for a turbulent separation bubble (TSB) at Reτ = 180, with actuation on the upstream region before separation occurs. The TSB can resemble a separation phenomenon naturally arising on wings, and a successful reduction of the TSB can have practical implications in the reduction of the aviation carbon footprint. We find that the classical zero-net-mass-flux (ZNMF) periodic control is able to reduce the TSB by 15.7%. On the other hand, the DRL-based control achieves a 25.3% reduction and provides a smoother control strategy while also being ZNMF. To the best of our knowledge, the current test case is the highest-Reynolds-number flow that has been successfully controlled using DRL to date. In future work, these results will be scaled to well-resolved large-eddy-simulation grids. Furthermore, we provide details of our open-source CFD-DRL framework, suited for the next generation of exascale computing machines.

Place, publisher, year, edition, pages
IOP Publishing, 2024
Series
Journal of Physics: Conference Series, ISSN 1742-6596 ; 2753
National Category
Fluid Mechanics
Identifiers
urn:nbn:se:kth:diva-346842 (URN), 10.1088/1742-6596/2753/1/012022 (DOI), 001223470600022 (ISI), 2-s2.0-85193071647 (Scopus ID)
Conference
5th Madrid Summer School on Turbulence Workshop, Madrid, Spain, May 29 2023 - Jun 30 2023
Note

QC 20240531

Available from: 2024-05-24. Created: 2024-05-24. Last updated: 2025-02-09. Bibliographically approved.
Alcantara-Avila, F., Garcia-Raffi, L. M., Hoyas, S. & Oberlack, M. (2024). Validation of symmetry-induced high moment velocity and temperature scaling laws in a turbulent channel flow. Physical Review E, 109(2), Article ID 025104.
2024 (English). In: Physical Review E, ISSN 2470-0045, E-ISSN 2470-0053, Vol. 109, no 2, article id 025104. Article in journal (Refereed). Published.
Abstract [en]

The symmetry-based turbulence theory has been used to derive new scaling laws for the streamwise velocity and temperature moments of arbitrary order. For this, it has been applied to an incompressible turbulent channel flow driven by a pressure gradient, with a passive-scalar equation coupled in. To derive the scaling laws, symmetries of the classical Navier-Stokes and thermal-energy equations have been used together with statistical symmetries, i.e., the statistical scaling and translation symmetries of the multipoint moment equations. Specifically, the multipoint moments are built on the instantaneous velocity and temperature fields, unlike in the classical approach, where moments are based on the fluctuations of these fields. With this instantaneous approach, a linear system of multipoint correlation equations has been obtained, which greatly simplifies the symmetry analysis. The scaling laws have been derived in the limit of zero viscosity and heat conduction, i.e., Re_τ → ∞ and Pr > 1, and they apply in the center of the channel, i.e., they represent a generalization of the deficit law, thus extending the work of Oberlack et al. [Phys. Rev. Lett. 128, 024502 (2022)]. The scaling laws are all power laws, with the exponents of the high moments all depending exclusively on those of the first and second moments. To validate the new scaling laws, data from a large number of direct numerical simulations (DNS) for different Reynolds and Prandtl numbers have been used. The results show that the scaling laws represent the DNS data with very high accuracy. The statistical scaling symmetry of the multipoint moment equations, which characterizes intermittency, has been the key to the new results, since it generates a constant in the exponent of the final scaling law. Most importantly, since this constant is independent of the order of the moments, it clearly indicates anomalous scaling.

Place, publisher, year, edition, pages
American Physical Society (APS), 2024
National Category
Fluid Mechanics
Identifiers
urn:nbn:se:kth:diva-345064 (URN), 10.1103/PhysRevE.109.025104 (DOI), 001171289100003 (ISI), 38491667 (PubMedID), 2-s2.0-85185395013 (Scopus ID)
Note

QC 20240405

Available from: 2024-04-05. Created: 2024-04-05. Last updated: 2025-02-09. Bibliographically approved.
Vignon, C., Rabault, J., Vasanth, J., Alcantara-Avila, F., Mortensen, M. & Vinuesa, R. (2023). Effective control of two-dimensional Rayleigh-Bénard convection: Invariant multi-agent reinforcement learning is all you need. Physics of Fluids, 35(6), Article ID 065146.
2023 (English). In: Physics of Fluids, ISSN 1070-6631, E-ISSN 1089-7666, Vol. 35, no 6, article id 065146. Article in journal (Refereed). Published.
Abstract [en]

Rayleigh-Bénard convection (RBC) is a recurrent phenomenon in a number of industrial and geoscience flows and a well-studied system from a fundamental fluid-mechanics viewpoint. In the present work, we conduct numerical simulations to apply deep reinforcement learning (DRL) for controlling two-dimensional RBC using sensor-based feedback control. We show that effective RBC control can be obtained by leveraging invariant multi-agent reinforcement learning (MARL), which takes advantage of the locality and translational invariance inherent to RBC flows inside wide channels. MARL applied to RBC allows for an increase in the number of control segments without encountering the curse of dimensionality that would result from a naive increase in the DRL action-size dimension. This is made possible by MARL's ability to reuse the knowledge generated in different parts of the RBC domain. MARL is able to discover an advanced control strategy that destabilizes the spontaneous RBC double-cell pattern, changes the topology of RBC by coalescing adjacent convection cells, and actively controls the resulting coalesced cell to bring it to a new stable configuration. This modified flow configuration results in reduced convective heat transfer, which is beneficial in a number of industrial processes. We additionally draw comparisons with a conventional single-agent reinforcement-learning (SARL) setup and report that, in the same number of episodes, SARL is not able to learn an effective policy to control the cells. Thus, our work both shows the potential of MARL for controlling large RBC systems and demonstrates the possibility for DRL to discover strategies that move the system between different topological configurations, yielding desirable heat-transfer characteristics.

Place, publisher, year, edition, pages
AIP Publishing, 2023
National Category
Fluid Mechanics
Identifiers
urn:nbn:se:kth:diva-333551 (URN), 10.1063/5.0153181 (DOI), 001021745900007 (ISI), 2-s2.0-85164269942 (Scopus ID)
Note

QC 20231122

Available from: 2023-08-03. Created: 2023-08-03. Last updated: 2025-02-09. Bibliographically approved.
Varela, P., Suárez, P., Alcantara-Avila, F., Miró, A., Rabault, J., Font, B., . . . Vinuesa, R. (2022). Deep Reinforcement Learning for Flow Control Exploits Different Physics for Increasing Reynolds Number Regimes. Actuators, 11(12), Article ID 359.
2022 (English). In: Actuators, E-ISSN 2076-0825, Vol. 11, no 12, article id 359. Article in journal (Refereed). Published.
Abstract [en]

The increase in emissions associated with aviation requires deeper research into novel sensing and flow-control strategies to obtain improved aerodynamic performance. In this context, data-driven methods are suitable for exploring new approaches to control the flow and develop more efficient strategies. Deep artificial neural networks (ANNs) used together with reinforcement learning, i.e., deep reinforcement learning (DRL), are receiving more attention due to their capabilities for controlling complex problems in multiple areas. In particular, these techniques have recently been used to solve problems related to flow control. In this work, an ANN trained through a DRL agent, coupled with the numerical solver Alya, is used to perform active flow control. The Tensorforce library was used to apply DRL to the simulated flow. Two-dimensional simulations of the flow around a cylinder were conducted, and an active control based on two jets located on the walls of the cylinder was considered. By gathering information from the flow surrounding the cylinder, the ANN agent is able to learn, through proximal policy optimization (PPO), effective control strategies for the jets, leading to a significant drag reduction. Furthermore, the agent needs to account for the coupled effects of the friction- and pressure-drag components, as well as the interaction between the two boundary layers on both sides of the cylinder and the wake. In the present work, a Reynolds-number range beyond those previously considered was studied and compared with results obtained using classical flow-control methods. Control strategies of significantly different nature were identified by the DRL agent as the Reynolds number Re increased. On the one hand, for Re ≤ 1000, the classical control strategy based on opposition control relative to the wake oscillation was obtained. On the other hand, for Re = 2000, the new strategy consisted of energization of the boundary layers and the separation area, which modulated the flow separation and reduced the drag in a fashion similar to that of the drag crisis, through a high-frequency actuation. A cross-application of agents was performed for a flow at Re = 2000, obtaining similar results in terms of the drag reduction with the agents trained at Re = 1000 and 2000. The fact that two different strategies yielded the same performance made us question whether this Reynolds-number regime (Re = 2000) belongs to a transition towards a flow of a different nature, which would only admit a high-frequency actuation strategy to obtain the drag reduction. At the same time, this finding allows for the application of ANNs trained at lower Reynolds numbers of a comparable flow nature, saving computational resources.

Place, publisher, year, edition, pages
MDPI AG, 2022
Keywords
numerical simulation, wake dynamics, flow control, machine learning, deep reinforcement learning
National Category
Fluid Mechanics
Identifiers
urn:nbn:se:kth:diva-356269 (URN), 10.3390/act11120359 (DOI), 000900414000001 (ISI), 2-s2.0-85144726353 (Scopus ID)
Note

QC 20241217

Available from: 2024-11-13. Created: 2024-11-13. Last updated: 2025-03-31. Bibliographically approved.
Schmekel, D., Alcantara-Avila, F., Hoyas, S. & Vinuesa, R. (2022). Predicting Coherent Turbulent Structures via Deep Learning. Frontiers in Physics, 10, Article ID 888832.
2022 (English). In: Frontiers in Physics, E-ISSN 2296-424X, Vol. 10, article id 888832. Article in journal (Refereed). Published.
Abstract [en]

Turbulent flow is widespread in many applications, such as airplane wings or turbine blades. Such flow is highly chaotic and impossible to predict far into the future. Some regions exhibit a coherent physical behavior in turbulent flow, satisfying specific properties; these regions are denoted as coherent structures. This work considers structures connected with the Reynolds stresses, which are essential quantities for modeling and understanding turbulent flows. Deep-learning techniques have recently had promising results for modeling turbulence, and here we investigate their capabilities for modeling coherent structures. We use data from a direct numerical simulation (DNS) of a turbulent channel flow to train a convolutional neural network (CNN) and predict the number and volume of the coherent structures in the channel over time. Overall, the performance of the CNN model is very good, with a satisfactory agreement between the predicted geometrical properties of the structures and those of the reference DNS data.

Place, publisher, year, edition, pages
Frontiers Media SA, 2022
Keywords
turbulence, coherent turbulent structures, machine learning, convolutional neural networks, deep learning
National Category
Metallurgy and Metallic Materials; Other Physics Topics; Building Technologies
Identifiers
urn:nbn:se:kth:diva-312777 (URN), 10.3389/fphy.2022.888832 (DOI), 000791945300001 (ISI), 2-s2.0-85128911862 (Scopus ID)
Note

QC 20220523

Available from: 2022-05-23. Created: 2022-05-23. Last updated: 2024-03-15. Bibliographically approved.
Suarez, P., Alcantara-Avila, F., Miró, A., Rabault, J., Font, B., Lehmkuhl, O. & Vinuesa, R. Active flow control for drag reduction through multi-agent reinforcement learning on a turbulent cylinder at ReD=3900.
(English). Manuscript (preprint) (Other academic).
Abstract [en]

This study presents novel active-flow-control (AFC) strategies aimed at achieving drag reduction for a three-dimensional cylinder immersed in a flow at a Reynolds number based on freestream velocity and cylinder diameter of ReD = 3900. The cylinder in this subcritical flow regime has been extensively studied in the literature and is considered a classic case of turbulent flow arising from a bluff body. The strategies presented are explored through the use of deep reinforcement learning. The cylinder is equipped with 10 independent zero-net-mass-flux jet pairs, distributed on the top and bottom surfaces, which define the AFC setup. The method is based on the coupling between a computational-fluid-dynamics solver and a multi-agent reinforcement-learning (MARL) framework using the proximal-policy-optimization algorithm. Thanks to the acceleration in training facilitated by exploiting the local invariants with MARL, a drag reduction of 8% was achieved, with a mass cost efficiency two orders of magnitude lower than those of the existing classical controls in the literature. This development represents a significant advancement in active flow control, particularly in turbulent regimes critical to industrial applications.

Keywords
Machine learning, active flow control, deep reinforcement learning, fluid mechanics
National Category
Engineering and Technology
Identifiers
urn:nbn:se:kth:diva-356279 (URN), 10.48550/arXiv.2405.17655 (DOI)
Note

QC 20241115

Available from: 2024-11-13. Created: 2024-11-13. Last updated: 2025-03-31. Bibliographically approved.
Suarez, P., Alcantara-Avila, F., Miró, A., Rabault, J., Font, B., Lehmkuhl, O. & Vinuesa, R. Flow control of three-dimensional cylinders transitioning to turbulence via multi-agent reinforcement learning.
(English). Manuscript (preprint) (Other academic).
Abstract [en]

Designing active-flow-control (AFC) strategies for three-dimensional (3D) bluff bodies is a challenging task with critical industrial implications. In this study we explore the potential of discovering novel control strategies for drag reduction using deep reinforcement learning. We introduce a high-dimensional AFC setup on a 3D cylinder, considering Reynolds numbers (ReD) from 100 to 400, a range that includes the transition to 3D wake instabilities. The setup involves multiple zero-net-mass-flux jets positioned on the top and bottom surfaces, aligned into two slots. The method relies on coupling the computational-fluid-dynamics solver with a multi-agent reinforcement-learning (MARL) framework based on the proximal-policy-optimization algorithm. MARL offers several advantages: it exploits local invariance, provides control that is adaptable across geometries, facilitates transfer learning and cross-application of agents, and results in a significant training speedup. For instance, our results demonstrate a 21% drag reduction for ReD = 300, outperforming classical periodic control, which yields up to a 6% reduction. To the authors' knowledge, the present MARL-based framework represents the first in which training is conducted on 3D cylinders. This breakthrough paves the way for conducting AFC on progressively more complex turbulent-flow configurations.

Keywords
Machine learning, active flow control, deep reinforcement learning, fluid mechanics
National Category
Fluid Mechanics; Engineering and Technology
Research subject
Aerospace Engineering
Identifiers
urn:nbn:se:kth:diva-356270 (URN), 10.48550/arXiv.2405.17210 (DOI)
Note

Under review in Nature Engineering Communications

QC 20241113

Available from: 2024-11-13. Created: 2024-11-13. Last updated: 2025-03-31. Bibliographically approved.