
Toward data-driven reduced-order modeling and control of flows with complex chaotic dynamics



TUR - Mathematical aspects of turbulence: where do we stand?

Many fluid flows are characterized by chaotic dynamics, a large number of degrees of freedom, and multiscale structure in space and time. We build on the idea that many dynamical systems nominally described by a very high- or infinite-dimensional state variable, such as the Navier-Stokes equations governing fluid flow, can be characterized with a much smaller number of dimensions, because the long-time dynamics lie on a finite-dimensional manifold. We describe a data-driven reduced-order modeling method that finds a coordinate representation of the manifold using an autoencoder and then learns an ordinary differential equation (ODE) describing the dynamics in these coordinates, using the so-called neural ODE framework. Because the model is a continuous-time ODE, the training data snapshots can be widely spaced in time. We apply this framework to spatiotemporal chaos in the Kuramoto-Sivashinsky equation (KSE), chaotic bursting dynamics of Kolmogorov flow, and transitional turbulence in plane Couette flow, finding dramatic dimension reduction while still obtaining good predictions of short-time trajectories and long-time statistics. For complex manifolds, this approach can be combined with clustering to generate overlapping local representations that are particularly useful for intermittent dynamics.

Finally, we apply this framework to a control problem that models drag reduction in turbulent flow. Deep reinforcement learning (RL) can discover control strategies for high-dimensional systems, making it promising for flow control. A major challenge, however, is that substantial training data must be generated by interacting with the target system, which is costly when the flow is computationally or experimentally expensive to evaluate. We mitigate this challenge by obtaining a low-dimensional dynamical model of the open-loop system from a limited data set, then learning an RL control policy using the model rather than the true system.
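The two-stage structure described above (learn manifold coordinates, then learn an ODE in those coordinates) can be illustrated with a minimal sketch. Here a linear projection (PCA) stands in for the autoencoder and an affine least-squares vector field stands in for the neural ODE; the embedded harmonic oscillator, the dimensions, and all numerical values are illustrative assumptions, not details from the talk.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "high-dimensional" data whose long-time dynamics live on a
# 2-D manifold: a harmonic oscillator embedded in R^20 by a random map.
dt, n_steps = 0.01, 2000
t = np.arange(n_steps) * dt
h_true = np.stack([np.cos(t), np.sin(t)], axis=1)   # latent trajectory, (n, 2)
W = rng.standard_normal((2, 20))
U = h_true @ W                                       # ambient snapshots, (n, 20)

# Stage 1: "encoder" = projection onto the two leading PCA modes
# (a linear stand-in for the autoencoder).
U_mean = U.mean(axis=0)
_, _, Vt = np.linalg.svd(U - U_mean, full_matrices=False)
encode = lambda u: (u - U_mean) @ Vt[:2].T
decode = lambda h: h @ Vt[:2] + U_mean
H = encode(U)

# Stage 2: fit dh/dt = h A + b by least squares on finite-difference
# velocities (an affine stand-in for the neural ODE vector field).
dH = np.gradient(H, dt, axis=0)
Phi = np.hstack([H, np.ones((n_steps, 1))])
coef, *_ = np.linalg.lstsq(Phi, dH, rcond=None)
A, b = coef[:2], coef[2]

# Roll the reduced model forward (forward Euler) and compare with data.
h = H[0].copy()
for _ in range(n_steps - 1):
    h = h + dt * (h @ A + b)
err = np.linalg.norm(decode(h) - U[-1]) / np.linalg.norm(U[-1])
print(f"relative error after {n_steps - 1} model steps: {err:.3f}")
```

The identified 2-D model tracks the 20-dimensional data over the full trajectory (the residual error here comes mainly from the first-order time stepper, not from the reduction itself), which is the dimension-reduction behavior the abstract describes, albeit in a trivially linear setting.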
We apply our method to data from the KSE in a spatiotemporally chaotic regime, with the aim of minimizing power consumption. The learned policy is highly effective, achieving this aim by discovering and stabilizing a low-dissipation steady-state solution, without ever being given explicit information that such a solution exists. Given that near-wall turbulence is organized around simpler recurrent solutions, the present approach may also prove effective for drag reduction.
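The model-based control loop described above can be sketched in miniature: collect a limited open-loop data set, identify a cheap surrogate model from it, and derive the control policy from the surrogate rather than the expensive system. In this sketch a scalar linear system stands in for the flow solver and an analytic deadbeat feedback stands in for deep RL policy training; all dynamics and numbers are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

a_true, b_true = 0.9, 0.5

def true_step(x, u):
    # Stands in for the expensive "true" system (e.g. a flow simulation).
    return a_true * x + b_true * u

# Open-loop data: random actuation only, no feedback on the true system.
n = 200
x = np.empty(n + 1)
x[0] = 1.0
u = rng.normal(size=n)
for t in range(n):
    x[t + 1] = true_step(x[t], u[t])

# Identify the surrogate model x' = a x + b u by least squares.
X = np.stack([x[:-1], u], axis=1)
(a_hat, b_hat), *_ = np.linalg.lstsq(X, x[1:], rcond=None)

# "Policy learning" on the surrogate only: one-step deadbeat feedback
# u = -(a/b) x, which drives the identified model to the origin.
policy = lambda s: -(a_hat / b_hat) * s

# Deploy the model-derived policy on the true system.
s = 1.0
for _ in range(5):
    s = true_step(s, policy(s))
print(f"identified a={a_hat:.3f}, b={b_hat:.3f}; |x| after control: {abs(s):.2e}")
```

The point of the sketch is the data flow, not the controller: every interaction used for policy design happens on the identified model, so the true system is queried only for the initial open-loop data set and the final evaluation.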

This talk is part of the Isaac Newton Institute Seminar Series.


