Introduction
Machine learning is quickly emerging as a fundamental tool for scientific computing and offers many opportunities to advance computational fluid dynamics. In this perspective we highlight some of the areas with the greatest potential impact, such as enhancing turbulence closure modeling, accelerating direct numerical simulations, and developing improved reduced-order models. We also discuss several intriguing new directions in machine learning for computational fluid dynamics, along with possible drawbacks that should be kept in mind.
Part I gave an overview of the subject, followed by several intriguing applications of integrating ML into CFD. Among the more interesting and pressing topics we covered were the use of deep learning to solve the pressure Poisson equation (PPE) and to accelerate direct numerical simulations. Another was the impact of machine learning (ML) on turbulence modeling, with a focus on RANS, which accounts for roughly 95% of CFD in industrial applications.
The post “The integration of machine learning (ML) into CFD – Part II” covered ML-based LES subgrid-scale (SGS) models, along with machine learning procedures in the development of reduced-order models (ROMs).
“The integration of machine learning (ML) into CFD – Part III” covered recent advances in robust methods for non-intrusive sensing, super-resolution of turbulent flows, and novel approaches to flow control.
Part IV focuses on a novel approach to solving nonlinear partial differential equations (PDEs) with machine learning (ML) techniques, with specific emphasis on using coarser grids to accelerate 2D DNS without compromising solution accuracy.
Accelerating 2D Direct Numerical Simulation (DNS)
Direct Numerical Simulation (DNS) is a computational technique in fluid dynamics that solves the governing equations of fluid flow directly and in full detail, without any turbulence model. The appeal of such a direct numerical description is a mixed blessing, however: its feasibility is governed by a dimensionless number expressing how strongly momentum is convected relative to how quickly it diffuses, and, equivalently, how thin a boundary layer is relative to the body – the Reynolds number.
The computational effort of Direct Numerical Simulation (DNS) of the Navier–Stokes equations scales roughly as the Reynolds number to the power 9/4, which renders such calculations prohibitive for most engineering applications of practical interest, and it shall remain so for the foreseeable future. Its use is therefore confined to simple geometries and a limited range of Reynolds numbers, with the aim of supplying significant insight into turbulence physics that cannot be attained in the laboratory.
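As a back-of-the-envelope illustration of this scaling (my arithmetic, not a figure from the article), raising the Reynolds number by one order of magnitude multiplies the DNS effort by 10^(9/4), i.e. nearly 180x:

```python
# Illustrative only: relative DNS cost under the Re^(9/4) scaling quoted above.
def dns_cost_ratio(re_high, re_low, exponent=9 / 4):
    """Ratio of computational effort between two Reynolds numbers."""
    return (re_high / re_low) ** exponent

print(round(dns_cost_ratio(1e5, 1e4)))  # one decade of Re -> ~178x the effort
```

This is why even modest increases in Reynolds number push DNS out of reach for industrial configurations.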
Integrating Machine Learning for the Acceleration of DNS
The suggested methodology solves the Navier–Stokes equations for turbulent flows by combining the advantages of convolutional neural networks and other machine-learning components with conventional numerical techniques. The solver’s accuracy and efficiency are enhanced by the emphasis on learned interpolation and correction techniques for convection, especially on a coarse grid.
The algorithm is a hybrid approach that combines neural networks with standard numerical methods for solving fluid dynamics problems, particularly the Navier–Stokes equations:
(1) Time Step Procedure:
In each time step, the neural network generates a latent vector at each grid location based on the current velocity field.
The generated latent vectors are then used by subcomponents of the solver to account for local solution structures.
(2) Neural Network Architecture:
Convolutional neural networks (CNNs) are used, enforcing translation invariance and allowing them to be local in space.
(3) Standard Numerical Methods Components:
- Convective Flux Model: Improves the approximation of the discretized convection operator.
- Divergence Operator: Enforces local conservation of momentum according to a finite volume method.
- Pressure Projection to enforce incompressibility.
- Explicit Time-Step Operator: Forces the dynamics to be continuous in time, allowing for the incorporation of additional time-varying forces.
(4) DNS on a Coarse Grid: Unlike traditional numerical methods, which may use arbitrary polynomials, the learned solvers are optimized to fit the observed manifold of solutions to the equations they solve. Empirically, this approach has been shown to significantly improve accuracy over high-order numerical methods.
(5) Focus on Two Types of ML Components:
- Learned Interpolation: Centered on convection, a key term in the Navier–Stokes equations for turbulent flows.
- Learned Correction: Also centered on convection. Both types of ML components leverage the strengths of neural networks to improve the modeling of turbulent flows.
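The time-step structure described in points (1)–(5) can be sketched in plain NumPy. This is a minimal, hypothetical illustration on a 1D Burgers-like problem (so the pressure projection of the full method does not appear); names such as `learned_interp` are assumptions for illustration, not the authors’ API:

```python
import numpy as np

def learned_interp(u, theta):
    # Stand-in for the CNN: a learned convex combination of neighboring cell
    # values used to reconstruct face values for the convective flux.
    w = np.exp(theta) / np.exp(theta).sum()   # weights sum to 1
    return w[0] * u + w[1] * np.roll(u, -1)   # face value between cells i, i+1

def time_step(u, theta, dt=0.01, dx=0.1):
    u_face = learned_interp(u, theta)          # (1)-(2): ML interpolation
    flux = u_face * u_face / 2                 # convective flux (Burgers-like)
    div = (flux - np.roll(flux, 1)) / dx       # (3): finite-volume divergence
    return u - dt * div                        # (3): explicit time-step update

u0 = np.sin(np.linspace(0, 2 * np.pi, 64, endpoint=False))  # periodic field
theta = np.zeros(2)        # untrained weights -> central-like interpolation
u1 = time_step(u0, theta)
```

In the actual method the interpolation weights come from a CNN conditioned on the local flow, and training adjusts them so that the coarse-grid update tracks a reference fine-grid simulation.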
The specific application makes use of JAX, a numerical computing library that has gained popularity for its efficient support of automatic differentiation.
JAX is characterized by the following attributes:
- Automatic Differentiation (AD): Automatic differentiation is the process of computing derivatives of mathematical functions, and it is essential for training neural networks with gradient-based optimization. JAX offers a powerful and efficient foundation for AD: it enables users to compute gradients (first-order derivatives) of functions with respect to their inputs, as well as higher-order derivatives.
- Functional and Compositional Approach: JAX adopts a functional and compositional programming style, making it well-suited for expressing mathematical functions in a way that aligns with automatic differentiation.
- Pure and Immutable Functions: JAX encourages the use of pure and immutable functions, which facilitates a more straightforward application of automatic differentiation.
- Support for NumPy Operations: NumPy is the fundamental numerical computing package for Python, providing large multi-dimensional arrays and matrices together with mathematical functions to operate on them, and it is widely used in physics, engineering, machine learning, and data science. JAX’s API is designed to be compatible with NumPy, making the transition easy for users already familiar with it.
- Efficient GPU and TPU Acceleration: JAX seamlessly integrates with hardware accelerators, such as GPUs (Graphics Processing Units) and TPUs (Tensor Processing Units), providing efficient computation for large-scale machine learning tasks.
- Support for Neural Network Libraries: JAX can be used in conjunction with neural-network libraries built on top of it, such as Flax and Haiku, allowing users to benefit from JAX’s automatic differentiation capabilities while leveraging the high-level abstractions these frameworks provide.
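The first two attributes above can be shown in a few lines of standard JAX usage (my own example, not code from the article): `jax.grad` yields first-order gradients and composes to higher orders, over a NumPy-compatible API.

```python
import jax
import jax.numpy as jnp

# f(x) = sum_i sin^2(x_i); its gradient is sin(2x) elementwise,
# and its Hessian is diagonal with entries 2*cos(2x).
def f(x):
    return jnp.sum(jnp.sin(x) ** 2)

grad_f = jax.grad(f)        # first-order gradient of f w.r.t. its input
x = jnp.array([0.0, 1.0, 2.0])
g = grad_f(x)               # equals sin(2x), since d/dx sin^2(x) = sin(2x)
H = jax.hessian(f)(x)       # higher-order derivatives compose the same way
```

Note how the function is written once, in NumPy-like style, and the derivatives come from program transformations rather than hand-coded formulas.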
For our purpose, JAX’s efficiency in calculating gradients via automatic differentiation is valuable for training the neural networks embedded in the CFD solver: efficiently computed gradients are a fundamental requirement for optimizing the networks’ parameters during training.
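Concretely, this means differentiating a loss through the solver’s unrolled time steps. The sketch below (illustrative names and a toy diffusion-like update, not the authors’ code) shows JAX propagating a gradient through several chained solver steps to a learnable parameter:

```python
import jax
import jax.numpy as jnp

def step(u, theta):
    # Toy explicit update with a learnable coefficient theta (diffusion-like).
    return u + theta * (jnp.roll(u, -1) - 2 * u + jnp.roll(u, 1))

def loss(theta, u0, target, n_steps=5):
    # Unroll the solver and compare the final state with a target field.
    u = u0
    for _ in range(n_steps):
        u = step(u, theta)
    return jnp.mean((u - target) ** 2)

u0 = jnp.sin(jnp.linspace(0.0, 2.0 * jnp.pi, 32, endpoint=False))
grad_theta = jax.grad(loss)(0.1, u0, jnp.zeros(32))  # gradient through all 5 steps
```

Training the real solver works the same way in principle, with the CNN weights in place of the single scalar `theta` and a fine-grid reference simulation in place of the toy target.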
Conclusions
To sum up, this method broadens the Pareto frontier of accuracy versus computational cost, i.e., the collection of all Pareto-efficient simulations in CFD. Users of ML-accelerated CFD may either boost accuracy without incurring additional cost, or complete costly simulations considerably faster. In the context of numerical weather prediction, extending the period of accurate forecasts from 4 to 7 time units would correspond to roughly 30 years of progress. These improvements are made possible by the combination of modern deep-learning models, which enable accurate simulation with far more compact representations, and modern accelerator hardware, which permits evaluating such models at a remarkably small increase in computational cost. Both technologies are still advancing rapidly, and both trends should persist for the foreseeable future.