Academic Paper

Understanding Latent Timescales in Neural Ordinary Differential Equation Models for Advection-Dominated Dynamical Systems
Document Type
Working Paper
Subject
Physics - Fluid Dynamics
Abstract
The neural ordinary differential equation (ODE) framework has shown promise for developing accelerated surrogate models of complex systems described by partial differential equations (PDEs). For PDE-based systems, neural ODE strategies achieve acceleration in two steps: a nonlinear dimensionality reduction via an autoencoder, followed by time integration in the latent space through a neural-network-based model (the neural ODE). This study assesses the effectiveness of autoencoder-based neural ODE strategies for advection-dominated PDEs. Beyond predictive demonstrations, it investigates the sources of model acceleration, focusing on how the neural ODE achieves it. The impact of the autoencoder and neural ODE components on system time-scales is quantified through eigenvalue analysis of the dynamical-system Jacobians. The study also examines how training parameters, such as the training method, latent space dimensionality, and training trajectory length, affect model accuracy and latent time-scales. In particular, it highlights the influence of training trajectory length on the neural ODE time-scales: longer training trajectories increase the limiting time-scales, with optimal neural ODEs capturing the largest time-scales of the true system. Demonstrations are conducted on two distinct unsteady, advection-dominated fluid dynamics settings: the Kuramoto-Sivashinsky equation and hydrogen-air channel detonations modeled with the compressible reacting Navier-Stokes equations.
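The eigenvalue analysis described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: a randomly initialized single-hidden-layer MLP stands in for a trained neural ODE right-hand side f(z) governing dz/dt = f(z), the latent dimension and layer widths are arbitrary choices, and the Jacobian is obtained by central finite differences. The latent time-scales are then estimated as the reciprocals of the magnitudes of the real parts of the Jacobian eigenvalues.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical latent dimension and hidden width (illustrative choices).
n_latent, n_hidden = 8, 32

# Randomly initialized MLP weights standing in for a *trained* neural ODE.
W1 = rng.standard_normal((n_hidden, n_latent)) / np.sqrt(n_latent)
b1 = np.zeros(n_hidden)
W2 = rng.standard_normal((n_latent, n_hidden)) / np.sqrt(n_hidden)
b2 = np.zeros(n_latent)

def f(z):
    """Latent dynamics dz/dt = f(z): one-hidden-layer tanh MLP."""
    return W2 @ np.tanh(W1 @ z + b1) + b2

def jacobian(fun, z, eps=1e-6):
    """Central finite-difference Jacobian of fun at z."""
    n = z.size
    J = np.zeros((n, n))
    for j in range(n):
        dz = np.zeros(n)
        dz[j] = eps
        J[:, j] = (fun(z + dz) - fun(z - dz)) / (2 * eps)
    return J

# Evaluate the Jacobian at an arbitrary latent state and extract time-scales
# tau_i = 1 / |Re(lambda_i)| from its eigenvalues.
z0 = rng.standard_normal(n_latent)
J = jacobian(f, z0)
eigvals = np.linalg.eigvals(J)
timescales = 1.0 / np.abs(eigvals.real)
print("limiting (largest) latent time-scale:", timescales.max())
```

In practice the Jacobian would be evaluated along predicted latent trajectories (e.g. via automatic differentiation in the training framework), and the spread of the resulting time-scales compared against those of the full-order system.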