Academic Article

Understanding Latent Timescales in Neural Ordinary Differential Equation Models for Advection-Dominated Dynamical Systems
Document Type
Working Paper
Subject
Physics - Fluid Dynamics
Abstract
The neural ordinary differential equation (ODE) framework has emerged as a powerful tool for developing accelerated surrogate models of complex physical systems governed by partial differential equations (PDEs). A popular approach for PDE systems employs a two-step strategy: nonlinear dimensionality reduction using an autoencoder, followed by time integration in the latent space using a neural ODE. This study examines the applicability of such autoencoder-based neural ODE architectures to systems where advection dominates the dynamics. In addition to predictive performance, this work investigates the mechanisms behind model acceleration by analyzing how the autoencoder and neural ODE components influence latent system timescales. These effects are quantified through eigenvalue analysis of dynamical system Jacobians. Specifically, the study evaluates the sensitivity of model accuracy and discovered latent timescales to key training choices: decoupled versus end-to-end training, latent space dimensionality, and training trajectory length. A central finding is the crucial role of training trajectory length (i.e., the number of rollout steps included in the loss function), which directly impacts the recovered latent timescales. Longer trajectories lead to larger limiting timescales in the latent system, and the most accurate models are shown to capture the largest timescales of the original system. These insights are demonstrated across a diverse set of unsteady, advection-driven fluid dynamics problems: (1) the Kuramoto-Sivashinsky equation, (2) hydrogen-air channel detonations governed by the compressible reacting Navier-Stokes equations, and (3) 2D atmospheric flows.
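
The two-step architecture and the timescale analysis described in the abstract can be made concrete with a short sketch. The code below is not the authors' implementation; it is a minimal PyTorch illustration under assumed state and latent dimensions, with hypothetical layer sizes and names (rk4_step, rollout_loss, latent_timescales). It encodes the full state into a latent vector, rolls the latent ODE forward for n_rollout steps (the training trajectory length emphasized above), and estimates local latent timescales as tau_i = 1/|Re(lambda_i)| from the eigenvalues lambda_i of the latent Jacobian, a common convention in linear stability analysis.

```python
# Minimal sketch (illustrative, not the paper's code) of an
# autoencoder + neural ODE surrogate with latent timescale analysis.
import torch
import torch.nn as nn

STATE_DIM, LATENT_DIM = 512, 16   # assumed sizes of full and latent states

encoder = nn.Sequential(nn.Linear(STATE_DIM, 128), nn.Tanh(),
                        nn.Linear(128, LATENT_DIM))
decoder = nn.Sequential(nn.Linear(LATENT_DIM, 128), nn.Tanh(),
                        nn.Linear(128, STATE_DIM))
latent_rhs = nn.Sequential(nn.Linear(LATENT_DIM, 64), nn.Tanh(),
                           nn.Linear(64, LATENT_DIM))   # dz/dt = f(z)

def rk4_step(f, z, dt):
    """One explicit RK4 step for the autonomous latent ODE dz/dt = f(z)."""
    k1 = f(z)
    k2 = f(z + 0.5 * dt * k1)
    k3 = f(z + 0.5 * dt * k2)
    k4 = f(z + dt * k3)
    return z + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

def rollout_loss(u_traj, dt, n_rollout):
    """Multi-step loss over n_rollout steps: the 'training trajectory
    length' whose effect on latent timescales the abstract highlights.
    u_traj is an assumed tensor of shape (n_rollout + 1, STATE_DIM)."""
    z = encoder(u_traj[0])
    loss = 0.0
    for k in range(1, n_rollout + 1):
        z = rk4_step(latent_rhs, z, dt)
        loss = loss + torch.mean((decoder(z) - u_traj[k]) ** 2)
    return loss / n_rollout

def latent_timescales(z):
    """Eigenvalue analysis of the latent Jacobian df/dz at state z:
    each eigenvalue lambda_i implies a local timescale
    tau_i = 1 / |Re(lambda_i)|."""
    J = torch.autograd.functional.jacobian(latent_rhs, z)
    eigvals = torch.linalg.eigvals(J)
    return 1.0 / eigvals.real.abs().clamp_min(1e-12)

# Example: local timescales at a random latent state.
tau = latent_timescales(torch.randn(LATENT_DIM))
print(tau.sort().values)   # largest entries are the slowest latent timescales
```

In this framing, increasing n_rollout in rollout_loss is the training knob the abstract identifies: longer rollouts in the loss push the limiting latent timescales toward the largest timescales of the original system.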