Computational Mathematics and Scientific Computing Seminar

Operator Learning: From Theory to Practice

Speaker: Nikola Kovachki, NVIDIA and California Institute of Technology

Location: Warren Weaver Hall 1302

Date: Friday, February 2, 2024, 10 a.m.

Synopsis:

We present a general framework for approximating non-linear maps between infinite-dimensional Banach spaces from observations.

Our approach follows the "discretize last" philosophy: approximation architectures are designed directly on the function spaces of interest, without tying parameters to any finite-dimensional discretization. Such architectures exhibit an approximation error that is independent of the discretization of the training data and can make use of data sources with diverse discretizations, as is common in many engineering problems. We review the infinite-dimensional approximation theory for such architectures, establishing the universal approximation property and exhibiting the curse of dimensionality, which turns algebraic approximation rates in finite dimensions into exponential rates in infinite dimensions. We discuss efficient approximation of certain operators arising from parametric partial differential equations (PDEs) and show that efficient parametric approximation implies efficient data approximation. We demonstrate the utility of our framework numerically on a variety of large-scale problems arising in fluid dynamics, porous media flow, weather modeling, and crystal plasticity. Our results show that, at a fixed accuracy, data-driven methods can provide orders-of-magnitude computational speed-ups over classical numerical methods and hold immense promise for modeling complex physical phenomena across multiple scales.
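
As a concrete illustration of the "discretize last" idea (a minimal sketch, not the speaker's implementation), the snippet below shows a single Fourier-type spectral layer in NumPy in the spirit of neural operator architectures: its parameters live on a fixed number of Fourier modes rather than on a grid, so the same learned weights can be applied to an input function sampled at any uniform resolution. The class name, the choice of one-dimensional scalar functions, and the random "learned" weights are illustrative assumptions.

    import numpy as np

    class SpectralLayer1D:
        """Minimal 1D spectral layer: its parameters are complex weights on a
        fixed number of low-frequency Fourier modes, independent of grid size."""

        def __init__(self, n_modes=16, seed=0):
            rng = np.random.default_rng(seed)
            # "Learned" weights acting on the first n_modes Fourier coefficients
            # (random here purely for illustration).
            self.n_modes = n_modes
            self.weights = rng.standard_normal(n_modes) + 1j * rng.standard_normal(n_modes)

        def __call__(self, u):
            """Apply the layer to a function u sampled on a uniform grid of any size."""
            n = u.shape[-1]
            # norm="forward" makes the coefficients approximate the continuous Fourier
            # coefficients, so the output is consistent across resolutions.
            u_hat = np.fft.rfft(u, norm="forward")
            k = min(self.n_modes, u_hat.shape[-1])
            out_hat = np.zeros_like(u_hat)
            out_hat[..., :k] = self.weights[:k] * u_hat[..., :k]  # act on low modes only
            return np.fft.irfft(out_hat, n=n, norm="forward")

    # The same layer (same parameters) evaluated at two different resolutions.
    layer = SpectralLayer1D(n_modes=16)
    x_coarse = np.linspace(0.0, 1.0, 128, endpoint=False)
    x_fine = np.linspace(0.0, 1.0, 1024, endpoint=False)
    v_coarse = layer(np.sin(2 * np.pi * x_coarse))
    v_fine = layer(np.sin(2 * np.pi * x_fine))
    print(v_coarse.shape, v_fine.shape)  # (128,) (1024,)

In a full operator-learning architecture, layers of this kind are typically composed with pointwise linear maps and nonlinearities; the point of the sketch is only that the parameter count and the learned weights do not depend on the resolution of the data.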