A Mathematical Perspective of Machine Learning
Speaker: Weinan E, Princeton University
Location: Warren Weaver Hall 1302
Date: Monday, February 24, 2020, 3:45 p.m.
The heart of modern machine learning is the approximation of high-dimensional functions. Traditional approaches, such as approximation by piecewise polynomials, wavelets, or other linear combinations of fixed basis functions, suffer from the curse of dimensionality. We will discuss representations and approximations that overcome this difficulty, as well as gradient flows that can be used to find the optimal approximation. We will see that, at the continuous level, machine learning consists of a series of reasonably nice variational and PDE-like problems. Modern machine learning models/algorithms, such as the random feature and neural network models, are all special discretizations of these continuous problems. We will also discuss how to construct new models/algorithms using the same philosophy. Finally, we will discuss the fundamental reasons for the success of modern machine learning, as well as the subtleties and mysteries that remain to be understood.
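To make the abstract's framing concrete, here is a minimal sketch (my own illustration, not code from the talk) of the random feature model it mentions: the inner parameters are sampled once and frozen, and only the outer coefficients are trained by gradient descent, a simple discretization of the gradient flow on a least-squares loss. All names and parameter choices below are illustrative assumptions.

```python
import numpy as np

# Random-feature model f(x) = sum_j a_j * relu(w_j . x + b_j), with the
# inner parameters (w_j, b_j) sampled once and frozen; only the outer
# coefficients a_j are trained. (Illustrative sketch, not the speaker's code.)

rng = np.random.default_rng(0)
d, m, n = 5, 200, 400              # input dimension, features, samples

# A smooth target function of a 5-dimensional input.
X = rng.standard_normal((n, d))
y = np.sin(X.sum(axis=1))

# Fixed random features, normalized by sqrt(m).
W = rng.standard_normal((m, d))
b = rng.standard_normal(m)
phi = np.maximum(X @ W.T + b, 0.0) / np.sqrt(m)   # (n, m) feature matrix

# Gradient descent (a discretized gradient flow) on the coefficients a,
# with the step size set from the largest eigenvalue of the Gram matrix
# so that the loss decreases monotonically.
L = np.linalg.eigvalsh(phi.T @ phi / n)[-1]
a = np.zeros(m)
for _ in range(2000):
    resid = phi @ a - y
    a -= (1.0 / L) * (phi.T @ resid) / n

print(f"final training MSE: {np.mean((phi @ a - y) ** 2):.4f}")
```

Because the features are fixed, the loss is convex in the coefficients a, which is one reason random feature models are a tractable special case of the continuous formulation the talk describes.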