Mathematics Colloquium

How do simple rotations affect the implicit bias of Adam?

Speaker: Rebecca Willett, University of Chicago

Location: Warren Weaver Hall 1302

Date: Monday, October 26, 2026, 3:45 p.m.

Synopsis:

Adaptive gradient methods such as Adam and Adagrad are widely used in machine learning, yet their effect on the generalization of learned models – relative to methods like gradient descent – remains poorly understood. Prior work on binary classification suggests that Adam exhibits a “richness bias,” which can help it learn nonlinear decision boundaries that lie closer to the Bayes-optimal decision boundary than those learned by gradient descent. However, the coordinate-wise preconditioning scheme employed by Adam renders the overall method sensitive to orthogonal transformations of feature space. We show that this sensitivity can manifest as a reversal of Adam’s competitive advantage: even small rotations of the underlying data distribution can make Adam forfeit its richness bias and converge to a linear decision boundary that is farther from the Bayes-optimal decision boundary than the one learned by gradient descent. To alleviate this issue, we show that a recently proposed reparameterization method – which applies an orthogonal transformation to the optimization objective – endows any first-order method with equivariance to data rotations, and we empirically demonstrate its ability to restore Adam’s bias towards rich decision boundaries. This is joint work with Adela DePavia and Vasileios Charisopoulos.
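
The following minimal NumPy sketch illustrates the rotation sensitivity mentioned in the synopsis; it is not the speakers' code, and the toy quadratic objective, random rotation Q, hand-rolled Adam update, and all hyperparameters are illustrative assumptions. It runs gradient descent and Adam on an objective f(w) and on its rotated counterpart f(Q^T v): gradient descent's iterates commute with the rotation, while Adam's coordinate-wise preconditioning does not.

import numpy as np

rng = np.random.default_rng(0)
d = 5
B = rng.standard_normal((d, d))
A = B @ B.T / d + np.eye(d)                        # well-conditioned quadratic f(w) = 0.5 w^T A w
Q, _ = np.linalg.qr(rng.standard_normal((d, d)))   # random orthogonal "rotation" of feature space

grad = lambda w: A @ w                             # gradient of f(w)
grad_rot = lambda v: Q @ (A @ (Q.T @ v))           # gradient of the rotated objective f(Q^T v)

def gd(g, w0, lr=0.1, steps=15):
    w = w0.copy()
    for _ in range(steps):
        w = w - lr * g(w)
    return w

def adam(g, w0, lr=0.1, b1=0.9, b2=0.999, eps=1e-8, steps=15):
    w = w0.copy()
    m = np.zeros_like(w0)
    v = np.zeros_like(w0)
    for t in range(1, steps + 1):
        gt = g(w)
        m = b1 * m + (1 - b1) * gt
        v = b2 * v + (1 - b2) * gt ** 2            # coordinate-wise second moment
        mhat = m / (1 - b1 ** t)
        vhat = v / (1 - b2 ** t)
        w = w - lr * mhat / (np.sqrt(vhat) + eps)  # coordinate-wise preconditioned step
    return w

w0 = rng.standard_normal(d)
for name, opt in [("gradient descent", gd), ("Adam", adam)]:
    plain = opt(grad, w0)                          # optimize f starting from w0
    rotated = opt(grad_rot, Q @ w0)                # optimize the rotated objective from Q w0
    print(f"{name}: equivariance gap = {np.linalg.norm(rotated - Q @ plain):.2e}")
# Expected: the gap is at floating-point level for gradient descent, whose
# iterates commute with Q, but clearly nonzero for Adam, whose per-coordinate
# second-moment normalization depends on the choice of coordinate axes.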