Title: Classification vs. Adversarial Examples for the Overparameterized Linear Model
(Joint work with Vidya Muthukumar and Anant Sahai at UC Berkeley)
Abstract: Modern machine learning routinely relies on overparameterized models. Empirically, such models often generalize well and display double descent, yet remain susceptible to adversarial perturbations. Past theoretical explanations of these phenomena usually focus on linear models in which the adversary can perturb the features directly. However, the field of meta-learning has revealed that neural networks can be interpreted as first learning a feature representation and then learning the best linear model on these learned features. The role of this lifting in the adversarial susceptibility of models remains largely unaddressed, primarily because finding an adversarial example for a lifted model is a nonconvex and difficult optimization problem.
In this talk, I will use concepts from signal processing to propose a toy model that exhibits all of the aforementioned phenomena, most crucially lifting. The model's toy nature allows us to overcome the challenge of solving the adversarial-search problem. We learn that adversarial vulnerability arises from a phenomenon we term spatial localization: the predictions of the learned model are markedly more sensitive in the vicinity of training points than elsewhere. Despite this adversarial susceptibility, we find that classification using spatially localized features can be "easier", i.e., less sensitive to the strength of the prior, than in independent-feature setups.
Bio: Adhyyan Narang (ECE) is a first-year PhD student, advised by Maryam Fazel and Lillian Ratliff. He is interested in fundamental theoretical questions about learning from data, and works broadly at the intersection of machine learning, optimization, and game theory.