BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//IFDS - ECPv6.0.1.1//NONSGML v1.0//EN
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-WR-CALNAME:IFDS
X-ORIGINAL-URL:https://ifds.info
X-WR-CALDESC:Events for IFDS
REFRESH-INTERVAL;VALUE=DURATION:PT1H
X-Robots-Tag:noindex
X-PUBLISHED-TTL:PT1H
BEGIN:VTIMEZONE
TZID:America/Los_Angeles
BEGIN:DAYLIGHT
TZOFFSETFROM:-0800
TZOFFSETTO:-0700
TZNAME:PDT
DTSTART:20210314T020000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0700
TZOFFSETTO:-0800
TZNAME:PST
DTSTART:20211107T020000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTART;TZID=America/Los_Angeles:20210416T133000
DTEND;TZID=America/Los_Angeles:20210416T143000
DTSTAMP:20260425T093033Z
CREATED:20210412T182350Z
LAST-MODIFIED:20210412T182350Z
UID:1138-1618579800-1618583400@ifds.info
SUMMARY:ML-Opt: Adhyyan Narang
DESCRIPTION:Title: Classification vs. Adversarial Examples for the Overparameterized Linear Model. \n(Joint work with Vidya Muthukumar and Anant Sahai at UC Berkeley) \nAbstract: In modern machine learning\, overparameterized models are often used. It has been observed empirically that such models often generalize well and display double descent\, yet are susceptible to adversarial perturbations. Past theoretical explanations of these phenomena usually focus on linear models in which the adversary can perturb the features directly. However\, the field of meta-learning has revealed that neural networks can be interpreted as first learning a feature representation and then learning the best linear model on these learned features. The role of this lifting in the adversarial susceptibility of models is largely unaddressed\, primarily because the problem of finding an adversarial example for lifted models is nonconvex and difficult to solve. \nIn this talk\, I will use concepts from signal processing to propose a toy model that exhibits all of the aforementioned phenomena\, most crucially lifting. The toy nature of the model allows us to overcome the challenge of solving the adversarial-search problem. We learn that the adversarial vulnerability arises from a phenomenon we term spatial localization: the predictions of the learned model are markedly more sensitive in the vicinity of training points than elsewhere. Despite this adversarial susceptibility\, we find that classification using spatially localized features can be “easier”\, i.e.\, less sensitive to the strength of the prior\, than in independent-feature setups. \nBio: Adhyyan Narang (ECE) is a first-year PhD student advised by Maryam Fazel and Lilian Ratliff. He is interested in fundamental theoretical questions about learning from data and works broadly at the intersection of machine learning\, optimization\, and game theory.
URL:https://ifds.info/event/mlopt-adhyyan-narang/
CATEGORIES:MLOpt@UWash
END:VEVENT
END:VCALENDAR