BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//IFDS - ECPv6.0.1.1//NONSGML v1.0//EN
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-WR-CALNAME:IFDS
X-ORIGINAL-URL:https://ifds.info
X-WR-CALDESC:Events for IFDS
REFRESH-INTERVAL;VALUE=DURATION:PT1H
X-Robots-Tag:noindex
X-PUBLISHED-TTL:PT1H
BEGIN:VTIMEZONE
TZID:America/Los_Angeles
BEGIN:DAYLIGHT
TZOFFSETFROM:-0800
TZOFFSETTO:-0700
TZNAME:PDT
DTSTART:20210314T020000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0700
TZOFFSETTO:-0800
TZNAME:PST
DTSTART:20211107T020000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTART;TZID=America/Los_Angeles:20210409T133000
DTEND;TZID=America/Los_Angeles:20210409T143000
DTSTAMP:20260425T080805Z
CREATED:20210412T183054Z
LAST-MODIFIED:20210412T183353Z
UID:1140-1617975000-1617978600@ifds.info
SUMMARY:ML-Opt: Krishna Pillutla
DESCRIPTION:Title: Distributionally Robust Machine Learning with the Superquantile: 1) For Supervised Learning\, 2) For Federated Learning\n\nAbstract: I will talk about distributionally robust machine learning\, a principled approach for robust performance across subpopulations and shifting distributions. We will focus on the superquantile\, a.k.a. the Conditional Value at Risk (CVaR)\, which was popularized by the seminal work of UW’s own R. T. Rockafellar and co-authors in the field of computational finance and economics in the early 2000s.\nWe will first review the use of the superquantile for distributionally robust supervised learning. We will prove a generalization bound from first principles.\nSecond\, we will discuss an application of the superquantile in the field of federated learning\, i.e.\, the distributed training of machine learning models on mobile phones. We will quantify the extent to which a user conforms to the population distribution and show how the superquantile can be leveraged to improve performance on users who do not conform to the population. We will round off the discussion with a communication-efficient training algorithm and experimental results on heterogeneous datasets.\n\nBio: Krishna Pillutla is a 5th-year Ph.D. student at the Paul G. Allen School of Computer Science and Engineering at the University of Washington\, where he is advised by Zaid Harchaoui and Sham Kakade. Krishna is broadly interested in machine learning and optimization and works in the particular areas of structured prediction and federated learning. Krishna was a 2019-20 JP Morgan Ph.D. Fellow.
URL:https://ifds.info/event/ml-opt-krishna-pillutla/
CATEGORIES:MLOpt@UWash
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/Los_Angeles:20210416T133000
DTEND;TZID=America/Los_Angeles:20210416T143000
DTSTAMP:20260425T080805Z
CREATED:20210412T182350Z
LAST-MODIFIED:20210412T182350Z
UID:1138-1618579800-1618583400@ifds.info
SUMMARY:ML-Opt: Adhyyan Narang
DESCRIPTION:Title: Classification vs. Adversarial Examples for the Overparameterized Linear Model. \n(Joint work with Vidya Muthukumar and Anant Sahai at UC Berkeley) \nAbstract: In modern machine learning\, overparameterized models are often used. It has been empirically observed that these models often generalize well and display double descent\, but are susceptible to adversarial perturbations. Past theoretical explanations of these phenomena usually focus on linear models where the adversary has the power to perturb the features directly. However\, the field of meta-learning has revealed that neural networks can be interpreted as first learning a feature representation and then learning the best linear model on these learned features. The role of lifting in the adversarial susceptibility of models is largely unaddressed\, primarily because the problem of finding an adversarial example for lifted models is nonconvex and difficult to solve. \nIn this talk\, I will use concepts from signal processing to propose a toy model that exhibits all of the aforementioned phenomena\, most crucially lifting. The toy nature of the model allows us to overcome the challenge of solving the adversarial-search problem. We learn that the adversarial vulnerability arises because of a phenomenon we term spatial localization: the predictions of the learned model are markedly more sensitive in the vicinity of training points than elsewhere. Despite the adversarial susceptibility\, we find that classification using spatially localized features can be “easier”\, i.e.\, less sensitive to the strength of the prior than in independent-feature setups. \nBio: Adhyyan Narang (ECE) is a first-year PhD student\, advised by Maryam Fazel and Lilian Ratliff. He is interested in fundamental theoretical questions about learning from data and works broadly at the intersection of machine learning\, optimization\, and game theory.
URL:https://ifds.info/event/mlopt-adhyyan-narang/
CATEGORIES:MLOpt@UWash
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/Los_Angeles:20210423T133000
DTEND;TZID=America/Los_Angeles:20210423T143000
DTSTAMP:20260425T080805Z
CREATED:20210422T180907Z
LAST-MODIFIED:20210422T181817Z
UID:1146-1619184600-1619188200@ifds.info
SUMMARY:ML-Opt@UW: Yue Sun
DESCRIPTION:Title: Subspace-Based Meta-Learning\nAbstract:\nMeta-learning typically involves two phases. First\, one learns a suitable representation from the previously seen tasks. Second\, this representation is used to learn a new task using only a few samples (i.e.\, few-shot learning). In this talk I will discuss:\n1. Linear meta-learning: sample complexity of representation learning with general covariance\n2. Linear meta-learning: algorithm & analysis for overparameterized few-shot learning\n3. Generalization to nonlinear meta-learning\n\nBio:\nYue Sun is a 5th-year PhD student at the University of Washington\, Seattle. He is interested in the theoretical understanding of optimization\, machine learning\, and control. His research includes:\n1. Nonconvex optimization on Riemannian manifolds (UW)\n2. Low-order linear system identification (UW)\n3. Subspace-based meta-learning (UW)\n4. Nonconvex optimization applied to optimal control (UW)\n5. Online optimization for video coding (Google\, 2019)\n6. Compressive sensing and phase retrieval (Ohio State U\, 2015; Nokia Bell Labs\, 2021)
URL:https://ifds.info/event/mloptuw/
CATEGORIES:MLOpt@UWash
END:VEVENT
END:VCALENDAR