BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//IFDS - ECPv6.0.1.1//NONSGML v1.0//EN
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-ORIGINAL-URL:https://ifds.info
X-WR-CALDESC:Events for IFDS
REFRESH-INTERVAL;VALUE=DURATION:PT1H
X-Robots-Tag:noindex
X-PUBLISHED-TTL:PT1H
BEGIN:VTIMEZONE
TZID:America/Los_Angeles
BEGIN:DAYLIGHT
TZOFFSETFROM:-0800
TZOFFSETTO:-0700
TZNAME:PDT
DTSTART:20210314T020000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0700
TZOFFSETTO:-0800
TZNAME:PST
DTSTART:20211107T020000
END:STANDARD
END:VTIMEZONE
BEGIN:VTIMEZONE
TZID:America/Chicago
BEGIN:DAYLIGHT
TZOFFSETFROM:-0600
TZOFFSETTO:-0500
TZNAME:CDT
DTSTART:20210314T020000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0500
TZOFFSETTO:-0600
TZNAME:CST
DTSTART:20211107T020000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTART;TZID=America/Los_Angeles:20210402T133000
DTEND;TZID=America/Los_Angeles:20210402T143000
DTSTAMP:20260516T075806Z
CREATED:20210115T201436Z
LAST-MODIFIED:20210331T134630Z
UID:880-1617370200-1617373800@ifds.info
SUMMARY:IFDS All-Hands: Chaobing Song & Xuezhou Zhang
DESCRIPTION:Speaker 1: Dr. Chaobing Song\, IFDS Postdoctoral Scholar at the University of Wisconsin-Madison (advised by Profs. Jelena Diakonikolas and Steve Wright)\n\nTitle: Closing Convergence Gaps for Both Smooth and Nonsmooth Convex Finite-Sums\n\nAbstract: In the foundations of data science\, optimization problems with a finite-sum structure are ubiquitous\, such as the classical empirical risk minimization problem and deep neural network training. In the past decade\, a main concern in the study of first-order methods has been how the finite-sum structure influences optimization efficiency and scalability. In this talk\, based on my work (https://arxiv.org/abs/2006.10281\, https://arxiv.org/abs/2102.13643)\, I will present a consistent approach\, based on dual averaging and a specially designed initialization strategy\, that closes convergence gaps for both smooth and nonsmooth convex finite-sums. For the first time\, the proposed VRADA algorithm (https://arxiv.org/abs/2006.10281) matches the lower bounds in all three regimes for smooth convex finite-sums. Also for the first time\, the proposed VRPDA^2 algorithm (https://arxiv.org/abs/2102.13643) achieves a theoretical O(n) improvement over existing deterministic methods and stochastic primal-dual coordinate methods\, where n is the number of data samples. Both algorithms also show good empirical performance.\n\nSpeaker 2: Xuezhou Zhang\, IFDS RA (advised by Prof. Jerry Zhu)\n\nTitle: Statistical Robustness in Reinforcement Learning\n\nAbstract: Traditional robust statistics studies the problem of statistical estimation under data contamination. In this talk\, we extend this investigation to decision-making problems. In the first half of the talk\, I will illustrate the concept and unique challenges of robust decision-making using a multi-armed bandit as an example. In the second half\, I will present some of the lower- and upper-bound results we have established and discuss several open directions. This talk is partially based on our recent paper [https://arxiv.org/pdf/2102.05800.pdf].\n\n----\n\nSpeaker Bios:\n\nDr. Chaobing Song: I am a postdoctoral researcher at the University of Wisconsin-Madison working with Prof. Jelena Diakonikolas and Prof. Stephen J. Wright. My research interests include optimization and machine learning. I obtained my Ph.D. at Tsinghua University in 2020\, supervised by Prof. Yi Ma of the University of California\, Berkeley (an affiliate professor at the Tsinghua-Berkeley Shenzhen Institute\, Tsinghua University). I spent two wonderful years at Berkeley and finished my Ph.D. thesis there. The main focus of my current research is to apply a general\, powerful\, and concise framework [arXiv] to optimization problems in machine learning. Development based on this framework substantially improves complexity results in many well-known settings\, such as the nonsmooth convex finite-sum setting in this talk.\n\nXuezhou Zhang: Xuezhou is a Ph.D. candidate in the Computer Sciences Department at the University of Wisconsin-Madison\, advised by Professor Jerry Zhu. Before coming to Madison\, he obtained a Bachelor of Applied Mathematics degree from UCLA. His research interests lie in machine learning and reinforcement learning\, and his recent research focuses on designing reinforcement learning algorithms that learn more adaptively and robustly in non-stochastic environments.\n\nThese talks are remote via Zoom. Please contact the organizer if you need the link.\nAll-Hands titles and abstracts are tentative as of the posting date.
URL:https://ifds.info/event/ifds-all-hands-040221/
LOCATION:WI
CATEGORIES:Monthly All-Hands
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/Chicago:20210405T133000
DTEND;TZID=America/Chicago:20210405T143000
DTSTAMP:20260516T075806Z
CREATED:20210202T203119Z
LAST-MODIFIED:20210211T154049Z
UID:1037-1617629400-1617633000@ifds.info
SUMMARY:IFDS Ideas Forum: Liu Yang
DESCRIPTION:Person: Liu Yang \nTitle: \nAbstract: \nBio:
URL:https://ifds.info/event/ifds-ideas-forum-04052021/
LOCATION:WI
CATEGORIES:IFDS Ideas Forum
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/Chicago:20210407T123000
DTEND;TZID=America/Chicago:20210407T133000
DTSTAMP:20260516T075806Z
CREATED:20210202T193325Z
LAST-MODIFIED:20210412T181944Z
UID:998-1617798600-1617802200@ifds.info
SUMMARY:SILO: Rebecca Willett
DESCRIPTION:
URL:https://ifds.info/event/silo-anna-choromanska/
LOCATION:WI
CATEGORIES:SILO
ORGANIZER;CN="Rob Nowak":MAILTO:rdnowak@wisc.edu
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/Los_Angeles:20210409T133000
DTEND;TZID=America/Los_Angeles:20210409T143000
DTSTAMP:20260516T075806Z
CREATED:20210412T183054Z
LAST-MODIFIED:20210412T183353Z
UID:1140-1617975000-1617978600@ifds.info
SUMMARY:ML-Opt: Krishna Pillutla
DESCRIPTION:Title: Distributionally Robust Machine Learning with the Superquantile: 1) For Supervised Learning\, 2) For Federated Learning\n\nAbstract: I will talk about distributionally robust machine learning\, a principled approach for robust performance across subpopulations and shifting distributions. We will focus on the superquantile\, a.k.a. the Conditional Value at Risk (CVaR)\, which was popularized by the seminal work of UW’s own R. T. Rockafellar and co-authors in the field of computational finance and economics in the early 2000s.\nWe will first review the use of the superquantile for distributionally robust supervised learning and prove a generalization bound from first principles.\nSecond\, we will discuss an application of the superquantile in the field of federated learning\, i.e.\, the distributed training of machine learning models on mobile phones. We will quantify the extent to which a user conforms to the population distribution and show how the superquantile can be leveraged to improve performance on users who do not conform to the population. We will round off the discussion with a communication-efficient training algorithm and experimental results on heterogeneous datasets.\n\nBio: Krishna Pillutla is a 5th-year Ph.D. student at the Paul G. Allen School of Computer Science and Engineering at the University of Washington\, where he is advised by Zaid Harchaoui and Sham Kakade. Krishna is broadly interested in machine learning and optimization and works in the particular areas of structured prediction and federated learning. Krishna was a 2019-20 JP Morgan Ph.D. Fellow.
URL:https://ifds.info/event/ml-opt-krishna-pillutla/
LOCATION:WI
CATEGORIES:MLOpt@UWash
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/Chicago:20210412T133000
DTEND;TZID=America/Chicago:20210412T143000
DTSTAMP:20260516T075806Z
CREATED:20210202T203236Z
LAST-MODIFIED:20210211T154258Z
UID:1039-1618234200-1618237800@ifds.info
SUMMARY:IFDS Ideas Forum: Vivak Patel
DESCRIPTION:Person: Vivak Patel \nTitle: Adaptive Iterative Methods for Linear Systems \nAbstract: \nBio: My research is at the intersection of numerical optimization and statistical estimation with applications to dynamical systems. On the one hand\, I interpret this as using numerical optimization ideas to design statistical estimators that are practically computable\, yet still retain certain desirable statistical properties. On the other hand\, I interpret this as using statistical ideas to design optimization methods for solving minimization problems that involve uncertainty. Beyond these two main areas of work\, I also do some work in machine learning\, numerical linear algebra and load modeling for power systems.
URL:https://ifds.info/event/ifds-ideas-forum-04122021/
LOCATION:WI
CATEGORIES:IFDS Ideas Forum
ORGANIZER;CN="Rob Nowak":MAILTO:rdnowak@wisc.edu
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/Chicago:20210414T123000
DTEND;TZID=America/Chicago:20210414T133000
DTSTAMP:20260516T075806Z
CREATED:20210202T201325Z
LAST-MODIFIED:20210412T181842Z
UID:1019-1618403400-1618407000@ifds.info
SUMMARY:SILO: Merve Bodur
DESCRIPTION:Title: Copositive Duality for Discrete Markets and Games \nAbstract: Models including binary decisions are often formulated as mixed-integer programs (MIPs). Such models are nonconvex and lack strong duality\, which prevents the use of tools such as shadow prices and KKT conditions. For example\, in convex markets\, shadow (dual) prices are associated with market equilibrium\, and for convex games the existence and uniqueness of Nash equilibria can be proven via fixed-point theorems and KKT conditions. Such results are lacking for their nonconvex counterparts. We use copositive programming to formulate discrete problems in applications including nonconvex energy markets and nonconvex games\, leveraging its convexity and strong duality features. We obtain several novel theoretical and numerical results for these applications\, including a new revenue-adequate pricing scheme for energy markets and existence\, uniqueness\, and KKT conditions for the pure-strategy Nash equilibrium in discrete games. We also propose a novel and purely MIP-based cutting-plane algorithm for mixed-integer copositive programs and employ it in our applications. This is joint work with Cheng Guo and Josh A. Taylor. \nUNTIL FURTHER NOTICE: Seminars are virtual. Sign up for the SILO email list at https://groups.google.com/ (search for "silo") to receive the links to each talk.
URL:https://ifds.info/event/silo-rashmi-vimayak/
LOCATION:WI
CATEGORIES:SILO
ORGANIZER;CN="Rob Nowak":MAILTO:rdnowak@wisc.edu
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/Los_Angeles:20210416T133000
DTEND;TZID=America/Los_Angeles:20210416T143000
DTSTAMP:20260516T075806Z
CREATED:20210412T182350Z
LAST-MODIFIED:20210412T182350Z
UID:1138-1618579800-1618583400@ifds.info
SUMMARY:ML-Opt: Adhyyan Narang
DESCRIPTION:Title: Classification vs. Adversarial Examples for the Overparameterized Linear Model \n(Joint work with Vidya Muthukumar and Anant Sahai at UC Berkeley) \nAbstract: In modern machine learning\, overparameterized models are often used. It has been empirically observed that these models often generalize well and display double descent\, but are susceptible to adversarial perturbations. Past theoretical explanations of these phenomena usually focus on linear models where the adversary can perturb the features directly. However\, the field of meta-learning has revealed that neural networks can be interpreted as first learning a feature representation and then learning the best linear model on these learned features. The role of lifting in the adversarial susceptibility of models is largely unaddressed\, primarily because the problem of finding an adversarial example for lifted models is nonconvex and difficult to solve. \nIn this talk\, I will use concepts from signal processing to propose a toy model that exhibits all of the aforementioned phenomena\, most crucially lifting. The toy nature of the model allows us to overcome the challenge of solving the adversarial-search problem. We learn that the adversarial vulnerability arises because of a phenomenon we term spatial localization: the predictions of the learned model are markedly more sensitive in the vicinity of training points than elsewhere. Despite the adversarial susceptibility\, we find that classification using spatially localized features can be “easier”\, i.e.\, less sensitive to the strength of the prior\, than in independent-feature setups. \nBio: Adhyyan Narang (ECE) is a first-year PhD student\, advised by Maryam Fazel and Lillian Ratliff. He is interested in fundamental theoretical questions about learning from data and broadly works at the intersection of machine learning\, optimization\, and game theory.
URL:https://ifds.info/event/mlopt-adhyyan-narang/
LOCATION:WI
CATEGORIES:MLOpt@UWash
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/Chicago:20210419T133000
DTEND;TZID=America/Chicago:20210419T143000
DTSTAMP:20260516T075806Z
CREATED:20210202T203343Z
LAST-MODIFIED:20210316T143205Z
UID:1041-1618839000-1618842600@ifds.info
SUMMARY:IFDS Ideas Forum: Zhiyan Ding
DESCRIPTION:Person: Zhiyan Ding \nTitle: Connection between optimization and sampling \nAbstract: \nBio: I am a third-year PhD student in the Department of Mathematics\, University of Wisconsin-Madison. My advisor is Qin Li.
URL:https://ifds.info/event/ifds-ideas-forum-04192021/
LOCATION:WI
CATEGORIES:IFDS Ideas Forum
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/Chicago:20210421T123000
DTEND;TZID=America/Chicago:20210421T133000
DTSTAMP:20260516T075806Z
CREATED:20210202T201514Z
LAST-MODIFIED:20210219T192936Z
UID:1021-1619008200-1619011800@ifds.info
SUMMARY:SILO: Emmanuel Abbe
DESCRIPTION:Title: TBD \nUNTIL FURTHER NOTICE: Seminars are virtual. Sign up for the SILO email list at https://groups.google.com/ (search for "silo") to receive the links to each talk.
URL:https://ifds.info/event/silo-04212021/
LOCATION:WI
CATEGORIES:SILO
ORGANIZER;CN="Rob Nowak":MAILTO:rdnowak@wisc.edu
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/Los_Angeles:20210423T133000
DTEND;TZID=America/Los_Angeles:20210423T143000
DTSTAMP:20260516T075806Z
CREATED:20210422T180907Z
LAST-MODIFIED:20210422T181817Z
UID:1146-1619184600-1619188200@ifds.info
SUMMARY:ML-Opt@UW: Yue Sun
DESCRIPTION:Title: Subspace-Based Meta-Learning\nAbstract:\nMeta-learning typically involves two phases. First\, one learns a suitable representation from previously seen tasks. Second\, this representation is used to learn a new task from only a few samples (i.e.\, few-shot learning). In this talk I will discuss:\n1. Linear meta-learning: sample complexity of representation learning with general covariance\n2. Linear meta-learning: algorithm and analysis for overparameterized few-shot learning\n3. Generalization to nonlinear meta-learning\n\nBio:\nYue Sun is a 5th-year PhD student at the University of Washington\, Seattle. He is interested in the theoretical understanding of optimization\, machine learning\, and control. His research includes:\n1. Nonconvex optimization on Riemannian manifolds (UW)\n2. Low-order linear system identification (UW)\n3. Subspace-based meta-learning (UW)\n4. Nonconvex optimization applied to optimal control (UW)\n5. Online optimization for video coding (Google\, 2019)\n6. Compressive sensing and phase retrieval (Ohio State U\, 2015\; Nokia Bell Labs\, 2021)
URL:https://ifds.info/event/mloptuw/
LOCATION:WI
CATEGORIES:MLOpt@UWash
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/Chicago:20210426T133000
DTEND;TZID=America/Chicago:20210426T140000
DTSTAMP:20260516T075806Z
CREATED:20210114T204232Z
LAST-MODIFIED:20210115T200951Z
UID:827-1619443800-1619445600@ifds.info
SUMMARY:IFDS Ideas Forum: Shi Chen
DESCRIPTION:Title: \nAbstract: \nShi Chen (Mathematics)\, advised by Qin Li (Mathematics) and Stephen J. Wright (Computer Science)\, works on problems in the interdisciplinary area of applied math and machine learning. He is interested in connecting machine learning to various mathematical physics problems\, including the homogenization of PDEs and inverse problems. His recent work focuses on applying deep learning to PDE inverse problems.
URL:https://ifds.info/event/ifds-ideas-forum-shi-chen/
LOCATION:Webex
CATEGORIES:IFDS Ideas Forum
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/Chicago:20210426T140000
DTEND;TZID=America/Chicago:20210426T143000
DTSTAMP:20260516T075806Z
CREATED:20210114T204543Z
LAST-MODIFIED:20210115T200856Z
UID:829-1619445600-1619447400@ifds.info
SUMMARY:IFDS Ideas Forum: Shashank Rajput
DESCRIPTION:Title: \nAbstract: \nShashank Rajput (Computer Science)\, advised by Dimitris Papailiopoulos (Electrical and Computer Engineering)\, Kangwook Lee (Electrical and Computer Engineering) and Stephen Wright (Computer Science)\, works on problems in distributed machine learning and optimization. He works on developing techniques that are fast and scalable for practical use and have theoretical guarantees. Currently\, he is working on finding better shuffling mechanisms that beat random reshuffling\, as well as developing algorithms for training neural networks by only pruning them.
URL:https://ifds.info/event/ifds-ideas-forum-shashank-rajput/
LOCATION:Webex
CATEGORIES:IFDS Ideas Forum
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/Chicago:20210428T123000
DTEND;TZID=America/Chicago:20210428T133000
DTSTAMP:20260516T075806Z
CREATED:20210202T201908Z
LAST-MODIFIED:20210202T201908Z
UID:1025-1619613000-1619616600@ifds.info
SUMMARY:SILO: Martin Wainwright
DESCRIPTION:Title: TBD \nUNTIL FURTHER NOTICE: Seminars are virtual. Sign up for the SILO email list at https://groups.google.com/ (search for "silo") to receive the links to each talk.
URL:https://ifds.info/event/silo-martin-wainwright/
LOCATION:WI
CATEGORIES:SILO
ORGANIZER;CN="Rob Nowak":MAILTO:rdnowak@wisc.edu
END:VEVENT
END:VCALENDAR