BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//IFDS - ECPv6.0.1.1//NONSGML v1.0//EN
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-ORIGINAL-URL:https://ifds.info
X-WR-CALDESC:Events for IFDS
REFRESH-INTERVAL;VALUE=DURATION:PT1H
X-Robots-Tag:noindex
X-PUBLISHED-TTL:PT1H
BEGIN:VTIMEZONE
TZID:America/Chicago
BEGIN:DAYLIGHT
TZOFFSETFROM:-0600
TZOFFSETTO:-0500
TZNAME:CDT
DTSTART:20240310T020000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0500
TZOFFSETTO:-0600
TZNAME:CST
DTSTART:20241103T020000
END:STANDARD
END:VTIMEZONE
BEGIN:VTIMEZONE
TZID:America/Los_Angeles
BEGIN:DAYLIGHT
TZOFFSETFROM:-0800
TZOFFSETTO:-0700
TZNAME:PDT
DTSTART:20240310T020000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0700
TZOFFSETTO:-0800
TZNAME:PST
DTSTART:20241103T020000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTART;TZID=America/Chicago:20240205T123000
DTEND;TZID=America/Chicago:20240205T133000
DTSTAMP:20260514T200325Z
CREATED:20240318T213252Z
LAST-MODIFIED:20240318T213252Z
UID:2892-1707136200-1707139800@ifds.info
SUMMARY:Towards a new toolbox of optimal statistical primitives
DESCRIPTION:Abstract: Given society’s increasing reliance on data\, its collection and processing into useful information is a technical problem of growing focus\, and perhaps paradoxically\, a critical bottleneck in many data science and machine learning applications. My research focuses on designing algorithms that push the limits of both statistical efficiency and computational efficiency. In particular\, my work tackles the divide between the theory and practice of data science\, which exists even for the most basic statistical problems including mean and (co)variance estimation. Conventional methods such as the sample mean\, while supported by theoretical results under strong assumptions\, are often brittle in the presence of extreme data points. To counter such deficiencies\, practitioners often use ad-hoc and unprincipled “outlier removal” heuristics\, revealing a marked gap between the theory and practice even for these fundamental problems. \nIn this talk\, I will describe my work towards building a new toolbox of optimal statistical primitives\, bridging the theory-practice divide. I will specifically highlight 3 works: A) constructing a statistically-optimal and computationally-efficient 1-dimensional mean estimator\, whose estimation error is optimal even in the leading multiplicative constant\, under bare minimum distributional assumptions\, B) a rather different but also optimal mean estimator for the “very high-dimensional” regime\, and C) a recent result on robustly clustering Gaussian mixtures based on their covariances even in the presence of adversarial data corruption. To conclude the talk\, I will discuss my vision for the new theory and toolbox\, serving as a blueprint for my long-term future research. \nBio: Jasper Lee is a postdoctoral research associate at the University of Wisconsin-Madison\, mentored by Ilias Diakonikolas in the Department of Computer Sciences\, and also affiliated with the Institute for Foundations of Data Science. He completed his PhD at Brown University\, advised by Paul Valiant. \nHis research interests are broadly in the foundations of data science\, aiming to design practical\, data-efficient and computationally-efficient algorithms for a variety of statistical applications. \nHis work is partially supported by a Croucher Fellowship for Postdoctoral Research.
URL:https://ifds.info/event/towards-a-new-toolbox-of-optimal-statistical-primitives/
LOCATION:Orchard View Room\, 330 N. Orchard Street\, 3rd Floor NE\, Madison\, Wisconsin\, 53715\, United States
CATEGORIES:IFDS Ideas Forum
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/Chicago:20240207T123000
DTEND;TZID=America/Chicago:20240207T133000
DTSTAMP:20260514T200325Z
CREATED:20240315T164015Z
LAST-MODIFIED:20240315T172944Z
UID:2829-1707309000-1707312600@ifds.info
SUMMARY:SILO: Universality in High-Dimensional Statistics
DESCRIPTION:Rishabh Dudeja
URL:https://ifds.info/event/silo-universality-in-high-dimensional-statistics/
LOCATION:Orchard View Room\, 330 N. Orchard Street\, 3rd Floor NE\, Madison\, Wisconsin\, 53715\, United States
CATEGORIES:SILO
ATTACH;FMTTYPE=image/png:https://ifds.info/wp-content/uploads/2022/10/SILO-1024x683-1-e1665597390709.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/Los_Angeles:20240209T133000
DTEND;TZID=America/Los_Angeles:20240209T143000
DTSTAMP:20260514T200325Z
CREATED:20240318T212503Z
LAST-MODIFIED:20240318T212503Z
UID:2882-1707485400-1707489000@ifds.info
SUMMARY:Policy Optimization with Compatible Mirror Approximation
DESCRIPTION:Speaker Bio: Zhihan is a fourth-year PhD student in the Paul G. Allen School of Computer Science & Engineering at the University of Washington\, advised by Prof. Maryam Fazel. His research interests are broadly in statistics\, optimization and machine learning. \n\nAbstract: We propose Compatible Mirror Policy Optimization (CoMPO)\, a framework that incorporates general function approximation into policy mirror descent methods. In contrast to the popular approach of using the $L_2$ norm to measure function approximation errors (regardless of the mirror map)\, CoMPO uses the Bregman divergence induced by the specific mirror map for policy projection. This compatibility bridges the gap between theory and practice: not only does it achieve fast linear convergence with general function approximation\, but it also includes several well-known practical methods as special cases\, immediately providing them with strong convergence guarantees.
URL:https://ifds.info/event/policy-optimization-with-compatible-mirror-approximation/
LOCATION:Zoom
CATEGORIES:MLOpt@UWash
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/Chicago:20240212T123000
DTEND;TZID=America/Chicago:20240212T133000
DTSTAMP:20260514T200325Z
CREATED:20240318T213417Z
LAST-MODIFIED:20240318T213417Z
UID:2894-1707741000-1707744600@ifds.info
SUMMARY:Theoretical exploration of foundation model adaptation methods
DESCRIPTION:Speaker: Kangwook Lee
URL:https://ifds.info/event/theoretical-exploration-of-foundation-model-adaption-methods/
LOCATION:Orchard View Room\, 330 N. Orchard Street\, 3rd Floor NE\, Madison\, Wisconsin\, 53715\, United States
CATEGORIES:IFDS Ideas Forum
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/Chicago:20240214T123000
DTEND;TZID=America/Chicago:20240214T133000
DTSTAMP:20260514T200325Z
CREATED:20240315T164015Z
LAST-MODIFIED:20240315T173037Z
UID:2830-1707913800-1707917400@ifds.info
SUMMARY:SILO: Theoretical Exploration of Foundation Model Adaptation Methods
DESCRIPTION:Kangwook Lee
URL:https://ifds.info/event/silo-theoretical-exploration-of-foundation-model-adaptation-methods/
LOCATION:Orchard View Room\, 330 N. Orchard Street\, 3rd Floor NE\, Madison\, Wisconsin\, 53715\, United States
CATEGORIES:SILO
ATTACH;FMTTYPE=image/png:https://ifds.info/wp-content/uploads/2022/10/SILO-1024x683-1-e1665597390709.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/Los_Angeles:20240216T133000
DTEND;TZID=America/Los_Angeles:20240216T143000
DTSTAMP:20260514T200325Z
CREATED:20240318T212625Z
LAST-MODIFIED:20240318T212625Z
UID:2884-1708090200-1708093800@ifds.info
SUMMARY:Offline Multi-task Transfer RL with Representational Penalization
DESCRIPTION:Speaker Bio: Avinandan is a second-year PhD student\, advised by Maryam Fazel and Lillian Ratliff. His interests are in sequential learning and game theory. \n\nAbstract: We study the problem of representational transfer in offline Reinforcement Learning (RL)\, where a learner has access to episodic data from a number of source tasks collected a priori\, and aims to learn a shared representation to be used in finding a good policy for a target task. Unlike in online RL\, where the agent interacts with the environment while learning a policy\, in the offline setting no such interactions are possible in either the source tasks or the target task\; thus multi-task offline RL can suffer from incomplete coverage. We propose an algorithm to compute pointwise uncertainty measures for the learnt representation\, and establish a data-dependent upper bound for the suboptimality of the learnt policy for the target task. Our algorithm leverages the collective exploration done by source tasks to mitigate poor coverage at some points by a few tasks\, thus overcoming the limitation of needing uniformly good coverage for a meaningful transfer by existing offline algorithms. We complement our theoretical results with empirical evaluation on a rich-observation MDP that requires many samples for complete coverage. Our findings illustrate the benefits of penalizing and quantifying the uncertainty in the learnt representation.
URL:https://ifds.info/event/offline-multi-task-transfer-rl-with-representational-penalization/
LOCATION:CSE (Allen) 403
CATEGORIES:MLOpt@UWash
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/Chicago:20240219T123000
DTEND;TZID=America/Chicago:20240219T133000
DTSTAMP:20260514T200325Z
CREATED:20240318T213630Z
LAST-MODIFIED:20240318T213656Z
UID:2897-1708345800-1708349400@ifds.info
SUMMARY:A good score does not lead to a good generative model
DESCRIPTION:Speaker: Sixu Li \nAbstract: Score-based Generative Models (SGMs) are a leading method in generative modeling\, renowned for their ability to generate high-quality samples from complex\, high-dimensional data distributions. The method enjoys empirical success and is supported by rigorous theoretical convergence properties. In particular\, it has been shown that SGMs can generate samples from a distribution that is close to the ground truth if the underlying score function is learned well\, suggesting the success of SGMs as generative models. We provide a counterexample in this paper. Through a sample complexity argument\, we exhibit one specific setting where the score function is learned well. Yet\, SGMs in this setting can only output samples that are Gaussian blurrings of training data points\, mimicking the effects of kernel density estimation. This finding resonates with a series of recent results revealing that SGMs can demonstrate a strong memorization effect and fail to generate new samples. This is joint work with Shi Chen and Qin Li.
URL:https://ifds.info/event/a-good-score-does-not-lead-to-a-good-generative-model/
LOCATION:WID 1145\, 330 N Orchard Street\, Madison\, WI\, 53715\, United States
CATEGORIES:IFDS Ideas Forum
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/Chicago:20240221T123000
DTEND;TZID=America/Chicago:20240221T133000
DTSTAMP:20260514T200325Z
CREATED:20240315T164015Z
LAST-MODIFIED:20240315T173126Z
UID:2831-1708518600-1708522200@ifds.info
SUMMARY:SILO: Foundations of Real-World Reinforcement Learning
DESCRIPTION:Jeongyeol Kwon
URL:https://ifds.info/event/silo-foundations-of-real-world-reinforcement-learning/
LOCATION:Orchard View Room\, 330 N. Orchard Street\, 3rd Floor NE\, Madison\, Wisconsin\, 53715\, United States
CATEGORIES:SILO
ATTACH;FMTTYPE=image/png:https://ifds.info/wp-content/uploads/2022/10/SILO-1024x683-1-e1665597390709.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/Los_Angeles:20240223T133000
DTEND;TZID=America/Los_Angeles:20240223T143000
DTSTAMP:20260514T200325Z
CREATED:20240318T212752Z
LAST-MODIFIED:20240318T212752Z
UID:2886-1708695000-1708698600@ifds.info
SUMMARY:GumbelSpec Sampling for Accelerating LLM Inference
DESCRIPTION:Bio: Tianxiao Shen is a postdoctoral scholar at the University of Washington\, working with Yejin Choi and Zaid Harchaoui. Her research interests lie in natural language processing and machine learning\, in particular developing models and algorithms for efficient\, accurate\, diverse\, flexible and controllable text generation. She received her PhD from MIT\, advised by Regina Barzilay and Tommi Jaakkola. Before that\, she did her undergrad at Tsinghua University. \n\nAbstract: We propose GumbelSpec sampling\, a novel algorithm that leverages smaller language models to accelerate inference of large language models without changing their output distribution. Central to our approach is the application of the Gumbel-Softmax technique to convert the stochastic decoding process into a deterministic process by integrating independently sampled Gumbel noise. Employing the same set of Gumbel noise\, we perform beam search on the smaller model to generate multiple candidate short continuations\, and then utilize tree-based attention to efficiently verify them in parallel using the larger model. GumbelSpec sampling significantly improves upon previous rejection sampling based speculative decoding methods by increasing the token acceptance rate by 1.7x-2.2x and achieving an additional speedup of 1.2x-1.5x. This results in a total speedup of 1.5x-2.6x compared to traditional autoregressive decoding.
URL:https://ifds.info/event/gumbelspec-sampling-for-accelerating-llm-inference/
LOCATION:CSE (Allen) 403
CATEGORIES:MLOpt@UWash
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/Chicago:20240226T123000
DTEND;TZID=America/Chicago:20240226T133000
DTSTAMP:20260514T200325Z
CREATED:20240318T213825Z
LAST-MODIFIED:20240318T213825Z
UID:2900-1708950600-1708954200@ifds.info
SUMMARY:Prelimit coupling and steady-state convergence of constant-stepsize nonsmooth contractive SA
DESCRIPTION:Speaker: Yixuan Zhang \n\nAbstract: Motivated by Q-learning\, we study nonsmooth contractive stochastic approximation (SA) with constant stepsize. We focus on two important classes of dynamics: 1) nonsmooth contractive SA with additive noise\, and 2) synchronous and asynchronous Q-learning\, which features both additive and multiplicative noise. For both dynamics\, we establish weak convergence of the iterates to a stationary limit distribution in Wasserstein distance. Furthermore\, we propose a prelimit coupling technique for establishing steady-state convergence and characterize the limit of the stationary distribution as the stepsize goes to zero. Using this result\, we derive that the asymptotic bias of nonsmooth SA is proportional to the square root of the stepsize\, which stands in sharp contrast to smooth SA. This bias characterization allows for the use of Richardson-Romberg extrapolation for bias reduction in nonsmooth SA.
URL:https://ifds.info/event/prelimit-coupling-and-steady-state-convergence-of-constant-stepsize-nonsmooth-contractive-sa/
LOCATION:WID 1145\, 330 N Orchard Street\, Madison\, WI\, 53715\, United States
CATEGORIES:IFDS Ideas Forum
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/Chicago:20240228T123000
DTEND;TZID=America/Chicago:20240228T133000
DTSTAMP:20260514T200325Z
CREATED:20240315T164015Z
LAST-MODIFIED:20240315T173213Z
UID:2832-1709123400-1709127000@ifds.info
SUMMARY:SILO: Reinforcement Learning with Robustness and Safety Guarantees
DESCRIPTION:Dileep Kalathil\, TAMU
URL:https://ifds.info/event/silo-reinforcement-learning-with-robustness-and-safety-guarantees/
LOCATION:Orchard View Room\, 330 N. Orchard Street\, 3rd Floor NE\, Madison\, Wisconsin\, 53715\, United States
CATEGORIES:SILO
ATTACH;FMTTYPE=image/png:https://ifds.info/wp-content/uploads/2022/10/SILO-1024x683-1-e1665597390709.png
END:VEVENT
END:VCALENDAR