BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//IFDS - ECPv6.0.1.1//NONSGML v1.0//EN
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-ORIGINAL-URL:https://ifds.info
X-WR-CALDESC:Events for IFDS
REFRESH-INTERVAL;VALUE=DURATION:PT1H
X-Robots-Tag:noindex
X-PUBLISHED-TTL:PT1H
BEGIN:VTIMEZONE
TZID:America/Los_Angeles
BEGIN:DAYLIGHT
TZOFFSETFROM:-0800
TZOFFSETTO:-0700
TZNAME:PDT
DTSTART:20220313T020000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0700
TZOFFSETTO:-0800
TZNAME:PST
DTSTART:20211107T020000
END:STANDARD
END:VTIMEZONE
BEGIN:VTIMEZONE
TZID:America/Chicago
BEGIN:DAYLIGHT
TZOFFSETFROM:-0600
TZOFFSETTO:-0500
TZNAME:CDT
DTSTART:20220313T020000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0500
TZOFFSETTO:-0600
TZNAME:CST
DTSTART:20211107T020000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTART;TZID=America/Los_Angeles:20220211T123000
DTEND;TZID=America/Los_Angeles:20220211T133000
DTSTAMP:20260425T171836Z
CREATED:20220325T195326Z
LAST-MODIFIED:20220325T195446Z
UID:1910-1644582600-1644586200@ifds.info
SUMMARY:ML Opt @ UW: Vincent Roulet
DESCRIPTION:Speaker: Vincent Roulet \nTitle: Complexity Bounds of Iterative Linearization Algorithms for Discrete-Time Nonlinear Control \nAbstract: We revisit the nonlinear optimization approach to discrete-time nonlinear control and optimization algorithms based on iterative linearization. While widely popular in many domains\, these algorithms have mainly been analyzed from an asymptotic viewpoint. We establish non-asymptotic complexity bounds and global convergence for a class of generalized Gauss-Newton algorithms relying on iterative linearization of the nonlinear control problem\, using iterative linear quadratic regulator or differential dynamic programming algorithms as subroutines. The sufficient conditions for global convergence are examined for multi-rate sampling schemes\, given the existence of a feedback linearization scheme. We illustrate the algorithms in synthetic experiments and provide a software library based on reverse-mode automatic differentiation to reproduce the numerical results.
URL:https://ifds.info/event/ml-opt-uw-vincent-roulet/
CATEGORIES:MLOpt@UWash
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/Los_Angeles:20220218T123000
DTEND;TZID=America/Los_Angeles:20220218T133000
DTSTAMP:20260425T171836Z
CREATED:20220325T194026Z
LAST-MODIFIED:20220325T195105Z
UID:1900-1645187400-1645191000@ifds.info
SUMMARY:ML Opt @ UW: Yifang Chen
DESCRIPTION:Speaker: Yifang Chen \nTitle: Active Multi-Task Representation Learning \nAbstract: To leverage the power of big data from source tasks and overcome the scarcity of the target task samples\, representation learning based on multi-task pretraining has become a standard approach in many applications. However\, up until now\, choosing which source tasks to include in the multi-task learning has been more art than science. In this paper\, we give the first formal study on resource task sampling by leveraging the techniques from active learning. We propose an algorithm that iteratively estimates the relevance of each source task to the target task and samples from each source task based on the estimated relevance. Theoretically\, we show that for the linear representation class\, to achieve the same error rate\, our algorithm can save up to a factor equal to the number of source tasks in the source task sample complexity\, compared with naive uniform sampling from all source tasks. We also provide experiments on real-world computer vision datasets to illustrate the effectiveness of our proposed method on both linear and convolutional neural network representation classes.
URL:https://ifds.info/event/ml-opt-uw-yifang-chen/
CATEGORIES:MLOpt@UWash
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/Chicago:20220225T123000
DTEND;TZID=America/Chicago:20220225T133000
DTSTAMP:20260425T171836Z
CREATED:20220325T193820Z
LAST-MODIFIED:20220325T194938Z
UID:1896-1645792200-1645795800@ifds.info
SUMMARY:ML-Opt @ UWash: Krishna Pillutla
DESCRIPTION:Title: MAUVE: Measuring the Gap Between Neural Text and Human Text using Divergence Frontiers \nAbstract: As major progress is made in open-ended text generation\, measuring how close machine-generated text is to human language remains a critical open problem. We introduce MAUVE\, a comparison measure for open-ended text generation\, which directly compares the learnt distribution from a text generation model to the distribution of human-written text using divergence frontiers. MAUVE scales up to modern text generation models by computing information divergences in a quantized embedding space. Through an extensive empirical study on three open-ended generation tasks\, we find that MAUVE identifies known properties of generated text\, scales naturally with model size\, and correlates with human judgments\, with fewer restrictions than existing distributional evaluation metrics.
URL:https://ifds.info/event/ml-opt-uwash-zaid-harchaoui/
CATEGORIES:MLOpt@UWash
END:VEVENT
END:VCALENDAR