BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//IFDS - ECPv6.0.1.1//NONSGML v1.0//EN
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-WR-CALNAME:IFDS
X-ORIGINAL-URL:https://ifds.info
X-WR-CALDESC:Events for IFDS
REFRESH-INTERVAL;VALUE=DURATION:PT1H
X-Robots-Tag:noindex
X-PUBLISHED-TTL:PT1H
BEGIN:VTIMEZONE
TZID:America/Chicago
BEGIN:DAYLIGHT
TZOFFSETFROM:-0600
TZOFFSETTO:-0500
TZNAME:CDT
DTSTART:20240310T020000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0500
TZOFFSETTO:-0600
TZNAME:CST
DTSTART:20241103T020000
END:STANDARD
END:VTIMEZONE
BEGIN:VTIMEZONE
TZID:America/Los_Angeles
BEGIN:DAYLIGHT
TZOFFSETFROM:-0800
TZOFFSETTO:-0700
TZNAME:PDT
DTSTART:20240310T020000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0700
TZOFFSETTO:-0800
TZNAME:PST
DTSTART:20241103T020000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTART;TZID=America/Chicago:20240308T133000
DTEND;TZID=America/Chicago:20240308T143000
DTSTAMP:20260409T133929Z
CREATED:20240318T213102Z
LAST-MODIFIED:20240318T213102Z
UID:2890-1709904600-1709908200@ifds.info
SUMMARY:Low-Rank Structures in Optimal Transport
DESCRIPTION:Bio: Meyer Scetbon is currently a Research Scientist at Microsoft Research. He completed his PhD at Institut Polytechnique de Paris\, advised by M. Cuturi. As a visiting student\, he completed his MS theses at UW and the Technion\, on kernel-based viewpoints on deep neural networks (advised by Z. Harchaoui) and on end-to-end signal and image denoising (advised by M. Elad)\, respectively.\n\nAbstract: Optimal transport (OT) plays an increasingly important role in machine learning (ML) for comparing probability distributions. Yet\, in its original form\, it poses several challenges for applied problems: (i) computing OT between discrete distributions amounts to solving a large and expensive network-flow problem with supercubic complexity in the number of points; (ii) estimating OT from sampled measures suffers from the curse of dimensionality. These issues can be mitigated using an entropic regularization\, solved with the Sinkhorn algorithm\, which improves both statistical and computational aspects. While much faster\, entropic OT still has quadratic complexity in the number of points and therefore remains prohibitive for large-scale problems. In this talk\, I will present new regularization approaches for the OT problem\, as well as for its quadratic extension\, the Gromov-Wasserstein (GW) problem\, which impose low-rank structures on the admissible couplings. This leads to new algorithms with linear complexity in both time and memory with respect to the number of points\, enabling applications in the large-scale setting where millions of points need to be compared. Additionally\, I will show that these new regularization schemes have better statistical performance than the entropic approach\, that they naturally interpolate between the Maximum Mean Discrepancy (MMD) and OT\, and that they offer general clustering methods for arbitrary geometries.\n\nWebsite: https://meyerscetbon.github.io/_pages/publications/
URL:https://ifds.info/event/low-rank-structures-in-optimal-transport/
LOCATION:CSE (Allen) 403
CATEGORIES:MLOpt@UWash
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/Los_Angeles:20240315T133000
DTEND;TZID=America/Los_Angeles:20240315T143000
DTSTAMP:20260409T133929Z
CREATED:20240318T212943Z
LAST-MODIFIED:20240318T212943Z
UID:2888-1710509400-1710513000@ifds.info
SUMMARY:Optimized Decision Making via Active Learning of Stochastic Hamiltonians
DESCRIPTION:Speaker: Prof. Chandrajit Bajaj\, UT Austin\nAbstract: A Hamiltonian represents the energy of a dynamical system in phase space with coordinates of position and momentum. Hamilton's equations of motion are obtained as coupled symplectic differential equations. In this talk I shall show how optimized decision making (action sequences) can be obtained via a reinforcement learning problem wherein the agent interacts with the unknown environment to simultaneously learn a Hamiltonian surrogate and the optimal action sequences using Hamiltonian dynamics\, by invoking the Pontryagin Maximum Principle. We use optimal control theory to define an optimal control gradient flow\, which guides the reinforcement learning process of the agent to progressively optimize the Hamiltonian while simultaneously converging to the optimal action sequence. Extensions to stochastic Hamiltonians\, leading to stochastic action sequences and the free-energy principle\, will also be discussed. This is joint work with Taemin Heo and Minh Nguyen.
URL:https://ifds.info/event/optimized-decision-making-via-active-learning-of-stochastic-hamiltonians/
LOCATION:CSE (Allen) 403
CATEGORIES:MLOpt@UWash
END:VEVENT
END:VCALENDAR