BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//IFDS - ECPv6.0.1.1//NONSGML v1.0//EN
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-ORIGINAL-URL:https://ifds.info
X-WR-CALDESC:Events for IFDS
REFRESH-INTERVAL;VALUE=DURATION:PT1H
X-Robots-Tag:noindex
X-PUBLISHED-TTL:PT1H
BEGIN:VTIMEZONE
TZID:America/Los_Angeles
BEGIN:DAYLIGHT
TZOFFSETFROM:-0800
TZOFFSETTO:-0700
TZNAME:PDT
DTSTART:20220313T020000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0700
TZOFFSETTO:-0800
TZNAME:PST
DTSTART:20221106T020000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTART;TZID=America/Los_Angeles:20220114T123000
DTEND;TZID=America/Los_Angeles:20220114T133000
DTSTAMP:20260425T160152Z
CREATED:20220325T200020Z
LAST-MODIFIED:20220325T200040Z
UID:1923-1642163400-1642167000@ifds.info
SUMMARY:ML Opt @ UW: Yue Sun
DESCRIPTION:Speaker: Yue Sun \nTitle: Analysis of Policy Gradient Descent for Control: Global Optimality via Convex Parameterization \nAbstract: Policy gradient descent is a popular approach in reinforcement learning due to its simplicity. Recent work has investigated the optimality and convergence properties of this method when applied to certain control problems. In this work\, we connect policy gradient descent (applied to a nonconvex problem formulation) with classical convex parameterizations from control theory to show the gradient dominance property of the nonconvex cost function. Such a connection between the nonconvex and convex landscapes holds for continuous- and discrete-time LQR\, distributed optimal control\, and minimizing the $\mathcal{L}_2$ gain\, among others. To the best of our knowledge\, this work offers the first result unifying the landscape analysis of a broad class of control problems.
URL:https://ifds.info/event/ml-opt-uw-yue-sun/
CATEGORIES:MLOpt@UWash
END:VEVENT
END:VCALENDAR