IFDS is located in the Central and Pacific Time zones. Please note the time zone when accessing a particular event.


Speaker: Yue Sun  

Title: Analysis of Policy Gradient Descent for Control: Global Optimality via Convex Parameterization  

Abstract: Policy gradient descent is a popular approach in reinforcement learning due to its simplicity. Recent work has investigated the optimality and convergence properties of this method when applied to certain control problems. In this work, we connect policy gradient descent (applied to a nonconvex problem formulation) with classical convex parameterizations in control theory, to show that the nonconvex cost function satisfies the gradient dominance property. Such a connection between nonconvex and convex landscapes holds for continuous- and discrete-time LQR, distributed optimal control, minimizing the $\mathcal{L}_2$ gain, among others. To the best of our knowledge, this work offers the first result unifying the landscape analysis of a broad class of control problems.
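
As a rough illustration of the setting described in the abstract (not code from the talk), the sketch below runs plain policy gradient descent on a small discrete-time LQR instance, using the standard closed-form cost $J(K) = \mathrm{tr}(P_K \Sigma_0)$ and its gradient for a static state-feedback policy $u = -Kx$. The system matrices, step size, and iteration budget are hypothetical choices made purely for this example.

```python
# Illustrative sketch: vanilla policy gradient descent on discrete-time LQR
# with static feedback u = -K x. All problem data below is hypothetical.
import numpy as np


def dlyap(M, S, iters=500):
    """Fixed-point iteration for P = S + M^T P M (valid when rho(M) < 1)."""
    P = np.zeros_like(S)
    for _ in range(iters):
        P = S + M.T @ P @ M
    return P


def lqr_cost_grad(K, A, B, Q, R, Sigma0):
    """Cost J(K) = tr(P_K Sigma0) and its gradient for the policy u = -K x."""
    Acl = A - B @ K
    P = dlyap(Acl, Q + K.T @ R @ K)        # closed-loop value matrix P_K
    Sigma = dlyap(Acl.T, Sigma0)           # accumulated state covariance
    cost = np.trace(P @ Sigma0)
    grad = 2.0 * ((R + B.T @ P @ B) @ K - B.T @ P @ A) @ Sigma
    return cost, grad


def is_stabilizing(K, A, B):
    return np.max(np.abs(np.linalg.eigvals(A - B @ K))) < 1.0


# Hypothetical 2-state / 1-input system; K = 0 is already stabilizing here.
A = np.array([[0.9, 0.2], [0.0, 0.8]])
B = np.array([[0.0], [1.0]])
Q, R, Sigma0 = np.eye(2), np.eye(1), np.eye(2)
K = np.zeros((1, 2))

for t in range(200):
    cost, grad = lqr_cost_grad(K, A, B, Q, R, Sigma0)
    # Backtrack the step so iterates stay in the (nonconvex) stabilizing set
    # and the cost decreases monotonically.
    step, improved = 0.1, False
    for _ in range(40):
        K_try = K - step * grad
        if is_stabilizing(K_try, A, B):
            new_cost, _ = lqr_cost_grad(K_try, A, B, Q, R, Sigma0)
            if new_cost < cost:
                K, improved = K_try, True
                break
        step *= 0.5
    if not improved:                       # gradient has (numerically) vanished
        break

final_cost, _ = lqr_cost_grad(K, A, B, Q, R, Sigma0)
print("final cost:", final_cost, "\nfinal gain K:\n", K)
```

Although $J(K)$ is nonconvex in $K$, gradient dominance of the kind discussed in the talk is what guarantees that a descent scheme like this one converges to the global optimum rather than to a spurious stationary point.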
