Seminar Event Detail


Financial/Actuarial Mathematics

Date:  Wednesday, January 05, 2022
Location:  Zoom Virtual (2:00 PM to 3:00 PM)

Title:  Learning in Linear Quadratic Framework: From Single-agent to Multi-agent

Abstract:   The linear quadratic framework is widely studied in the stochastic control and game theory literature due to its simple structure, tractable solution, and wide range of real-world applications. In this talk we discuss the theoretical convergence of policy gradient methods, among the most popular reinforcement learning algorithms, for several linear quadratic problems. In the single-agent setting, we show global convergence of such methods to the optimal solution with both known and unknown parameters. We also illustrate the performance of the algorithms on the optimal liquidation problem. For multi-agent linear quadratic games, we show that the policy gradient method enjoys global convergence to the Nash equilibrium provided there is a certain amount of noise in the system. The noise can come either from the underlying dynamics or from carefully designed exploration by the agents. This talk is based on joint work with Prof. Ben Hambly (University of Oxford) and Prof. Renyuan Xu (University of Southern California).
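
To make the single-agent, known-parameter setting concrete, below is a minimal sketch of policy gradient descent on a scalar linear quadratic regulator, checked against the optimal gain from the Riccati equation. The dynamics (a, b), costs (q, r), the finite-difference gradient, and the step size are illustrative assumptions, not the specific algorithm or analysis presented in the talk.

    import numpy as np

    # Scalar LQR: x_{t+1} = a*x_t + b*u_t, cost sum_t (q*x_t^2 + r*u_t^2),
    # with a linear feedback policy u_t = -k*x_t. Parameters are illustrative.
    a, b, q, r = 0.9, 0.5, 1.0, 0.1
    x0_var = 1.0  # variance of the initial state

    def cost(k):
        # Infinite-horizon cost of u = -k*x; finite only if |a - b*k| < 1.
        cl = a - b * k
        if abs(cl) >= 1.0:
            return np.inf
        return (q + r * k * k) * x0_var / (1.0 - cl * cl)

    def grad(k, eps=1e-6):
        # Finite-difference stand-in for the exact policy gradient.
        return (cost(k + eps) - cost(k - eps)) / (2.0 * eps)

    # Gradient descent on the gain, starting from a stabilizing policy.
    k, lr = 0.5, 0.05
    for _ in range(500):
        k -= lr * grad(k)

    # Optimal gain from the scalar discrete-time Riccati equation:
    # p = q + a^2*p - (a*b*p)^2 / (r + b^2*p),  k* = a*b*p / (r + b^2*p).
    p = q
    for _ in range(1000):
        p = q + a * a * p - (a * b * p) ** 2 / (r + b * b * p)
    k_star = a * b * p / (r + b * b * p)

    print(f"policy gradient k = {k:.4f}, Riccati k* = {k_star:.4f}")

With a suitable step size, the learned gain closely matches the Riccati gain, which is a one-dimensional toy instance of the kind of global convergence result the talk establishes for the general matrix case.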

Speaker:  Huining Yang
Institution:  Oxford
