Time: 25 May 2023
Prof. Dominic Liao-McPherson
Department of Mechanical Engineering
The University of British Columbia
Vancouver, Canada
Thursday, 25 May 2023, 10:00 a.m.
IST Seminar Room 2.255, Pfaffenwaldring 9, Campus Stuttgart-Vaihingen
Abstract
Many iterative algorithms in optimization, games, and learning can be viewed as dynamical systems with inputs (measurements, historical data, user feedback), internal states (decision variables, state estimates, Lagrange multipliers), outputs (residuals, actuator commands), and uncertainties (noise, unknown parameters). The last few years have witnessed a growing interest in studying how learning, optimization and game-theoretic algorithms behave when placed in closed loop with noisy, uncertain, and dynamic physical systems and conversely how systems theory can be leveraged to both analyze existing algorithms and synthesize new ones. Notable recent examples include applying robust control tools (such as integral quadratic constraints) to analyze and synthesize gradient methods, repurposing optimization algorithms as feedback controllers for physical systems, and studying reinforcement learning algorithms using hybrid systems theory.
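This algorithms-as-dynamical-systems viewpoint can be made concrete with a minimal sketch (an illustration of the general idea, not an example from the talk): gradient descent on a scalar quadratic, written as a discrete-time system whose state is the iterate and whose output is the optimality residual. All names below are hypothetical.

```python
# Hypothetical sketch: gradient descent viewed as a discrete-time dynamical
# system. State: iterate x_k. Transition: x_{k+1} = x_k - alpha * grad_f(x_k).
# Output: the optimality residual |grad_f(x_k)|.

def grad_descent_step(x, grad_f, alpha):
    """One state transition of the algorithm-as-dynamical-system."""
    return x - alpha * grad_f(x)

def run(x0, grad_f, alpha, steps):
    """Simulate the algorithm-system and record its output trajectory."""
    x, residuals = x0, []
    for _ in range(steps):
        x = grad_descent_step(x, grad_f, alpha)
        residuals.append(abs(grad_f(x)))  # output: optimality residual
    return x, residuals

# f(x) = x^2, so grad_f(x) = 2x; with alpha = 0.25 the state map is
# x_{k+1} = 0.5 * x_k, a contraction with rate 0.5.
x_final, res = run(x0=1.0, grad_f=lambda x: 2.0 * x, alpha=0.25, steps=20)
```

Framed this way, convergence of the algorithm is exactly asymptotic stability of the system, which is what lets robust-control tools such as integral quadratic constraints be applied to gradient methods.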
This dynamical systems perspective on algorithms is crucial for tackling some of the challenges arising when analyzing and synthesizing the algorithms that underlie modern autonomous, AI-driven, and socio-technical systems. These include algorithms that: (i) operate online (e.g., running algorithms with streaming data), (ii) include humans in the loop (e.g., autonomous driving, recommender systems), (iii) are interconnected with physical systems (e.g., optimization-based feedback controllers, real-time MPC), (iv) involve self-interested decision makers with shared resources (e.g., traffic/logistics networks), or (v) need to be robust to severe uncertainty (e.g., exogenous disturbances, intrinsic randomness, or non-stationarity). These new and timely applications are driving the development of new theoretical tools that synergistically build on control, optimization, game, and learning theory, as well as novel algorithms and computational techniques specialized for noisy, time-varying, and dynamic environments.
In this talk, I discuss two scenarios where optimization algorithms are used directly as controllers for physical systems. First, I present Time-distributed Optimization (TDO), a unifying framework for studying the system-theoretic consequences of computational limits in the context of Model Predictive Control (MPC). I show that it is possible to recover the stability and robustness properties of optimal MPC despite limited computational resources and illustrate how a system-theoretic view of algorithms can be exploited to certify the closed-loop system. Further, I illustrate the applicability of these methods in the real world through diesel-engine and autonomous-driving examples. Second, I discuss Feedback Equilibrium Seeking (FES), a design framework for dynamic feedback controllers that track solution trajectories of time-varying generalized equations, such as local minimizers of nonlinear programs or competitive equilibria (e.g., Nash) of non-cooperative games. I present tracking error, stability, and robustness results for the sampled-data case and provide illustrative examples in DC power grids and supply chains.
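The core idea behind time-distributed optimization can be sketched in a few lines (a hypothetical toy, not the talk's actual formulation): instead of solving the MPC problem to convergence at every sampling instant, the controller spends a fixed budget of optimizer iterations, warm-started from the previous solution, and applies the resulting suboptimal input. The toy plant, cost, and all names below are assumptions for illustration.

```python
# Hypothetical TDO-style sketch: truncated, warm-started optimization as a
# feedback controller. Toy scalar plant x_next = x + u with a one-step-horizon
# cost J(u; x) = u^2 + (x + u)^2 (both chosen here for illustration only).

def mpc_cost_grad(u, x):
    """Gradient of J(u; x) = u^2 + (x + u)^2 w.r.t. u: 2u + 2(x + u)."""
    return 2.0 * u + 2.0 * (x + u)

def tdo_controller(x, u_warm, iters=3, alpha=0.2):
    """Fixed iteration budget per sample; returns a suboptimal input that
    also serves as next step's warm start."""
    u = u_warm
    for _ in range(iters):
        u -= alpha * mpc_cost_grad(u, x)
    return u

# Closed loop: the truncated optimizer never reaches the exact minimizer
# u* = -x/2, yet the warm-started iterates stay close enough that the
# plant state is still driven to the origin.
x, u = 1.0, 0.0
for _ in range(30):
    u = tdo_controller(x, u)  # a few optimizer iterations, warm-started
    x = x + u                 # plant update
```

The point of the framework is precisely to certify this kind of behavior: treating the optimizer's iterate as an additional dynamic state of the closed loop and showing that, with warm starting and enough (but finitely many) iterations per sample, stability and robustness of optimal MPC are recovered.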
Biographical Information
Dominic Liao-McPherson obtained his BASc (with High Honours) in Engineering Science (Aerospace Option) from the University of Toronto in 2015 and his PhD in Aerospace Engineering and Scientific Computing from the University of Michigan in 2020. He was a postdoctoral scholar at the ETH Zürich Automatic Control Lab until the end of 2022 and is now an Assistant Professor at the University of British Columbia in Vancouver (Canada) where he leads the Algorithms, Optimization, and Control Lab. His research interests lie at the interface of control systems, optimization, computation and game theory with applications in robotics, energy systems, and manufacturing.