Abstract
Rapid advances in machine learning and AI have enabled tremendous success in the design of learning-enabled autonomous systems in areas such as autonomous driving and robotics. These exciting developments are accompanied by new fundamental challenges regarding the safety and reliability of these increasingly complex systems, which arise from imperfect learning, system unknowns, and uncertain environments. Conformal prediction (CP), a statistical tool for uncertainty quantification, has gained popularity due to its ability to address these challenges. However, CP-based safety guarantees assume i.i.d. data, an assumption that is violated when system changes induce shifts in the underlying data distribution.
In this talk, I will present new insights into designing safe controllers under distribution shifts using robust CP. I will begin by advocating for CP due to its simplicity, generality, and efficiency compared with existing optimization-based verification techniques. I will then give an introduction to CP and summarize existing work that uses CP to design probabilistically safe controllers in dynamic environments. Subsequently, we will look into interactive settings where the system's behavior may change the environment's behavior, and vice versa. This circular dependency creates an interaction-driven distribution shift that invalidates existing safety guarantees. To deal with this chicken-and-egg problem, we propose an iterative framework that episodically updates the controller while robustly maintaining safety guarantees by quantifying the potential impact of a controller update on the environment's behavior. We realize this via adversarially robust CP, where we perform a regular CP step in each episode using observed data under the current controller and then transfer safety guarantees across controller updates by analytically adjusting the CP result to account for the induced distribution shift. Lastly, we will discuss ways to handle more general distribution shifts that go beyond this interactive setting using adaptive and distributionally robust CP.
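For readers unfamiliar with CP, the following minimal Python sketch illustrates the basic split-CP calibration step together with one simple analytical adjustment for distribution shift. It is an illustration only, not the framework presented in the talk: it assumes the shift between episodes is bounded in total variation by a known constant eps, in which case tightening the miscoverage level from alpha to alpha - eps preserves the coverage guarantee after the shift. The function names and the placeholder scores are hypothetical.

import numpy as np

def conformal_quantile(scores, alpha):
    # Split conformal calibration: return the ceil((n+1)*(1-alpha))-th
    # smallest calibration score. For exchangeable data, a fresh score
    # falls at or below this value with probability at least 1 - alpha.
    n = len(scores)
    k = int(np.ceil((n + 1) * (1 - alpha)))
    if k > n:
        return np.inf  # too few calibration points for this alpha
    return np.sort(scores)[k - 1]

def robust_conformal_quantile(scores, alpha, eps):
    # Hypothetical robust adjustment: if the deployment distribution is
    # within total variation eps of the calibration distribution (e.g.,
    # after a bounded controller update), calibrating at the tighter
    # level alpha - eps keeps coverage at least 1 - alpha after the shift.
    assert 0 <= eps < alpha, "shift bound must be below the miscoverage level"
    return conformal_quantile(scores, alpha - eps)

# Example: scores could be prediction errors of a learned trajectory
# predictor on held-out data; the quantile then serves as a safety margin.
scores = np.random.rand(100)  # placeholder calibration scores
margin = robust_conformal_quantile(scores, alpha=0.1, eps=0.02)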
Biographical Information
Lars Lindemann is currently an Assistant Professor for Algorithmic Systems Theory in the Automatic Control Laboratory at ETH Zürich. From 2023 to 2025, he was an Assistant Professor in the Thomas Lord Department of Computer Science at the University of Southern California. From 2020 to 2022, he was a Postdoctoral Fellow in the Department of Electrical and Systems Engineering at the University of Pennsylvania. He received his Ph.D. degree in Electrical Engineering from KTH Royal Institute of Technology in 2020. His research interests include systems and control theory, formal methods, machine learning, and autonomous systems. Prof. Lindemann received the Outstanding Student Paper Award at the 58th IEEE Conference on Decision and Control and the Student Best Paper Award (as an advisor) at the 60th IEEE Conference on Decision and Control. He was a finalist for the Best Paper Award (as an advisor) at the 2024 International Conference on Cyber-Physical Systems, the Best Paper Award at the 2022 Conference on Hybrid Systems: Computation and Control, and the Best Student Paper Award at the 2018 American Control Conference.