Abstract
Model-based and predictive control provide a principled framework for decision-making under constraints, uncertainty, and long-term performance objectives. However, in many autonomous systems, accurate models, constraints, and objectives are only partially known, time-varying, or context-dependent. Learning offers powerful tools to address these limitations—but raises fundamental questions regarding safety, robustness, interpretability, and closed-loop performance.
This talk discusses systematic methods for integrating learning into model-based and predictive control architectures while preserving performance and formal guarantees. The focus is on learning-augmented representations of system dynamics, constraints, and objectives, and on the resulting implications for feasibility, stability, and optimality. Methodologically, the presentation builds on two-level and episodic optimization structures, separating fast, safety-critical control decisions from slower learning and adaptation processes.
Gaussian processes, neural networks, and Bayesian optimization serve as the main learning tools. Emphasis is placed on uncertainty quantification, constraint tightening, and safe exploration, enabling learning-based improvement without violating safety requirements. Challenges arising from incomplete, heterogeneous, and non-stationary data are addressed through domain adaptation and structured learning approaches.
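To make the idea of uncertainty-aware constraint tightening concrete, the following is a minimal sketch, not the speaker's implementation: a Gaussian process with an RBF kernel models an unknown residual, and a state constraint is tightened by a multiple of the predictive standard deviation, so the constraint is enforced with high confidence. All data, kernel hyperparameters, and the confidence factor `beta` are illustrative assumptions.

```python
import numpy as np

def rbf(X1, X2, ls=0.5, var=1.0):
    # Squared-exponential (RBF) kernel between two 1-D input sets.
    d = (X1[:, None] - X2[None, :]) ** 2
    return var * np.exp(-0.5 * d / ls**2)

# Illustrative training data: noisy observations of an unknown residual g(x).
rng = np.random.default_rng(0)
X = np.linspace(-1.0, 1.0, 8)
y = np.sin(2 * X) + 0.05 * rng.standard_normal(8)

noise_var = 0.05**2
K = rbf(X, X) + noise_var * np.eye(len(X))
L = np.linalg.cholesky(K)
alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))

def gp_predict(xq):
    # GP posterior mean and standard deviation at query points xq.
    xq = np.atleast_1d(xq)
    k = rbf(xq, X)
    mu = k @ alpha
    v = np.linalg.solve(L, k.T)
    var = np.diag(rbf(xq, xq)) - np.sum(v**2, axis=0)
    return mu, np.sqrt(np.maximum(var, 0.0))

# Constraint tightening: the nominal constraint g(x) <= g_max is replaced
# by mu(x) + beta * sigma(x) <= g_max, where beta sets the confidence level.
beta, g_max = 2.0, 1.0
mu, sigma = gp_predict(0.3)
tightened_ok = bool(mu[0] + beta * sigma[0] <= g_max)
```

In a predictive-control setting, the tightened inequality would appear as a stage constraint in the optimization, shrinking the feasible set where the model is uncertain and relaxing the tightening as data accumulate and `sigma` decreases.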
The presented concepts are illustrated using representative case studies from robotics and autonomous systems, including physical human–robot interaction, autonomous driving, and repetitive control tasks.
The talk concludes with an outlook on context-aware and foundation-model-based extensions, where semantic and task-level information can be incorporated into predictive control loops—opening new directions for principled, learning-enabled decision-making in complex systems.
Biographical Information
Rolf Findeisen is Professor of Control Engineering at Technische Universität Darmstadt, where he leads the Control and Cyber-Physical Systems Laboratory. He received his doctorate from the University of Stuttgart (IST) and previously held a professorship in Systems Theory and Automatic Control at Otto-von-Guericke University Magdeburg. His work combines methods from systems theory, optimal control, and machine learning, with applications in robotics, human-assistance systems, autonomous vehicles, process and energy systems, and biomedical engineering. He has been fortunate to spend time as a (visiting) researcher at institutions including ETH Zürich, EPFL, MIT, UC Berkeley, and Imperial College London. He is actively involved in the international control community through the IEEE Control Systems Society and the International Federation of Automatic Control.