Objective
We develop approaches to design (approximate) optimal controllers for problems where this task is made difficult by lack of knowledge of the system's model, computational complexity, or special hierarchical structure required in the controller.
We have used Model Predictive Control (MPC) to design approximate optimal controllers for problems featuring model uncertainty, disturbances, and multi-agent structure.
We are also interested in theoretical questions in Reinforcement Learning (RL) with implications for the control of dynamical systems. For example, policy gradient methods play a central role in policy-based RL. In practical applications, however, only noisy gradient estimates are typically available, making it essential to understand how this noise influences both the convergence of the algorithm and the closed-loop performance of the resulting dynamical system.
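As a toy illustration of this setting, the sketch below tunes the gain of a scalar LQR problem by gradient descent using only noisy, zeroth-order estimates of the closed-loop cost. This is a minimal example under assumed system parameters (a = b = q = r = 1), not an implementation of any of the methods in the publications below.

```python
import numpy as np

def lqr_cost(k, a=1.0, b=1.0, q=1.0, r=1.0, T=50, x0=1.0):
    """Finite-horizon LQR cost of the static feedback u = -k*x."""
    x, J = x0, 0.0
    for _ in range(T):
        u = -k * x
        J += q * x**2 + r * u**2
        x = a * x + b * u
    return J

def noisy_policy_gradient(k0, iters=2000, delta=0.05, step=0.01, seed=0):
    """Gradient descent on the gain k using two-point zeroth-order
    estimates: only cost evaluations are used (model-free setting),
    so each gradient estimate is noisy."""
    rng = np.random.default_rng(seed)
    k = k0
    for _ in range(iters):
        u = rng.standard_normal()  # random exploration direction
        # noisy estimate of dJ/dk (unbiased up to O(delta^2))
        g = (lqr_cost(k + delta * u) - lqr_cost(k - delta * u)) * u / (2 * delta)
        k -= step * g
    return k
```

With these parameters the optimal infinite-horizon gain is (√5 − 1)/2 ≈ 0.618, and the iterates concentrate around it despite the gradient noise; the step size and perturbation scale `delta` govern the trade-off between noise amplification and convergence speed.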
Beyond single-level policies, the structure of the learning architecture itself has a profound impact on both performance and theoretical tractability. Hierarchical control architectures, well established in classical control for decomposing and solving complex tasks, provide a natural framework for organizing learning components. Choosing the right control architecture is therefore a meaningful step toward safer and more reliable integration of learning components in closed-loop systems.
Tools
We use tools from control theory, such as input-to-state stability and singularly perturbed ODEs, and combine them with tools from stochastic approximation, optimization, and statistical learning theory to provide both deterministic and probabilistic guarantees.
Representative publications
Reinforcement Learning
- Song B., Weissmann S., Staudigl M., Iannelli A. - "A Stochastic Gradient Descent Approach to Design Policy Gradient Methods for LQR", arXiv:2602.18933. Link
- Manenti M., Iannelli A. - "Convergence and Stability of Q-learning in Hierarchical Reinforcement Learning", arXiv:2511.17351. Link
- Song B., Iannelli A. - "The role of identification in data-driven policy iteration: A system theoretic study", International Journal of Robust and Nonlinear Control (special issue: Data-driven Control: Theory and Applications), 2024. Link
- Song B., Iannelli A. - "Robustness of online identification-based policy iteration to noisy data", at-Automatisierungstechnik, 2025. Link
Model Predictive Control
- Faliero F., Capello E., Iannelli A. - "Robust Adaptive Model Predictive Control for Tracking in Interconnected Systems via Distributed Optimisation" - International Journal of Robust and Nonlinear Control, 2026. Link
- Arcari E., Iannelli A., Carron A., Zeilinger M. - "Stochastic MPC with robustness to bounded parametric uncertainty" - IEEE Transactions on Automatic Control, 2026. Link
- Parsi A., Iannelli A., Smith R.S. - "An explicit dual control approach for constrained reference tracking of uncertain linear systems" - IEEE Transactions on Automatic Control, 2022. Link
- Parsi A., Iannelli A., Smith R.S. - "Scalable tube model predictive control of uncertain linear systems using ellipsoidal sets" - International Journal of Robust and Nonlinear Control, 2022. Link
Collaborations
S. Weissmann, M. Staudigl (University of Mannheim), A. Parsi, R. S. Smith, A. Carron, M. Zeilinger (ETH Zürich), F. Faliero, E. Capello (University of Turin).