Extremum Seeking Control

A Lie Bracket Framework for Extremum Seeking Control


Currently, a great deal of research effort is centered on the interplay between control, learning and optimization. This is driven by extensive research initiatives in artificial intelligence, by the steadily increasing availability of online computation and by the desire to build networked, autonomous or intelligent systems across all sorts of application domains. Extremum seeking is a control and learning framework that intrinsically relies on this interplay and solves problems without the need for a model. The stabilization of an a priori known steady-state behavior is a common control task, but in many applications the desired steady-state behavior is unknown or changes over time. For a combustion engine, for example, a typical task is to find and stabilize an optimal operating point in order to maximize efficiency or to minimize emissions; due to the complex dynamics and changing operating conditions, this optimal operating point is often unknown or steadily changing. Extremum seeking theory enables a model-free design of algorithms that find and stabilize such an a priori unknown optimal steady-state behavior.

Our research focuses on the development of methods to analyze and design extremum seeking systems based on Lie bracket approximations. We develop extremum seeking algorithms for problems with constraints, stabilization problems, distributed extremum seeking in robotic applications and multi-agent systems, and extremum seeking with partial model information.
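
As a rough illustration of the basic idea (a minimal sketch, not the specific algorithms developed in this research), the following Python snippet simulates a scalar extremum seeking loop of the Lie-bracket-approximation type for an unknown static cost; the cost function, gains and dither frequency are illustrative assumptions.

```python
import numpy as np

# Minimal sketch of scalar extremum seeking via sinusoidal dithering.
# The oscillatory system
#   dx/dt = sqrt(alpha*omega) * ( J(x)*cos(omega*t) + sin(omega*t) )
# behaves, for large omega, like its Lie bracket (averaged) system
#   dxbar/dt = -(alpha/2) * dJ/dx(xbar),
# i.e. gradient descent on J, using only measurements of J(x), never dJ/dx.

def J(x):
    # Cost unknown to the controller; minimizer x* = 2 is assumed for the demo.
    return (x - 2.0) ** 2

alpha = 1.0      # seeking gain
omega = 100.0    # dither frequency (fast compared to the averaged dynamics)
dt = 1e-4        # Euler step, small compared to the dither period 2*pi/omega
T = 10.0         # simulation horizon

x, t = 0.0, 0.0  # initial condition, away from the minimizer
for _ in range(int(T / dt)):
    dx = np.sqrt(alpha * omega) * (J(x) * np.cos(omega * t) + np.sin(omega * t))
    x += dt * dx
    t += dt

print(f"final x = {x:.3f}  (near the assumed minimizer x* = 2)")
```

In this sketch the state converges only to a neighborhood of the minimizer, with a residual oscillation that shrinks roughly like 1/sqrt(omega); increasing the dither frequency tightens the approximation of the Lie bracket system.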

Collaborations:
  • Karl Henrik Johansson, KTH Royal Institute of Technology, Stockholm, Sweden
  • Milos Stankovic, University of Belgrade, Belgrade, Serbia
  • Miroslav Krstic, University of California San Diego (UCSD), San Diego (CA), USA
  • Emanuele Garone, Université Libre de Bruxelles, Brussels, Belgium