Feasibility analysis in MPC (core CENIIT project)

MPC is based on repeatedly solving optimization problems to compute the current control input u(k) given the state x(k). A fundamental question when these optimization problems are solved in closed loop is whether the MPC controller will stabilize the system. It turns out that this is not necessarily the case, and a large body of research has over the years been devoted to modifying the optimization problem so that the resulting controller is stabilizing.
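As a minimal sketch of this receding-horizon principle (the double-integrator model, horizon, weights and bounds below are placeholders chosen only for illustration), an MPC loop can be set up in YALMIP roughly as follows:

% Receding-horizon MPC sketch in YALMIP (all data are placeholders)
A = [1 1; 0 1]; B = [0.5; 1];        % example double-integrator model
N = 10;                              % prediction horizon
x = sdpvar(2, N+1);                  % predicted states
u = sdpvar(1, N);                    % predicted inputs
x0 = sdpvar(2, 1);                   % current measured state (parameter)

Constraints = [x(:,1) == x0];
Objective = 0;
for k = 1:N
    Constraints = [Constraints, x(:,k+1) == A*x(:,k) + B*u(k)];
    Constraints = [Constraints, -1 <= u(k) <= 1, -5 <= x(:,k+1) <= 5];
    Objective = Objective + x(:,k)'*x(:,k) + u(k)^2;
end

% Precompile the map from measured state to first optimal input
Controller = optimizer(Constraints, Objective, sdpsettings('verbose',0), x0, u(1));

% Closed-loop operation: apply the first input, re-solve at the next sample
xk = [3; 0];
for t = 1:20
    uk = Controller{xk};             % solve the MPC problem for the current state
    xk = A*xk + B*uk;                % propagate the nominal model
end

At each sample, only the first element of the optimal input sequence is applied, and the optimization problem is solved anew at the next sample with the new measured state.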

An even more fundamental question is that of recursive feasibility: given that the MPC controller computes an optimal input at time k, will the optimization problem be feasible at the next time instant k+1? This topic, too, has been investigated thoroughly over the years, and methods exist to guarantee it. However, the modifications required can be unintuitive, and small changes in the problem setup can make them inapplicable. Hence, it is of interest to be able to answer the basic validation question "Here is an MPC controller, is it recursively feasible?"
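In symbols (for a nominal linear model x(k+1) = Ax(k) + Bu(k); the notation is generic and not taken from the paper): let X_N denote the set of states for which the horizon-N MPC problem is feasible, and let u*(x) denote the first element of an optimal input sequence computed at x. Recursive feasibility then asks whether

\[
x(k) \in X_N \;\Longrightarrow\; Ax(k) + Bu^{*}(x(k)) \in X_N,
\]

i.e., whether the closed-loop successor of every feasible state is itself feasible.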

A problematic fact is that the set of states which satisfy recursive feasibility can be highly complex, which makes it hard to prove that all initially feasible states satisfy the property. As an example, the figure below shows (in light-grey) the set of states in a 2D example which lose feasibility, i.e., an MPC controller started in a light-grey state will run without problems for some iterations and then run into infeasibility, at which point no control input can be computed. The dark-grey states are recursively feasible.

The recursive feasibility validation problem is addressed in the following paper.

Johan Löfberg (2012) Oops! I cannot do it again: Testing for recursive feasibility in MPC. Automatica, 48(3):550-555. ((URL)) (BibTeX)

In short, we apply bilevel programming to analyze the system of equations arising from the KKT conditions of the MPC problem. By generalizing the ideas, robust recursive feasibility can be analyzed in the same framework, i.e., we study whether recursive feasibility is retained despite bounded disturbances acting on the system.
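Schematically (heavily simplified compared to the formulation in the paper), the test searches for a witness of lost feasibility via a bilevel problem of the form

\[
\begin{aligned}
\text{find} \quad & x(k),\ u^{*} \\
\text{s.t.} \quad & x(k) \ \text{is feasible for the MPC problem},\\
& u^{*} \ \text{solves the MPC problem at } x(k) \ \text{(inner problem, encoded through its KKT conditions)},\\
& \text{the MPC problem at } x(k+1) = Ax(k) + Bu^{*}_{0} \ \text{is infeasible},
\end{aligned}
\]

where u*_0 denotes the first element of the optimal input sequence. If this bilevel problem has no solution, no feasible state can lose feasibility in one step, which certifies recursive feasibility.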

Current research is devoted to extending the framework to stability analysis, and to reducing the computational effort required to compute the recursive feasibility certificates.

YALMIP (core CENIIT project)

A long-running effort is the development of the MATLAB-based, optimization-centric modeling language YALMIP, which is used in all projects referenced on this page and internationally by a large number of researchers, universities and companies. Current focus is on robust optimization (as described below), bilevel modeling (to support the research described above) and improved simulation performance (allowing us to simulate MPC-based controllers using high-level YALMIP code without any significant overhead).
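To give a flavor of the modeling style (a small self-contained example with random placeholder data, not tied to any particular project), a constrained 1-norm fitting problem is stated and solved as

% Generic YALMIP model: declare variables, state constraints and objective, solve
x = sdpvar(10, 1);                        % decision variable
A = randn(15, 10); b = 10*rand(15, 1);    % random placeholder data
Constraints = [A*x <= b, -1 <= x <= 1];
Objective = norm(x - ones(10,1), 1);      % automatically reformulated to an LP
diagnostics = optimize(Constraints, Objective);
if diagnostics.problem == 0
    xopt = value(x);                      % extract the optimal solution
end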

Robust modeling (core CENIIT project)

No model is perfect, but we still use models to compute optimal decisions. In many cases, an approximate model is sufficient and the optimal decision based on this model works fine. In other cases, however, it is crucial to incorporate knowledge about the uncertainty in the decision-making process.

One approach to taking uncertainty into account is so-called robust optimization. Here, the typical strategy is to compute decisions which are optimal for the worst-case scenario (in contrast to a probabilistic approach, which typically targets expected behavior, risk, variance, etc.). Deriving the so-called robust counterpart of an uncertain problem is possible for some generic problem classes, but the modeling easily becomes complex and error-prone, and a small change in the uncertainty description can lead to a completely different model. To combat this, a systematic modeling layer has been developed in YALMIP, allowing users to describe uncertain problems in a high-level language and leaving the derivation of robust counterparts to the modeling language.

Johan Löfberg (2012) Automatic robust convex programming. Optimization Methods and Software, 27(1). ((URL)) (BibTeX)
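As a small sketch of the intended workflow (toy data, unrelated to the applications on this page): an uncertain parameter is declared with the uncertain command, and YALMIP derives the worst-case counterpart automatically.

% Robust linear program: the constraint must hold for all -0.5 <= w <= 0.5
x = sdpvar(2, 1);
w = sdpvar(1);
Constraints = [x(1) + w*x(2) <= 1, x >= 0, uncertain(w), -0.5 <= w <= 0.5];
Objective = -(x(1) + x(2));              % maximize x1 + x2 subject to the robust constraint
optimize(Constraints, Objective);
value(x)

Changing the uncertainty set only requires changing the constraints on w; the robust counterpart is rederived automatically.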

The idea is currently being pursued further, and generalizations tying together advanced robust optimization and high-level convex programming have been implemented. A journal paper is in preparation.

MPC in aerospace applications (PhD student funded by VINNOVA/SAAB)

The overall aim is to investigate and apply optimal control methods, and MPC in particular, to aerospace applications. The project aims to answer how these control methods can be used in aeronautical applications that place great demands on robustness and are very time-critical. In the area of MPC, the project intends to study problems such as feasibility and stability of the optimization-based controller, and to develop algorithms that generate solutions which can easily be implemented in a time-critical system.

In an agile aircraft, it is often desirable to control the system to its limits. As an example, we might want to fly at the maximal angle-of-attack, but we must not go beyond this limit. Hence, we want to stabilize the system at a stationary point placed next to a hard limit. This particular setup leads to some theoretical issues in the classical stability theory for MPC, and we propose a solution in

Daniel Simon, Johan Löfberg and Torkel Glad (2012) Reference Tracking MPC using Terminal Set Scaling. In Proceedings of the 2012 IEEE Conference on Decision and Control. Maui, Hawaii. ((URL)) (BibTeX)

The results have been extended and an improved computational strategy, based on ideas from robust optimization, is presented in

Daniel Simon, Johan Löfberg and Torkel Glad (2013) Reference Tracking MPC using Dynamic Terminal Set Transformation. Submitted to IEEE Transactions on Automatic Control. (BibTeX)

A somewhat different line of research targets systems with nonlinear dynamics. Adding nonlinear models to an MPC framework leads to complex nonconvex optimization problems. In the paper below, we propose a combination of classical MPC, constructive methods from nonlinear control (exact linearization), and interval analysis, to control a special class of nonlinear systems.

Daniel Simon, Johan Löfberg and Torkel Glad (2013) Reference Tracking MPC using Dynamic Terminal Set Transformation. Submitted to IEEE Transactions on Automatic Control. (BibTeX)
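As a generic illustration of the exact-linearization ingredient (this is not the specific construction used in the paper), consider a scalar system

\[
\dot{x} = f(x) + g(x)u, \qquad g(x) \neq 0, \qquad u = \frac{v - f(x)}{g(x)} \;\Rightarrow\; \dot{x} = v.
\]

The dynamics seen by the MPC controller are linear in the new input v, but the original bound |u| <= u_max becomes the state-dependent constraint |v - f(x)| <= u_max |g(x)|. One way to use interval analysis here is to bound f(x) and g(x) over the predicted operating range and replace this constraint with a conservative version that keeps the optimization problem tractable.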

MPC in a buffer tank control problem (PhD student funded by PIC-LI/SSF)

A classical problem in process control is regulation of levels in buffers, such as tanks temporarily holding a liquid product between two stations in a chemical process. This is a problem which sounds deceptively simple, but practice reveals significant challenges.

In the project, which was run in collaboration with Perstorp AB, we studied buffer tanks which were fed with infrequent but large flow changes. In such a scenario, it is not obvious how the buffer tank should be controlled. Trying to keep the level at the middle of the tank is not necessarily the best approach, since it limits the amount of additional flow that can be accommodated. Similarly, it is perhaps not suitable to run the buffer at a very low level, since this might cause the tank to run empty if the flow decreases drastically.

Our idea was to pose the problem as a robust MPC problem, i.e., derive an optimal control strategy in which we minimize the variation of the outflow while guaranteeing lower and upper level bounds despite abrupt flow variations. Based on insights from the behavior of the robust optimal MPC controller, a simple PI controller was developed.
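A hedged sketch of such a robust formulation in YALMIP (all numbers are placeholders, and the open-loop parameterization below is a simplification of what was actually studied):

% Robust buffer-tank MPC sketch: keep the level within bounds for every
% admissible inflow deviation while penalizing changes in the outflow
N = 10; Ts = 1; Atank = 5;               % horizon, sample time, tank cross-section area
y    = sdpvar(1, N+1);                   % tank level
qout = sdpvar(1, N);                     % outflow (decision variable)
dqin = sdpvar(1, N);                     % uncertain inflow deviation
y0 = 0.5; qin0 = 1; qprev = 1;           % current level, nominal inflow, previous outflow

Constraints = [y(1) == y0, uncertain(dqin), -0.15 <= dqin <= 0.15];
for k = 1:N
    Constraints = [Constraints, y(k+1) == y(k) + (Ts/Atank)*(qin0 + dqin(k) - qout(k))];
    Constraints = [Constraints, 0.1 <= y(k+1) <= 0.9, 0 <= qout(k) <= 2];
end
Objective = (qout(1) - qprev)^2;
for k = 2:N
    Objective = Objective + (qout(k) - qout(k-1))^2;   % penalize outflow variation
end
optimize(Constraints, Objective);

A feedback parameterization of the outflow would reduce conservatism; the open-loop variant above is only meant to convey the structure of the problem.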

Peter Rosander, Alf Isaksson, Johan Löfberg and Krister Forsman (2012) Practical control of surge tanks suffering from frequent inlet flow upsets. In Proceedings of the IFAC Conference on Advances in PID Control. Brescia, Italy. ((URL)) (BibTeX)

Peter Rosander defended his licentiate thesis in May 2012.

Peter Rosander (2012) Averaging level control in the presence of frequent inlet flow upsets. Licentiate thesis. ((URL)) (BibTeX)

Approximation of MPC control laws (in collaboration with Slovak University of Technology)

It was realized in the late 20th century that the optimal input for a given state, u(x(k)), can be computed explicitly for some MPC setups (mixed-integer linear and quadratic programming representable problems). This explicit map, computed using multi-parametric programming, is a piecewise affine function, which unfortunately can become very complex. For applications involving small process computers and/or fast sampling rates, storing and evaluating this function can become prohibitive. To combat this, one approach is to approximate the piecewise affine function with another function which is easier to represent and evaluate. In the project, we use methods from robust optimization to efficiently compute low-order polynomial approximations of piecewise affine functions, while guaranteeing that stability is preserved. In the figure below, we see a 7th-order polynomial approximation (right) of a piecewise affine control law (left).
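Roughly, and in our own simplified notation rather than that of the paper, the approximation problem can be stated as

\[
\text{find } p(x) = \sum_{|\alpha| \le d} c_{\alpha} x^{\alpha}
\quad \text{such that} \quad
\underline{u}_i(x) \le p(x) \le \overline{u}_i(x) \quad \forall x \in R_i, \ i = 1,\dots,M,
\]

where R_1, ..., R_M are the polyhedral regions of the explicit solution and the bounds characterize inputs that retain feasibility and decrease the MPC value function. Any polynomial satisfying these constraints then inherits the stability guarantee of the original control law.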

The details are available in

Michal Kvasnica, Johan Löfberg and Miroslav Fikar (2011) Stabilizing polynomial approximation of explicit MPC. Automatica, 47(10):2292-2297. ((URL)) (BibTeX)

Explicit MPC for linear parameter varying systems (in collaboration with ETH Zurich)

Explicit MPC controllers were developed in the early 21st century for increasingly complex setups. It started with simple nominal linear MPC problems, and was soon extended to hybrid systems, which could be addressed using multi-parametric mixed-integer programming, and to problems involving uncertainty, i.e., robust MPC.

In the paper

Thomas Besselmann, Johan Löfberg and Manfred Morari (2012) Explicit MPC for LPV systems: Stability and Optimality. IEEE Transactions on Automatic Control, 57(9):2322-2332. ((URL)) (BibTeX)

we present results on yet another extension of explicit MPC. We address the case where there is time-varying parametric uncertainty in the system, but the uncertainty can be measured. This is a situation which is typically addressed using gain-scheduling, and we show how it can be done in a structured and theoretically appealing way by employing dynamic programming and robust optimization.

Related MSc projects

Model Predictive Control for Active Magnetic Bearings, 2012. Joakim Lundh, performed at ABB Corporate Research

Efficient Modeling of Hybrid Systems, 2012. Jan Drgona, visiting student from Slovak University of Technology