System Controlling: Model predictive control

Saturday, November 17, 2012

Model predictive control

Introduction

Model Predictive Control (MPC) theory was developed after 1960 as a special case of optimal control theory. In MPC the dynamic optimization problem is rewritten as a static optimization problem, and a receding (moving) control horizon is used: the static optimization problem is solved anew at each time step.
MPC has the following properties:
  • the control signal can be constrained
  • time delays in the system are accommodated
  • sensor errors can be compensated
  • simple implementation for MIMO and LTI systems
We consider the discrete LTI system:
$$x_{k+1}=\Phi x_k+\Gamma u_k$$
$$y_k=Cx_k$$
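As a concrete, hypothetical example, the sketch below sets up such a discrete LTI model in Python/NumPy for a sampled double integrator; the matrices and the sampling time are my own illustrative choices, not taken from any particular plant.

```python
import numpy as np

# Hypothetical plant: a double integrator sampled with Ts = 0.1 s
# (illustrative values only).
Ts = 0.1
Phi = np.array([[1.0, Ts],
                [0.0, 1.0]])      # state transition matrix
Gamma = np.array([[0.5 * Ts**2],
                  [Ts]])          # input matrix
C = np.array([[1.0, 0.0]])        # we measure the position only

def plant_step(x, u):
    """One step of x_{k+1} = Phi x_k + Gamma u_k, y_k = C x_k."""
    return Phi @ x + Gamma @ u, C @ x
```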
The control objective at time $k$ is to minimize the following quadratic cost function:
$$J_k(u,x_k)=\sum_{j=1}^{N}\left(\left\|e_{k+j|k}\right\|^2_{Q_j}+\left\|\Delta u_{k+j-1|k}\right\|^2_{R_j}\right)$$

where
$e_{k+j|k}=y^{ref}_{k+j}-y_{k+j|k}$ is the error between the reference trajectory and the predicted output,
$\Delta u_{k+j|k}=u_{k+j|k}-u_{k+j-1|k}$ is the change of the input signal between two steps.  
At step $k$ we already know $u_{k-1}$, hence we can write
$$\begin{aligned}
u_{k|k}&=\Delta u_{k|k}+u_{k-1}\\
u_{k+1|k}&=\Delta u_{k+1|k}+\Delta u_{k|k}+u_{k-1}\\
&\ \,\vdots\\
u_{k+N_c-1|k}&=\Delta u_{k+N_c-1|k}+\cdots+\Delta u_{k|k}+u_{k-1}
\end{aligned}$$

The state predictions are as follows:
$$\begin{aligned}
x_{k+1|k}&=\Phi x_{k|k}+\Gamma\left(\Delta u_{k|k}+u_{k-1}\right)\\
x_{k+2|k}&=\Phi^2 x_{k|k}+(\Phi+I)\Gamma\Delta u_{k|k}+\Gamma\Delta u_{k+1|k}+(\Phi+I)\Gamma u_{k-1}\\
&\ \,\vdots\\
x_{k+N_c|k}&=\Phi^{N_c}x_{k|k}+\left(\Phi^{N_c-1}+\cdots+\Phi+I\right)\Gamma\Delta u_{k|k}+\cdots+\Gamma\Delta u_{k+N_c-1|k}+\left(\Phi^{N_c-1}+\cdots+\Phi+I\right)\Gamma u_{k-1}
\end{aligned}$$

After some algebraic manipulation we can write the equations above in matrix form:
$$\begin{bmatrix}x_{k+1|k}\\ \vdots\\ x_{k+N_c|k}\\ x_{k+N_c+1|k}\\ \vdots\\ x_{k+N|k}\end{bmatrix}=
\begin{bmatrix}\Phi\\ \vdots\\ \Phi^{N_c}\\ \Phi^{N_c+1}\\ \vdots\\ \Phi^{N}\end{bmatrix}x_k+
\begin{bmatrix}\Gamma\\ \vdots\\ \sum_{i=0}^{N_c-1}\Phi^i\Gamma\\ \sum_{i=0}^{N_c}\Phi^i\Gamma\\ \vdots\\ \sum_{i=0}^{N-1}\Phi^i\Gamma\end{bmatrix}u_{k-1}+
\begin{bmatrix}\Gamma&\cdots&0\\ \vdots&&\vdots\\ \sum_{i=0}^{N_c-1}\Phi^i\Gamma&\cdots&\Gamma\\ \sum_{i=0}^{N_c}\Phi^i\Gamma&\cdots&\Phi\Gamma+\Gamma\\ \vdots&&\vdots\\ \sum_{i=0}^{N-1}\Phi^i\Gamma&\cdots&\sum_{i=0}^{N-N_c}\Phi^i\Gamma\end{bmatrix}
\begin{bmatrix}\Delta u_{k|k}\\ \vdots\\ \Delta u_{k+N_c-1|k}\end{bmatrix}$$

$$y_{k+j|k}=Cx_{k+j|k}$$


Combining the two equations above, we can write the following:
$$\mathbf{y}_k=\boldsymbol{\Phi}x_k+\boldsymbol{\Gamma}u_{k-1}+G_y\Delta\mathbf{u}_k$$
where the bold quantities are the stacked prediction vectors and matrices from the matrix equation above, with the output matrix $C$ absorbed into $\boldsymbol{\Phi}$, $\boldsymbol{\Gamma}$ and $G_y$.
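One possible way to build these stacked matrices in NumPy is sketched below; the names F (free response, i.e. $\boldsymbol{\Phi}$), S (effect of the known $u_{k-1}$, i.e. $\boldsymbol{\Gamma}$) and Gy are mine, and the plant matrices are assumed to come from the example above.

```python
import numpy as np

def prediction_matrices(Phi, Gamma, C, N, Nc):
    """Build the stacked output-prediction matrices F, S and Gy."""
    n = Phi.shape[0]          # number of states
    m = Gamma.shape[1]        # number of inputs
    p = C.shape[0]            # number of outputs

    # powers of Phi and the partial sums  sum_{i=0}^{j} Phi^i Gamma
    powers = [np.linalg.matrix_power(Phi, j) for j in range(N + 1)]
    sums = [Gamma.copy()]
    for j in range(1, N):
        sums.append(sums[-1] + powers[j] @ Gamma)

    F = np.vstack([C @ powers[j] for j in range(1, N + 1)])      # free response
    S = np.vstack([C @ sums[j - 1] for j in range(1, N + 1)])    # effect of u_{k-1}

    Gy = np.zeros((N * p, Nc * m))
    for j in range(1, N + 1):                 # prediction step
        for l in range(min(j, Nc)):           # index of Delta u_{k+l|k}
            Gy[(j - 1) * p: j * p, l * m: (l + 1) * m] = C @ sums[j - 1 - l]
    return F, S, Gy
```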

And the error is
$$E_k=\mathbf{y}^{ref}_k-\boldsymbol{\Phi}x_k-\boldsymbol{\Gamma}u_{k-1}$$


The idea behind MPC is to rewrite the dynamic optimization problem as a static optimization problem:
$$J_k(u,x_k)=\left\|\mathbf{y}_k-\mathbf{y}^{ref}_k\right\|^2_{\mathbf{Q}}+\left\|\Delta\mathbf{u}_k\right\|^2_{\mathbf{R}}$$

where
$$E_k=\begin{bmatrix}e_{k+1|k}\\ \vdots\\ e_{k+N|k}\end{bmatrix},\qquad
\Delta\mathbf{u}_k=\begin{bmatrix}\Delta u_{k|k}\\ \vdots\\ \Delta u_{k+N_c-1|k}\end{bmatrix}$$

$$\mathbf{Q}=\begin{bmatrix}Q_1&0&\cdots&0\\0&Q_2&\cdots&0\\ \vdots&\vdots&\ddots&\vdots\\0&0&\cdots&Q_N\end{bmatrix},\qquad
\mathbf{R}=\begin{bmatrix}R_1&0&\cdots&0\\0&R_2&\cdots&0\\ \vdots&\vdots&\ddots&\vdots\\0&0&\cdots&R_{N_c}\end{bmatrix}$$
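A minimal sketch of these stacked weights, assuming identical per-step weights $Q_j=Q_1$ and $R_j=R_1$ (the formulation above allows them to differ per step); the horizons and weight values here are arbitrary illustrative choices.

```python
import numpy as np

p, m = 1, 1            # output and input dimensions of the example plant
N, Nc = 20, 5          # prediction and control horizon (arbitrary choice)
Q1 = 1.0 * np.eye(p)   # per-step output weight
R1 = 0.1 * np.eye(m)   # per-step input-increment weight
Qbar = np.kron(np.eye(N), Q1)    # block diag(Q1, ..., QN)
Rbar = np.kron(np.eye(Nc), R1)   # block diag(R1, ..., R_Nc)
```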
 
Substituting the prediction and error expressions above into the cost function, we can deduce
$$\begin{aligned}
J_k(u,x_k)&=\left\|G_y\Delta\mathbf{u}_k-E_k\right\|^2_{\mathbf{Q}}+\left\|\Delta\mathbf{u}_k\right\|^2_{\mathbf{R}}\\
&=\left[\Delta\mathbf{u}_k^T G_y^T-E_k^T\right]\mathbf{Q}\left[G_y\Delta\mathbf{u}_k-E_k\right]+\Delta\mathbf{u}_k^T\mathbf{R}\Delta\mathbf{u}_k\\
&=\Delta\mathbf{u}_k^T\left[G_y^T\mathbf{Q}G_y+\mathbf{R}\right]\Delta\mathbf{u}_k-2E_k^T\mathbf{Q}G_y\Delta\mathbf{u}_k+E_k^T\mathbf{Q}E_k\\
&=\tfrac{1}{2}\Delta\mathbf{u}_k^T H\Delta\mathbf{u}_k+f^T\Delta\mathbf{u}_k+\mathrm{const}
\end{aligned}$$

where
$$H=2\left[G_y^T\mathbf{Q}G_y+\mathbf{R}\right]$$

$$f=-2G_y^T\mathbf{Q}E_k$$
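Continuing the sketches above (and reusing Phi, Gamma, C, N, Nc, p, Qbar and Rbar from them), $E_k$, $H$ and $f$ could be assembled like this; the current state, previous input and constant reference are again illustrative values.

```python
import numpy as np

# Assemble E_k, H and f for the current state and the previous input,
# reusing the plant, horizons and weights from the earlier sketches.
F, S, Gy = prediction_matrices(Phi, Gamma, C, N, Nc)

x_k = np.array([[0.5], [0.0]])        # current state estimate (illustrative)
u_prev = np.array([[0.0]])            # previously applied input u_{k-1}
y_ref = np.ones((N * p, 1))           # constant reference over the horizon

E_k = y_ref - F @ x_k - S @ u_prev    # stacked tracking error
H = 2.0 * (Gy.T @ Qbar @ Gy + Rbar)   # quadratic (Hessian) term
f = -2.0 * Gy.T @ Qbar @ E_k          # linear term
```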

To obtain the optimal control signal subject to inequality constraints, we have to solve $min_{\Delta u_k}\left\{J_k(u,x_k)\right\}$. There is a well-known method for this kind of optimization problem called quadratic programming. In Matlab the $quadprog$ function solves such problems, and there are also C/C++ implementations; one that I know is the ACADO Toolkit. One disadvantage of this approach is the computation time: the sampling time has to be chosen larger, which can degrade the stability of the system.
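As a sketch of the constrained case in Python (in place of Matlab's $quadprog$), one could hand the QP to a general-purpose solver such as scipy.optimize.minimize with the SLSQP method; the bound $|\Delta u_i|\le 0.5$ is an arbitrary illustrative constraint, and a dedicated QP solver would normally be faster.

```python
import numpy as np
from scipy.optimize import minimize

# Solve  min 0.5 du' H du + f' du  subject to |du_i| <= du_max,
# using a general NLP solver as a stand-in for a dedicated QP solver.
du_max = 0.5
fvec = f.ravel()
res = minimize(fun=lambda du: 0.5 * du @ H @ du + fvec @ du,
               x0=np.zeros(Nc * m),
               jac=lambda du: H @ du + fvec,
               method="SLSQP",
               bounds=[(-du_max, du_max)] * (Nc * m))
du_opt = res.x.reshape(Nc, m)
u_k = u_prev + du_opt[:1].T   # receding horizon: apply only the first increment
```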
Differentiating the cost function with respect to the decision variables $\Delta\mathbf{u}_k$ and setting the result equal to zero, the optimal control sequence (without the inequality constraints) is obtained as seen below:
$$\Delta\mathbf{u}_k=\left[G_y^T\mathbf{Q}G_y+\mathbf{R}\right]^{-1}G_y^T\mathbf{Q}E_k$$
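Numerically, this unconstrained optimum is usually computed by solving the linear system rather than forming the inverse; continuing the sketch above:

```python
import numpy as np

# Unconstrained optimum: solve the linear system instead of inverting.
du_unconstrained = np.linalg.solve(Gy.T @ Qbar @ Gy + Rbar, Gy.T @ Qbar @ E_k)
```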

Note that formulating the controller in terms of the control increments ($\Delta u_k$) gives offset-free control in steady state: in steady state the change of the control signal between two steps is zero, while the control signal itself keeps the value needed to track the reference.

Stability check

Stability can easily be checked a posteriori by verifying that the eigenvalues of the closed-loop system matrix $\Phi+\Gamma K$ are strictly inside the unit circle, where
$$K=\left[G_y^T\mathbf{Q}G_y+\mathbf{R}\right]^{-1}G_y^T\mathbf{Q}$$
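A rough way to carry out this check numerically, continuing the sketches above, is shown below. Reading $K$ as the map from $E_k$ to the increment sequence, I take its first block row together with the stacked free-response matrix F as an effective state-feedback gain; this neglects the $u_{k-1}$ dynamics, so it is only an approximate a posteriori check under that assumption, not a full closed-loop analysis.

```python
import numpy as np

# Approximate a posteriori stability check (assumes the effective state
# feedback is the first block row of K acting through F, and neglects
# the u_{k-1} dynamics).
K = np.linalg.solve(Gy.T @ Qbar @ Gy + Rbar, Gy.T @ Qbar)  # maps E_k to the increment sequence
K1 = K[:m, :]                  # gain for the first increment Delta u_{k|k}
K_fb = -K1 @ F                 # effective state-feedback gain (assumption)
eigvals = np.linalg.eigvals(Phi + Gamma @ K_fb)
print("closed loop stable:", bool(np.all(np.abs(eigvals) < 1.0)))
```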

$N_c$ is the control horizon
$N$ is the prediction horizon
