Introduction
Model Predictive Control (MPC) theory was developed after 1960 as a special case of optimal control theory. In MPC the dynamic optimization problem is rewritten as a static optimization problem, following the receding (moving) horizon principle, and this static optimization problem is solved again at each time step.
MPC has the following properties:
- the control signal can be constrained
- time delays in the system are handled naturally
- sensor errors can be compensated for
- MIMO LTI systems can be handled with a simple implementation
We consider the discrete LTI system:
$$x_{k+1} = \Phi x_k + \Gamma u_k, \qquad y_k = C x_k$$
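As a concrete (hypothetical) example of such a model, the sketch below defines a discrete double integrator with sampling time $T_s$ in Python/numpy; the numbers are assumptions for illustration only:

```python
import numpy as np

# Hypothetical plant: discrete double integrator with sampling time Ts
Ts = 0.1
Phi = np.array([[1.0, Ts],
                [0.0, 1.0]])       # state transition matrix
Gamma = np.array([[0.5 * Ts**2],
                  [Ts]])           # input matrix
C = np.array([[1.0, 0.0]])         # only the first state (position) is measured
```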
The control objective at time $k$ is to minimize the following quadratic cost function:
$$J_k(u, x_k) = \sum_{j=1}^{N}\left\|e_{k+j|k}\right\|^2_{Q_j} + \left\|\Delta u_{k+j-1|k}\right\|^2_{R_j}$$
where
$e_{k}=y_k-y^{ref}_{k}$ is the error between the measured output and the reference trajectory,
$\Delta u_{k+j|k}=u_{k+j|k}-u_{k+j-1|k}$ is the change of the input signal between two steps.
At step $k$ we already know $u_{k-1}$, hence we can write
$$\begin{aligned}
u_{k|k} &= \Delta u_{k|k} + u_{k-1} \\
u_{k+1|k} &= \Delta u_{k+1|k} + \Delta u_{k|k} + u_{k-1} \\
&\;\;\vdots \\
u_{k+N_c-1|k} &= \Delta u_{k+N_c-1|k} + \dots + \Delta u_{k|k} + u_{k-1}
\end{aligned}$$
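In code this is just a cumulative sum of the planned increments added on top of the last applied input; a minimal numpy sketch with placeholder values:

```python
import numpy as np

# Placeholder values: last applied input u_{k-1} and Nc = 3 planned increments
u_prev = np.array([0.2])
dU = np.array([[0.1], [-0.05], [0.03]])   # Delta u_{k|k} ... Delta u_{k+Nc-1|k}

# u_{k+j|k} = u_{k-1} + (Delta u_{k|k} + ... + Delta u_{k+j|k})
U = u_prev + np.cumsum(dU, axis=0)
print(U)   # absolute control sequence over the control horizon
```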
The state predictions are as follows:
$$\begin{aligned}
x_{k+1|k} &= \Phi x_{k|k} + \Gamma\left(\Delta u_{k|k} + u_{k-1}\right) \\
x_{k+2|k} &= \Phi^2 x_{k|k} + (\Phi + I)\Gamma\,\Delta u_{k|k} + \Gamma\,\Delta u_{k+1|k} + (\Phi + I)\Gamma\,u_{k-1} \\
&\;\;\vdots \\
x_{k+N_c|k} &= \Phi^{N_c} x_{k|k} + \left(\Phi^{N_c-1} + \dots + \Phi + I\right)\Gamma\,\Delta u_{k|k} + \dots + \Gamma\,\Delta u_{k+N_c-1|k} + \left(\Phi^{N_c-1} + \dots + \Phi + I\right)\Gamma\,u_{k-1}
\end{aligned}$$
After some algebraic operations we can write the equations above in matrix form:
$$\begin{bmatrix} x_{k+1|k} \\ \vdots \\ x_{k+N_c|k} \\ x_{k+N_c+1|k} \\ \vdots \\ x_{k+N|k} \end{bmatrix}
=
\begin{bmatrix} \Phi \\ \vdots \\ \Phi^{N_c} \\ \Phi^{N_c+1} \\ \vdots \\ \Phi^{N} \end{bmatrix} x_k
+
\begin{bmatrix} \Gamma \\ \vdots \\ \sum_{i=0}^{N_c-1}\Phi^i\Gamma \\ \sum_{i=0}^{N_c}\Phi^i\Gamma \\ \vdots \\ \sum_{i=0}^{N-1}\Phi^i\Gamma \end{bmatrix} u_{k-1}
+
\begin{bmatrix}
\Gamma & \cdots & 0 \\
\vdots & \ddots & \vdots \\
\sum_{i=0}^{N_c-1}\Phi^i\Gamma & \cdots & \Gamma \\
\sum_{i=0}^{N_c}\Phi^i\Gamma & \cdots & \Phi\Gamma+\Gamma \\
\vdots & \ddots & \vdots \\
\sum_{i=0}^{N-1}\Phi^i\Gamma & \cdots & \sum_{i=0}^{N-N_c}\Phi^i\Gamma
\end{bmatrix}
\begin{bmatrix} \Delta u_{k|k} \\ \vdots \\ \Delta u_{k+N_c-1|k} \end{bmatrix}$$
$$y_{k+j|k} = C x_{k+j|k}$$
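A sketch of how these stacked matrices can be assembled in Python/numpy is given below; the helper name `build_prediction_matrices` and the convention that the returned $\Phi^*$, $\Gamma^*$ and $G_y$ already include the output map $C$ are my own assumptions for illustration:

```python
import numpy as np

def build_prediction_matrices(Phi, Gamma, C, N, Nc):
    """Stack the prediction equations so that
    y = Phi_star @ x_k + Gamma_star @ u_prev + Gy @ dU."""
    nx, nu = Gamma.shape
    ny = C.shape[0]

    # Powers of Phi and the partial sums S_j = sum_{i=0}^{j-1} Phi^i @ Gamma
    powers = [np.linalg.matrix_power(Phi, j) for j in range(N + 1)]
    S = [np.zeros((nx, nu))]
    for j in range(1, N + 1):
        S.append(S[j - 1] + powers[j - 1] @ Gamma)

    Phi_star = np.vstack([C @ powers[j] for j in range(1, N + 1)])
    Gamma_star = np.vstack([C @ S[j] for j in range(1, N + 1)])

    # Block (j, l) of Gy is the coefficient of Delta u_{k+l|k} in y_{k+j|k}
    Gy = np.zeros((N * ny, Nc * nu))
    for j in range(1, N + 1):
        for l in range(min(j, Nc)):
            Gy[(j - 1) * ny:j * ny, l * nu:(l + 1) * nu] = C @ S[j - l]
    return Phi_star, Gamma_star, Gy

# Example usage (with hypothetical Phi, Gamma, C, e.g. the double integrator above):
# Phi_star, Gamma_star, Gy = build_prediction_matrices(Phi, Gamma, C, N=10, Nc=3)
```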
Stacking the predicted outputs $y_{k+j|k} = C x_{k+j|k}$, $j = 1,\dots,N$, into a single vector $\mathbf{y}_k$, and denoting by $\Phi^*$, $\Gamma^*$ and $G_y$ the stacked matrices above multiplied block-wise by $C$, we can write the prediction compactly as
$$\mathbf{y}_k = \Phi^* x_k + \Gamma^* u_{k-1} + G_y\,\Delta\mathbf{u}_k$$
And the free-response error, i.e. the part of the tracking error that does not depend on $\Delta\mathbf{u}_k$, is
$$E_k = \mathbf{y}^{ref}_k - \Phi^* x_k - \Gamma^* u_{k-1}$$
The idea behind MPC is to rewrite the dynamic optimization problem as a static optimization problem:
$$J_k(u, x_k) = \left\|\mathbf{y}_k - \mathbf{y}^{ref}_k\right\|^2_{Q} + \left\|\Delta\mathbf{u}_k\right\|^2_{R}$$
where
$$\mathbf{e}_k = \begin{bmatrix} e_{k+1|k} \\ \vdots \\ e_{k+N|k} \end{bmatrix}, \qquad \Delta\mathbf{u}_k = \begin{bmatrix} \Delta u_{k|k} \\ \vdots \\ \Delta u_{k+N_c-1|k} \end{bmatrix}$$
$$Q = \begin{bmatrix} Q_1 & 0 & \cdots & 0 \\ 0 & Q_2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & Q_N \end{bmatrix}, \qquad R = \begin{bmatrix} R_1 & 0 & \cdots & 0 \\ 0 & R_2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & R_{N_c} \end{bmatrix}$$
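The block-diagonal weighting matrices can be assembled, for example, with `scipy.linalg.block_diag`; a minimal sketch with hypothetical per-step weights:

```python
import numpy as np
from scipy.linalg import block_diag

ny, nu, N, Nc = 1, 1, 10, 3                               # hypothetical dimensions and horizons
Q = block_diag(*[np.eye(ny) for _ in range(N)])           # Q_1 ... Q_N on the diagonal
R = block_diag(*[0.1 * np.eye(nu) for _ in range(Nc)])    # R_1 ... R_Nc on the diagonal
```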
Substituting the stacked output prediction and the free-response error into the cost function, we can deduce
$$\begin{aligned}
J_k(u, x_k) &= \left\|G_y\,\Delta\mathbf{u}_k - E_k\right\|^2_{Q} + \left\|\Delta\mathbf{u}_k\right\|^2_{R} \\
&= \left(\Delta\mathbf{u}_k^T G_y^T - E_k^T\right) Q \left(G_y\,\Delta\mathbf{u}_k - E_k\right) + \Delta\mathbf{u}_k^T R\,\Delta\mathbf{u}_k \\
&= \Delta\mathbf{u}_k^T\left[G_y^T Q\,G_y + R\right]\Delta\mathbf{u}_k - 2E_k^T Q\,G_y\,\Delta\mathbf{u}_k + E_k^T Q\,E_k \\
&= \tfrac{1}{2}\Delta\mathbf{u}_k^T H\,\Delta\mathbf{u}_k + f^T\Delta\mathbf{u}_k + \mathrm{const}
\end{aligned}$$
where
$$H = 2\left[G_y^T Q\,G_y + R\right]$$
$$f = -2\,G_y^T Q\,E_k$$
To get the optimal control signal under inequality constraints, we have to solve $\min_{\Delta \mathbf{u}_k}\left\{J_k(u,x_k)\right\}$. This is a well-known optimization problem called quadratic programming (QP). In Matlab the $quadprog$ function solves it; there are also C/C++ implementations, one that I know is the ACADO Toolkit. One disadvantage of this solution is the computation time: it forces a larger sampling time, which can degrade the stability of the closed-loop system.
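As an illustration of this step (not the Matlab or ACADO code), here is a sketch using the `cvxpy` modelling library with a simple input-rate constraint; the matrices $G_y$, $Q$, $R$, $E_k$ and the bound `du_max` are assumed to come from the steps above:

```python
import numpy as np
import cvxpy as cp

def mpc_step(Gy, Q, R, E, nu, du_max):
    """One constrained MPC step: minimize 0.5*dU'H dU + f'dU  s.t. |dU| <= du_max."""
    H = 2.0 * (Gy.T @ Q @ Gy + R)
    H = 0.5 * (H + H.T)                     # symmetrize against round-off
    f = -2.0 * Gy.T @ Q @ E

    dU = cp.Variable(Gy.shape[1])
    cost = 0.5 * cp.quad_form(dU, H) + f @ dU
    constraints = [cp.abs(dU) <= du_max]    # example input-rate constraint
    cp.Problem(cp.Minimize(cost), constraints).solve()

    return dU.value[:nu]                    # receding horizon: apply only the first increment
```

Additional constraints (for example absolute input or output limits expressed in terms of $\Delta\mathbf{u}_k$) can be appended to the `constraints` list in the same way.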
Differentiating the cost function above with respect to the decision variables $\Delta\mathbf{u}_k$ and setting the result equal to zero, the optimal control sequence is obtained as seen below:
$$\Delta\mathbf{u}_k = \left[G_y^T Q\,G_y + R\right]^{-1} G_y^T Q\,E_k$$
Note that formulating the controller in terms of the control increments ($\Delta\mathbf{u}_k$) gives offset-free control in steady state, since in steady state the change of the control signal between two steps is zero.
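In the unconstrained case this is just a linear solve; a minimal numpy sketch under the same assumptions as above:

```python
import numpy as np

def unconstrained_mpc_step(Gy, Q, R, E, nu):
    """Closed-form increment sequence: dU = (Gy'Q Gy + R)^{-1} Gy'Q E."""
    dU = np.linalg.solve(Gy.T @ Q @ Gy + R, Gy.T @ Q @ E)
    return dU[:nu]      # receding horizon: apply only the first increment
```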
Stability check
In the unconstrained case the control law is linear, hence stability can easily be established a posteriori by checking that the eigenvalues of the closed-loop system matrix $\Phi+\Gamma K$ are strictly inside the unit circle, where
$$K = \left[G_y^T Q\,G_y + R\right]^{-1} G_y^T Q$$
- $N_c$ is the control horizon
- $N$ is the prediction horizon
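A minimal sketch of this a posteriori check, assuming `Phi`, `Gamma` and a state-feedback gain `K` of compatible dimensions have already been extracted from the expressions above:

```python
import numpy as np

def is_stable(Phi, Gamma, K):
    """Check that all eigenvalues of the closed-loop matrix Phi + Gamma*K
    lie strictly inside the unit circle."""
    return bool(np.all(np.abs(np.linalg.eigvals(Phi + Gamma @ K)) < 1.0))
```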