The Basics of Model Predictive Control | Towards Data Science

In this article we will:
- introduce the basic idea behind model predictive control (MPC),
- set up and solve an optimal control problem (OCP) with CasADi,
- close the loop with a simple MPC algorithm,
- and touch on further topics such as recursive feasibility and stability.
1. Introduction
Model Predictive Control (MPC) is a popular control method in which an optimal control problem (OCP) is solved iteratively, with the initial state updated at each iteration.
Each OCP uses a model of the plant to compute an open-loop control sequence over a specified horizon. Because no model captures the plant with 100% accuracy, and because real-world systems are subject to noise and disturbances, only the first part of this open-loop sequence is applied; the state is then measured again and the OCP is re-solved. This closes the loop and introduces feedback.
The mathematics behind MPC is relatively simple and intuitive (especially compared with approaches such as robust control), and MPC controllers are straightforward to implement. They can also elegantly handle hard and soft constraints on the state and control (hard constraints must always be satisfied, while soft constraints are enforced through penalties in the cost), and such constraints appear in most practical applications. The main drawback is that the optimization problem must be solved "online" in real time, which can be an issue when controlling fast systems or when computational resources are limited.
1.2 Working Example
Throughout the article I will consider a double integrator as a working example. The continuous-time system reads:
\[
\begin{align*}
\dot x_1(t) &= x_2(t),\\
\dot x_2(t) &= u(t),
\end{align*}
\]
with \(t \in \mathbb{R}\) denoting continuous time. Here \(x_1\) is the position, \(x_2\) is the velocity, and \(u\) is the control input.

If we discretise it with a piecewise-constant (zero-order-hold) control and a sampling time of 0.1 seconds, we get the discrete-time system:
\[
\begin{align*}
x_{k+1} &= A x_k + B u_k,
\end{align*}
\]
with \(k \in \mathbb{Z}\), where
\[
\begin{align*}
A :=
\left(
\begin{array}{cc}
1 & 0.1\\
0 & 1
\end{array}
\right), \quad
B :=
\left(
\begin{array}{c}
0.005\\
0.1
\end{array}
\right),
\end{align*}
\]
and \(x_k \in \mathbb{R}^2\), \(u_k \in \mathbb{R}\).
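For reference, these matrices follow from the standard zero-order-hold (ZOH) discretisation formulas. Writing \(A_c\), \(B_c\) for the continuous-time matrices and \(\Delta t = 0.1\,\mathrm{s}\) for the sampling time, for this double integrator they evaluate to:
\[
A = e^{A_c \Delta t} =
\left(\begin{array}{cc} 1 & \Delta t \\ 0 & 1 \end{array}\right),
\qquad
B = \left(\int_0^{\Delta t} e^{A_c s}\, ds\right) B_c =
\left(\begin{array}{c} \Delta t^2/2 \\ \Delta t \end{array}\right).
\]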
You can use SciPy's cont2discrete function to obtain this discrete-time system, as follows:
import numpy as np
from scipy.signal import cont2discrete

# continuous-time double integrator: x1_dot = x2, x2_dot = u
A = np.array([[0, 1],[0, 0]])
B = np.array([[0],[1]])
C = np.array([[1, 0],[0, 1]])
D = np.array([[0, 0],[0, 0]])
dt = 0.1 # sampling time in seconds
discrete_system = cont2discrete((A, B, C, D), dt, method='zoh')
A_discrete, B_discrete, *_ = discrete_system
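As a quick sanity check (not part of the original snippet), you can print the discretised matrices and compare them with the values above:
print(A_discrete)  # approximately [[1, 0.1], [0, 1]]
print(B_discrete)  # approximately [[0.005], [0.1]]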
2. The Optimal Control Problem
We will consider the following discrete-time optimal control problem (OCP):
\[
\begin{equation}
\mathrm{OCP}(\bar x):\begin{cases}
\min\limits_{\mathbf{u},\mathbf{x}} \quad & \sum_{k=0}^{K-1} \left(x_k^{\top}Q x_k + u_k^{\top} R u_k \right) + x_{K}^{\top} Q_K x_{K} \\
\mathrm{subject\ to:}\quad & x_{k+1} = Ax_k + Bu_k, & k \in [0:K-1] & \dots (1) \\
\quad & x_0 = \bar x, & & \dots (2) \\
\quad & x_k \in [-1,1] \times (-\infty, \infty), & k \in [1:K] & \dots (3) \\
\quad & u_k \in [-1,1], & k \in [0:K-1] & \dots (4)
\end{cases}
\end{equation}
\]
where,
- \(K \in \mathbb{Z}_{\geq 0}\) denotes the (finite) prediction horizon over which we solve the OCP,
- \(k \in \mathbb{Z}\) denotes a discrete time step,
- \([p:q]\), with \(p, q \in \mathbb{Z}\), denotes the set of integers \(\{p, p+1, \dots, q\}\),
- \(\bar x \in \mathbb{R}^2\) denotes the initial state of the system,
- \(x_k \in \mathbb{R}^2\) denotes the state at step \(k\),
- \(u_k \in \mathbb{R}\) denotes the control at step \(k\),
- \(Q, Q_K \in \mathbb{R}^{2 \times 2}\) and \(R \in \mathbb{R}\) (a scalar) denote the cost matrices.
In addition, we let
- \(\mathbf{u} := (u_0, u_1, \dots, u_{K-1}) \in \mathbb{R}^{K}\) denote the control sequence,
- \(\mathbf{x} := (x_0, x_1, \dots, x_{K}) \in \mathbb{R}^{2(K+1)}\) denote the state sequence.
Finally, we will say that the pair \((\mathbf{u}^{*}, \mathbf{x}^{*})\) is a solution to \(\mathrm{OCP}(\bar{x})\) if it minimises the cost over all feasible pairs, that is,
\[
\begin{equation*}
J(\mathbf{u}^{*}, \mathbf{x}^{*}) \leq J(\mathbf{u}, \mathbf{x}), \quad \forall (\mathbf{u},\mathbf{x}) \in \Omega,
\end{equation*}
\]
where \(J: \mathbb{R}^{K} \times \mathbb{R}^{2(K+1)} \to \mathbb{R}\) denotes the cost function,
\[
\begin{equation*}
J(\mathbf{u},\mathbf{x}) := \left( \sum_{k=0}^{K-1} x_k^{\top} Q x_k + u_k^{\top} R u_k \right) + x_K^{\top} Q_K x_K,
\end{equation*}
\]
and \(\Omega\) denotes the set of all feasible pairs,
\[
\Omega := \left\{(\mathbf{u},\mathbf{x}) \in \mathbb{R}^{K} \times \mathbb{R}^{2(K+1)} : (1)\text{--}(4)\ \mathrm{hold}\right\}.
\]
Thus, the optimal control problem is to find a control sequence and state sequence, \((\mathbf{u}^{*}, \mathbf{x}^{*})\), that minimise the cost while satisfying the dynamics, the initial condition, and the constraints \(x_k \in [-1,1] \times (-\infty, \infty)\) and \(u_k \in [-1,1]\) for all \(k\). The cost function plays an important role in the resulting control: not only in the sense of making the controller behave well (for example, avoiding erratic control signals), but also in determining the equilibrium point to which the closed-loop state converges. More on this in Section 4.
Note that \(\mathrm{OCP}(\bar x)\) is parametrised by the initial state, \(\bar x\). This reflects the basic idea behind MPC: the optimal control problem is re-solved with an updated initial state at each iteration.
2.1 Coding the OCP Solver
CasADi's Opti stack makes it easy to set up and solve the OCP.
First, some definitions:
from casadi import *
n = 2 # state dimension
m = 1 # control dimension
K = 100 # prediction horizon
# an arbitrary initial state
x_bar = np.array([[0.5],[0.5]]) # 2 x 1 vector
# Linear cost matrices (we'll just use identities)
Q = np.array([[1. , 0],
[0. , 1. ]])
R = np.array([[1]])
Q_K = Q
# Constraints for all k
u_max = 1
x_1_max = 1
x_1_min = -1
We now declare the problem's decision variables:
opti = Opti()
x_tot = opti.variable(n, K+1) # State trajectory
u_tot = opti.variable(m, K) # Control trajectory
Next, we enforce the dynamic constraints and accumulate the cost:
# Specify the initial condition
opti.subject_to(x_tot[:, 0] == x_bar)
cost = 0
for k in range(K):
    # add dynamic constraints
    x_tot_next = get_x_next_linear(x_tot[:, k], u_tot[:, k])
    opti.subject_to(x_tot[:, k+1] == x_tot_next)
    # add to the cost
    cost += mtimes([x_tot[:, k].T, Q, x_tot[:, k]]) \
            + mtimes([u_tot[:, k].T, R, u_tot[:, k]])
# terminal cost
cost += mtimes([x_tot[:, K].T, Q_K, x_tot[:, K]])
def get_x_next_linear(x, u):
    # discrete-time double-integrator dynamics (ZOH, dt = 0.1 s)
    A = np.array([[1. , 0.1],
                  [0. , 1. ]])
    B = np.array([[0.005],
                  [0.1 ]])
    return mtimes(A, x) + mtimes(B, u)
The code mtimes([x_tot[:,k].T, Q, x_tot[:,k]]) performs the matrix multiplication \(x_k^{\top} Q x_k\).
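To see what mtimes computes, here is a tiny standalone check with made-up numeric values (not part of the controller code):
import numpy as np
from casadi import mtimes
x = np.array([[0.5], [0.5]])
Q = np.eye(2)
# x^T Q x = 0.5**2 + 0.5**2 = 0.5
print(float(mtimes([x.T, Q, x])))  # 0.5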
Now we add the control and state constraints,
# constrain the control
opti.subject_to(opti.bounded(-u_max, u_tot, u_max))
# constrain the position only
opti.subject_to(opti.bounded(x_1_min, x_tot[0,:], x_1_max))
and solve:
# Say we want to minimise the cost and specify the solver (ipopt)
opts = {"ipopt.print_level": 0, "print_time": 0}
opti.minimize(cost)
opti.solver("ipopt", opts)
solution = opti.solve()
# Get solution
x_opt = solution.value(x_tot)
u_opt = solution.value(u_tot)
We can plot the solution with the accompanying repository's plot_solution() function.
from MPC_tutorial import plot_solution
plot_solution(x_opt, u_opt.reshape(1,-1)) # must reshape u_opt to (1,K)
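The plot_solution helper comes from the article's accompanying repository. If you do not have it, a minimal matplotlib sketch with the same assumed signature (states of shape 2 x (K+1), controls of shape 1 x K) could look like this:
import matplotlib.pyplot as plt

def plot_solution(x, u):
    # x: 2 x (K+1) state trajectory, u: 1 x K control trajectory (assumed shapes)
    fig, (ax_x, ax_u) = plt.subplots(2, 1, sharex=True)
    ax_x.plot(x[0, :], label='x_1')
    ax_x.plot(x[1, :], label='x_2')
    ax_x.legend()
    ax_u.step(range(u.shape[1]), u[0, :], where='post', label='u')
    ax_u.set_xlabel('time step k')
    ax_u.legend()
    plt.show()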

3. Model Predictive Control
Solving \(\mathrm{OCP}(\bar x)\) gives us an open-loop control sequence, \(\mathbf{u}^{*}\). We now close the loop by solving \(\mathrm{OCP}(\bar x)\) iteratively with an updated initial state (this is the MPC algorithm):
\[
\begin{aligned}
&\textbf{Input:} \quad \mathbf{x}^{\mathrm{init}} \in \mathbb{R}^2 \\
&\quad \bar x \gets \mathbf{x}^{\mathrm{init}} \\
&\textbf{for } k \in [0:\infty) \textbf{:} \\
&\quad (\mathbf{x}^{*}, \mathbf{u}^{*}) \gets \arg\min \mathrm{OCP}(\bar x)\\
&\quad \mathrm{apply}\ u_0^{*}\ \mathrm{to\ the\ system} \\
&\quad \bar x \gets \mathrm{measured\ state\ at\ } k+1 \\
&\textbf{end for}
\end{aligned}
\]
3.1 Coding the MPC Algorithm
The rest is quite straightforward. First, I will wrap all of our previous code in a function:
def solve_OCP(x_bar, K):
    ...
    return x_opt, u_opt
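For reference, here is one way the snippets from Section 2.1 could be assembled into this function. This is only a sketch, using the same matrices and bounds as before; the version in the article's repository may differ:
def solve_OCP(x_bar, K):
    # cost matrices and bounds (same values as in Section 2.1)
    Q = np.eye(2)
    R = np.array([[1.]])
    Q_K = Q
    u_max, x_1_min, x_1_max = 1, -1, 1

    opti = Opti()
    x_tot = opti.variable(2, K+1)  # state trajectory
    u_tot = opti.variable(1, K)    # control trajectory

    opti.subject_to(x_tot[:, 0] == x_bar)
    cost = 0
    for k in range(K):
        opti.subject_to(x_tot[:, k+1] == get_x_next_linear(x_tot[:, k], u_tot[:, k]))
        cost += mtimes([x_tot[:, k].T, Q, x_tot[:, k]]) \
                + mtimes([u_tot[:, k].T, R, u_tot[:, k]])
    cost += mtimes([x_tot[:, K].T, Q_K, x_tot[:, K]])

    opti.subject_to(opti.bounded(-u_max, u_tot, u_max))
    opti.subject_to(opti.bounded(x_1_min, x_tot[0, :], x_1_max))

    opti.minimize(cost)
    opti.solver("ipopt", {"ipopt.print_level": 0, "print_time": 0})
    solution = opti.solve()
    return solution.value(x_tot), solution.value(u_tot)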
Note that it is parametrised by the initial state, \(\bar x\), and the prediction horizon, \(K\). The MPC loop then reads:
x_init = np.array([[0.5],[0.5]]) # 2 x 1 vector
K = 10
number_of_iterations = 150 # must of course be finite!
# matrices of zeros with the correct sizes to store the closed loop
u_cl = np.zeros((1, number_of_iterations))
x_cl = np.zeros((2, number_of_iterations + 1))
x_cl[:, 0] = x_init[:, 0]
x_bar = x_init
for i in range(number_of_iterations):
    _, u_opt = solve_OCP(x_bar, K)
    u_opt_first_element = u_opt[0]
    # save closed loop x and u
    u_cl[:, i] = u_opt_first_element
    x_cl[:, i+1] = np.squeeze(get_x_next_linear(x_bar,
                                                u_opt_first_element))
    # update initial state
    x_bar = get_x_next_linear(x_bar, u_opt_first_element)
Again, we can plot the closed-loop solution:
plot_solution(x_cl, u_cl)

Note that here I "measure" the state of the plant through the get_x_next_linear() function. In other words, I assume our model is 100% accurate.
Here is the plot of the resulting closed-loop trajectory.
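If you want to see the effect of model mismatch on this closed loop, one simple modification (not from the article) is to perturb the "measured" state with a bit of noise before re-solving the OCP, e.g. inside the MPC loop:
# hypothetical tweak: add small process noise to the state update
w = 0.01 * np.random.randn(2, 1)
x_bar = np.array(get_x_next_linear(x_bar, u_opt_first_element)) + w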

4. Further Topics
4.1 Recursive Feasibility and Stability
The two most important properties of an MPC controller are recursive feasibility of the iteratively updated OCP and stability of the closed loop. In other words: if I have solved the OCP at time \(k\), is the OCP guaranteed to have a solution at time \(k+1\)? And is the closed loop asymptotically stable?
Ensuring that the MPC controller has these two properties involves carefully designing the cost and constraints, and choosing a long enough prediction horizon. In our example, recall that the matrices in the cost function were chosen as:
\[
Q = \left( \begin{array}{cc}
1 & 0\\
0 & 1
\end{array}
\right), \quad Q_K = \left( \begin{array}{cc}
1 & 0\\
0 & 1
\end{array}
\right), \quad R = 1.
\]
In other words, the OCP penalises the state's distance from the origin and thus drives the system there. As you may have guessed, if the prediction horizon \(K\) is too small, these properties may be lost. (You can explore this by choosing a small \(K\) in the code.)
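As a small experiment (not from the article), you could wrap the MPC loop from Section 3.1 in a helper and compare the closed-loop behaviour for several horizons:
def run_mpc(x_init, K, number_of_iterations=150):
    # hypothetical wrapper around the MPC loop from Section 3.1
    u_cl = np.zeros((1, number_of_iterations))
    x_cl = np.zeros((2, number_of_iterations + 1))
    x_cl[:, 0] = x_init[:, 0]
    x_bar = x_init
    for i in range(number_of_iterations):
        _, u_opt = solve_OCP(x_bar, K)
        u_cl[:, i] = u_opt[0]
        x_bar = get_x_next_linear(x_bar, u_opt[0])
        x_cl[:, i+1] = np.squeeze(x_bar)
    return x_cl, u_cl

# compare a few prediction horizons
for K_test in [2, 5, 20, 100]:
    x_cl, u_cl = run_mpc(np.array([[0.5], [0.5]]), K_test)
    plot_solution(x_cl, u_cl)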
4.2 Other Flavours of MPC
MPC is an active research field and there are many interesting directions you can explore.
What if the full state cannot be measured? This relates to state estimation and output MPC. What if I am not interested in asymptotic stability? This (often) relates to economic MPC. How do I make the controller robust to noise and disturbances? There are a few ways to deal with this, with tube MPC probably being the best known.
Future articles may focus on some of these topics.
5. Further Reading
Here are some popular and thorough books on MPC.
[1] Grüne, L., & Pannek, J. (2016). Nonlinear Model Predictive Control.
[2] Rawlings, J. B., Mayne, D. Q., & Diehl, M. (2020). Model Predictive Control: Theory, Computation, and Design.
[3] Kouvaritakis, B., & Cannon, M. (2016). Model Predictive Control: Classical, Robust and Stochastic.



