1/28/2011

[Optimal Control] Finite Horizon LQR with Penalty on Final State

Consider the following LQR problem.

Cost function:
\[{V_N}({x_0},u) := \sum\limits_{i = 0}^{N - 1} \underbrace{l(x(i),u(i))}_{\text{stage cost}} + \underbrace{{l_N}(x(N))}_{\text{terminal cost}}\]
where
\[\left\{ \begin{array}{l}
u := \{ u(0),u(1),\ldots,u(N - 1)\} \\
l(x,u) := \frac{1}{2}({x^T}Qx + {u^T}Ru)\\
{l_N}(x) := \frac{1}{2}{x^T}{P_f}x
\end{array} \right.\]
We further assume $Q \ge 0$ (positive semi-definite) and $R > 0$ (positive definite).

Our goal: find $u^*=\{u^*(0),u^*(1),\ldots,u^*(N-1)\}$ that solves
\[\begin{array}{l}
\mathop {\min }\limits_u {V_N}\left( {x\left( 0 \right),u} \right)\\
\text{s.t.}\;\;{x^ + } = Ax + Bu
\end{array}\]
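As a point of reference, here is a minimal sketch (not from the original post; the function name and interface are illustrative) of how $V_N(x_0,u)$ could be evaluated numerically for a candidate input sequence:

```python
import numpy as np

def lqr_cost(A, B, Q, R, Pf, x0, u_seq):
    """Evaluate V_N(x0, u) = sum_k 0.5*(x^T Q x + u^T R u) + 0.5 * x_N^T Pf x_N."""
    x, cost = np.asarray(x0, dtype=float), 0.0
    for u in u_seq:
        u = np.atleast_1d(u)
        cost += 0.5 * (x @ Q @ x + u @ R @ u)   # stage cost l(x, u)
        x = A @ x + B @ u                        # state update x^+ = A x + B u
    return cost + 0.5 * x @ Pf @ x               # terminal cost l_N(x_N)
```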

We now apply Backward Dynamic Programming: starting from the final state $x(N)$, we recover the optimal solution stage by stage, working backward. Observe that
\[{V_N}({x_0},u) := l({x_0},{u_0}) + l({x_1},{u_1}) + \cdots + \underbrace{l({x_{N - 1}},{u_{N - 1}}) + {l_N}({x_N})}_{u_{N-1}\ \text{affects only these terms!}}\]
By the Principle of Optimality, the optimal solution of each tail subproblem must be part of the overall optimal solution, so we first optimize the following subproblem:
\[\begin{array}{l}
\mathop{\min}\limits_{u_{N - 1}} \; l({x_{N - 1}},{u_{N - 1}}) + {l_N}({x_N})\\
\text{s.t.}\quad {x_N} = A{x_{N - 1}} + B{u_{N - 1}}
\end{array}\]
Substituting the constraint into the cost, we obtain the following unconstrained optimization problem:
\[\small \begin{array}{l}
\left\{ \begin{array}{l}
\mathop{\min}\limits_{u_{N - 1}} \; l({x_{N - 1}},{u_{N - 1}}) + {l_N}({x_N})\\
\text{s.t.}\quad {x_N} = A{x_{N - 1}} + B{u_{N - 1}}
\end{array} \right.\\
\Rightarrow \mathop{\min}\limits_{u_{N - 1}} \frac{1}{2}\left( x_{N - 1}^T Q x_{N - 1} + u_{N - 1}^T R u_{N - 1} \right) + \frac{1}{2}{\left( A{x_{N - 1}} + B{u_{N - 1}} \right)^T}{P_f}\left( A{x_{N - 1}} + B{u_{N - 1}} \right)\\
\Rightarrow \mathop{\min}\limits_{u_{N - 1}} \frac{1}{2}\left[ x_{N - 1}^T Q x_{N - 1} + \underbrace{u_{N - 1}^T R u_{N - 1}}_{V_1} + \underbrace{{\left( A{x_{N - 1}} + B{u_{N - 1}} \right)^T}{P_f}\left( A{x_{N - 1}} + B{u_{N - 1}} \right)}_{V_2} \right] \quad (*)
\end{array}\]
We can now make use of the following result.
------------
FACT:
\[\left\{ \begin{array}{l}
{V_1} = \frac{1}{2}{\left( {x - a} \right)^T}\Phi \left( {x - a} \right)\\
{V_2} = \frac{1}{2}{\left( {\Theta x - b} \right)^T}\Gamma \left( {\Theta x - b} \right)
\end{array} \right.\]where $\Phi$ is positive definite and $\Gamma$ is positive semi-definite. Then their sum can be written as \[V = {V_1} + {V_2} = \frac{1}{2}{\left( {x - v} \right)^T}\Omega \left( {x - v} \right) + d\]where
\[\left\{ \begin{array}{l}
\Omega  = \Phi  + {\Theta ^T}\Gamma \Theta \\
v = {\left( {\Phi  + {\Theta ^T}\Gamma \Theta } \right)^{ - 1}}\left( {\Phi a + {\Theta ^T}\Gamma b} \right)\\
d = V\left( v \right)
\end{array} \right.\]
------------
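A quick way to see why this holds (a sketch): $V = V_1 + V_2$ is a quadratic function of $x$ with Hessian $\Omega = \Phi + \Theta^T \Gamma \Theta > 0$, so its unique minimizer $v$ satisfies $\nabla V(v) = 0$, i.e.,
\[\Phi (v - a) + \Theta^T \Gamma (\Theta v - b) = 0 \;\Rightarrow\; \left( \Phi + \Theta^T \Gamma \Theta \right) v = \Phi a + \Theta^T \Gamma b,\]
and any quadratic function with Hessian $\Omega$ and minimizer $v$ can be written as $\frac{1}{2}(x - v)^T \Omega (x - v)$ plus its minimum value $d = V(v)$.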
Matching coefficients between $(*)$ and the FACT above (with the decision variable $u_{N-1}$ playing the role of $x$ in the FACT), we identify
\[\Phi = R;\quad \Theta = B;\quad \Gamma = {P_f};\quad v = u_{N - 1}^*;\quad b = -A{x_{N - 1}};\quad a = 0
\]Hence the optimal solution $u_{N-1}^*$ at this stage is
\[u_{N - 1}^* = \underbrace { - {{\left( {R + {B^T}{P_f}B} \right)}^{ - 1}}{B^T}{P_f}A}_{{K_{N - 1}}}{x_{N - 1}} = {K_{N - 1}}{x_{N - 1}}
\]and the optimal cost at this stage (also called the optimal cost to go) is
\[V_{N - 1}^*\left( {{x_{N - 1}}} \right) = d + \frac{1}{2}x_{N - 1}^TQx_{N - 1}^{}
\]where $d$ is the constant term from the FACT above, computed as follows:
\[\small \begin{array}{l}
d = V(v) = V_1(v) + V_2(v)\\
\quad = \frac{1}{2}{\left( v - a \right)^T}\Phi \left( v - a \right) + \frac{1}{2}{\left( \Theta v - b \right)^T}\Gamma \left( \Theta v - b \right)\\
\quad = \frac{1}{2}{\left( {K_{N - 1}}{x_{N - 1}} \right)^T}R\left( {K_{N - 1}}{x_{N - 1}} \right) + \frac{1}{2}{\left( \left( A + B{K_{N - 1}} \right){x_{N - 1}} \right)^T}{P_f}\left( \left( A + B{K_{N - 1}} \right){x_{N - 1}} \right)\\
\quad = \frac{1}{2}x_{N - 1}^T\left[ K_{N - 1}^T R{K_{N - 1}} + {\left( A + B{K_{N - 1}} \right)^T}{P_f}\left( A + B{K_{N - 1}} \right) \right]{x_{N - 1}}
\end{array}\]
Hence the optimal cost to go is
\[\small \begin{array}{l}
V_{N - 1}^*\left( {x_{N - 1}} \right) = d + \frac{1}{2}x_{N - 1}^T Q x_{N - 1}\\
\quad = \frac{1}{2}x_{N - 1}^T\left[ K_{N - 1}^T R{K_{N - 1}} + {\left( A + B{K_{N - 1}} \right)^T}{P_f}\left( A + B{K_{N - 1}} \right) \right]{x_{N - 1}} + \frac{1}{2}x_{N - 1}^T Q x_{N - 1}\\
\quad = \frac{1}{2}x_{N - 1}^T\left[ Q + K_{N - 1}^T R{K_{N - 1}} + {\left( A + B{K_{N - 1}} \right)^T}{P_f}\left( A + B{K_{N - 1}} \right) \right]{x_{N - 1}}
\end{array}\]
Substituting the optimal solution $u_{N - 1}^* = {K_{N - 1}}{x_{N - 1}} = -{\left( R + {B^T}{P_f}B \right)^{ - 1}}{B^T}{P_f}A{x_{N - 1}}$ into the optimal cost to go, we obtain
\[\small \begin{array}{l}
V_{N - 1}^*\left( {x_{N - 1}} \right) = \frac{1}{2}x_{N - 1}^T\left[ Q + K_{N - 1}^T R{K_{N - 1}} + {\left( A + B{K_{N - 1}} \right)^T}{P_f}\left( A + B{K_{N - 1}} \right) \right]{x_{N - 1}}\\
\quad = \frac{1}{2}x_{N - 1}^T\left[ Q + {A^T}{P_f}A + K_{N - 1}^T\left( R + {B^T}{P_f}B \right){K_{N - 1}} + 2K_{N - 1}^T{B^T}{P_f}A \right]{x_{N - 1}}\\
\quad = \frac{1}{2}x_{N - 1}^T\underbrace{\left[ Q + {A^T}{P_f}A - {A^T}{P_f}B{\left( R + {B^T}{P_f}B \right)^{ - 1}}{B^T}{P_f}A \right]}_{M_{N - 1}}{x_{N - 1}}
\end{array}\]
Note, however, that $x_{N-1}$ is not yet known, so we must keep working backward and solve for stage $N-2$; that is, we next solve
\[\begin{array}{l}
\left\{ \begin{array}{l}
\mathop{\min}\limits_{u_{N - 2}} \; l\left( {x_{N - 2}},{u_{N - 2}} \right) + V_{N - 1}^*\left( {x_{N - 1}} \right)\\
\text{s.t.}\quad {x_{N - 1}} = A{x_{N - 2}} + B{u_{N - 2}}
\end{array} \right.\\
\Rightarrow \left\{ \begin{array}{l}
\mathop{\min}\limits_{u_{N - 2}} \; l\left( {x_{N - 2}},{u_{N - 2}} \right) + \frac{1}{2}x_{N - 1}^T{M_{N - 1}}{x_{N - 1}}\\
\text{s.t.}\quad {x_{N - 1}} = A{x_{N - 2}} + B{u_{N - 2}}
\end{array} \right.
\end{array}\]
But note that we have already solved
\[\left\{ \begin{array}{l}
\mathop{\min}\limits_{u_{N - 1}} \; l\left( {x_{N - 1}},{u_{N - 1}} \right) + \frac{1}{2}x_N^T{P_f}{x_N}\\
\text{s.t.}\quad {x_N} = A{x_{N - 1}} + B{u_{N - 1}}
\end{array} \right.\]
Comparing the two problems, the reader can see that replacing $P_f$ in the solution derived above by $M_{N-1}$ immediately yields the optimal solution and the optimal cost to go at stage $N-2$. That is,
\[\left\{ \begin{array}{l}
u_{N - 2}^* = {K_{N - 2}}{x_{N - 2}} =  - {\left( {R + {B^T}{M_{N - 1}}B} \right)^{ - 1}}{B^T}{M_{N - 1}}A{x_{N - 2}}\\
V_{N - 2}^*\left( {{x_{N - 2}}} \right) = \frac{1}{2}x_{N - 2}^T{M_{N - 2}}{x_{N - 2}}\\
{M_{N - 2}} = Q + {A^T}{M_{N - 1}}A - {A^T}{M_{N - 1}}B{\left( {R + {B^T}{M_{N - 1}}B} \right)^{ - 1}}{B^T}{M_{N - 1}}A
\end{array} \right.\]

Summary
Iterating the Riccati equation backward yields the sequence of control inputs $u(0),u(1),\ldots,u(N-1)$; that is, we use the following recursion:
\[\begin{array}{l}
M\left( N \right) := {P_f}\\
\text{for } k = N, N-1, \ldots, 1:\\
\quad M\left( {k - 1} \right) := Q + {A^T}M\left( k \right)A - {A^T}M\left( k \right)B{\left( {B^T}M\left( k \right)B + R \right)^{ - 1}}{B^T}M\left( k \right)A\\
\\
\text{for } k = N-1, N-2, \ldots, 0:\\
\quad K\left( k \right) = -{\left( {B^T}M\left( {k + 1} \right)B + R \right)^{ - 1}}{B^T}M\left( {k + 1} \right)A\\
\quad u^*\left( k \right) = K\left( k \right)x\left( k \right)\\
\\
V^*\left( k \right) = \frac{1}{2}{x^T}\left( k \right)M\left( k \right)x\left( k \right),\quad k = 0,1,\ldots,N
\end{array}\]
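As a sanity check of the recursion above, here is a minimal NumPy sketch (the function name and interface are mine, not part of the original post), with $M(N) = P_f$, $u^*(k) = K(k)x(k)$, and $V^*(k) = \frac{1}{2}x^T(k)M(k)x(k)$:

```python
import numpy as np

def finite_horizon_lqr(A, B, Q, R, Pf, N):
    """Backward Riccati recursion for finite-horizon LQR.

    Returns gains K[0..N-1] and cost-to-go matrices M[0..N], with M[N] = Pf,
    u*(k) = K[k] @ x(k), and V*(k) = 0.5 * x(k)^T M[k] x(k).
    """
    M = [None] * (N + 1)
    K = [None] * N
    M[N] = Pf
    for k in range(N - 1, -1, -1):
        S = R + B.T @ M[k + 1] @ B                      # B^T M(k+1) B + R
        K[k] = -np.linalg.solve(S, B.T @ M[k + 1] @ A)  # gain K(k)
        # Riccati update written via K(k):
        # M(k) = Q + A^T M(k+1) A - A^T M(k+1) B S^{-1} B^T M(k+1) A
        M[k] = Q + A.T @ M[k + 1] @ A + A.T @ M[k + 1] @ B @ K[k]
    return K, M
```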

Note, however, that the optimal solution does not necessarily guarantee a stable system. (Kalman (1960b, p.113) pointed out that optimality does not imply stability; that is, applying the optimal control law does not guarantee closed-loop stability.) The following example shows that even with $Q > 0$, $R > 0$, and $N \ge 1$, the resulting optimal control need not stabilize the closed loop.

Example: Finite Horizon LQ Control
Consider the discrete-time system $x(k+1) = Ax(k) + Bu(k)$ with output $y(k) = Cx(k)$, where
\[A = \left[ {\begin{array}{*{20}{c}}
{4/3}&{ - 2/3}\\
1&0
\end{array}} \right];B = \left[ {\begin{array}{*{20}{c}}
1\\
0
\end{array}} \right];C = \left[ {\begin{array}{*{20}{c}}
{ - 2/3}&1
\end{array}} \right]\]Now consider $N=5$ and $N=7$, and design the optimal finite-horizon LQ controller for each case.
Solution
The corresponding transfer function of the system is
\[G\left( z \right) = \frac{ -\frac{2}{3}z + 1}{{z^2} - \frac{4}{3}z + \frac{2}{3}}\]
Note that this system has an open-loop zero at $z = 3/2 = 1.5$, i.e., a zero outside the unit circle (an unstable, nonminimum-phase zero), since for discrete-time systems the stable region is the interior of the unit circle, $|z| < 1$. We now design an LQ controller with weights $R := 0.001$ and $Q := C^T C = P_f$, where $Q \ge 0$ is only positive semi-definite; we then add a small perturbation (which makes $Q$ positive definite):
\[\begin{array}{l}
Q := {C^T}C + 0.001I\\
\quad = \left[ \begin{array}{c} -2/3\\ 1 \end{array} \right]\left[ \begin{array}{cc} -2/3 & 1 \end{array} \right] + \left[ \begin{array}{cc} 0.001 & 0\\ 0 & 0.001 \end{array} \right]\\
\quad = \left[ \begin{array}{cc} 4/9 & -2/3\\ -2/3 & 1 \end{array} \right] + \left[ \begin{array}{cc} 0.001 & 0\\ 0 & 0.001 \end{array} \right]\\
\quad = \left[ \begin{array}{cc} 4/9 + 0.001 & -2/3\\ -2/3 & 1.001 \end{array} \right]
\end{array}\]
We then iterate backward to obtain the optimal control sequence $u(4), u(3), u(2), u(1), u(0)$ as follows.
Starting at $k = N = 5$ (i.e., $M(5) = Q = P_f$), we have
\[\small \begin{array}{l}
\left\{ \begin{array}{l}
u^*\left( 4 \right) = K\left( 4 \right)x\left( 4 \right)\\
K\left( 4 \right) = \left[ \begin{array}{cc} 0.1629 & 0.6652 \end{array} \right]\\
M\left( 5 \right) = Q\\
V^*\left( 5 \right) = \frac{1}{2}{x^T}\left( 5 \right)Qx\left( 5 \right)
\end{array} \right. \Rightarrow \left\{ \begin{array}{l}
u^*\left( 3 \right) = K\left( 3 \right)x\left( 3 \right)\\
K\left( 3 \right) = \left[ \begin{array}{cc} 0.1518 & 0.6652 \end{array} \right]\\
M\left( 4 \right) := \left[ \begin{array}{cc} 0.4489 & -0.666\\ -0.666 & 1.0014 \end{array} \right]\\
V^*\left( 4 \right) = \frac{1}{2}{x^T}\left( 4 \right)M\left( 4 \right)x\left( 4 \right)
\end{array} \right.\\
\\
\Rightarrow \left\{ \begin{array}{l}
u^*\left( 2 \right) = K\left( 2 \right)x\left( 2 \right)\\
K\left( 2 \right) = \left[ \begin{array}{cc} 0.1257 & 0.6652 \end{array} \right]\\
M\left( 3 \right) := \left[ \begin{array}{cc} 0.4568 & -0.666\\ -0.666 & 1.0014 \end{array} \right]\\
V^*\left( 3 \right) = \frac{1}{2}{x^T}\left( 3 \right)M\left( 3 \right)x\left( 3 \right)
\end{array} \right. \Rightarrow \left\{ \begin{array}{l}
u^*\left( 1 \right) = K\left( 1 \right)x\left( 1 \right)\\
K\left( 1 \right) = \left[ \begin{array}{cc} 0.0724 & 0.6653 \end{array} \right]\\
M\left( 2 \right) := \left[ \begin{array}{cc} 0.4741 & -0.666\\ -0.666 & 1.0014 \end{array} \right]\\
V^*\left( 2 \right) = \frac{1}{2}{x^T}\left( 2 \right)M\left( 2 \right)x\left( 2 \right)
\end{array} \right.\\
\\
\Rightarrow \left\{ \begin{array}{l}
u^*\left( 0 \right) = K\left( 0 \right)x\left( 0 \right)\\
K\left( 0 \right) = \left[ \begin{array}{cc} -0.0256 & 0.6653 \end{array} \right]\\
M\left( 1 \right) := \left[ \begin{array}{cc} 0.5098 & -0.666\\ -0.666 & 1.0014 \end{array} \right]\\
V^*\left( 1 \right) = \frac{1}{2}{x^T}\left( 1 \right)M\left( 1 \right)x\left( 1 \right)
\end{array} \right.
\end{array}\]
Note that if we compute the closed-loop poles we obtain
$$\mathrm{eig}(A + BK_{N=5}(0)) = \{1.307,\; 0.001\}$$
That is, this optimal control renders the closed-loop system unstable ($z = 1.307 > 1$). The reader can verify that with $N=7$ instead, the closed-loop poles become
\[
\mathrm{eig}(A + BK_{N=7}(0)) = \{0.989,\; 0.001\}
\]
i.e., the system is stabilized. Letting $N = \infty$ gives
\[
\mathrm{eig}(A + BK_{N=\infty}(0)) = \{0.664,\; 0.001\}
\]
This is known as the infinite horizon control law, which will be introduced in a later post.
