2/23/2015

[Linear Algebra] When does Ax = b have a solution? When is the solution unique?

Let $A \in \mathbb{R}^{m \times n}$ be a real matrix (with $m$ rows and $n$ columns, so $A$ need not be square). We are usually interested in solving linear systems of the form
\[
A {\bf x} = {\bf b}
\] where ${\bf b} \in \mathbb{R}^m$ and ${\bf x} \in \mathbb{R}^n$ is the unknown to be determined.

Before reading on, readers are strongly encouraged to recall the following basic definitions from linear algebra:
  • Vector space
  • Subspace
  • Linear independence & dependence
  • Rank of a matrix
  • Span
  • Basis
  • Dimension
  • Orthogonal complement
Once these basic concepts are in place, continue below:


Fundamental Theorem of Linear Algebra
This theorem gives complete conditions for when $A {\bf x} = {\bf b}$ has a solution and when the solution is unique (see G. Strang, "Linear Algebra and its Applications"). Specifically, for any matrix $A_{m \times n}$ with $rank(A)=r$, the spaces $\mathbb{R}^n$ and $\mathbb{R}^m$ decompose into four fundamental subspaces:
  1. Column space (or range space) $:= \mathcal{R}(A) \subset \mathbb{R}^m$; dimension $r$
  2. Null space (or kernel) $:=\mathcal{N}(A) \subset \mathbb{R}^n$; dimension $n-r$
  3. Row space $:= \mathcal{R}(A') \subset \mathbb{R}^n $; dimension $r$
  4. Left null space $:= \mathcal{N}(A') \subset \mathbb{R}^m $; dimension $m-r$

Comments:
1. Some authors also refer to (part of) the Fundamental Theorem above as the Rank-Nullity Theorem.
2. $\mathcal{N}(A)$, the null space of $A$, is defined as $\mathcal{N}(A) := \{{\bf x} \in \mathbb{R}^n : A{\bf x} = {\bf 0} \in \mathbb{R}^m\}$
3. $\mathcal{R}(A)$, also called the range space of $A$, is defined as $\mathcal{R}(A) := \{ A{\bf x} \in \mathbb{R}^m : {\bf x} \in \mathbb{R}^n\}$


FACT 1: $\mathcal{N}(A)$ is a subspace of $\mathbb{R}^n$ and $\mathcal{R}(A)$ is a subspace of $\mathbb{R}^m$.


The following results are important:

Theorem: When does a solution exist? When is it unique?
For every $\bf b$, $A {\bf x} = {\bf b}$ has a solution if and only if the rows of $A$ are linearly independent.
The solution of $A{\bf x} = {\bf b}$ is unique if and only if the columns of $A$ are linearly independent.
Proof: omitted.

The above has an equivalent formulation in terms of rank conditions:

Theorem 2: For the linear system $A {\bf x} = {\bf b}$, a solution ${\bf x}$ exists if and only if $$
rank(A)=rank( [A|{\bf b}])
$$
The solution ${\bf x}$ is unique if and only if
\[
rank(A) = rank([A|{\bf b}]) = n
\]
where $n$ is the number of unknowns (the number of columns of $A$).
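The rank conditions of Theorem 2 are easy to check numerically. A minimal sketch in Python (the matrix and right-hand sides below are hypothetical examples, not from the text):

```python
import numpy as np

def solvable_and_unique(A, b):
    """Theorem 2: Ax = b is solvable iff rank(A) = rank([A|b]);
    the solution is unique iff, in addition, that common rank equals n."""
    n = A.shape[1]
    rA = np.linalg.matrix_rank(A)
    rAb = np.linalg.matrix_rank(np.hstack([A, b.reshape(-1, 1)]))
    return bool(rA == rAb), bool(rA == rAb and rA == n)

A = np.array([[1.0, 2.0], [2.0, 4.0], [0.0, 1.0]])  # rank 2, m = 3 > n = 2
b_in = A @ np.array([1.0, 1.0])    # lies in the column space of A
b_out = np.array([1.0, 0.0, 5.0])  # does not lie in the column space

print(solvable_and_unique(A, b_in))   # (True, True): solvable and unique
print(solvable_and_unique(A, b_out))  # (False, False): no solution
```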


Exercise:
Given the matrix
\[A = \left[ {\begin{array}{*{20}{c}}
1&3&5&0&7\\
0&0&0&1&2\\
0&0&0&0&0
\end{array}} \right]\]
(a) Find $rank A$
(b) Find $\mathcal{N}(A)$, a basis for it, and its dimension
(c) Find $\mathcal{R}(A)$, a basis for it, and its dimension
(d) Find $\mathcal{R}(A')$, a basis for it, and its dimension
(e) Find $\mathcal{N}(A')$, a basis for it, and its dimension
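As a numerical cross-check of this exercise (a sketch using numpy's SVD; the hand computation by row reduction is still the point of the exercise):

```python
import numpy as np

# The matrix from the exercise (m = 3 rows, n = 5 columns)
A = np.array([[1., 3., 5., 0., 7.],
              [0., 0., 0., 1., 2.],
              [0., 0., 0., 0., 0.]])
m, n = A.shape

r = np.linalg.matrix_rank(A)  # (a) rank A

# Dimensions of the four fundamental subspaces
dims = {"N(A)": n - r, "R(A)": r, "R(A')": r, "N(A')": m - r}

# (b) a basis of N(A): right singular vectors for the zero singular values
_, _, Vt = np.linalg.svd(A)
null_basis = Vt[r:, :]                   # each row is a basis vector
assert np.allclose(A @ null_basis.T, 0)  # they are indeed annihilated by A

print(r, dims)  # rank 2; dim N(A) = 3, dim N(A') = 1
```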

2/21/2015

[Optimal Estimation] Convergence of the State Estimator

Consider the discrete-time LTI system
\[\begin{array}{l}
x(k + 1) = Ax(k) + w(k)\\
y(k) = Cx\left( k \right) + v(k)
\end{array}\]where $x \in \mathbb{R}^n$, $y \in \mathbb{R}^p$, and $x(0) = x_0$.

Basic state-estimation problem: given an initial estimation error (i.e., $\bar x(0) \neq x_0$) and no noise or external disturbance ($w(k) = v(k)=0$), we ask whether $\hat x(k) \to x(k)$ as $k \to \infty$.

=================
Theorem: (Convergence of Estimator Cost)
Given the noise-free measurements ${\bf y}( T) = \{Cx(0), CAx(0),..., CA^T x(0)\}$, the optimal estimator cost $V_T^*({\bf y}(T)) $ converges as $T \to \infty$.
=================

Proof:
Since
\[V_T^{} = \frac{1}{2}\left( \begin{array}{l}
\left| {\hat x\left( 0 \right) - \bar x\left( 0 \right)} \right|_{{{\left( {{P^ - }\left( 0 \right)} \right)}^{ - 1}}}^2\\
\begin{array}{*{20}{c}}
{}&{}
\end{array} + \sum\limits_{k = 0}^{T - 1} {|\hat x\left( {k + 1} \right) - A\hat x\left( k \right)|_{{Q^{ - 1}}}^2}  + \sum\limits_{k = 0}^T {\left| {y\left( k \right) - C\hat x\left( k \right)} \right|_{{R^{ - 1}}}^2}
\end{array} \right)\]we prove that $V_T^*({\bf y}(T)) $ converges as $T \to \infty$ in three steps:
First we show the sequence $\{V_T^*\}$ is bounded above; next we show $\{V_T^*\}$ is nondecreasing; steps one and two then imply that $\{V_T^*\}$ must converge.

We first show that $V_T^*({\bf y}(T)) $ is bounded:
Our goal is $\min_{\hat{ {\bf x}}(k)} V_k$, but we do not yet know which $\hat x$ achieves it, so we tentatively take $\hat x(0) :=  x_0$ (not necessarily optimal!). Then
\[\begin{array}{l}
\left\{ \begin{array}{l}
\hat x(1) = A\hat x(0)\\
\hat x(2) = A\hat x(1) = {A^2}\hat x(0)\\
 \vdots
\end{array} \right.\\
 \Rightarrow y\left( k \right) = C{A^k}{x_0}
\end{array}\]Substituting this choice into the cost gives
\[\bar{V}_T^{} = \frac{1}{2}\left| {\hat x\left( 0 \right) - \bar x\left( 0 \right)} \right|_{{{\left( {{P^ - }\left( 0 \right)} \right)}^{ - 1}}}^2 < \infty
\]Note that this is not the optimal cost; substituting the optimal solution instead, we must have
\[
V_T^* \le \bar V_T
\]so the sequence $\{V_T^*\}$ is bounded above.

Next we show that the optimal cost sequence $\{V_T^*\}$ is nondecreasing.
Given the measurements ${\bf y}( T) = \{Cx(0), CAx(0),..., CA^T x(0)\}$, denote the optimal state sequence at time $T$ by
\[\left\{ {\hat x\left( {0} \right),\hat x\left( {1} \right),...,\hat x\left( {T} \right)} \right\}
\]Comparing the optimal cost at time $T$ with the optimal cost at time $T-1$ (the truncated sequence is feasible for the problem at $T-1$) gives
\[V_T^* - \frac{1}{2}\left( {\left| {y\left( T \right) - C\hat x\left( T \right)} \right|_{{R^{ - 1}}}^2 + |\hat x\left( T \right) - A\hat x\left( {T - 1} \right)|_{{Q^{ - 1}}}^2} \right) \ge V_{T - 1}^*\]which shows that the sequence $\{V_T^*\}$ is nondecreasing.

Since the optimal cost sequence $\{V_T^*\}$ is bounded above and nondecreasing, we conclude that $\{V_T^*\}$ converges as $T \to \infty$. $\square$


Note that the convergence of the optimal estimator cost $V_T^*$ does not depend on observability. If, however, we want the estimated state to satisfy $\hat x \to x$, observability plays a crucial role. We record this result in the following theorem:

==========================
Theorem: Estimator Convergence
Suppose $(A,C)$ is observable, $Q,R >0$ are positive definite, and we are given the noise-free measurements
\[
{\bf y}(T) := \{Cx(0), CAx(0), ..., CA^T x(0)\}
\]Then the optimal state estimate converges to the true system state; that is,
\[
\hat x(T) \to x(T) \;\; \text{ as $T \to \infty$}
\]==========================

Proof:
Using the optimal solution at time $T+n-1$ as feasible decision variables for the problem at time $T-1$, the argument of the preceding theorem gives
\[\small V_{T + n - 1}^* - \frac{1}{2}\left[ {\sum\limits_{k = T - 1}^{T + n - 2} {|\hat x\left( {k + 1} \right) - A\hat x\left( k \right)|_{{Q^{ - 1}}}^2}  + \sum\limits_{k = T}^{T + n - 1} {\left| {y\left( k \right) - C\hat x\left( k \right)} \right|_{{R^{ - 1}}}^2} } \right] \ge V_{T - 1}^*\]With the change of variables $j = k - T$ we obtain
\[ \small V_{T + n - 1}^* - \frac{1}{2}\left[ {\sum\limits_{j =  - 1}^{n - 2} {|\hat x\left( {T + j + 1} \right) - A\hat x\left( {T + j} \right)|_{{Q^{ - 1}}}^2}  + \sum\limits_{j = 0}^{n - 1} {\left| {y\left( {j + T} \right) - C\hat x\left( {j + T} \right)} \right|_{{R^{ - 1}}}^2} } \right] \ge V_{T - 1}^*\]As $T \to \infty$, $\{V_T^*\}$ converges and $Q^{-1}, R^{-1} >0$, so the above yields
\[\begin{array}{l}
\underbrace {V_{T + n - 1}^* - V_{T - 1}^*}_{ \to 0} \ge \frac{1}{2}\left[ \begin{array}{l}
\sum\limits_{j =  - 1}^{n - 2} {|\hat x\left( {T + j + 1} \right) - A\hat x\left( {T + j} \right)|_{{Q^{ - 1}}}^2} \\
\begin{array}{*{20}{c}}
{}&{}
\end{array} + \sum\limits_{j = 0}^{n - 1} {\left| {y\left( {j + T} \right) - C\hat x\left( {j + T} \right)} \right|_{{R^{ - 1}}}^2}
\end{array} \right]\\
 \Rightarrow \sum\limits_{j =  - 1}^{n - 2} {|\hat x\left( {T + j + 1} \right) - A\hat x\left( {T + j} \right)|_{{Q^{ - 1}}}^2}  + \sum\limits_{j = 0}^{n - 1} {\left| {y\left( {j + T} \right) - C\hat x\left( {j + T} \right)} \right|_{{R^{ - 1}}}^2}  \to 0\\
 \Rightarrow \left\{ \begin{array}{l}
\hat x\left( {T + j + 1} \right) - A\hat x\left( {T + j} \right) \to 0,\begin{array}{*{20}{c}}
{}
\end{array}\forall j =  - 1,...,n - 2\\
y\left( {j + T} \right) - C\hat x\left( {j + T} \right) \to 0,\begin{array}{*{20}{c}}
{}
\end{array}\forall j = 0,...,n - 1
\end{array} \right. \ \ \ \ \ \ (**)
\end{array}\]Let $\hat w_T(j) := \hat x(T+j+1|T+n-1) - A \hat x(T+j|T+n-1)$. Via the system equation $x(k+1) = Ax(k) + w(k)$ we have
\[\begin{array}{l}
\left[ {\begin{array}{*{20}{c}}
{\hat x\left( {T|T + n - {\rm{1}}} \right)}\\
{\hat x\left( {T{\rm{ + 1}}|T + n - {\rm{1}}} \right)}\\
 \vdots \\
{\hat x\left( {T + n - 1|T + n - {\rm{1}}} \right)}
\end{array}} \right] = \left[ {\begin{array}{*{20}{c}}
I\\
A\\
 \vdots \\
{{A^{n - 1}}}
\end{array}} \right]\hat x\left( {T|T + n - {\rm{1}}} \right)\\
\begin{array}{*{20}{c}}
{}&{}&{}&{}
\end{array} + \left[ {\begin{array}{*{20}{c}}
0&{}&{}&{}\\
I&0&{}&{}\\
 \vdots & \vdots & \ddots &{}\\
{{A^{n - 2}}}&{{A^{n - 3}}}& \cdots &I
\end{array}} \right]\left[ {\begin{array}{*{20}{c}}
{{{\hat w}_T}\left( 0 \right)}\\
{{{\hat w}_T}\left( 1 \right)}\\
 \vdots \\
{{{\hat w}_T}\left( {n - 2} \right)}
\end{array}} \right]\ \ \ \ (*)
\end{array} \]Moreover, the measured outputs satisfy
\[\left\{ \begin{array}{l}
y\left( T \right) = Cx\left( T \right)\\
y\left( {T + 1} \right) = CAx\left( T \right)\\
 \vdots \\
y\left( {T + n - 1} \right) = C{A^{n - 1}}x\left( T \right)
\end{array} \right. \Rightarrow \left[ {\begin{array}{*{20}{c}}
{y\left( T \right)}\\
{y\left( {T + 1} \right)}\\
 \vdots \\
{y\left( {T + n - 1} \right)}
\end{array}} \right] = Ox\left( T \right)\]where $O$ is the observability matrix. Subtracting from this equation the relation obtained by premultiplying $(*)$ (blockwise) by $C$ gives
\[\begin{array}{l}
\left[ {\begin{array}{*{20}{c}}
{y\left( T \right)}\\
{y\left( {T + 1} \right)}\\
 \vdots \\
{y\left( {T + n - 1} \right)}
\end{array}} \right] - C\left[ {\begin{array}{*{20}{c}}
{\hat x\left( {T|T + n - {\rm{1}}} \right)}\\
{\hat x\left( {T{\rm{ + 1}}|T + n - {\rm{1}}} \right)}\\
 \vdots \\
{\hat x\left( {T + n - 1|T + n - {\rm{1}}} \right)}
\end{array}} \right] = O\left[ {x\left( T \right) - \hat x\left( {T|T + n - {\rm{1}}} \right)} \right]\\
\begin{array}{*{20}{c}}
{}&{}&{}&{}
\end{array} + \left[ {\begin{array}{*{20}{c}}
0&{}&{}&{}\\
C&0&{}&{}\\
 \vdots & \vdots & \ddots &{}\\
{C{A^{n - 2}}}&{C{A^{n - 3}}}& \cdots &C
\end{array}} \right]\left[ {\begin{array}{*{20}{c}}
{{{\hat w}_T}\left( 0 \right)}\\
{{{\hat w}_T}\left( 1 \right)}\\
 \vdots \\
{{{\hat w}_T}\left( {n - 2} \right)}
\end{array}} \right]
\end{array}\]Now, by $(**)$,
\[\begin{array}{l}
\underbrace {\left[ {\begin{array}{*{20}{c}}
{y\left( T \right)}\\
{y\left( {T + 1} \right)}\\
 \vdots \\
{y\left( {T + n - 1} \right)}
\end{array}} \right] - C\left[ {\begin{array}{*{20}{c}}
{\hat x\left( {T|T + n - {\rm{1}}} \right)}\\
{\hat x\left( {T{\rm{ + 1}}|T + n - {\rm{1}}} \right)}\\
 \vdots \\
{\hat x\left( {T + n - 1|T + n - {\rm{1}}} \right)}
\end{array}} \right]}_{ \to 0} = O\left[ {x\left( T \right) - \hat x\left( {T|T + n - {\rm{1}}} \right)} \right]\\
\begin{array}{*{20}{c}}
{}&{}&{}&{}
\end{array} + \underbrace {\left[ {\begin{array}{*{20}{c}}
0&{}&{}&{}\\
C&0&{}&{}\\
 \vdots & \vdots & \ddots &{}\\
{C{A^{n - 2}}}&{C{A^{n - 3}}}& \cdots &C
\end{array}} \right]\left[ {\begin{array}{*{20}{c}}
{{{\hat w}_T}\left( 0 \right)}\\
{{{\hat w}_T}\left( 1 \right)}\\
 \vdots \\
{{{\hat w}_T}\left( {n - 2} \right)}
\end{array}} \right]}_{ \to 0}\\
 \Rightarrow O\left[ {x\left( T \right) - \hat x\left( {T|T + n - {\rm{1}}} \right)} \right] \to 0
\end{array}\]Since the observability matrix $O$ has linearly independent columns, we conclude
\[x\left( T \right) - \hat x\left( {T|T + n - {\rm{1}}} \right) \to 0 \;\; \text{as $T \to \infty$}\]that is,
\[\hat x\left( {T|T + n - {\rm{1}}} \right) \to x\left( T \right)
\]Furthermore, observing $(*)$, since $\hat w_T (j) \to 0$ as $T \to \infty$, we have
\[\hat x\left( {T + n - 1|T + n - {\rm{1}}} \right) \to {A^{n - 1}}\hat x\left( {T|T + n - {\rm{1}}} \right) \;\;\;\; \text{ as $T \to \infty$}\]Since the system is noise-free, $A^{n-1} x(T) = x(T + n -1)$, and hence
\[\begin{array}{l}
x\left( {T + n - 1} \right) - \hat x\left( {T + n - 1|T + n - {\rm{1}}} \right) \to {A^{n - 1}}x\left( T \right) - {A^{n - 1}}\hat x\left( {T|T + n - {\rm{1}}} \right)\\
 \Rightarrow x\left( {T + n - 1} \right) - \hat x\left( {T + n - 1|T + n - {\rm{1}}} \right) \to {A^{n - 1}}\underbrace {\left[ {x\left( T \right) - \hat x\left( {T|T + n - {\rm{1}}} \right)} \right]}_{ \to 0}\\
 \Rightarrow x\left( {T + n - 1} \right) \to \hat x\left( {T + n - 1|T + n - {\rm{1}}} \right)\\
 \Rightarrow x\left( {T + n - 1} \right) \to \hat x\left( {T + n - 1} \right)\\
 \Rightarrow \hat x\left( j \right) \to x\left( j \right)\;\;{\text{as }}j \to \infty
\end{array}\]which completes the proof. $\square$
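The crux of the proof is that full column rank of $O$ lets the state be recovered from $n$ noise-free outputs. A small numeric sketch (the system matrices here are chosen purely for illustration):

```python
import numpy as np

A = np.array([[0.9, 0.2], [0.0, 0.8]])
C = np.array([[1.0, 0.0]])  # (A, C) is observable
n = A.shape[0]

# Observability matrix O = [C; CA; ...; C A^{n-1}]
O = np.vstack([C @ np.linalg.matrix_power(A, k) for k in range(n)])
assert np.linalg.matrix_rank(O) == n  # independent columns

# Noise-free outputs y(k) = C A^k x0, stacked; then x0 = O^+ [y(0); ...; y(n-1)]
x0 = np.array([1.5, -0.7])
y = np.concatenate([C @ np.linalg.matrix_power(A, k) @ x0 for k in range(n)])
x0_hat = np.linalg.pinv(O) @ y

print(np.allclose(x0_hat, x0))  # True: the state is recovered exactly
```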

2/19/2015

[Control Theory] Tracking and Regulation for Discrete-Time Linear Systems

Setpoint Tracking
Consider the discrete-time linear system
\[\left\{ \begin{array}{l}
x\left( {k + 1} \right) = Ax\left( k \right) + Bu\left( k \right)\\
y\left( k \right) = Cx\left( k \right)
\end{array} \right.\]with $x \in \mathbb{R}^n, y \in \mathbb{R}^p, u \in \mathbb{R}^m$. A common problem in control systems is the setpoint tracking (or servo) problem:
Given a setpoint $y_{sp}$ (e.g., a step or constant signal), we want the system output to satisfy $y = y_{sp}$ once steady state is reached.

If the given setpoint is $y_{sp} = 0$, the problem is called a regulation problem.

Recall that the LQR method from optimal control theory provides an optimal solution to the regulation problem for linear systems, so it is natural to ask whether the same method applies to the general tracking problem.

The answer is yes: it suffices to introduce deviation variables, i.e., a simple change of coordinates.

Let $y_{sp}$ be the output setpoint and denote the steady-state state and input by $(x_s, u_s)$. The system equations at steady state become
\[\left\{ \begin{array}{l}
{x_s} = A{x_s} + B{u_s}\\
{y_s} = C{x_s}
\end{array} \right.\]At steady state we want $y_s = y_{sp}$, hence $y_{sp} = C x_s$. In matrix form,
\[\left\{ \begin{array}{l}
{x_s} = A{x_s} + B{u_s}\\
{y_{sp}} = C{x_s}
\end{array} \right. \Rightarrow \left[ {\begin{array}{*{20}{c}}
{I - A}&{ - B}\\
C&0
\end{array}} \right]\left[ {\begin{array}{*{20}{c}}
{{x_s}}\\
{{u_s}}
\end{array}} \right] = \left[ {\begin{array}{*{20}{c}}
0\\
{{y_{sp}}}
\end{array}} \right] \ \ \ \ (*)
\]If this matrix equation is solvable (i.e., $x_s, u_s$ can be determined) and the input is unconstrained, we define the deviation variables
\[\left\{ \begin{array}{l}
\tilde x\left( k \right): = x\left( k \right) - {x_s}\\
\tilde u\left( k \right): = u\left( k \right) - {u_s}
\end{array} \right.\]Now observe
\[\begin{array}{l}
\tilde x\left( {k + 1} \right) = x\left( {k + 1} \right) - {x_s}\\
\begin{array}{*{20}{c}}
{}&{}&{}&{}
\end{array} = Ax\left( k \right) + Bu\left( k \right) - \left( {A{x_s} + B{u_s}} \right)\\
\begin{array}{*{20}{c}}
{}&{}&{}&{}
\end{array} = A\left( {x\left( k \right) - {x_s}} \right) + B\left( {u\left( k \right) - {u_s}} \right)\\
 \Rightarrow \tilde x\left( {k + 1} \right) = A\tilde x\left( k \right) + B\tilde u\left( k \right)
\end{array}\]so the deviation variables satisfy the same linear system. Working in the deviation variables $\tilde{x}, \tilde{u}$ turns the setpoint tracking problem into a regulation problem: find $\tilde{u}(k)$ such that $\tilde x(k) \to 0$ (equivalent to $x(k) \to x_s$, hence $C x(k) \to C x_s = y_{sp}$). After solving this regulation problem, the actual input is $u(k) = \tilde u(k) + u_s$.
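The steady-state pair $(x_s, u_s)$ can be computed directly from the matrix equation $(*)$. A sketch with a hypothetical system where $p = m$, so the block matrix is square and invertible:

```python
import numpy as np

# Hypothetical example system (n = 2 states, m = p = 1)
A = np.array([[0.5, 0.1], [0.0, 0.8]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
n, m = B.shape
p = C.shape[0]
y_sp = np.array([1.0])

# The (n+p) x (n+m) system (*):  [[I-A, -B], [C, 0]] [x_s; u_s] = [0; y_sp]
M = np.block([[np.eye(n) - A, -B], [C, np.zeros((p, m))]])
rhs = np.concatenate([np.zeros(n), y_sp])
sol = np.linalg.solve(M, rhs)  # square and nonsingular here since p == m
x_s, u_s = sol[:n], sol[n:]

# Verify: x_s is a steady state and the output meets the setpoint
assert np.allclose(x_s, A @ x_s + B @ u_s)
assert np.allclose(C @ x_s, y_sp)
print(x_s, u_s)  # [1. 5.] [1.]
```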

Comment:
Note that the argument above rests on the $(n + p) \times (n + m)$ matrix equation
\[\left\{ \begin{array}{l}
{x_s} = A{x_s} + B{u_s}\\
{y_{sp}} = C{x_s}
\end{array} \right. \Rightarrow \left[ {\begin{array}{*{20}{c}}
{I - A}&{ - B}\\
C&0
\end{array}} \right]\left[ {\begin{array}{*{20}{c}}
{{x_s}}\\
{{u_s}}
\end{array}} \right] = \left[ {\begin{array}{*{20}{c}}
0\\
{{y_{sp}}}
\end{array}} \right] \ \ \ \ (*)
\]being solvable. But we have not yet settled when it is solvable and whether the solution is unique!! We discuss this point next.


A sufficient condition for the matrix equation $(*)$ to be solvable for every setpoint $y_{sp}$ is that its coefficient matrix have linearly independent rows; this requires $p \le m$, i.e., at least as many inputs $m$ as measured outputs $p$. In practice the opposite is common: many sensors yield many measured outputs, while the number of variables we can actually manipulate is smaller. To handle this, we pick a matrix $H$ and introduce new controlled variables $r \in \mathbb{R}^{n_c}$ as linear combinations of the measured outputs; that is,
\[
r := Hy
\]In this way, when $p > m$, we select $n_c \le m$ controlled variables and assign the desired setpoint $r_{sp}$ to the introduced controlled variable $r$.

Conversely, if $m > p$ (more inputs than measured outputs), then for some $H$ and $r_{sp}$ the matrix equation $(*)$ is solvable but uniqueness is not guaranteed. To obtain a unique solution, we also assign a setpoint $u_{sp}$ to the steady-state input $u_s$.


Summarizing the above, we can formulate the following Steady-State Target Problem.
========================
Steady-State Target Problem
Consider the optimization problem
\[
\min_{x_s, u_s} \frac{1}{2} (|u_s - u_{sp}|_{R_s}^2 + |y_s - y_{sp}|_{Q_s}^2)
\]subject to
\[\left\{ \begin{array}{l}
{x_s} = A{x_s} + B{u_s}\\
{y_{sp}} = C{x_s}
\end{array} \right. \Rightarrow \left[ {\begin{array}{*{20}{c}}
{I - A}&{ - B}\\
C&0
\end{array}} \right]\left[ {\begin{array}{*{20}{c}}
{{x_s}}\\
{{u_s}}
\end{array}} \right] = \left[ {\begin{array}{*{20}{c}}
0\\
{{y_{sp}}}
\end{array}} \right]
\]together with $E u_s \le e$ and $F C x_s \le f$, where $R_s$ is positive definite and the setpoints $r_{sp}$ for the controlled variables are such that the target problem is feasible.
========================

With the steady-state solution in hand, we can now solve the regulation problem for which we introduced the deviation variables:
========================
Dynamic Regulation Problem
Consider the cost function
\[\begin{array}{l}
V\left( {\tilde x\left( 0 \right),{\bf{\tilde u}}} \right) = \frac{1}{2}\sum\limits_{k = 0}^{N - 1} {\left| {\tilde x\left( k \right)} \right|_Q^2 + \left| {\tilde u\left( k \right)} \right|_R^2} \\
s.t.\\
\tilde x\left( {k + 1} \right) = A\tilde x\left( k \right) + B\tilde u\left( k \right)
\end{array}\]where $\tilde x(0) = \hat x(k) - x_s$ (the initial condition is the estimated state $\hat x$ shifted by the steady state $x_s$); the resulting regulator solves the regulation problem
 \[\begin{array}{l}
\mathop {\min }\limits_{{\bf{\tilde u}}} V\left( {\tilde x\left( 0 \right),{\bf{\tilde u}}} \right)\\
s.t.\\
E\tilde u \le e - E{u_s}\\
FC\tilde x \le f - FC{x_s}
\end{array}\]===================
The optimal cost of this regulation problem is $V^*(\tilde x(0))$ and the corresponding input is $\tilde {u}^0 (\tilde{x}(0))$.



ref: J. B. Rawlings and D. Q. Mayne, "Model Predictive Control: Theory and Design"

2/15/2015

[Linear Systems] The Hautus Lemma and Controllability/Stabilizability

Below we present an important result on controllability from linear system theory.
==========================
Lemma: Hautus Lemma for Controllability
The discrete-time linear system
\[
x(k+1) = A x(k) + Bu(k)
\]is controllable, i.e., $(A,B)$ is controllable, if and only if
\[
rank[\lambda I - A\;\; B] =n, \;\; \forall \lambda \in \mathbb{C}
\]where $\mathbb{C}$ denotes the set of complex numbers.
==========================
Proof: omitted.

Comment:
1. The Lemma above is equivalent to the matrix $[\lambda I - A\;\; B]$ having $n$ independent rows (full row rank).

2. The reader may wonder why so many controllability tests are needed; why not just use the controllability-matrix test? The main use of the Hautus Lemma is to greatly simplify theoretical proofs; in practice, controllability is still mostly checked via the controllability matrix.

3. Note that the condition above requires checking *every* complex number $\lambda$, which appears overly demanding. However, if $\lambda$ is not an eigenvalue of $A$, then $\lambda I - A$ is nonsingular and $rank[\lambda I - A\;\; B] =n$ holds automatically, so it suffices to check the eigenvalues of $A$. Hence:

==========================
Lemma (Modified): Hautus Lemma for Controllability
The discrete-time linear system
\[
x(k+1) = A x(k) + Bu(k)
\]is controllable, i.e., $(A,B)$ is controllable, if and only if
\[
rank[\lambda I - A\;\; B] =n, \;\; \forall \lambda \in eig(A)
\]==========================
Proof: omitted.
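The modified lemma suggests a simple numerical test: form $[\lambda I - A \;\; B]$ only at the eigenvalues of $A$. A sketch (the example matrices are hypothetical):

```python
import numpy as np

def hautus_controllable(A, B, tol=1e-9):
    """Modified Hautus (PBH) test: (A,B) is controllable iff
    rank[lam*I - A, B] = n for every eigenvalue lam of A."""
    n = A.shape[0]
    for lam in np.linalg.eigvals(A):
        M = np.hstack([lam * np.eye(n) - A, B])
        if np.linalg.matrix_rank(M, tol) < n:
            return False
    return True

A = np.array([[1.0, 1.0], [0.0, 2.0]])
B_good = np.array([[0.0], [1.0]])  # controllable pair
B_bad = np.array([[1.0], [0.0]])   # the mode lam = 2 cannot be reached

print(hautus_controllable(A, B_good), hautus_controllable(A, B_bad))  # True False
```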


We now extend the discussion to stabilizability. Transform the system $x(k+1) = A x(k) + Bu(k)$ by a similarity transformation (for details see [Linear Systems] Controllability matrix & Non-singular transformation) into the following partitioned system
\[\left[ {\begin{array}{*{20}{c}}
{{x_1}\left( {k + 1} \right)}\\
{{x_2}\left( {k + 1} \right)}
\end{array}} \right] = \left[ {\begin{array}{*{20}{c}}
{{A_{11}}}&{{A_{12}}}\\
0&{{A_{22}}}
\end{array}} \right]\left[ {\begin{array}{*{20}{c}}
{{x_1}\left( k \right)}\\
{{x_2}\left( k \right)}
\end{array}} \right] + \left[ {\begin{array}{*{20}{c}}
{{B_1}}\\
0
\end{array}} \right]u\left( k \right)\]where $(A_{11}, B_1)$ is controllable. This form is called the controllability canonical form. Observe the controllability matrix of this system:
\[C = \left[ {\begin{array}{*{20}{c}}
{{B_1}}&{{A_{11}}{B_1}}&{{A_{11}}^2{B_1}}& \cdots &{{A_{11}}^{n - 1}{B_1}}\\
0&0&0& \cdots &0
\end{array}} \right]\]Since $(A_{11}, B_1)$ is controllable, the top block of the controllability matrix $C$ contains $n_1$ independent rows; but the bottom $n_2$ rows of $C$ are all $0$, so $C$ fails the full row rank test and the system is uncontrollable.


Exercise: Show that the uncontrollable modes are $x_2(k)$.


Despite the uncontrollable modes, we can settle for less: if the uncontrollable modes are stable, the system can still be controlled adequately. Such systems are called stabilizable.

===========
Definition: Stabilizability
Consider the partitioned system
\[\left[ {\begin{array}{*{20}{c}}
{{x_1}\left( {k + 1} \right)}\\
{{x_2}\left( {k + 1} \right)}
\end{array}} \right] = \left[ {\begin{array}{*{20}{c}}
{{A_{11}}}&{{A_{12}}}\\
0&{{A_{22}}}
\end{array}} \right]\left[ {\begin{array}{*{20}{c}}
{{x_1}\left( k \right)}\\
{{x_2}\left( k \right)}
\end{array}} \right] + \left[ {\begin{array}{*{20}{c}}
{{B_1}}\\
0
\end{array}} \right]u\left( k \right)\]where $(A_{11}, B_1)$ is controllable. If $A_{22}$ is stable, the partitioned system is called stabilizable.=========


We now extend the Hautus Lemma above to stabilizability.
================
Lemma: (Hautus Lemma for Stabilizability)
The discrete-time linear system
\[
x(k+1) = A x(k) + Bu(k)
\] is stabilizable if and only if
\[
rank[\lambda I - A\;\; B] =n, \;\; \forall |\lambda| \ge 1
\]================

Proof:
$(\Rightarrow)$ Suppose the system is stabilizable; we show that $rank[\lambda I - A\;\; B] =n$ for all $|\lambda| \ge 1$.

By the definition of stabilizability, the system admits the partitioned form
\[\left[ {\begin{array}{*{20}{c}}
{{x_1}\left( {k + 1} \right)}\\
{{x_2}\left( {k + 1} \right)}
\end{array}} \right] = \left[ {\begin{array}{*{20}{c}}
{{A_{11}}}&{{A_{12}}}\\
0&{{A_{22}}}
\end{array}} \right]\left[ {\begin{array}{*{20}{c}}
{{x_1}\left( k \right)}\\
{{x_2}\left( k \right)}
\end{array}} \right] + \left[ {\begin{array}{*{20}{c}}
{{B_1}}\\
0
\end{array}} \right]u\left( k \right)\]where $(A_{11}, B_1)$ is controllable and $A_{22}$ is stable. Hence
 \[rank[\lambda I - A\;\;B] = rank\left[ {\begin{array}{*{20}{c}}
{\lambda I - {A_{11}}}&{ - {A_{12}}}&{{B_1}}\\
0&{\lambda I - {A_{22}}}&0
\end{array}} \right] \ \ \ \ \ (*)
\] For $|\lambda| \ge 1$, the rows of $[\lambda I - A_{11}\;\; B_1]$ are independent by the Hautus Lemma for controllability, and the rows of $[\lambda I - A_{22}]$ are independent because $A_{22}$ is stable (it has no eigenvalue with $|\lambda| \ge 1$). Together the rows of $(*)$ are therefore independent, so $rank[\lambda I - A\;\; B] =n, \;\; \forall |\lambda| \ge 1 $.


$(\Leftarrow)$ Suppose $rank[\lambda I - A\;\; B] =n, \;\; \forall |\lambda| \ge 1 $; we show the system is stabilizable. Consider the partitioned system
\[\left[ {\begin{array}{*{20}{c}}
{{x_1}\left( {k + 1} \right)}\\
{{x_2}\left( {k + 1} \right)}
\end{array}} \right] = \left[ {\begin{array}{*{20}{c}}
{{A_{11}}}&{{A_{12}}}\\
0&{{A_{22}}}
\end{array}} \right]\left[ {\begin{array}{*{20}{c}}
{{x_1}\left( k \right)}\\
{{x_2}\left( k \right)}
\end{array}} \right] + \left[ {\begin{array}{*{20}{c}}
{{B_1}}\\
0
\end{array}} \right]u\left( k \right)\]Since the partitioned system is in controllability canonical form, $(A_{11}, B_1)$ is already controllable, so it remains only to show that $A_{22}$ is stable.

From the assumption $rank[\lambda I - A\;\; B] =n, \;\; \forall |\lambda| \ge 1 $, for every $|\lambda| \ge 1 $ the matrix
\[rank[\lambda I - A\;\;B] = rank\left[ {\begin{array}{*{20}{c}}
{\lambda I - {A_{11}}}&{ - {A_{12}}}&{{B_1}}\\
0&{\lambda I - {A_{22}}}&0
\end{array}} \right]\]has independent rows. This implies that the bottom block $[\lambda I - A_{22} ]$ also has full row rank $\forall |\lambda| \ge 1$, so every eigenvalue of $A_{22}$ satisfies $|\lambda| < 1$; hence $A_{22}$ is stable. $\square$
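The stabilizability version restricts the same rank test to eigenvalues with $|\lambda| \ge 1$. A sketch (the example pair is hypothetical: its unreachable mode $\lambda = 0.5$ is stable, so the pair is stabilizable though not controllable):

```python
import numpy as np

def hautus_stabilizable(A, B, tol=1e-9):
    """PBH test for stabilizability (discrete time): rank[lam*I - A, B] = n
    for every eigenvalue lam of A with |lam| >= 1."""
    n = A.shape[0]
    for lam in np.linalg.eigvals(A):
        if abs(lam) >= 1:
            M = np.hstack([lam * np.eye(n) - A, B])
            if np.linalg.matrix_rank(M, tol) < n:
                return False
    return True

# The unstable mode lam = 2 is reachable; the unreachable mode lam = 0.5 is stable
A = np.array([[2.0, 0.0], [0.0, 0.5]])
B = np.array([[1.0], [0.0]])
print(hautus_stabilizable(A, B))  # True (stabilizable, though not controllable)
```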

2/10/2015

[Optimal Control] Discrete-Time Steady-State LQR (2) - LQR Convergence

Recall the discrete-time steady-state LQR problem: define the cost function
\[\begin{array}{l}
V\left( {x,{\bf{u}}} \right): = \frac{1}{2}\sum\limits_{k = 0}^\infty  {{x^T}\left( k \right)Qx\left( k \right) + {u^T}\left( k \right)Ru\left( k \right)} \\
s.t.\\
x\left( {k + 1} \right) = Ax\left( k \right) + Bu\left( k \right)\\
x\left( 0 \right) = x
\end{array}\]with $Q, R >0$ positive definite. If $(A,B)$ is controllable, then the optimization problem
\[
\min_{\bf u} V(x,{\bf u})
\]has a solution, and the solution is unique (why?).

Denote this solution (the input sequence) by ${\bf u}^*(x)$, whose first element is $u^*(x)$, and let $K_{\infty}(\cdot)$ be the corresponding optimal control law, so that
\[
K_\infty(x) = u^*(x) = {\bf u}^* (0;x)
\]

The following theorem states that steady-state LQR applied to a controllable system guarantees closed-loop stability:

=====================
Theorem: Steady-State LQR Guarantees Closed-Loop Stability
If $(A,B)$ is controllable and $Q,R >0$, the steady-state LQR problem yields a closed-loop system
\[
x(k+1) = A x(k) + B K_\infty (x(k))
\]that is stable.
=====================
Comment:
For linear systems, asymptotic stability is equivalent to asymptotic convergence $x(k) \to 0$.

Proof:
First we show the steady-state cost is bounded. Since $(A,B)$ is controllable, by definition there exists an input sequence
\[
{\bf u} = \{u(0), u(1),...,u(n-1)\}
\]that drives the system from any given initial state $x(0)$ to the terminal state $x(n) = 0$ within $n$ steps; taking the subsequent inputs $\{u(n), u(n+1),...\} = \{0,0,0,...\}$ keeps the state at the origin, so the original infinite-sum cost reduces to a finite sum:
\[\begin{array}{l}
V\left( {x,{\bf{u}}} \right): = \frac{1}{2}\sum\limits_{k = 0}^\infty  {{x^T}\left( k \right)Qx\left( k \right) + {u^T}\left( k \right)Ru\left( k \right)} \\
 \Rightarrow V\left( {x,{\bf{u}}} \right) = \frac{1}{2}\sum\limits_{k = 0}^n {{x^T}\left( k \right)Qx\left( k \right) + {u^T}\left( k \right)Ru\left( k \right)}
\end{array}\]This cost is finite, so the optimal cost is bounded above. Moreover, since the cost is strictly convex in $\bf u$ (because $R>0$, the input terms do not vanish), the steady-state LQR problem has a unique solution.

Next, observe that the optimal costs-to-go satisfy
\[
V_{k+1} = V_k - \frac{1}{2}\left( x(k)'Qx(k) + u(k)' R u(k) \right)
\]where $V_k := V^0(x(k))$ is the optimal cost at time $k$ for the state $x(k)$ under the optimal input $u(k) = u^0(x(k))$. The sequence $\{V_k\}$ is therefore nonincreasing and bounded below (by $0$), hence convergent; so
\[
|V_{k+1} - V_k| \to 0\;\;\;\; \text{as $k \to \infty$}
\]Therefore
\[\left\{ \begin{array}{l}
x\left( k \right)'Qx\left( k \right)\; \to 0\\
u\left( k \right)'{\rm{ }}R{\rm{ }}u\left( k \right) \to 0
\end{array} \right.\]and since $Q,R >0$ it follows that
\[\left\{ \begin{array}{l}
x\left( k \right)\; \to 0\\
u\left( k \right) \to 0
\end{array} \right.\]i.e., the closed-loop state converges (closed-loop stability!). $\square$

Comments
1. The condition $R>0$ guarantees uniqueness of the control law. In practice, if the control effort is of no concern, $R$ can be set to a positive definite matrix with very small entries.
2. The theorem above extends from controllability to stabilizability.
3. The condition $Q>0$ can be relaxed to $Q \ge 0$ with $(A,Q)$ detectable.
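The theorem can be illustrated numerically by iterating the discrete-time Riccati equation to the steady-state gain and checking the closed-loop pole. A scalar sketch (numbers hypothetical) where the fixed point can be verified by hand:

```python
import numpy as np

# Scalar example (1x1 matrices) so the Riccati fixed point is checkable by hand:
# a = 1.5 (unstable open loop), b = 1, Q = R = 1.
A = np.array([[1.5]]); B = np.array([[1.0]])
Q = np.array([[1.0]]); R = np.array([[1.0]])

# Fixed-point iteration of the discrete-time Riccati equation
P = Q.copy()
for _ in range(200):
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)  # optimal law u = -K x
    P = Q + A.T @ P @ A - A.T @ P @ B @ K

# By hand: p^2 - 2.25 p - 1 = 0 gives p ~ 2.6302, K ~ 1.0868,
# so the closed-loop pole is a - b K ~ 0.4132: stable, as the theorem guarantees.
Acl = A - B @ K
print(Acl[0, 0])

# The closed-loop state converges to 0 from any initial condition
x = np.array([5.0])
for _ in range(50):
    x = Acl @ x
print(abs(x[0]) < 1e-10)  # True
```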


2/07/2015

[Optimal Control] Optimal State Estimation for Linear Systems - Kalman Filter (1)

This post builds on the previous one, [Optimal Control] Optimal State Estimation for Linear Systems - Kalman Filter (0); readers are strongly encouraged to read that post first. Here we derive the Kalman filter by computing probability densities.


Consider the discrete-time dynamic system
\[
x(k+1) = Ax(k) + w(k);\;\; y(k) = Cx(k) + v(k)
\]where $w \sim N(0,Q)$ is the process noise (or process disturbance), $v \sim N(0,R)$ is the measurement noise, $x(0) \sim N(\bar x(0), Q(0))$, and $x \in \mathbb{R}^n, A \in \mathbb{R}^{n \times n}, C \in \mathbb{R}^{p \times n}, y \in \mathbb{R}^p$. The Kalman filter answers the question: if the state is unknown and we only receive the measured outputs, what is the optimal state estimate?


First Step: $k=0$
Assume the initial state $x(0)$ is a normally distributed random vector with mean $\bar x(0)$ and covariance matrix $Q(0)$, i.e.,
\[
x(0) \sim N(\bar x(0), Q(0))
\]We then receive the initial (noise-corrupted) measurement $y(0)$ satisfying
\[
y(0) = C x(0) + v(0)
\]where $v(0) \sim N(0, R)$ is the measurement noise.

Our goal is to obtain the conditional density $p_{x(0)|y(0)}(x(0) | y(0))$; the state estimate $\hat x$ then follows from this conditional density as
$$\hat x := \arg \max_x p_{x(0)|y(0)}(x(0) | y(0))$$

Comments:
1. If $\bar x(0)$ or $Q(0)$ is unknown, we usually take $\bar x(0) :=0$ and $Q(0)$ large to express that we know little about the initial state (a noninformative prior).
2. If the measurements are heavily corrupted by noise, we assign a larger $R$; if the measurement process is accurate and the output $y$ is not much polluted by noise, $R$ is small.
3. The conditional density describes our knowledge of $x(0)$ after the initial measurement $y(0)$ has been obtained.


Returning to our goal: how do we derive $p_{x(0)|y(0)}(x(0) | y(0))$?
First write $(x(0), y(0))$ as
\[\left[ {\begin{array}{*{20}{c}}
{x\left( 0 \right)}\\
{y\left( 0 \right)}
\end{array}} \right] = \left[ {\begin{array}{*{20}{c}}
I&0\\
C&I
\end{array}} \right]\left[ {\begin{array}{*{20}{c}}
{x\left( 0 \right)}\\
{v\left( 0 \right)}
\end{array}} \right]
\]Assume $v(0)$ and $x(0)$ are independent. Then $x(0), v(0)$ are jointly normal, i.e.,
\[\left[ {\begin{array}{*{20}{c}}
{x\left( 0 \right)}\\
{v\left( 0 \right)}
\end{array}} \right] \sim N\left( {\left[ {\begin{array}{*{20}{c}}
{\bar x\left( 0 \right)}\\
0
\end{array}} \right],\left[ {\begin{array}{*{20}{c}}
{Q\left( 0 \right)}&0\\
0&{{R}}
\end{array}} \right]} \right)\]
Since $[x(0) \;\; y(0)]$ is a linear transformation of $[x(0) \;\; v(0)]$, we immediately obtain
\[\begin{array}{l}
\left[ {\begin{array}{*{20}{c}}
{x\left( 0 \right)}\\
{y\left( 0 \right)}
\end{array}} \right]\sim N\left( {\left[ {\begin{array}{*{20}{c}}
I&0\\
C&I
\end{array}} \right]\left[ {\begin{array}{*{20}{c}}
{\bar x\left( 0 \right)}\\
0
\end{array}} \right],\left[ {\begin{array}{*{20}{c}}
I&0\\
C&I
\end{array}} \right]\left[ {\begin{array}{*{20}{c}}
{Q\left( 0 \right)}&0\\
0&R
\end{array}} \right]{{\left[ {\begin{array}{*{20}{c}}
I&0\\
C&I
\end{array}} \right]}^T}} \right)\\
 \Rightarrow \left[ {\begin{array}{*{20}{c}}
{x\left( 0 \right)}\\
{y\left( 0 \right)}
\end{array}} \right] \sim N\left( {\left[ {\begin{array}{*{20}{c}}
{\bar x\left( 0 \right)}\\
{C\bar x\left( 0 \right)}
\end{array}} \right],\left[ {\begin{array}{*{20}{c}}
I&0\\
C&I
\end{array}} \right]\left[ {\begin{array}{*{20}{c}}
{Q\left( 0 \right)}&0\\
0&R
\end{array}} \right]\left[ {\begin{array}{*{20}{c}}
I&{{C^T}}\\
0&I
\end{array}} \right]} \right)\\
 \Rightarrow \left[ {\begin{array}{*{20}{c}}
{x\left( 0 \right)}\\
{y\left( 0 \right)}
\end{array}} \right]\sim N\left( {\left[ {\begin{array}{*{20}{c}}
{\bar x\left( 0 \right)}\\
{C\bar x\left( 0 \right)}
\end{array}} \right],\left[ {\begin{array}{*{20}{c}}
{Q\left( 0 \right)}&{Q\left( 0 \right){C^T}}\\
{CQ\left( 0 \right)}&{CQ\left( 0 \right){C^T} + R}
\end{array}} \right]} \right)
\end{array}\]With the joint density of $x(0), y(0)$ in hand, and noting that it is normal, the conditional density of $x(0)$ given $y(0)$, i.e., $p_{x(0)|y(0)}(x(0)|y(0))$, follows from FACT 3 discussed in the previous post:
\[
p_{x(0)|y(0)}(x(0)|y(0)) = n(x(0), m, P)
\]where
\[\left\{ {\begin{array}{*{20}{l}}
\begin{array}{l}
m = \bar x\left( 0 \right) + L\left( 0 \right)(y\left( 0 \right) - C\bar x\left( 0 \right))\\
L\left( 0 \right): = Q\left( 0 \right){C^T}\left( {CQ\left( 0 \right){C^T} + R} \right)_{}^{ - 1}
\end{array}\\
{P = Q\left( 0 \right) - Q\left( 0 \right){C^T}\left( {CQ\left( 0 \right){C^T} + R} \right)_{}^{ - 1}CQ\left( 0 \right)}
\end{array}} \right.\]The optimal state estimate $\hat x$ is the $x(0)$ with maximal conditional density; for a normal distribution this is exactly the mean, so we take $\hat x(0) := m$. The matrix $P =: P(0)$ above is the covariance after the measurement $y(0)$ has been obtained.
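A numeric sketch of this measurement update (all numbers hypothetical): observing only the first state component shrinks its variance from $1$ to $0.2$, while the unobserved component's variance is unchanged.

```python
import numpy as np

# Hypothetical numbers: prior x(0) ~ N(xbar0, Q0), measurement y(0) = C x(0) + v(0)
xbar0 = np.array([0.0, 0.0])
Q0 = np.eye(2)
C = np.array([[1.0, 0.0]])  # only the first state component is measured
R = np.array([[0.25]])
y0 = np.array([1.0])

# The update formulas above: m = xbar0 + L(0)(y(0) - C xbar0), etc.
S = C @ Q0 @ C.T + R
L0 = Q0 @ C.T @ np.linalg.inv(S)
m = xbar0 + L0 @ (y0 - C @ xbar0)
P0 = Q0 - Q0 @ C.T @ np.linalg.inv(S) @ C @ Q0

print(m)            # [0.8 0. ]  -- the estimate moves toward the measurement
print(np.diag(P0))  # [0.2 1. ]  -- measured direction: variance 1 -> 0.2
```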

Next Step: State Prediction
As the state moves from $k=0$ to $k=1$ we have
\[
x(1) = Ax(0) + w(0)
\]which can also be written as a linear transformation:
\[x\left( 1 \right) = \left[ {\begin{array}{*{20}{c}}
A&I
\end{array}} \right]\left[ {\begin{array}{*{20}{c}}
{x\left( 0 \right)}\\
{w\left( 0 \right)}
\end{array}} \right]\]where $w(0) \sim N(0, Q)$ is the disturbance (process noise). A large external disturbance on the state corresponds to a large covariance matrix $Q$; likewise, a small disturbance corresponds to a small $Q$.

Our goal now: compute the conditional density $p_{x(1)|y(0)}(x(1)|y(0))$.

We need the joint density of $(x(0),w(0))$ conditioned on $y(0)$. Assuming $w(0)$ is independent of $x(0)$ and $v(0)$, we can write
\[\left( {\left[ {\begin{array}{*{20}{c}}
{x\left( 0 \right)}\\
{w\left( 0 \right)}
\end{array}} \right]|y\left( 0 \right)} \right)\sim N\left( {\left[ {\begin{array}{*{20}{c}}
{\hat x\left( 0 \right)}\\
0
\end{array}} \right],\left[ {\begin{array}{*{20}{c}}
{P\left( 0 \right)}&0\\
0&Q
\end{array}} \right]} \right)\]Applying FACT 3' from the previous post to this linear transformation, the conditional density follows:
 \[\begin{array}{l}
\left( {\left[ {\begin{array}{*{20}{c}}
{x\left( 0 \right)}\\
{w\left( 0 \right)}
\end{array}} \right]|y\left( 0 \right)} \right)\sim  N\left( {\left[ {\begin{array}{*{20}{c}}
{\hat x\left( 0 \right)}\\
0
\end{array}} \right],\left[ {\begin{array}{*{20}{c}}
{P\left( 0 \right)}&0\\
0&Q
\end{array}} \right]} \right)\\
 \Rightarrow \left( {x\left( 1 \right)|y\left( 0 \right)} \right)\sim N\left( {\left[ {\begin{array}{*{20}{c}}
A&I
\end{array}} \right]\left[ {\begin{array}{*{20}{c}}
{\hat x\left( 0 \right)}\\
0
\end{array}} \right],\left[ {\begin{array}{*{20}{c}}
A&I
\end{array}} \right]\left[ {\begin{array}{*{20}{c}}
{P\left( 0 \right)}&0\\
0&Q
\end{array}} \right]{{\left[ {\begin{array}{*{20}{c}}
A&I
\end{array}} \right]}^T}} \right)\\
 \Rightarrow \left( {x\left( 1 \right)|y\left( 0 \right)} \right)\sim N\left( {A\hat x\left( 0 \right),AP\left( 0 \right){A^T} + Q} \right)
\end{array}\]Hence the conditional density is again normal:
\[
p_{x(1)|y(0)}(x(1)|y(0)) = n(x(1), \hat x^-(1), P^-(1))
\]where
\[\left\{ \begin{array}{l}
{{\hat x}^ - }\left( 1 \right) = A\hat x\left( 0 \right)\\
{P^ - }\left( 1 \right) = AP\left( 0 \right){A^T} + Q
\end{array} \right.\]We then simply repeat these steps recursively for $k=2,3,4,...$. A summary follows:

Summary
Define the measurements from the initial time up to time $k$ as
\[
{\bf y}(k) := \{y(0),y(1),...,y(k)\}
\]At time $k$, the conditional density given the data ${\bf y}(k-1)$ is normal:
\[
p_{x(k)| {\bf y}(k-1)}(x(k) | {\bf y}(k-1)) = n(x(k), \hat x^-(k),  P^-(k))
\]The superscript $^-$ on the mean and covariance matrix indicates an estimate based only on the measurements ${\bf y}(k-1)$, i.e., before the measurement at time $k$ is obtained (the data up to $k-1$ predict the state at $k$!). At $k=0$ the recursion is initialized with $\hat x^- (0) = \bar x(0)$ and $P^-(0) = Q(0)$.

Next we receive $y(k)$, which satisfies
\[\left[ {\begin{array}{*{20}{c}}
{x\left( k \right)}\\
{y\left( k \right)}
\end{array}} \right] = \left[ {\begin{array}{*{20}{c}}
I&0\\
C&I
\end{array}} \right]\left[ {\begin{array}{*{20}{c}}
{x\left( k \right)}\\
{v\left( k \right)}
\end{array}} \right]
\]a linear transformation. Since the measurement noise $v(k)$ is independent of $x(k)$ and of ${\bf y}(k-1)$, the density of $(x(k), v(k))$ is
\[\underbrace {\left[ {\begin{array}{*{20}{c}}
{x\left( k \right)}\\
{v\left( k \right)}
\end{array}} \right]}_{ = \left( {\left[ {\begin{array}{*{20}{c}}
{x\left( k \right)}\\
{v\left( k \right)}
\end{array}} \right]|{\bf{y}}\left( {k - 1} \right)} \right)}\sim N\left( {\left[ {\begin{array}{*{20}{c}}
{{{\hat x}^ - }\left( k \right)}\\
0
\end{array}} \right],\left[ {\begin{array}{*{20}{c}}
{{P^ - }\left( k \right)}&0\\
0&R
\end{array}} \right]} \right)\]Applying the linear-transformation result gives
\[\begin{array}{l}
\left[ {\begin{array}{*{20}{c}}
{x\left( k \right)}\\
{y\left( k \right)}
\end{array}} \right]\sim N\left( {\left[ {\begin{array}{*{20}{c}}
I&0\\
C&I
\end{array}} \right]\left[ {\begin{array}{*{20}{c}}
{{{\hat x}^ - }\left( k \right)}\\
0
\end{array}} \right],\left[ {\begin{array}{*{20}{c}}
I&0\\
C&I
\end{array}} \right]\left[ {\begin{array}{*{20}{c}}
{{P^ - }\left( k \right)}&0\\
0&R
\end{array}} \right]{{\left[ {\begin{array}{*{20}{c}}
I&0\\
C&I
\end{array}} \right]}^T}} \right)\\
 \Rightarrow \left[ {\begin{array}{*{20}{c}}
{x\left( k \right)}\\
{y\left( k \right)}
\end{array}} \right]\sim N\left( {\left[ {\begin{array}{*{20}{c}}
{{{\hat x}^ - }\left( k \right)}\\
{C{{\hat x}^ - }\left( k \right)}
\end{array}} \right],\left[ {\begin{array}{*{20}{c}}
{{P^ - }\left( k \right)}&{{P^ - }\left( k \right){C^T}}\\
{C{P^ - }\left( k \right)}&{C{P^ - }\left( k \right){C^T} + R}
\end{array}} \right]} \right)
\end{array}\]Noting that $\{{\bf y}(k-1), y(k)\} \equiv {\bf y}(k)$, the conditional-density result yields
\[
p_{x(k)|{\bf y} (k)}(x(k)| {\bf y}(k)) = n(x(k), \hat x(k), P(k))
\]where
\[\left\{ {\begin{array}{*{20}{l}}
\begin{array}{l}
\hat x\left( k \right) = {{\hat x}^ - }\left( k \right) + L\left( k \right)(y\left( k \right) - C{{\hat x}^ - }\left( k \right))\\
L\left( k \right) = {P^ - }\left( k \right){C^T}\left( {C{P^ - }\left( k \right){C^T} + R} \right)_{}^{ - 1}
\end{array}\\
{P\left( k \right) = {P^ - }\left( k \right) - {P^ - }\left( k \right){C^T}\left( {C{P^ - }\left( k \right){C^T} + R} \right)_{}^{ - 1}C{P^ - }\left( k \right)}
\end{array}} \right.\]We now use the following model to predict the state at time $k+1$ based on the measurements up to time $k$:
\[x\left( {k + 1} \right) = \left[ {\begin{array}{*{20}{c}}
A&I
\end{array}} \right]\left[ {\begin{array}{*{20}{c}}
{x\left( k \right)}\\
{w\left( k \right)}
\end{array}} \right]\]Since $w(k)$ is independent of $x(k)$ and of ${\bf y}(k)$, the joint density of $(x(k),w(k))$ can be written as
\[\left[ {\begin{array}{*{20}{c}}
{x\left( k \right)}\\
{w\left( k \right)}
\end{array}} \right]\sim N\left( {\left[ {\begin{array}{*{20}{c}}
{\hat x\left( k \right)}\\
0
\end{array}} \right],\left[ {\begin{array}{*{20}{c}}
{P\left( k \right)}&0\\
0&Q
\end{array}} \right]} \right)\]故 conditional density 為
\[{p_{x\left( {k + 1} \right)|{\bf{y}}\left( k \right)}}\left( {x\left( {k + 1} \right)|{\bf{y}}\left( k \right)} \right) = n\left( {x\left( {k + 1} \right),{{\hat x}^ - }\left( {k + 1} \right),{P^ - }\left( {k + 1} \right)} \right)\]where
\[\left\{ \begin{array}{l}
{{\hat x}^ - }\left( {k + 1} \right) = A\hat x\left( k \right)\\
{P^ - }\left( {k + 1} \right) = AP\left( k \right){A^T} + Q
\end{array} \right.\]
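Collecting the measurement update and the time update above, one step of the Kalman filter recursion can be sketched in numpy as follows (a minimal illustration; the function name `kalman_step` and any test values are my own, not from the text):

```python
import numpy as np

def kalman_step(A, C, Q, R, x_prior, P_prior, y):
    """One Kalman filter step: measurement update at time k, then time
    update to k+1, using the formulas derived above.
    x_prior, P_prior play the role of hat x^-(k), P^-(k)."""
    # Measurement update: L(k) = P^- C^T (C P^- C^T + R)^{-1}
    S = C @ P_prior @ C.T + R
    L = P_prior @ C.T @ np.linalg.inv(S)
    x_post = x_prior + L @ (y - C @ x_prior)   # hat x(k)
    P_post = P_prior - L @ C @ P_prior         # P(k)
    # Time update: hat x^-(k+1) = A hat x(k), P^-(k+1) = A P(k) A^T + Q
    x_next = A @ x_post
    P_next = A @ P_post @ A.T + Q
    return x_post, P_post, x_next, P_next
```

Note that `P_post` uses the algebraically equivalent form $P = P^- - L C P^-$, since $L = P^- C^T (C P^- C^T + R)^{-1}$.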

[Optimal Control] Optimal Parameter Estimation for Linear Systems: Kalman Filter (0) - Preliminaries

This post introduces the Kalman filter, also known as the Optimal Linear State Estimator. Below we briefly present some results that will be needed in the next post.

Preliminaries
Recall that if $x$ is a normal random variable with mean $m$ and variance $\sigma^2$, we write $x \sim N(m, \sigma^2)$. Its probability density function is
\[
\frac{1}{\sqrt{2 \pi}\, \sigma} \exp\left(-\frac{(x-m)^2}{2 \sigma^2}\right)
\]We now extend this result to several random variables (i.e., a random vector). Let $x$ be a normal random vector, written
\[
x \sim N(m,P); \;\;
p_x(x) := n(x,m,P)
\]This notation means that $x$ is normally distributed with mean vector $m$ and covariance matrix $P$, and $n(x,m,P)$ denotes the normal probability density function
\[\;n\left( {x,m,P} \right): = \frac{1}{{{{\left( {2\pi } \right)}^{n/2}}{{\left( {\det P} \right)}^{1/2}}}}\exp \left( { - \frac{1}{2}{{\left( {x - m} \right)}^T}{P^{ - 1}}\left( {x - m} \right)} \right)\]
Comment:
If $x \in \mathbb{R}^n$, then the mean vector $m \in \mathbb{R}^n$ and the covariance matrix $P \in \mathbb{R}^{n \times n}$ is real, symmetric, and positive definite (positive definiteness ensures that $P^{-1}$ exists, so the probability density function above is well defined). If $P$ is not positive definite, the distribution is called a singular (or degenerate) normal distribution.
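The density $n(x, m, P)$ above can be coded directly; this sketch (the helper name `n_density` is mine) is handy for checking the later formulas numerically:

```python
import numpy as np

def n_density(x, m, P):
    """The multivariate normal density n(x, m, P) defined above."""
    d = len(m)
    diff = np.asarray(x, dtype=float) - np.asarray(m, dtype=float)
    # Normalizing constant (2 pi)^{d/2} (det P)^{1/2}
    norm = (2 * np.pi) ** (d / 2) * np.sqrt(np.linalg.det(P))
    return float(np.exp(-0.5 * diff @ np.linalg.inv(P) @ diff)) / norm
```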

Example
For $n = 2$ we can plot the normal density function; for example, take
\[m = \left[ {\begin{array}{*{20}{c}}
0\\
0
\end{array}} \right];\begin{array}{*{20}{c}}
{}&{}
\end{array}{P^{ - 1}} = \left[ {\begin{array}{*{20}{c}}
{3.5}&{2.5}\\
{2.5}&{4.0}
\end{array}} \right]\]and plot the corresponding density surface. (Figure omitted.)


We now look at a few basic results that will be used later:

================
FACT 1: Joint independent normals
If the random vectors $x  \sim N(m_x,P_x)$ and $y  \sim N(m_y,P_y)$ are normally distributed and mutually independent, then their joint density $p_{x,y}(x,y)$ is
\[
p_{x,y}(x,y) = p_{x}(x)p_{y}(y)=n(x,m_x,P_x) \cdot n(y,m_y,P_y)
\]and
\[\left[ {\begin{array}{*{20}{c}}
x\\
y
\end{array}} \right] \sim N\left( {\left[ {\begin{array}{*{20}{c}}
{{m_x}}\\
{{m_y}}
\end{array}} \right],\left[ {\begin{array}{*{20}{c}}
{{P_x}}&0\\
0&{{P_y}}
\end{array}} \right]} \right)\]================


================
FACT 2: Linear Transformation of a normal
If $x \sim N(m, P)$ and $y$ is a linear transformation of $x$, i.e., $y = Ax$ for any matrix $A$, then
\[
y \sim N(Am , APA^T)
\]================
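FACT 2 can be checked by simulation: sample $x$, apply $y = Ax$, and compare the empirical mean and covariance of $y$ against $Am$ and $APA^T$ (the numbers below are arbitrary example values):

```python
import numpy as np

rng = np.random.default_rng(0)
m = np.array([1.0, -2.0])
P = np.array([[2.0, 0.5],
              [0.5, 1.0]])
A = np.array([[1.0, 1.0],
              [0.0, 3.0]])

# Sample x ~ N(m, P), apply y = Ax, and compare moments with N(Am, A P A^T).
xs = rng.multivariate_normal(m, P, size=200_000)
ys = xs @ A.T
print(ys.mean(axis=0))   # close to A @ m
print(np.cov(ys.T))      # close to A @ P @ A.T
```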

================
FACT 3: Conditional of a joint normal
If $x,y$ are jointly normally distributed (no independence assumption),
\[\left[ {\begin{array}{*{20}{c}}
x\\
y
\end{array}} \right]\sim N\left( {\left[ {\begin{array}{*{20}{c}}
{{m_x}}\\
{{m_y}}
\end{array}} \right],\left[ {\begin{array}{*{20}{c}}
{{P_x}}&{{P_{xy}}}\\
{{P_{yx}}}&{{P_y}}
\end{array}} \right]} \right)\]then the conditional density of $x$ given $y$ is again normal; that is,
\[
(x|y) \sim N(m,P)
\]with probability density function $p_{x|y}(x|y) = n(x,m,P)$, where the conditional mean vector $m$ and conditional covariance matrix $P$ are given by
\[\begin{array}{l}
m = {m_x} + {P_{xy}}P_y^{ - 1}(y - {m_y})\\
P = {P_x} - {P_{xy}}P_y^{ - 1}{P_{yx}}
\end{array}\]================
Note that the conditional mean $m$ above is a random vector (it depends on $y$).
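Plugging concrete (arbitrarily chosen) numbers into the formulas for the conditional mean and covariance:

```python
import numpy as np

# Joint normal parameters, partitioned as in FACT 3 (example numbers)
mx, my = np.array([0.0]), np.array([1.0])
Px  = np.array([[2.0]])
Pxy = np.array([[0.8]])   # here Pyx = Pxy^T
Py  = np.array([[1.0]])

y = np.array([2.0])       # an observed value of y
m = mx + Pxy @ np.linalg.inv(Py) @ (y - my)   # conditional mean
P = Px - Pxy @ np.linalg.inv(Py) @ Pxy.T      # conditional covariance
print(m, P)   # [0.8] [[1.36]]
```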

Proof:
Recall the definition of the conditional density of $x$ given $y$:
\[
p_{x|y}(x|y) := \frac{p_{x,y}(x,y)}{p_y(y)}
\]Since $(x,y)$ is jointly normal, we have
\[\small \begin{array}{l}
{p_y}\left( y \right): = \frac{1}{{{{\left( {2\pi } \right)}^{{n_y}/2}}{{\left( {\det {P_y}} \right)}^{1/2}}}}\exp \left( { - \frac{1}{2}{{\left( {y - {m_y}} \right)}^T}{P_y}^{ - 1}\left( {y - {m_y}} \right)} \right)\\
{p_{x,y}}\left( {x,y} \right): = \frac{1}{{{{\left( {2\pi } \right)}^{\left( {{n_x} + {n_y}} \right)/2}}{{\left( {\det \left[ {\begin{array}{*{20}{c}}
{{P_x}}&{{P_{xy}}}\\
{{P_{yx}}}&{{P_y}}
\end{array}} \right]} \right)}^{1/2}}}}\exp \left( { - \frac{1}{2}{{\left[ {\begin{array}{*{20}{c}}
{x - {m_x}}\\
{y - {m_y}}
\end{array}} \right]}^T}{{\left[ {\begin{array}{*{20}{c}}
{{P_x}}&{{P_{xy}}}\\
{{P_{yx}}}&{{P_y}}
\end{array}} \right]}^{ - 1}}\left[ {\begin{array}{*{20}{c}}
{x - {m_x}}\\
{y - {m_y}}
\end{array}} \right]} \right)
\end{array}\]That is,
\[\begin{array}{l}
{p_{x|y}}\left( {x|y} \right) = \frac{{{p_{x,y}}\left( {x,y} \right)}}{{{p_y}\left( y \right)}}\\
\begin{array}{*{20}{c}}
{}&{}&{}&{}
\end{array} = \frac{{\frac{1}{{{{\left( {2\pi } \right)}^{\left( {{n_x} + {n_y}} \right)/2}}{{\left( {\det \left[ {\begin{array}{*{20}{c}}
{{P_x}}&{{P_{xy}}}\\
{{P_{yx}}}&{{P_y}}
\end{array}} \right]} \right)}^{1/2}}}}\exp \left( { - \frac{1}{2}{{\left[ {\begin{array}{*{20}{c}}
{x - {m_x}}\\
{y - {m_y}}
\end{array}} \right]}^T}{{\left[ {\begin{array}{*{20}{c}}
{{P_x}}&{{P_{xy}}}\\
{{P_{yx}}}&{{P_y}}
\end{array}} \right]}^{ - 1}}\left[ {\begin{array}{*{20}{c}}
{x - {m_x}}\\
{y - {m_y}}
\end{array}} \right]} \right)}}{{\frac{1}{{{{\left( {2\pi } \right)}^{{n_y}/2}}{{\left( {\det {P_y}} \right)}^{1/2}}}}\exp \left( { - \frac{1}{2}{{\left( {y - {m_y}} \right)}^T}{P_y}^{ - 1}\left( {y - {m_y}} \right)} \right)}}\\
\begin{array}{*{20}{c}}
{}&{}&{}&{}
\end{array} = \frac{{{{\left( {\det {P_y}} \right)}^{1/2}}\exp \left( { - \frac{1}{2}\left( {{{\left[ {\begin{array}{*{20}{c}}
{x - {m_x}}\\
{y - {m_y}}
\end{array}} \right]}^T}{{\left[ {\begin{array}{*{20}{c}}
{{P_x}}&{{P_{xy}}}\\
{{P_{yx}}}&{{P_y}}
\end{array}} \right]}^{ - 1}}\left[ {\begin{array}{*{20}{c}}
{x - {m_x}}\\
{y - {m_y}}
\end{array}} \right] - {{\left( {y - {m_y}} \right)}^T}{P_y}^{ - 1}\left( {y - {m_y}} \right)} \right)} \right)}}{{{{\left( {2\pi } \right)}^{{n_x}/2}}{{\left( {\det \left[ {\begin{array}{*{20}{c}}
{{P_x}}&{{P_{xy}}}\\
{{P_{yx}}}&{{P_y}}
\end{array}} \right]} \right)}^{1/2}}}}
\end{array}\]Note that if we set $P:= P_x - P_{xy}P_y^{-1}P_{yx}$, then by the matrix inversion lemma
\[{\left[ {\begin{array}{*{20}{c}}
{{P_x}}&{{P_{xy}}}\\
{{P_{yx}}}&{{P_y}}
\end{array}} \right]^{ - 1}} = \left[ {\begin{array}{*{20}{c}}
{{P^{ - 1}}}&{ - {P^{ - 1}}{P_{xy}}P_y^{ - 1}}\\
{ - P_y^{ - 1}{P_{yx}}{P^{ - 1}}}&{P_y^{ - 1} + P_y^{ - 1}{P_{yx}}{P^{ - 1}}{P_{xy}}P_y^{ - 1}}
\end{array}} \right]\]Substituting this back, we obtain
\[\begin{array}{l}
{p_{x|y}}\left( {x|y} \right) = \frac{{{p_{x,y}}\left( {x,y} \right)}}{{{p_y}\left( y \right)}}\\
\begin{array}{*{20}{c}}
{}&{}&{}
\end{array} = \frac{{{{\left( {\det {P_y}} \right)}^{1/2}}\exp \left( { - \frac{1}{2}\left( \begin{array}{l}
{\left( {x - {m_x}} \right)^T}{P^{ - 1}}\left( {x - {m_x}} \right) - 2{\left( {y - {m_y}} \right)^T}P_y^{ - 1}{P_{yx}}{P^{ - 1}}\left( {x - {m_x}} \right)\\
\begin{array}{*{20}{c}}
{}&{}
\end{array} + {\left( {y - {m_y}} \right)^T}\left( {P_y^{ - 1}{P_{yx}}{P^{ - 1}}{P_{xy}}P_y^{ - 1}} \right)\left( {y - {m_y}} \right)
\end{array} \right)} \right)}}{{{{\left( {2\pi } \right)}^{{n_x}/2}}{{\left( {\det \left[ {\begin{array}{*{20}{c}}
{{P_x}}&{{P_{xy}}}\\
{{P_{yx}}}&{{P_y}}
\end{array}} \right]} \right)}^{1/2}}}}\\
\begin{array}{*{20}{c}}
{}&{}&{}
\end{array} = \frac{{{{\left( {\det {P_y}} \right)}^{1/2}}\exp \left( { - \frac{1}{2}\left( {{{\left[ {{{\left( {x - {m_x}} \right)}^T} - {P_{xy}}P_y^{ - 1}\left( {y - {m_y}} \right)} \right]}^T}{P^{ - 1}}\left[ {{{\left( {x - {m_x}} \right)}^T} - {P_{xy}}P_y^{ - 1}\left( {y - {m_y}} \right)} \right]} \right)} \right)}}{{{{\left( {2\pi } \right)}^{{n_x}/2}}{{\left( {\det \left[ {\begin{array}{*{20}{c}}
{{P_x}}&{{P_{xy}}}\\
{{P_{yx}}}&{{P_y}}
\end{array}} \right]} \right)}^{1/2}}}}\\
\begin{array}{*{20}{c}}
{}&{}&{}
\end{array} = \frac{{{{\left( {\det {P_y}} \right)}^{1/2}}\exp \left( { - \frac{1}{2}\left( {{{\left[ {\left( {x - {m_x}} \right) - {P_{xy}}P_y^{ - 1}\left( {y - {m_y}} \right)} \right]}^T}{P^{ - 1}}\left[ {\left( {x - {m_x}} \right) - {P_{xy}}P_y^{ - 1}\left( {y - {m_y}} \right)} \right]} \right)} \right)}}{{{{\left( {2\pi } \right)}^{{n_x}/2}}{{\left( {\det \left[ {\begin{array}{*{20}{c}}
{{P_x}}&{{P_{xy}}}\\
{{P_{yx}}}&{{P_y}}
\end{array}} \right]} \right)}^{1/2}}}}
\end{array}\]Next, setting $m := m_x + P_{xy} P_y^{-1}(y-m_y)$, we obtain
\[{p_{x|y}}\left( {x|y} \right) = \frac{{{p_{x,y}}\left( {x,y} \right)}}{{{p_y}\left( y \right)}} = \frac{{{{\left( {\det {P_y}} \right)}^{1/2}}\exp \left( { - \frac{1}{2}{{\left( {x - m} \right)}^T}{P^{ - 1}}\left( {x - m} \right)} \right)}}{{{{\left( {2\pi } \right)}^{{n_x}/2}}{{\left( {\det \left[ {\begin{array}{*{20}{c}}
{{P_x}}&{{P_{xy}}}\\
{{P_{yx}}}&{{P_y}}
\end{array}} \right]} \right)}^{1/2}}}}\]Note that
\[\det \left[ {\begin{array}{*{20}{c}}
{{P_x}}&{{P_{xy}}}\\
{{P_{yx}}}&{{P_y}}
\end{array}} \right] = \det {P_y}\det P\]and hence
\[{p_{x|y}}\left( {x|y} \right) = \frac{1}{{{{\left( {2\pi } \right)}^{{n_x}/2}}{{\left( {\det P} \right)}^{1/2}}}}\exp \left( { - \frac{1}{2}{{\left( {x - m} \right)}^T}{P^{ - 1}}\left( {x - m} \right)} \right) = n(x,m,P) \ \ \ \ \ \square
\]
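Both matrix facts used in this proof, the block inverse expressed through the Schur complement $P$ and the determinant factorization $\det[\cdot] = \det P_y \det P$, can be verified numerically on a random positive-definite joint covariance (a sketch with arbitrary sizes):

```python
import numpy as np

rng = np.random.default_rng(1)
B = rng.standard_normal((5, 5))
J = B @ B.T + 5 * np.eye(5)     # random positive-definite joint covariance
Px, Pxy = J[:2, :2], J[:2, 2:]
Pyx, Py = J[2:, :2], J[2:, 2:]

P = Px - Pxy @ np.linalg.inv(Py) @ Pyx           # Schur complement
# (1,1) block of the full inverse equals P^{-1}
assert np.allclose(np.linalg.inv(J)[:2, :2], np.linalg.inv(P))
# det of the joint covariance factors as det(Py) det(P)
assert np.isclose(np.linalg.det(J), np.linalg.det(Py) * np.linalg.det(P))
```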


To derive the optimal estimator, we need the following conditional variants of the results above:

================
FACT 1': Joint independent normals
If $p_{x|z} (x|z) = n(x, m_x, P_x)$ is normal, and $y \sim N(m_y, P_y)$ is independent of both $x$ and $z$, then the conditional joint density of $(x,y)$ given $z$ is
\[{p_{x,y|z}}\left( {\left[ {\begin{array}{*{20}{c}}
x\\
y
\end{array}} \right]|z} \right) = n\left( {\left[ {\begin{array}{*{20}{c}}
x\\
y
\end{array}} \right],\left[ {\begin{array}{*{20}{c}}
{{m_x}}\\
{{m_y}}
\end{array}} \right],\left[ {\begin{array}{*{20}{c}}
{{P_x}}&0\\
0&{{P_y}}
\end{array}} \right]} \right)\]================

================
FACT 2': Linear Transformation of a Normal
If $p_{x|z}(x|z)=  n(x,m, P)$ and $y$ is a linear transformation of $x$, i.e., $y = Ax$, then
\[{p_{y|z}}(y|z) = n(y,Am,AP{A^T})\]================

================
FACT 3': Conditional of a joint normal
If $x,y$ are jointly normally distributed given $z$ (no independence assumption),
\[{p_{x,y|z}}\left( {\left[ {\begin{array}{*{20}{c}}
x\\
y
\end{array}} \right]|z} \right) = n\left( {\left[ {\begin{array}{*{20}{c}}
x\\
y
\end{array}} \right],\left[ {\begin{array}{*{20}{c}}
{{m_x}}\\
{{m_y}}
\end{array}} \right],\left[ {\begin{array}{*{20}{c}}
{{P_x}}&{{P_{xy}}}\\
{{P_{yx}}}&{{P_y}}
\end{array}} \right]} \right)\]then the conditional density of $x$ given $y,z$ is again normal, written
\[
p_{x|y,z} (x|y,z) = n(x,m,P)
\]where the conditional mean vector $m$ and conditional covariance matrix $P$ are given by
\[\begin{array}{l}
m = {m_x} + {P_{xy}}P_y^{ - 1}(y - {m_y})\\
P = {P_x} - {P_{xy}}P_y^{ - 1}{P_{yx}}
\end{array}\]
================

Proof:
By the definition of the conditional density of $x$ given $y,z$,
\[{p_{x|y,z}}\left( {x|y,z} \right) = \frac{{{p_{x,y,z}}\left( {x,y,z} \right)}}{{{p_{y,z}}\left( {y,z} \right)}}\]Dividing the numerator and denominator on the right-hand side by $p_z(z)$ gives
\[\begin{array}{l} {p_{x|y,z}}\left( {x|y,z} \right) = \frac{{{p_{x,y,z}}\left( {x,y,z} \right)}}{{{p_{y,z}}\left( {y,z} \right)}}\\ \begin{array}{*{20}{c}} {}&{}&{} \end{array} = \frac{{{p_{x,y,z}}\left( {x,y,z} \right)}}{{{p_z}\left( z \right)}}\frac{{{p_z}\left( z \right)}}{{{p_{y,z}}\left( {y,z} \right)}}\\ \begin{array}{*{20}{c}} {}&{}&{} \end{array} = {p_{x,y|z}}\left( {x,y|z} \right) \cdot \frac{1}{{{p_{y|z}}\left( {y|z} \right)}} \end{array}\]Then $p_{y|z}(y|z)$ can be computed using FACT 3 above, and substituting it together with $p_{x,y|z}(x,y|z)$ yields the desired result. $\square$


 ref: J. B. Rawlings and D. Q. Mayne, "Model Predictive Control: Theory and Design".

2/05/2015

[Machine Learning] Prediction with Expert Advice - the Exponential Strategy

Prediction with Expert Advice (PWEA) is also known as a repeated game (forecaster vs. environment).

The PWEA protocol runs as follows:
Suppose there are $N$ experts. For each prediction round $t := 1,2,...,T$:
STEP 1: The environment produces (1) the expert advice $\{f_{i,t}\}_{i=1}^N$ and (2) the true outcome $y_t$ (the value we want to predict, currently unknown to us).
STEP 2: We receive the expert advice generated by the environment for this round.
STEP 3: From the expert advice we construct this round's prediction $\hat p_t$.
STEP 4: Once $\hat p_t$ is fixed, the true outcome $y_t$ is revealed; we then compute our loss $l(\hat p_t, y_t)$ and, for each expert $i$, the corresponding loss $l(f_{i,t},y_t)$.

Goal: minimize the cumulative regret
\[{R_T}: = \sum\limits_{t = 1}^T l ({{\hat p}_t},{y_t}) - \underbrace {\mathop {\min }\limits_{i = 1,...,N} \sum\limits_{t = 1}^T l ({f_{i,t}},{y_t})}_{{\rm{Best\;\;expert\;\;in\;\;hindsight}}}\]The best we can hope for is $R_T = o(T)$; that is, $R_T/T \to 0$ as $T \to \infty$.

General PWEA Strategy
For each expert $i$, we assign a weight $w_{i,t}$ reflecting our confidence in that expert. The most common choice is a weighted average (or weighted majority):
\[{\hat p_t}: = \frac{{\sum\limits_{i = 1}^N {{w_{i,t}}{f_{i,t}}} }}{{\sum\limits_{i = 1}^N {{w_{i,t}}} }}
\]
How should the weights be computed?
At time $t+1$, defining $\eta := \ln q$, we update
\[\begin{array}{l}
{w_{i,t + 1}} = {w_{i,t}}{\left( {\frac{1}{q}} \right)^{{l_{i,t}}}} = {w_{i,t}}{\left( {\frac{1}{q}} \right)^{l\left( {{f_{i,t}},{y_t}} \right)}}\\
\begin{array}{*{20}{c}}
{}&{}&{}
\end{array} = {w_{i,t}}{\left( {{e^{ - \eta }}} \right)^{l\left( {{f_{i,t}},{y_t}} \right)}}
\end{array}\]

Comment
Note that
\[\begin{array}{l}
{w_{i,t + 1}} = {w_{i,t}}{\left( {{e^{ - \eta }}} \right)^{l\left( {{f_{i,t}},{y_t}} \right)}}\\
 \Rightarrow \left\{ \begin{array}{l}
{w_{i,2}} = {w_{i,1}}{\left( {{e^{ - \eta }}} \right)^{l\left( {{f_{i,1}},{y_1}} \right)}}\\
{w_{i,3}} = {w_{i,2}}{\left( {{e^{ - \eta }}} \right)^{l\left( {{f_{i,2}},{y_2}} \right)}} = {w_{i,1}}{\left( {{e^{ - \eta }}} \right)^{l\left( {{f_{i,1}},{y_1}} \right) + l\left( {{f_{i,2}},{y_2}} \right)}}\\
\begin{array}{*{20}{c}}
{}
\end{array} \vdots
\end{array} \right.
\end{array}\]Hence we have
\[{w_{i,t}} = {w_{i,1}}{\left( {{e^{ - \eta }}} \right)^{\sum\limits_{s = 1}^{t - 1} {l\left( {{f_{i,s}},{y_s}} \right)} }}\]
and we can further define the total weight at time $t$:
\[
W_t := \sum_{i=1}^N w_{i,t}
\]
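One round of this weighted-average strategy with the exponential update can be sketched as follows (the helper name `exp_weights_predict` is mine, not from the text):

```python
import numpy as np

def exp_weights_predict(w, f, y, eta, loss):
    """One PWEA round: weighted-average prediction followed by the
    exponential update w_{i,t+1} = w_{i,t} exp(-eta * l(f_{i,t}, y_t)).
    `loss` must be vectorized over the expert-advice array `f`."""
    p_hat = np.dot(w, f) / w.sum()          # weighted average prediction
    w_new = w * np.exp(-eta * loss(f, y))   # exponential weight update
    return p_hat, w_new
```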

==================
Theorem: Suppose the loss $l$ is a convex function taking values in $[0,1]$, and let $w_{i,1} := 1$. Then the exponential weights strategy guarantees the cumulative regret bound
\[
R_T \le \frac{\ln N}{\eta} + \frac{\eta T}{8}
\]==================

Proof:
We first derive a lower bound on $\ln (W_{T+1}/W_1)$:
\[\begin{array}{l}
\ln \left( {\frac{{{W_{T + 1}}}}{{{W_1}}}} \right) = \ln \left( {\frac{{\sum\limits_{i = 1}^N {{w_{i,T + 1}}} }}{{\sum\limits_{i = 1}^N {{w_{i,1}}} }}} \right)\\
\begin{array}{*{20}{c}}
{}&{}&{}
\end{array} = \ln \left( {\frac{{\sum\limits_{i = 1}^N {{w_{i,T + 1}}} }}{N}} \right) = \ln \left( {\sum\limits_{i = 1}^N {{w_{i,T + 1}}} } \right) - \ln \left( N \right)\\
\begin{array}{*{20}{c}}
{}&{}&{}
\end{array} \ge \ln \left( {\mathop {\max }\limits_{1 \le i \le N} {w_{i,T + 1}}} \right) - \ln \left( N \right)\\
\begin{array}{*{20}{c}}
{}&{}&{}
\end{array} = \ln \left( {\mathop {\max }\limits_{1 \le i \le N} {w_{i,1}}{{\left( {{e^{ - \eta }}} \right)}^{\sum\limits_{s = 1}^T {l\left( {{f_{i,s}},{y_s}} \right)} }}} \right) - \ln \left( N \right)\\
\begin{array}{*{20}{c}}
{}&{}&{}
\end{array} = \ln \left( {\mathop {\max }\limits_{1 \le i \le N} 1 \cdot {{\left( {{e^{ - \eta }}} \right)}^{\sum\limits_{s = 1}^T {l\left( {{f_{i,s}},{y_s}} \right)} }}} \right) - \ln \left( N \right)\begin{array}{*{20}{c}}
{}&{}&{}
\end{array}\left( {{w_{i,1}} = 1} \right)\\
\begin{array}{*{20}{c}}
{}&{}&{}
\end{array} = \mathop {\max }\limits_{1 \le i \le N} \ln \left( {{e^{ - \eta \sum\limits_{s = 1}^T {l\left( {{f_{i,s}},{y_s}} \right)} }}} \right) - \ln \left( N \right)\begin{array}{*{20}{c}}
{}&{}&{}
\end{array}\left( {{\rm{monotonicity}}\begin{array}{*{20}{c}}
{}
\end{array}{\rm{of}}\begin{array}{*{20}{c}}
{}
\end{array}{\rm{log}}} \right)\\
\begin{array}{*{20}{c}}
{}&{}&{}
\end{array} = \mathop {\max }\limits_{1 \le i \le N} \left( { - \eta \sum\limits_{s = 1}^T {l\left( {{f_{i,s}},{y_s}} \right)} } \right) - \ln \left( N \right)\\
\begin{array}{*{20}{c}}
{}&{}&{}
\end{array} =  - \eta \mathop {\min }\limits_{1 \le i \le N} \left( {\sum\limits_{s = 1}^T {l\left( {{f_{i,s}},{y_s}} \right)} } \right) - \ln \left( N \right)\begin{array}{*{20}{c}}
{}&{}&{}
\end{array}\left( {{\rm{max}} - f =  - \min f} \right) \ \ \ \ \ \ \ \ (*)
\end{array}\]Next we derive an upper bound on $\ln(W_{T+1}/W_1)$. First observe that
\[\ln \left( {\frac{{{W_{T + 1}}}}{{{W_1}}}} \right) = \ln \left( {\frac{{{W_{T + 1}}}}{{{W_T}}}\frac{{{W_T}}}{{{W_{T - 1}}}}...\frac{{{W_3}}}{{{W_2}}}\frac{{{W_2}}}{{{W_1}}}} \right) = \sum\limits_{t = 1}^T {\ln \left( {\frac{{{W_{t + 1}}}}{{{W_t}}}} \right)} \]Hence
\[\begin{array}{l}
\ln \left( {\frac{{{W_{t + 1}}}}{{{W_t}}}} \right) = \ln \left( {\frac{{\sum\limits_{i = 1}^N {{w_{i,t + 1}}} }}{{{W_t}}}} \right) = \ln \left( {\frac{{\sum\limits_{i = 1}^N {{w_{i,t}}{e^{ - \eta l\left( {{f_{i,t}},{y_t}} \right)}}} }}{{{W_t}}}} \right)\\
\begin{array}{*{20}{c}}
{}&{}&{}
\end{array} = \ln \left( {E\left[ {{e^{ - \eta l\left( {{f_{I,t}},{y_t}} \right)}}} \right]} \right)\begin{array}{*{20}{c}}
{}&{}&{}&{}
\end{array}\left( {P\left( {I = i} \right): = \frac{{{w_{i,t}}}}{{{W_t}}}} \right)\\
\begin{array}{*{20}{c}}
{}&{}&{}
\end{array} \le \eta E\left[ { - l\left( {{f_{I,t}},{y_t}} \right)} \right] + \frac{{{\eta ^2}{{\left( {1 - 0} \right)}^2}}}{8}\begin{array}{*{20}{c}}
{}&{}&{}&{}
\end{array}\left( {{\rm{Hoeffding's}}\begin{array}{*{20}{c}}
{}
\end{array}{\rm{inequality}}} \right)\\
\begin{array}{*{20}{c}}
{}&{}&{}
\end{array} =  - \eta E\left[ {l\left( {{f_{I,t}},{y_t}} \right)} \right] + \frac{{{\eta ^2}}}{8}\\
\begin{array}{*{20}{c}}
{}&{}&{}
\end{array} \le  - \eta l\left( {E\left[ {{f_{I,t}}} \right],{y_t}} \right) + \frac{{{\eta ^2}}}{8}\begin{array}{*{20}{c}}
{}&{}&{}&{}
\end{array}\left( {{\rm{Jensen's}}\begin{array}{*{20}{c}}
{}
\end{array}{\rm{inequality}}} \right)\\
\begin{array}{*{20}{c}}
{}&{}&{}
\end{array} =  - \eta l\left( {{{\hat p}_t},{y_t}} \right) + \frac{{{\eta ^2}}}{8}\begin{array}{*{20}{c}}
{}&{}&{}&{}
\end{array}
\end{array}\]The last equality above holds because
\[{{\hat p}_t}: = \frac{{\sum\limits_{i = 1}^N {{w_{i,t}}{f_{i,t}}} }}{{\sum\limits_{i = 1}^N {{w_{i,t}}} }} = \frac{{\sum\limits_{i = 1}^N {{w_{i,t}}{f_{i,t}}} }}{{{W_t}}} = \sum\limits_{i = 1}^N {\underbrace {\frac{{{w_{i,t}}}}{{{W_t}}}}_{ = P\left( {I = i} \right)}} {f_{i,t}} = E\left[ {{f_{I,t}}} \right]\]Therefore we have
\[\ln \left( {\frac{{{W_{T + 1}}}}{{{W_1}}}} \right) \le \sum\limits_{t = 1}^T {\left( { - \eta l\left( {{{\hat p}_t},{y_t}} \right) + \frac{{{\eta ^2}}}{8}} \right)} \ \ \ \ \ \ \ (\star)
\]Combining $(*)$ and $(\star)$, we conclude that
\[\begin{array}{l}
 - \eta \mathop {\min }\limits_{1 \le i \le N} \left( {\sum\limits_{s = 1}^T {l\left( {{f_{i,s}},{y_s}} \right)} } \right) - \ln \left( N \right) \le \ln \left( {\frac{{{W_{T + 1}}}}{{{W_1}}}} \right) \le \sum\limits_{t = 1}^T {\left( { - \eta l\left( {{{\hat p}_t},{y_t}} \right) + \frac{{{\eta ^2}}}{8}} \right)} \\
 \Rightarrow  - \eta \mathop {\min }\limits_{1 \le i \le N} \left( {\sum\limits_{s = 1}^T {l\left( {{f_{i,s}},{y_s}} \right)} } \right) - \ln \left( N \right) \le  - \eta \sum\limits_{t = 1}^T {l\left( {{{\hat p}_t},{y_t}} \right) + \frac{{{\eta ^2}}}{8}T} \\
 \Rightarrow \eta \sum\limits_{t = 1}^T {l\left( {{{\hat p}_t},{y_t}} \right)}  - \eta \mathop {\min }\limits_{1 \le i \le N} \left( {\sum\limits_{s = 1}^T {l\left( {{f_{i,s}},{y_s}} \right)} } \right) \le \frac{{{\eta ^2}}}{8}T + \ln \left( N \right)\\
 \Rightarrow \underbrace {\sum\limits_{t = 1}^T {l\left( {{{\hat p}_t},{y_t}} \right)}  - \mathop {\min }\limits_{1 \le i \le N} \left( {\sum\limits_{s = 1}^T {l\left( {{f_{i,s}},{y_s}} \right)} } \right)}_{ = {R_T}} \le \frac{\eta }{8}T + \frac{{\ln \left( N \right)}}{\eta } \ \ \ \ \square
\end{array}\]We can also solve for the optimal $\eta$ by minimizing the bound:
\[\begin{array}{l}
\frac{d}{{d\eta }}\left( {\frac{\eta }{8}T + \frac{{\ln \left( N \right)}}{\eta }} \right) = 0\\
 \Rightarrow {\eta ^*} = \sqrt {\frac{{8\ln \left( N \right)}}{T}}
\end{array}\]
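To see the theorem and the optimal $\eta^*$ in action, here is a small simulation with randomly generated (purely illustrative) expert advice and outcomes; the realized regret should stay below the bound $\ln N / \eta + \eta T / 8 = \sqrt{(T \ln N)/2}$:

```python
import numpy as np

# Simulation of the exponential strategy with eta* = sqrt(8 ln N / T).
rng = np.random.default_rng(0)
N, T = 10, 5000
eta = np.sqrt(8 * np.log(N) / T)
loss = lambda p, y: (p - y) ** 2        # convex, in [0,1] for p, y in [0,1]

advice = rng.uniform(0, 1, size=(T, N)) # hypothetical expert advice
ys = rng.uniform(0, 1, size=T)          # hypothetical true outcomes
w = np.ones(N)                          # w_{i,1} = 1
my_loss, expert_loss = 0.0, np.zeros(N)
for t in range(T):
    p_hat = np.dot(w, advice[t]) / w.sum()   # weighted average prediction
    my_loss += loss(p_hat, ys[t])
    expert_loss += loss(advice[t], ys[t])
    w *= np.exp(-eta * loss(advice[t], ys[t]))

regret = my_loss - expert_loss.min()
bound = np.log(N) / eta + eta * T / 8        # = sqrt(T ln N / 2) at eta*
print(regret <= bound)                       # True, as the theorem guarantees
```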



[Optimization] An Integral Representation of the Remainder of the First-Order Approximation of a $C^2$ Function

Let $f: \mathbb{R}^m \to \mathbb{R}$ be a $C^2$ function. Apply a first-order Taylor expansion of $f$ around $y$: \[ T_y(x) := f(y) + \nabla f(y)^\top (x - y) \] The remainder $R(x,y)$ is then defined as $$R(...