[Stochastic Analysis] Uniqueness and Existence Theorem for S.D.E. (3) - An Intermediate Result (Upper Bound) of the Picard Iteration
Continuing from the previous post, we proceed step by step with the proof of Existence:
We now want to show that the sequence of iterates $X_t^{(n)}$ produced by the Picard iteration indeed converges.
We record the Picard iteration below:
==============================
Iteration Scheme (Picard Iteration):
Given $X_t^{(0)}:=x_0$, define the iteration
\[
X_t^{(n+1)} = x_0 + \int_0^t \mu(s, X_s^{(n)})ds + \int_0^t \sigma(s,X_s^{(n)})dB_s
\]==============================
Our first step is to show that the sequence of iterates is a Cauchy sequence; convergence then follows from the completeness of the $L^2$ space. We first prove the following Lemma.
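Before turning to the proof, here is a small numerical sketch of the iteration scheme above (purely illustrative, not part of the proof): we run the Picard iteration on one discretized Brownian path for an example SDE with Lipschitz coefficients (the choices of $\mu$, $\sigma$, $x_0$, and the grid below are all assumptions for the demo), and watch the sup-distance between consecutive iterates shrink.

```python
import numpy as np

# Illustrative sketch (assumed example, not part of the proof): run
#   X^{(n+1)}_t = x0 + int_0^t mu(s, X^{(n)}_s) ds + int_0^t sigma(s, X^{(n)}_s) dB_s
# on a time grid, approximating both integrals by left-point Riemann sums.
rng = np.random.default_rng(0)

T, N = 1.0, 1000                      # time horizon and grid size (assumed)
dt = T / N
t = np.linspace(0.0, T, N + 1)
dB = rng.normal(0.0, np.sqrt(dt), N)  # Brownian increments along one path

mu = lambda s, x: -x                  # example Lipschitz drift
sigma = lambda s, x: 0.4 + 0.0 * x    # example Lipschitz diffusion
x0 = 1.0

X = np.full(N + 1, x0)                # X^{(0)}_t := x0
gaps = []                             # records sup_t |X^{(n+1)}_t - X^{(n)}_t|
for n in range(6):
    drift = np.cumsum(mu(t[:-1], X[:-1]) * dt)
    noise = np.cumsum(sigma(t[:-1], X[:-1]) * dB)
    X_next = np.concatenate(([x0], x0 + drift + noise))
    gaps.append(np.max(np.abs(X_next - X)))
    X = X_next

print([round(g, 5) for g in gaps])    # the gaps shrink rapidly
```

The rapid decay of the gaps is exactly the factorial-type contraction that the Lemma below is the first step toward proving.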
===============================
Lemma
Fix a time horizon $T > 0$ and let $0 \leq t \leq T$. If $\mu, \sigma$ satisfy the Lipschitz condition
\[
| \mu(t,x) - \mu(t,y)|^2 + |\sigma(t,x) - \sigma(t,y)|^2 \leq K |x-y|^2
\]then there exists a constant $C$ (depending only on $K$ and $T$) such that the processes $X_t^{(n)}$ defined by the Picard iteration satisfy
\[
E \left[ \displaystyle \sup_{0 \leq s \leq t} |X_s^{(n+1)} - X_s^{(n)}|^2 \right ] \leq C \int_0^t E \left[ | X_s^{(n)} - X_s^{(n-1)}|^2 \right ] ds
\]===============================
Proof:
We need to exhibit a constant $C$ for which the inequality in the Lemma holds, so we first examine its left-hand side. Subtracting consecutive iterates (the $x_0$ terms cancel) gives
\[
\begin{array}{l}
E\left[ \displaystyle\sup_{0 \le s \le t} |X_s^{(n + 1)} - X_s^{(n)}|^2 \right]\\
= E\left[ \displaystyle\sup_{0 \le s \le t} \left| \int_0^s \left[ \mu (u,X_u^{(n)}) - \mu (u,X_u^{(n - 1)}) \right] du + \int_0^s \left[ \sigma (u,X_u^{(n)}) - \sigma (u,X_u^{(n - 1)}) \right] dB_u \right|^2 \right]
\end{array}
\]To lighten the notation, we define
\[\begin{array}{l}
{D_t} := \int_0^t \left[ \mu (s,X_s^{(n)}) - \mu (s,X_s^{(n - 1)}) \right] ds\\
{M_t} := \int_0^t \left[ \sigma (s,X_s^{(n)}) - \sigma (s,X_s^{(n - 1)}) \right] dB_s
\end{array}
\]By the elementary FACT $(a+b)^2 \leq 2(a^2 + b^2)$ (which follows from $(a-b)^2 \geq 0$), we see that
\[
\begin{array}{l}
|X_s^{(n + 1)} - X_s^{(n)}|^2 = |D_s + M_s|^2 \le 2\left( D_s^2 + M_s^2 \right)\\
\Rightarrow \displaystyle\sup_{0 \le s \le t} \left| D_s + M_s \right|^2 \le 2\displaystyle\sup_{0 \le s \le t} D_s^2 + 2\displaystyle\sup_{0 \le s \le t} M_s^2
\end{array}
\]We now estimate the two terms on the right-hand side separately.
We first estimate $\displaystyle \sup_{0 \le s \le t} D_s^2$. By the Cauchy-Schwarz inequality,
\[
\begin{array}{l}
\displaystyle\sup_{0 \le s \le t} D_s^2 = \displaystyle\sup_{0 \le s \le t}\left( \int_0^s \left[ \mu (u,X_u^{(n)}) - \mu (u,X_u^{(n - 1)}) \right] du \right)^2 \\
\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \le \left( \int_0^t \left[ \mu (s,X_s^{(n)}) - \mu (s,X_s^{(n - 1)}) \right]^2 ds \right)\left( \int_0^t 1^2 \, ds \right)\\
\Rightarrow \displaystyle\sup_{0 \le s \le t} D_s^2 \le t \int_0^t \left[ \mu (s,X_s^{(n)}) - \mu (s,X_s^{(n - 1)}) \right]^2 ds
\end{array}
\]Then by the Lipschitz condition $| \mu(t,x) - \mu(t,y)|^2 + |\sigma(t,x) - \sigma(t,y)|^2 \leq K |x-y|^2$ we have
\[
\begin{array}{l}
\Rightarrow \displaystyle\sup_{0 \le s \le t} D_s^2 \le Kt\int_0^t |X_s^{(n)} - X_s^{(n - 1)}|^2 ds \\
\Rightarrow \displaystyle\sup_{0 \le s \le t} D_s^2 \le KT\int_0^t |X_s^{(n)} - X_s^{(n - 1)}|^2 ds \ \ \ \ (*)
\end{array}
\]where the last step uses $t \leq T$.
Next we estimate the second term, $\displaystyle \sup_{0 \le s \le t} M_s^2$:
Note that the $M_t$ term calls for the Itô isometry to compute $E[ \sup_{0 \le s \le t} M_s^2 ]$, but this time we run into a problem: the $\sup$ sits inside the expectation, so we must deal with that first.
Observe that since the integrand of $M_t$, namely $\sigma (s,X_s^{(n)}) - \sigma (s,X_s^{(n - 1)})$, lies in $\mathcal{H}^2$, the process $M_t$ is a martingale, so we may invoke Doob's maximal $L^p$ inequality (here with $p=2$):
----
\[
\left\| \displaystyle\sup_{0 \le s \le t} |M_s| \right\|_p \le \frac{p}{p - 1}\left\| M_t \right\|_p
\Leftrightarrow E\left[ \left( \displaystyle\sup_{0 \le s \le t} \left| M_s \right| \right)^p \right]^{1/p} \leq \frac{p}{p - 1}E\left[ \left| M_t \right|^p \right]^{1/p}
\]----
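As a quick aside, Doob's $L^2$ maximal inequality can be sanity-checked by Monte Carlo on the simplest martingale, a symmetric random walk (an illustration only; the path count and length are arbitrary choices):

```python
import numpy as np

# Monte Carlo check (illustrative) of Doob's L^2 maximal inequality
#   E[ sup_{s<=t} M_s^2 ] <= 4 E[ M_t^2 ]
# for the symmetric random walk, which is a martingale.
rng = np.random.default_rng(1)

paths, steps = 20000, 200
increments = rng.choice([-1.0, 1.0], size=(paths, steps))
M = np.cumsum(increments, axis=1)      # M_s for s = 1, ..., steps

lhs = np.mean(np.max(M**2, axis=1))    # E[ sup_s M_s^2 ]
rhs = 4.0 * np.mean(M[:, -1]**2)       # 4 E[ M_t^2 ]; here E[M_t^2] = steps
print(f"E[sup M^2] = {lhs:.1f} <= 4 E[M_t^2] = {rhs:.1f}")
```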
Applying the above Doob maximal inequality with $p=2$ yields the estimate
\[\begin{array}{l}
E\left[ \displaystyle\sup_{0 \le s \le t} M_s^2 \right] \le \left( \frac{2}{2 - 1} \right)^2 E\left[ \left| M_t \right|^2 \right]\\
\Rightarrow E\left[ \displaystyle\sup_{0 \le s \le t} M_s^2 \right] \le 4E\left[ \left| M_t \right|^2 \right]
\end{array}
\]We have now successfully removed the $\sup$ from inside the expectation, so we may apply the Itô isometry:
\[ \Rightarrow E\left[ {\mathop {\sup }\limits_{0 \le s \le t} M_s^2} \right] \le 4E\left[ {{{\left| {{M_t}} \right|}^2}} \right] = 4E\left[ {\int_0^t {{{\left[ {\sigma (s,X_s^{(n)}) - \sigma (s,X_s^{(n - 1)})} \right]}^2}} ds} \right]
\]Again by the Lipschitz condition $| \mu(t,x) - \mu(t,y)|^2 + |\sigma(t,x) - \sigma(t,y)|^2 \leq K |x-y|^2$, we obtain the further estimate
\[
\Rightarrow E\left[ {\mathop {\sup }\limits_{0 \le s \le t} M_s^2} \right] \le 4KE\left[ {\int_0^t {|X_s^{(n)} - X_s^{(n - 1)}{|^2}} ds} \right] \ \ \ \ (**)
\]Collecting what we have so far, combining $(*)$ and $(**)$ gives
\[
\begin{array}{l}
E\left[ \displaystyle\sup_{0 \le s \le t} |X_s^{(n + 1)} - X_s^{(n)}|^2 \right] \le E\left[ 2\displaystyle\sup_{0 \le s \le t} D_s^2 + 2\displaystyle\sup_{0 \le s \le t} M_s^2 \right]\\
\Rightarrow E\left[ \displaystyle\sup_{0 \le s \le t} |X_s^{(n + 1)} - X_s^{(n)}|^2 \right] \le 2E\left[ \displaystyle\sup_{0 \le s \le t} D_s^2 \right] + 2E\left[ \displaystyle\sup_{0 \le s \le t} M_s^2 \right]\\
\Rightarrow E\left[ \displaystyle\sup_{0 \le s \le t} |X_s^{(n + 1)} - X_s^{(n)}|^2 \right] \le 2TK\, E\left[ \int_0^t |X_s^{(n)} - X_s^{(n - 1)}|^2 ds \right]\\
\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ + 2 \cdot 4K\, E\left[ \int_0^t |X_s^{(n)} - X_s^{(n - 1)}|^2 ds \right]\\
\Rightarrow E\left[ \displaystyle\sup_{0 \le s \le t} |X_s^{(n + 1)} - X_s^{(n)}|^2 \right] \le K\left( 2T + 8 \right)E\left[ \int_0^t |X_s^{(n)} - X_s^{(n - 1)}|^2 ds \right]
\end{array}
\] Setting $C = K(2T + 8)$, we obtain
\[ E\left[ \displaystyle\sup_{0 \le s \le t} |X_s^{(n + 1)} - X_s^{(n)}|^2 \right] \le C \cdot E\left[ \int_0^t |X_s^{(n)} - X_s^{(n - 1)}|^2 ds \right] \]Finally, since the integrand is nonnegative, Tonelli's theorem lets us interchange the expectation and the time integral, $E\left[ \int_0^t |X_s^{(n)} - X_s^{(n - 1)}|^2 ds \right] = \int_0^t E\left[ |X_s^{(n)} - X_s^{(n - 1)}|^2 \right] ds$, which is exactly the inequality claimed in the Lemma. $\square$
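As a closing sanity check, the Itô isometry invoked in the proof can also be verified numerically. The sketch below is illustrative only; the integrand $H_s = B_s$ and the discretization are assumed choices. It compares both sides of $E[(\int_0^t H_s\, dB_s)^2] = E[\int_0^t H_s^2\, ds]$, whose common value for $H_s = B_s$ is $t^2/2$.

```python
import numpy as np

# Monte Carlo check (illustrative) of the Ito isometry
#   E[ ( int_0^t H_s dB_s )^2 ] = E[ int_0^t H_s^2 ds ]
# with the example integrand H_s = B_s; both sides equal t^2 / 2.
rng = np.random.default_rng(2)

paths, steps, T = 40000, 500, 1.0
dt = T / steps
dB = rng.normal(0.0, np.sqrt(dt), size=(paths, steps))
# left-endpoint values B_{s_k} (with B_0 = 0), as the Ito integral requires
B_left = np.hstack([np.zeros((paths, 1)), np.cumsum(dB, axis=1)[:, :-1]])

ito_integral = np.sum(B_left * dB, axis=1)      # int_0^T B_s dB_s
lhs = np.mean(ito_integral**2)                  # E[ (int B dB)^2 ]
rhs = np.mean(np.sum(B_left**2, axis=1) * dt)   # E[ int B_s^2 ds ]
print(f"lhs = {lhs:.3f}, rhs = {rhs:.3f}, exact = {T**2 / 2}")
```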