4.8. Optional stopping theorem

$\newcommand{\argmin}{\mathop{\mathrm{argmin}}\limits}$ $\newcommand{\argmax}{\mathop{\mathrm{argmax}}\limits}$

In this section, we generalize the bounded version of optional stopping. After that, as an application, we cover results on the asymmetric random walk.


Optional stopping theorem

Our first theorem is an extension of theorem 4.2.9.

Let $(X_n)$ be a uniformly integrable submartingale and $N$ be a stopping time. Then $(X_{n\wedge N})$ is a uniformly integrable submartingale.

Theorem 4.2.9 shows that $(X_{n\wedge N})$ is a submartingale. Since $(X_n)$ is uniformly integrable, $\sup_n E|X_n| < \infty,$ so $X_n$ converges almost surely to some $X_\infty$ by the submartingale convergence theorem, and Vitali's lemma upgrades this to $L^1$ convergence. Since $x \mapsto x^+$ is convex and increasing, $X_n^+, X_{n\wedge N}^+$ are submartingales. Let $\tau=n$ and $\sigma=n\wedge N;$ then $\tau,\sigma$ are bounded stopping times. By Doob's inequality, $EX_{n\wedge N}^+ \le EX_n^+$ and $$\sup_n EX_{n\wedge N}^+ \le \sup_n EX_n^+ \le \sup_n E|X_n| < \infty.$$ By submartingale convergence, $X_{n\wedge N} \to X_N$ a.s. (with $X_N := X_\infty$ on $\{N=\infty\}$) and $E|X_N| < \infty.$ $$\begin{aligned} E|X_{n\wedge N}| \mathbf{1}_{|X_{n\wedge N}| \ge a} &= E|X_{n\wedge N}| \mathbf{1}_{|X_{n\wedge N}| \ge a, N \le n} + E|X_{n\wedge N}| \mathbf{1}_{|X_{n\wedge N}| \ge a, N>n} \\ &\le E|X_N| \mathbf{1}_{|X_N| \ge a} + E|X_n| \mathbf{1}_{|X_n| \ge a}. \end{aligned}$$ Both terms on the right-hand side go to 0 as $a\to\infty,$ uniformly in $n$: the first because $E|X_N| < \infty,$ the second because $(X_n)$ is uniformly integrable. Hence $(X_{n\wedge N})$ is uniformly integrable.

The next theorem is the unbounded version of Doob's inequality.

Let $(X_n)$ be a uniformly integrable submartingale, $N$ be a stopping time. Then $$EX_0 \le EX_N \le EX_\infty$$ where $X_\infty = \lim_n X_n$ a.s.

By the previous theorem, $(X_{n\wedge N})$ is a uniformly integrable submartingale. By Doob's inequality, $$EX_0 \le EX_{n\wedge N} \le EX_n.$$ By Vitali's lemma, $EX_n \to EX_\infty.$ Moreover, $$\begin{aligned} \lim_n X_{n\wedge N} = \begin{cases} X_N &,~ N<\infty \\ X_\infty = X_N &,~ N=\infty, \end{cases} \end{aligned}$$ so $X_{n\wedge N} \to X_N$ a.s. with $E|X_N| < \infty,$ and Vitali's lemma gives $EX_{n\wedge N} \to EX_N.$ Letting $n\to\infty$ in the displayed inequality yields the desired result.
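To see the inequality concretely, here is a minimal Monte Carlo sketch; the example (a simple symmetric random walk $W$ stopped at its first visit $T$ to $\pm 5,$ so that $X_n = |W_{n\wedge T}|$ is a bounded, hence uniformly integrable, submartingale with $X_\infty = 5$ a.s., and $N = T \wedge 10$) and all parameter values are illustrative choices of mine, not from the text.

```python
import random

# Monte Carlo sketch of E X_0 <= E X_N <= E X_infty for a toy uniformly
# integrable submartingale: X_n = |W_{n ^ T}| where W is a simple symmetric
# random walk and T is its first visit to +-5, and N = min(T, 10).
# All choices here are illustrative, not from the text.
random.seed(0)
trials = 100_000
sum_XN = sum_Xinf = 0.0
for _ in range(trials):
    w, n = 0, 0
    x_N = None
    while abs(w) < 5:                      # run until T; the walk hits +-5 a.s.
        if n == 10:                        # the cap is reached before T, so N = 10
            x_N = abs(w)
        w += random.choice((-1, 1))
        n += 1
    if x_N is None:                        # T <= 10, so N = T and X_N = |W_T| = 5
        x_N = abs(w)
    sum_XN += x_N
    sum_Xinf += abs(w)                     # X_infty = |W_T| = 5
print(f"E X_0 = 0 <= E X_N ~ {sum_XN / trials:.3f} <= E X_inf ~ {sum_Xinf / trials:.3f}")
```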

Finally we state and prove the main theorem.

Let $L \le M$ be stopping times and $(Y_{n\wedge M})$ be a uniformly integrable submartingale. Then $EY_L \le EY_M$ and $Y_L \le E(Y_M | \mathcal{F}_L)$ a.s.

Let $X_n = Y_{n\wedge M}.$ Applying the previous theorem to $(X_n)$ with the stopping time $L$ gives $EX_0 \le EX_L \le EX_\infty,$ and since $X_L = Y_{L\wedge M} = Y_L$ (as $L \le M$) and $X_\infty = Y_M,$ this says $EY_L \le EY_M.$ The conditional statement $Y_L \le E(Y_M | \mathcal{F}_L)$ is proved in the same way as in the first proof of the bounded optional stopping theorem.
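A quick numerical illustration of $EY_L \le EY_M$ with the same toy walk (again my own example, not from the text): take $Y_n = |W_n|,$ let $M$ be the first time $|W| = 5,$ and let $L = \min(\text{first time } |W|=2,\, 8).$ Then $L \le M,$ and $(Y_{n\wedge M})$ is bounded by $5,$ hence uniformly integrable.

```python
import random

# Toy check of E Y_L <= E Y_M: Y_n = |W_n| for a simple symmetric walk W,
# M = first time |W| = 5, L = min(first time |W| = 2, 8).  The walk must
# pass level 2 strictly before level 5, so L < M and Y_{n ^ M} <= 5.
# All choices here are illustrative, not from the text.
random.seed(0)
trials = 100_000
sum_YL = sum_YM = 0.0
for _ in range(trials):
    w, n = 0, 0
    y_L = None
    while abs(w) < 5:                      # iterate n = 0, 1, ..., M - 1
        if y_L is None and (abs(w) == 2 or n == 8):
            y_L = abs(w)                   # first index with |W| = 2 or n = 8 is L
        w += random.choice((-1, 1))
        n += 1
    if y_L is None:                        # defensive; not reached since L < M
        y_L = abs(w)
    sum_YL += y_L
    sum_YM += abs(w)                       # Y_M = |W_M| = 5
print(f"E Y_L ~ {sum_YL / trials:.3f} <= E Y_M ~ {sum_YM / trials:.3f}")
```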

Note that we do not need uniform integrability of $(Y_n)$ itself. The next theorem guarantees uniform integrability of the stopped process when the submartingale has uniformly bounded conditional increments.

Let $X_n$ be a submartingale with $E\left(\left|X_{n+1}-X_n\right| | \mathcal{F}_n\right) \le B$ a.s. and $N$ be a stopping time with $EN < \infty.$ Then $X_{n\wedge N}$ is uniformly integrable and $EX_0 \le EX_N.$

$$X_{n\wedge N} = X_0 + \sum\limits_{m=1}^n (X_m - X_{m-1}) \mathbf{1}_{m \le N}, \qquad |X_{n\wedge N}| \le |X_0| + \sum\limits_{m=1}^\infty |X_m - X_{m-1}| \mathbf{1}_{m \le N} =: Z.$$ Since $\{m \le N\} = \{N \le m-1\}^c \in \mathcal{F}_{m-1},$ $$\begin{aligned} EZ &= E|X_0| + \sum_m E\left(|X_m - X_{m-1}| \mathbf{1}_{m \le N}\right) \\ &= E|X_0| + \sum_m E\left( \mathbf{1}_{m\le N}\, E(\left|X_m - X_{m-1}\right| \mid \mathcal{F}_{m-1}) \right) \\ &\le E|X_0| + B \cdot \sum_m P(m \le N) \\ &= E|X_0| + B\cdot EN < \infty. \end{aligned}$$ Thus $Z$ is integrable, and since $|X_{n\wedge N}| \le Z$ for all $n,$ the family $(X_{n\wedge N})$ is dominated by a single integrable random variable and hence uniformly integrable. Since $EN < \infty$ implies $N < \infty$ a.s., $EX_0 \le EX_N$ follows from the preceding theorem with $L=0$ and $M=N.$
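A quick sanity check with illustrative values of my own (not from the text): a random walk with i.i.d. $\pm1$ steps and $P(\text{step} = +1) = 0.6$ is a submartingale whose conditional increments are bounded by $B = 1,$ and the exit time $N$ of the interval $(-3,3)$ satisfies $EN < \infty$ (from any interior point the walk exits within $6$ steps with probability at least $0.6^6 > 0$), so the theorem gives $ES_0 = 0 \le ES_N.$

```python
import random

# Sanity check: S_n with P(step = +1) = 0.6 is a submartingale with
# E(|S_{n+1} - S_n| | F_n) = 1 <= B = 1, and N = exit time of (-3, 3) has
# E N < infinity, so the theorem gives E S_0 = 0 <= E S_N.
# The values p = 0.6 and the interval (-3, 3) are illustrative choices.
random.seed(0)
p, trials = 0.6, 100_000
sum_SN = sum_N = 0.0
for _ in range(trials):
    s, n = 0, 0
    while -3 < s < 3:                      # run until N, the exit time of (-3, 3)
        s += 1 if random.random() < p else -1
        n += 1
    sum_SN += s
    sum_N += n
print(f"E S_N ~ {sum_SN / trials:.3f}  (theorem: 0 = E S_0 <= E S_N)")
print(f"E N   ~ {sum_N / trials:.3f}  (finite, consistent with E N < infinity)")
```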


Asymmetric random walk

As an application of optional stopping, we look into properties of the asymmetric random walk. Define the asymmetric random walk by $S_n = \xi_1 + \cdots + \xi_n,$ $S_0 = 0,$ where the $\xi_i$'s are i.i.d. with $P(\xi_1=1)=p,$ $P(\xi_1=-1)=q,$ $p+q=1.$ Write $\text{Var}(\xi_1) = \sigma^2 < \infty,$ let $\mathcal{F}_n = \sigma(\xi_1,\cdots,\xi_n)$ for $n\ge 1,$ and let $\mathcal{F}_0$ be the trivial $\sigma$-field. Let $\varphi(x) = (\frac{1-p}{p})^x.$

(a) $0 < p < 1 \implies \varphi(S_n)$ is a martingale.
(b) $T_x := \inf\{n: S_n=x\},$ $x\in\mathbb{Z},$ is a stopping time, and for $0<p<1,$ $p\ne\frac12,$ and $a<0<b,$ $P(T_a < T_b) = \frac{\varphi(b)-\varphi(0)}{\varphi(b)-\varphi(a)}.$
(c) $1/2<p<1$ and $a<0<b \implies$ $T_b < \infty$ a.s. and $P(T_a < \infty) < 1.$
(d) $1/2<p<1 \implies$ $ET_b = \frac{b}{2p-1},$ $b>0.$
(a) Note first that $\varphi(S_n)$ is integrable since $|S_n| \le n.$ Since $\varphi(S_{n+1}) = \varphi(S_n)\left(\frac{1-p}{p}\right)^{\xi_{n+1}}$ and $\xi_{n+1}$ is independent of $\mathcal{F}_n,$ $$E(\varphi(S_{n+1}) \mid \mathcal{F}_n) = \varphi(S_n)\, E\left[\left(\tfrac{1-p}{p}\right)^{\xi_{n+1}}\right] = \varphi(S_n)\left(p\cdot\tfrac{1-p}{p} + q\cdot\tfrac{p}{1-p}\right) = \varphi(S_n)(q+p) = \varphi(S_n).$$
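As a quick simulation check of (a) (the value $p = 0.6$ and the sample sizes below are illustrative choices of mine), the sample mean of $\varphi(S_n)$ should stay close to $\varphi(S_0) = 1$ for every $n$:

```python
import random

# Sanity check of (a): phi(S_n) = ((1-p)/p) ** S_n should have mean 1 for
# every n, because phi(S_n) is a martingale with phi(S_0) = 1.
# p = 0.6 and the sample sizes are illustrative choices, not from the text.
random.seed(0)
p, q, trials = 0.6, 0.4, 100_000
for n in (1, 5, 20):
    acc = 0.0
    for _ in range(trials):
        s = sum(1 if random.random() < p else -1 for _ in range(n))
        acc += (q / p) ** s
    print(f"n = {n:2d}:  E[phi(S_n)] ~ {acc / trials:.3f}  (exact value 1)")
```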
(b) Since $\{T_x \le n\} = \bigcup_{m\le n}\{S_m = x\} \in \mathcal{F}_n,$ $T_x$ is a stopping time; let $N = T_a \wedge T_b,$ which is also a stopping time. By the law of the iterated logarithm, $$\begin{aligned} \limsup_n \frac{S_n - n(p-q)}{\sigma \sqrt{2n\log\log n}} &= 1 \text{ a.s.} \\ \liminf_n \frac{S_n - n(p-q)}{\sigma \sqrt{2n\log\log n}} &= -1 \text{ a.s.} \end{aligned}$$ so $S_n \approx n(p-q) \pm \sigma\sqrt{2n\log\log n}.$ If $p>q,$ then $\lim_n S_n = \infty$ a.s. and $T_b < \infty$ a.s.; if $p<q,$ then $\lim_n S_n = -\infty$ a.s. and $T_a < \infty$ a.s. In either case $N < \infty$ a.s. If $N \ge n,$ then $a \le S_{n\wedge N} = S_n \le b;$ if $N < n,$ then $S_{n\wedge N} = S_N \in \{a, b\}.$ Hence $\varphi(S_{n\wedge N})$ is a bounded, thus uniformly integrable and closable, martingale. Note that $S_N = a\mathbf{1}_{T_a < T_b} + b\mathbf{1}_{T_a > T_b},$ and that $E\varphi(S_N) = 1$ since $1 = E\varphi(S_0) = E\varphi(S_{n\wedge N}) \to E\varphi(S_N).$ $$\begin{aligned} 1 = E\varphi(S_N) &= \varphi(a) P(T_a < T_b) + \varphi(b) P(T_a > T_b) \\ &= (\varphi(a) - \varphi(b)) P(T_a < T_b) + \varphi(b). \end{aligned}$$ Rearranging and using $\varphi(0)=1$ gives the result.
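A Monte Carlo check of the formula in (b), with illustrative values $p = 0.6,$ $a = -4,$ $b = 3$ of my own choosing:

```python
import random

# Monte Carlo check of (b): compare the empirical frequency of {T_a < T_b}
# with (phi(b) - phi(0)) / (phi(b) - phi(a)).
# p = 0.6, a = -4, b = 3 are illustrative choices, not from the text.
random.seed(0)
p, a, b, trials = 0.6, -4, 3, 100_000

def phi(x):
    return ((1 - p) / p) ** x

hits_a = 0
for _ in range(trials):
    s = 0
    while a < s < b:                       # stop at N = T_a ^ T_b
        s += 1 if random.random() < p else -1
    hits_a += (s == a)
formula = (phi(b) - phi(0)) / (phi(b) - phi(a))
print(f"P(T_a < T_b): simulated ~ {hits_a / trials:.4f}, formula = {formula:.4f}")
```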
(c) Observe that $T_\alpha \le T_\beta$ for all $\beta<\alpha<0$ (the walk must pass $\alpha$ before reaching $\beta$) and that $T_a \ge |a|,$ so $T_a \uparrow \infty$ as $a\to-\infty$ and $\{T_b < T_a\} \uparrow \{T_b < \infty\}.$ Therefore $$\begin{aligned} P(T_b < \infty) &= \lim_{a\to-\infty} P(T_b<T_a) \\ &= \lim_{a\to-\infty} \left(1 - \frac{\varphi(b) - 1}{\varphi(b) - \varphi(a)}\right) \\ &= \lim_{a\to-\infty} \frac{1-\varphi(a)}{\varphi(b) - \varphi(a)} = 1, \end{aligned}$$ since $\varphi(a) \to \infty$ as $a\to-\infty$ when $p>1/2.$ Similarly, letting $b\to\infty$ (so that $\varphi(b)\to 0$) gives $P(T_a < \infty) = 1/\varphi(a) < 1.$
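A Monte Carlo check of (c) (again with illustrative values of my own): since $\{T_a < T_B\} \uparrow \{T_a < \infty\}$ as $B \to \infty$ and $\varphi(B) \to 0,$ the probability $P(T_a < \infty)$ is well approximated by $P(T_a < T_B)$ for a large upper barrier $B,$ with an error on the order of $\varphi(B).$

```python
import random

# Monte Carlo check of (c): approximate P(T_a < infinity) by P(T_a < T_B)
# for a large upper barrier B (the truncation error is of order phi(B)).
# p = 0.6, a = -3, B = 30 are illustrative choices, not from the text.
random.seed(0)
p, a, B, trials = 0.6, -3, 30, 100_000
hits_a = 0
for _ in range(trials):
    s = 0
    while a < s < B:
        s += 1 if random.random() < p else -1
    hits_a += (s == a)
exact = (p / (1 - p)) ** a                 # 1 / phi(a)
print(f"P(T_a < inf): simulated ~ {hits_a / trials:.4f}, 1/phi(a) = {exact:.4f}")
```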
(d) Observe that for $a<0,$ $(\inf_n S_n \le a) = (T_a < \infty).$ Since $$P(\inf_n S_n \le a) = P(T_a < \infty) = \begin{cases} \left(\frac{1-p}{p}\right)^{-a} &,~ a<0 \\ 1 &,~ a\ge 0, \end{cases}$$ we get $$\begin{aligned} E|\inf_n S_n| &= \sum_{a=-\infty}^{0} |a| P(\inf_n S_n = a) \\ &= \sum_{a=-\infty}^{0} |a| \left( \left(\frac{1-p}{p}\right)^{-a} - \left(\frac{1-p}{p}\right)^{-(a-1)} \right) \\ &= \sum_{a=-\infty}^{0} |a| \left(\frac{1-p}{p}\right)^{-a} \left(1 - \frac{1-p}{p}\right) < \infty. \end{aligned}$$ Thus $\inf_n S_n$ is integrable. Let $X_n = S_n - n(p-q);$ then $(X_n)$ is a martingale, and so is the stopped process $(X_{n\wedge T_b}).$ $$\begin{aligned} ES_{n\wedge T_b} &= EX_{n\wedge T_b} + (p-q)E(T_b\wedge n) \\ &= \cancel{EX_0} + (p-q)E(T_b\wedge n). \end{aligned}$$ Note that $\inf_n S_n \le S_{n\wedge T_b} \le b$ and $|S_{n\wedge T_b}| \le |\inf_n S_n| + b$ for all $n,$ and $S_{n\wedge T_b} \to S_{T_b} = b$ a.s. since $T_b < \infty$ a.s. by (c). By DCT, $ES_{n\wedge T_b} \to ES_{T_b} = b;$ by MCT, $E(T_b \wedge n) \uparrow ET_b.$ Letting $n\to\infty$ gives $b = (p-q)\,ET_b,$ that is, $ET_b = \frac{b}{2p-1}.$
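Finally, a Monte Carlo check of (d) with illustrative values $p = 0.6$ and $b = 5$ of my own choosing, for which the formula gives $ET_b = 5/0.2 = 25$:

```python
import random

# Monte Carlo check of (d): the sample mean of T_b should be close to
# b / (2p - 1).  p = 0.6 and b = 5 are illustrative choices, not from the text.
random.seed(0)
p, b, trials = 0.6, 5, 100_000
total_steps = 0
for _ in range(trials):
    s = n = 0
    while s < b:                           # T_b < infinity a.s. since p > 1/2
        s += 1 if random.random() < p else -1
        n += 1
    total_steps += n
print(f"E T_b: simulated ~ {total_steps / trials:.2f}, "
      f"formula b/(2p-1) = {b / (2 * p - 1):.2f}")
```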


Acknowledgement

This post series is based on the textbook Probability: Theory and Examples, 5th edition (Durrett, 2019) and the lecture at Seoul National University, Republic of Korea (instructor: Prof. Sangyeol Lee).