2.1. Independence

$\newcommand{\argmin}{\mathop{\mathrm{argmin}}\limits}$ $\newcommand{\argmax}{\mathop{\mathrm{argmax}}\limits}$

The first chapter covered the essentials of measure theory. We especially focused on important results for finite measures, or at least $\sigma$-finite measures. We defined a probability space as a measure space and a random variable as a measurable function on it.

The following chapters cover two fundamental theorems on the convergence of random variables: the strong law of large numbers (Chapter 2) and the central limit theorem (Chapter 3). We start by assuming a nice but, in many real cases, inadequate condition - mutual independence of random variables - and then modify the results to obtain stronger ones. As a starting point, this subsection covers the notion of independence. The rest of the chapter is about the law of large numbers.


Independence of random variables

As I pointed out earlier, it is natural to define a property of a function as a property of related sets in its domain. We do the same here: we define independence of $\sigma$-fields first.

Let $(\Omega, \mathcal{F}, P)$ be a probability space, let $\mathcal{F}_1, \cdots, \mathcal{F}_n \subset \mathcal{F}$ be sub-$\sigma$-fields, and let $E_i \in \mathcal{F}_i$, $i=1,\cdots,n$, be events.
(i) $E_1,\cdots,E_n$ are independent if $P(\bigcap\limits_{i=1}^n E_i) = \prod\limits_{i=1}^n P(E_i).$
(ii) $\mathcal{F}_1,\cdots,\mathcal{F}_n$ are independent if $P(\bigcap\limits_{i=1}^n E_i) = \prod\limits_{i=1}^n P(E_i)$ for all $E_i \in \mathcal{F}_i.$

To be fully specific, we say they are $P$-mutually independent if the above condition is met; the name records the underlying probability measure ($P$) and emphasizes that the independence is mutual. If we simply write “independent”, mutual independence is meant, and we drop $P$ whenever the measure is clear from the context.
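As a simple illustration, toss two fair coins, so that $\Omega = \{HH, HT, TH, TT\}$ with the uniform measure. The events $E_1 = \{\text{first toss is } H\}$ and $E_2 = \{\text{second toss is } H\}$ are independent since $$ P(E_1 \cap E_2) = P(\{HH\}) = \frac{1}{4} = \frac{1}{2} \cdot \frac{1}{2} = P(E_1)P(E_2). $$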

Independence of random variables is defined as independence of the generated $\sigma$-fields.

Let $X_1,\cdots,X_n$ be random variables on $(\Omega, \mathcal{F}, P)$. $X_1,\cdots,X_n$ are independent if $\sigma(X_1), \cdots, \sigma(X_n)$ are independent.

We sometimes write $X \perp Y$ for independence between $X$ and $Y$.

It is not necessary to check the product formula for all possible events in order to verify independence of $\sigma$-fields. We can use the $\pi$-$\lambda$ theorem.

Let $\mathcal{A}, \mathcal{B} \subset \mathcal{F}$ be $\pi$-systems. If $P(A\cap B) = P(A)P(B)$ for all $A \in \mathcal{A}$ and $B \in \mathcal{B}$, then $\sigma(\mathcal{A})$ and $\sigma(\mathcal{B})$ are independent.

  (1) For a fixed $A \in \mathcal{A}$, let $\mathcal{L}_A = \{ B \in \mathcal{F}: P(A \cap B) = P(A)P(B) \}$. Then $\mathcal{L}_A$ is a $\lambda$-system containing $\mathcal{B}$, so by the $\pi$-$\lambda$ theorem, $\sigma(\mathcal{B}) \subset \mathcal{L}_A$.
  (2) Now for a fixed $B \in \sigma(\mathcal{B})$, let $\mathcal{L}_B = \{ A \in \mathcal{F}: P(A \cap B) = P(A)P(B) \}$. Similarly to the above, $\mathcal{L}_B$ is a $\lambda$-system that contains $\mathcal{A}$, and $\sigma(\mathcal{A}) \subset \mathcal{L}_B$ follows.

It is clear that $\mathcal{P} = \{ X^{-1}((-\infty, x]):~ x\in (-\infty, \infty]\}$ is a $\pi$-system and $\sigma(\mathcal{P}) = \sigma(X)$. Since the lemma extends to finitely many $\pi$-systems in the same way, the following corollary directly follows.

$X_1,\cdots,X_n$ are independent if and only if $P(X_1 \le x_1, \cdots, X_n \le x_n) = \prod\limits_{i=1}^n P(X_i \le x_i)$ for all $x_i \in (-\infty, \infty].$
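As a quick numerical sanity check of this criterion, the sketch below (a minimal illustration assuming NumPy; the distributions and thresholds are arbitrary choices) compares the empirical joint CDF of two independently generated samples with the product of their empirical marginal CDFs.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Two independently generated samples (standard normal and exponential).
x = rng.standard_normal(n)
y = rng.exponential(scale=1.0, size=n)

# The corollary: P(X <= s, Y <= t) should equal P(X <= s) * P(Y <= t)
# for every pair of thresholds (s, t); here we check a few of them.
for s, t in [(0.0, 1.0), (-1.0, 0.5), (1.5, 2.0)]:
    joint = np.mean((x <= s) & (y <= t))
    product = np.mean(x <= s) * np.mean(y <= t)
    print(f"s={s:+.1f}, t={t:+.1f}: joint={joint:.4f}, product={product:.4f}")
```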


Existence of a sequence of independent random variables

In the following chapters, we will need a sequence of independent random variables to state and prove limit theorems. It is important to verify that such a sequence exists.

For a finite number $n\in\mathbb{N}$, we can construct $n$ independent random variables using a product space. Given distribution functions $F_i$, $i=1,\cdots,n$, let $(\Omega, \mathcal{F}, P) = (\mathbb{R}^n, \mathcal{R}^n, P)$ where $P((a_1,b_1]\times\cdots\times(a_n,b_n]) = \prod\limits_{i=1}^n (F_i(b_i) - F_i(a_i))$, and let $X_i(\omega_1,\cdots,\omega_n)=\omega_i$ be the coordinate projections. Then $P(X_i \le x) = F_i(x)$ and the $X_i$'s are independent.
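A minimal numerical sketch of this construction (assuming NumPy; the particular marginals $F_1, F_2$ and the evaluation points are arbitrary choices for illustration): each coordinate of $\omega$ is drawn from its own $F_i$, independently of the others, and the coordinate projection $X_i(\omega) = \omega_i$ then has marginal distribution $F_i$.

```python
import numpy as np
from math import erf, exp, sqrt

rng = np.random.default_rng(1)
n_samples = 100_000

# Closed-form distribution functions (illustrative choices):
# F_1 is the standard normal CDF, F_2 is the Exponential(mean 2) CDF.
F = [
    lambda x: 0.5 * (1.0 + erf(x / sqrt(2.0))),
    lambda x: 1.0 - exp(-x / 2.0) if x > 0 else 0.0,
]

# Draw omega = (omega_1, omega_2) from the product measure: each coordinate
# is sampled from its own marginal, independently of the other coordinate.
omega = np.column_stack([
    rng.standard_normal(n_samples),
    rng.exponential(scale=2.0, size=n_samples),
])

# X_i(omega) = omega_i is the i-th coordinate projection; its law should be F_i.
for i, x in [(0, 0.5), (1, 1.0)]:
    empirical = np.mean(omega[:, i] <= x)
    print(f"P(X_{i+1} <= {x}) ~ {empirical:.4f}, F_{i+1}({x}) = {F[i](x):.4f}")
```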

Now we need an infinite number of independent random variables. Consider $\mathbb{R}^\infty := \{(x_1,x_2,\cdots):~ x_i \in \mathbb{R}\}$, the infinite-dimensional product space of copies of $\mathbb{R}$, and the corresponding product $\sigma$-field $\mathcal{R}^\infty$.1 Kolmogorov’s extension theorem states that a consistent family of finite-dimensional probability measures determines a unique probability measure on this space.

Given probability measures $P_n$ on $(\mathbb{R}^n, \mathcal{R}^n)$ that satisfy $$ P_{n+1}((a_1,b_1]\times\cdots\times(a_n,b_n]\times\mathbb{R}) = P_n((a_1,b_1]\times\cdots\times(a_n,b_n]), $$ there exists a unique probability measure $P$ on $(\mathbb{R}^\infty, \mathcal{R}^\infty)$ such that $$ P((a_1,b_1]\times\cdots\times(a_n,b_n]\times\mathbb{R}\times\mathbb{R}\times\cdots) = P_n((a_1,b_1]\times\cdots\times(a_n,b_n]). $$
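In particular, taking each $P_n$ to be the product measure from the finite construction above, the consistency condition is immediate: $$ P_{n+1}((a_1,b_1]\times\cdots\times(a_n,b_n]\times\mathbb{R}) = \left(\prod\limits_{i=1}^n (F_i(b_i)-F_i(a_i))\right)\cdot 1 = P_n((a_1,b_1]\times\cdots\times(a_n,b_n]). $$ Hence the extension theorem gives a measure $P$ on $(\mathbb{R}^\infty, \mathcal{R}^\infty)$ under which the coordinate projections $X_i(\omega) = \omega_i$, $i \in \mathbb{N}$, form an infinite sequence of independent random variables with $P(X_i \le x) = F_i(x)$.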

Furthermore, if a measurable space $(S, \mathcal{S})$ is nice, that is, there is a one-to-one map $\varphi: S \to \mathbb{R}$ such that $\varphi$ and $\varphi^{-1}$ are both measurable, then we can also construct a sequence of random elements $X_n: \Omega \to S$, $n \in \mathbb{N}$, in the same manner.



Acknowledgement

This post series is based on the textbook Probability: Theory and Examples, 5th edition (Durrett, 2019) and the lecture at Seoul National University, Republic of Korea (instructor: Prof. Johan Lim).

  1. I will cover the details later when reviewing Convergence of Probability Measures, 2nd edition (Billingsley, 1999).