
Random Variables

Usually, we are interested not only in the outcomes of a random experiment or combinations thereof, but also in their implications, which can often be quantified by a value. This brings us to the concept of a random variable.

Definition

A random variable is a measurable function $f:X \to E$ from a set of possible outcomes $X$ to a measurable space $E$, i.e., it assigns values (from $E$) to the outcomes in $X$ of a random experiment. It can be either discrete (countable image set, e.g. $E = \mathbb{Z}$) or continuous (uncountable image set, e.g. $E = \mathbb{R}$).
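
As a minimal sketch of this definition (the encoding of ‘head’ as 1 and ‘tail’ as 0 is an arbitrary choice made only for this illustration), a discrete random variable is simply a function that assigns a number to each outcome:

```python
import random

# Set of possible outcomes X of a coin flip
outcomes = ["head", "tail"]

def f(outcome):
    """Random variable f: X -> E with E = {0, 1}: assigns 1 to 'head' and 0 to 'tail'."""
    return 1 if outcome == "head" else 0

x = random.choice(outcomes)  # a random outcome from X
print(x, "->", f(x))         # the value the random variable assigns to it
```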

Intuition

Unlike an algebraic variable, whose value is determined by an equation, a random variable takes values in an entire range, and its value cannot be predicted with certainty, for example due to uncertainty in the initial conditions, the complexity of the system, or other factors.

Examples of random variables include

  • the measurement outcome itself, e.g. ‘head’ or ‘tail’ when flipping a coin, in which case $X = E$
  • combinations of measurement outcomes, e.g. the sum obtained when rolling two dice (see the sketch after this list)
  • in practice: the values assumed by stock prices, heart rates, $\dots$
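
As a sketch of such a combination (the helper name `dice_sum` is an assumption of this illustration, not part of the text), the sum of two dice is again a function, now defined on the product set of outcomes:

```python
import random

def dice_sum(outcome):
    """Random variable on X = {1,...,6} x {1,...,6}: assigns to each pair of dice its sum."""
    d1, d2 = outcome
    return d1 + d2

outcome = (random.randint(1, 6), random.randint(1, 6))  # one random outcome from X
print(outcome, "->", dice_sum(outcome))                 # a value in {2, ..., 12}
```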

Note that a random variable only assigns values to outcomes and carries no notion of time. The time evolution of a random variable is a stochastic process.

Independence of Random Variables

An important concept is the independence of two random variables:

Suppose a random experiment yields a random outcome $x_1\in X_1$, while another experiment yields $x_2\in X_2$. We can combine these into a single random variable $x=(x_1,x_2)$ that takes values in the product set $X=X_1 \times X_2$ and consider the probability $P(A)$ defined on subsets $A\subseteq X$.

The probability $P(x_1)$ of $x_1$ is then given by the marginal probability $P(x_1)=P(\{x_1\}\times X_2)$, i.e. the probability of the set of all $x=(x_1,x_2)$, where the first component has the definite value $x_1$ and the second component $x_2$ is arbitrary. The probability $P(x_2)$ of $x_2$ is defined analogously.
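
In the discrete case this marginalization can be written out explicitly as a sum over all possible values of the second component:

$$
P(x_1) = P(\{x_1\}\times X_2) = \sum_{x_2 \in X_2} P(x_1,x_2) \quad.
$$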

We say the random variables $x_1$ and $x_2$ are independent iff

$$
P(x_1,x_2) = P(x_1)P(x_2) \quad.
$$

Thus, two random variables are independent if the values assumed by one do not influence the values taken by the other one – they convey no information about each other.

Two successive tosses of a fair coin are independent. A counter-example is drawing from an urn without replacement: the first draw changes the probabilities for the second draw.
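
As a numerical sketch of these two situations (the sample size, the urn with two red and two blue balls, and the function names are choices made only for this illustration), one can estimate the joint and marginal frequencies and compare $P(x_1,x_2)$ with $P(x_1)P(x_2)$:

```python
import random

N = 100_000

# Two successive tosses of a fair coin: the joint frequency matches the product of marginals.
tosses = [(random.choice("HT"), random.choice("HT")) for _ in range(N)]
p_joint = sum(t == ("H", "H") for t in tosses) / N
p1 = sum(t[0] == "H" for t in tosses) / N
p2 = sum(t[1] == "H" for t in tosses) / N
print("coin:", round(p_joint, 3), "vs", round(p1 * p2, 3))  # approximately equal (~0.25)

# Two draws without replacement from an urn with two red and two blue balls: not independent.
def draw_two():
    urn = ["red", "red", "blue", "blue"]
    random.shuffle(urn)
    return urn[0], urn[1]

draws = [draw_two() for _ in range(N)]
p_joint = sum(d == ("red", "red") for d in draws) / N  # ~1/6
p1 = sum(d[0] == "red" for d in draws) / N             # ~1/2
p2 = sum(d[1] == "red" for d in draws) / N             # ~1/2
print("urn: ", round(p_joint, 3), "vs", round(p1 * p2, 3))  # clearly different (~0.17 vs ~0.25)
```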

Excursion into Vector Spaces

One can consider random variables as elements of a function vector space, e.g. $L^2$, the space of all square-integrable functions. If two random variables are independent, then their centered versions (means subtracted) are orthogonal with respect to the inner product $\langle X, Y \rangle = E[XY]$ that induces the norm $||.||$ on that space: their covariance vanishes.

Just like the Pythagorean theorem holds for the lengths of orthogonal vectors $x \perp y$

$$
||x + y||^2 = ||x||^2 + ||y||^2
$$

the variance is additive for the sum of two independent random variables $X$ and $Y$:

$$
\sigma_{X+Y}^2 = \sigma_X^2 + \sigma_Y^2 \quad.
$$
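
The correspondence becomes explicit when the variance of the sum is expanded; the cross term is (twice) the covariance, which vanishes for independent random variables:

$$
\sigma_{X+Y}^2 = E\left[\left((X-\mu_X)+(Y-\mu_Y)\right)^2\right] = \sigma_X^2 + 2\,\mathrm{Cov}(X,Y) + \sigma_Y^2 \quad.
$$

A quick Monte Carlo check of the additivity (the choice of distributions and the sample size are arbitrary for this sketch):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 1_000_000

x = rng.normal(loc=1.0, scale=2.0, size=N)  # samples of X with Var(X) = 4
y = rng.exponential(scale=3.0, size=N)      # independent samples of Y with Var(Y) = 9

# Variance of the sum versus sum of the variances: approximately equal for independent X and Y
print(np.var(x + y), np.var(x) + np.var(y))
```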

This article is part of the book project on stochastic processes.
