In 1827 the Scottish botanist Robert Brown observed that microscopic pollen grains suspended in water perform a continual swarming motion.
This phenomenon was first explained by Einstein in 1905, who argued that the motion is caused by the pollen grains being bombarded by the molecules of the surrounding water. The first rigorous mathematical construction of the Brownian motion process was given by Wiener in 1923, and in his honor it is often called the Wiener process.
A continuous-time, continuous state-space stochastic process \(\{X(t),t\ge 0\}\) is called a Brownian motion process if

1. \(X(0)=0\)
2. \(\{X(t),t\ge 0\}\) has stationary and independent increments
3. for every \(t>0\), \(X(t)\) is normally distributed with mean \(0\) and variance \(\sigma^2t\)
One way to visualize a Brownian motion process is as the limit of symmetric random walks: let \(\{Z_n,n \ge 1\}\) be the steps of a symmetric random walk, that is, independent random variables with \(P(Z_n=1)=P(Z_n=-1)=1/2\). If we now speed the process up and scale the jumps down accordingly, we get a Brownian motion process in the limit. More precisely, suppose we make a jump of size \(\delta x\) every \(\delta t\) time units. If we let \(Z(t)\) denote the position of the process at time \(t\), then
\[ \begin{aligned} &Z(t) = \sum_{i=1}^{\lfloor t/\delta t\rfloor} \delta x Z_i\\ &E[Z(t)] = \sum_{i=1}^{\lfloor t/\delta t\rfloor} \delta x E[Z_i]=0\\ &var \left(Z(t) \right) = \sum_{i=1}^{\lfloor t/\delta t\rfloor} (\delta x)^2 var(Z_i)=\sum_{i=1}^{\lfloor t/\delta t\rfloor} (\delta x)^2=(\delta x)^2\lfloor t/\delta t\rfloor\\ \end{aligned} \] Now let \(\delta x=\sigma \sqrt{\delta t}\) and let \(\delta t\rightarrow 0\), then
\[var \left(Z(t) \right) = (\delta x)^2\lfloor t/\delta t\rfloor = (\sigma \sqrt{\delta t})^2\lfloor t/\delta t\rfloor\rightarrow\sigma^2t\] and by the central limit theorem
\[Z(t)\rightarrow N(0,\sigma\sqrt{t})\] in distribution, that is, \(Z(t)\) converges to a normal random variable with mean \(0\) and variance \(\sigma^2 t\).
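To make this concrete, here is a minimal simulation sketch (in Python with numpy; the notes don't fix a language, and the values of \(\sigma\), \(t\) and the number of walks are illustrative choices): for decreasing \(\delta t\) the variance of \(Z(t)\) settles down to \(\sigma^2 t\), as derived above.

```python
import numpy as np

rng = np.random.default_rng(111)
sigma, t = 1.5, 2.0                    # illustrative values, not from the notes

for dt in [0.1, 0.01, 0.001]:
    dx = sigma * np.sqrt(dt)           # jump size: delta x = sigma * sqrt(delta t)
    n_steps = int(t / dt)              # number of jumps made up to time t
    # steps Z_i = +/-1 with probability 1/2 each; 2000 independent walks
    steps = rng.choice([-1.0, 1.0], size=(2000, n_steps))
    Zt = dx * steps.sum(axis=1)        # Z(t) = sum of delta x * Z_i
    print(f"dt={dt:6.3f}  var(Z(t))={Zt.var():7.4f}  sigma^2*t={sigma**2 * t:.4f}")
```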
The Brownian motion process plays a role in the theory of stochastic processes similar to the role of the normal distribution in the theory of random variables.
If \(\sigma=1\) the process is called standard Brownian motion.
Next we draw sample paths of a standard Brownian motion process.
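Since these notes don't fix a language, here is one way to do this in Python (numpy and matplotlib assumed): by the definition, the increments over a grid of width \(h\) are independent \(N(0,\sqrt h)\) random variables, so a sample path is just their cumulative sum.

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(222)
n, t_max = 1000, 1.0
h = t_max / n
t = np.linspace(0.0, t_max, n + 1)

plt.figure()
for _ in range(5):  # five independent sample paths
    # increments B(t+h)-B(t) ~ N(0, sqrt(h)); B(0) = 0
    increments = rng.normal(0.0, np.sqrt(h), size=n)
    path = np.concatenate(([0.0], np.cumsum(increments)))
    plt.plot(t, path, lw=0.8)
plt.xlabel("t"); plt.ylabel("B(t)")
plt.title("Sample paths of standard Brownian motion")
plt.show()
```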
Here are some properties of Brownian motion:
Brownian motion will eventually hit any and every real value, no matter how large or how negative! It may be a million units above the axis, but it will (with probability 1) be back down again to 0, by some later time.
Once Brownian motion hits zero (or any particular value), it immediately hits it again infinitely often, and then again from time to time in the future.
Spatial Homogeneity: \(B(t) + x\) for any \(x \in \mathbb{R}\) is a Brownian motion started at \(x\).
Symmetry: \(-B(t)\) is a Brownian motion.
Scaling: \(\sqrt{c}\,B(t/c)\) for any \(c > 0\) is a Brownian motion.
Time inversion:
\[Z(t)=\begin{cases}0&\text{if } t=0\\t\,B(1/t)&\text{if } t>0\end{cases}\]
is a Brownian motion.
Brownian motion is time reversible: for fixed \(T>0\), \(\{B(T)-B(T-t),0\le t\le T\}\) is again a Brownian motion.
Brownian motion is self-similar (that is, its paths are fractals):
Consider four graphs of Brownian motion paths drawn without labels on the axes. They appear to be completely the same, but once we add the tick marks we see that the scales are completely different. This phenomenon is called self-similarity (a simulation sketch follows below).
Brownian motion is an example of a process that has a fractal dimension of 2. One of its occurrences is in the movement of microscopic particles, which is the result of random jostling by water molecules (if water is the medium). So in moving from a given location in space to any other, the path taken by the particle is almost certain to fill the whole space before it reaches the exact point that is the ‘destination’ (hence the fractal dimension of 2).
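Here is a sketch of the self-similarity experiment described above (again Python; the window widths are illustrative choices): simulate one finely sampled path and plot it over windows whose widths differ by factors of 10. Without tick marks the four panels look essentially alike.

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(333)
n = 2**20                              # one finely sampled path on [0, 1]
h = 1.0 / n
t = np.arange(n + 1) * h
path = np.concatenate(([0.0], np.cumsum(rng.normal(0.0, np.sqrt(h), n))))

fig, axes = plt.subplots(2, 2, figsize=(8, 6))
for ax, width in zip(axes.flat, [1.0, 0.1, 0.01, 0.001]):
    m = int(width / h)                 # number of grid points in this window
    ax.plot(t[:m + 1], path[:m + 1], lw=0.6)
    # ax.set_xticks([]); ax.set_yticks([])   # hide the scales to see the effect
plt.tight_layout()
plt.show()
```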
When studying a continuous-time stochastic process it is often useful to think of any particular realization of the process as a function. Say \(S\) is the sample space of the process, that is, the set of all possible paths \(\{X(t),t \ge 0\}\), and let \(\omega \in S\).
Then \(f(t) = X(t, \omega)\) is a function. (Usually we suppress \(\omega\), though).
In the case of Brownian motion, what are the properties of a typical realization \(B(t)\)? First let’s look at continuity:
Now by the definition we have that
\[B(t+h)-B(t) \sim N(0,\sqrt{h})\]
therefore \(E[(B(t+h)-B(t))^2] = h\), and so the typical size of an increment \(|B(t+h)-B(t)|\) is about \(\sqrt{h}\). As \(h\rightarrow 0\) we have \(\sqrt{h} \rightarrow 0\), which implies continuity.
How about differentiability? Now we have
\[\frac{dB(t)}{dt}=\lim_{h\rightarrow 0}\frac{B(t+h)-B(t)}{h}\approx \lim_{h\rightarrow 0} \frac{\sqrt h}{h}=\infty\]
and we see that Brownian motion paths are nowhere differentiable!
(Of course this is rather heuristic but it can be made rigorous).
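The heuristic is easy to check numerically (a sketch; the sample sizes are arbitrary choices): the average size of \(|B(t+h)-B(t)|\) shrinks like \(\sqrt h\), while the average difference quotient \(|B(t+h)-B(t)|/h\) grows like \(1/\sqrt h\).

```python
import numpy as np

rng = np.random.default_rng(444)
for h in [1e-2, 1e-4, 1e-6]:
    # increments B(t+h)-B(t) ~ N(0, sqrt(h)); sample many of them
    inc = rng.normal(0.0, np.sqrt(h), size=100000)
    print(f"h={h:.0e}  mean |increment| = {np.abs(inc).mean():.5f}"
          f"  (sqrt(h) = {np.sqrt(h):.5f})"
          f"  mean |quotient| = {np.abs(inc / h).mean():9.1f}")
```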
The idea of functions that are continuous but nowhere differentiable has a very interesting history. The possibility was first discussed in 1806 by André-Marie Ampère, and showing whether such a function exists was one of the main open problems of the 19th century. More than fifty years later Karl Theodor Wilhelm Weierstrass finally succeeded in constructing one as follows:
\[W(x)= \sum_{i=0}^{\infty} b^i \cos(a^i\pi x)\]
Here is what this looks like for \(b = 0.2\) and \(a = 5 + 7.5\pi\) (and a finite sum!):
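A sketch that reproduces such a picture (Python assumed; a partial sum of 20 terms, since the terms decay like \(b^i\)):

```python
import numpy as np
import matplotlib.pyplot as plt

b, a = 0.2, 5 + 7.5 * np.pi            # parameters from the text
x = np.linspace(0.0, 1.0, 5000)
# finite partial sum; with b = 0.2 the terms b^i die off very quickly,
# so 20 terms are plenty for a plot
W = sum(b**i * np.cos(a**i * np.pi * x) for i in range(20))

plt.plot(x, W, lw=0.5)
plt.title("Partial sum of the Weierstrass function, b=0.2, a=5+7.5$\\pi$")
plt.xlabel("x"); plt.ylabel("W(x)")
plt.show()
```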
The hard part here was not writing down the series but showing that it defines a function with the stated properties: the sum converges (so \(W\) exists and is continuous), yet \(W\) is nowhere differentiable. Weierstrass' name also survives in the Stone-Weierstrass theorem, a far-reaching generalization of his famous approximation theorem.
Shortly after that a new branch of mathematics called functional analysis was developed. It studies the properties of real-valued functions on function spaces, called functionals. Here are some examples:
\[ \begin{aligned} &\Phi(f) = f(t)\text{ for some }t\\ &\Phi(f) = \int_{-\infty}^{\infty} f(t) dt\\ &\Phi(f) = \int_{-\infty}^{\infty} f^2(t) dt\\ \end{aligned} \]
Of course one needs to specify the space of functions on which a certain functional is defined. Standard “function spaces” are \(C[0,1]\), the space of all continuous functions on \([0,1]\), and \(C^1[0,1]\), the space of all continuously differentiable functions on \([0,1]\).
One of the results of functional analysis is that \(C[0,1]\) is much larger than \(C^1[0,1]\): by the Baire category theorem, \(C^1[0,1]\) is a meager (first category) subset of \(C[0,1]\).
So consider the following “experiment”: pick a continuous function on \([0,1]\) at random. Then the probability that it is differentiable anywhere is 0! So functions such as Weierstrass' (or the paths of Brownian motion) are not the exception, they are the rule. Or put differently: the smooth functions we study in mathematics are completely atypical of what occurs in nature!
Let \(g(x)\) be a continuous function and let \(\{B(t),t\ge 0\}\) be a standard Brownian motion. For each fixed \(t>0\) there exists a random variable
\[\Psi(g)=\int_0^t g(x)dB(x)\]
which is the limit of the approximating sums
\[\Psi_n(g)=\sum_{k=1}^{2^n} g\left(\frac{k}{2^n}t\right)\left[B\left(\frac{k}{2^n}t\right)-B\left(\frac{k-1}{2^n}t\right)\right]\]
as \(n\rightarrow \infty\). The random variable \(\Psi(g)\) is normally distributed with mean 0 and variance
\[var[\Psi(g)]=\int_0^t g^2(x)dx\]
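A sketch of this construction by simulation (Python assumed; the integrand \(g(x)=x\) is an illustrative choice): form the approximating sum \(\Psi_n(g)\) along many simulated paths and compare the sample variance with \(\int_0^t g^2(x)dx=t^3/3\).

```python
import numpy as np

rng = np.random.default_rng(555)
t, n, reps = 1.0, 10, 5000             # 2^n subintervals, 5000 simulated paths
k = np.arange(1, 2**n + 1)
grid = k / 2**n * t                    # evaluation points (k/2^n) t
g = grid                               # illustrative integrand g(x) = x

# increments B((k/2^n)t) - B(((k-1)/2^n)t) are independent N(0, sqrt(t/2^n))
incs = rng.normal(0.0, np.sqrt(t / 2**n), size=(reps, 2**n))
psi = (g * incs).sum(axis=1)           # Psi_n(g) for each simulated path

print("sample mean    :", psi.mean())  # theory: 0
print("sample variance:", psi.var())   # theory: int_0^t g^2(x) dx = t^3/3
print("t^3/3          :", t**3 / 3)
```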
If \(f(x)\) is another continuous function of x, then \(\Psi(f)\) and \(\Psi(g)\) have a joint normal distribution with covariance
\[cov(\Psi(g),\Psi(f))=E[\Psi(g)\Psi(f)]=\int_0^t g(x)f(x)dx\]
There is a version of the integration by parts formula:
\[\int_0^t g(x)dB(x)=g(t)B(t)-\int_0^t B(x)g'(x)dx\] and so for example
\[\int_0^t 1\,dB(x)=1\cdot B(t)-\int_0^t B(x)\cdot 0\,dx=B(t)\]
or
\[\int_0^t (t-x)dB(x)=(t-t)B(t)-\int_0^t B(x)(-1)dx=\int_0^t B(x)dx\] This is called integrated Brownian motion. Note that from the definition as a (limit of) sums we have
\[ \begin{aligned} &E \left[\int_0^t B(x)dx \right] = 0\\ &var \left( \int_0^t B(x)dx\right) = \int_0^t (t-x)^2dx= \frac{t^3}3 \end{aligned} \]
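These two moments are easy to verify by simulation (a sketch; the grid and sample sizes are arbitrary choices): approximate \(\int_0^t B(x)dx\) by a Riemann sum along many simulated paths.

```python
import numpy as np

rng = np.random.default_rng(666)
t, n, reps = 2.0, 500, 10000           # grid size and number of simulated paths
h = t / n

# simulate paths on a grid and approximate int_0^t B(x) dx by a Riemann sum
incs = rng.normal(0.0, np.sqrt(h), size=(reps, n))
paths = np.cumsum(incs, axis=1)        # B(h), B(2h), ..., B(t) for each path
integrals = paths.sum(axis=1) * h      # Riemann sum approximation

print("sample mean    :", integrals.mean())   # theory: 0
print("sample variance:", integrals.var())    # theory: t^3/3
print("t^3/3          :", t**3 / 3)
```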
Notice that so far we have discussed integration with respect to Brownian motion, not differential equations. Of course, just as in calculus, integration and differentiation are closely related. We are unfortunately not quite at the point where we can define a stochastic differential equation. If you want to know how this is done, come to my course Esma 6789 Stochastic Processes!