Category Archives: Stochastic Processes

The Poisson Process: Use of Generating Functions

In an earlier post (here), we obtained the Poisson distribution by successively solving a differential-difference equation. While obtaining the successive solutions was straightforward there, the same cannot be said in general. In this note, we discuss another method of obtaining probability distributions. Let $P(x,t)=\sum_{n=0}^\infty p_n(t)x^n$. $P(x,t)$ is called the probability generating function of the probability distribution $p_n(t)$. Recall the differential-difference equation (2) from that post: $$\frac{dp_n(t)}{dt}=\lambda\{p_{n-1}(t)-p_n(t)\},\ n\geq 0$$ with $p_{-1}(t):=0$ and $p_0(0)=1$. Multiplying the equation by $x^n$ and summing over $n$ from $n=0$ to $\infty$ (the term $p_{-1}(t)=0$ contributes nothing, and $\sum_{n=0}^\infty p_{n-1}(t)x^n=xP(x,t)$), we obtain the differential equation \begin{equation}\label{eq:genfn}\frac{\partial P(x,t)}{\partial t}=\lambda(x-1)P(x,t)\end{equation} with $P(x,0)=1$. Since \eqref{eq:genfn} contains a derivative with respect to $t$ only, it is a separable equation. Its solution is $$P(x,t)=e^{\lambda t(x-1)}$$ Writing $P(x,t)=e^{-\lambda t}e^{\lambda tx}=e^{-\lambda t}\sum_{n=0}^\infty\frac{(\lambda t)^n}{n!}x^n$ and reading off the coefficient of $x^n$, we obtain the Poisson distribution $$p_n(t)=\frac{e^{-\lambda t}(\lambda t)^n}{n!}$$
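As a quick numerical sanity check, one can integrate a truncated version of the differential-difference system directly and compare with the Poisson probabilities read off from $P(x,t)=e^{\lambda t(x-1)}$. This is only a sketch; the rate $\lambda$, the time $T$, and the truncation level are assumed values for illustration.

```python
import math

lam = 2.0       # rate λ (assumed value for illustration)
T = 1.5         # final time
N = 30          # truncation level for the state space
steps = 200000
dt = T / steps

# p[n] approximates p_n(t); initial condition p_0(0) = 1
p = [0.0] * (N + 1)
p[0] = 1.0

# Euler integration of dp_n/dt = λ(p_{n-1} - p_n), with p_{-1} := 0
for _ in range(steps):
    p = [p[n] + dt * lam * ((p[n - 1] if n > 0 else 0.0) - p[n])
         for n in range(N + 1)]

# Compare with the Poisson pmf e^{-λT}(λT)^n / n!
for n in range(5):
    exact = math.exp(-lam * T) * (lam * T) ** n / math.factorial(n)
    print(n, round(p[n], 6), round(exact, 6))
```

With a small enough time step, the integrated $p_n(T)$ agree with the Poisson probabilities to several decimal places.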

References:

  1. Norman T. J. Bailey, The Elements of Stochastic Processes with Applications to the Natural Sciences, John Wiley & Sons, Inc., 1964.

Markov Processes: The Poisson Process

A Markov process is a stochastic process such that, given the present state of the system, the future behavior is independent of the past history.

Suppose that the size of the population under consideration at time $t$ is represented by a discrete random variable $X(t)$ with probability $$P\{X(t)=n\}=p_n(t),\ n=0,1,2,\cdots$$ At certain instants of time, there will be discrete changes in the population size, due to the loss or death of an individual, or to the appearance or birth of new individuals.

We now consider an example of a stochastic process which gives rise to a simple model of a Markov process called the Poisson process. Suppose that we are measuring the radiation of a certain radioactive substance. Let $X(t)$ be the total number of particles recorded up to time $t$. Also suppose that the chance of a new particle being recorded in any short time interval is independent not only of the previous states but also of the present state. Then the chance of a new addition to the total count during a very short time interval $\Delta t$ can be written as $\lambda\Delta t+o(\Delta t)$, where $\lambda$ is a constant and $o(\Delta t)$ accounts for the chance of two or more emissions in the interval. The chance of no change in the total count is then $1-\lambda\Delta t-o(\Delta t)$. Thus, we obtain \begin{equation}\label{eq:poisson}p_n(t+\Delta t)=p_{n-1}(t)\lambda\Delta t+p_n(t)(1-\lambda\Delta t)\end{equation} where terms that are small compared with $\Delta t$ have been disregarded. \eqref{eq:poisson} results in the following differential-difference equation \begin{equation}\begin{aligned}\frac{dp_n(t)}{dt}&=\lim_{\Delta t\to 0}\frac{p_n(t+\Delta t)-p_n(t)}{\Delta t}\\&=\lambda\{p_{n-1}(t)-p_n(t)\},\ n>0\end{aligned}\label{eq:poisson2}\end{equation}

Note that we have $n=0$ at time $t+\Delta t$ only if $n=0$ at time $t$ and no new particles are emitted in $\Delta t$. This means that we have $$p_0(t+\Delta t)=p_0(t)(1-\lambda\Delta t)$$ and subsequently the differential equation $$\frac{dp_0}{dt}=-\lambda p_0(t)$$ whose solution is $$p_0(t)=e^{-\lambda t}$$ with the initial condition $p_0(0)=1$. This initial condition can be obtained by setting the Geiger counter to $0$ at the beginning of the process. Starting off with $p_0(t)$, iterative applications of \eqref{eq:poisson2} result in $$p_n(t)=\frac{(\lambda t)^ne^{-\lambda t}}{n!},\ n=0,1,2,\cdots$$ which is a Poisson distribution with parameter $\lambda t$.
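The Geiger-counter picture can also be simulated directly: the waiting times between recorded particles in a Poisson process of rate $\lambda$ are exponentially distributed with parameter $\lambda$, so we can build sample paths and compare the empirical distribution of $X(t)$ with the Poisson probabilities. The parameter values below are assumed for illustration.

```python
import math
import random

random.seed(42)

lam = 2.0    # emission rate λ (assumed value for illustration)
t = 3.0      # observation time
trials = 20000

def count_at(t, lam):
    """Number of particles recorded up to time t: inter-arrival
    times of a Poisson process are Exponential(λ)."""
    n, s = 0, random.expovariate(lam)
    while s <= t:
        n += 1
        s += random.expovariate(lam)
    return n

counts = [count_at(t, lam) for _ in range(trials)]

# Compare empirical frequencies with the Poisson pmf e^{-λt}(λt)^n / n!
for n in range(5):
    empirical = counts.count(n) / trials
    exact = math.exp(-lam * t) * (lam * t) ** n / math.factorial(n)
    print(n, round(empirical, 4), round(exact, 4))
```

The empirical frequencies match the Poisson probabilities up to Monte Carlo error, and the sample mean of the counts is close to $\lambda t$.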

References:

  1. Norman T. J. Bailey, The Elements of Stochastic Processes with Applications to the Natural Sciences, John Wiley & Sons, Inc., 1964.

Itô’s Formula

Let us consider the 1-dimensional case ($n=1$) of the stochastic differential equation (4) from the last post
\begin{equation}\label{eq:sd3}dX=b(X)dt+dW\end{equation} with $X(0)=0$.
Let $u: \mathbb{R}\longrightarrow\mathbb{R}$ be a smooth function and $Y(t)=u(X(t))$ ($t\geq 0$). The chain rule from ordinary calculus would suggest that
$$dY=u’dX=u’bdt+u’dW,$$
where $’=\frac{d}{dx}$. Surprisingly, this is not correct. First, by Taylor series expansion we obtain
\begin{align*}
dY&=u’dX+\frac{1}{2}u^{\prime\prime}(dX)^2+\cdots\\
&=u'(bdt+dW)+\frac{1}{2}u^{\prime\prime}(bdt+dW)^2+\cdots
\end{align*}
Now we introduce the following striking formula
\begin{equation}\label{eq:wiener2}(dW)^2=dt\end{equation}
The proof of \eqref{eq:wiener2} is beyond the scope of these notes, so it will not be given here. However, it can be found, for example, in [2]. Using \eqref{eq:wiener2}, $dY$ can be written as
$$dY=\left(u’b+\frac{1}{2}u^{\prime\prime}\right)dt+u’dW+\cdots$$
The terms beyond $u’dW$ are of order $(dt)^{\frac{3}{2}}$ and higher. Neglecting these terms, we have
\begin{equation}\label{eq:sd4}dY=\left(u’b+\frac{1}{2}u^{\prime\prime}\right)dt+u’dW\end{equation}
\eqref{eq:sd4} is the stochastic differential equation satisfied by $Y(t)$, and it is called Itô’s formula, named after the Japanese mathematician Kiyosi Itô.
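The heuristic $(dW)^2=dt$ behind Itô's formula can be made plausible numerically: along a partition of $[0,T]$, the sum of squared Brownian increments (the quadratic variation) concentrates around $T$. A minimal sketch, with the horizon $T$ and step count assumed for illustration:

```python
import math
import random

random.seed(0)

T = 1.0
N = 100000           # number of subintervals
dt = T / N

# Sample a Brownian path on [0, T]: increments dW ~ N(0, dt)
dW = [random.gauss(0.0, math.sqrt(dt)) for _ in range(N)]

# Quadratic variation: sum of (dW)^2 over the partition.
# The heuristic (dW)^2 = dt says this should be close to T.
qv = sum(w * w for w in dW)
print(qv)
```

Each term $(dW_i)^2$ has mean $dt$, so the sum has mean $T$, and its fluctuations shrink as the partition is refined.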

Example. Let us consider the stochastic differential equation
\begin{equation}\label{eq:sd5}dY=YdW,\ Y(0)=1\end{equation}
Comparing \eqref{eq:sd4} and \eqref{eq:sd5}, we obtain
\begin{align}\label{eq:sd5a}
u’b+\frac{1}{2}u^{\prime\prime}&=0\\\label{eq:sd5b}u’&=u\end{align}
The equation \eqref{eq:sd5b} along with the initial condition $Y(0)=u(X(0))=u(0)=1$ results in $u(x)=e^x$, i.e. $u(X(t))=e^{X(t)}$. Using this $u$ with equation \eqref{eq:sd5a}, we get $b=-\frac{1}{2}$, and so the equation \eqref{eq:sd3} becomes
$$dX=-\frac{1}{2}dt+dW$$
in which case $X(t)=-\frac{1}{2}t+W(t)$. Hence, we find $Y(t)$ as
$$Y(t)=e^{-\frac{1}{2}t+W(t)}$$
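One can check this solution numerically: an Euler–Maruyama discretization of $dY=YdW$ along a sampled Brownian path should track $e^{-\frac{1}{2}t+W(t)}$ evaluated along the same path. This is only a sketch, with the horizon and step count chosen for illustration.

```python
import math
import random

random.seed(1)

T, N = 1.0, 100000
dt = T / N

y = 1.0    # Euler–Maruyama approximation of dY = Y dW, Y(0) = 1
w = 0.0    # Brownian path W(t), built from N(0, dt) increments
for _ in range(N):
    dW = random.gauss(0.0, math.sqrt(dt))
    y += y * dW          # dY = Y dW
    w += dW

exact = math.exp(-0.5 * T + w)   # Y(t) = e^{-t/2 + W(t)}
print(y, exact)
```

The two values agree up to the discretization error, which shrinks as the time step is refined.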

Example. Let $P(t)$ denote the price of a stock at time $t\geq 0$. A standard model assumes that the relative change of price $\frac{dP}{P}$ evolves according to the stochastic differential equation
\begin{equation}\label{eq:relprice}\frac{dP}{P}=\mu dt+\sigma dW\end{equation}
where $\mu>0$ and $\sigma$ are constants called the drift and the volatility of the stock, respectively. Again using Itô’s formula as in the previous example, we find the price function $P(t)$, which is the solution of
$$dP=\mu Pdt+\sigma PdW,\ P(0)=p_0$$
as
$$P(t)=p_0\exp\left[\left(\mu-\frac{1}{2}\sigma^2\right)t+\sigma W(t)\right].$$
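As a sanity check on the closed-form solution $P(t)=p_0\exp\left[\left(\mu-\frac{1}{2}\sigma^2\right)t+\sigma W(t)\right]$, one can sample $W(T)\sim N(0,T)$ directly and compare the Monte Carlo average of $P(T)$ with $p_0e^{\mu T}$, a standard property of geometric Brownian motion. The drift, volatility, and initial price below are assumed values for illustration.

```python
import math
import random

random.seed(7)

# Assumed parameter values for illustration
mu, sigma = 0.08, 0.2        # drift μ and volatility σ
p0, T = 100.0, 1.0
trials = 200000

# P(T) = p0 exp[(μ - σ²/2)T + σ W(T)] with W(T) ~ N(0, T)
samples = [p0 * math.exp((mu - 0.5 * sigma ** 2) * T
                         + sigma * random.gauss(0.0, math.sqrt(T)))
           for _ in range(trials)]

mean = sum(samples) / trials
print(mean, p0 * math.exp(mu * T))   # sample mean vs p0 e^{μT}
```

Note that the $-\frac{1}{2}\sigma^2$ correction in the exponent is exactly what makes the expectation come out to $p_0e^{\mu T}$ rather than something larger.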

References:

1. Lawrence C. Evans, An Introduction to Stochastic Differential Equations, Lecture Notes

2. Bernt Øksendal, Stochastic Differential Equations, An Introduction with Applications, 5th Edition, Springer, 2000

What is a Stochastic Differential Equation?

Consider the population growth model
\begin{equation}\label{eq:popgrowth}\frac{dN}{dt}=a(t)N(t),\ N(0)=N_0\end{equation}
where $N(t)$ is the size of a population at time $t$ and $a(t)$ is the relative growth rate at time $t$. If $a(t)$ is completely known, one can easily solve \eqref{eq:popgrowth}. In fact, the solution is $N(t)=N_0\exp\left(\int_0^t a(s)ds\right)$. Now suppose that $a(t)$ is not completely known but can be written as $a(t)=r(t)+\mbox{noise}$. We do not know the exact behavior of the noise, only its probability distribution. In such a case, an equation like \eqref{eq:popgrowth} is called a stochastic differential equation. More generally, a stochastic differential equation can be written as
\begin{equation}\label{eq:sd}\frac{dX}{dt}=b(X(t))+B(X(t))\xi(t)\ (t>0),\ X(0)=x_0,\end{equation}
where $b: \mathbb{R}^n\longrightarrow\mathbb{R}^n$ is a smooth vector field and $X: [0,\infty)\longrightarrow\mathbb{R}^n$, $B: \mathbb{R}^n\longrightarrow\mathbb{M}^{n\times m}$ and $\xi(t)$ is an $m$-dimensional white noise. If $m=n$, $x_0=0$, $b=0$ and $B=I$, then \eqref{eq:sd} turns into
\begin{equation}\label{eq:wiener}\frac{dX}{dt}=\xi(t),\ X(0)=0\end{equation}
The solution of \eqref{eq:wiener} is denoted by $W(t)$ and is called the $n$-dimensional Wiener process or Brownian motion. In other words, white noise $\xi(t)$ is the time derivative of the Wiener process. Replacing $\xi(t)$ in \eqref{eq:sd} by $\frac{dW(t)}{dt}$ and multiplying the resulting equation by $dt$, we obtain
\begin{equation}\label{eq:sd2}dX(t)=b(X(t))dt+B(X(t))dW(t),\ X(0)=x_0\end{equation}
The stochastic differential equation \eqref{eq:sd2} is solved symbolically as
\begin{equation}\label{eq:sdsol}X(t)=x_0+\int_0^tb(X(s))ds+\int_0^tB(X(s))dW(s)\end{equation}
for all $t>0$. In order to make sense of $X(t)$ in \eqref{eq:sdsol}, we will have to know what $W(t)$ is and what the integral $\int_0^tB(X(s))dW(s)$, which is called a stochastic integral, means.
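Although the rigorous construction of $W(t)$ is deferred, a discrete sketch is easy: building a path from independent $N(0,\Delta t)$ increments and sampling many endpoints, one can check the standard properties $E[W(T)]=0$ and $E[W(T)^2]=T$. The horizon, step count, and trial count below are assumed values for illustration.

```python
import math
import random

random.seed(3)

T, N, trials = 2.0, 400, 5000
dt = T / N

# Sample W(T) for many independent discrete Wiener paths,
# each built from independent N(0, dt) increments.
endpoints = []
for _ in range(trials):
    w = 0.0
    for _ in range(N):
        w += random.gauss(0.0, math.sqrt(dt))
    endpoints.append(w)

mean = sum(endpoints) / trials
var = sum(w * w for w in endpoints) / trials
print(round(mean, 3), round(var, 3))   # ≈ 0 and ≈ T
```

The sample mean is near $0$ and the sample second moment near $T$, consistent with $W(T)\sim N(0,T)$.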

References:

  1. Lawrence C. Evans, An Introduction to Stochastic Differential Equations, Lecture Notes
  2. Bernt Øksendal, Stochastic Differential Equations, An Introduction with Applications, 5th Edition, Springer, 2000