Category Archives: Statistical Mechanics

Maxwell-Boltzmann Statistics

We continue with the problem, set up in the note on the binomial distribution below, of distributing $N$ particles into $K$ boxes. Assume that the probability of a particle going into box $i$ is the same for all boxes, i.e. each box is equally likely for a single particle; call this common probability $p$ (so $p=\frac{1}{K}$). Then the probability $P$ of having $n_1$ particles in box 1, $n_2$ particles in box 2, …, $n_K$ particles in box $K$ is
\begin{equation}
\begin{aligned}
P&=N!\prod_{i=1}^K\frac{1}{n_i!}\,p^{n_1}p^{n_2}\cdots p^{n_K}\\
&=N!\prod_{i=1}^K\frac{1}{n_i!}\,p^N
\end{aligned}\label{eq:maxwell-boltzmann}
\end{equation}
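As a quick sanity check of this formula, here is a small Python sketch (the particle number, box count, and target occupation below are arbitrary illustrative values, not from the text) that compares $N!\prod_i\frac{1}{n_i!}\,p^N$ with the frequency observed in a direct simulation of dropping particles into equally likely boxes.

```python
# A Monte Carlo check of the multinomial probability above (illustrative values):
# drop N particles into K equally likely boxes many times and compare the observed
# frequency of one occupation pattern (n_1, ..., n_K) with N! * prod(1/n_i!) * p^N,
# where p = 1/K.
import math
import random
from collections import Counter

N, K = 6, 3
target = (3, 2, 1)      # the occupation numbers n_1, n_2, n_3 we look for
p = 1.0 / K             # equal single-particle probability for each box

# theoretical probability N! * prod(1/n_i!) * p^N
theory = math.factorial(N) * p**N
for n_i in target:
    theory /= math.factorial(n_i)

trials, hits = 100_000, 0
for _ in range(trials):
    counts = Counter(random.randrange(K) for _ in range(N))
    if tuple(counts.get(i, 0) for i in range(K)) == target:
        hits += 1

print(f"formula    : {theory:.5f}")
print(f"simulation : {hits / trials:.5f}")
```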
We want to find the distribution of particles into different boxes by maximizing the probability \eqref{eq:maxwell-boltzmann}. Since $p^N$ is constant, maximizing the probability is the same as maximizing $W:=N!\prod_{i=1}^K\frac{1}{n_i!}$. Boltzmann defined entropy corresponding to a distribution of particles by
$$S=k\log W$$
where $k$ is the Boltzmann constant. By Stirling's formula, we can write $S$ as
$$S\approx k[N\log N-N-\sum_i(n_i\log n_i-n_i)]$$
We are going to neglect $N\log N -N$ from this entropy, so the form of entropy we are considering is
\begin{equation}
\label{eq:maxwell-boltzmann2}
S\approx -k\sum_i(n_i\log n_i-n_i)
\end{equation}
This amounts to dropping $N!$ from $W$. The reason for this seemingly mysterious step is to avoid the so-called Gibbs paradox. For details about the Gibbs paradox see, for example, [1] in the references at the end of this note.
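To get a feeling for how good the Stirling approximation is here, the following Python sketch (with arbitrary illustrative occupation numbers) compares the exact $\log W$ with $N\log N-N-\sum_i(n_i\log n_i-n_i)$.

```python
# Comparing the exact log W = log(N! * prod 1/n_i!) with the Stirling
# approximation N log N - N - sum(n_i log n_i - n_i), for hypothetical
# occupation numbers (illustrative only).
import math

occupations = [400, 300, 200, 100]   # hypothetical n_i
N = sum(occupations)

log_W_exact = math.lgamma(N + 1) - sum(math.lgamma(n + 1) for n in occupations)
log_W_stirling = (N * math.log(N) - N
                  - sum(n * math.log(n) - n for n in occupations))

print(f"exact    log W = {log_W_exact:.3f}")
print(f"Stirling log W = {log_W_stirling:.3f}")
```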
Let $\epsilon_i$ denote the energy of a single particle in box $i$. We have two conserved quantities that we want to keep fixed: the particle number $N=\sum_i n_i$ and the energy $U=\sum_i n_i\epsilon_i$. So, we add these constraints to $S$ with Lagrange multipliers $k\beta$ and $-k\beta\mu$:
$$S\approx k\left[-\sum_i(n_i\log n_i-n_i)\right]+k\beta\left(U-\sum_i n_i\epsilon_i\right)-k\beta\mu\left(N-\sum_i n_i\right)$$
$\frac{\partial S}{\partial n_i}=0$ gives $-\log n_i-\beta\epsilon_i+\beta\mu=0$, i.e. the critical point $n_i=e^{-\beta(\epsilon_i-\mu)}$. Since $\frac{\partial^2 S}{\partial n_i^2}=-\frac{k}{n_i}<0$, this is the value at which the probability and the entropy attain their maximum. It is called the Maxwell-Boltzmann distribution.
From the constraints, we obtain
\begin{align*} N&=\sum_i e^{-\beta(\epsilon_i-\mu)}\\ U&=\sum_i \epsilon_ie^{-\beta(\epsilon_i-\mu)} \end{align*}
Substituting $n_i=e^{-\beta(\epsilon_i-\mu)}$ in \eqref{eq:maxwell-boltzmann2}, the value of the entropy at the maximum is given by
\begin{equation}
\begin{aligned}
S&\approx k[\beta\sum_i\epsilon_i e^{-\beta(\epsilon_i-\mu)}-\beta\mu\sum_i e^{-\beta(\epsilon_i-\mu)}+\sum_i e^{-\beta(\epsilon_i-\mu)}]\\
&=k[\beta U-\beta\mu N+N]
\end{aligned}\label{eq:maxwell-boltzmann3}
\end{equation}
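The constrained maximization above can also be checked numerically. The following Python sketch (using scipy, with arbitrary illustrative energies $\epsilon_i$ and multipliers $\beta,\mu$ that are not from the text) maximizes $-\sum_i(n_i\log n_i-n_i)$ under the two constraints, then compares the maximizer with $e^{-\beta(\epsilon_i-\mu)}$ and the maximal $S/k$ with \eqref{eq:maxwell-boltzmann3}.

```python
# Numerical check of the constrained maximization (a sketch with arbitrary
# illustrative energies and multipliers): maximize -sum(n_i log n_i - n_i)
# subject to sum n_i = N and sum n_i eps_i = U, then compare the maximizer with
# exp(-beta*(eps_i - mu)) and the maximal S/k with beta*U - beta*mu*N + N.
import numpy as np
from scipy.optimize import minimize

eps = np.array([0.0, 1.0, 2.0, 3.0, 4.0])   # single-particle energies
beta, mu = 1.3, 2.0                          # chosen multipliers
n_mb = np.exp(-beta * (eps - mu))            # Maxwell-Boltzmann occupations
N, U = n_mb.sum(), (n_mb * eps).sum()        # the corresponding constraint values

def neg_entropy(n):                          # -(S/k), to be minimized
    return np.sum(n * np.log(n) - n)

constraints = [
    {"type": "eq", "fun": lambda n: n.sum() - N},
    {"type": "eq", "fun": lambda n: (n * eps).sum() - U},
]
result = minimize(neg_entropy, x0=np.full(len(eps), N / len(eps)),
                  bounds=[(1e-9, None)] * len(eps), constraints=constraints)

print("numerical maximizer    :", np.round(result.x, 4))
print("exp(-beta*(eps - mu))  :", np.round(n_mb, 4))
print("S/k at the maximum     :", -neg_entropy(result.x))
print("beta*U - beta*mu*N + N :", beta * U - beta * mu * N + N)
```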
We are going to determine $\mu$ and $\beta$. The single particle kinetic energy is $\epsilon=\frac{p^2}{2m}$. The summation covers all possible states of each particle. This means that we may replace the summation by an integration over momentum and position:
$$N\to e^{\beta\mu}\int d^3x\,d^3p\, e^{-\beta\frac{p^2}{2m}},\qquad U\to e^{\beta\mu}\int d^3x\,d^3p\, \frac{p^2}{2m}e^{-\beta\frac{p^2}{2m}}$$
However, note that the number of states cannot be given only by $d^3x d^3p$ because of its dimension. To make it dimensionless, we make the following quantum mechanical correction:
\begin{equation}
\label{eq:maxwell-boltzmann4}
\frac{d^3x d^3p}{h^3}=\frac{d^3 xd^3p}{(2\pi\hbar)^3}
\end{equation}
Recall that the Planck constant $h$ has the dimension length$\times$momentum.
\begin{equation}
\begin{aligned}
N&=\frac{e^{\beta\mu}}{h^3}\int d^3xd^3p e^{-\beta\frac{p^2}{2m}}\\
&=\frac{e^{\beta\mu}}{h^3}V\left(\frac{2m\pi}{\beta}\right)^{\frac{3}{2}},\\
U&=\frac{e^{\beta\mu}}{h^3}\int d^3xd^3p \frac{p^2}{2m}e^{-\beta\frac{p^2}{2m}}\\
&=\frac{e^{\beta\mu}}{h^3}V\frac{3}{2\beta}\left(\frac{2m\pi}{\beta}\right)^{\frac{3}{2}}
\end{aligned}\label{eq:maxwell-boltzmann5}
\end{equation}
The momentum integrals are standard Gaussian integrals, $\int_{-\infty}^\infty e^{-ax^2}dx=\sqrt{\frac{\pi}{a}}$ and $\int_{-\infty}^\infty x^2e^{-ax^2}dx=\frac{1}{2a}\sqrt{\frac{\pi}{a}}$, here with $a=\frac{\beta}{2m}$. From \eqref{eq:maxwell-boltzmann5}, we obtain
\begin{align*} \beta&=\frac{3N}{2U},\\ \beta\mu&=\log\left[\frac{Nh^3}{V}\left(\frac{\beta}{2\pi m}\right)^{\frac{3}{2}}\right]=\log\left[\frac{Nh^3}{V}\left(\frac{3N}{4\pi mU}\right)^{\frac{3}{2}}\right] \end{align*}
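The momentum integrals and the relation $\beta=\frac{3N}{2U}$ can also be verified numerically. The following Python sketch uses arbitrary illustrative values of $m$ and $\beta$; the common factor $e^{\beta\mu}V/h^3$ cancels in the ratio $U/N$.

```python
# Numerical check of the momentum integrals in \eqref{eq:maxwell-boltzmann5}
# (illustrative values of m and beta):
#   int d^3p exp(-beta p^2/2m)          = (2 pi m / beta)^{3/2}
#   int d^3p (p^2/2m) exp(-beta p^2/2m) = (3/(2 beta)) (2 pi m / beta)^{3/2}
import numpy as np
from scipy.integrate import quad

m, beta = 2.0, 0.7

# radial form of the 3D integrals: int d^3p f(|p|) = 4 pi int_0^inf p^2 f(p) dp
I_N, _ = quad(lambda p: 4*np.pi * p**2 * np.exp(-beta*p**2/(2*m)), 0, np.inf)
I_U, _ = quad(lambda p: 4*np.pi * p**2 * (p**2/(2*m)) * np.exp(-beta*p**2/(2*m)),
              0, np.inf)

print("N-integral:", I_N, "vs", (2*np.pi*m/beta)**1.5)
print("U-integral:", I_U, "vs", (3/(2*beta)) * (2*np.pi*m/beta)**1.5)
print("U/N       :", I_U / I_N, "vs 3/(2*beta) =", 3/(2*beta))
```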
Consequently, the entropy in \eqref{eq:maxwell-boltzmann3} can be written as
\begin{equation}
\label{eq:maxwell-boltzmann6}
S=kN\left[\frac{5}{2}+\log\left(\frac{V}{N}\right)+\frac{3}{2}\log\left(\frac{U}{N}\right)+\frac{3}{2}\log\left(\frac{4\pi m}{3h^2}\right)\right]
\end{equation}
Equation \eqref{eq:maxwell-boltzmann6} is known in statistical mechanics as the Sackur-Tetrode formula for the entropy of a classical ideal gas (we will shortly see its relationship with the ideal gas law). According to Huang's book [1], this formula has been experimentally verified as the correct entropy of an ideal gas at high temperatures.
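As a rough numerical illustration of \eqref{eq:maxwell-boltzmann6}, the following Python sketch evaluates $S/(Nk)$ for a monatomic gas with a helium-like atomic mass at $T=300\,\mathrm{K}$ and atmospheric pressure; $V/N=kT/p$ and $U/N=\frac{3}{2}kT$ use the ideal-gas results derived further below, and the chosen gas and conditions are only illustrative.

```python
# Rough numerical illustration of the Sackur-Tetrode formula: S/(Nk) for a
# monatomic gas with a helium-like atomic mass at T = 300 K and p = 101325 Pa.
import numpy as np

k = 1.380649e-23           # Boltzmann constant, J/K
h = 6.62607015e-34         # Planck constant, J s
m = 4.0026 * 1.66054e-27   # approximate mass of a helium atom, kg

T, p = 300.0, 101325.0
V_per_N = k * T / p        # volume per particle, from the ideal gas law
U_per_N = 1.5 * k * T      # energy per particle

S_per_Nk = (2.5 + np.log(V_per_N) + 1.5 * np.log(U_per_N)
            + 1.5 * np.log(4 * np.pi * m / (3 * h**2)))
print(f"S/(Nk) = {S_per_Nk:.2f}")   # dimensionless entropy per particle
```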

Differentiating $S$ in \eqref{eq:maxwell-boltzmann6} and solving for $dU$, we obtain
\begin{align*} dU&=\frac{\partial U}{\partial S}dS-\frac{\partial U}{\partial S}\frac{\partial S}{\partial V}dV-\frac{\partial U}{\partial S}\frac{\partial S}{\partial N}dN\\ &=\frac{1}{k\beta}dS-\frac{N}{\beta V}dV+\mu\,dN, \end{align*}
using $\frac{\partial S}{\partial U}=\frac{3kN}{2U}=k\beta$, $\frac{\partial S}{\partial V}=\frac{kN}{V}$, and $-\frac{\partial S/\partial N}{\partial S/\partial U}=\frac{1}{\beta}\left(\frac{5}{2}-\frac{S}{kN}\right)=\mu$, where the last equality follows from \eqref{eq:maxwell-boltzmann3} together with $\beta U=\frac{3}{2}N$.
Comparing this with
$$dU=TdS-pdV+\mu dN$$ from the first law of thermodynamics, we have
\begin{align*} \beta&=\frac{1}{kT},\\ p&=\frac{NkT}{V} \end{align*}
The second equation is the well-known ideal gas equation of state. The chemical potential $\mu$ and the internal energy $U$ can be expressed as functions of the temperature $T$ as
\begin{align*} \mu&=kT\log\left[\frac{h^3N}{V}\frac{1}{(2\pi mkT)^{\frac{3}{2}}}\right],\\ U&=\frac{3}{2}NkT \end{align*}

References:

  1. Kerson Huang, Statistical Mechanics, John Wiley & Sons, 1987
  2. V. P. Nair, Lectures on Thermodynamics and Statistical Mechanics

The Binomial Distribution

For a fair coin, if we toss it a very large number of times, we will get heads nearly half the time and tails nearly half the time. So we say that the probability of getting heads equals the probability of getting tails, both being $\frac{1}{2}$.

Now consider the simultaneous tossing of $N$ coins. For $N=2$, there are four possible outcomes: $HH$, $HT$, $TH$, and $TT$. The probabilities of getting 2 heads, 1 head, and no heads are, respectively, $\frac{1}{4}$, $\frac{2}{4}=\frac{1}{2}$, and $\frac{1}{4}$. The number of ways we can get $n$ heads (and consequently $N-n$ tails) is
$$\frac{N!}{n!(N-n)!}$$
The probability $p$ of getting $n$ heads is then
\begin{equation}
\label{eq:binomial}
p=\frac{N!}{n!(N-n)!2^N}
\end{equation}
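As a small check of \eqref{eq:binomial}, the following Python sketch enumerates all $2^N$ outcomes for a modest (illustrative) $N$ and compares the fraction with exactly $n$ heads to the formula.

```python
# Check of the fair-coin formula by brute force (illustrative N and n): enumerate
# all 2^N outcomes and compare the fraction with exactly n heads to
# N!/(n!(N-n)! 2^N).
import math
from itertools import product

N, n = 10, 4
outcomes = product("HT", repeat=N)
count = sum(1 for o in outcomes if o.count("H") == n)

enumeration = count / 2**N
formula = math.factorial(N) / (math.factorial(n) * math.factorial(N - n) * 2**N)
print(f"enumeration: {enumeration:.6f}, formula: {formula:.6f}")
```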
Let us consider
$$\ln p=\ln\frac{N!}{n!(N-n)!2^N}=\ln N!-\ln n!-\ln(N-n)!-N\ln 2$$
Using Stirling's formula
$$\ln N!\approx N\ln N-N$$
we obtain
$$\ln p\approx -N\left[\frac{n}{N}\ln\frac{n}{N}+\left(1-\frac{n}{N}\right)\ln\left(1-\frac{n}{N}\right)+\ln 2\right]$$
For very large $N$, $x=\frac{n}{N}$ can be treated as a continuous variable, and
$$\ln p\approx -N[x\ln x+(1-x)\ln(1-x)+\ln 2]$$
$\ln p$ has only one critical point, at $x=\frac{1}{2}$, and the second derivative of $\ln p$ there is negative ($-4N$). So $\ln p$, and consequently $p$, attains its maximum at $x=\frac{1}{2}$. Expanding $\ln p$ about $x=\frac{1}{2}$ to second order, we have
$$\ln p\approx -2N\left(x-\frac{1}{2}\right)^2$$
or
$$p\approx\exp\left[-2N\left(x-\frac{1}{2}\right)^2\right]$$
This is a Gaussian in $x$ centered at $x=\frac{1}{2}$, so the standard deviation of $x$ from $\frac{1}{2}$ is $\frac{1}{2\sqrt{N}}$.

Figure: $p\approx\exp\left[-2N\left(x-\frac{1}{2}\right)^2\right]$ with $N=11$.
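The following Python sketch compares the exact fair-coin probabilities with the Gaussian approximation for $N=11$, the value used in the figure above. Since the leading-order Stirling formula drops the $\sqrt{2\pi n}$ prefactors, the exact values are normalized to their peak and only the shape is compared.

```python
# Comparing the exact fair-coin probabilities with exp[-2N(x - 1/2)^2] for N = 11.
# The exact values are normalized to their peak, so only the shape is compared.
import math

N = 11
exact = [math.comb(N, n) / 2**N for n in range(N + 1)]
peak = max(exact)

for n in range(N + 1):
    x = n / N
    gauss = math.exp(-2 * N * (x - 0.5)**2)
    print(f"x = {x:.3f}   exact/peak = {exact[n] / peak:.3f}   gaussian = {gauss:.3f}")
```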

Thus far, we have considered equal probabilities. Suppose now that we have a coin for which the probability of heads is $q$, $0<q<1$. Then the probability of getting $n$ heads when $N$ coins are tossed is
\begin{equation}
\label{eq:binomial2}
\frac{N!}{n!(N-n)!}q^n(1-q)^{N-n}
\end{equation}
$q=\frac{1}{2}$ reduces \eqref{eq:binomial2} to \eqref{eq:binomial}. Note that the probability in \eqref{eq:binomial2} peaks at $x=q$, and the standard deviation of $x$ from $q$ is $\sqrt{\frac{q(1-q)}{N}}$, which again shrinks like $\frac{1}{\sqrt{N}}$.
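A quick numerical check of these two statements (with an arbitrary illustrative $q$ and $N$):

```python
# Check that the biased-coin distribution is centered at x = q with standard
# deviation sqrt(q(1-q)/N) (q and N are illustrative values).
import math

q, N = 0.3, 100
probs = [math.comb(N, n) * q**n * (1 - q)**(N - n) for n in range(N + 1)]

mean_x = sum((n / N) * p for n, p in enumerate(probs))
var_x = sum((n / N - mean_x)**2 * p for n, p in enumerate(probs))

print(f"mean of x      : {mean_x:.4f}   (q = {q})")
print(f"std of x       : {math.sqrt(var_x):.4f}")
print(f"sqrt(q(1-q)/N) : {math.sqrt(q * (1 - q) / N):.4f}")
```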

We now consider events with more than two outcomes. For example, rolling a die has six possible outcomes.
Suppose that we want to distribute $N$ particles into $K$ boxes in such a way that $n_i$ particles go into box $i$. How many ways are there to do this? There are $\frac{N!}{n_1!(N-n_1)!}$ ways to choose $n_1$ particles out of $N$ particles. Then there are $\frac{(N-n_1)!}{n_2!(N-n_1-n_2)!}$ ways to choose $n_2$ particles out of the remaining $N-n_1$ particles, and so on. Hence, the number of ways to distribute $n_1$ particles in box 1, $n_2$ particles in box 2, …, $n_K$ particles in box $K$ is
\begin{align*} \frac{N!}{n_1!(N-n_1)!}\frac{(N-n_1)!}{n_2!(N-n_1-n_2)!}&\frac{(N-n_1-n_2)!}{n_3!(N-n_1-n_2-n_3)!}\cdots\frac{(N-n_1-n_2-\cdots-n_{K-1})!}{n_K!}\\ &=\frac{N!}{n_1!n_2!\cdots n_K!}\\ &=N!\prod_{i=1}^K\frac{1}{n_i!}, \end{align*}
where $\sum_{i=1}^K n_i=N$.
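The counting can be verified directly for small numbers. The following Python sketch (with arbitrary illustrative $N$, $K$, and occupation numbers) enumerates all $K^N$ assignments of labelled particles to boxes.

```python
# Direct verification of the counting above for small numbers (illustrative
# N, K, and occupation numbers): enumerate all K^N assignments of N labelled
# particles to K boxes and count those giving the occupation (n_1, ..., n_K).
import math
from itertools import product
from collections import Counter

N, K = 7, 3
target = (3, 3, 1)   # n_1, n_2, n_3 with n_1 + n_2 + n_3 = N

count = 0
for assignment in product(range(K), repeat=N):
    counts = Counter(assignment)
    if tuple(counts.get(i, 0) for i in range(K)) == target:
        count += 1

formula = math.factorial(N)
for n_i in target:
    formula //= math.factorial(n_i)

print(f"enumeration: {count}, N!/(n_1! n_2! ... n_K!): {formula}")
```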