From here, we continue to consider the problem of distributing $N$ particles into $K$ boxes. Assume that every box is equally likely to receive a given particle, i.e. the probability of a particle going into box $i$ is the same for all $i$; call this common probability $p$ (so $p=1/K$). Then the probability of distributing $n_1$ particles in box 1, $n_2$ particles in box 2, \dots, $n_K$ particles in box $K$ is
\begin{equation}
\begin{aligned}
P&=N!\prod_{i=1}^K\frac{1}{n_i!}\,p^{n_1}p^{n_2}\cdots p^{n_K}\\
&=N!\prod_{i=1}^K\frac{1}{n_i!}\,p^N
\end{aligned}\label{eq:maxwell-boltzmann}
\end{equation}
We want to find the distribution of particles into different boxes by maximizing the probability \eqref{eq:maxwell-boltzmann}. Since $p^N$ is constant, maximizing the probability is the same as maximizing $W:=N!\prod_{i=1}^K\frac{1}{n_i!}$. Boltzmann defined entropy corresponding to a distribution of particles by
$$S=k\log W$$
where $k$ is the Boltzmann constant. By Stirling's formula, we can write $S$ as
$$S\approx k[N\log N-N-\sum_i(n_i\log n_i-n_i)]$$
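Stirling's formula gets better as $N$ grows, which is why it is safe for macroscopic particle numbers. A quick numerical illustration (the sample values of $N$ are chosen here, not taken from the note), comparing $\log N!$ with the leading Stirling terms $N\log N-N$:

```python
import math

# Compare log(N!) with Stirling's leading terms N log N - N.
# Sample values of N chosen for illustration only.
errors = []
for N in [10, 100, 1000]:
    exact = math.lgamma(N + 1)          # log(N!)
    stirling = N * math.log(N) - N
    errors.append(abs(exact - stirling) / exact)
print(errors)  # relative error shrinks as N grows
```

The relative error already falls below a tenth of a percent at $N=1000$, and is utterly negligible at $N\sim 10^{23}$.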
We are going to drop $N\log N -N$ from this entropy, so the form of entropy we are considering is
\begin{equation}
\label{eq:maxwell-boltzmann2}
S\approx -k\sum_i(n_i\log n_i-n_i)
\end{equation}
This amounts to dropping $N!$ from $W$. The reason for this mysterious step is to avoid the so-called Gibbs paradox. For details about the Gibbs paradox see, for example, [1] in the references at the end of this note.
Let $\epsilon_i$ denote the single particle energy. We have two conserved quantities that we want to keep fixed: the particle number $N=\sum_i n_i$ and the energy $U=\sum_i n_i\epsilon_i$. So, we add the constraint terms, with Lagrange multipliers $k\beta$ and $k\beta\mu$, to $S$:
$$S\approx k[-\sum_i(n_i\log n_i-n_i)]+k\beta(U-\sum_i n_i\epsilon_i)-k\beta\mu (N-\sum_i n_i)$$
Setting $\frac{\partial S}{\partial n_i}=-k\log n_i-k\beta\epsilon_i+k\beta\mu=0$ results in the critical point $n_i=e^{-\beta(\epsilon_i-\mu)}$. This is the value at which the probability, and hence the entropy, attains its maximum. This is called the Maxwell-Boltzmann distribution.
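The stationarity condition can be checked numerically. A minimal sketch (the values of $\beta$, $\mu$, and the sample energies below are assumptions for illustration, not from the note), verifying by finite differences that $n_i=e^{-\beta(\epsilon_i-\mu)}$ zeroes the derivative of the $n_i$-dependent part of $S/k$ with the constraint terms included:

```python
import math

# f(n) collects the n_i-dependent terms of S/k for a single box:
# f(n) = -(n log n - n) - beta*eps*n + beta*mu*n.
beta, mu = 2.0, -0.5           # arbitrary multiplier values (assumed)
eps = [0.1, 0.7, 1.3]          # sample single-particle energies (assumed)

def f(n, e):
    return -(n * math.log(n) - n) - beta * e * n + beta * mu * n

h = 1e-6
grads = []
for e in eps:
    n_star = math.exp(-beta * (e - mu))        # claimed critical point
    grads.append((f(n_star + h, e) - f(n_star - h, e)) / (2 * h))
print(grads)  # each entry is ~0
```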
From the constraints, we obtain
\begin{align*} N&=\sum_i e^{-\beta(\epsilon_i-\mu)}\\ U&=\sum_i \epsilon_ie^{-\beta(\epsilon_i-\mu)} \end{align*}
Substituting $n_i=e^{-\beta(\epsilon_i-\mu)}$ in \eqref{eq:maxwell-boltzmann2}, the value of the entropy at the maximum is given by
\begin{equation}
\begin{aligned}
S&\approx k[\beta\sum_i\epsilon_i e^{-\beta(\epsilon_i-\mu)}-\beta\mu\sum_i e^{-\beta(\epsilon_i-\mu)}+\sum_i e^{-\beta(\epsilon_i-\mu)}]\\
&=k[\beta U-\beta\mu N+N]
\end{aligned}\label{eq:maxwell-boltzmann3}
\end{equation}
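The substitution leading to this closed form is a pure algebraic identity, and can be sanity-checked numerically. A small sketch (arbitrary sample values for $\beta$, $\mu$, and the energies; units with $k=1$):

```python
import math

# Check that substituting n_i = exp(-beta*(eps_i - mu)) into
# S = -sum(n_i log n_i - n_i) reproduces S = beta*U - beta*mu*N + N
# (in units with k = 1; sample values below are assumptions).
beta, mu = 1.5, -0.3
eps = [0.2, 0.5, 0.9, 1.4]

n = [math.exp(-beta * (e - mu)) for e in eps]
N = sum(n)
U = sum(e * ni for e, ni in zip(eps, n))

S_direct = -sum(ni * math.log(ni) - ni for ni in n)
S_closed = beta * U - beta * mu * N + N
print(abs(S_direct - S_closed))  # ~0 up to rounding
```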
We are going to determine $\mu$ and $\beta$. The single particle kinetic energy is $\epsilon=\frac{p^2}{2m}$. The summation covers all possible states of each particle. This means that we may replace the summation by an integration over momentum and position:
$$N\to e^{\beta\mu}\int d^3xd^3p\, e^{-\beta\frac{p^2}{2m}},\ U\to e^{\beta\mu}\int d^3xd^3p\, \frac{p^2}{2m}e^{-\beta\frac{p^2}{2m}}$$
However, note that the number of states cannot be given only by $d^3x d^3p$ because of its dimension. To make it dimensionless, we make the following quantum mechanical correction:
\begin{equation}
\label{eq:maxwell-boltzmann4}
\frac{d^3x d^3p}{h^3}=\frac{d^3 xd^3p}{(2\pi\hbar)^3}
\end{equation}
Recall that the Planck constant $h$ has the dimension length$\times$momentum.
\begin{equation}
\begin{aligned}
N&=\frac{e^{\beta\mu}}{h^3}\int d^3xd^3p e^{-\beta\frac{p^2}{2m}}\\
&=\frac{e^{\beta\mu}}{h^3}V\left(\frac{2m\pi}{\beta}\right)^{\frac{3}{2}},\\
U&=\frac{e^{\beta\mu}}{h^3}\int d^3xd^3p \frac{p^2}{2m}e^{-\beta\frac{p^2}{2m}}\\
&=\frac{e^{\beta\mu}}{h^3}V\frac{3}{2\beta}\left(\frac{2m\pi}{\beta}\right)^{\frac{3}{2}}
\end{aligned}\label{eq:maxwell-boltzmann5}
\end{equation}
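The momentum integrals in \eqref{eq:maxwell-boltzmann5} factorize into one-dimensional Gaussian integrals, one per component. A numerical sketch checking the closed forms (the values of $\beta$, $m$, $V$ are arbitrary assumptions, and we set $h=1$, $e^{\beta\mu}=1$ since these factors cancel in the check):

```python
import math

# Check the closed forms of the momentum integrals numerically.
beta, m, V = 1.2, 0.8, 1.0     # arbitrary sample values (assumed)
a = beta / (2 * m)

# 1D Gaussian integrals by a midpoint Riemann sum; the 3D integrals
# factorize into products of these.
dp, L = 1e-3, 20.0
ps = [-L + (i + 0.5) * dp for i in range(int(2 * L / dp))]
I0 = sum(math.exp(-a * p * p) for p in ps) * dp          # ∫ e^{-a p²} dp
I2 = sum(p * p * math.exp(-a * p * p) for p in ps) * dp  # ∫ p² e^{-a p²} dp

# Closed forms from the text (h = 1, e^{beta*mu} = 1):
N_closed = V * (2 * math.pi * m / beta) ** 1.5
U_closed = V * (3 / (2 * beta)) * (2 * math.pi * m / beta) ** 1.5

N_num = V * I0 ** 3
U_num = V * 3 * I2 * I0 ** 2 / (2 * m)  # p² = px²+py²+pz²: three equal terms
print(abs(N_num - N_closed), abs(U_num - U_closed))  # both ~0
```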
The integrals over each momentum component are standard Gaussian integrals, $\int_{-\infty}^{\infty}e^{-ax^2}\,dx=\sqrt{\frac{\pi}{a}}$ and $\int_{-\infty}^{\infty}x^2e^{-ax^2}\,dx=\frac{1}{2a}\sqrt{\frac{\pi}{a}}$. From \eqref{eq:maxwell-boltzmann5}, we obtain
\begin{align*} \beta&=\frac{3N}{2U},\\ \beta\mu&=\log\left[\frac{Nh^3}{V}\left(\frac{\beta}{2\pi m}\right)^{\frac{3}{2}}\right]=\log\left[\frac{Nh^3}{V}\left(\frac{3N}{4\pi mU}\right)^{\frac{3}{2}}\right] \end{align*}
Consequently, the entropy in \eqref{eq:maxwell-boltzmann3} can be written as
\begin{equation}
\label{eq:maxwell-boltzmann6}
S=kN\left[\frac{5}{2}+\log\left(\frac{V}{N}\right)+\frac{3}{2}\log\left(\frac{U}{N}\right)+\frac{3}{2}\log\left(\frac{4\pi m}{3h^2}\right)\right]
\end{equation}
\eqref{eq:maxwell-boltzmann6} is known in statistical mechanics as the Sackur-Tetrode formula for the entropy of a classical ideal gas (we will shortly derive the ideal gas law from it). According to Huang's book [1], this formula has been experimentally verified as the correct entropy of an ideal gas at high temperatures.
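As an algebra check, substituting $\beta=\frac{3N}{2U}$ and the expression for $\beta\mu$ into $S=k[\beta U-\beta\mu N+N]$ should reproduce \eqref{eq:maxwell-boltzmann6} identically. A minimal numerical sketch (arbitrary sample state; units with $k=h=1$):

```python
import math

# Verify that k[beta*U - beta*mu*N + N] equals the Sackur-Tetrode
# expression, with k = h = 1 and an arbitrary sample state (assumed).
N, V, U, m = 2.0, 3.0, 5.0, 0.7

beta = 3 * N / (2 * U)
beta_mu = math.log((N / V) * (3 * N / (4 * math.pi * m * U)) ** 1.5)

S1 = beta * U - beta_mu * N + N
S2 = N * (2.5 + math.log(V / N) + 1.5 * math.log(U / N)
          + 1.5 * math.log(4 * math.pi * m / 3))
print(abs(S1 - S2))  # ~0 up to rounding
```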
Differentiating $S$ in \eqref{eq:maxwell-boltzmann6}, we obtain
\begin{align*} dU&=\frac{\partial U}{\partial S}dS-\frac{\partial U}{\partial S}\frac{\partial S}{\partial V}dV-\frac{\partial U}{\partial S}\frac{\partial S}{\partial N}dN\\ &=\frac{1}{k\beta}dS-\frac{N}{\beta V}dV+\mu\, dN \end{align*}
Comparing this with
$$dU=TdS-pdV+\mu dN$$ from the first law of thermodynamics, we have
\begin{align*} \beta&=\frac{1}{kT},\\ p&=\frac{NkT}{V} \end{align*}
The second equation is the well-known ideal gas equation of state. The chemical potential $\mu$ and the internal energy $U$ can be expressed as functions of the temperature $T$ as
\begin{align*} \mu&=kT\log\left[\frac{h^3N}{V}\frac{1}{(2\pi mkT)^{\frac{3}{2}}}\right],\\ U&=\frac{3}{2}NkT \end{align*}
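The thermodynamic derivatives used above can be verified directly from the Sackur-Tetrode formula by finite differences: $\frac{\partial S}{\partial U}=\frac{1}{T}=\frac{3N}{2U}$ and $\frac{\partial S}{\partial V}=\frac{p}{T}=\frac{N}{V}$ (in units with $k=h=1$). A small sketch with an arbitrary sample state (assumed values):

```python
import math

# Finite-difference check of dS/dU = 3N/(2U) and dS/dV = N/V for the
# Sackur-Tetrode entropy, with k = h = 1 and an arbitrary sample state.
N, V, U, m = 2.0, 3.0, 5.0, 0.7

def S(U, V, N):
    return N * (2.5 + math.log(V / N) + 1.5 * math.log(U / N)
                + 1.5 * math.log(4 * math.pi * m / 3))

h = 1e-6
dS_dU = (S(U + h, V, N) - S(U - h, V, N)) / (2 * h)
dS_dV = (S(U, V + h, N) - S(U, V - h, N)) / (2 * h)
print(dS_dU - 3 * N / (2 * U), dS_dV - N / V)  # both ~0
```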
References:

[1] Kerson Huang, Statistical Mechanics, John Wiley & Sons, 1987

[2] V. P. Nair, Lectures on Thermodynamics and Statistical Mechanics