Legendre Functions I: A Physical Origin of Legendre Functions

Consider an electric charge $q$ placed on the $z$-axis at $z=a$.

Electric Potential

The electrostatic potential of charge $q$ is $$\varphi=\frac{1}{4\pi\epsilon_0}\frac{q}{r_1}.\ \ \ \ \ \mbox{(1)}$$ Using the Law of Cosines, one can write $r_1$ in terms of $r$ and $\theta$:
$$r_1=\sqrt{r^2+a^2-2ar\cos\theta}$$
and thereby the electrostatic potential (1) can be written as
$$\varphi=\frac{q}{4\pi\epsilon_0}(r^2+a^2-2ar\cos\theta)^{-1/2}.\ \ \ \ \ \mbox{(2)}$$

Recall the Binomial Expansion Formula: Suppose that $x,y\in\mathbb{R}$ with $|x|>|y|$, and let $r\in\mathbb{R}$. Then
$$(x+y)^r=\sum_{k=0}^\infty\begin{pmatrix}r\\k\end{pmatrix}x^{r-k}y^k,\ \ \ \ \ \mbox{(3)}$$ where $\begin{pmatrix}r\\k\end{pmatrix}=\frac{r(r-1)\cdots(r-k+1)}{k!}$ is the generalized binomial coefficient, which reduces to $\frac{r!}{k!(r-k)!}$ when $r$ is a nonnegative integer.

Legendre Polynomials: If $r>a$ (or more specifically $r^2>|a^2-2ar\cos\theta|$), we can expand the radical to obtain:
$$\varphi=\frac{q}{4\pi\epsilon_0 r}\sum_{n=0}^\infty P_n(\cos\theta)\left(\frac{a}{r}\right)^n.$$ The coefficients $P_n$ are called the Legendre polynomials. The Legendre polynomials can be defined by the generating function
$$g(t,x)=(1-2xt+t^2)^{-1/2}=\sum_{n=0}^\infty P_n(x)t^n,\ \ \ \ \ \mbox{(4)}$$ where $|t|<1$. Using the binomial expansion formula (3), we obtain
\begin{align*}
(1-2xt+t^2)^{-1/2}&=\sum_{n=0}^\infty\frac{(2n)!}{2^{2n}(n!)^2}(2xt-t^2)^n\ \ \ \ \ \mbox{(5)}\\
&=\sum_{n=0}^\infty\frac{(2n-1)!!}{(2n)!!}(2xt-t^2)^n.
\end{align*}
Let us write out the first three terms:
\begin{align*}
\frac{0!}{2^0(0!)^2}&(2xt-t^2)^0+\frac{2!}{2^2(1!)^2}(2xt-t^2)^1+\frac{4!}{2^4(2!)^2}(2xt-t^2)^2\\
&=1+xt+\left(\frac{3}{2}x^2-\frac{1}{2}\right)t^2+\mathcal{O}(t^3).
\end{align*}
Thus we see that $P_0(x)=1$, $P_1(x)=x$, and $P_2(x)=\frac{3}{2}x^2-\frac{1}{2}$. In practice, we don’t calculate Legendre polynomials using the power series (5). Instead, we use the recurrence relation of Legendre polynomials that will be discussed later.
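For reference, the recurrence in question is Bonnet's relation $(n+1)P_{n+1}(x)=(2n+1)xP_n(x)-nP_{n-1}(x)$. A short Python sketch (Python rather than Maxima, purely as an illustration) that generates exact coefficients this way:

```python
# Generate Legendre polynomials via Bonnet's recurrence:
#   (n+1) P_{n+1}(x) = (2n+1) x P_n(x) - n P_{n-1}(x)
# Each polynomial is a coefficient list [c0, c1, ...] (c_k multiplies x^k).
from fractions import Fraction

def legendre_coeffs(nmax):
    P = [[Fraction(1)], [Fraction(0), Fraction(1)]]   # P_0 = 1, P_1 = x
    for n in range(1, nmax):
        shifted = [Fraction(0)] + P[n]                # x * P_n
        nxt = [Fraction(2*n + 1, n + 1) * c for c in shifted]
        for k, c in enumerate(P[n - 1]):
            nxt[k] -= Fraction(n, n + 1) * c
        P.append(nxt)
    return P

P = legendre_coeffs(3)
print(P[2])   # coefficients [-1/2, 0, 3/2], i.e. P_2(x) = (3/2)x^2 - 1/2
```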

The Maxima name for the Legendre polynomial $P_n(x)$ is legendre_p(n,x). The following graphs of $P_2(x)$, $P_3(x)$, $P_4(x)$, $P_5(x)$ on $-1\leq x\leq 1$ are produced by Maxima using the command:

plot2d([legendre_p(2,x),legendre_p(3,x),legendre_p(4,x),legendre_p(5,x)],[x,-1,1]);

Legendre Polynomials

Now expand the polynomial $(2xt-t^2)^n$ in the power series (5):
\begin{align*}
(1-2xt+t^2)^{-1/2}&=\sum_{n=0}^\infty\frac{(2n)!}{2^{2n}(n!)^2}t^n\sum_{k=0}^n(-1)^k\frac{n!}{k!(n-k)!}(2x)^{n-k}t^k\\
&=\sum_{n=0}^\infty\sum_{k=0}^n(-1)^k\frac{(2n)!}{2^{2n}n!k!(n-k)!}(2x)^{n-k}t^{k+n}.\ \ \ \ \ \mbox{(6)}
\end{align*}
By rearranging the order of summation, (6) can be written as
$$(1-2xt+t^2)^{-1/2}=\sum_{n=0}^\infty\sum_{k=0}^{[n/2]}(-1)^k\frac{(2n-2k)!}{2^{2n-2k}k!(n-k)!(n-2k)!}(2x)^{n-2k}t^n,$$ where
$$\left[\frac{n}{2}\right]=\left\{\begin{array}{ccc}
\frac{n}{2} & \mbox{for} & n=\mbox{even}\\
\frac{n-1}{2} & \mbox{for} & n=\mbox{odd}.
\end{array}\right.$$

Hence,
$$P_n(x)=\sum_{k=0}^{[n/2]}(-1)^k\frac{(2n-2k)!}{2^{2n-2k}k!(n-k)!(n-2k)!}(2x)^{n-2k}.\ \ \ \ \ \mbox{(7)}$$
In practice, we hardly use the formula (7). Again, we use the recurrence relation of Legendre polynomials instead.
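Still, formula (7) is easy to evaluate and check against known values. A Python sketch (my own illustration):

```python
# Evaluate the closed form for Legendre polynomials:
#   P_n(x) = sum_{k=0}^{[n/2]} (-1)^k (2n-2k)! / (2^{2n-2k} k! (n-k)! (n-2k)!) (2x)^{n-2k}
from math import factorial

def P_closed(n, x):
    return sum((-1)**k * factorial(2*n - 2*k)
               / (2**(2*n - 2*k) * factorial(k) * factorial(n - k) * factorial(n - 2*k))
               * (2*x)**(n - 2*k)
               for k in range(n // 2 + 1))

print(P_closed(2, 0.5))   # P_2(1/2) = (3/2)(1/4) - 1/2 = -0.125
```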

Electric Dipole: The generating function (4) can be used for the electric multipole potential. Here we consider an electric dipole. Let us place electric charges $q$ and $-q$ at $z=a$ and $z=-a$, respectively.

Electric Dipole Potential

The electric dipole potential is given by
$$\varphi=\frac{q}{4\pi\epsilon_0}\left(\frac{1}{r_1}-\frac{1}{r_2}\right).\ \ \ \ \ \mbox{(8)}$$
The distance $r_2$ is written in terms of $r$ and $\theta$ using the Law of Cosines as
\begin{align*}
r_2^2&=r^2+a^2-2ar\cos(\pi-\theta)\\
&=r^2+a^2+2ar\cos\theta.
\end{align*}
So by the generating function (4), the electric dipole potential (8) can be written as
\begin{align*}
\varphi&=\frac{q}{4\pi\epsilon_0 r}\left\{\left[1-2\left(\frac{a}{r}\right)\cos\theta+\left(\frac{a}{r}\right)^2\right]^{-\frac{1}{2}}-\left[1+2\left(\frac{a}{r}\right)\cos\theta+\left(\frac{a}{r}\right)^2\right]^{-\frac{1}{2}}\right\}\\
&=\frac{q}{4\pi\epsilon_0 r}\left[\sum_{n=0}^\infty P_n(\cos\theta)\left(\frac{a}{r}\right)^n-\sum_{n=0}^\infty P_n(\cos\theta)(-1)^n\left(\frac{a}{r}\right)^n\right]\\
&=\frac{2q}{4\pi\epsilon_0 r}\left[P_1(\cos\theta)\left(\frac{a}{r}\right)+P_3(\cos\theta)\left(\frac{a}{r}\right)^3+\cdots\right]
\end{align*}
for $r>a$.

For $r\gg a$,
$$\varphi\approx\frac{2aq}{4\pi\epsilon_0}\frac{P_1(\cos\theta)}{r^2}=\frac{2aq}{4\pi\epsilon_0}\frac{\cos\theta}{r^2}.$$
This is the usual electric dipole potential. The quantity $2aq$ is called the dipole moment in electromagnetism.
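As a sanity check, the truncated odd-$n$ series can be compared with the exact $\frac{1}{r_1}-\frac{1}{r_2}$. A Python sketch (the overall factor $\frac{q}{4\pi\epsilon_0}$ is dropped; values of $a$, $r$, $\theta$ are arbitrary sample choices):

```python
# Check the multipole expansion of the dipole potential (prefactor dropped):
#   1/r1 - 1/r2  ==  (2/r) * sum over odd n of P_n(cos theta) (a/r)^n,   r > a
from math import sqrt, cos, pi

def legendre(n, x):
    p0, p1 = 1.0, x                    # Bonnet recurrence
    for k in range(1, n):
        p0, p1 = p1, ((2*k + 1)*x*p1 - k*p0) / (k + 1)
    return p0 if n == 0 else p1

a, r, theta = 1.0, 3.0, pi / 5
x = cos(theta)
r1 = sqrt(r**2 + a**2 - 2*a*r*x)
r2 = sqrt(r**2 + a**2 + 2*a*r*x)
exact = 1/r1 - 1/r2
series = (2/r) * sum(legendre(n, x) * (a/r)**n for n in range(1, 40, 2))
print(exact, series)   # the two agree to many digits
```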

Spherical Bessel Functions

When the Helmholtz equation is separated in spherical coordinates the radial equation has the form
$$r^2\frac{d^2R}{dr^2}+2r\frac{dR}{dr}+[k^2r^2-n(n+1)]R=0.\ \ \ \ \ \mbox{(1)}$$
The equation (1) looks similar to Bessel’s equation.  If we use the transformation $R(kr)=\frac{Z(kr)}{(kr)^{1/2}}$, (1) turns into Bessel’s equation
$$r^2\frac{d^2Z}{dr^2}+r\frac{dZ}{dr}+\left[k^2r^2-\left(n+\frac{1}{2}\right)^2\right]Z=0.\ \ \ \ \ \mbox{(2)}$$
Hence $Z(kr)=J_{n+\frac{1}{2}}(kr)$, the Bessel function of order $n+\frac{1}{2}$, where $n$ is an integer.

Spherical Bessel Functions: Spherical Bessel functions of the first kind and the second kind are defined by
\begin{align*}
j_n(x)&:=\sqrt{\frac{\pi}{2x}}J_{n+\frac{1}{2}}(x),\\
n_n(x)&:=\sqrt{\frac{\pi}{2x}}N_{n+\frac{1}{2}}(x)=(-1)^{n+1}\sqrt{\frac{\pi}{2x}}J_{-n-\frac{1}{2}}(x).
\end{align*}
Spherical Bessel functions $j_n(kr)$ and $n_n(kr)$ are two linearly independent solutions of the equation (1).

One can obtain power series representations of $j_n(x)$ and $n_n(x)$ using the Legendre duplication formula

$$z!\left(z+\frac{1}{2}\right)!=2^{-2z-1}\pi^{1/2}(2z+1)!$$ from $$J_{n+\frac{1}{2}}(x)=\sum_{s=0}^\infty\frac{(-1)^s}{s!\left(s+n+\frac{1}{2}\right)!}\left(\frac{x}{2}\right)^{2s+n+\frac{1}{2}}:$$
\begin{align*}
j_n(x)&=2^nx^n\sum_{s=0}^\infty\frac{(-1)^s(s+n)!}{s!(2s+2n+1)!}x^{2s},\\
n_n(x)&=(-1)^{n+1}\frac{2^n\pi^{1/2}}{x^{n+1}}\sum_{s=0}^\infty\frac{(-1)^s}{s!\left(s-n-\frac{1}{2}\right)!}\left(\frac{x}{2}\right)^{2s}\\
&=\frac{(-1)^{n+1}}{2^nx^{n+1}}\sum_{s=0}^\infty\frac{(-1)^s(s-n)!}{s!(2s-2n)!}x^{2s}.
\end{align*}
From these power series representations, we obtain
\begin{align*}
j_0(x)&=\frac{\sin x}{x}\left(=\sum_{s=0}^\infty\frac{(-1)^s}{(2s+1)!}x^{2s}\right)\\
n_0(x)&=-\frac{\cos x}{x}\\
j_1(x)&=\frac{\sin x}{x^2}-\frac{\cos x}{x}\\
n_1(x)&=-\frac{\cos x}{x^2}-\frac{\sin x}{x}.
\end{align*}
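These closed forms can be cross-checked against the defining series for $J_\nu$, with half-integer factorials handled by the gamma function. A Python sketch (the sample point $x=1.7$ is arbitrary):

```python
# Cross-check the closed forms j0, j1, n0, n1 against the definitions
#   j_n(x) = sqrt(pi/(2x)) J_{n+1/2}(x),  n_n(x) = (-1)^{n+1} sqrt(pi/(2x)) J_{-n-1/2}(x),
# with J_nu computed from its power series (z! = gamma(z+1)).
from math import sin, cos, sqrt, pi, gamma

def J(nu, x, terms=30):
    return sum((-1)**s / (gamma(s + 1) * gamma(s + nu + 1)) * (x/2)**(2*s + nu)
               for s in range(terms))

x = 1.7
j0 = sqrt(pi/(2*x)) * J(0.5, x)
n0 = -sqrt(pi/(2*x)) * J(-0.5, x)
j1 = sqrt(pi/(2*x)) * J(1.5, x)
n1 = sqrt(pi/(2*x)) * J(-1.5, x)        # (-1)^{1+1} = +1
print(j0 - sin(x)/x)                     # ~ 0
print(n1 - (-cos(x)/x**2 - sin(x)/x))    # ~ 0
```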
Orthogonality: Recall the orthogonality of Bessel functions
$$\int_0^aJ_\nu\left(\frac{\alpha_{\nu p}}{a}\rho\right)J_\nu\left(\frac{\alpha_{\nu q}}{a}\rho\right)\rho d\rho=\frac{a^2}{2}[J_{\nu+1}(\alpha_{\nu p})]^2\delta_{pq}$$ as discussed here. By a substitution, we obtain the orthogonality of spherical Bessel functions
$$\int_0^aj_n\left(\frac{\alpha_{np}}{a}\rho\right)j_n\left(\frac{\alpha_{nq}}{a}\rho\right)\rho^2 d\rho=\frac{a^3}{2}[j_{n+1}(\alpha_{np})]^2\delta_{pq},$$ where $\alpha_{np}$ and $\alpha_{nq}$ are roots of $j_n$.
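A quick numerical sanity check of this orthogonality relation for $n=0$ (whose positive zeros are $\alpha_{0p}=p\pi$), using a simple midpoint rule in Python:

```python
# Numerical check of spherical Bessel orthogonality on [0, a] for n = 0:
# the integral vanishes for p != q and equals (a^3/2) j_1(alpha_{0p})^2 for p = q.
from math import sin, cos, pi

def j0(x):
    return sin(x)/x

def j1(x):
    return sin(x)/x**2 - cos(x)/x

def integral(p, q, a, N=100000):         # midpoint rule with N subintervals
    h = a / N
    s = 0.0
    for i in range(N):
        rho = (i + 0.5) * h
        s += j0(p*pi*rho/a) * j0(q*pi*rho/a) * rho**2
    return s * h

a = 2.0
print(integral(1, 2, a))                       # ~ 0  (p != q)
print(integral(1, 1, a), a**3/2 * j1(pi)**2)   # the two agree
```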

Example: [Particle in a Sphere]

Let us consider a particle inside a sphere with radius $a$. The wave function that describes the state of the particle satisfies the Schrödinger equation
$$-\frac{\hbar^2}{2m}\nabla^2\psi=E\psi\ \ \ \ \ \mbox{(3)}$$
with boundary conditions:
\begin{align*}
&\psi(r\leq a)\ \mbox{is finite},\\
&\psi(a)=0.
\end{align*}
This corresponds to a potential $V=0$, $r\leq a$ and $V=\infty$, $r>a$. Here $m$ is the mass of the particle, $\hbar=\frac{h}{2\pi}$ is the reduced Planck constant (also called Dirac constant).
Note that (3) is the Helmholtz equation $\nabla^2\psi+k^2\psi=0$ with $k^2=\frac{2mE}{\hbar^2}$, whose radial part satisfies
$$\frac{d^2R}{dr^2}+\frac{2}{r}\frac{dR}{dr}+\left[k^2-\frac{n(n+1)}{r^2}\right]R=0.$$ Now we determine the minimum energy (zero-point energy) $E_{\mbox{min}}$. Since any angular dependence would increase the energy, we take $n=0$. The solution $R$ is given by
$$R(kr)=Aj_0(kr)+Bn_0(kr).$$ Since $n_0(kr)$ diverges at the origin, $B=0$. From the boundary condition $\psi(a)=0$, we have $R(a)=0$, i.e. $j_0(ka)=0$. Thus $ka=\frac{\sqrt{2mE}}{\hbar}a=\alpha$ must be a root of $j_0(x)$. The smallest such $\alpha$ is the first zero of $j_0(x)$, $\alpha=\pi$. Therefore,
\begin{align*}
E_{\mbox{min}}&=\frac{\hbar^2\alpha^2}{2ma^2}\\
&=\frac{\hbar^2\pi^2}{2ma^2}\\
&=\frac{h^2}{8ma^2},
\end{align*}
where $h$ is the Planck constant. This means that for any finite sphere, the particle will have a positive minimum energy (or zero-point energy).
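As an illustration, the first zero of $j_0$ and the resulting zero-point energy can be computed numerically. The Python sketch below uses electron-scale constants purely as an example (CODATA-style values, not from the text):

```python
# Locate the smallest positive root of j0(x) = sin(x)/x by bisection, then
# evaluate E_min = hbar^2 alpha^2 / (2 m a^2) for sample (electron-scale) constants.
from math import sin, pi

def j0(x):
    return sin(x)/x

lo, hi = 2.0, 4.0                  # j0(2) > 0 > j0(4), so a root lies between
for _ in range(60):                # bisection
    mid = (lo + hi) / 2
    if j0(lo) * j0(mid) <= 0:
        hi = mid
    else:
        lo = mid
alpha = (lo + hi) / 2
print(alpha)                       # ~ 3.14159..., i.e. pi

hbar = 1.054571817e-34             # J s (reduced Planck constant)
m = 9.1093837015e-31               # kg (electron mass, example choice)
a = 1e-10                          # m  (sphere radius ~ 1 angstrom, example choice)
Emin = hbar**2 * alpha**2 / (2*m*a**2)
print(Emin)                        # zero-point energy in joules
```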

Neumann Functions, Bessel Function of the Second Kind $N_\nu(x)$

Here, we have so far considered Bessel functions $J_\nu(x)$ only for integer $\nu$. Note that $J_\nu$ and $J_{-\nu}$ are linearly independent if $\nu$ is not an integer. If $\nu$ is an integer, $J_\nu$ and $J_{-\nu}$ satisfy the relation $J_{-\nu}=(-1)^\nu J_\nu$, i.e. they are no longer linearly independent. Thus we need a second solution of Bessel's equation.

Let us define
$$N_\nu(x):=\frac{\cos\nu\pi J_\nu(x)-J_{-\nu}(x)}{\sin\nu\pi}.$$
$N_\nu(x)$ is called Neumann function or Bessel function of the second kind. For $\nu=\mbox{integer}$, $N_\nu(x)$ is an indeterminate form of type $\frac{0}{0}$. So by l’Hôpital’s rule
\begin{align*}
N_n(x)&=\lim_{\nu\to n}\frac{\frac{\partial}{\partial\nu}[\cos\nu\pi J_\nu(x)-J_{-\nu}(x)]}{\frac{\partial}{\partial\nu}\sin\nu\pi}\\
&=\frac{1}{\pi}\lim_{\nu\to n}\left[\frac{\partial J_\nu(x)}{\partial\nu}-(-1)^n\frac{\partial J_{-\nu}(x)}{\partial\nu}\right].
\end{align*}
Neumann function can be also written as a power series:
\begin{align*}
N_n(x)=&\frac{2}{\pi}\left[\ln\left(\frac{x}{2}\right)+\gamma-\frac{1}{2}\sum_{p=1}^n\frac{1}{p}\right]J_n(x)\\
&-\frac{1}{\pi}\sum_{r=0}^\infty(-1)^r\frac{\left(\frac{x}{2}\right)^{n+2r}}{r!(n+r)!}\sum_{p=1}^r\left[\frac{1}{p}+\frac{1}{p+n}\right]\\
&-\frac{1}{\pi}\sum_{r=0}^{n-1}\frac{(n-r-1)!}{r!}\left(\frac{x}{2}\right)^{-n+2r},
\end{align*}
where $\gamma$ is the Euler–Mascheroni constant.
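The power series and the limit definition can be checked against each other numerically: below, $N_0(x)$ is computed both from the $n=0$ case of the series above and from the definition evaluated at a small non-integer order $\nu$ (a Python sketch, my own illustration):

```python
# Compare N_0(x) computed two ways:
#  (i) the n = 0 case of the logarithmic power series above, and
# (ii) the definition N_nu = (cos(nu pi) J_nu - J_{-nu}) / sin(nu pi) at small nu.
from math import cos, sin, log, pi, gamma

GAMMA = 0.5772156649015329            # Euler-Mascheroni constant

def J(nu, x, terms=30):
    return sum((-1)**s / (gamma(s + 1) * gamma(s + nu + 1)) * (x/2)**(2*s + nu)
               for s in range(terms))

def N0_series(x, terms=30):
    H, corr = 0.0, 0.0
    for r in range(1, terms):
        H += 1.0/r                    # harmonic number H_r
        corr += (-1)**r * (x/2)**(2*r) / gamma(r + 1)**2 * H
    return 2/pi * ((log(x/2) + GAMMA) * J(0, x) - corr)

def N0_limit(x, nu=1e-6):
    return (cos(nu*pi)*J(nu, x) - J(-nu, x)) / sin(nu*pi)

x = 1.0
print(N0_series(x), N0_limit(x))      # both ~ 0.0883 for x = 1
```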

In Maxima, the Neumann function is denoted by bessel_y(n,x). Let us plot $N_0(x)$, $N_1(x)$, $N_2(x)$ together using the command

plot2d([bessel_y(0,x),bessel_y(1,x),bessel_y(2,x)],[x,0.5,10]);

Neumann Functions

We now show that Neumann functions do satisfy Bessel’s differential equation. Differentiate Bessel’s equation
$$x^2\frac{d^2}{dx^2}J_{\pm\nu}(x)+x\frac{d}{dx}J_{\pm\nu}(x)+(x^2-\nu^2)J_{\pm\nu}(x)=0$$
with respect to $\nu$. (Here of course we are assuming that $\nu$ is a continuous variable.) Then we obtain the following equations:
\begin{align*}
x^2\frac{d^2}{dx^2}\frac{\partial J_\nu(x)}{\partial\nu}+x\frac{d}{dx}\frac{\partial J_\nu(x)}{\partial\nu}+(x^2-\nu^2)\frac{\partial J_\nu(x)}{\partial\nu}&=2\nu J_\nu(x)\ \ \ \ \ \mbox{(1)}\\
x^2\frac{d^2}{dx^2}\frac{\partial J_{-\nu}(x)}{\partial\nu}+x\frac{d}{dx}\frac{\partial J_{-\nu}(x)}{\partial\nu}+(x^2-\nu^2)\frac{\partial J_{-\nu}(x)}{\partial\nu}&=2\nu J_{-\nu}(x)\ \ \ \ \ \mbox{(2)}
\end{align*}
Subtract $\frac{1}{\pi}(-1)^n$ times (2) from $\frac{1}{\pi}$ times (1) and then take the limit of the resulting equation as $\nu\to n$. The right-hand side tends to $\frac{2n}{\pi}[J_n(x)-(-1)^nJ_{-n}(x)]=0$, since $J_{-n}=(-1)^nJ_n$. Hence $N_n$ satisfies Bessel's differential equation
$$x^2\frac{d^2}{dx^2}N_n(x)+x\frac{d}{dx}N_n(x)+(x^2-n^2)N_n(x)=0.$$
The general solution of Bessel’s differential equation is given by
$$y_n(x)=AJ_n(x)+BN_n(x).$$

Lie Group and Lie Algebra Representations

Given a matrix Lie group $G$, a representation $\Pi$ of $G$ is a Lie group homomorphism $\Pi: G\longrightarrow\mathrm{GL}(V)$, where $V$ is a finite dimensional vector space and the general linear group $\mathrm{GL}(V)$ is the set of all linear isomorphisms of $V$. For each $g\in G$, $\Pi(g): V\longrightarrow V$ is a linear operator on $V$.

If $\mathfrak{g}$ is a Lie algebra, a representation of $\mathfrak{g}$ is a Lie algebra homomorphism $\pi: \mathfrak{g}\longrightarrow\mathrm{gl}(V)$, where $\mathrm{gl}(V)$ is the Lie algebra of $\mathrm{GL}(V)$.

If $\Pi$ or $\pi$ is a one-to-one homomorphism, then the representation is called faithful.

One may understand a representation as the action of a Lie group or a Lie algebra on the vector space $V$.

Example. [Trivial Representation] Let $G$ be a matrix Lie group. Define the trivial representation of $G$ by $$\Pi: G\longrightarrow\mathrm{GL}(1;\mathbb{C});\ A\longmapsto I.$$ This is an irreducible representation since $\mathbb{C}$ has no nonzero proper subspace. If $\mathfrak{g}$ is a Lie algebra, the trivial representation of $\mathfrak{g}$, $\pi: \mathfrak{g}\longrightarrow\mathrm{gl}(1;\mathbb{C})$, is defined by $\pi(X)=0$ for all $X\in\mathfrak{g}$. This is also an irreducible representation.

Example. [The Adjoint Representation] Let $G$ be a matrix Lie group with Lie algebra $\mathfrak{g}$. The adjoint mapping $\mathrm{Ad}: G\longrightarrow\mathrm{GL}(\mathfrak{g})$ is defined by $$\mathrm{Ad}_A(X)=AXA^{-1}$$ for $A\in G$. We claim that $AXA^{-1}\in\mathfrak{g}$ for $A\in G$ and $X\in\mathfrak{g}$, so that $\mathrm{Ad}_A:\mathfrak{g}\longrightarrow\mathfrak{g}$. First note that for any invertible matrix $A$, $(AXA^{-1})^m=AX^mA^{-1}$. So, \begin{align*}
e^{AXA^{-1}}&=\sum_{m=0}^\infty\frac{(AXA^{-1})^m}{m!}\\
&=A\sum_{m=0}^\infty\frac{X^m}{m!}A^{-1}\\
&=Ae^XA^{-1}.
\end{align*}
Now for $A\in G$ and $X\in\mathfrak{g}$,
\begin{align*}
e^{tAXA^{-1}}&=e^{A\cdot tX\cdot A^{-1}}\\
&=Ae^{tX}A^{-1}\in G
\end{align*} and hence $AXA^{-1}\in\mathfrak{g}$. Note that $\mathrm{Ad}: G\longrightarrow\mathrm{GL}(\mathfrak{g})$ is a Lie group homomorphism. So $\mathrm{Ad}$ can be considered as a representation of $G$ acting on the Lie algebra $\mathfrak{g}$. We call $\mathrm{Ad}$ the adjoint representation of $G$. We can also define the adjoint representation of the Lie algebra $\mathfrak{g}$ as follows:
$$\mathrm{ad}:\mathfrak{g}\longrightarrow\mathrm{gl}(\mathfrak{g});\ \mathrm{ad}_X(Y)=[X,Y].$$ $\mathrm{ad}$ is a Lie algebra homomorphism.

Let $V$ be a finite dimensional real vector space. The complexification of $V$, denoted $V_{\mathbb{C}}$, is the space of formal linear combinations $v_1+iv_2$ with $v_1,v_2\in V$. This is again a real vector space. If we define $$i(v_1+iv_2)=-v_2+iv_1,$$ then $V_{\mathbb{C}}$ becomes a complex vector space. For example, the complexification $\mathfrak{su}(2)_{\mathbb{C}}$ of the Lie algebra $\mathfrak{su}(2)$ is $\mathfrak{sl}(2;\mathbb{C})$.

Some Representations of $\mathrm{SU}(2)$

Let $V_m$ be the space of homogeneous polynomials in two variables with total degree $m\geq 0$
$$f(z_1,z_2)=a_0z_1^m+a_1z_1^{m-1}z_2+a_2z_1^{m-2}z_2^2+\cdots+a_mz_2^m.$$ Then $V_m$ is an $(m+1)$-dimensional complex vector space. Define $\Pi_m:\mathrm{SU}(2)\longrightarrow\mathrm{GL}(V_m)$ by $$[\Pi_m(U)f](z)=f(U^{-1}z).$$
Let us write $U^{-1}=\begin{pmatrix}
(U^{-1})_{11} & (U^{-1})_{12}\\
(U^{-1})_{21} & (U^{-1})_{22}
\end{pmatrix}$. Then $U^{-1}z=\begin{pmatrix}
(U^{-1})_{11}z_1+(U^{-1})_{12}z_2\\
(U^{-1})_{21}z_1+(U^{-1})_{22}z_2
\end{pmatrix}$, where $z=\begin{pmatrix}z_1\\z_2\end{pmatrix}\in\mathbb{C}^2$. So $[\Pi_m(U)f](z_1,z_2)$ can be written as
$$[\Pi_m(U)f](z_1,z_2)=\sum_{k=0}^ma_k((U^{-1})_{11}z_1+(U^{-1})_{12}z_2)^{m-k}((U^{-1})_{21}z_1+(U^{-1})_{22}z_2)^k.$$
We now show that $\Pi_m$ is indeed a Lie group homomorphism: \begin{align*}
[\Pi_m(U_1)\Pi_m(U_2)f](z)&=[\Pi_m(U_2)f](U_1^{-1}z)\\
&=f(U_2^{-1}U_1^{-1}z)\\
&=f((U_1U_2)^{-1}z)\\
&=[\Pi_m(U_1U_2)f](z).
\end{align*} Therefore, $\Pi_m$ is a finite dimensional complex representation of $\mathrm{SU}(2)$. Note that each $\Pi_m$ is irreducible and that every finite dimensional irreducible representation of $\mathrm{SU}(2)$ is equivalent to one and only one of the $\Pi_m$’s.
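The homomorphism property is easy to verify numerically by composing the actions pointwise. A Python sketch with two sample $\mathrm{SU}(2)$ elements (test polynomial and evaluation point are arbitrary choices):

```python
# Pointwise check that Pi_m(U1) Pi_m(U2) f = Pi_m(U1 U2) f for the action
# [Pi_m(U) f](z) = f(U^{-1} z), using 2x2 complex matrices as nested lists.
import cmath

def mat_mul(A, B):
    return [[A[i][0]*B[0][j] + A[i][1]*B[1][j] for j in range(2)] for i in range(2)]

def mat_inv(U):                        # inverse of a 2x2 matrix
    d = U[0][0]*U[1][1] - U[0][1]*U[1][0]
    return [[U[1][1]/d, -U[0][1]/d], [-U[1][0]/d, U[0][0]/d]]

def act(U, f):                         # Pi(U): f -> f(U^{-1} z)
    Ui = mat_inv(U)
    return lambda z1, z2: f(Ui[0][0]*z1 + Ui[0][1]*z2, Ui[1][0]*z1 + Ui[1][1]*z2)

f = lambda z1, z2: z1**3 + 2*z1*z2**2          # homogeneous of degree m = 3
t = 0.7                                         # two sample SU(2) elements
U1 = [[cmath.exp(1j*t), 0], [0, cmath.exp(-1j*t)]]
c, s = cmath.cos(0.3), cmath.sin(0.3)
U2 = [[c, -s], [s, c]]

z = (1.2 + 0.5j, -0.4 + 1j)
lhs = act(U1, act(U2, f))(*z)
rhs = act(mat_mul(U1, U2), f)(*z)
print(abs(lhs - rhs))                  # ~ 0
```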

Now we compute the corresponding representation $\pi_m$ of the Lie algebra $\mathfrak{su}(2)$. $\pi_m$ can be computed as $$\pi_m(X)=\frac{d}{dt}\Pi_m(e^{tX})|_{t=0}.$$ So,
\begin{align*}
[\pi_m(X)f](z)&=\frac{d}{dt}[\Pi_m(e^{tX})f](z)|_{t=0}\\
&=\frac{d}{dt}f(e^{-tX}z)|_{t=0}.
\end{align*} Let $z(t)$ be a curve in $\mathbb{C}^2$ defined as $z(t)=e^{-tX}z$, so that $z(0)=z$. Write $z(t)=(z_1(t),z_2(t))$, where $z_i(t)\in\mathbb{C}$, $i=1,2$. By the chain rule, \begin{align*}
\pi_m(X)f&=\frac{\partial f}{\partial z_1}\frac{dz_1}{dt}|_{t=0}+\frac{\partial f}{\partial z_2}\frac{dz_2}{dt}|_{t=0}\\
&=-\frac{\partial f}{\partial z_1}(X_{11}z_1+X_{12}z_2)-\frac{\partial f}{\partial z_2}(X_{21}z_1+X_{22}z_2),\ \ \ \ \ \mbox{(1)}\end{align*}
since $\frac{dz}{dt}|_{t=0}=-Xz$.

Every finite dimensional complex representation of the Lie algebra $\mathfrak{su}(2)$ extends uniquely to a complex linear representation of the complexification of $\mathfrak{su}(2)$ and the complexification of $\mathfrak{su}(2)$ is isomorphic to $\mathfrak{sl}(2;\mathbb{C})$. Thus, the representation $\pi_m$ of $\mathfrak{su}(2)$ extends to a representation of $\mathfrak{sl}(2;\mathbb{C})$. Note that the Lie algebra $\mathfrak{sl}(2;\mathbb{C})$ is the set of all $2\times 2$ trace-free complex matrices, i.e. matrices of the form $\begin{pmatrix}\alpha & \beta\\\gamma & -\alpha\end{pmatrix}$ where $\alpha$, $\beta$ and $\gamma$ are complex numbers. So any element in $\mathfrak{sl}(2;\mathbb{C})$ can be uniquely written as $\alpha H+\beta X+\gamma Y$, where $H=\begin{pmatrix}
1 & 0\\
0 & -1
\end{pmatrix}$, $X=\begin{pmatrix}
0 & 1\\
0 & 0
\end{pmatrix}$, $Y=\begin{pmatrix}
0 & 0\\
1 & 0
\end{pmatrix}$.
Let us calculate $\pi_m$ for the basis members $H$, $X$, and $Y$.
$$(\pi_m(H)f)(z)=-\frac{\partial f}{\partial z_1}z_1+\frac{\partial f}{\partial z_2}z_2$$ so
$$\pi_m(H)=-z_1\frac{\partial}{\partial z_1}+z_2\frac{\partial}{\partial z_2}.$$
Applying $\pi_m(H)$ to a basis element $z_1^kz_2^{m-k}$, we obtain
\begin{align*}
\pi_m(H)z_1^kz_2^{m-k}&=-kz_1^kz_2^{m-k}+(m-k)z_1^kz_2^{m-k}\\
&=(m-2k)z_1^kz_2^{m-k}.
\end{align*}
This means that $z_1^kz_2^{m-k}$ is an eigenvector for $\pi_m(H)$ with eigenvalue $m-2k$. In particular, $\pi_m(H)$ is diagonalizable. Using (1) again we also obtain
$$\pi_m(X)=-z_2\frac{\partial}{\partial z_1},\ \pi_m(Y)=-z_1\frac{\partial}{\partial z_2}$$
and
\begin{align*}
\pi_m(X)z_1^kz_2^{m-k}&=-kz_1^{k-1}z_2^{m-k+1},\ \ \ \ \ \mbox{(2)}\\
\pi_m(Y)z_1^kz_2^{m-k}&=(k-m)z_1^{k+1}z_2^{m-k-1}.\ \ \ \ \ \mbox{(3)}
\end{align*}
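Formulas (2), (3) and the eigenvalue relation for $\pi_m(H)$ give explicit $(m+1)\times(m+1)$ matrices, and one can check that they satisfy the $\mathfrak{sl}(2;\mathbb{C})$ bracket relations $[H,X]=2X$, $[H,Y]=-2Y$, $[X,Y]=H$. A Python sketch for $m=4$ (my own illustration):

```python
# Matrices of pi_m(H), pi_m(X), pi_m(Y) in the basis {z1^k z2^{m-k}}, k = 0..m,
# then a check of the sl(2,C) brackets in the representation.
m = 4
dim = m + 1

def zeros():
    return [[0]*dim for _ in range(dim)]

H, X, Y = zeros(), zeros(), zeros()
for k in range(dim):
    H[k][k] = m - 2*k          # pi_m(H) z1^k z2^{m-k} = (m-2k) z1^k z2^{m-k}
    if k >= 1:
        X[k-1][k] = -k         # pi_m(X) z1^k z2^{m-k} = -k z1^{k-1} z2^{m-k+1}
    if k <= m - 1:
        Y[k+1][k] = k - m      # pi_m(Y) z1^k z2^{m-k} = (k-m) z1^{k+1} z2^{m-k-1}

def mul(A, B):
    return [[sum(A[i][r]*B[r][j] for r in range(dim)) for j in range(dim)] for i in range(dim)]

def sub(A, B):
    return [[A[i][j] - B[i][j] for j in range(dim)] for i in range(dim)]

def scale(c, A):
    return [[c*A[i][j] for j in range(dim)] for i in range(dim)]

print(sub(mul(H, X), mul(X, H)) == scale(2, X))    # [H, X] acts as 2X
print(sub(mul(H, Y), mul(Y, H)) == scale(-2, Y))   # [H, Y] acts as -2Y
print(sub(mul(X, Y), mul(Y, X)) == H)              # [X, Y] acts as H
```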

Proposition. The representation $\pi_m$ is an irreducible representation of $\mathfrak{sl}(2;\mathbb{C})$.

Proof. Suppose that $W$ is a nonzero invariant subspace of $V_m$. We claim that $W=V_m$. Since $W\ne \{0\}$, there exists $w\in W$ with $w\ne 0$. $w$ can be uniquely written as
$$w=a_0z_1^m+a_1z_1^{m-1}z_2+a_2z_1^{m-2}z_2^2+\cdots+a_mz_2^m$$ with at least one of the $a_k$’s nonzero. Let $k_0$ be the smallest value of $k$ for which $a_k\ne 0$ and consider $\pi_m(X)^{m-k_0}w$. Since $\pi_m(X)$ lowers the power of $z_1$ by 1, $\pi_m(X)^{m-k_0}w$ will kill all the terms in $w$ except $a_{k_0}z_1^{m-k_0}z_2^{k_0}$. On the other hand,
$$\pi_m(X)^{m-k_0}(z_1^{m-k_0}z_2^{k_0})=(-1)^{m-k_0}(m-k_0)!z_2^m.$$ Since $\pi_m(X)^{m-k_0}w$ is a nonzero multiple of $z_2^m$ and $W$ is invariant, $W$ must contain $z_2^m$. It follows from (3) that $\pi_m(Y)^kz_2^m$ is a nonzero multiple of $z_1^kz_2^{m-k}$ for $0<k\leq m$. Hence, $W$ must contain $z_1^kz_2^{m-k}$ for $0<k\leq m$. Since $W$ contains all the basis members $z_1^kz_2^{m-k}$, $0\leq k\leq m$, of $V_m$, we conclude that $W=V_m$.

The Lie Algebra of the Orthogonal Group $\mathrm{O}(n)\ (\mathrm{SO}(n))$

It can be easily shown that
$${\rm SO}(2)=\left\{\left(\begin{array}{cc}
\cos\theta & -\sin\theta\\
\sin\theta & \cos\theta
\end{array}
\right): \theta\in[0,2\pi)\right\}\cong{\rm S}^1=\{e^{i\theta}:
\theta\in[0,2\pi)\}.$$Let $\gamma(t)=\left(\begin{array}{cc}
\cos\theta(t) & -\sin\theta(t)\\
\sin\theta(t) & \cos\theta(t)
\end{array}
\right)\in\mathrm{SO}(2)$ with $\theta(0)=0$ and $\dot\theta(0)\ne 0$. Then $\gamma(t)$ is a differentiable (regular) curve in ${\rm SO}(2)$ such that
$\gamma(0)=I$. Thus
$$\dot{\gamma}(0)=\left(\begin{array}{cc}
0 & -1\\
1 & 0
\end{array}\right)\left(\frac{d\theta}{dt}\right)_{t=0}$$
is a tangent vector to $\mathrm{SO}(2)$ at the identity $I$. Hence, the tangent space of ${\rm SO}(2)$ at $I$ is a line i.e. ${\rm SO}(2)$ is a one-dimensional Lie group. (We already know that ${\rm SO}(2)$ is a one-dimensional Lie group since it is identified with the unit circle ${\rm S}^1$.)

Remark. $\dot\gamma(0)=\left(\begin{array}{cc}
0 & -1\\
1 & 0
\end{array}\right)$ is a skew-symmetric matrix, i.e., $\dot\gamma(0)+{}^t\dot\gamma(0)=0$.

Let $\gamma: (-\epsilon,\epsilon)\longrightarrow{\rm O}(n)$ be a differentiable curve such that $\gamma(0)=I$. Then $\dot{\gamma}(0)$ is a tangent vector to ${\rm O}(n)$ at $I$. Since $\gamma(t)\in{\rm O}(n)$, $$\gamma(t)\cdot{}^t\gamma(t)=I$$ for each $t\in(-\epsilon,\epsilon)$. Thus,
$$\dot{\gamma}(0)\cdot{}^t\gamma(0)+\gamma(0)\cdot\dot{{}^t\gamma}(0)=0.$$ Since ${}^t\gamma(0)=\gamma(0)=I$, $$\dot{\gamma}(0)+\dot{{}^t\gamma}(0)=\dot{\gamma}(0)+{}^t\dot{\gamma}(0)=0.$$ Hence, we see that any tangent vector to ${\rm O}(n)$ at $I$ is represented as a skew-symmetric $n\times n$ matrix. Conversely, we want to show that every skew-symmetric $n\times n$ matrix is a tangent vector to ${\rm O}(n)$ at $I$.

Suppose that $A$ is an $n\times n$ skew-symmetric matrix. As discussed here,
$$e^{At}=I+At+\frac{(At)^2}{2!}+\cdots+\frac{(At)^n}{n!}+\cdots=I+At+\frac{A^2}{2!}t^2+\cdots+\frac{A^n}{n!}t^n+\cdots$$
is an $n\times n$ matrix.

If $AB=BA$, then by the Cauchy product of the two series,
$$\left(\sum_{k=0}^\infty\frac{A^k}{k!}\right)\left(\sum_{l=0}^\infty\frac{B^l}{l!}\right)=\sum_{m=0}^\infty\sum_{p=0}^m\frac{A^{m-p}B^p}{(m-p)!p!}=\sum_{m=0}^\infty\frac{(A+B)^m}{m!}.$$ This implies that $e^Ae^B=e^{A+B}$ whenever $AB=BA$. In particular, $e^{A}e^{-A}=e^0=I$, so $e^A$ is non-singular. If $A$ is skew-symmetric, then ${}^t(e^{At})=e^{{}^tAt}=e^{-At}$ and so $e^{At}\cdot{}^t(e^{At})=I$, i.e., $e^{At}\in{\rm O}(n)$. Now, $\displaystyle\frac{de^{At}}{dt}=Ae^{At}$, so $\displaystyle\frac{de^{At}}{dt}\Big|_{t=0}=A$, i.e., the skew-symmetric matrix $A$ is a tangent vector to ${\rm O}(n)$ at $I$.

Proposition. The tangent space of ${\rm O}(n)$ or ${\rm SO}(n)$ at $I$ is the set of all $n\times n$ skew-symmetric matrices. Denote by ${\mathfrak o}(n)$ (${\mathfrak s\mathfrak o}(n)$) the tangent space of ${\rm O}(n)$ (${\rm SO}(n)$, respectively) at $I$. Note that $\dim{\mathfrak o}(n)=\displaystyle\frac{1}{2}n(n-1)$: a skew-symmetric matrix has zeros on the diagonal and is determined by its $\frac{1}{2}n(n-1)$ entries above the diagonal.

Definition. The tangent space ${\mathfrak o}(n)$ (${\mathfrak s\mathfrak o}(n)$) to the Lie group ${\rm O}(n)$ (${\rm SO}(n)$, respectively) at $I$ is called the Lie algebra of ${\rm O}(n)$ (${\rm SO}(n)$, respectively).