Lie Group and Lie Algebra Representations

Given a matrix Lie group $G$, a representation $\Pi$ of $G$ is a Lie group homomorphism $\Pi: G\longrightarrow\mathrm{GL}(V)$, where $V$ is a finite dimensional vector space and the general linear group $\mathrm{GL}(V)$ is the group of all linear isomorphisms of $V$. For each $g\in G$, $\Pi(g): V\longrightarrow V$ is an invertible linear operator on $V$.

If $\mathfrak{g}$ is a Lie algebra, a representation of $\mathfrak{g}$ is a Lie algebra homomorphism $\pi: \mathfrak{g}\longrightarrow\mathrm{gl}(V)$, where $\mathrm{gl}(V)$, the Lie algebra of $\mathrm{GL}(V)$, is the space of all linear operators on $V$ with the bracket $[X,Y]=XY-YX$.

If $\Pi$ or $\pi$ is a one-to-one homomorphism, then the representation is called faithful.

One may understand a representation as the action of a Lie group or a Lie algebra on the vector space $V$.

Example. [Trivial Representation] Let $G$ be a matrix Lie group. Define the trivial representation of $G$ by $$\Pi: G\longrightarrow\mathrm{GL}(1;\mathbb{C});\ A\longmapsto I.$$ This is an irreducible representation since $\mathbb C$ has no nonzero proper subspace. If $\mathfrak{g}$ is a Lie algebra, the trivial representation of $\mathfrak{g}$, $\pi: \mathfrak{g}\longrightarrow\mathrm{gl}(1;\mathbb{C})$, is defined by $\pi(X)=0$ for all $X\in\mathfrak{g}$. This is also an irreducible representation.

Example. [The Adjoint Representation] Let $G$ be a matrix Lie group with Lie algebra $\mathfrak{g}$. The adjoint mapping $\mathrm{Ad}: G\longrightarrow\mathrm{GL}(\mathfrak{g})$ is defined by $$\mathrm{Ad}_A(X)=AXA^{-1}$$ for $A\in G$. We claim that $AXA^{-1}\in\mathfrak{g}$ for $A\in G$ and $X\in\mathfrak{g}$, so that $\mathrm{Ad}_A:\mathfrak{g}\longrightarrow\mathfrak{g}$. First note that for any invertible matrix $A$, $(AXA^{-1})^m=AX^mA^{-1}$. So, \begin{align*}
e^{AXA^{-1}}&=\sum_{m=0}^\infty\frac{(AXA^{-1})^m}{m!}\\
&=A\sum_{m=0}^\infty\frac{X^m}{m!}A^{-1}\\
&=Ae^XA^{-1}.
\end{align*}
Now for $A\in G$ and $X\in\mathfrak{g}$,
\begin{align*}
e^{tAXA^{-1}}&=e^{A\cdot tX\cdot A^{-1}}\\
&=Ae^{tX}A^{-1}\in G
\end{align*} and hence $AXA^{-1}\in\mathfrak{g}$. Note that $\mathrm{Ad}: G\longrightarrow\mathrm{GL}(\mathfrak{g})$ is a Lie group homomorphism. So $\mathrm{Ad}$ can be considered as a representation of $G$ acting on the Lie algebra $\mathfrak{g}$. We call $\mathrm{Ad}$ the adjoint representation of $G$. We can also define the adjoint representation of the Lie algebra $\mathfrak{g}$ as follows:
$$\mathrm{ad}:\mathfrak{g}\longrightarrow\mathrm{gl}(\mathfrak{g});\ \mathrm{ad}_X(Y)=[X,Y].$$ That $\mathrm{ad}$ is a Lie algebra homomorphism is equivalent to the Jacobi identity.
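The following minimal Python sketch (the matrices $A$ and $X$ below are arbitrary sample choices, not part of the argument) checks numerically that $e^{AXA^{-1}}=Ae^XA^{-1}$ and that $\mathrm{Ad}_A(X)$ stays in $\mathfrak{su}(2)$, i.e. remains traceless and anti-Hermitian:

```python
import numpy as np
from scipy.linalg import expm

# an element A of SU(2): exponential of a traceless anti-Hermitian matrix
A = expm(0.7 * np.array([[1j, 0], [0, -1j]]))

# X in su(2): traceless and anti-Hermitian (a sample choice)
X = np.array([[0.5j, 1 + 2j], [-1 + 2j, -0.5j]])

AdX = A @ X @ np.linalg.inv(A)                    # Ad_A(X) = A X A^{-1}
assert np.allclose(expm(AdX), A @ expm(X) @ np.linalg.inv(A))
assert np.isclose(np.trace(AdX), 0)               # still traceless
assert np.allclose(AdX, -AdX.conj().T)            # still anti-Hermitian
```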

Let $V$ be a finite dimensional real vector space. The complexification $V_{\mathbb{C}}$ of $V$ is the space of formal linear combinations $v_1+iv_2$ with $v_1,v_2\in V$. This is again a real vector space. If we define $$i(v_1+iv_2)=-v_2+iv_1,$$ then $V_{\mathbb{C}}$ becomes a complex vector space. For example, the complexification $\mathfrak{su}(2)_{\mathbb{C}}$ of the Lie algebra $\mathfrak{su}(2)$ is $\mathfrak{sl}(2;\mathbb{C})$.
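As a concrete numerical sketch of $\mathfrak{su}(2)_{\mathbb{C}}\cong\mathfrak{sl}(2;\mathbb{C})$: any traceless complex $2\times 2$ matrix $Z$ splits uniquely as $Z=X_1+iX_2$ with $X_1,X_2\in\mathfrak{su}(2)$, mirroring the formal combinations $v_1+iv_2$ (the random matrix is just sample data):

```python
import numpy as np

rng = np.random.default_rng(0)
Z = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
Z = Z - np.trace(Z) / 2 * np.eye(2)    # make Z traceless, i.e. Z in sl(2;C)

X1 = (Z - Z.conj().T) / 2              # anti-Hermitian (and traceless) part
X2 = (Z + Z.conj().T) / 2j             # also traceless anti-Hermitian

for X in (X1, X2):
    assert np.allclose(X, -X.conj().T) and np.isclose(np.trace(X), 0)
assert np.allclose(X1 + 1j * X2, Z)    # Z = X1 + i X2 with X1, X2 in su(2)
```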

Some Representations of $\mathrm{SU}(2)$

Let $V_m$ be the space of homogeneous polynomials in two variables with total degree $m\geq 0$
$$f(z_1,z_2)=a_0z_1^m+a_1z_1^{m-1}z_2+a_2z_1^{m-2}z_2^2+\cdots+a_mz_2^m.$$ Then $V_m$ is an $(m+1)$-dimensional complex vector space. Define $\Pi_m:\mathrm{SU}(2)\longrightarrow\mathrm{GL}(V_m)$ by $$[\Pi_m(U)f](z)=f(U^{-1}z).$$
Let us write the entries of $U^{-1}$ as $(U^{-1})_{ij}$. Then $U^{-1}z=\begin{pmatrix}
(U^{-1})_{11}z_1+(U^{-1})_{12}z_2\\
(U^{-1})_{21}z_1+(U^{-1})_{22}z_2
\end{pmatrix}$, where $z=\begin{pmatrix}z_1\\z_2\end{pmatrix}\in\mathbb{C}^2$. So $[\Pi_m(U)f](z_1,z_2)$ is written as
$$[\Pi_m(U)f](z_1,z_2)=\sum_{k=0}^ma_k\left((U^{-1})_{11}z_1+(U^{-1})_{12}z_2\right)^{m-k}\left((U^{-1})_{21}z_1+(U^{-1})_{22}z_2\right)^k.$$
We now show that $\Pi_m$ is indeed a Lie group homomorphism: \begin{align*}
\Pi_m(U_1)[\Pi_m(U_2)f](z)&=[\Pi_m(U_2)f](U_1^{-1}z)\\
&=f(U_2^{-1}U_1^{-1}z)\\
&=f((U_1U_2)^{-1}z)\\
&=[\Pi_m(U_1U_2)f](z).
\end{align*} Therefore, $\Pi_m$ is a finite dimensional complex representation of $\mathrm{SU}(2)$. Note that each $\Pi_m$ is irreducible and that every finite dimensional irreducible representation of $\mathrm{SU}(2)$ is equivalent to one and only one of the $\Pi_m$’s.
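A small sympy sketch of this homomorphism property: realize $\Pi_m(U)$ on the coefficient vector of $f$ and check $\Pi_m(U_1U_2)=\Pi_m(U_1)\Pi_m(U_2)$ for two sample elements of $\mathrm{SU}(2)$ (the degree $m=2$ and the test data are illustrative choices):

```python
import sympy as sp

z1, z2 = sp.symbols('z1 z2')
m = 2                                  # an illustrative choice of degree

def Pi(U, coeffs):
    """Coefficient vector of [Pi_m(U) f](z) = f(U^{-1} z), f = sum a_k z1^(m-k) z2^k."""
    Uinv = U.inv()
    w1 = Uinv[0, 0]*z1 + Uinv[0, 1]*z2
    w2 = Uinv[1, 0]*z1 + Uinv[1, 1]*z2
    f = sum(c * w1**(m - k) * w2**k for k, c in enumerate(coeffs))
    poly = sp.Poly(sp.expand(f), z1, z2)
    return [poly.coeff_monomial(z1**(m - k) * z2**k) for k in range(m + 1)]

U1 = sp.Matrix([[sp.I, 0], [0, -sp.I]])   # two sample elements of SU(2)
U2 = sp.Matrix([[0, 1], [-1, 0]])

a = [1, 2, 3]                             # a test polynomial in V_2
assert Pi(U1 * U2, a) == Pi(U1, Pi(U2, a))
```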

Now we compute the corresponding representation $\pi_m$ of the Lie algebra $\mathfrak{su}(2)$. $\pi_m$ can be computed as $$\pi_m(X)=\frac{d}{dt}\Pi_m(e^{tX})|_{t=0}.$$ So,
\begin{align*}
[\pi_m(X)f](z)&=\frac{d}{dt}[\Pi_m(e^{tX})f](z)|_{t=0}\\
&=\frac{d}{dt}f(e^{-tX}z)|_{t=0}.
\end{align*} Let $z(t)$ be a curve in $\mathbb{C}^2$ defined as $z(t)=e^{-tX}z$, so that $z(0)=z$. Write $z(t)=(z_1(t),z_2(t))$, where $z_i(t)\in\mathbb{C}$, $i=1,2$. By the chain rule, \begin{align*}
\pi_m(X)f&=\frac{\partial f}{\partial z_1}\frac{dz_1}{dt}|_{t=0}+\frac{\partial f}{\partial z_2}\frac{dz_2}{dt}|_{t=0}\\
&=-\frac{\partial f}{\partial z_1}(X_{11}z_1+X_{12}z_2)-\frac{\partial f}{\partial z_2}(X_{21}z_1+X_{22}z_2),\ \ \ \ \ \mbox{(1)}\end{align*}
since $\frac{dz}{dt}|_{t=0}=-Xz$.
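Formula (1) can be checked symbolically. A minimal sympy sketch (the matrix $X$ and the polynomial $f$ are arbitrary sample choices), which replaces $e^{-tX}$ by $I-tX$ since only the derivative at $t=0$ matters:

```python
import sympy as sp

t, z1, z2 = sp.symbols('t z1 z2')
X = sp.Matrix([[1, 2], [3, -1]])       # an arbitrary sample matrix
f = z1**3 + 5 * z1 * z2**2             # a sample element of V_3

# replace e^{-tX} by I - tX: both curves have the same derivative at t = 0
w = (sp.eye(2) - t * X) * sp.Matrix([z1, z2])
lhs = sp.diff(f.subs([(z1, w[0]), (z2, w[1])], simultaneous=True), t).subs(t, 0)
rhs = (-sp.diff(f, z1) * (X[0, 0]*z1 + X[0, 1]*z2)
       - sp.diff(f, z2) * (X[1, 0]*z1 + X[1, 1]*z2))
assert sp.expand(lhs - rhs) == 0
```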

Every finite dimensional complex representation of the Lie algebra $\mathfrak{su}(2)$ extends uniquely to a complex linear representation of the complexification of $\mathfrak{su}(2)$ and the complexification of $\mathfrak{su}(2)$ is isomorphic to $\mathfrak{sl}(2;\mathbb{C})$. Thus, the representation $\pi_m$ of $\mathfrak{su}(2)$ extends to a representation of $\mathfrak{sl}(2;\mathbb{C})$. Note that the Lie algebra $\mathfrak{sl}(2;\mathbb{C})$ is the set of all $2\times 2$ trace-free complex matrices, i.e. matrices of the form $\begin{pmatrix}\alpha & \beta\\\gamma & -\alpha\end{pmatrix}$ where $\alpha$, $\beta$ and $\gamma$ are complex numbers. So any element in $\mathfrak{sl}(2;\mathbb{C})$ can be uniquely written as $\alpha H+\beta X+\gamma Y$, where $H=\begin{pmatrix}
1 & 0\\
0 & -1
\end{pmatrix}$, $X=\begin{pmatrix}
0 & 1\\
0 & 0
\end{pmatrix}$, $Y=\begin{pmatrix}
0 & 0\\
1 & 0
\end{pmatrix}$.
Let us calculate $\pi_m$ for the basis members $H$, $X$, and $Y$.
$$(\pi_m(H)f)(z)=-\frac{\partial f}{\partial z_1}z_1+\frac{\partial f}{\partial z_2}z_2$$ so
$$\pi_m(H)=-z_1\frac{\partial}{\partial z_1}+z_2\frac{\partial}{\partial z_2}.$$
Applying $\pi_m(H)$ to a basis element $z_1^kz_2^{m-k}$, we obtain
\begin{align*}
\pi_m(H)z_1^kz_2^{m-k}&=-kz_1^kz_2^{m-k}+(m-k)z_1^kz_2^{m-k}\\
&=(m-2k)z_1^kz_2^{m-k}.
\end{align*}
This means that $z_1^kz_2^{m-k}$ is an eigenvector for $\pi_m(H)$ with eigenvalue $m-2k$. In particular, $\pi_m(H)$ is diagonalizable. Using (1) again we also obtain
$$\pi_m(X)=-z_2\frac{\partial}{\partial z_1},\ \pi_m(Y)=-z_1\frac{\partial}{\partial z_2}$$
and
\begin{align*}
\pi_m(X)z_1^kz_2^{m-k}&=-kz_1^{k-1}z_2^{m-k+1},\ \ \ \ \ \mbox{(2)}\\
\pi_m(Y)z_1^kz_2^{m-k}&=(k-m)z_1^{k+1}z_2^{m-k-1}.\ \ \ \ \ \mbox{(3)}
\end{align*}
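Since $[X,Y]=H$, $[H,X]=2X$ and $[H,Y]=-2Y$ in $\mathfrak{sl}(2;\mathbb{C})$ and $\pi_m$ is a Lie algebra homomorphism, the differential operators above must satisfy the same commutation relations. A short sympy sketch confirming this on an arbitrary function:

```python
import sympy as sp

z1, z2 = sp.symbols('z1 z2')
f = sp.Function('f')(z1, z2)

piH = lambda g: -z1 * sp.diff(g, z1) + z2 * sp.diff(g, z2)
piX = lambda g: -z2 * sp.diff(g, z1)
piY = lambda g: -z1 * sp.diff(g, z2)
comm = lambda P, Q, g: P(Q(g)) - Q(P(g))

assert sp.expand(comm(piX, piY, f) - piH(f)) == 0      # [pi(X), pi(Y)] = pi(H)
assert sp.expand(comm(piH, piX, f) - 2 * piX(f)) == 0  # [pi(H), pi(X)] = 2 pi(X)
assert sp.expand(comm(piH, piY, f) + 2 * piY(f)) == 0  # [pi(H), pi(Y)] = -2 pi(Y)
```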

Proposition. The representation $\pi_m$ is an irreducible representation of $\mathfrak{sl}(2;\mathbb{C})$.

Proof. Suppose that $W$ is a nonzero invariant subspace of $V_m$. We claim that $W=V_m$. Since $W\ne \{0\}$, there exists a nonzero $w\in W$. $w$ can be uniquely written as
$$w=a_0z_1^m+a_1z_1^{m-1}z_2+a_2z_1^{m-2}z_2^2+\cdots+a_mz_2^m$$ with at least one of the $a_k$’s nonzero. Let $k_0$ be the smallest value of $k$ for which $a_k\ne 0$ and consider $\pi_m(X)^{m-k_0}w$. Since $\pi_m(X)$ lowers the power of $z_1$ by 1, applying $\pi_m(X)^{m-k_0}$ kills every term of $w$ except the one coming from $a_{k_0}z_1^{m-k_0}z_2^{k_0}$. By (2),
$$\pi_m(X)^{m-k_0}(z_1^{m-k_0}z_2^{k_0})=(-1)^{m-k_0}(m-k_0)!z_2^m.$$ So $\pi_m(X)^{m-k_0}w$ is a nonzero multiple of $z_2^m$, and since $W$ is invariant, $W$ must contain $z_2^m$. It follows from (3) that $\pi_m(Y)^kz_2^m$ is a nonzero multiple of $z_1^kz_2^{m-k}$ for $0<k\leq m$. Hence, $W$ contains all the basis elements $z_1^kz_2^{m-k}$, $0\leq k\leq m$, of $V_m$, and therefore $W=V_m$.
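The mechanism of this proof can also be watched numerically. Below is an illustrative numpy sketch for $m=3$ (the test vector is an arbitrary choice): in the basis $e_k\leftrightarrow z_1^kz_2^{m-k}$, powers of $\pi_m(X)$ drive a nonzero vector to a multiple of $z_2^m$, and powers of $\pi_m(Y)$ then rebuild every basis vector:

```python
import numpy as np

m = 3
# matrices of pi_m(X) and pi_m(Y) from (2) and (3) in the basis e_k <-> z1^k z2^(m-k)
piX = np.zeros((m + 1, m + 1)); piY = np.zeros((m + 1, m + 1))
for k in range(m + 1):
    if k > 0:
        piX[k - 1, k] = -k             # pi_m(X): e_k -> -k e_{k-1}
    if k < m:
        piY[k + 1, k] = k - m          # pi_m(Y): e_k -> (k - m) e_{k+1}

w = np.array([0.5, -1.0, 2.0, 0.0])    # sample vector; highest z1-power present is 2
v = np.linalg.matrix_power(piX, 2) @ w
assert v[0] != 0 and np.allclose(v[1:], 0)     # a nonzero multiple of z2^m

for k in range(1, m + 1):              # pi_m(Y)^k z2^m is a multiple of z1^k z2^(m-k)
    u = np.linalg.matrix_power(piY, k) @ np.eye(m + 1)[0]
    assert u[k] != 0 and np.allclose(np.delete(u, k), 0)
```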

The Lie Algebra of the Orthogonal Group $\mathrm{O}(n)\ (\mathrm{SO}(n))$

It can be easily shown that
$${\rm SO}(2)=\left\{\left(\begin{array}{cc}
\cos\theta & -\sin\theta\\
\sin\theta & \cos\theta
\end{array}
\right): \theta\in[0,2\pi)\right\}\cong{\rm S}^1=\{e^{i\theta}:
\theta\in[0,2\pi)\}.$$Let $\gamma(t)=\left(\begin{array}{cc}
\cos\theta(t) & -\sin\theta(t)\\
\sin\theta(t) & \cos\theta(t)
\end{array}
\right)\in\mathrm{SO}(2)$ with $\theta(0)=0$ and $\dot\theta(0)\ne 0$. Then $\gamma(t)$ is a differentiable (regular) curve in ${\rm SO}(2)$ such that
$\gamma(0)=I$. Thus
$$\dot{\gamma}(0)=\left(\begin{array}{cc}
0 & -1\\
1 & 0
\end{array}\right)\left(\frac{d\theta}{dt}\right)_{t=0}$$
is a tangent vector to $\mathrm{SO}(2)$ at the identity $I$. Hence, the tangent space of ${\rm SO}(2)$ at $I$ is a line, i.e., ${\rm SO}(2)$ is a one-dimensional Lie group. (We already know that ${\rm SO}(2)$ is a one-dimensional Lie group since it is identified with the unit circle ${\rm S}^1$.)

Remark. $\dot\gamma(0)=\left(\begin{array}{cc}
0 & -1\\
1 & 0
\end{array}\right)$ is a skew-symmetric matrix, i.e., $\dot\gamma(0)+{}^t\dot\gamma(0)=0$.

Let $\gamma: (-\epsilon,\epsilon)\longrightarrow{\rm O}(n)$ be a differentiable curve such that $\gamma(0)=I$. Then $\dot{\gamma}(0)$ is a tangent vector to ${\rm O}(n)$ at $I$. Since $\gamma(t)\in{\rm O}(n)$, $$\gamma(t)\cdot{}^t\gamma(t)=I$$ for each $t\in(-\epsilon,\epsilon)$. Thus,
$$\dot{\gamma}(0)\cdot{}^t\gamma(0)+\gamma(0)\cdot\dot{{}^t\gamma}(0)=0.$$ Since ${}^t\gamma(0)=\gamma(0)=I$, $$\dot{\gamma}(0)+\dot{{}^t\gamma}(0)=\dot{\gamma}(0)+{}^t\dot{\gamma}(0)=0.$$ Hence, we see that any tangent vector to ${\rm O}(n)$ at $I$ is represented as a skew-symmetric $n\times n$ matrix. Conversely, we want to show that every skew-symmetric $n\times n$ matrix is a tangent vector to ${\rm O}(n)$ at $I$.

Suppose that $A$ is an $n\times n$ skew-symmetric matrix. The series
$$e^{At}=I+At+\frac{(At)^2}{2!}+\cdots+\frac{(At)^m}{m!}+\cdots=I+At+\frac{A^2}{2!}t^2+\cdots+\frac{A^m}{m!}t^m+\cdots$$
converges to an $n\times n$ matrix (see the Proposition on the convergence of $e^X$ below).

If $AB=BA$, then by taking the Cauchy product of the two (absolutely convergent) series,
$$\left(\sum_{k=0}^\infty\frac{A^k}{k!}\right)\left(\sum_{l=0}^\infty\frac{B^l}{l!}\right)=\sum_{m=0}^\infty\sum_{p=0}^m\frac{A^{m-p}B^p}{(m-p)!p!}=\sum_{m=0}^\infty\frac{(A+B)^m}{m!},$$
where the last equality is the binomial theorem and requires $AB=BA$. This implies that $e^Ae^B=e^{A+B}$ if $AB=BA$. In particular, $e^{A}e^{-A}=e^0=I$, so that $e^A$ is non-singular. If $A$ is skew-symmetric, then ${}^t(e^{At})=e^{{}^tAt}=e^{-At}$ and so $e^{At}\cdot{}^t(e^{At})=I$, i.e., $e^{At}\in{\rm O}(n)$. Now, $\displaystyle\frac{de^{At}}{dt}=Ae^{At}$, so $\displaystyle\frac{de^{At}}{dt}\Big|_{t=0}=A$, i.e., the skew-symmetric matrix $A$ is a tangent vector to ${\rm O}(n)$ at $I$.
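A quick numerical sketch of both facts, with a random $4\times 4$ skew-symmetric matrix:

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(1)
B = rng.normal(size=(4, 4))
A = B - B.T                            # skew-symmetric: A + t(A) = 0

for t in (0.1, 1.0, 3.7):
    R = expm(A * t)
    assert np.allclose(R @ R.T, np.eye(4))    # e^{At} is in O(4)
    assert np.isclose(np.linalg.det(R), 1.0)  # in fact in SO(4)

h = 1e-6                               # finite-difference derivative at t = 0
assert np.allclose((expm(A * h) - np.eye(4)) / h, A, atol=1e-4)
```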

Proposition. The tangent space of ${\rm O}(n)$ or ${\rm SO}(n)$ at $I$ is the set of all $n\times n$ skew-symmetric matrices. Denote by ${\mathfrak o}(n)$ (${\mathfrak s\mathfrak o}(n)$) the tangent space of ${\rm O}(n)$ (${\rm SO}(n)$, respectively) at $I$. Note that $\dim{\mathfrak o}(n)=\displaystyle\frac{1}{2}n(n-1)$: a skew-symmetric matrix has zero diagonal and is determined by its $\frac{1}{2}n(n-1)$ entries above the diagonal.

Definition. The tangent space ${\mathfrak o}(n)$ (${\mathfrak s\mathfrak o}(n)$) to the Lie group ${\rm O}(n)$ (${\rm SO}(n)$, respectively) at $I$ is called the Lie algebra of ${\rm O}(n)$ (${\rm SO}(n)$, respectively).

Matrix Lie Groups

Definition. A group $(G,\cdot,{}^{-1},e)$ is a Lie group if $G$ is also a differentiable manifold and the binary operation $\cdot: G\times G\longrightarrow G$ and the unary operation (inverse) ${}^{-1}: G\longrightarrow G$ are smooth maps.

A subgroup of a Lie group is not necessarily a Lie subgroup. For example, $\mathbb{Q}$ is a subgroup of the Lie group $(\mathbb{R},+)$ but not a Lie subgroup, since it is not a submanifold of $\mathbb{R}$.

Theorem. [É. Cartan] Every closed subgroup of a Lie group is a Lie subgroup.

Examples of Lie Groups.

  1. Let $M(m,n)=\{m\times n-\mbox{matrices over}\ \mathbb{R}\}\cong\mathbb{R}^{mn}$. Let $A=(a_{ij})\in M(m,n)$. Define an identification map\begin{align*}M(m,n)&\longrightarrow\mathbb{R}^{mn}\\(a_{ij})&\longmapsto(a_{11},\cdots,a_{1n};\cdots;a_{m1},\cdots,a_{mn}).\end{align*} We can naturally define a topology on $M(m,n)$ via this identification. $M(m,n)$ is covered by a single chart, with the identification map as the coordinate map.
  2. The General Linear Group ${\rm GL}(n)$: Let $\mathrm{GL}(n)=\{\mbox{non-singular}\ n\times n-\mbox{matrices}\}$. Define the map\begin{align*}\det: M(n,n)&\longrightarrow\mathbb{R}\\A&\longmapsto\det A.\end{align*} This map is continuous since $\det A$ is a polynomial in the entries $a_{ij}$ of $A$. Hence $\mathrm{GL}(n)=\det^{-1}(\mathbb{R}-\{0\})$ is an open subset of $\mathbb{R}^{n^2}$, so that it is a submanifold of $\mathbb{R}^{n^2}$. This group is called the general linear group. The set of all $n\times n$ non-singular real (complex) matrices is denoted by $\mathrm{GL}(n;\mathbb{R})$ ($\mathrm{GL}(n;\mathbb{C})$, resp.). More generally, the set of $n\times n$ non-singular matrices whose entries are elements of a field $F$ is denoted by $\mathrm{GL}(n;F)$ or $\mathrm{GL}(V)$, where $V$ is a vector space isomorphic to $F^n$. Note that $\mathrm{GL}(V)$ is also the group of all linear isomorphisms of $V$.
  3. The Orthogonal Group $\mathrm{O}(n)$: The orthogonal group $\mathrm{O}(n)$ is defined to be the set $$\mathrm{O}(n)=\{n\times n-\mbox{orthogonal matrices}\},$$ i.e., $$A\in\mathrm{O}(n)\Longleftrightarrow A\cdot{}^tA=I,$$ where ${}^tA$ is the transpose of $A$ and $I$ is the $n\times n$ identity matrix.
  4. The Special Orthogonal Group $\mathrm{SO}(n)$: The special orthogonal group is defined to be the following subgroup of $\mathrm{O}(n)$: $$\mathrm{SO}(n)=\{A\in\mathrm{O}(n): \det A=1\}.$$
  5. The Special Linear Group $\mathrm{SL}(n)$: The special linear group is defined to be the following subgroup of $\mathrm{GL}(n)$: $$\mathrm{SL}(n)=\{A\in\mathrm{GL}(n): \det A=1\}.$$
  6. The Unitary Group $\mathrm{U}(n)$: The unitary group $\mathrm{U}(n)$ is the set of all $n\times n$-unitary matrices, i.e. $$\mathrm{U}(n)=\{U\in\mathrm{GL}(n;\mathbb{C}): UU^\ast=I\},$$ where $U^\ast={}^t\bar U$. Physicists often write $U^\ast$ as $U^\dagger$. $\mathrm{U}(n)$ is a Lie subgroup of $\mathrm{GL}(n;\mathbb{C})$.
  7. The Special Unitary Group $\mathrm{SU}(n)$: The special unitary group $\mathrm{SU}(n)$ is a Lie subgroup of both $\mathrm{U}(n)$ and $\mathrm{SL}(n;\mathbb{C})$: $$\mathrm{SU}(n)=\{U\in\mathrm{SL}(n;\mathbb{C}):UU^\ast=I\}.$$ (A numerical sketch of some of these defining conditions appears after this list.)
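Here is the small numerical sketch promised in Example 7, checking the defining conditions of some of these groups on sample matrices:

```python
import numpy as np

R = np.array([[0.0, -1.0], [1.0, 0.0]])        # rotation by pi/2
U = np.array([[1, 1j], [1j, 1]]) / np.sqrt(2)  # a sample 2x2 unitary

assert np.allclose(R @ R.T, np.eye(2))         # R t(R) = I, so R is in O(2)
assert np.isclose(np.linalg.det(R), 1)         # det R = 1, so R is in SO(2)
assert np.allclose(U @ U.conj().T, np.eye(2))  # U U* = I, so U is in U(2)
assert np.isclose(np.linalg.det(U), 1)         # det U = 1, so U is even in SU(2)
```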

Proposition. For any $n\times n$ real or complex matrix $X$,
$$e^X:=\sum_{m=0}^\infty\frac{X^m}{m!}$$ converges and is a continuous function.

Proof. The series converges absolutely, since $\displaystyle\sum_{m=0}^\infty\frac{\|X\|^m}{m!}=e^{\|X\|}<\infty$ in the operator norm; the convergence is uniform on bounded sets, so $X\longmapsto e^X$ is continuous.
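A minimal sketch comparing partial sums of the series with scipy's built-in matrix exponential (the matrix and the truncation order are arbitrary choices):

```python
import numpy as np
from scipy.linalg import expm

X = np.array([[0.0, 1.0], [-2.0, 0.5]])   # an arbitrary sample matrix
S, term = np.eye(2), np.eye(2)
for m in range(1, 30):                    # partial sums of sum_m X^m / m!
    term = term @ X / m
    S = S + term
assert np.allclose(S, expm(X))            # agrees with scipy's matrix exponential
```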

Definition. Let $G$ be a matrix Lie group. The Lie algebra of $G$, denoted by $\mathfrak{g}$, is the set of all matrices $X$ such that $e^{tX}\in G$ for all $t\in\mathbb{R}$.

Definition. A function $A:\mathbb{R}\longrightarrow\mathrm{GL}(n;\mathbb{C})$ is called a one-parameter subgroup of $\mathrm{GL}(n;\mathbb{C})$ if

  1. $A$ is continuous;
  2. $A(0)=I$;
  3. $A(t+s)=A(t)A(s)$ for all $t,s\in\mathbb{R}$.

Theorem. If $A$ is a one-parameter subgroup of $\mathrm{GL}(n;\mathbb{C})$, then there exists a unique $n\times n$ complex matrix $X$ such that $A(t)=e^{tX}$ for all $t\in\mathbb{R}$.

In differential geometry, the Lie algebra $\mathfrak{g}$ is defined to be the tangent space $T_eG$ to $G$ at the identity $e$. The two definitions coincide if $G$ is $\mathrm{GL}(n;\mathbb{C})$ or one of its Lie subgroups. If $X\in\mathfrak{g}$, then by definition $e^{tX}\in G$ for all $t\in\mathbb{R}$. The one-parameter subgroup $\{e^{tX}:t\in\mathbb{R}\}$ of $G$ can be regarded as a differentiable curve $\gamma:\mathbb{R}\longrightarrow G$ with $\gamma(0)=e$, where $e$ is the $n\times n$ identity matrix $I$. Thus $\dot\gamma(0)=X$ is a tangent vector to $G$ at the identity $e$, i.e. $X\in T_eG$. Conversely, suppose $X\in T_eG$. Extend $X$ to a left-invariant vector field on $G$ and let $\{\phi_t:G\longrightarrow G\}_{t\in\mathbb{R}}$ be its flow, i.e.
$$\frac{d}{dt}\phi_t(p)=X_{\phi_t(p)}.$$ Each $\phi_t$ is smooth, $\phi_0=\mathrm{id}$, and $\phi_t\circ \phi_s=\phi_{t+s}$. Hence $A(t):=\phi_t(e)$ is a one-parameter subgroup of $\mathrm{GL}(n;\mathbb{C})$, so by the above Theorem there exists a unique $n\times n$ complex matrix $Y$ such that $A(t)=e^{tY}$. Since $\dot A(0)=X$, we get $Y=X$, i.e. $A(t)=e^{tX}\in G\leq\mathrm{GL}(n;\mathbb C)$. Therefore $X\in\mathfrak{g}$.
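A short sketch of this correspondence: sampling the one-parameter subgroup $A(t)=e^{tX}$ for a sample $X\in\mathfrak{so}(2)$, the group law holds and the generator can be recovered via the matrix logarithm (valid here since $A(1)$ has no eigenvalues on the negative real axis):

```python
import numpy as np
from scipy.linalg import expm, logm

X = 0.3 * np.array([[0.0, -1.0], [1.0, 0.0]])   # a sample generator in so(2)
A = lambda t: expm(t * X)                       # the one-parameter subgroup

assert np.allclose(A(0.4) @ A(0.8), A(1.2))     # A(t+s) = A(t) A(s)
assert np.allclose(logm(A(1.0)), X)             # recover the generator from A(1)
```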

Physicists’ convention: In the physics literature, the exponential map $\exp:\mathfrak{g}\longrightarrow G$ is usually given by $X\longmapsto e^{iX}$ instead of $X\longmapsto e^X$. The reason for that comes from quantum mechanics and it will be discussed later.


Electrostatic Potential in a Hollow Cylinder

An electrostatic field $E$ (i.e. an electric field produced only by a static charge) is a conservative field, i.e. there exists a scalar potential $\psi$ such that $E=-\nabla\psi$. This is clear from Maxwell’s equations. Since there is no change of the magnetic field $B$ in time, $\nabla\times E=0$. If there is no charge present in a region, $\nabla\cdot E=0$. Together with $E=-\nabla\psi$, we obtain the Laplace equation $\nabla^2\psi=0$. Thus the Laplace equation can be used to find the electrostatic potential $\psi(\rho,\varphi,z)$ in a hollow cylinder with radius $a$ and height $l$ ($0\leq z\leq l$).

Using separation of variables, we find the modes
\begin{align*}
\psi_{km}(\rho,\varphi,z)&=P_{km}(\rho)\Phi_m(\varphi)Z_k(z)\\
&=J_m(k\rho)[a_m\sin m\varphi+b_m\cos m\varphi][c_1e^{kz}+c_2e^{-kz}].
\end{align*}
The boundary conditions are:
$$\psi(\rho,\varphi,l)=\psi(\rho,\varphi),$$
where $\psi(\rho,\varphi)$ is a prescribed potential distribution on the top face; elsewhere on the surface $\psi=0$. Now we find the electrostatic potential
$$\psi(\rho,\varphi,z)=\sum_{k,m}\psi_{km}$$
inside the cylinder. From the boundary condition $\psi(\rho,\varphi,0)=0$, we find $c_1+c_2=0$. So we choose $c_1=-c_2=\frac{1}{2}$, and thereby $c_1e^{kz}+c_2e^{-kz}=\sinh kz$. Since $\psi=0$ on the lateral surface of the cylinder, $\psi(a,\varphi,z)=0$. This implies that $J_m(ka)=0$. If we write the $n$-th positive zero of $J_m$ as $\alpha_{mn}$, then $k_{mn}a=\alpha_{mn}$ or $k_{mn}=\frac{\alpha_{mn}}{a}$. Hence,
$$\psi(\rho,\varphi,z)=\sum_{m=0}^\infty\sum_{n=1}^\infty J_m\left(\alpha_{mn}\frac{\rho}{a}\right)[a_{mn}\sin m\varphi+b_{mn}\cos m\varphi]\sinh\left(\alpha_{mn}\frac{z}{a}\right).$$
Finally, using the boundary condition at $z=l$,
$$\psi(\rho,\varphi)=\sum_{m=0}^\infty\sum_{n=1}^\infty J_m\left(\alpha_{mn}\frac{\rho}{a}\right)[a_{mn}\sin m\varphi+b_{mn}\cos m\varphi]\sinh\left(\alpha_{mn}\frac{l}{a}\right),$$ together with the orthogonality of $\sin m\varphi$ and $\cos m\varphi$ and the orthogonality of the Bessel functions, we can determine the coefficients $a_{mn}$ and $b_{mn}$ as
\begin{align*}\left\{\begin{aligned}a_{mn}\\b_{mn}\end{aligned}\right\}=\frac{2}{\pi a^2\sinh\left(\alpha_{mn}\frac{l}{a}\right)J_{m+1}^2(\alpha_{mn})}\int_0^{2\pi}\int_0^a\psi(\rho,\varphi)&J_m\left(\alpha_{mn}\frac{\rho}{a}\right)\\
&\left\{\begin{aligned}
\sin m\varphi\\
\cos m\varphi
\end{aligned}\right\}\rho d\rho d\varphi.\end{align*}
(For $m=0$, the factor $2$ in front is replaced by $1$, since $\int_0^{2\pi}\cos^2(0\cdot\varphi)\,d\varphi=2\pi$ rather than $\pi$.)
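As a numerical sketch of this recipe, the following Python snippet computes $a_{2n}$ for the illustrative boundary data $\psi(\rho,\varphi)=\rho^2\sin 2\varphi$ (the radius, height and truncation below are assumptions of the example, not part of the text) and checks that the truncated series roughly reproduces the boundary value at an interior point:

```python
import numpy as np
from scipy.special import jv, jn_zeros
from scipy.integrate import quad

a, l, m, N = 1.0, 2.0, 2, 8            # radius, height, angular mode, truncation
zeros = jn_zeros(m, N)                 # alpha_{m1}, ..., alpha_{mN}

def a_mn(alpha):
    # angular integral of sin^2(m*phi) over [0, 2*pi] is pi; radial part by quadrature
    radial, _ = quad(lambda r: r**2 * jv(m, alpha * r / a) * r, 0, a)
    return 2 * np.pi * radial / (np.pi * a**2 * np.sinh(alpha * l / a)
                                 * jv(m + 1, alpha)**2)

coef = [a_mn(al) for al in zeros]

# the truncated series should roughly reproduce the boundary value at z = l
rho0, phi0 = 0.5, 0.3
series = sum(c * jv(m, al * rho0 / a) * np.sin(m * phi0) * np.sinh(al * l / a)
             for c, al in zip(coef, zeros))
print(series, rho0**2 * np.sin(m * phi0))
```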

Bessel Functions of the First Kind $J_n(x)$ II: Orthogonality

To accommodate boundary conditions on a finite interval $[0,a]$, we need to consider Bessel functions of the form $J_\nu\left(\frac{\alpha_{\nu m}}{a}\rho\right)$, where $\alpha_{\nu m}$ is the $m$-th positive zero of $J_\nu$. With $x=\frac{\alpha_{\nu m}}{a}\rho$, Bessel's equation can be written as
\begin{equation}\label{eq:bessel10}\rho\frac{d^2}{d\rho^2}J_\nu\left(\frac{\alpha_{\nu m}}{a}\rho\right)+\frac{d}{d\rho}J_\nu\left(\frac{\alpha_{\nu m}}{a}\rho\right)+\left(\frac{\alpha_{\nu m}^2\rho}{a^2}-\frac{\nu^2}{\rho}\right)J_\nu\left(\frac{\alpha_{\nu m}}{a}\rho\right)=0.\end{equation} Changing $\alpha_{\nu m}$ to $\alpha_{\nu n}$, $J_\nu\left(\frac{\alpha_{\nu n}}{a}\rho\right)$ satisfies
\begin{equation}\label{eq:bessel11}\rho\frac{d^2}{d\rho^2}J_\nu\left(\frac{\alpha_{\nu n}}{a}\rho\right)+\frac{d}{d\rho}J_\nu\left(\frac{\alpha_{\nu n}}{a}\rho\right)+\left(\frac{\alpha_{\nu n}^2\rho}{a^2}-\frac{\nu^2}{\rho}\right)J_\nu\left(\frac{\alpha_{\nu n}}{a}\rho\right)=0.\end{equation}
Multiply \eqref{eq:bessel10} by $J_\nu\left(\frac{\alpha_{\nu n}}{a}\rho\right)$ and \eqref{eq:bessel11} by $J_\nu\left(\frac{\alpha_{\nu m}}{a}\rho\right)$ and subtract:
\begin{align*}
J_\nu\left(\frac{\alpha_{\nu n}}{a}\rho\right)\frac{d}{d\rho}&\left[\rho\frac{d}{d\rho}J_\nu\left(\frac{\alpha_{\nu m}}{a}\rho\right)\right]-J_\nu\left(\frac{\alpha_{\nu m}}{a}\rho\right)\frac{d}{d\rho}\left[\rho\frac{d}{d\rho}J_\nu\left(\frac{\alpha_{\nu n}}{a}\rho\right)\right]\\&=\frac{\alpha_{\nu n}^2-\alpha_{\nu m}^2}{a^2}\rho J_\nu\left(\frac{\alpha_{\nu m}}{a}\rho\right)J_\nu\left(\frac{\alpha_{\nu n}}{a}\rho\right).\end{align*}
Integrate this equation with respect to $\rho$ from $\rho=0$ to $\rho=a$:
\begin{equation}\begin{aligned}\int_0^a J_\nu\left(\frac{\alpha_{\nu n}}{a}\rho\right)\frac{d}{d\rho}&\left[\rho\frac{d}{d\rho}J_\nu\left(\frac{\alpha_{\nu m}}{a}\rho\right)\right]d\rho\\&-\int_0^a J_\nu\left(\frac{\alpha_{\nu m}}{a}\rho\right)\frac{d}{d\rho}\left[\rho\frac{d}{d\rho}J_\nu\left(\frac{\alpha_{\nu n}}{a}\rho\right)\right]d\rho\\&=\frac{\alpha_{\nu n}^2-\alpha_{\nu m}^2}{a^2}\int_0^a J_\nu\left(\frac{\alpha_{\nu m}}{a}\rho\right)J_\nu\left(\frac{\alpha_{\nu n}}{a}\rho\right)\rho d\rho.\end{aligned}\label{eq:bessel12}\end{equation}
Using integration by parts, we have
\begin{align*}
\int_0^a &J_\nu\left(\frac{\alpha_{\nu n}}{a}\rho\right)\frac{d}{d\rho}\left[\rho\frac{d}{d\rho}J_\nu\left(\frac{\alpha_{\nu m}}{a}\rho\right)\right]d\rho\\&=\left[J_\nu\left(\frac{\alpha_{\nu n}}{a}\rho\right)\rho\frac{d}{d\rho}J_\nu\left(\frac{\alpha_{\nu m}}{a}\rho\right)\right]_0^a-\int_0^a \rho\frac{d}{d\rho}J_\nu\left(\frac{\alpha_{\nu m}}{a}\rho\right)dJ_\nu\left(\frac{\alpha_{\nu n}}{a}\rho\right).
Thus \eqref{eq:bessel12} can be written as
\begin{equation}\begin{aligned}\left[J_\nu\left(\frac{\alpha_{\nu n}}{a}\rho\right)\rho\frac{d}{d\rho}J_\nu\left(\frac{\alpha_{\nu m}}{a}\rho\right)\right]_0^a-\left[J_\nu\left(\frac{\alpha_{\nu m}}{a}\rho\right)\rho\frac{d}{d\rho}J_\nu\left(\frac{\alpha_{\nu n}}{a}\rho\right)\right]_0^a\\=\frac{\alpha_{\nu n}^2-\alpha_{\nu m}^2}{a^2}\int_0^a J_\nu\left(\frac{\alpha_{\nu m}}{a}\rho\right)J_\nu\left(\frac{\alpha_{\nu n}}{a}\rho\right)\rho d\rho.\end{aligned}\label{eq:bessel13}\end{equation}
Clearly the LHS of \eqref{eq:bessel13} vanishes at $\rho=0$. (Here we consider only the case of integer $\nu$.) It also vanishes at $\rho=a$, since $\alpha_{\nu n}$ and $\alpha_{\nu m}$ are zeros of $J_\nu$. Therefore, for $m\ne n$,
\begin{equation}\label{eq:bessel14}\int_0^a J_\nu\left(\frac{\alpha_{\nu m}}{a}\rho\right)J_\nu\left(\frac{\alpha_{\nu n}}{a}\rho\right)\rho d\rho=0.\end{equation}
\eqref{eq:bessel14} gives us orthogonality over the interval $[0,a]$.

For $m=n$, we have the normalization integral
\begin{equation}\int_0^a\left[J_\nu\left(\frac{\alpha_{\nu m}}{a}\rho\right)\right]^2\rho d\rho=\frac{a^2}{2}[J_{\nu+1}(\alpha_{\nu m})]^2.\end{equation}
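Both \eqref{eq:bessel14} and the normalization integral are easy to check numerically; here is a sketch with the illustrative parameters $a=2$ and $\nu=1$:

```python
import numpy as np
from scipy.special import jv, jn_zeros
from scipy.integrate import quad

a, nu = 2.0, 1
alphas = jn_zeros(nu, 3)               # first three positive zeros of J_nu

def inner(am, an):
    # weighted inner product: int_0^a J_nu(am*rho/a) J_nu(an*rho/a) rho drho
    val, _ = quad(lambda r: jv(nu, am * r / a) * jv(nu, an * r / a) * r, 0, a)
    return val

assert abs(inner(alphas[0], alphas[1])) < 1e-8          # m != n: orthogonal
expected = a**2 / 2 * jv(nu + 1, alphas[0])**2          # m = n: normalization
assert np.isclose(inner(alphas[0], alphas[0]), expected)
```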