Category Archives: Partial Differential Equations

1-Dimensional Heat Initial Boundary Value Problems 2: Sturm-Liouville Problems and Orthogonal Functions

Sturm-Liouville Problems

The homogeneous boundary conditions of the 1D heat conduction problem are given by
\begin{align*}
-\kappa_1u_x(0,t)+h_1u(0,t)&=0,\ t>0\\
\kappa_2u_x(L,t)+h_2u(L,t)&=0,\ t>0
\end{align*}
(See here)

The homogeneous BCs for the second-order linear differential equation \begin{equation}\label{eq:ho}X^{\prime\prime}=kX\end{equation} are then
\begin{equation}\label{eq:bc}\begin{aligned}
-\kappa_1X'(0)+h_1X(0)&=0\\
\kappa_2X'(L)+h_2X(L)&=0
\end{aligned}\end{equation}
Finding solutions of the second-order linear differential equation \eqref{eq:ho} for $k=0$, $k=\lambda^2$, and $k=-\lambda^2$ that satisfy the BCs \eqref{eq:bc} is called a Sturm-Liouville problem. Here, we illustrate the Sturm-Liouville theory with the following example.

Remark. In the case of homogeneous heat BVPs, the eventual temperature must be 0 since there is no heat source. So $k=-\lambda^2<0$ is the only physically relevant case.

Example. [Fixed temperature at both ends]

Consider the heat BVP:
\begin{align*}
u_t&=\alpha^2 u_{xx},\ 0<x<1,\ t>0\ \mbox{(PDE)}\\
u(0,t)&=u(1,t)=0,\ t>0\ \mbox{(BCs)}
\end{align*}
From the above BCs, we obtain the BCs for $X(x)$:
$$X(0)=X(1)=0$$
For $k=0$ and $k=\lambda^2>0$, the BCs force the trivial solution $X(x)=0$. For $k=-\lambda^2<0$, $X(x)=A\cos\lambda x+B\sin\lambda x$. Applying the BCs, we find the eigenvalues
$$\lambda_n=n\pi,\ n=1,2,3,\cdots$$
and the corresponding eigenfunctions
$$X_n(x)=\sin n\pi x,\ n=1,2,3,\cdots$$
The set $\{X_n: n=1,2,3,\cdots\}$ is linearly independent, so it forms a basis for the solution space, which is infinite dimensional. The general solution to the heat BVP is given by
$$u(x,t)=\sum_{n=1}^\infty A_n e^{-n^2\pi^2\alpha^2t}\sin n\pi x$$
The undetermined coefficients $A_n$ are called Fourier coefficients. They are determined by the initial condition (the initial temperature).
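As a quick check (my own addition, not part of the original notes), one can verify with sympy that each term $e^{-n^2\pi^2\alpha^2t}\sin n\pi x$ of the series satisfies the PDE and the BCs; the symbol names below are chosen to match the notation above.

```python
# Minimal sympy sketch: each term u_n = exp(-n^2 pi^2 alpha^2 t) sin(n pi x)
# satisfies u_t = alpha^2 u_xx and u(0,t) = u(1,t) = 0.
import sympy as sp

x, t, alpha = sp.symbols('x t alpha', positive=True)
n = sp.symbols('n', positive=True, integer=True)

u_n = sp.exp(-n**2*sp.pi**2*alpha**2*t)*sp.sin(n*sp.pi*x)

pde_residual = sp.diff(u_n, t) - alpha**2*sp.diff(u_n, x, 2)
print(sp.simplify(pde_residual))       # 0, so the PDE is satisfied
print(u_n.subs(x, 0), u_n.subs(x, 1))  # 0 0, so the BCs are satisfied
```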

Orthogonal Functions and Solution of a Homogeneous Heat IBVP

Consider a heat distribution function $u(x,t)$ of the following form
$$u(x,t)=\sum_{n=0}^\infty A_ne^{-\lambda_n^2\alpha^2t}X_n(x)$$
where the $X_n$ are eigenfunctions corresponding to the eigenvalues $\lambda_n$, respectively. The eigenfunctions $X_n$ form a basis for the solution space (which is often infinite dimensional) of a given heat IBVP; moreover, they can be chosen to form an orthogonal basis with respect to the inner product
\begin{equation}\label{eq:innerprod}\langle X_m,X_n\rangle=\int_0^LX_mX_ndx\end{equation}
We say that eigenfunctions $X_m$ and $X_n$ are orthogonal if $\langle X_m,X_n\rangle=0$.

Example. $X_n(x)=\sin n\pi x$, $n=1,2,3,\cdots$ form an orthogonal basis with respect to \eqref{eq:innerprod}, where $0<x<1$:
\begin{align*}
\langle X_m,X_n\rangle&=\int_0^1\sin m\pi x\sin n\pi xdx\\
&=\left\{\begin{aligned}
\frac{1}{2}\ &{\rm if}\ m=n\\
0\ &{\rm if}\ m\ne n.
\end{aligned}\right.
\end{align*}
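This computation is easy to confirm with a computer algebra system. Here is a minimal sympy sketch (my own, with the specific choices $m=2$ and $n=3$ standing in for the general case):

```python
# Orthogonality of sin(n pi x) on (0, 1): the integral is 0 for m != n and 1/2 for m = n.
import sympy as sp

x = sp.symbols('x')
print(sp.integrate(sp.sin(2*sp.pi*x)*sp.sin(3*sp.pi*x), (x, 0, 1)))  # 0
print(sp.integrate(sp.sin(3*sp.pi*x)**2, (x, 0, 1)))                 # 1/2
```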

Remark. [The Gram-Schmidt Orthogonalization Process]
If $\{X_n\}$ is not an orthogonal basis, one can construct an orthogonal basis from $\{X_n\}$ using the inner product \eqref{eq:innerprod}. The standard process is called the Gram-Schmidt orthogonalization process. Details can be found in many standard linear algebra textbooks.

Now we assume that $\{X_n\}$ is an orthogonal basis for the solution space. Let $L_n:=\langle X_n,X_n\rangle=\int_0^LX_n^2dx$. Let the initial condition be given by
$u(x,0)=\phi(x)$. Then
$$\phi(x)=\sum_{n=0}^\infty A_nX_n$$
Multiply this by $X_m$ and then integrate:
$$\int_0^LX_m\phi(x)dx=\sum_{n=0}^\infty A_n\int_0^LX_nX_mdx$$
By orthogonality we obtain
$$L_mA_m=\int_0^LX_m\phi(x)dx$$
or
$$A_m=\frac{1}{L_m}\int_0^L\phi(x)X_mdx,\ m=0,1,2,\cdots$$

Example. Consider the heat BVP in the previous example with initial condition $\phi(x)=T$, a constant temperature. For $n=1,2,3,\cdots$, $X_n(x)=\sin n\pi x$ on $0<x<1$, so
$$L_n=\int_0^1\sin^2 n\pi x dx=\frac{1}{2}$$
The Fourier coefficients are then computed to be
\begin{align*}
A_n&=2\int_0^1\phi(x)\sin n\pi xdx\\
&=2T\int_0^1\sin n\pi xdx\\
&=\frac{2T}{n\pi}[1-\cos n\pi]\\
&=\frac{2T}{n\pi}[1-(-1)^n].
\end{align*}
Thus $A_n=0$ for even $n$ and $A_{2n-1}=\frac{4T}{(2n-1)\pi}$, $n=1,2,3,\cdots$. Hence
$$u(x,t)=\sum_{n=1}^\infty\frac{4T}{(2n-1)\pi}e^{-(2n-1)^2\pi^2\alpha^2t}\sin(2n-1)\pi x.$$
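To see the series in action, here is a short numerical sketch (my own, not from the notes) that truncates the series using the assumed values $T=1$, $\alpha=1$, and $N=200$ terms. The partial sum at $t=0$ approximates the constant initial temperature away from the endpoints, and the temperature decays as $t$ grows.

```python
# Truncated series solution for the constant initial temperature phi(x) = T.
import numpy as np

T, alpha, N = 1.0, 1.0, 200          # assumed sample values
x = np.linspace(0, 1, 101)

def u(x, t, N=N):
    n = np.arange(1, N + 1)
    A = 2*T*(1 - (-1.0)**n)/(n*np.pi)   # Fourier coefficients; zero for even n
    return np.sum(A[:, None]
                  *np.exp(-(n[:, None]*np.pi*alpha)**2*t)
                  *np.sin(n[:, None]*np.pi*x), axis=0)

print(u(x, 0.0)[40:60:5])   # interior values close to T = 1
print(u(x, 0.05).max())     # the maximum temperature has already decayed
```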

References:

David Betounes, Partial Differential Equations for Computational Science with Maple and Vector Analysis, TELOS, Springer-Verlag

1-Dimensional Heat Initial Boundary Value Problems 1: Separation of Variables

Let us consider the following assumptions for a heat conduction problem.

  1. The region $\Omega$ is a cylinder of length $L$ centered on the $x$-axis.
  2. The lateral surface is insulated.
  3. The left end ($x=0$) and the right end ($x=L$) have boundary conditions that do not depend on the $y$ and $z$ coordinates.
  4. The initial temperature distribution $\phi$ does not depend on $y$ and $z$ i.e. $\phi=\phi(x)$.

Under these assumptions we want to find the temperature distribution $u(\mathbf{r},t)=u(x,y,z,t)$, which satisfies the heat equation
$$\frac{\partial u}{\partial t}=\alpha^2\nabla^2 u+F(\mathbf{r},t)$$
Here $\alpha$ is a constant called the diffusivity of the material and $F$ is a scalar function called the heat source density. Due to assumptions 2-4, the heat equation becomes the 1-dimensional heat equation
$$\frac{\partial u}{\partial t}=\alpha^2\frac{\partial^2 u}{\partial x^2}+F(x,t)$$
The most general form of the boundary conditions is given by Newton’s law of cooling
$$-\kappa\nabla u\cdot\mathbf{n}=h(u-g)\ \mbox{on}\ \partial\Omega,\ t>0$$
where $\kappa\geq 0$, $h\geq 0$, $\mathbf{n}$ is the outward unit normal to $\partial\Omega$, and $g$ is the surrounding temperature along $\partial\Omega$ for $t>0$.

In our 1-dimensional case, the temperature gradient is $\nabla u=\left(\frac{\partial u}{\partial x},0,0\right)$. So on the lateral surface $\nabla u\cdot\mathbf{n}=0$, i.e. the assumption that the lateral surface is insulated is automatically satisfied. On the left end, $\mathbf{n}=(-1,0,0)$, so
$$-\kappa\nabla u\cdot\mathbf{n}=\kappa\frac{\partial u}{\partial x}.$$
On the right end, $\mathbf{n}=(1,0,0)$ so
$$-\kappa\nabla u\cdot\mathbf{n}=-\kappa\frac{\partial u}{\partial x}.$$
Hence for our 1-dimensional heat problem the general boundary conditions (BCs) are given by
\begin{align*}
-\kappa_1u_x(0,t)+h_1u(0,t)&=g_1(t),\ t>0\\
\kappa_2u_x(L,t)+h_2u(L,t)&=g_2(t),\ t>0
\end{align*}
The BCs are the main ingredients that, along with the initial condition (IC)
$$u(x,0)=\phi(x),\ 0<x<L,$$
uniquely determine a specific physical heat conduction phenomenon.

The heat equation is said to be homogeneous if $F(x,t)=0$. The BCs are said to be homogeneous if $g_1(t)=g_2(t)=0$.

Separation of Variables Method: The separation of variables method is one of the oldest methods of solving partial differential equations. It reduces a partial differential equation to a number of ordinary differential equations.

Consider the homogeneous 1-dimensional heat equation
$$\frac{\partial u}{\partial t}=\alpha^2\frac{\partial^2 u}{\partial x^2}$$
Assume that $u(x,t)=X(x)T(t)$.
Then
$$X(x)\dot{T}(t)=\alpha^2X^{\prime\prime}(x)T(t)$$
where the prime denotes differentiation with respect to $x$ and the dot denotes differentiation with respect to $t$.
Divide this by $\alpha^2X(x)T(t)$. Then
$$\frac{\dot{T}}{\alpha^2T}=\frac{X^{\prime\prime}}{X}$$
The LHS depends only on the time variable $t$ while the RHS depends only on the spatial variable $x$. This is possible only when both the LHS and the RHS equal the same constant, say $k$. Thereby the 1D heat equation reduces to the ordinary differential equations
\begin{align*}
X^{\prime\prime}&=kX,\\
\dot{T}&=\alpha^2kT
\end{align*}
The second order linear equation has the following solutions depending on the sign of $k$:

  • If $k=0$, $X(x)=Ax+B$.
  • If $k=\lambda^2>0$, $X(x)=Ae^{\lambda x}+Be^{-\lambda x}$ or  $X(x)=A\cosh\lambda x+B\sinh\lambda x$.
  • If $k=-\lambda^2<0$, $X(x)=A\cos\lambda x+B\sin\lambda x$.

The first order linear equation has solution
$$T(t)=Ce^{k\alpha^2 t}$$
Here one may assume that $C=1$ without loss of generality. Therefore we have three possible cases of $u(x,t)$ (a symbolic check follows the list below):

  • $u(x,t)=Ax+B$
  • $u(x,t)=e^{\lambda^2\alpha^2 t}(Ae^{\lambda x}+Be^{-\lambda x})$ or $u(x,t)=e^{\lambda^2\alpha^2 t}(A\cosh\lambda x+B\sinh\lambda x)$
  • $u(x,t)=e^{-\lambda^2\alpha^2 t}(A\cos\lambda x+B\sin\lambda x)$
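The three cases above, together with the time factor $T(t)$, can be reproduced symbolically. The following is a minimal sympy sketch (my own; the symbol names are chosen to match the text):

```python
# Solve X'' = kX for k = 0, k = lambda^2 > 0, k = -lambda^2 < 0, and T' = alpha^2 k T.
import sympy as sp

x, t, lam, alpha = sp.symbols('x t lambda alpha', positive=True)
X, T = sp.Function('X'), sp.Function('T')

print(sp.dsolve(X(x).diff(x, 2), X(x)))                      # k = 0: X = C1 + C2*x
print(sp.dsolve(X(x).diff(x, 2) - lam**2*X(x), X(x)))        # k = lambda^2 > 0: exponentials
print(sp.dsolve(X(x).diff(x, 2) + lam**2*X(x), X(x)))        # k = -lambda^2 < 0: sine and cosine
print(sp.dsolve(T(t).diff(t) + lam**2*alpha**2*T(t), T(t)))  # T = C1*exp(-alpha^2*lambda^2*t)
```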

References:

David Betounes, Partial Differential Equations for Computational Science with Maple and Vector Analysis, TELOS, Springer-Verlag

Helmholtz Equation

The Helmholtz equation
$$\nabla^2\psi+k^2\psi=0\ \ \ \ \ \mbox{(1)}$$
is extremely important in physics. Solving many physically important partial differential equations, such as the heat equation, the wave equation (Klein-Gordon equation), Maxwell’s equations, and the Schrödinger equation, often requires solving the Helmholtz equation (1).

In these notes, we discuss how to solve the Helmholtz equation using separation of variables in rectangular, cylindrical, and spherical coordinate systems. The solutions discussed here will be used when solving boundary value problems associated with the Helmholtz equation.

Helmholtz Equation in Rectangular Coordinates

Assume that $\psi(x,y,z)=X(x)Y(y)Z(z)$. Then the equation (1) becomes
$$YZ\frac{d^2X}{dx^2}+XZ\frac{d^2Y}{dy^2}+XY\frac{d^2Z}{dz^2}+k^2XYZ=0.\ \ \ \ \ \mbox{(2)}$$
Dividing (2) by $XYZ$, we obtain
$$\frac{1}{X}\frac{d^2X}{dx^2}+\frac{1}{Y}\frac{d^2Y}{dy^2}+\frac{1}{Z}\frac{d^2Z}{dz^2}+k^2=0.\ \ \ \ \mbox{(3)}$$
Let us write (3) as
$$\frac{1}{X}\frac{d^2X}{dx^2}=-\frac{1}{Y}\frac{d^2Y}{dy^2}-\frac{1}{Z}\frac{d^2Z}{dz^2}-k^2.\ \ \ \ \ \mbox{(4)}$$
Now we have an apparent paradox: the LHS of (4) depends only on the $x$-variable, while the RHS of (4) depends only on the $y$- and $z$-variables. The only way to avoid this paradox is for both sides of (4) to equal the same constant, say $-l^2$. If you are wondering why we choose a negative constant, the reason comes from physics: we need an oscillating solution, which is obtained by choosing a negative separation constant. Often the boundary conditions for the Helmholtz equation lead only to the trivial solution when the separation constant is positive. Continuing the same process, we separate the Helmholtz equation into three ordinary differential equations:
\begin{align*}
\frac{1}{X}\frac{d^2 X}{dx^2}&=-l^2,\\
\frac{1}{Y}\frac{d^2Y}{dy^2}&=-m^2,\\
\frac{1}{Z}\frac{d^2Z}{dz^2}&=-n^2,
\end{align*}
where $k^2=l^2+m^2+n^2$.

Each mode is given by
$$\psi_{lmn}(x,y,z)=X_l(x)Y_m(y)Z_n(z)$$ and the most general solution is given by the linear combination of the modes
$$\psi(x,y,z)=\sum_{l,m,n}a_{lmn}\psi_{lmn}(x,y,z).$$
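As a check (my own, not part of the original notes), a separated mode built from oscillating solutions, say $\psi_{lmn}=\sin lx\sin my\sin nz$, satisfies (1) precisely when $k^2=l^2+m^2+n^2$:

```python
# Verify that psi = sin(l x) sin(m y) sin(n z) solves the Helmholtz equation
# with k^2 = l^2 + m^2 + n^2.
import sympy as sp

x, y, z = sp.symbols('x y z')
l, m, n = sp.symbols('l m n', positive=True)

psi = sp.sin(l*x)*sp.sin(m*y)*sp.sin(n*z)
k2 = l**2 + m**2 + n**2

laplacian = sp.diff(psi, x, 2) + sp.diff(psi, y, 2) + sp.diff(psi, z, 2)
print(sp.simplify(laplacian + k2*psi))   # 0
```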

Helmholtz Equation in Cylindrical Coordinates

In cylindrical coordinate system $(\rho,\varphi,z)$, Helmholtz equation (1) is written as
$$\frac{1}{\rho}\frac{\partial}{\partial\rho}\left(\rho\frac{\partial\psi}{\partial\rho}\right)+\frac{1}{\rho^2}\frac{\partial^2\psi}{\partial\varphi^2}+\frac{\partial^2\psi}{\partial z^2}+k^2\psi=0.\ \ \ \ \ \mbox{(5)}$$

We assume that $\psi(\rho,\varphi,z)=P(\rho)\Phi(\varphi)Z(z)$. Then (5) can be written as
$$\frac{\Phi Z}{\rho}\frac{d}{d\rho}\left(\rho\frac{dP}{d\rho}\right)+\frac{PZ}{\rho^2}\frac{d^2\Phi}{d\varphi^2}+P\Phi\frac{d^2Z}{dz^2}+k^2P\Phi Z=0.\ \ \ \ \ \mbox{(6)}$$
As we did in the rectangular coordinate system, dividing (6) by $P\Phi Z$ and introducing separation constants, we can separate (6) into three ordinary differential equations
\begin{align*}
\frac{d^2Z}{dz^2}=l^2Z,\\
\frac{d^2\Phi}{d\varphi^2}=-m^2\Phi,\\
\rho\frac{d}{d\rho}\left(\rho\frac{dP}{d\rho}\right)+(n^2\rho^2-m^2)P=0,\ \ \ \ \ \mbox{(7)}
\end{align*}
where $n^2=k^2+l^2$. The last equation (7) is Bessel’s differential equation.

The general solution of Helmholtz equation in cylindrical coordinates is given by
$$\psi(\rho,\varphi,z)=\sum_{m,n}a_{mn}P_{mn}(\rho)\Phi_m(\varphi)Z_n(z).$$
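One can also check numerically that $P(\rho)=J_m(n\rho)$ satisfies the radial equation (7). The following sketch (my own, with the arbitrarily chosen order $m=2$ and separation constant $n=3$) uses scipy's Bessel routines:

```python
# Numerical check of (7) for P(rho) = J_m(n rho).
import numpy as np
from scipy.special import jv, jvp

m, n = 2, 3.0                      # assumed order and separation constant
rho = np.linspace(0.1, 5.0, 50)

P   = jv(m, n*rho)
dP  = n*jvp(m, n*rho, 1)           # dP/drho  (chain rule)
d2P = n**2*jvp(m, n*rho, 2)        # d^2P/drho^2

# rho d/drho(rho dP/drho) + (n^2 rho^2 - m^2) P  =  rho^2 P'' + rho P' + (n^2 rho^2 - m^2) P
residual = rho**2*d2P + rho*dP + (n**2*rho**2 - m**2)*P
print(np.max(np.abs(residual)))    # essentially zero (round-off only)
```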

Helmholtz Equation in Spherical Coordinates

In spherical coordinates $(r,\theta,\varphi)$, Helmholtz equation (1) is written as
$$\frac{1}{r^2\sin\theta}\left[\sin\theta\frac{\partial}{\partial r}\left(r^2\frac{\partial\psi}{\partial r}\right)+\frac{\partial}{\partial\theta}\left(\sin\theta\frac{\partial\psi}{\partial\theta}\right)+\frac{1}{\sin\theta}\frac{\partial^2\psi}{\partial\varphi^2}\right]=-k^2\psi.\ \ \ \ \ \mbox{(8)}$$
Assume that $\psi(r,\theta,\varphi)=R(r)\Theta(\theta)\Phi(\varphi)$. Then (8) can be written as
$$\frac{1}{Rr^2}\frac{d}{dr}\left(r^2\frac{dR}{dr}\right)+\frac{1}{\Theta r^2\sin\theta}\frac{d}{d\theta}\left(\sin\theta\frac{d\Theta}{d\theta}\right)+\frac{1}{\Phi r^2\sin^2\theta}\frac{d^2\Phi}{d\varphi^2}=-k^2.\ \ \ \ \ \mbox{(9)}$$
By introducing separation constants, (9) is separated into three ordinary differential equations
\begin{align*}
\frac{1}{\Phi}\frac{d^2\Phi}{d\varphi^2}=-m^2,\\
\frac{1}{\sin\theta}\frac{d}{d\theta}\left(\sin\theta\frac{d\Theta}{d\theta}\right)+\left(Q-\frac{m^2}{\sin^2\theta}\right)\Theta=0,\ \ \ \ \ \mbox{(10)}\\
\frac{1}{r^2}\frac{d}{dr}\left(r^2\frac{dR}{dr}\right)+\left(k^2-\frac{Q}{r^2}\right)R=0.\ \ \ \ \ \mbox{(11)}
\end{align*}
The second equation (10) is the associated Legendre equation with $Q=l(l+1)$. The third equation (11) is the spherical Bessel equation when $k^2>0$.
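As a check (my own, for the specific case $l=1$, so that $Q=l(l+1)=2$), the spherical Bessel function $R(r)=j_l(kr)$ indeed satisfies the radial equation (11); sympy's jn gives its explicit elementary form:

```python
# Verify (11) for R(r) = j_1(k r), the spherical Bessel function of order 1.
import sympy as sp

r, k = sp.symbols('r k', positive=True)
l = 1
Q = l*(l + 1)

R = sp.expand_func(sp.jn(l, k*r))   # j_1(kr) = sin(kr)/(kr)^2 - cos(kr)/(kr)
residual = sp.diff(r**2*sp.diff(R, r), r)/r**2 + (k**2 - Q/r**2)*R
print(sp.simplify(residual))        # 0
```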

The general solution of Helmholtz equation (8) is then given by
$$\psi(r,\theta,\varphi)=\sum_{Q,m}a_{Qm}R_Q(r)\Theta_{Qm}(\theta)\Phi_m(\varphi).$$

The restriction that $k^2$ be a constant is unnecessary; for instance, the separation process is still possible when $k^2=f(r)$. In the hydrogen atom problem in quantum mechanics, where $k^2=f(r)$ comes from the Coulomb potential, (11) leads to the associated Laguerre equation.

Cylindrical Resonant Cavity

In this lecture, we discuss the cylindrical resonant cavity as an application of Bessel functions.

Recall that electromagnetic waves in vacuum can be described by the following four equations, called Maxwell’s equations (in vacuum):
\begin{align*}
\nabla\cdot B=0,\\
\nabla\cdot E=0,\\
\nabla\times B=\epsilon_0\mu_0\frac{\partial E}{\partial t},\\
\nabla\times E=-\frac{\partial B}{\partial t},
\end{align*}
where $E$ is the electric field, $B$ the magnetic induction, $\epsilon_0$ the electric permittivity, and $\mu_0$ the magnetic permeability.
Now,
\begin{align*}
\nabla\times(\nabla\times E)&=-\frac{\partial}{\partial t}(\nabla\times B)\\
&=-\epsilon_0\mu_0\frac{\partial^2E}{\partial t^2}.
\end{align*}

A resonant cavity is an electromagnetic resonator in which waves oscillate inside a hollow space (device). For more details, see the Wikipedia entry for Resonator, in particular for Cavity Resonator.

Here we consider a cylindrical resonant cavity. In the interior of a resonant cavity, electromagnetic waves oscillate with a time dependence $e^{-i\omega t}$, i.e. $E(t,x,y,z)$ can be written as $E=e^{-i\omega t}P(x,y,z),$ where $P(x,y,z)$ is a vector-valued function in $\mathbb R^3$. One can easily show that $\frac{\partial^2E}{\partial t^2}=-\omega^2E$ or
$$\nabla\times(\nabla\times E)=\alpha^2E,$$
where $\alpha^2=\epsilon_0\mu_0\omega^2$. On the other hand,
\begin{align*}
\nabla\times(\nabla\times E)&=\nabla(\nabla\cdot E)-\nabla^2 E\\
&=-\nabla^2E\ \ (\mbox{since}\ \nabla\cdot E=0).
\end{align*}
Thus, the electric field $E$ satisfies the Helmholtz equation
$$\nabla^2E+\alpha^2E=0.$$
Suppose that the cavity is a cylinder with radius $a$ and height $l$. Without loss of generality we may assume that the end surfaces are at $z=0$ and $z=l$. Let $E=E(\rho,\varphi,z)$. Using separation of variables in cylindrical coordinate system, we find that the $z$-component $E_z(\rho,\varphi,z)$ satisfies the scalar Helmholtz equation
$$\nabla^2E_z+\alpha^2E_z=0,$$
where $\alpha^2=\omega^2\epsilon_0\mu_0=\frac{\omega^2}{c^2}$. The mode of $E_z$ is obtained as

$$(E_z)_{mnk}=J_m(\gamma_{mn}\rho)e^{\pm im\varphi}[a_{mn}\sin kz+b_{mn}\cos kz].\ \ \ \ \ \mbox{(1)}$$
Here $k$ is a separation constant. Consider the boundary conditions $\frac{\partial E_z}{\partial z}(z=0)=\frac{\partial E_z}{\partial z}(z=l)=0$ and $E_z(\rho=a)=0$. The boundary conditions $\frac{\partial E_z}{\partial z}(z=0)=\frac{\partial E_z}{\partial z}(z=l)=0$ imply that $a_{mn}=0$ and
$$k=\frac{p\pi}{l},\ p=0,1,2,\cdots.$$
The boundary condition $E_z(\rho=a)=0$ gives
$$\gamma_{mn}=\frac{\alpha_{mn}}{a},$$
where $\alpha_{mn}$ is the $n$th zero of $J_m$. Thus the mode (1) is written as
$$(E_z)_{mnp}=b_{mn}J_m\left(\frac{\alpha_{mn}}{a}\rho\right)e^{\pm im\varphi}\cos\frac{p\pi}{l}z,\ \ \ \ \ \mbox{(2)}$$
where $p=0,1,2,\cdots$. In physics, the mode (2) is called the transverse magnetic mode, or shortly the TM mode, of oscillation.

We have
\begin{align*}\gamma_{mn}^2&=\alpha^2-k^2\\&=\frac{\omega^2}{c^2}-\frac{p^2\pi^2}{l^2}.\end{align*}
Hence the TM modes have resonant frequencies
$$\omega_{mnp}=c\sqrt{\frac{\alpha_{mn}^2}{a^2}+\frac{p^2\pi^2}{l^2}},\ \left\{\begin{aligned}
m&=0,1,2,\cdots\\
n&=1,2,3,\cdots\\
p&=0,1,2,\cdots.\end{aligned}
\right.
$$
For more details about transverse modes, click here and here.
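For concreteness, here is a short sketch (my own, with the assumed cavity dimensions $a=5\ \mbox{cm}$ and $l=10\ \mbox{cm}$) that evaluates the TM resonant frequencies using scipy's Bessel function zeros:

```python
# TM-mode resonant frequencies omega_{mnp} = c sqrt(alpha_{mn}^2/a^2 + p^2 pi^2/l^2),
# where alpha_{mn} is the n-th positive zero of J_m.
import numpy as np
from scipy.special import jn_zeros
from scipy.constants import c, pi

a, l = 0.05, 0.10                              # cavity radius and height in meters (assumed)

def omega_TM(m, n, p):
    alpha_mn = jn_zeros(m, n)[-1]              # n-th positive zero of J_m
    return c*np.sqrt((alpha_mn/a)**2 + (p*pi/l)**2)

# lowest TM mode: m = 0, n = 1, p = 0
print(omega_TM(0, 1, 0)/(2*pi)/1e9, "GHz")     # roughly 2.3 GHz for these dimensions
```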

Modeling a Vibrating Drumhead III

In the previous discussion, we finally obtained the solution of the vibrating drumhead problem:
$$u(r,\theta,t)=\sum_{n=0}^\infty\sum_{m=1}^\infty J_n(\lambda_{nm}r)\cos(n\theta)[A_{nm}\cos(\lambda_{nm} ct)+B_{nm}\sin(\lambda_{nm}ct)].$$
In this lecture, we determine the Fourier coefficients $A_{nm}$ and $B_{nm}$ using the initial conditions $u(r,\theta,0)$ and $u_t(r,\theta,0)$. Before we go on, we need to mention two orthogonality relations: the orthogonality of cosine functions and the orthogonality of Bessel functions. First note that
$$\int_0^{2\pi}\cos(n\theta)\cos(k\theta)d\theta=\left\{\begin{array}{ccc}0 & \mbox{if} & n\ne k,\\\pi & \mbox{if} & n=k\geq 1,\\2\pi & \mbox{if} & n=k=0.\end{array}\right.$$
The reason this property is called an orthogonality is that if $V$ is the set of all (Riemann) integrable real-valued functions on the interval $[a,b]$, then $V$ forms a vector space over $\mathbb R$. This vector space is indeed an inner product space with the inner product $$\langle f,g\rangle=\int_a^bf(x)g(x)dx\ \mbox{for}\ f,g\in V.$$
Bessel functions are orthogonal as well in the following sense:
$$\int_0^1J_n(\lambda_{nm}r)J_n(\lambda_{nl}r)rdr=\left\{\begin{array}{ccc}0 & \mbox{if} & m\ne l,\\\frac{1}{2}[J_{n+1}(\lambda_{nm})]^2 & \mbox{if} & m=l.\end{array}\right.$$
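Both orthogonality relations are easy to check numerically. Here is a sketch (my own, using scipy, with $n=1$ and the first few positive zeros of $J_1$) for the Bessel relation:

```python
# Check int_0^1 J_n(lambda_{nm} r) J_n(lambda_{nl} r) r dr against the formula above.
import numpy as np
from scipy.special import jv, jn_zeros
from scipy.integrate import quad

n = 1
lam = jn_zeros(n, 3)                 # lambda_{n1}, lambda_{n2}, lambda_{n3}

def inner(m, l):
    f = lambda r: jv(n, lam[m-1]*r)*jv(n, lam[l-1]*r)*r
    return quad(f, 0, 1)[0]

print(inner(1, 2))                              # ~ 0 (different zeros)
print(inner(2, 2), 0.5*jv(n+1, lam[1])**2)      # the two values agree
```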

From the solution $u(r,\theta,t)$, we obtain the initial position of the drumhead:
$$u(r,\theta,0)=\sum_n\sum_mJ_n(\lambda_{nm}r)\cos(n\theta)A_{nm}.$$
On the other hand, $u(r,\theta,0)=f(r,\theta)$. Multiply
$$\sum_n\sum_mJ_n(\lambda_{nm}r)\cos(n\theta)A_{nm}=f(r,\theta)$$
by $\cos(k\theta)$ and integrate with respect to $\theta$ from $0$ to $2\pi$:
$$\sum_n\sum_mJ_n(\lambda_{nm}r)A_{nm}\int_0^{2\pi}\cos(n\theta)\cos(k\theta)d\theta=\int_0^{2\pi}f(r,\theta)\cos(k\theta)d\theta.$$ The only nonvanishing term of the above series is when $n=k$, so we obtain
$$\pi\sum_mJ_k(\lambda_{km}r)A_{km}=\int_0^{2\pi}f(r,\theta)\cos(k\theta)d\theta.$$ Multiply this equation by $J_k(\lambda_{kl}r)$ and integrate with respect to $r$ from $0$ to $1$:
$$\pi\sum_mA_{km}\int_0^1J_k(\lambda_{km}r)J_k(\lambda_{kl}r)rdr=\int_0^{2\pi}\int_0^1f(r,\theta)\cos(k\theta)J_k(\lambda_{kl}r)rdrd\theta.$$ The only nonvanishing term of this series is when $m=l$. As a result we obtain:
$$A_{kl}=\frac{1}{\pi L_{kl}}\int_0^{2\pi}\int_0^1f(r,\theta)\cos(k\theta)J_k(\lambda_{kl}r)rdrd\theta$$
or
$$A_{nm}=\frac{1}{\pi L_{nm}}\int_0^{2\pi}\int_0^1f(r,\theta)\cos(n\theta)J_n(\lambda_{nm}r)rdrd\theta,\ n,m=1,2,\cdots$$
where
$$L_{nm}=\int_0^1J_n(\lambda_{nm}r)^2rdr=\frac{1}{2}[J_{n+1}(\lambda_{nm})]^2, n=0,1,2,\cdots.$$
For $n=0$ we obtain
$$A_{0m}=\frac{1}{2\pi L_{0m}}\int_0^{2\pi}\int_0^1f(r,\theta)J_0(\lambda_{0m}r)rdrd\theta,\ m=1,2,\cdots.$$
Using
$$u_t(r,\theta,0)=\sum_n\sum_mJ_n(\lambda_{nm}r)\cos(n\theta)B_{nm}\lambda_{nm}c=g(r,\theta),$$
we obtain
\begin{align*}
B_{nm}&=\frac{1}{c\pi L_{nm}\lambda_{nm}}\int_0^{2\pi}\int_0^1g(r,\theta)\cos(n\theta)J_n(\lambda_{nm}r)rdrd\theta,\ n,m=1,2,\cdots,\\
B_{0m}&=\frac{1}{2c\pi L_{0m}\lambda_{0m}}\int_0^{2\pi}\int_0^1g(r,\theta)J_0(\lambda_{0m}r)rdrd\theta,\ m=1,2,\cdots.
\end{align*}
Unfortunately, at this moment I do not know whether I can make an animation of the solution using an open source math software package such as Maxima or Sage. I will let you know if I find a way. In the meantime, if any of you have access to Maple, you can download a Maple worksheet I made here and run it for yourself. In the particular example in the Maple worksheet, I used $f(r,\theta)=J_0(2.4r)+0.10J_0(5.52r)$ and $g(r,\theta)=0$. For an animation of the solution, click here.
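A possible open-source alternative is a rough Python sketch with numpy, scipy, and matplotlib (my own, not the Maple worksheet; it assumes $c=1$). Since $2.4$ and $5.52$ approximate the first two positive zeros of $J_0$, the initial shape above is essentially a superposition of the first two radially symmetric modes with $A_{01}\approx 1$, $A_{02}\approx 0.1$, and all $B_{nm}=0$, so the solution can be animated directly:

```python
# Approximate animation of the drumhead for f = J0(2.4 r) + 0.1 J0(5.52 r), g = 0.
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.animation import FuncAnimation
from scipy.special import jv, jn_zeros

c = 1.0
lam = jn_zeros(0, 2)                      # lambda_{01}, lambda_{02}
A = np.array([1.0, 0.10])                 # approximate Fourier coefficients A_{01}, A_{02}

r = np.linspace(0, 1, 60)
theta = np.linspace(0, 2*np.pi, 60)
R, TH = np.meshgrid(r, theta)
X, Y = R*np.cos(TH), R*np.sin(TH)

def u(t):
    return sum(A[m]*jv(0, lam[m]*R)*np.cos(lam[m]*c*t) for m in range(2))

fig = plt.figure()
ax = fig.add_subplot(projection='3d')

def frame(t):
    ax.clear()
    ax.set_zlim(-1.2, 1.2)
    ax.plot_surface(X, Y, u(t), cmap='viridis')

anim = FuncAnimation(fig, frame, frames=np.linspace(0, 4, 80))  # keep a reference alive
plt.show()
```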