Cylindrical Resonant Cavity

In this lecture, we discuss the cylindrical resonant cavity as an example of an application of Bessel functions.

Recall that electromagnetic waves in vacuum can be described by the following four equations, called Maxwell’s equations (in vacuum):
\begin{align*}
\nabla\cdot B=0,\\
\nabla\cdot E=0,\\
\nabla\times B=\epsilon_0\mu_0\frac{\partial E}{\partial t},\\
\nabla\times E=-\frac{\partial B}{\partial t},
\end{align*}
where $E$ is the electric field, $B$ the magnetic induction, $\epsilon_0$ the electric permittivity, and $\mu_0$ the magnetic permeability.
Now,
\begin{align*}
\nabla\times(\nabla\times E)&=-\frac{\partial}{\partial t}(\nabla\times B)\\
&=-\epsilon_0\mu_0\frac{\partial^2E}{\partial t^2}.
\end{align*}
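Combining this with the vector identity $\nabla\times(\nabla\times E)=\nabla(\nabla\cdot E)-\nabla^2E$ (used again below) and $\nabla\cdot E=0$, each component of $E$ satisfies the wave equation
$$\nabla^2E=\epsilon_0\mu_0\frac{\partial^2E}{\partial t^2},$$
whose waves propagate with speed $c=\frac{1}{\sqrt{\epsilon_0\mu_0}}$, the speed of light. This is where the $c$ appearing below comes from.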

A resonant cavity is an electromagnetic resonator in which waves oscillate inside a hollow space (device). For more details see the Wikipedia entry for Resonator, in particular for Cavity Resonator.

Here we consider a cylindrical resonant cavity. In the interior of a resonant cavity, electromagnetic waves oscillate with a time dependence $e^{-i\omega t}$, i.e. $E(t,x,y,z)$ can be written as $E=e^{-i\omega t}P(x,y,z),$ where $P(x,y,z)$ is a vector-valued function in $\mathbb R^3$. One can easily show that $\frac{\partial^2E}{\partial t^2}=-\omega^2E$ or
$$\nabla\times(\nabla\times E)=\alpha^2E,$$
where $\alpha^2=\epsilon_0\mu_0\omega^2$. On the other hand,
\begin{align*}
\nabla\times(\nabla\times E)&=\nabla(\nabla\cdot E)-\nabla^2E\\
&=-\nabla^2E,
\end{align*}
since $\nabla\cdot E=0$.
Thus, the electric field $E$ satisfies the Helmholtz equation
$$\nabla^2E+\alpha^2E=0.$$
Suppose that the cavity is a cylinder with radius $a$ and height $l$. Without loss of generality we may assume that the end surfaces are at $z=0$ and $z=l$. Let $E=E(\rho,\varphi,z)$. Using separation of variables in the cylindrical coordinate system, we find that the $z$-component $E_z(\rho,\varphi,z)$ satisfies the scalar Helmholtz equation
$$\nabla^2E_z+\alpha^2E_z=0,$$
where $\alpha^2=\omega^2\epsilon_0\mu_0=\frac{\omega^2}{c^2}$. The modes of $E_z$ are obtained as

$$(E_z)_{mnk}=\sum_{m,n}J_m(\gamma_{mn}\rho)e^{\pm im\varphi}[a_{mn}\sin kz+b_{mn}\cos kz].\ \ \ \ \ \mbox{(1)}$$ Here $k$ is a separation constant. For a cavity with perfectly conducting walls, the boundary conditions are $\frac{\partial E_z}{\partial z}(z=0)=\frac{\partial E_z}{\partial z}(z=l)=0$ and $E_z(\rho=a)=0$. The boundary conditions $\frac{\partial E_z}{\partial z}(z=0)=\frac{\partial E_z}{\partial z}(z=l)=0$ imply that $a_{mn}=0$ and $$k=\frac{p\pi}{l},\ p=0,1,2,\cdots.$$ The boundary condition $E_z(\rho=a)=0$ forces $$\gamma_{mn}=\frac{\alpha_{mn}}{a},$$ where $\alpha_{mn}$ is the $n$th zero of $J_m$. Thus the mode (1) is written as
$$(E_z)_{mnp}=\sum_{m,n}b_{mn}J_m\left(\frac{\alpha_{mn}}{a}\rho\right)e^{\pm im\varphi}\cos\frac{p\pi}{l}z,\ \ \ \ \ \mbox{(2)}$$ where $p=0,1,2,\cdots$. In physics, the mode (2) is called the transverse magnetic mode, or TM mode for short.

We have \begin{align*}\gamma^2&=\alpha^2-k^2\\&=\frac{\omega^2}{c^2}-\frac{p^2\pi^2}{l^2}.\end{align*} Hence the TM mode has resonant frequencies
$$\omega_{mnp}=c\sqrt{\frac{\alpha_{mn}^2}{a^2}+\frac{p^2\pi^2}{l^2}},\ \left\{\begin{aligned}
m&=0,1,2,\cdots\\
n&=1,2,3,\cdots\\
p&=0,1,2,\cdots.\end{aligned}
\right.
$$
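For a feel for the numbers, here is a minimal Maxima sketch evaluating the lowest TM mode $\omega_{010}$; the radius $a$ and height $l$ below are made-up sample values, not taken from any particular cavity.

c : 2.99792458e8$        /* speed of light in m/s */
a : 0.05$                /* cavity radius in m (assumed) */
l : 0.10$                /* cavity height in m (assumed) */
p : 0$                   /* longitudinal mode number */
alpha01 : 2.404825557$   /* first zero of J0 */
omega : c*sqrt(alpha01^2/a^2 + p^2*%pi^2/l^2)$
float(omega/(2*%pi));    /* about 2.3 GHz */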

Bessel Functions of the First Kind $J_n(x)$ I: Generating Function, Recurrence Relation, Bessel’s Equation

Let us begin with the generating function

$$g(x,t) = e^{\frac{x}{2}\left(t-\frac{1}{t}\right)}.$$
Expanding this function in a Laurent series, we obtain
$$e^{\frac{x}{2}\left(t-\frac{1}{t}\right)} = \sum_{n=-\infty}^\infty J_n(x)t^n.$$
The coefficient of $t^n$, $J_n(x)$, is defined to be a Bessel function of the first kind of order $n$.
Now, we determine $J_n(x)$.
\begin{align*}
e^{\frac{x}{2}t}e^{-\frac{x}{2t}}&=\sum_{r=0}^\infty\left(\frac{x}{2}\right)^r\frac{t^r}{r!} \sum_{s=0}^\infty(-1)^s\left( \frac{x}{2}\right)^s \frac{t^{-s}}{s!}\\
&=\sum_{r=0}^\infty\sum_{s=0}^\infty\frac{(-1)^s}{r!s!}\left(\frac{x}{2}\right)^{r+s}t^{r-s}.
\end{align*}
Set $r=n+s$. Then for $n\ge 0$ we obtain
$$J_n(x)=\sum_{s=0}^\infty \frac{(-1)^s}{s!(n+s)!}\left(\frac{x}{2}\right)^{n+2s}.$$
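As a quick sanity check, we can compare a truncated form of this series with Maxima’s built-in bessel_j (a minimal sketch; the truncation point $s=20$ is an arbitrary choice):

Jseries(n, x) := sum((-1)^s/(s!*(n+s)!)*(x/2)^(n+2*s), s, 0, 20)$
Jseries(0, 2.0);      /* ~ 0.2238907791 */
bessel_j(0, 2.0);     /* same value */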


Now we find $J_{-n}(x)$. Replacing $n$ by $-n$ in the above series for $J_n(x)$, we obtain
$$J_{-n}(x)=\sum_{s=0}^\infty\frac{(-1)^s}{s!(s-n)!} \left(\frac{x}{2}\right)^{2s-n}.$$
However, $(s-n)!=\Gamma(s-n+1)\rightarrow\infty$ for $s=0,1,\cdots,(n-1)$, because the Gamma function has poles at the nonpositive integers, so those terms vanish and the series may be considered to begin at $s=n$. Replacing $s$ by $s+n$, we obtain
$$J_{-n}(x)=\sum_{s=0}^ \infty\frac{(-1)^{s+n}}{s!(s+n)!}\left( \frac{x}{2} \right)^{n+2s}.$$ Note that $J_n(x)$ and $J_{-n}(x)$ satisfy the relation
$$J_{-n}(x)=(-1)^nJ_n(x).$$

Let us differentiate the generating function $g(x,t)$ with respect to $t$:
\begin{align*}
\frac{\partial g(x,t)}{\partial t} &=\frac{x}{2}\left(1+ \frac{1}{t^2}\right) e^{\frac{x}{2}\left(t-\frac{1}{t}\right)}\\
&=\sum_{n=-\infty}^\infty n J_n(x) t^{n-1}.
\end{align*}
Replace $e^{\frac{x}{2}\left(t-\frac{1}{t}\right)}$ by $\sum_{n=-\infty}^\infty J_n(x) t^n$.
Then
\begin{align*}
\sum_{n=-\infty}^\infty \frac{x}{2} (1+ \frac{1}{t^2}) J_n(x) t^{n}&=\sum_{n=-\infty}^\infty \frac{x}{2} [J_n(x) t^n + J_n(x) t^{n-2}]\\
&=\sum_{n=-\infty}^\infty \frac{x}{2} [J_{n-1}(x) + J_{n+1}(x)] t^{n-1}.
\end{align*}
Thus,
$$\sum_{n=-\infty}^\infty \frac{x}{2} [J_{n-1}(x) + J_{n+1}(x)] t^{n-1}=\sum_{n=-\infty}^\infty n J_n(x) t^{n-1}$$
or we obtain the recurrence relation,
\begin{equation}J_{n-1}(x) + J_{n+1}(x) = \frac{2n}{x} J_n(x).\end{equation}
Now we differentiate $g(x,t)$ with respect to $x$:
\begin{align*}
\frac{\partial g(x,t)}{\partial x}&=\frac{1}{2}\left(t-\frac{1}{t}\right)e^{\frac{x}{2}\left(t-\frac{1}{t}\right)}\\
& =\sum_{n=-\infty}^\infty J_n'(x)t^n.
\end{align*}
This leads to the recurrence relation
\begin{equation}J_{n-1}(x) - J_{n+1}(x) = 2 J_n'(x).\end{equation}
As a special case of this recurrence relation (take $n=0$ and use $J_{-1}(x)=-J_1(x)$), we obtain
$$J_{0}'(x)=-J_1(x).$$
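Both recurrence relations are easy to check numerically in Maxima (a small sketch; the order $n=1$ and the test point $x=1.5$ are arbitrary choices, and the derivative is approximated by a central difference):

x0 : 1.5$
float(bessel_j(0, x0) + bessel_j(2, x0) - (2/x0)*bessel_j(1, x0));  /* ~ 0 */
h : 1e-6$
dJ1 : (bessel_j(1, x0+h) - bessel_j(1, x0-h))/(2*h)$  /* central-difference J1' */
float(bessel_j(0, x0) - bessel_j(2, x0) - 2*dJ1);     /* ~ 0 */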
Adding (1) and (2), we have
\begin{equation}J_{n-1}(x)=\frac{n}{x}J_n(x) + J_n'(x).\end{equation}
Multiplying (3) by $x^n$:
\begin{align*}
x^n J_{n-1}(x) & = n x^{n-1} J_n(x) + x^n J_n'(x)\\
& = \frac{d}{dx}[ x^n J_n(x)].
\end{align*}
Subtracting (2) from (1), we have
\begin{equation}J_{n+1}(x) = \frac{n}{x} J_n(x) - J_n'(x).\end{equation}
Multiplying (4) by $-x^{-n}$:
\begin{align*}
-x^{-n} J_{n+1}(x) & = -n x^{-n-1} J_n(x) + x^{-n} J_n'(x)\\
& = \frac{d}{dx}[x^{-n} J_n(x)].
\end{align*}

Using the recurrence relations, we can show that the Bessel functions $J_n(x)$ are solutions of Bessel’s differential equation. The recurrence relation (3) can be written as
\begin{equation}x J_n'(x) = x J_{n-1}(x) - n J_n(x).\end{equation}
Differentiating this equation with respect to $x$, we obtain
$$J_n'(x) + x J_n^{\prime\prime}(x) = J_{n-1}(x) + x J_{n-1}'(x) - n J_n'(x)$$
or
\begin{equation}x J_n^{\prime\prime}(x) + (n+1) J_n'(x) - x J_{n-1}'(x) - J_{n-1}(x) = 0.\end{equation}
Subtracting (5) times $n$ from (6) times $x$ results in the equation
\begin{equation}x^2 J_n^{\prime\prime}(x) + x J_n'(x) - n^2 J_n(x) + x(n-1) J_{n-1}(x) - x^2 J_{n-1}'(x) = 0.\end{equation}
Replace $n$ by $n-1$ in (4) and multiply the resulting equation by $x^2$ to get the equation
\begin{equation}x^2 J_n(x) = x (n-1) J_{n-1}(x) - x^2 J_{n-1}'(x).\end{equation}
Substituting (8) into (7), we obtain
\begin{equation}\label{eq:bessel9}x^2 J_n^{\prime\prime}(x) + x J_n'(x) + (x^2 – n^2) J_n(x) = 0.\end{equation}
This is Bessel’s equation. Hence the Bessel functions $J_n(x)$ are the solutions of Bessel’s equation.
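We can also confirm this numerically in Maxima by plugging, say, $J_2$ into the left-hand side of Bessel’s equation and evaluating the residual at an arbitrary point (a minimal sketch; Maxima differentiates bessel_j symbolically):

res : x^2*diff(bessel_j(2, x), x, 2) + x*diff(bessel_j(2, x), x) + (x^2 - 4)*bessel_j(2, x)$
float(subst(1.7, x, res));   /* ~ 0, up to roundoff */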

Modeling a Vibrating Drumhead III

In the previous discussion, we finally obtained the solution of the vibrating drumhead problem:
$$u(r,\theta,t)=\sum_{n=0}^\infty\sum_{m=1}^\infty J_n(\lambda_{nm}r)\cos(n\theta)[A_{nm}\cos(\lambda_{nm} ct)+B_{nm}\sin(\lambda_{nm}ct)].$$
In this lecture, we determine the Fourier coefficients $A_{nm}$ and $B_{nm}$ using the initial conditions $u(r,\theta,0)$ and $u_t(r,\theta,0)$. Before we go on, we need to mention two orthogonality relations: the orthogonality of cosine functions and the orthogonality of Bessel functions. First note that
$$\int_0^{2\pi}\cos(n\theta)\cos(k\theta)d\theta=\left\{\begin{array}{ccc}0 & \mbox{if} & n\ne k,\\\pi & \mbox{if} & n=k\ne 0,\\2\pi & \mbox{if} & n=k=0.\end{array}\right.$$
The reason this property is called an orthogonality is that if $V$ is the set of all (Riemann) integrable real-valued functions on the interval $[a,b]$, then $V$ forms a vector space over $\mathbb R$. This vector space is indeed an inner product space with the inner product $$\langle f,g\rangle=\int_a^bf(x)g(x)dx\ \mbox{for}\ f,g\in V.$$
Bessel functions are orthogonal as well in the following sense (here $\lambda_{nm}$ denotes the $m$th positive zero of $J_n$):
$$\int_0^1J_n(\lambda_{nm}r)J_n(\lambda_{nl}r)rdr=\left\{\begin{array}{ccc}0 & \mbox{if} & m\ne l,\\\frac{1}{2}[J_{n+1}(\lambda_{nm})]^2 & \mbox{if} & m=l.\end{array}\right.$$
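Here is a quick numerical check of this relation in Maxima (a minimal sketch; the zeros $\lambda_{01}$ and $\lambda_{02}$ are entered as truncated numerical values):

l01 : 2.404825558$   /* lambda_{01}, first zero of J0 */
l02 : 5.520078110$   /* lambda_{02}, second zero of J0 */
quad_qags(bessel_j(0, l01*r)*bessel_j(0, l02*r)*r, r, 0, 1);  /* first entry ~ 0 */
quad_qags(bessel_j(0, l01*r)^2*r, r, 0, 1);   /* first entry ~ 0.5*bessel_j(1, l01)^2 ~ 0.1348 */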

From the solution $u(r,\theta,t)$, we obtain the initial position of the drumhead:
$$u(r,\theta,0)=\sum_n\sum_mJ_n(\lambda_{nm}r)\cos(n\theta)A_{nm}.$$
On the other hand, $u(r,\theta,0)=f(r,\theta)$. Multiply
$$\sum_n\sum_mJ_n(\lambda_{nm}r)\cos(n\theta)A_{nm}=f(r,\theta)$$
by $\cos(k\theta)$ and integrate with respect to $\theta$ from $0$ to $2\pi$:
$$\sum_n\sum_mJ_n(\lambda_{nm}r)A_{nm}\int_0^{2\pi}\cos(n\theta)\cos(k\theta)d\theta=\int_0^{2\pi}f(r,\theta)\cos(k\theta)d\theta.$$ The only nonvanishing terms of the above series are those with $n=k$, so for $k\ge 1$ we obtain
$$\pi\sum_mJ_k(\lambda_{km}r)A_{km}=\int_0^{2\pi}f(r,\theta)\cos(k\theta)d\theta.$$ Multiply this equation by $J_k(\lambda_{kl}r)$ and integrate with respect to $r$ from $0$ to $1$:
$$\pi\sum_mA_{km}\int_0^1J_k(\lambda_{km}r)J_k(\lambda_{kl}r)rdr=\int_0^{2\pi}\int_0^1f(r,\theta)\cos(k\theta)J_k(\lambda_{kl}r)rdrd\theta.$$ The only nonvanishing term of this series is the one with $m=l$. As a result we obtain
$$A_{kl}=\frac{1}{\pi L_{kl}}\int_0^{2\pi}\int_0^1f(r,\theta)\cos(k\theta)J_k(\lambda_{kl}r)rdrd\theta$$
or
$$A_{nm}=\frac{1}{\pi L_{nm}}\int_0^{2\pi}\int_0^1f(r,\theta)\cos(n\theta)J_n(\lambda_{nm}r)rdrd\theta,\ n,m=1,2,\cdots$$
where
$$L_{nm}=\int_0^1J_n(\lambda_{nm}r)^2rdr=\frac{1}{2}[J_{n+1}(\lambda_{nm})]^2, n=0,1,2,\cdots.$$
For $n=0$, the $\theta$-integral contributes $2\pi$ instead of $\pi$, and we obtain
$$A_{0m}=\frac{1}{2\pi L_{0m}}\int_0^{2\pi}\int_0^1f(r,\theta)J_0(\lambda_{0m}r)rdrd\theta,\ m=1,2,\cdots.$$
Using
$$u_t(r,\theta,0)=\sum_n\sum_mJ_n(\lambda_{nm}r)\cos(n\theta)B_{nm}\lambda_{nm}c=g(r,\theta),$$
we obtain
\begin{align*}
B_{nm}&=\frac{1}{c\pi L_{nm}\lambda_{nm}}\int_0^{2\pi}\int_0^1g(r,\theta)\cos(n\theta)J_n(\lambda_{nm}r)rdrd\theta,\ n,m=1,2,\cdots,\\
B_{0m}&=\frac{1}{2c\pi L_{0m}\lambda_{0m}}\int_0^{2\pi}\int_0^1g(r,\theta)J_0(\lambda_{0m}r)rdrd\theta,\ m=1,2,\cdots.
\end{align*}
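As a numerical illustration, here is a minimal Maxima sketch computing $A_{01}$ and $A_{02}$ for the axisymmetric sample shape $f(r)=J_0(2.4r)+0.10J_0(5.52r)$ used in the Maple worksheet mentioned below; for axisymmetric $f$ the $\theta$-integral contributes $2\pi$, so $A_{0m}=\frac{1}{L_{0m}}\int_0^1f(r)J_0(\lambda_{0m}r)rdr$.

l01 : 2.404825558$
l02 : 5.520078110$
f(r) := bessel_j(0, 2.4*r) + 0.10*bessel_j(0, 5.52*r)$
L0(lam) := 0.5*bessel_j(1, lam)^2$                   /* normalization L_{0m} */
A0(lam) := first(quad_qags(f(r)*bessel_j(0, lam*r)*r, r, 0, 1))/L0(lam)$
A0(l01);   /* close to 1, since 2.4 is approximately lambda_{01} */
A0(l02);   /* close to 0.10 */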
Unfortunately, at this moment I do not know whether I can make an animation of the solution using an open source math software package such as Maxima or Sage. I will let you know if I find a way. In the meantime, if you have access to Maple, you can run the Maple worksheet I made for this problem. In the particular example in that worksheet, I used $f(r,\theta)=J_0(2.4r)+0.10J_0(5.52r)$ and $g(r,\theta)=0$.

Modeling a Vibrating Drumhead II

When you solve an initial boundary value problem, the boundary condition is used to find the eigenvalues, while the initial conditions are used to determine the Fourier coefficients of the series solution. In the previous discussion, we found the solution $R(r)=AJ_n(\lambda r)$ of Bessel’s equation. The boundary condition $R(1)=0$ results in the equation $$J_n(\lambda)=0.$$ This is an eigenvalue equation and its solutions are called Bessel zeros. We can easily find Bessel zeros using Maple. First open Maple and type

Bess:=(n,m)->evalf(BesselJZeros(n,m));

and press enter. This defines a function Bess that returns the $m$-th zero $\lambda_{nm}$ of $J_n(\lambda)$ for a given nonnegative integer $n$ and positive integer $m$.


To calculate, for example, the 1st zero of $J_0(\lambda)$, i.e. $\lambda_{01}$, simply type

Bess(0,1);

and press enter. It will return the value $2.40482558$.

In Maxima, unfortunately such a command (as BesselJZeros in Maple) does not appear to be available. But we can still find Bessel zeros using an elementary technique from calculus, Newton’s Method. If the initial approximation $x_0$ to a zero of a differentiable function $f(x)$ is given, then the $(n+1)$-th approximation is given by $$x_{n+1}=x_n-\frac{f(x_n)}{f'(x_n)}.$$ In Maxima, Newton’s Method can be performed as follows. As an example we calculate the 1st Bessel zero of $J_0(\lambda)$.

First we define $J_0(x)$ as a function $f(x)$:

(%i1) f(x):=bessel_j(0,x);

Newton’s method applied to $f(x)$ is performed by

(%i2) newton(t):=subst(t,x,x-f(x)/diff(f(x),x));
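
To see where the zeros lie, we can first draw the graph of $J_0$, e.g. with Maxima’s plot2d (the plotting range $[0,10]$ is an arbitrary choice):

plot2d(f(x), [x, 0, 10]);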

To use Newton’s method we need an initial approximation. We can make a guess from the graph of $J_0$: the first zero is near 2, so we choose $x_0=2$. If you just run

(%i3) newton(2);

the output will not be a numerical value. In order to have a numerical value as output, run

(%i4) ev(newton(2),numer);

The output will be

(%o4)                          2.388210765567796

In order to calculate the next approximation $x_2$, simply run

(%i5) ev(newton(%),numer);

and its output is

(%o5)                          2.404769548213657

For the next approximation $x_3$, again run

(%i6) ev(newton(%),numer);

and its output is

(%o6)                          2.404825557043583

This value is accurate to nine decimal places. If you think this is good enough, you can stop.
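As an aside, another way to compute a Bessel zero in Maxima is the general-purpose root finder find_root, which brackets the zero (the bracket $[2,3]$ can be read off from the graph of $J_0$ drawn earlier):

find_root(bessel_j(0, x), x, 2, 3);   /* ~ 2.404825557695773 */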


Let us denote the eigenvalues by $\lambda_{nm}$. Then the radial eigenfunctions are given by $$R_{nm}(r)=J_n(\lambda_{nm}r).$$
The solution $U(r,\theta)$ to the Helmholtz boundary value problem is
$$U_{nm}(r,\theta)=J_n(\lambda_{nm}r)[A\cos(n\theta)+B\sin(n\theta)].$$
Using a simple trigonometric identity, one can write
$$A\cos(n\theta)+B\sin(n\theta)=\sqrt{A^2+B^2}\cos(n(\theta-\psi))$$ for some $\psi$. Absorbing the phase $\psi$ into the choice of the $\theta$-origin, $U_{nm}(r,\theta)$ can be written as
$$U_{nm}(r,\theta)=AJ_n(\lambda_{nm}r)\cos(n\theta).$$
The oscillatory factors satisfy the equation $$\ddot{T}_{nm}+\lambda_{nm}^2c^2T_{nm}=0$$ and they are
$$T_{nm}(t)=C\cos(\lambda_{nm}ct)+D\sin(\lambda_{nm}ct).$$
Now we are ready to put pieces together to write down the solution $u(r,\theta,t)$ as the following Fourier series:
$$u(r,\theta,t)=\sum_{n=0}^\infty\sum_{m=1}^\infty J_n(\lambda_{nm}r)\cos(n\theta)[A_{nm}\cos(\lambda_{nm} ct)+B_{nm}\sin(\lambda_{nm}ct)].$$

The only thing that remains for us to do is to determine the unknown Fourier coefficients $A_{nm}$ and $B_{nm}$. We can determine them using the initial conditions $u(r,\theta,0)$ and $u_t(r,\theta,0)$. This will be discussed in the next lecture.

Symmetry and Transformations II

In these notes, we study the remaining transformations of a function, namely horizontal shifting, vertical shifting, and dilation (stretching and shrinking). We also discuss how these transformations can be used to sketch the graph of a complicated function from its most basic form.

Horizontal Shifting (translation to the right or left):

Example. Let us plot $y=x^2$, $y=(x-3)^2$, and $y=(x+3)^2$ all together. As you can see clearly, the graph of $y=(x-3)^2$ is the graph of $y=x^2$ translated 3 units to the right, while the graph of $y=(x+3)^2$ is the graph of $y=x^2$ translated 3 units to the left.
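One way to reproduce the plot, e.g. with Maxima’s plot2d (the plotting window is an arbitrary choice), is

plot2d([x^2, (x-3)^2, (x+3)^2], [x, -7, 7], [y, 0, 20]);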

Let $a>0$. Then:

The graph of $y=f(x-a)$ is the graph of $y=f(x)$ translated $a$ units to the right;

The graph of $y=f(x+a)$ is the graph of $y=f(x)$ translated $a$ units to the left.

Vertical Shifting (translation upward or downward):

Example. Let us plot $y=x^2$, $y=x^2+3$, and $y=x^2-3$ all together. As you can see clearly, the graph of $y=x^2+3$ is the graph of $y=x^2$ translated 3 units upward, while the graph of $y=x^2-3$ is the graph of $y=x^2$ translated 3 units downward.
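Again, the plot can be reproduced with, e.g., Maxima’s plot2d (arbitrary window):

plot2d([x^2, x^2+3, x^2-3], [x, -4, 4], [y, -5, 15]);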

Let $a>0$. Then:

The graph of $y=f(x)+a$ is the graph of $y=f(x)$ translated $a$ units upward;

The graph of $y=f(x)-a$ is the graph of $y=f(x)$ translated $a$ units downward.

Stretching and Shrinking:

Example. Let us compare the graphs of $y=x^2$, $y=2x^2$, and $y=\frac{1}{2}x^2$. The graph of $y=2x^2$ looks thinner than the graph of $y=x^2$, while the graph of $y=\frac{1}{2}x^2$ looks wider than the graph of $y=x^2$. The graph of $y=2x^2$ can be considered as a (vertical) stretching of the graph of $y=x^2$. Similarly, the graph of $y=\frac{1}{2}x^2$ can be considered as a (vertical) shrinking of the graph of $y=x^2$.
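The comparison plot can be drawn with, e.g., Maxima’s plot2d (arbitrary window):

plot2d([x^2, 2*x^2, (1/2)*x^2], [x, -3, 3], [y, 0, 10]);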

If $a>1$, the graph of $y=af(x)$ is a (vertical) stretching of the graph of $y=f(x)$;

If $0<a<1$, the graph of $y=af(x)$ is a (vertical) shrinking of the graph of $y=f(x)$.

Graphing functions using transformations: Sometimes we can sketch the graph of a function by analyzing the transformations involved, such as reflection, stretching/shrinking, and horizontal/vertical shifting.

Example. Sketch the graph of $f(x)=\frac{1}{3}x^3+2$ using transformations.

Solution. The most basic form of the function is $y=x^3$. So we obtain the graph of $f(x)$ from $y=x^3$

  1. first by shrinking it vertically: $y=\frac{1}{3}x^3$;
  2. and then translating the resulting graph 2 units upward: $y=\frac{1}{3}x^3+2$.

Example. Sketch the graph of $g(x)=-(x-3)^2+5$ using transformations.

Solution. The most basic form of the function is $y=x^2$. We obtain the graph of $g(x)$ from $y=x^2$

  1. first by reflecting it in the $x$-axis: $y=-x^2$;
  2. next translating the resulting graph 3 units to the right: $y=-(x-3)^2$;
  3. and finally translating the resulting graph 5 units upward: $y=-(x-3)^2+5$.
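One can check the final sketch by drawing the stages together, e.g. with Maxima’s plot2d (the window is an arbitrary choice):

plot2d([x^2, -x^2, -(x-3)^2, -(x-3)^2+5], [x, -3, 8], [y, -15, 10]);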