When you solve an initial boundary value problem, the boundary conditions are used to find the eigenvalues, while the initial conditions are used to determine the Fourier coefficients of the series solution. In the previous discussion, we found the solution $R(r)=AJ_n(\lambda r)$ of Bessel’s equation. The boundary condition $R(1)=0$ results in the equation $$J_n(\lambda)=0.$$ This is an eigenvalue equation, and its solutions are called Bessel zeros. We can easily find Bessel zeros using Maple. First open Maple and type
Bess:=(n,m)->evalf(BesselJZeros(n,m));
and press enter. This defines a function Bess that returns the $m$-th zero $\lambda_{nm}$ of $J_n(\lambda)$ for a given order $n$ and positive integer $m$, as shown in the screenshot.
To calculate, for example, the first zero of $J_0(\lambda)$, i.e. $\lambda_{01}$, simply type
Bess(0,1);
and press enter. It will return the value $2.404825558$ as shown in the screenshot.
In Maxima, unfortunately, a command such as Maple’s BesselJZeros does not appear to be available. But we can still find Bessel zeros using an elementary technique from calculus, Newton’s method. If an initial approximation $x_0$ to a zero of a differentiable function $f(x)$ is given, then the $(k+1)$-th approximation is given by $$x_{k+1}=x_k-\frac{f(x_k)}{f'(x_k)}.$$ (We write $k$ for the iteration index, since $n$ already denotes the order of the Bessel function.) In Maxima, Newton’s method can be carried out as follows. As an example we calculate the first zero of $J_0(\lambda)$.
First we define $J_0(x)$ as a function $f(x)$:
(%i1) f(x):=bessel_j(0,x);
A single step of Newton’s method applied to $f(x)$ is performed by
(%i2) newton(t):=subst(t,x,x-f(x)/diff(f(x),x));
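To use Newton’s method we need an initial approximation, and the easiest way to get one is to look at the graph of $J_0$. If you do not have the graph at hand, one quick way to draw it is Maxima’s plot2d; for example,
plot2d(bessel_j(0, x), [x, 0, 10]);  /* graph of J_0(x) on the interval [0, 10] */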
As the graph shows, the first zero is near $2$, so we choose $x_0=2$. If you just run
(%i3) newton(2);
the output will not be a numerical value. In order to have a numerical value as output, run
(%i4) ev(newton(2),numer);
The output, which is the first iterate $x_1$, will be
(%o4) 2.388210765567796
In order to calculate the next approximation $x_2$, simply run
(%i5) ev(newton(%),numer);
and its output is
(%o5) 2.404769548213657
For the next approximation $x_3$, again run
(%i6) ev(newton(%),numer);
and its output is
(%o6) 2.404825557043583
This value is accurate to nine decimal places. If you think this is good enough, you can stop. The whole process is shown in the following screenshot.
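If you would rather not repeat this step by hand, the same iteration can be wrapped in a small loop. The following is only a sketch built on the f and newton defined above; the name bessel_zero and the tolerance argument tol are introduced here just for illustration.
bessel_zero(x0, tol) := block([xcur: float(x0), xnext],
  xnext: float(newton(xcur)),           /* first Newton step */
  while abs(xnext - xcur) > tol do (    /* repeat until successive iterates agree */
    xcur: xnext,
    xnext: float(newton(xcur))
  ),
  xnext
);
bessel_zero(2, 1e-12);
Starting the loop near a different crossing of the graph should give the other zeros of $J_0$; for zeros of $J_n$ with $n\geq 1$, redefine f with the desired order.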
Let us denote the eigenvalues by $\lambda_{nm}$. Then the radial eigenfunctions are given by $$R_{nm}(r)=J_n(\lambda_{nm}r).$$
The solutions $U_{nm}(r,\theta)$ of the Helmholtz boundary value problem are
$$U_{nm}(r,\theta)=J_n(\lambda_{nm}r)[A\cos(n\theta)+B\sin(n\theta)].$$
Using a simple trigonometric identity, one can write
$$A\cos(n\theta)+B\sin(n\theta)=\sqrt{A^2+B^2}\cos(n(\theta-\psi))$$ for some $\psi$ (namely, $\tan(n\psi)=B/A$). Since the phase $\psi$ merely shifts the angular variable, we may absorb it and relabel the amplitude, so $U_{nm}(r,\theta)$ can be written as
$$U_{nm}(r,\theta)=AJ_n(\lambda_{nm}r)\cos(n\theta).$$
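As an aside, these spatial modes can already be visualized in Maxima. The sketch below plots the fundamental mode $U_{01}(r,\theta)=J_0(\lambda_{01}r)$ over the unit disk, using the zero $\lambda_{01}\approx 2.404825557$ found above; the grid option is merely a plotting choice.
plot3d(bessel_j(0, 2.404825557*r), [r, 0, 1], [th, 0, 2*%pi],
       [transform_xy, polar_to_xy], [grid, 30, 60]);  /* mode shape U_01 on the unit disk */
For a mode with $n\geq 1$, multiply by $\cos(n\theta)$ (the variable th in the plot) and use the corresponding zero $\lambda_{nm}$.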
The oscillatory factors satisfy the equation $$\ddot{T}_{nm}+\lambda_{nm}^2c^2T_{nm}=0$$ and they are
$$T_{nm}(t)=C\cos(\lambda_{nm}ct)+D\sin(\lambda_{nm}ct).$$
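If you want to double-check this in Maxima, the general second-order solver ode2 handles this constant-coefficient equation; the snippet below is an optional aside, with lam standing in for $\lambda_{nm}$.
assume(lam > 0, c > 0)$                        /* tell Maxima the frequency lam*c is positive */
ode2('diff(T, t, 2) + lam^2*c^2*T = 0, T, t);  /* general solution: a combination of cos(lam*c*t) and sin(lam*c*t) */
The two constants of integration returned by ode2 correspond to $C$ and $D$.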
Now we are ready to put the pieces together and write down the solution $u(r,\theta,t)$ as the following Fourier series:
$$u(r,\theta,t)=\sum_{n=0}^\infty\sum_{m=1}^\infty J_n(\lambda_{nm}r)\cos(n\theta)[A_{nm}\cos(\lambda_{nm} ct)+B_{nm}\sin(\lambda_{nm}ct)].$$
The only thing that remains for us to do is to determine the unknown Fourier coefficients $A_{nm}$ and $B_{nm}$. We can determine them using the initial conditions $u(r,\theta,0)$ and $u_t(r,\theta,0)$. This will be discussed in the next lecture.