Analyzing Graphs of Quadratic Functions

There are two important topics in this section: graphing the quadratic function $f(x)=ax^2+bx+c$ and finding the (absolute) maximum or minimum value of $f(x)=ax^2+bx+c$.

First, the sign of the leading coefficient $a$ tells us the overall shape of the graph. If $a>0$, the parabola opens upward, i.e. the graph is a smiling face $\smile$. If $a<0$, the parabola opens downward, i.e. the graph is a frowning face $\frown$.

By completing the square, $f(x)=ax^2+bx+c$ can be written as
$$f(x)=a(x-h)^2+k,$$
where $h=-\frac{b}{2a}$ and $k=f(h)=f\left(-\frac{b}{2a}\right)$. The ordered pair $\left(-\frac{b}{2a},f\left(-\frac{b}{2a}\right)\right)$ is called the vertex of the parabola $f(x)$ and the vertical line $x=-\frac{b}{2a}$ is called the axis of symmetry (this is the vertical line that divides the graph of $f(x)$ into two halves). If $a>0$, then $f\left(-\frac{b}{2a}\right)$ is the absolute minimum value of $f(x)$. If $a<0$, then $f\left(-\frac{b}{2a}\right)$ is the absolute maximum value of $f(x)$.

How do we sketch the graph of $f(x)=a(x-h)^2+k$?

Your textbook tells you to sketch the graph of $f(x)=a(x-h)^2+k$ using the transformations that you learned in section 1.7 (here and here). In principle, using transformations is correct, but in practice there is an easier way to do it. All you need is the sign of $a$, the vertex $\left(-\frac{b}{2a},f\left(-\frac{b}{2a}\right)\right)$, and the $y$-intercept $c$. (Although not required, it is even better if you know the $x$-intercepts as well.)

Example. Let $f(x)=x^2+7x-8$.

(a) Find the vertex.

Solution. $-\frac{b}{2a}=-\frac{7}{2}$ and
\begin{align*}
f\left(-\frac{b}{2a}\right)&=f\left(-\frac{7}{2}\right)\\
&=\left(-\frac{7}{2}\right)^2+7\left(-\frac{7}{2}\right)-8\\
&=-\frac{81}{4}.
\end{align*}

(b) Find the axis of symmetry.

Solution. The axis of symmetry is the vertical line $x=-\frac{b}{2a}=-\frac{7}{2}$.

(c) Determine whether there is a maximum or minimum value and find that value.

Solution. Since $a=1>0$, there is a minimum and the minimum value is the $y$-coordinate of the vertex $f\left(-\frac{7}{2}\right)=-\frac{81}{4}$.

(d) Graph the function.

Solution. Since $a=1>0$, the graph is a parabola that opens up (smiling face). Also note that the $y$-intercept of $f(x)$ is $-8$. In fact, we can extract more information: $f(x)$ factors easily as $(x+8)(x-1)$, so the $x$-intercepts are $x=-8$ and $x=1$.
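
If you want to check the sketch on a computer, here is a minimal Python example (not part of the original solution; it assumes NumPy and Matplotlib are installed) that plots $f(x)=x^2+7x-8$ and marks the vertex, the axis of symmetry, and the intercepts found above.

```python
# Plot f(x) = x^2 + 7x - 8 and mark the features computed in parts (a)-(d).
import numpy as np
import matplotlib.pyplot as plt

a, b, c = 1, 7, -8
h = -b / (2 * a)              # x-coordinate of the vertex: -b/(2a) = -7/2
k = a * h**2 + b * h + c      # y-coordinate of the vertex: f(-7/2) = -81/4

x = np.linspace(h - 6, h + 6, 400)
y = a * x**2 + b * x + c

plt.plot(x, y)
plt.scatter([h], [k], color="red", label=f"vertex ({h}, {k})")
plt.scatter([-8, 1], [0, 0], color="green", label="x-intercepts")
plt.scatter([0], [c], color="purple", label=f"y-intercept {c}")
plt.axvline(h, linestyle="--", label="axis of symmetry x = -7/2")
plt.legend()
plt.show()
```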

Quadratic Equations

In this lecture note we study how to solve a quadratic equation $ax^2+bx+c=0$. There are three ways to solve a quadratic equation. The first one is

1. By Factoring: This is a typical method to solve a quadratic equation whenever the polynomial $ax^2+bx+c$ can be easily factored. Here is an example.

Example. Solve the quadratic equation $x^2-3x-4=0$ by factoring.

Solution. The polynomial $x^2-3x-4$ is factored as $(x-4)(x+1)$. So the equation is $(x-4)(x+1)=0$. This means that $x-4=0$ or $x+1=0$, i.e. we obtain two real solutions $x=-1$ or $x=4$.
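
As a quick sanity check, here is a short sketch (assuming SymPy is available) that lets a computer algebra system factor the polynomial and solve the equation:

```python
# Verify the factoring example with SymPy.
from sympy import symbols, factor, solve

x = symbols('x')
print(factor(x**2 - 3*x - 4))    # (x - 4)*(x + 1)
print(solve(x**2 - 3*x - 4, x))  # [-1, 4]
```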

Example. Solve the quadratic equation $x^2-3=0$.

Solution 1. Recall the factorization formula $(a^2-b^2)=(a+b)(a-b)$. Now
\begin{align*}
x^2-3&=x^2-(\sqrt{3})^2\\
&=(x+\sqrt{3})(x-\sqrt{3}).
\end{align*}
Thus our equation becomes $(x+\sqrt{3})(x-\sqrt{3})=0$ whose solutions are $x=\pm\sqrt{3}$.

Solution 2. The quadratic equation can be written as $x^2=3$. Solving this equation for $x$, we obtain $x=\pm\sqrt{3}$.

The next method is

2. By Completing the Square:

This is a method that can be used to solve any quadratic equation. First note that \begin{equation}\label{eq:cts}x^2+bx+\left(\frac{b}{2}\right)^2=\left(x+\frac{b}{2}\right)^2.\end{equation}

Example. Solve the equation $x^2-6x-10=0$ by completing the square.

Solution. By adding 10 to each side of the equation, we obtain
\begin{equation}\label{eq:cthex1}x^2-6x=10.\end{equation} Note that half of the coefficient of $x$ is $\frac{-6}{2}=-3$. Add $(-3)^2$ to each side of \eqref{eq:cthex1}:
\begin{equation}\label{eq:cthex1a}x^2-6x+(-3)^2=10+(-3)^2.\end{equation} Now notice that the LHS of \eqref{eq:cthex1a} has exactly the same form as the LHS of the formula \eqref{eq:cts}. Hence, the equation \eqref{eq:cthex1a} becomes
$$(x-3)^2=19.$$ Solving this for $x-3$, we obtain $x-3=\pm\sqrt{19}$. That is, $x=3\pm\sqrt{19}$.
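
A quick numerical check in Python (just a sketch, not part of the solution) confirms that both values satisfy the original equation:

```python
# Check that x = 3 ± sqrt(19) solve x^2 - 6x - 10 = 0 (values should be ~0 up to round-off).
import math

for x in (3 + math.sqrt(19), 3 - math.sqrt(19)):
    print(x, x**2 - 6*x - 10)
```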

While completing the square is a useful tool for some other things, I do not strongly recommend it for solving quadratic equations because there is a more convenient method.

3. By the Quadratic Formula:

If we apply the method of completing the square to the quadratic equation $ax^2+bx+c=0$, we obtain the quadratic formula
\begin{equation}\label{eq:quadform}x=\frac{-b\pm\sqrt{b^2-4ac}}{2a}.\end{equation}

Example. Solve the quadratic equation $3x^2+2x-7=0$.

Solution. $a=3$, $b=2$, and $c=-7$. Thus
\begin{align*}
x&=\frac{-b\pm\sqrt{b^2-4ac}}{2a}\\
&=\frac{-2\pm\sqrt{2^2-4(3)(-7)}}{2(3)}\\
&=\frac{-1\pm\sqrt{22}}{3}.
\end{align*}
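
Here is a short numerical check of this computation (a sketch using only the standard library):

```python
# Evaluate the quadratic formula for 3x^2 + 2x - 7 = 0 and verify the roots numerically.
import math

a, b, c = 3, 2, -7
disc = b**2 - 4*a*c
roots = [(-b + math.sqrt(disc)) / (2*a), (-b - math.sqrt(disc)) / (2*a)]
print(roots)                              # numerically (-1 ± sqrt(22))/3
print([3*r**2 + 2*r - 7 for r in roots])  # both values should be ~0
```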

The expression inside the radical, $b^2-4ac$, is called the discriminant. Using the discriminant, we can tell the following without solving the equation itself.

Theorem. For $ax^2+bx+c=0$ with $a\ne 0$,

  • If $b^2-4ac>0$, then the equation has two distinct real solutions.
  • If $b^2-4ac=0$, then the equation has only one real solution (which is $x=-\frac{b}{2a}$).
  • If $b^2-4ac<0$, then the equation has two complex solutions that are conjugate of each other.
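
The theorem translates directly into code. Below is a minimal Python sketch (the function name solve_quadratic is my own, not from the notes) that classifies and returns the solutions according to the sign of the discriminant:

```python
# Classify and solve ax^2 + bx + c = 0 by the sign of the discriminant b^2 - 4ac.
import cmath
import math

def solve_quadratic(a, b, c):
    disc = b**2 - 4*a*c
    if disc > 0:
        r = math.sqrt(disc)
        return "two distinct real solutions", ((-b + r) / (2*a), (-b - r) / (2*a))
    elif disc == 0:
        return "one real solution", (-b / (2*a),)
    else:
        r = cmath.sqrt(disc)  # complex square root
        return "two complex conjugate solutions", ((-b + r) / (2*a), (-b - r) / (2*a))

print(solve_quadratic(1, -3, -4))  # two distinct real solutions: 4 and -1
print(solve_quadratic(1, -6, 9))   # one real solution: 3
print(solve_quadratic(1, 0, 1))    # complex conjugates: i and -i
```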

Update: There is a convenient formula for quadratic equations of the form $ax^2+bx+c=0$ with $b=2b'$, i.e. $b$ is a multiple of 2. I wrote about it as a forum entry. For details click here.

Cylindrical Resonant Cavity

In this lecture, we discuss the cylindrical resonant cavity as an example of an application of Bessel functions.

Recall that electromagnetic waves in vacuum are described by the following four equations, called Maxwell's equations (in vacuum):
\begin{align*}
\nabla\cdot B=0,\\
\nabla\cdot E=0,\\
\nabla\times B=\epsilon_0\mu_0\frac{\partial E}{\partial t},\\
\nabla\times E=-\frac{\partial B}{\partial t},
\end{align*}
where $E$ is the electric field, $B$ the magnetic induction, $\epsilon_0$ the electric permittivity, and $\mu_0$ the magnetic permeability.
Now,
\begin{align*}
\nabla\times(\nabla\times E)&=-\frac{\partial}{\partial t}(\nabla\times B)\\
&=-\epsilon_0\mu_0\frac{\partial^2E}{\partial t^2}.
\end{align*}

A resonant cavity is an electromagnetic resonator in which waves oscillate inside a hollow space (device). For more details, see the Wikipedia entry for Resonator, in particular Cavity Resonator.

Here we consider a cylindrical resonant cavity. In the interior of a resonant cavity, electromagnetic waves oscillate with a time dependence $e^{-i\omega t}$, i.e. $E(t,x,y,z)$ can be written as $E=e^{-i\omega t}P(x,y,z),$ where $P(x,y,z)$ is a vector-valued function in $\mathbb R^3$. One can easily show that $\frac{\partial^2E}{\partial t^2}=-\omega^2E$ or
$$\nabla\times(\nabla\times E)=\alpha^2E,$$
where $\alpha^2=\epsilon_0\mu_0\omega^2$. On the other hand,
\begin{align*}
\nabla\times(\nabla\times E)&=\nabla(\nabla\cdot E)-\nabla^2E\\
&=-\nabla^2E,
\end{align*}
since $\nabla\cdot E=0$.
Thus, the electric field $E$ satisfies the Helmholtz equation
$$\nabla^2E+\alpha^2E=0.$$
Suppose that the cavity is a cylinder with radius $a$ and height $l$. Without loss of generality we may assume that the end surfaces are at $z=0$ and $z=l$. Let $E=E(\rho,\varphi,z)$. Using separation of variables in cylindrical coordinate system, we find that the $z$-component $E_z(\rho,\varphi,z)$ satisfies the scalar Helmholtz equation
$$\nabla^2E_z+\alpha^2E_z=0,$$
where $\alpha^2=\omega^2\epsilon_0\mu_0=\frac{\omega^2}{c^2}$. The mode of $E_z$ is obtained as

$$(E_z)_{mnk}=\sum_{m,n}J_m(\gamma_{mn}\rho)e^{\pm im\varphi}[a_{mn}\sin kz+b_{mn}\cos kz].\ \ \ \ \ \mbox{(1)}$$ Here $k$ is a separation constant. Consider the boundary conditions $\frac{\partial E_z}{\partial z}(z=0)=\frac{\partial E_z}{\partial z}(z=l)=0$ and $E_z(\rho=a)=0$. The boundary conditions $\frac{\partial E_z}{\partial z}(z=0)=\frac{\partial E_z}{\partial z}(z=l)=0$ imply that $a_{mn}=0$ and $$k=\frac{p\pi}{l},\ p=0,1,2,\cdots.$$ The boundary condition $E_z(\rho=a)=0$ implies $$\gamma_{mn}=\frac{\alpha_{mn}}{a},$$ where $\alpha_{mn}$ is the $n$th zero of $J_m$. Thus the mode (1) is written as
$$(E_z)_{mnp}=\sum_{m,n}b_{mn}J_m\left(\frac{\alpha_{mn}}{a}\rho\right)e^{\pm im\varphi}\cos\frac{p\pi}{l}z,\ \ \ \ \ \mbox{(2)}$$ where $p=0,1,2,\cdots$. In physics, the mode (2) is called the transverse magnetic mode, or TM mode, of oscillation.

We have \begin{align*}\gamma^2&=\alpha^2-k^2\\&=\frac{\omega^2}{c^2}-\frac{p^2\pi^2}{l^2}.\end{align*} Hence the TM mode has resonant frequencies
$$\omega_{mnp}=c\sqrt{\frac{\alpha_{mn}^2}{a^2}+\frac{p^2\pi^2}{l^2}},\ \left\{\begin{aligned}
m&=0,1,2,\cdots\\
n&=1,2,3,\cdots\\
p&=0,1,2,\cdots.\end{aligned}
\right.
$$
For more details about transverse mode, click here and here.
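
If you want numbers, the resonant frequencies are easy to evaluate with SciPy (jn_zeros returns the positive zeros of $J_m$). The sketch below is only an illustration; the cavity dimensions are made up:

```python
# TM-mode resonant frequencies omega_{mnp} = c*sqrt((alpha_{mn}/a)^2 + (p*pi/l)^2),
# where alpha_{mn} is the n-th positive zero of J_m.
import numpy as np
from scipy.special import jn_zeros

c = 2.998e8        # speed of light in vacuum (m/s)
a, l = 0.05, 0.10  # illustrative cavity radius and height (m)

def omega_tm(m, n, p):
    alpha_mn = jn_zeros(m, n)[-1]  # n-th zero of J_m
    return c * np.sqrt((alpha_mn / a)**2 + (p * np.pi / l)**2)

print(omega_tm(0, 1, 0))  # lowest TM mode (TM_010), in rad/s
print(omega_tm(1, 1, 1))
```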

Bessel Functions of the First Kind $J_n(x)$ I: Generating Function, Recurrence Relation, Bessel’s Equation

Let us begin with the generating function

$$g(x,t) = e^{\frac{x}{2}\left(t-\frac{1}{t}\right)}.$$
Expanding this function in a Laurent series, we obtain
$$e^{\frac{x}{2}\left(t-\frac{1}{t}\right)} = \sum_{n=-\infty}^\infty J_n(x)t^n.$$
The coefficient of $t^n$, $J_n(x)$, is defined to be a Bessel function of the first kind of order $n$.
Now, we determine $J_n(x)$.
\begin{align*}
e^{\frac{x}{2}t}e^{-\frac{x}{2t}}&=\sum_{r=0}^\infty\left(\frac{x}{2}\right)^r\frac{t^r}{r!} \sum_{s=0}^\infty(-1)^s\left( \frac{x}{2}\right)^s \frac{t^{-s}}{s!}\\
&=\sum_{r=0}^\infty\sum_{s=0}^\infty\frac{(-1)^s}{r!s!}\left(\frac{x}{2}\right)^{r+s}t^{r-s}.
\end{align*}
Set $r=n+s$. Then for $n\ge 0$ we obtain
$$J_n(x)=\sum_{s=0}^\infty \frac{(-1)^s}{s!(n+s)!}\left(\frac{x}{2}\right)^{n+2s}.$$
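
As a quick check (a sketch assuming SciPy is installed), we can truncate this series and compare it with SciPy's built-in Bessel function scipy.special.jv:

```python
# Compare the truncated series for J_n(x) with scipy.special.jv.
import math
from scipy.special import jv

def J_series(n, x, terms=30):
    return sum((-1)**s / (math.factorial(s) * math.factorial(n + s)) * (x / 2)**(n + 2*s)
               for s in range(terms))

for n in range(4):
    print(n, J_series(n, 2.5), jv(n, 2.5))  # the two columns should agree
```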

Now we find $J_{-n}(x)$. Replacing $n$ by $-n$ in the above series for $J_n(x)$, we obtain
$$J_{-n}(x)=\sum_{s=0}^\infty\frac{(-1)^s}{s!(s-n)!} \left(\frac{x}{2}\right)^{2s-n}.$$
However, $(s-n)!\rightarrow\infty$ for $s=0,1,\cdots,(n-1)$, so $\frac{1}{(s-n)!}=0$ for those terms and the series may be considered to begin at $s=n$. Replacing $s$ by $s+n$, we obtain
$$J_{-n}(x)=\sum_{s=0}^ \infty\frac{(-1)^{s+n}}{s!(s+n)!}\left( \frac{x}{2} \right)^{n+2s}.$$ Note that $J_n(x)$ and $J_{-n}(x)$ satisfy the relation
$$J_{-n}(x)=(-1)^nJ_n(x).$$

Let us differentiate the generating function $g(x,t)$ with respect to $t$:
\begin{align*}
\frac{\partial g(x,t)}{\partial t} &=\frac{x}{2}\left(1+ \frac{1}{t^2}\right) e^{\frac{x}{2}\left(t-\frac{1}{t}\right)}\\
&=\sum_{n=-\infty}^\infty n J_n(x) t^{n-1}.
\end{align*}
Replace $e^{\frac{x}{2}\left(t-\frac{1}{t}\right)}$ by $\sum_{n=-\infty}^\infty J_n(x) t^n$.
Then
\begin{align*}
\sum_{n=-\infty}^\infty \frac{x}{2} (1+ \frac{1}{t^2}) J_n(x) t^{n}&=\sum_{n=-\infty}^\infty \frac{x}{2} [J_n(x) t^n + J_n(x) t^{n-2}]\\
&=\sum_{n=-\infty}^\infty \frac{x}{2} [J_{n-1}(x) + J_{n+1}(x)] t^{n-1}.
\end{align*}
Thus,
$$\sum_{n=-\infty}^\infty \frac{x}{2} [J_{n-1}(x) + J_{n+1}(x)] t^{n-1}=\sum_{n=-\infty}^\infty n J_n(x) t^{n-1}$$
or we obtain the recurrence relation,
\begin{equation}J_{n-1}(x) + J_{n+1}(x) = \frac{2n}{x} J_n(x).\end{equation}
Now we differentiate $g(x,t)$ with respect to $x$:
\begin{align*}
\frac{\partial g(x,t)}{\partial x}&=\frac{1}{2}\left(t-\frac{1}{t}\right)e^{\frac{x}{2}\left(t-\frac{1}{t}\right)}\\
& =\sum_{n=-\infty}^\infty J_n'(x)t^n.
\end{align*}
This leads to the recurrence relation
\begin{equation}J_{n-1}(x) - J_{n+1}(x) = 2 J_n'(x).\end{equation}
As a special case of this recurrence relation (take $n=0$ and use $J_{-1}(x)=-J_1(x)$), we obtain
$$J_{0}'(x)=-J_1(x).$$
Adding (1) and (2), we have
\begin{equation}J_{n-1}(x)=\frac{n}{x}J_n(x) + J_n'(x).\end{equation}
Multiplying (3) by $x^n$:
\begin{align*}
x^n J_{n-1}(x) & = n x^{n-1} J_n(x) + x^n J_n'(x)\\
& = \frac{d}{dx}[ x^n J_n(x)].
\end{align*}
Subtracting (2) from (1), we have
\begin{equation}J_{n+1}(x) = \frac{n}{x} J_n(x) - J_n'(x).\end{equation}
Multiplying (4) by $-x^{-n}$:
\begin{align*}
-x^{-n} J_{n+1}(x) & = -n x^{-n-1} J_n(x) + x^{-n} J_n'(x)\\
& = \frac{d}{dx}[x^{-n} J_n(x)].
\end{align*}
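
A numerical spot-check of these recurrence relations is straightforward with SciPy (jv is $J_n$ and jvp its derivative); this sketch is not part of the derivation:

```python
# Check recurrences (1) and (2) and the special case J_0'(x) = -J_1(x) at a sample point.
from scipy.special import jv, jvp

n, x = 3, 1.7
print(jv(n - 1, x) + jv(n + 1, x), 2 * n / x * jv(n, x))  # recurrence (1): both equal
print(jv(n - 1, x) - jv(n + 1, x), 2 * jvp(n, x))         # recurrence (2): both equal
print(jvp(0, x), -jv(1, x))                                # J_0' = -J_1
```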

Using recurrence relations, we can show that the Bessel functions $J_n(x)$ are the solutions of the Bessel’s differential equation. The recurrence relation (3) can be written as
\begin{equation}x J_n'(x) = x J_{n-1}(x) - n J_n(x).\end{equation}
Differentiating this equation with respect to $x$, we obtain
$$J_n'(x) + x J_n^{\prime\prime}(x) = J_{n-1}(x) + x J_{n-1}'(x) - n J_n'(x)$$
or
\begin{equation}x J_n^{\prime\prime}(x) + (n+1) J_n'(x) - x J_{n-1}'(x) - J_{n-1}(x) = 0.\end{equation}
Subtracting $n$ times (5) from $x$ times (6) results in the equation
\begin{equation}x^2 J_n^{\prime\prime}(x) + x J_n'(x) - n^2 J_n(x) + x(n-1) J_{n-1}(x) - x^2 J_{n-1}'(x) = 0.\end{equation}
Replace $n$ by $n-1$ in (4) and multiply the resulting equation by $x^2$ to get the equation
\begin{equation}x^2 J_n(x) = x (n-1) J_{n-1}(x) - x^2 J_{n-1}'(x).\end{equation}
With the equation (8), the equation (7) can be written as
\begin{equation}\label{eq:bessel9}x^2 J_n^{\prime\prime}(x) + x J_n'(x) + (x^2 - n^2) J_n(x) = 0.\end{equation}
This is Bessel’s equation. Hence the Bessel functions $J_n(x)$ are the solutions of Bessel’s equation.
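
One can also verify Bessel's equation numerically (a sketch assuming SciPy; jvp(n, x, 2) is the second derivative of $J_n$):

```python
# Residual of x^2 J_n'' + x J_n' + (x^2 - n^2) J_n should be ~0 at every x.
from scipy.special import jv, jvp

n = 2
for x in (0.5, 1.0, 3.7):
    residual = x**2 * jvp(n, x, 2) + x * jvp(n, x, 1) + (x**2 - n**2) * jv(n, x)
    print(x, residual)
```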

Modeling a Vibrating Drumhead III

In the previous discussion, we finally obtained the solution of the vibrating drumhead problem:
$$u(r,\theta,t)=\sum_{n=0}^\infty\sum_{m=1}^\infty J_n(\lambda_{nm}r)\cos(n\theta)[A_{nm}\cos(\lambda_{nm} ct)+B_{nm}\sin(\lambda_{nm}ct)].$$
In this lecture, we determine the Fourier coefficients $A_{nm}$ and $B_{nm}$ using the initial conditions $u(r,\theta,0)$ and $u_t(r,\theta,0)$. Before we go on, we need to mention two types of orthogonalities: the orthogonality of cosine functions and the orthogonality of Bessel functions. First note that
$$\int_0^{2\pi}\cos(n\theta)\cos(k\theta)d\theta=\left\{\begin{array}{ccc}0 & \mbox{if} & n\ne k,\\\pi & \mbox{if} & n=k\ne 0.\end{array}\right.$$ (When $n=k=0$ the integral equals $2\pi$; this is why the $n=0$ coefficient below carries an extra factor of $\frac{1}{2}$.)
The reason this property is called an orthogonality is that if $V$ is the set of all (Riemann) integrable real-valued functions on the interval $[a,b]$, then $V$ forms a vector space over $\mathbb R$. This vector space is indeed an inner product space with the inner product $$\langle f,g\rangle=\int_a^bf(x)g(x)dx\ \mbox{for}\ f,g\in V.$$
Bessel functions are orthogonal as well in the following sense:
$$\int_0^1J_n(\lambda_{nm}r)J_n(\lambda_{nl}r)rdr=\left\{\begin{array}{ccc}0 & \mbox{if} & m\ne l,\\\frac{1}{2}[J_{n+1}(\lambda_{nm})]^2 & \mbox{if} & m=l.\end{array}\right.$$
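
Here is a small numerical illustration of this orthogonality (a sketch assuming SciPy; jn_zeros returns the positive zeros $\lambda_{nm}$ of $J_n$):

```python
# Compute integral_0^1 J_n(lam_m r) J_n(lam_l r) r dr for the first few zeros of J_0.
from scipy.integrate import quad
from scipy.special import jn_zeros, jv

n = 0
zeros = jn_zeros(n, 3)  # lam_{01}, lam_{02}, lam_{03}
for m in range(3):
    for l in range(3):
        val, _ = quad(lambda r: jv(n, zeros[m] * r) * jv(n, zeros[l] * r) * r, 0, 1)
        print(m + 1, l + 1, round(val, 6))  # off-diagonal entries ~0
print(0.5 * jv(n + 1, zeros[0])**2)         # should match the (1, 1) entry
```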

From the solution $u(r,\theta,t)$, we obtain the initial position of the drumhead:
$$u(r,\theta,0)=\sum_n\sum_mJ_n(\lambda_{nm}r)\cos(n\theta)A_{nm}.$$
On the other hand, $u(r,\theta,0)=f(r,\theta)$. Multiply
$$\sum_n\sum_mJ_n(\lambda_{nm}r)\cos(n\theta)A_{nm}=f(r,\theta)$$
by $\cos(k\theta)$ and integrate with respect to $\theta$ from $0$ to $2\pi$:
$$\sum_n\sum_mJ_n(\lambda_{nm}r)A_{nm}\int_0^{2\pi}\cos(n\theta)\cos(k\theta)d\theta=\int_0^{2\pi}f(r,\theta)\cos(k\theta)d\theta.$$ The only nonvanishing term of the above series is when $n=k$, so we obtain
$$\pi\sum_mJ_k(\lambda_{km}r)A_{km}=\int_0^{2\pi}f(r,\theta)\cos(k\theta)d\theta.$$ Multiply this equation by $J_k(\lambda_{kl}r)$ and integrate with respect to $r$ from $0$ to $1$:
$$\pi\sum_mA_{km}\int_0^1J_k(\lambda_{km}r)J_k(\lambda_{kl}r)rdr=\int_0^{2\pi}\int_0^1f(r,\theta)\cos(k\theta)J_k(\lambda_{kl}r)rdrd\theta.$$ The only nonvanishing term of this series is when $m=l$. As a result we obtain:
$$A_{kl}=\frac{1}{\pi L_{kl}}\int_0^{2\pi}\int_0^1f(r,\theta)\cos(k\theta)J_k(\lambda_{kl}r)rdrd\theta$$
or
$$A_{nm}=\frac{1}{\pi L_{nm}}\int_0^{2\pi}\int_0^1f(r,\theta)\cos(n\theta)J_n(\lambda_{nm}r)rdrd\theta,\ n,m=1,2,\cdots$$
where
$$L_{nm}=\int_0^1J_n(\lambda_{nm}r)^2rdr=\frac{1}{2}[J_{n+1}(\lambda_{nm})]^2, n=0,1,2,\cdots.$$
For $n=0$ we obtain
$$A_{0m}=\frac{1}{2\pi L_{0m}}\int_0^{2\pi}\int_0^1f(r,\theta)J_0(\lambda_{0m}r)rdrd\theta,\ m=1,2,\cdots.$$
Using
$$u_t(r,\theta,0)=\sum_n\sum_mJ_n(\lambda_{nm}r)\cos(n\theta)B_{nm}\lambda_{nm}c=g(r,\theta),$$
we obtain
\begin{align*}
B_{nm}&=\frac{1}{c\pi L_{nm}\lambda_{nm}}\int_0^{2\pi}\int_0^1g(r,\theta)\cos(n\theta)J_n(\lambda_{nm}r)rdrd\theta,\ n,m=1,2,\cdots,\\
B_{0m}&=\frac{1}{2c\pi L_{0m}\lambda_{0m}}\int_0^{2\pi}\int_0^1g(r,\theta)J_0(\lambda_{0m}r)rdrd\theta,\ m=1,2,\cdots.
\end{align*}
Unfortunately, at this moment I do not know if I can make an animation of the solution using an open source math software package such as Maxima or Sage. I will let you know if I find a way. In the meantime, if any of you have access to Maple, you can download a Maple worksheet I made here and run it for yourself. In the particular example in the Maple worksheet, I used $f(r,\theta)=J_0(2.4r)+0.10J_0(5.52r)$ and $g(r,\theta)=0$. For an animation of the solution, click here.
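
Even without Maple, the coefficients themselves are easy to compute numerically. The following Python sketch (assuming SciPy; it is my own illustration, not the Maple worksheet) evaluates $A_{0m}$ for the sample initial shape $f(r,\theta)=J_0(2.4r)+0.10J_0(5.52r)$; since $f$ does not depend on $\theta$, only the $n=0$ coefficients are nonzero:

```python
# Evaluate A_{0m} = (1/L_{0m}) * integral_0^1 f(r) J_0(lam_{0m} r) r dr
# (the theta-integral contributes the factor 2*pi, which cancels the 1/(2*pi)).
from scipy.integrate import quad
from scipy.special import jn_zeros, jv

def f(r):
    return jv(0, 2.4 * r) + 0.10 * jv(0, 5.52 * r)

zeros = jn_zeros(0, 5)  # lam_{01}, ..., lam_{05}
for m, lam in enumerate(zeros, start=1):
    L = 0.5 * jv(1, lam)**2                                 # L_{0m}
    integral, _ = quad(lambda r: f(r) * jv(0, lam * r) * r, 0, 1)
    print(m, integral / L)  # A_{01} and A_{02} come out close to 1 and 0.10
```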