A Convergence Theorem for Fourier Series

Here we have seen that if a function $f$ is Riemann integrable on every bounded interval, it can be expanded as a trigonometric series, called a Fourier series, under the assumption that the series converges to $f$. So it is natural to pose the following question: if $f$ is a periodic function, does its Fourier series always converge to $f$? The answer is affirmative if $f$ is, in addition, piecewise smooth.

Let $S_N^f(\theta)$ denote the $N$-th partial sum of the Fourier series of a $2\pi$-periodic function $f(\theta)$. Then
\begin{equation}
\label{eq:partsum}
\begin{aligned}
S_N^f(\theta)&=\sum_{-N}^N c_ne^{in\theta}\\
&=\frac{1}{2\pi}\sum_{-N}^N\int_{-\pi}^\pi f(\psi)e^{in(\theta-\psi)}d\psi\\
&=\frac{1}{2\pi}\sum_{-N}^N\int_{-\pi}^\pi f(\psi)e^{in(\psi-\theta)}d\psi.
\end{aligned}
\end{equation}
The last equality holds because replacing $n$ by $-n$ does not change the symmetric sum over $-N\leq n\leq N$. Now let $\phi=\psi-\theta$. Then
\begin{align*}
S_N^f(\theta)&=\frac{1}{2\pi}\sum_{-N}^N\int_{-\pi+\theta}^{\pi+\theta} f(\phi+\theta)e^{in\phi}d\phi\\
&=\frac{1}{2\pi}\sum_{-N}^N\int_{-\pi}^\pi f(\phi+\theta)e^{in\phi}d\phi\\
&=\int_{-\pi}^\pi f(\theta+\phi)D_N(\phi)d\phi,
\end{align*}
where
\begin{equation}
\label{eq:dkernel}
\begin{aligned}
D_N(\phi)&=\frac{1}{2\pi}\sum_{-N}^N e^{in\phi}\\
&=\frac{1}{2\pi}\frac{e^{i(N+1)\phi}-e^{-iN\phi}}{e^{i\phi}-1}\\
&=\frac{1}{2\pi}\frac{\sin\left(N+\frac{1}{2}\right)\phi}{\sin\frac{1}{2}\phi}.
\end{aligned}
\end{equation}
$D_N(\phi)$ is called the $N$-th Dirichlet kernel. Note that the Dirichlet kernel realizes the Dirac delta function $\delta(x)$ in the distributional sense, i.e.
$$\delta(x)=\lim_{n\to\infty}\frac{1}{2\pi}\frac{\sin\left(n+\frac{1}{2}\right)x}{\sin\frac{1}{2}x}.$$

[Figure: the Dirichlet kernels $D_n(x)$ for $n=1,\dots,10$ on $-\pi\leq x\leq\pi$]
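
As a quick numerical illustration of this concentration, here is a minimal sketch assuming NumPy is available; the helper dirichlet_kernel below is ours, not a library routine. The peak at $\phi=0$ grows like $\frac{2N+1}{2\pi}$, while the integral over $[-\pi,\pi]$ stays equal to $1$.

```python
import numpy as np

def dirichlet_kernel(phi, N):
    """N-th Dirichlet kernel D_N(phi) = sin((N + 1/2) phi) / (2 pi sin(phi/2))."""
    phi = np.asarray(phi, dtype=float)
    den = np.sin(0.5 * phi)
    safe = np.where(np.abs(den) < 1e-12, 1.0, den)           # avoid dividing by zero
    return np.where(np.abs(den) < 1e-12,
                    (2 * N + 1) / (2 * np.pi),               # limiting value at phi = 0
                    np.sin((N + 0.5) * phi) / (2 * np.pi * safe))

phi = np.linspace(-np.pi, np.pi, 20001)
for N in (1, 5, 10, 50):
    vals = dirichlet_kernel(phi, N)
    total = np.sum(0.5 * (vals[1:] + vals[:-1]) * np.diff(phi))   # trapezoid rule on [-pi, pi]
    print(N, float(dirichlet_kernel(0.0, N)), round(float(total), 6))
```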

Note that
$$\frac{1}{2}+\frac{\sin\left(N+\frac{1}{2}\right)\theta}{2\sin\frac{1}{2}\theta}=1+\sum_{n=1}^N\cos n\theta\ (0<\theta<2\pi)$$
Using this identity, one can easily show that:

Lemma. For any $N$,
$$\int_{-\pi}^0 D_N(\theta)d\theta=\int_0^{\pi}D_N(\theta)d\theta=\frac{1}{2}.$$
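
The Lemma is also easy to verify numerically; the following snippet reuses the dirichlet_kernel helper from the sketch above (again only an illustration, not a library routine).

```python
import numpy as np

# Reuses dirichlet_kernel from the sketch above; since D_N is even,
# the integral over [-pi, 0] equals the integral over [0, pi].
half = np.linspace(0.0, np.pi, 10001)
for N in (1, 5, 25):
    vals = dirichlet_kernel(half, N)
    integral = np.sum(0.5 * (vals[1:] + vals[:-1]) * np.diff(half))  # trapezoid rule
    print(N, round(float(integral), 6))   # close to 0.5 for every N
```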

Now, we are ready to prove the following convergence theorem.

Theorem. If $f$ is $2\pi$-periodic and piecewise smooth on $\mathbb{R}$, then
$$\lim_{N\to\infty} S_N^f(\theta)=\frac{1}{2}[f(\theta-)+f(\theta+)]$$
for every $\theta$. Here, $f(\theta-)=\lim_{\stackrel{h\to 0}{h>0}}f(\theta-h)$ and $f(\theta+)=\lim_{\stackrel{h\to 0}{h>0}}f(\theta+h)$. In particular, $\lim_{N\to\infty}S_N^f(\theta)=f(\theta)$ for every $\theta$ at which $f$ is continuous.

Proof. By Lemma,
$$\frac{1}{2}f(\theta-)=f(\theta-)\int_{-\pi}^0 D_N(\phi)d\phi,\ \frac{1}{2}f(\theta+)=f(\theta+)\int_0^\pi D_N(\phi)d\phi.$$
So,
\begin{align*}
S_N^f(\theta)-\frac{1}{2}[f(\theta-)+f(\theta+)]&=\int_{-\pi}^0[f(\theta+\phi)-f(\theta-)]D_N(\phi)d\phi\\
&\quad+\int_0^\pi[f(\theta+\phi)-f(\theta+)]D_N(\phi)d\phi\\
&=\frac{1}{2\pi}\int_{-\pi}^0[f(\theta+\phi)-f(\theta-)]\frac{e^{i(N+1)\phi}-e^{-iN\phi}}{e^{i\phi}-1}d\phi\\
&\quad+\frac{1}{2\pi}\int_0^\pi[f(\theta+\phi)-f(\theta+)]\frac{e^{i(N+1)\phi}-e^{-iN\phi}}{e^{i\phi}-1}d\phi.
\end{align*}
Since $f$ is piecewise smooth, the one-sided limits
$$\lim_{\phi\to 0+}\frac{f(\theta+\phi)-f(\theta+)}{e^{i\phi}-1}=\frac{f'(\theta+)}{i},\ \lim_{\phi\to 0-}\frac{f(\theta+\phi)-f(\theta-)}{e^{i\phi}-1}=\frac{f'(\theta-)}{i}$$
exist.
Hence, the function
$$g(\phi):=\left\{\begin{aligned}
&\frac{f(\theta+\phi)-f(\theta-)}{e^{i\phi}-1},\ -\pi<\phi<0,\\
&\frac{f(\theta+\phi)-f(\theta+)}{e^{i\phi}-1},\ 0<\phi<\pi
\end{aligned}\right.$$
is piecewise continuous on $[-\pi,\pi]$. By the corollary to Bessel’s inequality,
$$c_n=\frac{1}{2\pi}\int_{-\pi}^\pi g(\phi)e^{-in\phi}d\phi\to 0$$
as $n\to\pm\infty$. Therefore,
\begin{align*}
S_N^f(\theta)-\frac{1}{2}[f(\theta-)+f(\theta+)]&=\frac{1}{2\pi}\int_{-\pi}^\pi g(\phi)[e^{i(N+1)\phi}-e^{-iN\phi}]d\phi\\
&=c_{-(N+1)}-c_N\\
&\to 0
\end{align*}
as $N\to\infty$. This completes the proof.
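
To see the theorem in action, here is a small numerical sketch (assuming NumPy; the names f and partial_sum are ours). We take a square wave, which is piecewise smooth with a jump at $\theta=0$: at the jump the partial sums stay at the average $\frac{1}{2}[f(0-)+f(0+)]=0$, while at a point of continuity such as $\theta=\frac{\pi}{2}$ they approach the value of the function.

```python
import numpy as np

def f(theta):
    """A 2*pi-periodic, piecewise smooth square wave: +1 on (0, pi), -1 on (-pi, 0)."""
    return np.sign(np.sin(theta))

def partial_sum(theta, N, M=4096):
    """Approximate S_N^f(theta), with each c_n computed by a Riemann sum on M samples."""
    psi = -np.pi + 2 * np.pi * np.arange(M) / M
    fpsi = f(psi)
    S = 0.0 + 0.0j
    for n in range(-N, N + 1):
        c_n = np.mean(fpsi * np.exp(-1j * n * psi))   # ~ (1/2pi) * integral of f(psi) e^{-in psi}
        S += c_n * np.exp(1j * n * theta)
    return S.real

# At the jump theta = 0 the partial sums stay at (f(0-)+f(0+))/2 = 0;
# at the continuity point theta = pi/2 they approach f(pi/2) = 1.
for N in (5, 20, 80, 320):
    print(N, round(partial_sum(0.0, N), 4), round(partial_sum(np.pi / 2, N), 4))
```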

Corollary. If $f$ and $g$ are $2\pi$-periodic and piecewise smooth, and $f$ and $g$ have the same Fourier coefficients, then $f=g$ except possibly at their points of discontinuity.

Proof. If $f$ and $g$ have the same Fourier coefficients, then their Fourier series are the same. Since $f$ and $g$ are $2\pi$-periodic and piecewise smooth, by the above convergence theorem the common Fourier series converges to $f(\theta)$ at every $\theta$ where $f$ is continuous and to $g(\theta)$ at every $\theta$ where $g$ is continuous. Hence, $f=g$ at every point where both are continuous, i.e. everywhere except possibly at their points of discontinuity.

The Curvature of a Curve in Euclidean 3-space $\mathbb{R}^3$

The quantity curvature is intended to be a measurement of the bending or turning of a curve. Let $\alpha: I\longrightarrow\mathbb{R}^3$ be a regular curve (i.e. a smooth curve whose derivative never vanishes). Suppose first that $\alpha$ has unit speed, i.e.
\begin{equation}
\label{eq:unitspped}
||\dot\alpha(t)||^2=\dot\alpha(t)\cdot\dot\alpha(t)=1.
\end{equation}
Differentiating \eqref{eq:unitspped}, we see that $\dot\alpha(t)\cdot\ddot\alpha(t)=0$, i.e. the acceleration is normal to the velocity, which is tangent to $\alpha$. Hence, the magnitude of the acceleration measures how sharply the curve bends. So, if we denote the curvature by $\kappa$, then
\begin{equation}
\label{eq:curvature}
\kappa=||\ddot\alpha(t)||.
\end{equation}
Remember that the definition of curvature \eqref{eq:curvature} requires the curve $\alpha$ to be a unit speed curve, but that is not always the case. What we do know is that we can always reparametrize a curve, and reparametrization does not change the curve itself but only its speed. There is one particular parametrization that we are interested in, as it results in a unit speed curve: the parametrization by arc-length. This time let us assume that $\alpha$ is not a unit speed curve and define
\begin{equation}
\label{eq:arclength}
s(t)=\int_a^t||\dot\alpha(u)||du,
\end{equation}
where $a\in I$. Since $\frac{ds}{dt}=||\dot\alpha(t)||>0$ ($\alpha$ is regular), $s(t)$ is an increasing function and so it is one-to-one. This means that we can solve \eqref{eq:arclength} for $t$, and this allows us to reparametrize $\alpha(t)$ by the arc-length parameter $s$.
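
In practice the integral \eqref{eq:arclength} rarely has a closed form, but the reparametrization can be carried out numerically. Here is a minimal sketch assuming NumPy; reparametrize_by_arclength is our own helper, tested on a circle of radius $2$, whose length over one turn is $4\pi$.

```python
import numpy as np

def reparametrize_by_arclength(alpha, t0, t1, n=20001):
    """Numerically reparametrize a regular curve alpha(t), t in [t0, t1], by arc length.

    Returns (beta, L): beta(s) approximates the unit-speed reparametrization and
    L is the total length. A sketch only; alpha is any vectorized map R -> R^3.
    """
    t = np.linspace(t0, t1, n)
    dt = t[1] - t[0]
    pts = np.array([alpha(ti) for ti in t])                        # points on the curve
    speed = np.linalg.norm(np.gradient(pts, dt, axis=0), axis=1)   # ||alpha'(t)||
    s = np.concatenate(([0.0], np.cumsum(0.5 * (speed[1:] + speed[:-1]) * dt)))  # s(t)

    def beta(svals):
        # Invert s(t) by interpolation, then evaluate alpha at the recovered parameter.
        tvals = np.interp(svals, s, t)
        return np.array([alpha(ti) for ti in np.atleast_1d(tvals)])

    return beta, s[-1]

# Sanity check on a circle of radius 2: its length over one turn should be 4*pi.
circle = lambda t: np.array([2.0 * np.cos(t), 2.0 * np.sin(t), 0.0])
beta, length = reparametrize_by_arclength(circle, 0.0, 2.0 * np.pi)
print(round(length, 4), round(4.0 * np.pi, 4))   # both about 12.5664
```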

Example. Let $\alpha: (-\infty,\infty)\longrightarrow\mathbb{R}^3$ be given by
$$\alpha(t)=(a\cos t,a\sin t,bt)$$
where $a>0$, $b\ne 0$. $\alpha$ is a right circular helix. Its speed is
$$||\dot\alpha(t)||=\sqrt{a^2+b^2}\ne 1.$$
Taking the base point to be $0$, $s(t)=\sqrt{a^2+b^2}\,t$, so $t=\frac{s}{\sqrt{a^2+b^2}}$. The reparametrization of $\alpha(t)$ by $s$ is given by
$$\alpha(s)=\left(a\cos\frac{s}{\sqrt{a^2+b^2}},a\sin\frac{s}{\sqrt{a^2+b^2}},\frac{bs}{\sqrt{a^2+b^2}}\right).$$
Differentiating twice with respect to $s$, we obtain
$$\ddot\alpha(s)=-\frac{a}{a^2+b^2}\left(\cos\frac{s}{\sqrt{a^2+b^2}},\sin\frac{s}{\sqrt{a^2+b^2}},0\right).$$
Hence the curvature $\kappa$ is
$$\kappa=||\ddot\alpha(s)||=\frac{a}{a^2+b^2}.$$
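
The computation can be double-checked symbolically; a short sketch assuming SymPy is available:

```python
import sympy as sp

s, a, b = sp.symbols('s a b', positive=True)
c = sp.sqrt(a**2 + b**2)

# The helix of the example, reparametrized by arc length s.
alpha = sp.Matrix([a * sp.cos(s / c), a * sp.sin(s / c), b * s / c])

v = alpha.diff(s)                          # velocity
acc = alpha.diff(s, 2)                     # acceleration
print(sp.simplify(sp.sqrt(v.dot(v))))      # 1  (unit speed, as expected)
print(sp.simplify(sp.sqrt(acc.dot(acc))))  # a/(a**2 + b**2), matching the curvature above
```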

Bessel’s Inequality

Bessel’s inequality is important in studying Fourier series.

Theorem. If $f$ is $2\pi$-periodic and Riemann integrable on $[-\pi,\pi]$ and if the Fourier coefficients $c_n$ are defined by
$$c_n=\frac{1}{2\pi}\int_{-\pi}^{\pi}f(\theta)e^{-in\theta}d\theta,$$
then
\begin{equation}
\label{eq:besselinequality}
\sum_{n=-\infty}^\infty|c_n|^2\leq\frac{1}{2\pi}\int_{-\pi}^\pi|f(\theta)|^2d\theta.
\end{equation}

Proof. For simplicity we take $f$ to be real-valued, so that $\overline{c_n}=c_{-n}$; the complex case is similar.
\begin{align*}
0&\leq|f(\theta)-\sum_{-N}^Nc_ne^{in\theta}|^2\\
&=f(\theta)^2-\sum_{-N}^Nf(\theta)[c_ne^{in\theta}+\overline{c_n}e^{-in\theta}]+\sum_{m,n=-N}^Nc_m\overline{c_n}e^{i(m-n)\theta}
\end{align*}
By integrating,
\begin{align*}
\frac{1}{2\pi}\int_{-\pi}^\pi\left|f(\theta)-\sum_{-N}^Nc_ne^{in\theta}\right|^2d\theta&=\frac{1}{2\pi}\int_{-\pi}^\pi f(\theta)^2d\theta-\sum_{-N}^N\left[c_n\frac{1}{2\pi}\int_{-\pi}^\pi f(\theta)e^{in\theta}d\theta\right.\\
&\quad\left.+\overline{c_n}\frac{1}{2\pi}\int_{-\pi}^\pi f(\theta)e^{-in\theta}d\theta\right]+\sum_{m,n=-N}^Nc_m\overline{c_n}\frac{1}{2\pi}\int_{-\pi}^\pi e^{i(m-n)\theta}d\theta\\
&=\frac{1}{2\pi}\int_{-\pi}^\pi f(\theta)^2d\theta-\sum_{-N}^N|c_n|^2.
\end{align*}
Hence, for each $N=1,2,\cdots$,
$$\sum_{-N}^N|c_n|^2\leq\frac{1}{2\pi}\int_{-\pi}^\pi f(\theta)^2d\theta.$$
Taking the limit $N\to\infty$, we obtain
$$\sum_{-\infty}^\infty|c_n|^2\leq\frac{1}{2\pi}\int_{-\pi}^\pi f(\theta)^2d\theta.$$
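
As a concrete illustration (a sketch using only the Python standard library), take $f(\theta)=\theta$ on $(-\pi,\pi)$. A standard computation gives $c_0=0$ and $|c_n|^2=\frac{1}{n^2}$ for $n\neq 0$, while $\frac{1}{2\pi}\int_{-\pi}^\pi f(\theta)^2d\theta=\frac{\pi^2}{3}$, so the partial sums on the left stay below $\frac{\pi^2}{3}$ (and in fact approach it).

```python
import math

# Bessel's inequality for f(theta) = theta on (-pi, pi):
# sum over n != 0 of |c_n|^2 = sum of 1/n^2  <=  (1/2pi) * integral of f^2 = pi^2/3.
rhs = math.pi ** 2 / 3
for N in (10, 100, 1000):
    lhs = 2 * sum(1.0 / n ** 2 for n in range(1, N + 1))   # n = -N..N, n != 0
    print(N, round(lhs, 5), round(rhs, 5), lhs <= rhs)
```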

Note that $|a_0|^2=4|c_0|^2$ and $|a_n|^2+|b_n|^2=2(|c_n|^2+|c_{-n}|^2)$, $n\geq 1$. So, in terms of the real coefficients, Bessel’s inequality can be written as
\begin{equation}
\label{eq:besselinequality2}
\frac{1}{4}|a_0|^2+\frac{1}{2}\sum_1^\infty(|a_n|^2+|b_n|^2)\leq\frac{1}{2\pi}\int_{-\pi}^\pi f(\theta)^2d\theta.
\end{equation}
Bessel’s inequality implies that $\sum|a_n|^2$, $\sum|b_n|^2$, $\sum|c_n|^2$ are convergent. Since, as we learned in undergraduate calculus, the terms of a convergent series must tend to zero, the following corollary holds.

Corollary. The Fourier coefficients $a_n$, $b_n$, $c_n$ tend to zero as $n\to\infty$ (and also as $n\to -\infty$ for $c_{-n}$).
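
The decay of the coefficients is easy to observe numerically; here is a small sketch assuming NumPy, with $f(\theta)=\sqrt{|\theta|}$ as a sample Riemann integrable function and the coefficients approximated by Riemann sums.

```python
import numpy as np

theta = -np.pi + 2 * np.pi * np.arange(8192) / 8192
f = np.sqrt(np.abs(theta))                       # a Riemann integrable function on [-pi, pi]
for n in (1, 10, 100, 1000):
    c_n = np.mean(f * np.exp(-1j * n * theta))   # Riemann-sum approximation of c_n
    print(n, round(float(abs(c_n)), 6))          # |c_n| shrinks toward 0 as n grows
```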

Spectrum

Let us recall Hooke’s law
\begin{equation}
\label{eq:hooke}
F=-kx.
\end{equation}
Newton’s second law of motion is
\begin{equation}
\label{eq:newton}
F=ma=m\ddot{x},
\end{equation}
where $\ddot{x}=\frac{d^2 x}{dt^2}$. The equations \eqref{eq:hooke} and \eqref{eq:newton} together yield the equation of a simple harmonic oscillator
\begin{equation}
\label{eq:ho}
m\ddot{x}+kx=0.
\end{equation}
Integrating \eqref{eq:ho} with respect to $x$, we have
$$\int(m\ddot{x}dx+kxdx)=E_0,$$
where $E_0$ is a constant. $d\dot{x}=\ddot{x}dt$ and $\dot{x}d\dot{x}=\dot{x}\ddot{x}dt=\ddot{x}dx$. So,
\begin{align*}
\int(m\ddot{x}dx+kxdx)&=\int(m\dot{x}d\dot{x}+kxdx)\\
&=\frac{1}{2}m\dot{x}^2+\frac{1}{2}kx^2.
\end{align*}
Hence, we obtain the conservation law of energy
\begin{equation}
\label{eq:energy}
\frac{1}{2}m\dot{x}^2+\frac{1}{2}kx^2=E_0.
\end{equation}
The general solution of \eqref{eq:ho} is
\begin{equation}
\label{eq:hosol}
\begin{aligned}
x(t)&=a\cos\omega t+b\sin\omega t\\
&=\sqrt{a^2+b^2}\sin(\omega t+\theta),
\end{aligned}
\end{equation}
where $a$ and $b$ are constants, $\omega=\sqrt{\frac{k}{m}}$ and $\theta=\tan^{-1}\left(\frac{a}{b}\right)$. From \eqref{eq:energy} and \eqref{eq:hosol}, the total energy $E_0$ is computed to be
$$E_0=\frac{1}{2}m\omega^2(a^2+b^2).$$
This tells us that the total energy of a simple harmonic oscillator is proportional to $a^2+b^2$, the square of the amplitude. As we have seen, the sawtooth function $f(x)$ is represented by the Fourier series
\begin{align*}
f(x)&=-\frac{2L}{\pi}\sum_{n=1}^\infty\frac{(-1)^n}{n}\sin\left(\frac{n\pi x}{L}\right)\\
&=\frac{2L}{\pi}\left\{\sin\left(\frac{\pi x}{L}\right)-\frac{1}{2}\sin\left(\frac{2\pi x}{L}\right)+\frac{1}{3}\sin\left(\frac{3\pi x}{L}\right)-\cdots\right\}.
\end{align*}
The amplitude of the $n$-th harmonic is $c_n=\frac{2L}{n\pi}=\frac{2}{\omega_n}$, $n=1,2,3,\cdots$, twice the reciprocal of its angular frequency $\omega_n=\frac{n\pi}{L}$. The sequence $\{c_n\}$ is called the frequency spectrum or the amplitude spectrum.
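
A quick numerical check of the amplitude spectrum (a sketch assuming NumPy; the sine coefficients are approximated by Riemann sums):

```python
import numpy as np

L = 1.0                                    # half-period of the sawtooth; any L > 0 works
x = -L + 2 * L * np.arange(8192) / 8192
f = x                                      # the sawtooth f(x) = x on (-L, L)
for n in range(1, 6):
    # b_n = (1/L) * integral of f(x) sin(n pi x / L) dx, approximated by a Riemann sum
    b_n = 2 * np.mean(f * np.sin(n * np.pi * x / L))
    print(n, round(float(abs(b_n)), 4), round(2 * L / (n * np.pi), 4))   # amplitude vs 2L/(n pi)
```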

The First and Second Derivative Tests

The First Derivative Test

The derivative $f'(x)$ can tell us a lot about the function $y=f(x)$. It can tell us where the critical points are, i.e. the points at which $f'(x)=0$, and the critical points are likely places at which $y=f(x)$ assumes a local maximum or a local minimum value. By further examining the properties of $f'(x)$, we can also determine at which critical points $f(x)$ assumes a local maximum, a local minimum, or neither. But first we see that $f'(x)$ can tell us where $y=f(x)$ is increasing or decreasing.

Theorem. Increasing/Decreasing Test

  1. If $f'(x)>0$ on an open interval, $f$ is increasing on that interval.
  2. If $f'(x)<0$ on an open interval, $f$ is decreasing on that interval.

Example. Find where $f(x)=3x^4-4x^3-12x^2+5$ is increasing and where it is decreasing.

Solution.
\begin{align*}
f'(x)&=12x^3-12x^2-24x\\
&=12x(x^2-x-2)\\
&=12x(x-2)(x+1).
\end{align*}
The critical points are $x=-1,0,2$. Using, for instance, the test point method (which is the easiest method of solving an inequality), we obtain the following table.
$$
\begin{array}{|c|c|c|c|c|c|c|c|}
\hline
x & x<-1 & -1 & -1<x<0 & 0 & 0<x<2 & 2 & x>2\\
\hline
f'(x) & - & 0 & + & 0 & - & 0 & +\\
\hline
f(x) & \searrow & f(-1) & \nearrow & f(0) & \searrow & f(2) &\nearrow\\
\hline
\end{array}
$$
So we find that $f$ is increasing on $(-1,0)\cup(2,\infty)$ and $f$ is decreasing on $(-\infty,-1)\cup(0,2)$.
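
The table can also be reproduced with a computer algebra system; here is a short sketch assuming SymPy is available, using the same test points we would pick by hand.

```python
import sympy as sp

x = sp.symbols('x', real=True)
f = 3 * x**4 - 4 * x**3 - 12 * x**2 + 5
fp = sp.diff(f, x)

print(sorted(sp.solve(sp.Eq(fp, 0), x)))       # critical points: [-1, 0, 2]

# Test-point method: the sign of f'(x) on each interval between critical points.
for t in (-2, sp.Rational(-1, 2), 1, 3):
    print(t, sp.sign(fp.subs(x, t)))           # -1, 1, -1, 1, i.e. the pattern -, +, -, +
```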

Now, local maximum values and local minimum values can be identified by observing the change of sign of $f'(x)$ at each critical point.

Theorem. [The First Derivative Test] Suppose that $c$ is a critical point of a differentiable function $f(x)$.

  1. If the sign of $f'(x)$ changes from $+$ to $-$ at $c$, $f(c)$ is a local maximum.
  2. If the sign of $f'(x)$ changes from $-$ to $+$ at $c$, $f(c)$ is a local minimum.
  3. If the sign $f'(x)$ does not change at $c$, $f$ has neither a local maximum nor a local minimum at $c$.

Example. In the previous example, the sign of $f'(x)$ changes from $+$ to $-$ at $0$, so $f(0)=5$ is a local maximum. The sign of $f'(x)$ changes from $-$ to $+$ at $-1$ and at $2$, so $f(-1)=0$ and $f(2)=-27$ are local minimum values.

The following figure confirms our findings from the above two examples.

[Figure: the graph of $f(x)=3x^4-4x^3-12x^2+5$]

The Second Derivative Test

The second order derivative $f^{\prime\prime}(x)$ can provide us with an additional piece of information about $y=f(x)$, namely the concavity of the graph of $y=f(x)$.

Definition. If the graph of $f$ lies above all of its tangents on an open interval $I$, it is called concave upward on $I$. If the graph of $f$ lies below all of its tangents on $I$, it is called concave downward on $I$.

From here on, $\smile$ denotes “concave up” and $\frown$ denotes “concave down”.

Definition. A point $(d,f(d))$ on the graph of $y=f(x)$ is called a point of inflection if the concavity of the graph of $f$ changes from $\smile$ to $\frown$ or from $\frown$ to $\smile$ at $(d,f(d))$. The candidates for the points of inflection may be found by solving the equation $f^{\prime\prime}(x)=0$ as shown in the example below.

Theorem. [Concavity Test]

  1. If $f^{\prime\prime}(x)>0$ for all $x$ in an open interval $I$, the graph of $f$ is concave up on $I$.
  2. If $f^{\prime\prime}(x)<0$ for all $x$ in an open interval $I$, the graph of $f$ is concave down on $I$.

Theorem. [The Second Derivative Test] Suppose that $f'(c)=0$ i.e. $c$ is a critical point of $f$. Suppose that $f^{\prime\prime}$ is continuous near $c$.

  1. If $f^{\prime\prime}(c)>0$ then $f(c)$ is a local minimum.
  2. If $f^{\prime\prime}(c)<0$ then $f(c)$ is a local maximum.

Example. Let $f(x)=-x^4+2x^2+2$.

  1. Find and identify all local maximum and local minimum values of $f(x)$ using the Second Derivative Test.
  2. Find the intervals on which the graph of $f(x)$ is concave up or concave down. Find all points of inflection.

Solution. 1. First we find the critical points of $f(x)$ by solving the equation $f'(x)=0$:
$$f'(x)=-4x^3+4x=-4x(x^2-1)=-4x(x+1)(x-1)=0.$$ So $x=-1,0,1$ are critical points of $f(x)$. Next, $f^{\prime\prime}(x)=-12x^2+4$. Since $f^{\prime\prime}(0)=4>0$ and $f^{\prime\prime}(-1)=f^{\prime\prime}(1)=-8<0$, by the Second Derivative Test, $f(0)=2$ is a local minimum value and $f(-1)=f(1)=3$ is a local maximum value.

2. First we need to solve the equation $f^{\prime\prime}(x)=0$:
$$f^{\prime\prime}(x)=-12x^2+4=-12\left(x^2-\frac{1}{3}\right)=-12\left(x+\frac{1}{\sqrt{3}}\right)\left(x-\frac{1}{\sqrt{3}}\right)=0.$$ So $f^{\prime\prime}(x)=0$ at $x=\pm\displaystyle\frac{1}{\sqrt{3}}$. By using the test-point method we find the following table:
$$
\begin{array}{|c||c|c|c|c|c|}
\hline
x & x<-\frac{1}{\sqrt{3}} & -\frac{1}{\sqrt{3}} & -\frac{1}{\sqrt{3}}<x<\frac{1}{\sqrt{3}} & \frac{1}{\sqrt{3}} & x>\frac{1}{\sqrt{3}}\\
\hline
f^{\prime\prime}(x) & - & 0 & + & 0 & -\\
\hline
f(x) & \frown & f\left(-\frac{1}{\sqrt{3}}\right)=\frac{23}{9} & \smile & f\left(\frac{1}{\sqrt{3}}\right)=\frac{23}{9} & \frown\\
\hline
\end{array}
$$
The graph of $f(x)$ is concave down on the intervals $\left(-\infty,-\frac{1}{\sqrt{3}}\right)\cup\left(\frac{1}{\sqrt{3}},\infty\right)$ and is concave up on the interval $\left(-\frac{1}{\sqrt{3}},\frac{1}{\sqrt{3}}\right)$. The points of inflection are $\left(-\frac{1}{\sqrt{3}},\frac{23}{9}\right)$ and $\left(\frac{1}{\sqrt{3}},\frac{23}{9}\right)$.
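
Both parts of the example can be checked symbolically; a short sketch assuming SymPy is available:

```python
import sympy as sp

x = sp.symbols('x', real=True)
f = -x**4 + 2 * x**2 + 2
fp, fpp = sp.diff(f, x), sp.diff(f, x, 2)

# Part 1: the Second Derivative Test at the critical points x = -1, 0, 1.
for c in sorted(sp.solve(sp.Eq(fp, 0), x)):
    print(c, fpp.subs(x, c), f.subs(x, c))      # f''(c) and the local extreme value f(c)

# Part 2: candidates for inflection points and the concavity on the three intervals.
cands = sorted(sp.solve(sp.Eq(fpp, 0), x))
print(cands, [f.subs(x, c) for c in cands])     # x = -sqrt(3)/3, sqrt(3)/3; both values 23/9
for t in (-1, 0, 1):
    print(t, sp.sign(fpp.subs(x, t)))           # the pattern -, +, - matches the table
```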

The following figure confirms our findings from the above example.

[Figure: the graph of $f(x)=-x^4+2x^2+2$ with its points of inflection (in blue)]