Maximum and Minimum

There are two different types of extremum (maximum or minimum) values of a function $y=f(x)$. We may consider a value of $y$ that is an extremum globally on the domain, or a value of $y$ that is an extremum locally around an $x$ value.

A function $f$ has an absolute maximum at $c$ if $f(c)\geq f(x)$ for all $x$ in the domain of $f$. Similarly, $f$ has an absolute minimum at $c$ if $f(c)\leq f(x)$ for all $x$ in the domain of $f$.

A function $f$ has a local maximum (or relative maximum) at $c$ if $f(c)\geq f(x)$ for all $x$ in some neighborhood of $c$ (i.e. an open interval that contains $c$). Similarly, $f$ has a local minimum (or relative minimum) at $c$ if $f(c)\leq f(x)$ for all $x$ in some neighborhood of $c$.

Example.

The graph of f(x)=3x^4-16x^3+18x^2 on [-1,4]

The above figure shows the graph of $f(x)=3x^4-16x^3+18x^2$, $-1\leq x\leq 4$. It has a local maximum at $x=1$ and a local minimum at $x=3$. The local minimum $f(3)=-27$ is also an absolute minimum, and $f$ has an absolute maximum $f(-1)=37$. Note that $f(-1)=37$ is not a local maximum, since the domain $[-1,4]$ contains no open interval around $x=-1$.

A natural question one may ask is whether a function always has an absolute maximum and an absolute minimum. You can easily find many examples that show that a function does not necessarily have an absolute maximum or an absolute minimum value. For instance, $y=x$ on $(-\infty,\infty)$ has neither an absolute maximum nor an absolute minimum. The function $y=x^2$ on $[0,1)$ has an absolute minimum 0 at $x=0$ but has no absolute maximum.

Theorem. [Max-Min Theorem]
If $f$ is continuous on a closed interval $[a,b]$, then $f$ attains an absolute maximum and an absolute minimum on $[a,b]$.

The following theorem is due to Fermat.

Theorem. If $f$ has a local maximum or a local minimum at $c$ and if $f'(c)$ exists, then $f'(c)=0$.

The converse of this theorem is not necessarily true, i.e. $f'(c)=0$ does not necessarily mean that $f(c)$ is a local maximum or a local minimum. For example, consider $f(x)=x^3$. $f'(0)=0$ but $f(x)$ has neither a local maximum nor a local minimum at $x=0$, as shown in the figure below.

The graph of f(x)=x^3

The above theorem is important because an absolute maximum and an absolute minimum may be found among the local maximum values, the local minimum values, and the values of $f$ at the endpoints, $f(a)$ and $f(b)$. To find local maximum and local minimum values, we first find the points $c$ where $f'(c)=0$ or $f'(c)$ does not exist. Such points are called critical points; the reason is that the graph of a function can change from increasing to decreasing, or from decreasing to increasing, only at a critical point.

Definition. A critical point of a function $f(x)$ is a number $c$ in the domain of $f$ such that either $f'(c)=0$ or $f'(c)$ does not exist.

Recipe for Finding the Absolute Maximum and Absolute Minimum

Let $f$ be a continuous function on a closed interval $[a,b]$.

Step 1. Find all critical points of $f$ in $(a,b)$.

Step 2. Evaluate $f$ at each critical point obtained in Step 1.

Step 3. Find $f(a)$ and $f(b)$.

Step 4. Compare all the values obtained in Steps 2 and 3. The largest value is the absolute maximum and the smallest value is the absolute minimum.

Example. Find the absolute maximum and the absolute minimum values of
$$f(x)=x^3-3x^2+1,\ -\frac{1}{2}\leq x\leq 4.$$

Solution.

Step 1. Find all critical points of $f$ in $\left(-\frac{1}{2},4\right)$.

$f'(x)=3x^2-6x$. Setting $f'(x)=0$, i.e. $3x^2-6x=3x(x-2)=0$, we find two critical points, $x=0$ and $x=2$.

Step 2. Evaluate $f$ at each critical point obtained in Step 1.

$f(0)=1$ and $f(2)=-3$.

Step 3. Find $f\left(-\frac{1}{2}\right)$ and $f(4)$.

$f\left(-\frac{1}{2}\right)=\frac{1}{8}$ and $f(4)=17$.

Step 4. Compare all the values obtained in Steps 2 and 3.

The largest value is $f(4)=17$ so this is the absolute maximum value of $f$ on $\left[-\frac{1}{2},4\right]$. The smallest value is $f(2)=-3$ so this is the absolute minimum of $f$ on $\left[-\frac{1}{2},4\right]$.
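For readers who like to check such computations by computer, here is a minimal SymPy sketch of the four-step recipe applied to this example. It assumes SymPy is available, and the variable names are illustrative only.

```python
# A minimal sketch of the recipe for f(x) = x^3 - 3x^2 + 1 on [-1/2, 4], assuming SymPy is installed.
import sympy as sp

x = sp.symbols('x')
f = x**3 - 3*x**2 + 1
a, b = sp.Rational(-1, 2), 4

# Step 1: critical points in (a, b) where f'(x) = 0
critical = [c for c in sp.solve(sp.diff(f, x), x) if a < c < b]

# Steps 2-3: evaluate f at the critical points and at the endpoints
candidates = {c: f.subs(x, c) for c in critical + [a, b]}

# Step 4: compare
print(candidates)                  # {0: 1, 2: -3, -1/2: 1/8, 4: 17}
print(max(candidates.values()))    # 17  (absolute maximum)
print(min(candidates.values()))    # -3  (absolute minimum)
```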

Linear Approximations and Differentials

Linear Approximation

Figure 1. Linear Approximation

Let $y=f(x)$ be a differentiable function. The function $f(x)$ can be approximated by the tangent line to $y=f(x)$ at $a$ if $x$ is near $a$. Such an approximation is called a linear approximation.

If $x\approx a$ then $\Delta x=x-a\approx 0$, so we have
\begin{align*}
\frac{\Delta y}{\Delta x}&\approx \frac{dy}{dx}\\
&=f'(a).
\end{align*}
This means that
$$\frac{f(x)-f(a)}{x-a}\approx f'(a),$$
i.e.
\begin{equation}
\label{eq:lineapprox}
f(x)\approx f(a)+f'(a)(x-a).
\end{equation}
The equation \eqref{eq:lineapprox} is called the linear approximation or tangent line approximation of $f$ at $a$. The linear function
\begin{equation}
L(x):=f(a)+f'(a)(x-a)
\end{equation}
is called the linearization of $f$ at $a$. Notice that $L(x)$ is the equation of the tangent line to $y=f(x)$ at $a$.

Example. Find the linearization of $f(x)=\sqrt{x+3}$ at $a=1$ and use it to approximate $\sqrt{3.98}$ and $\sqrt{4.05}$.

Solution. $f'(x)=\frac{1}{2\sqrt{x+3}}$, so
\begin{align*}
L(x)&=f(1)+f'(1)(x-1)\\
&=2+\frac{1}{4}(x-1)\\
&=\frac{x}{4}+\frac{7}{4}.
\end{align*}
When $x\approx 1$, we have the approximation
$$\sqrt{x+3}\approx \frac{x}{4}+\frac{7}{4}.$$

Figure 2. Linear approximation of f(x)=sqrt(x+3) at a=1

Setting $x+3=3.98$ we find $x=0.98$. Hence,
\begin{align*}
\sqrt{3.98}&\approx \frac{0.98}{4}+\frac{7}{4}\\
&=1.995.
\end{align*}
Setting $x+3=4.05$ we find $x=1.05$. Hence,
\begin{align*}
\sqrt{4.05}&\approx \frac{1.05}{4}+\frac{7}{4}\\
&=2.0125.
\end{align*}
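As a quick sanity check (purely illustrative, using only the Python standard library), one can compare the linearization with the true square roots:

```python
# Compare the linearization L(x) = x/4 + 7/4 of f(x) = sqrt(x + 3) at a = 1 with the exact values.
import math

def L(x):
    return x / 4 + 7 / 4          # tangent line to sqrt(x + 3) at a = 1

for t in (3.98, 4.05):            # approximate sqrt(t) with x = t - 3
    x = t - 3
    print(t, L(x), math.sqrt(t))  # 3.98: 1.995 vs 1.99499...,  4.05: 2.0125 vs 2.01246...
```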

Example. Use linear approximation to estimate $\sqrt{99.8}$.

Solution. In order to use linear approximation we need to choose $f(x)$, $x$ and $a$. From the given quantity we see that $f(x)=\sqrt{x}$ and thereby $x=99.8$. Since $f'(x)=\frac{1}{2\sqrt{x}}$, the linear approximation of $\sqrt{99.8}$ at $a$ is $$\sqrt{99.8}\approx \sqrt{a}+\frac{1}{2\sqrt{a}}(99.8-a)$$ How do we choose a suitable $a$? There are two criteria to keep in mind. One is that $a$ has to be close to $x$ for the linear approximation to be useful. The other is that $a$ needs to be chosen so that $f(a)$ and $f'(a)$ can be calculated easily (meaning by hand, without the aid of a calculator). Why is this important? The use of linear approximation does not assume any use of a calculator. (If you can use a calculator, what is the point of the approximation?) This method was developed when no calculators were available, so that people could calculate values like $\sqrt{99.8}$ by hand. Considering the two criteria, we find that $a=100$ is the right choice. Hence, $$\sqrt{99.8}\approx \sqrt{100}+\frac{1}{2\sqrt{100}}(99.8-100)=10+\frac{1}{20}(-0.2)=9.99$$

Example. Use linear approximation to estimate $\cos 29^\circ$.

Solution. $f(x)=\cos x$ and $x=29^\circ=\frac{29\pi}{180}$ in radians ($29^\circ$ itself is not a number but $\frac{29\pi}{180}$ is). Since $f'(x)=-\sin x$, the linear approximation of $\cos 29^\circ$ at $a$ is $$\cos 29^\circ\approx \cos a-\sin a \left(\frac{29\pi}{180}-a\right)$$ In the spirit of the two criteria discussed in the example above, the suitable $a$ is $\frac{30\pi}{180}=\frac{\pi}{6}$. Therefore, we have $$\cos 29^\circ\approx \cos\frac{\pi}{6}-\sin\frac{\pi}{6}\left(-\frac{\pi}{180}\right)=\frac{\sqrt{3}}{2}+\frac{\pi}{360}$$
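Both of the last two hand estimates can be checked numerically; the following short snippet (standard library only, purely illustrative) compares them with the calculator values.

```python
# Sanity check of the two linear-approximation estimates above.
import math

approx_sqrt = 10 + (1 / 20) * (-0.2)             # linearization of sqrt(x) at a = 100
print(approx_sqrt, math.sqrt(99.8))              # 9.99 vs 9.98999...

approx_cos = math.sqrt(3) / 2 + math.pi / 360    # linearization of cos(x) at a = pi/6
print(approx_cos, math.cos(math.radians(29)))    # 0.87475... vs 0.87462...
```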

Differentials


Figure 3. Differentials

As seen in Figure 3 above, $dx=\Delta x$ and, when $\Delta x\approx 0$, $\Delta y\approx dy$. On the other hand, $\frac{dy}{dx}=f'(x)$. Hence, we obtain
\begin{equation}
\label{eq:differential}
\Delta y\approx dy=f'(x)dx=f'(x)\Delta x.
\end{equation}

Example. The radius of a sphere was measured and found to be 21 cm with a possible error in measurement of at most 0.05 cm. What is the maximum error in using this value of the radius to compute the volume of the sphere?

Solution. Let $V$ denote the volume of a sphere of radius $r$. Then $V=\frac{4}{3}\pi r^3$. What we are trying to find is $\Delta V$ with $\Delta r\leq 0.05$ cm. As seen in \eqref{eq:differential}, $\Delta V\approx dV$, so we find $dV$ instead because finding $dV$ is easier than finding the exact error $\Delta V$. Differentiating $V$ with respect to $r$, we obtain
\begin{align*}
\Delta V&\approx dV\\&=4\pi r^2 dr\\
&=4\pi r^2\Delta r\\
&\leq 4\pi\cdot(21)^2\cdot 0.05\\
&=88.2\pi\approx 277.
\end{align*}
So the maximum error in the calculated volume is about 277 $\mbox{cm}^3$.
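For comparison, here is a small Python sketch (standard library only, names illustrative) that computes both the differential estimate $dV$ and the exact change in volume:

```python
# Differential estimate dV = 4*pi*r^2*dr versus the exact error in V = (4/3)*pi*r^3
# for r = 21 cm and dr = 0.05 cm.
import math

def V(r):
    return 4 / 3 * math.pi * r**3     # volume of a sphere of radius r

r, dr = 21.0, 0.05
dV = 4 * math.pi * r**2 * dr          # differential estimate of the error
exact = V(r + dr) - V(r)              # exact change in volume
print(dV, exact)                      # about 277.09 vs 277.75 (cm^3)
```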

Linear approximation and differentials may appear to be different entities, but the two methods are in fact equivalent and serve the same purpose. To illustrate this, let us take a look at the following example, which we answer using both linear approximation and differentials.

Example. Approximate $\sqrt{81.1}$.

Solution by Linear Approximation. Let $f(x)=\sqrt{x}$ and choose $a=81$. The reason for this choice of $a$ is that one can easily calculate $\sqrt{81}=9$ without the aid of a calculator (which is the main point of using this method) and also $a=81$ is close to $81.1$. Now we find the tangent line to $f(x)$ at $a=81$, or equivalently the linear approximation $L(x)$ at $a=81$. It is $$L(x)=\frac{1}{2\cdot 9}(x-81)+9$$ Then \begin{align*}L(81.1)&=\frac{1}{18}(81.1-81)+9\\&=\frac{1}{180}+9\\&=9.005555555555556\end{align*} approximates $\sqrt{81.1}$.

Solution by Differentials. Recall that $\Delta y=f(x+\Delta x)-f(x)$ is approximated by the differential $dy=f'(x)dx=f'(x)\Delta x$ for very small $\Delta x$. Now with $f(x)=\sqrt{x}$, $dy=\frac{1}{2\sqrt{x}}\Delta x$. From $\Delta y\approx dy$, we have $$f(x+\Delta x)\approx f(x)+\frac{1}{2\sqrt{x}}\Delta x$$ If we set $f(x+\Delta x)=\sqrt{81.1}$, we can choose $x=81$ and $\Delta x=0.1$. Accordingly, we find \begin{align*}\sqrt{81.1}&\approx\sqrt{81}+\frac{1}{2\sqrt{81}}0.1\\&=9+\frac{1}{180}=9.005555555555556\end{align*}

Implicit Differentiation

So far we have mostly seen functions defined explicitly as $y=f(x)$, which shows clearly that $y$ is a function of the independent variable $x$. But functions are often defined implicitly. For instance, consider the equation $x^2+y^2=25$. This is of course the equation of the circle centered at $(0,0)$ with radius $5$, and a circle is not the graph of a function. But if we require $y\geq 0$, then the equation describes the upper half-circle, which is the function $y=\sqrt{25-x^2}$. Functions defined by equations like $x^2+y^2=25$ are called implicit functions. In some cases, like $x^2+y^2=25$, we can easily write an implicit function explicitly as $y=f(x)$, but in many cases we cannot; for example, $x^3+y^3=6xy$. So we need a way to differentiate an implicit function without writing it as $y=f(x)$. This can indeed be done with the chain rule: simply regard $y$ as a function of $x$ and apply the chain rule. For example,
\begin{align*}
\frac{d}{dx}y^n&=(y^n)'\frac{dy}{dx}\ (y\ \mbox{is the innermost function})\\
&=ny^{n-1}\frac{dy}{dx}.
\end{align*}
Let us take a look at another example.
\begin{align*}
\frac{d}{dx}\cos y&=(\cos y)'\frac{dy}{dx}\ (y\ \mbox{is the innermost function})\\
&=-\sin y\frac{dy}{dx}.
\end{align*}
Here come more examples.

Example. If $x^2+y^2=25$, find $\frac{dy}{dx}$.

Solution. Differentiating the equation with respect to $x$, we obtain
$$2x+2y\frac{dy}{dx}=0.$$
Solving the resulting equation for $\frac{dy}{dx}$, we obtain
$$\frac{dy}{dx}=-\frac{x}{y}.$$
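The same computation can be carried out with a computer algebra system. Here is a minimal SymPy sketch (assuming SymPy is available) that treats $y$ as a function of $x$, differentiates the equation, and solves for $\frac{dy}{dx}$:

```python
# Implicit differentiation of x^2 + y^2 = 25 with y regarded as a function of x.
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')(x)

eq = sp.Eq(x**2 + y**2, 25)
# Differentiate both sides with respect to x, then solve for dy/dx.
deq = sp.Eq(sp.diff(eq.lhs, x), sp.diff(eq.rhs, x))
dydx = sp.solve(deq, sp.Derivative(y, x))[0]
print(dydx)   # -x/y(x)
```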

Example.

1. Find $y'$ if $x^3+y^3=6xy$.

Solution. Differentiate the equation with respect to $x$. Then we obtain
$$3x^2+3y^2\frac{dy}{dx}=6y+6x\frac{dy}{dx}.$$
Solving the resulting equation for $\frac{dy}{dx}$, we obtain
$$\frac{dy}{dx}=\frac{2y-x^2}{y^2-2x}.$$

2. Find the tangent to $x^3+y^3=6xy$ at $(3,3)$.

Solution. The equation of the tangent line is
$$y-3=\left[\frac{dy}{dx}\right]_{(3,3)}(x-3).$$
$$\left[\frac{dy}{dx}\right]_{(3,3)}=\frac{2\cdot 3-(3)^2}{3^2-2\cdot 3}=-1.$$ Therefore, the tangent is given by $y=-x+6$.
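The same SymPy approach (again a sketch, assuming SymPy is available) confirms the slope $-1$ at $(3,3)$:

```python
# Implicit differentiation of the folium x^3 + y^3 = 6xy, evaluated at (3, 3).
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')(x)

eq = sp.Eq(x**3 + y**3, 6*x*y)
deq = sp.Eq(sp.diff(eq.lhs, x), sp.diff(eq.rhs, x))
dydx = sp.solve(deq, sp.Derivative(y, x))[0]   # equivalent to (2y - x^2)/(y^2 - 2x)

slope = dydx.subs([(y, 3), (x, 3)])            # substitute y(x) = 3 first, then x = 3
print(slope)                                   # -1
```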

The Chain Rule

Let us consider the function $y=\sqrt{x^2+1}$. Notice that this is a composite function: $y=\sqrt{u}$ with $u=x^2+1$. In general, a composite function can be written as $y=f(u)$ where $u$ is a function of $x$, $u=g(x)$. While we know how to differentiate $y=\sqrt{u}$ (i.e. finding $\frac{dy}{du}$) and $u=x^2+1$ (i.e. finding $\frac{du}{dx}$), we do not yet know how to differentiate $y=\sqrt{x^2+1}$ (i.e. finding $\frac{dy}{dx}$). In this lecture, we devise a way to differentiate a composite function. This is actually very important because most of the differentiable functions we encounter are composite functions.

Let $y=f(u)$ and $u=g(x)$ and assume that both $\frac{dy}{du}$ and $\frac{du}{dx}$ exist. Now,
\begin{align*}
\frac{\Delta y}{\Delta x}&=\frac{\Delta y}{\Delta u}\cdot\frac{\Delta u}{\Delta x}\\
&=\frac{f(u+\Delta u)-f(u)}{\Delta u}\cdot\frac{g(\Delta x+x)-g(x)}{\Delta x}.
\end{align*}
Hence,
\begin{align*}
\frac{dy}{dx}&=\lim_{\Delta x\to 0}\frac{\Delta y}{\Delta x}\\
&=\lim_{\Delta u\to 0}\frac{\Delta y}{\Delta u}\cdot\lim_{\Delta x\to 0}\frac{\Delta u}{\Delta x}\ (\Delta u\to 0\ \mbox{as}\ \Delta x\to 0)\\
&=\frac{dy}{du}\cdot\frac{du}{dx}
\end{align*}
or
\begin{align*}
\frac{dy}{dx}&=\lim_{\Delta u\to 0}\frac{f(u+\Delta u)-f(u)}{\Delta u}\cdot\lim_{\Delta x\to 0}\frac{g(\Delta x+x)-g(x)}{\Delta x}\\
&=f'(u)g'(x).
\end{align*}

Theorem. [The Chain Rule]
Let $y=f(u)$ and $u=g(x)$. If both $\frac{dy}{du}$ and $\frac{du}{dx}$ exist, then $\frac{dy}{dx}$ exists and
\begin{align*}
\frac{dy}{dx}&=\frac{dy}{du}\cdot\frac{du}{dx}\\
&=f'(u)g'(x).
\end{align*}

Remark. The derivation of the chain rule shown above is not rigorously correct. The reason is that $\Delta u$ may become $0$. There is a more rigorous proof of the chain rule but we will not discuss that here.

Remark. Students commonly have difficulty applying the chain rule when they learn it for the first time. The difficulty is usually not in understanding the chain rule itself but in identifying the inner function $u=g(x)$. The candidate for $u$ is usually the function inside parentheses (or brackets), or the innermost function.

Example. We are now ready to find $\frac{dy}{dx}$ when $y=\sqrt{x^2+1}$. In this case, we don’t see parentheses or brackets but the innermost function is $x^2+1$. Let $u=x^2+1$. Then $y=\sqrt{u}$. Now,
\begin{align*}
\frac{dy}{du}&=\frac{1}{2\sqrt{u}}\\
&=\frac{1}{2\sqrt{x^2+1}},\\
\frac{du}{dx}&=2x.
\end{align*}
So, by the chain rule, we have
$$\frac{dy}{dx}=\frac{dy}{du}\cdot\frac{du}{dx}=\frac{x}{\sqrt{x^2+1}}.$$
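A one-line SymPy check of this derivative (illustrative only):

```python
# Verify d/dx sqrt(x^2 + 1) = x/sqrt(x^2 + 1).
import sympy as sp

x = sp.symbols('x')
print(sp.diff(sp.sqrt(x**2 + 1), x))   # x/sqrt(x**2 + 1)
```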

Example. Differentiate $y=(x^3-1)^{100}$.

Solution. The function inside parentheses is $x^3-1$. So, it is our candidate. Let $u=x^3-1$. Then $y=u^{100}.$
By the chain rule,
\begin{align*}
\frac{dy}{dx}&=\frac{dy}{du}\cdot\frac{du}{dx}\\
&=100u^{99}\cdot(3x^2)\\
&=300x^2(x^3-1)^{99}.
\end{align*}

Example. Find the derivative of each function.

1. $y=\sin 4x$.

Solution. The innermost function is $4x$. Let $u=4x$. Then $y=\sin u$. By the chain rule,
\begin{align*}
\frac{dy}{dx}&=\frac{dy}{du}\cdot\frac{du}{dx}\\
&=\cos u\cdot4\\
&=4\cos 4x.
\end{align*}

2. $y=\sqrt{\sin x}$.

Solution. The innermost function is $\sin x$. Let $u=\sin x$. Then $y=\sqrt{u}$. By the chain rule,
\begin{align*}
\frac{dy}{dx}&=\frac{dy}{du}\cdot\frac{du}{dx}\\
&=\frac{1}{2\sqrt{u}}\cdot\cos x\\
&=\frac{\cos x}{2\sqrt{\sin x}}.
\end{align*}
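Similar SymPy checks of the two derivatives just computed (illustrative only):

```python
# Verify the chain-rule derivatives of sin(4x) and sqrt(sin(x)).
import sympy as sp

x = sp.symbols('x')
print(sp.diff(sp.sin(4*x), x))          # 4*cos(4*x)
print(sp.diff(sp.sqrt(sp.sin(x)), x))   # cos(x)/(2*sqrt(sin(x)))
```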

Update: For those who are interested, the rigorous proof of the Chain Rule can be found here.

Fourier Series

d'Alembert (1717-83) studied a partial differential equation (the wave equation) that describes the motion of a vibrating string, and Daniel Bernoulli (1700-1782) showed that its solution can be represented as a trigonometric series. Fourier (1768-1830) likewise showed that the solution of a heat conduction problem is represented as a trigonometric series. Suppose that $f(\theta)$ satisfies $f(\theta+2\pi)=f(\theta)$ for all $\theta$, that is, $f(\theta)$ is a periodic function with period $2\pi$. Assume that $f$ is Riemann integrable on every bounded interval. Then can $f$ be expanded in a series (a trigonometric series)
\begin{equation}
\label{eq:fourier}
f(\theta)=\frac{1}{2}a_0+\sum_{n=1}^\infty(a_n\cos n\theta+b_n\sin n\theta)
\end{equation}
? The answer is yes. The series \eqref{eq:fourier} can be written as
\begin{equation}
\label{eq:fourier2}
f(\theta)=\sum_{n=-\infty}^\infty c_ne^{in\theta},
\end{equation}
where $c_0=\frac{1}{2}a_0$, $c_n=\frac{1}{2}(a_n-ib_n)$, and $c_{-n}=\frac{1}{2}(a_n+ib_n)$ for $n=1,2,3,\cdots$.
Multiplying \eqref{eq:fourier2} by $e^{-ik\theta}$ and integrating term by term over $[-\pi,\pi]$, we obtain
\begin{align*}
\int_{-\pi}^{\pi}f(\theta)e^{-ik\theta}d\theta&=\sum_{n=-\infty}^\infty c_n\int_{-\pi}^{\pi}e^{i(n-k)\theta}d\theta\\
&=2\pi\sum_{n=-\infty}^\infty c_n\delta_{nk},
\end{align*}
where $\delta_{nk}$ denotes Kronecker’s delta. Hence we obtain
\begin{equation}
\label{eq:fc}
c_n=\frac{1}{2\pi}\int_{-\pi}^{\pi}f(\theta)e^{-in\theta}d\theta,
\end{equation}
for $n=0,\pm 1,\pm 2,\cdots$. $a_n$ and $b_n$ are then given by
\begin{align}
\label{eq:fc2}
a_n&=\frac{1}{\pi}\int_{-\pi}^{\pi}f(\theta)\cos n\theta d\theta,\ n=0,1,2,\cdots,\\
\label{eq:fc3}
b_n&=\frac{1}{\pi}\int_{-\pi}^{\pi}f(\theta)\sin n\theta d\theta,\ n=1,2,\cdots.
\end{align}
The series of the form \eqref{eq:fourier} or \eqref{eq:fourier2} is called a Fourier series and $c_n$ or $a_n$, $b_n$ are called the Fourier coefficients of $f$.
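To make the coefficient formulas \eqref{eq:fc2} and \eqref{eq:fc3} concrete, here is a small SymPy sketch that computes $a_n$ and $b_n$ for the illustrative example $f(\theta)=\theta^2$ on $[-\pi,\pi]$ (extended $2\pi$-periodically); the function and names are my own choices, not part of the discussion above.

```python
# Fourier coefficients of the example f(theta) = theta^2 on [-pi, pi], assuming SymPy is available.
import sympy as sp

theta = sp.symbols('theta')
f = theta**2

def a(n):
    return sp.integrate(f * sp.cos(n * theta), (theta, -sp.pi, sp.pi)) / sp.pi

def b(n):
    return sp.integrate(f * sp.sin(n * theta), (theta, -sp.pi, sp.pi)) / sp.pi

print([a(n) for n in range(4)])       # [2*pi**2/3, -4, 1, -4/9]
print([b(n) for n in range(1, 4)])    # [0, 0, 0]  (theta^2 is even)
```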

Lemma. If $F$ is periodic with period $P$ then $\int_a^{a+P} F(x)dx$ is independent of $a$.

Proof. Define
\begin{align*}
g(a)&:=\int_a^{a+P} F(x)dx\\
&=\int_0^{a+P}F(x)dx-\int_0^a F(x)dx.
\end{align*}
Then $g'(a)=F(a+P)-F(a)=0$ for all $a$. This means that $g$ is a constant function.

Lemma. Suppose that $f$ is periodic with period $2\pi$ and integrable on $[-\pi,\pi]$. If $f$ is even,
$$a_n=\frac{2}{\pi}\int_0^\pi f(\theta)\cos n\theta d\theta,\ b_n=0.$$
If $f$ is odd,
$$a_n=0,\ b_n=\frac{2}{\pi}\int_0^\pi f(\theta)\sin n\theta d\theta.$$

Remark. $c_0=\frac{1}{2}a_0=\frac{1}{2\pi}\int_{-\pi}^{\pi}f(\theta)d\theta$. Notice that this is the mean value of $f$ on $[-\pi,\pi]$.

If $f(x)$ is a periodic function with period $2L$, then it can be represented on $[-L,L]$ as
\begin{equation}
\label{eq:fourier3}
f(x)=\frac{1}{2}a_0+\sum_{n=1}^\infty\left\{a_n\cos\left(\frac{n\pi}{L}x\right)+b_n\sin\left(\frac{n\pi}{L}x\right)\right\},
\end{equation}
\begin{align}
\label{eq:fc4}
a_n&=\frac{1}{L}\int_{-L}^L f(x)\cos\left(\frac{n\pi}{L}x\right)dx,\ n=0,1,2,\cdots,\\
\label{eq:fc5}
b_n&=\frac{1}{L}\int_{-L}^L f(x)\sin\left(\frac{n\pi}{L}x\right)dx,\ n=1,2,\cdots.
\end{align}

Example. [Sawtooth Function] Let $f$ be defined by
$$f(x)=x,\ -L<x<L$$
and $f(x+2L)=f(x)$.

Sawtooth Function with L=1

Since $x$ is an odd function,
$$a_n=\frac{1}{L}\int_{-L}^L x\cos\left(\frac{n\pi}{L}x\right)dx=0,\ n=0,1,2,\cdots.$$
For $n=1,2,\cdots$,
\begin{align*}
b_n&=\frac{1}{L}\int_{-L}^L x\sin\left(\frac{n\pi}{L}x\right)dx\\
&=\frac{2}{L}\int_0^L x\sin\left(\frac{n\pi}{L}x\right)dx\\
&=-\frac{2L(-1)^n}{n\pi}.
\end{align*}
Hence, $f(x)$ is represented as the Fourier series
$$f(x)=-\frac{2L}{\pi}\sum_{n=1}^\infty\frac{(-1)^n}{n}\sin\left(\frac{n\pi}{L}x\right)$$
on the interval $[-L,L]$.
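For readers who want to reproduce the plots below, here is a minimal NumPy sketch (names illustrative, NumPy assumed available) of the $n$-th partial sum of this series with $L=1$:

```python
# N-th partial sum of the Fourier series of the sawtooth f(x) = x on (-L, L).
import numpy as np

def sawtooth_partial_sum(x, N, L=1.0):
    s = np.zeros_like(x, dtype=float)
    for n in range(1, N + 1):
        s += (-1.0)**n / n * np.sin(n * np.pi * x / L)
    return -2 * L / np.pi * s

x = np.array([-0.75, -0.5, 0.0, 0.5, 0.75])
print(sawtooth_partial_sum(x, 100))   # close to [-0.75, -0.5, 0, 0.5, 0.75]
```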

Partial sums of the Fourier series of the sawtooth function for n=5, 10, 30, 100

Example. [Square Wave] Let $f$ be defined by
$$f(x)=\left\{\begin{array}{ccc}
-k & \mbox{if} & -\pi<x<0\\
k & \mbox{if} & 0<x<\pi
\end{array}\right.$$
and $f(x+2\pi)=f(x)$.

Square Wave

The Fourier coefficients are computed to be
\begin{align*}
a_n&=\frac{1}{\pi}\int_{-\pi}^{\pi}f(x)\cos nxdx=0,\ n=0,1,2,\cdots,\\
b_n&=\frac{1}{\pi}\int_{-\pi}^{\pi}f(x)\sin nxdx\\
&=\frac{2k}{n\pi}[1-(-1)^n],\ n=1,2,\cdots.
\end{align*}
So, $b_n=0$ if $n$ is even. Now,
$$b_{2n-1}=\frac{4k}{(2n-1)\pi},\ n=1,2,\cdots$$
and
$$f(x)=\frac{4k}{\pi}\sum_{n=1}^\infty\frac{\sin(2n-1)x}{2n-1}.$$
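The analogous NumPy sketch for the square wave (with $k=1$, purely illustrative):

```python
# N-th partial sum of the square-wave series (4k/pi) * sum sin((2n-1)x)/(2n-1).
import numpy as np

def square_partial_sum(x, N, k=1.0):
    s = np.zeros_like(x, dtype=float)
    for n in range(1, N + 1):
        s += np.sin((2 * n - 1) * x) / (2 * n - 1)
    return 4 * k / np.pi * s

x = np.array([-np.pi / 2, 0.0, np.pi / 2])
print(square_partial_sum(x, 100))   # approximately [-1, 0, 1]
```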

Partial sums of the Fourier series of the square wave for n=5, 10, 30, 100

Since $0<\frac{\pi}{2}<\pi$, $f\left(\frac{\pi}{2}\right)=k$. On the other hand,
\begin{align*}
f\left(\frac{\pi}{2}\right)&=\frac{4k}{\pi}\sum_{n=1}^\infty\frac{\sin\left(\frac{2n-1}{2}\right)\pi}{2n-1}\\
&=\frac{4k}{\pi}\sum_{n=1}^\infty\frac{(-1)^{n+1}}{2n-1}\\
&=\frac{4k}{\pi}\left(1-\frac{1}{3}+\frac{1}{5}-\frac{1}{7}+\cdots\right).
\end{align*}
Hence, we obtain
$$\frac{\pi}{4}=1-\frac{1}{3}+\frac{1}{5}-\frac{1}{7}+\cdots$$
i.e.
$$\pi=4\sum_{n=1}^\infty\frac{(-1)^{n+1}}{2n-1}.$$
This is a famous result obtained by Gottfried Wilhelm Leibniz in 1673 from geometric considerations.
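A quick numerical check of the Leibniz series (illustrative only; the series converges very slowly, with error on the order of $1/N$):

```python
# Partial sum of the Leibniz series pi = 4 * sum (-1)^(n+1) / (2n - 1).
N = 100000
print(4 * sum((-1)**(n + 1) / (2 * n - 1) for n in range(1, N + 1)))   # 3.14158..., within about 1e-5 of pi
```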

Pi as Leibniz series

Gibbs Phenomenon

The Gibbs phenomenon is an overshoot, a peculiarity of Fourier series and other eigenfunction series at a simple (jump) discontinuity: the $n$th partial sum of the Fourier series has large oscillations near the jump, which may increase the maximum of the partial sum above that of the function itself. The Gibbs phenomenon is observed in the above two examples. The overshoot does not die out as the frequency increases, but approaches a finite limit. It is a consequence of trying to approximate a discontinuous function by a partial sum of its Fourier series, i.e. a finite sum of continuous functions, which is always continuous.
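The overshoot can be measured numerically. The following NumPy sketch (with $k=1$, names illustrative) evaluates the maximum of the square-wave partial sum just to the right of the jump at $x=0$; the maximum stays near the same value no matter how large $n$ is:

```python
# Measure the Gibbs overshoot of the square-wave partial sums near the jump at x = 0.
import numpy as np

def square_partial_sum(x, N, k=1.0):
    s = np.zeros_like(x, dtype=float)
    for n in range(1, N + 1):
        s += np.sin((2 * n - 1) * x) / (2 * n - 1)
    return 4 * k / np.pi * s

x = np.linspace(1e-4, 0.5, 20000)               # fine grid just to the right of the jump
for N in (10, 100, 1000):
    print(N, square_partial_sum(x, N).max())    # stays near 1.18 for every N (about 9% of the jump 2k)
```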

Gibbs Phenomenon