
Approximating Functions with Polynomials

Linear Approximation of $f(x)$ at $a$

As seen in differential calculus, when $x$ is near $a$ the function $f(x)$ can be approximated by the tangent line to $y=f(x)$ at $a$.  (For details see here.) \begin{equation}\label{eq:linapprox}T_1(x)=f(a)+f'(a)(x-a)\end{equation} Note that $T_1(a)=f(a)$ and $T_1′(a)=f'(a)$.

Quadratic Approximation of $f(x)$ at $a$

If the graph of $y=f(x)$ is curved a lot near $a$, a polynomial of higher degree may be preferred for a better approximation, for instance a quadratic polynomial. Let us call such a polynomial $T_2(x)$. We require that $T_2(x)$ satisfies $T_2(a)=f(a)$, $T_2′(a)=f'(a)$ and $T_2^{\prime\prime}(a)=f^{\prime\prime}(a)$. We can find $T_2(x)$ by setting $$T_2(x)=f(a)+f'(a)(x-a)+cf^{\prime\prime}(a)(x-a)^2$$ Then the condition $T_2^{\prime\prime}(a)=f^{\prime\prime}(a)$ implies that $c=\frac{1}{2}$. Hence \begin{equation}\label{eq:quadapprox}T_2(x)=f(a)+f'(a)(x-a)+\frac{f^{\prime\prime}(a)}{2!}(x-a)^2\end{equation} The factorial in the coefficient is written on purpose: it is the pattern that continues in the higher-order Taylor polynomials below.

Example. Approximation for $\ln x$.

  1. Find the linear approximation to $f(x)=\ln x$ at $a=1$.
  2. Find the quadratic approximation to $f(x)=\ln x$ at $a=1$.
  3. Use these approximations to estimate the value of $\ln(1.05)$.

Solution. $f'(x)=\frac{1}{x}$ and $f^{\prime\prime}(x)=-\frac{1}{x^2}$. So

  1. $T_1(x)=x-1$ and
  2. $T_2(x)=(x-1)-\frac{1}{2}(x-1)^2$.
  3. $T_1(1.05)=0.05$ and $T_2(1.05)=0.04875$. For comparison, the actual value of $\ln(1.05)$ is $0.04879016417\cdots$.

The graphs of y=ln(x) (in black), T_1(x)=x-1 (in blue) and T_2(x)=(x-1)-(x-1)^2/2 (in red)
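The estimates above are easy to check numerically. The following is a minimal Python sketch (standard library only) comparing $T_1(1.05)$ and $T_2(1.05)$ with the value returned by math.log:

    import math

    a = 1.0
    x = 1.05
    T1 = x - a                         # linear approximation of ln x at a = 1
    T2 = (x - a) - 0.5 * (x - a) ** 2  # quadratic approximation
    print(T1, T2, math.log(x))
    # prints roughly 0.05, 0.04875, 0.0487901641...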

The $n$-th Order Approximation of $f(x)$ at $a$

The $n$-th order approximation is given by the Taylor polynomial $T_n(x)$ centered at $a$ \begin{equation}\label{eq:taylorpolynomial}T_n(x)=f(a)+\frac{f'(a)}{1!}(x-a)+\frac{f^{\prime\prime}(a)}{2!}(x-a)^2+\cdots+\frac{f^{(n)}(a)}{n!}(x-a)^n\end{equation}

Example. Find the Taylor polynomials $T_1,T_2,\cdots,T_7$ for $f(x)=\sin x$ at $a=0$.

Solution. $T_1(x)=T_2(x)=x$, $T_3(x)=T_4(x)=x-\frac{x^3}{3!}$, $T_5(x)=T_6(x)=x-\frac{x^3}{3!}+\frac{x^5}{5!}$, $T_7(x)=x-\frac{x^3}{3!}+\frac{x^5}{5!}-\frac{x^7}{7!}$.
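These polynomials can also be generated by truncating the Maclaurin series of $\sin x$; here is a short sketch assuming SymPy is available:

    import sympy as sp

    x = sp.symbols('x')
    # T_n is the series of sin x at 0 truncated after the x^n term
    for n in range(1, 8):
        Tn = sp.sin(x).series(x, 0, n + 1).removeO()
        print(n, Tn)
    # T_1 = T_2 = x, T_3 = T_4 = x - x**3/6, T_5 = T_6 = x - x**3/6 + x**5/120, ...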

Example. Use Taylor polynomials of order $n=0,1,2,3$ to approximate $\sqrt{18}$.

Solution. There are two things we need to choose: $f(x)$ and $a$. Obviously the function to use is $f(x)=\sqrt{x}$. How do we choose $a$ then? There are two considerations: $a$ has to be close to 18, and $f(a)=\sqrt{a}$ should be a number we can compute without a calculator. (If we are allowed to use a calculator, the point of doing an approximation is moot. Besides, this is how people calculated a number like $\sqrt{18}$ a long time ago, when calculators did not exist. In fact, calculators and computers cannot produce the exact value of $\sqrt{18}$ either; what they compute are also approximations, for instance via Taylor polynomials.) With these two things in mind, the suitable choice is $a=16$. The derivatives we need are $f'(x)=\frac{1}{2\sqrt{x}}$, $f^{\prime\prime}(x)=-\frac{1}{4x\sqrt{x}}$, $f^{\prime\prime\prime}(x)=\frac{3}{8x^2\sqrt{x}}$. So, \begin{align*}T_0(x)&=\sqrt{16}=4\\T_1(x)&=4+\frac{1}{8}(x-16)\\T_2(x)&=4+\frac{1}{8}(x-16)-\frac{1}{512}(x-16)^2\\T_3(x)&=4+\frac{1}{8}(x-16)-\frac{1}{512}(x-16)^2+\frac{1}{16,384}(x-16)^3\end{align*} Hence, $T_0(18)=4$, $T_1(18)=4.25$, $T_2(18)=4.242188$, $T_3(18)=4.242676$. The actual value is $\sqrt{18}=4.242640686\cdots$.
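As a quick numerical check (plain Python, standard library only; the list of coefficients below simply collects the Taylor coefficients $f^{(k)}(16)/k!$ computed above):

    import math

    a, x = 16.0, 18.0
    coeffs = [4.0, 1 / 8, -1 / 512, 1 / 16384]   # f^(k)(16)/k! for k = 0, 1, 2, 3

    T = 0.0
    for k, c in enumerate(coeffs):
        T += c * (x - a) ** k
        print(f"T_{k}(18) = {T:.6f}")            # 4.000000, 4.250000, 4.242188, 4.242676
    print("sqrt(18) =", math.sqrt(18.0))         # 4.242640687...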

When a function $f(x)$ is approximated by a Taylor polynomial $T_n(x)$, the error bound must also be taken into consideration because it can tell us about the accuracy of the approximation. The error is given by the remainder \begin{equation}\label{eq:remainder}R_n(x)=f(x)-T_n(x)\end{equation}

Theorem (Taylor’s Theorem). Let $f$ have continuous derivatives up to $f^{(n+1)}$ on an open interval $I$ containing $a$. Then for all $x$ in $I$, $$f(x)=T_n(x)+R_n(x),$$ where $T_n(x)$ is the $n$-th order Taylor polynomial for $f$ centered at $a$ and the remainder $R_n$ is \begin{equation}\label{eq:remainder2}R_n(x)=\frac{f^{(n+1)}(\xi)}{(n+1)!}(x-a)^{n+1}\end{equation} for some $\xi$ between $x$ and $a$.

Remark. Recall the Mean Value Theorem: If $f(x)$ is continuous on $[a,x]$ and differentiable on $(a,x)$, then there exists $a<\xi<x$ such that $$\frac{f(x)-f(a)}{x-a}=f'(\xi)$$ or $$f(x)=f(a)+f'(\xi)(x-a)$$ Hence we see that Taylor's theorem is a generalization of the Mean Value Theorem.

\eqref{eq:remainder2} can be used to obtain the maximum error bound \begin{equation}\label{eq:errorbd}|R_n(x)|=|f(x)-T_n(x)|\leq M\frac{|x-a|^{n+1}}{(n+1)!}\end{equation}where $|f^{(n+1)}(\xi)|\leq M$ for all $\xi$ between $a$ and $x$.

Example.

  1. What is the maximum error possible in using the approximation $$\sin x\approx x-\frac{x^3}{3!}+\frac{x^5}{5!}$$ when $-0.3\leq x\leq 0.3$? Use this approximation to find $\sin 12^\circ$, correct to six decimal places.
  2. For what values of $x$ is this approximation accurate to within 0.00005?

Solution.

  1. Recall from the second example above that $x-\frac{x^3}{3!}+\frac{x^5}{5!}=T_5(x)=T_6(x)$, but we regard it as $T_6(x)$ because the error bound for $R_6$ is smaller than the one for $R_5$. (This choice is also consistent with the alternating series estimate because $-\frac{x^7}{7!}$ is the term that comes after $\frac{x^5}{5!}$. See below.) Since $|f^{(7)}(\xi)|=|\cos(\xi)|\leq 1$ for all $\xi$ between 0 and $x$, from \eqref{eq:errorbd} we have $|R_6(x)|\leq\frac{|x|^7}{7!}=\frac{|x|^7}{5040}$. Since $|x|\leq 0.3$, $|R_6|\leq\frac{(0.3)^7}{5040}\approx 4.3\times 10^{-8}$. Thus the maximum possible error is about $4.3\times 10^{-8}$. Note that the series for $\sin x$ is alternating, so it is actually easier to use the remainder estimate for alternating series $$|R_6|\leq a_7=\frac{|x|^7}{7!}=\frac{|x|^7}{5040}$$ \begin{align*}\sin 12^\circ&=\sin\left(\frac{12\pi}{180}\right)\\&=\sin\left(\frac{\pi}{15}\right)\\&\approx\frac{\pi}{15}-\frac{\left(\frac{\pi}{15}\right)^3}{3!}+\frac{\left(\frac{\pi}{15}\right)^5}{5!}\\&\approx 0.207911694\end{align*} Allowing for the maximum possible error, $$\sin 12^\circ=T_6\left(\frac{\pi}{15}\right)+R_6=0.207911694\pm 0.000000043$$ so, correct to six decimal places, $\sin 12^\circ\approx 0.207912$.
  2. We need $\frac{|x|^7}{5040}<0.00005$, so $|x|^7<0.00005\times 5040=0.252$. Hence, $|x|<(0.252)^{\frac{1}{7}}\approx 0.821$. (Both parts are checked numerically in the sketch below.)
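Here is the quick check of both parts in Python (standard library only):

    import math

    x = math.pi / 15                       # 12 degrees in radians
    T6 = x - x**3 / math.factorial(3) + x**5 / math.factorial(5)
    bound = 0.3**7 / math.factorial(7)     # error bound for |x| <= 0.3

    print(T6, math.sin(x))                 # roughly 0.20791170 vs 0.20791169
    print(bound)                           # roughly 4.34e-08
    print(abs(math.sin(x) - T6) <= bound)  # True
    print((0.00005 * 5040) ** (1 / 7))     # roughly 0.821, as in part 2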

Example.

  1. Approximate $f(x)=\root 3\of{x}$ by a Taylor polynomial of order (degree) 2 at $a=8$.
  2. How accurate is this approximation when $7\leq x\leq 9$?

Solution.

  1. First we find the following derivatives. \begin{align*}f'(x)&=\frac{1}{3}x^{-\frac{2}{3}}\\f^{\prime\prime}(x)&=-\frac{2}{9}x^{-\frac{5}{3}}\\f^{\prime\prime\prime}(x)&=\frac{10}{27}x^{-\frac{8}{3}}\end{align*} Now $$\root 3\of{x}\approx T_2(x)=2+\frac{1}{12}(x-8)-\frac{1}{288}(x-8)^2$$
  2. Note that the Taylor polynomial is not alternating when $x<8$. So we use \eqref{eq:remainder2} instead for the remainder estimation.\begin{align*}R_2(x)&=\frac{f^{\prime\prime\prime}(\xi)}{3!}(x-8)^3\\&=\frac{10}{27}\xi^{-\frac{8}{3}}\frac{(x-8)^3}{3!}\\&=\frac{5}{81}\frac{(x-8)^3}{\xi^{\frac{8}{3}}}\end{align*} If $7\leq x\leq 9$,  $-1\leq x-8\leq 1$. Since $\xi>7$, $\xi^{\frac{8}{3}}>7^{\frac{8}{3}}>179$. Hence, $$|R_2(x)|\leq\frac{5}{81}\frac{|x-8|^3}{\xi^{\frac{8}{3}}}<\frac{5}{81}\cdot\frac{1}{179}\approx 0.00034485<0.0004$$
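For comparison, the actual error of $T_2$ on $[7,9]$ can be estimated numerically (plain Python; the grid of sample points below is arbitrary):

    def T2(x):
        return 2 + (x - 8) / 12 - (x - 8) ** 2 / 288

    xs = [7 + 2 * k / 1000 for k in range(1001)]       # sample points in [7, 9]
    worst = max(abs(x ** (1 / 3) - T2(x)) for x in xs)
    print(worst)   # roughly 0.00026, comfortably below the bound 0.0004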

A Physical Application

In Einstein’s theory of special relativity, the mass of an object moving with velocity $v$ is \begin{equation}\label{eq:relmass}m=\frac{m_0}{\sqrt{1-\frac{v^2}{c^2}}}\end{equation} where $m_0$ is the mass of the object at rest and $c$ is the speed of light in a vacuum. The quantity $m$ in \eqref{eq:relmass} is called the relativistic mass. The kinetic energy of the object is the difference between its total energy and its energy at rest: \begin{equation}\label{eq:kenergy}K=mc^2-m_0c^2\end{equation}

  1. Show that when $v\ll c$ (this means $v$ is very small compared with $c$), $K\approx\frac{1}{2}m_0v^2$.
  2. Use Taylor’s formula to estimate the difference in these expressions for $K$ when $|v|\leq 100$ m/s.

Solution.

  1. First note that \begin{align*}(1+x)^{-\frac{1}{2}}&=1-\frac{1}{2}x+\frac{\left(-\frac{1}{2}\right)\left(-\frac{3}{2}\right)}{2!}x^2+\frac{\left(-\frac{1}{2}\right)\left(-\frac{3}{2}\right)\left(-\frac{5}{2}\right)}{3!}x^3+\cdots\\&=1-\frac{1}{2}x+\frac{3}{8}x^2-\frac{5}{16}x^3+\cdots\end{align*} (For details on how to obtain this read my note on Taylor series here, especially under The Binomial Series.) Thus, \begin{align*}K&=\frac{m_0c^2}{\sqrt{1-\frac{v^2}{c^2}}}-m_0c^2\\&=m_0c^2\left[\left(1-\frac{v^2}{c^2}\right)^{-\frac{1}{2}}-1\right]\\&=m_0c^2\left[\left(1+\frac{1}{2}\frac{v^2}{c^2}+\frac{3}{8}\frac{v^4}{c^4}+\frac{5}{16}\frac{v^6}{c^6}+\cdots\right)-1\right]\\&=m_0c^2\left(\frac{1}{2}\frac{v^2}{c^2}+\frac{3}{8}\frac{v^4}{c^4}+\frac{5}{16}\frac{v^6}{c^6}+\cdots\right)\end{align*} Hence, if $v\ll c$ then $K\approx\frac{1}{2}m_0v^2$.
  2. $R_1(x)=\frac{f^{\prime\prime}(\xi)}{2!}x^2$ where $f(x)=m_0c^2[(1+x)^{-\frac{1}{2}}-1]$ with $x=-\frac{v^2}{c^2}$. $f^{\prime\prime}(x)=\frac{3}{4}m_0c^2(1+x)^{-\frac{5}{2}}$, so $R_1(x)=\frac{3m_0c^2}{8(1+\xi)^{\frac{5}{2}}}\cdot\frac{v^4}{c^4}$ where $-\frac{v^2}{c^2}<\xi<0$. With $c=3\times 10^8$ m/s and $|v|\leq 100$ m/s, we obtain \begin{align*}|R_1(x)|&\leq\frac{3m_0(9\times 10^{16})}{8\left(1-\frac{100^2}{c^2}\right)^{\frac{5}{2}}}\left(\frac{100}{c}\right)^4\\&<(4.17\times 10^{-10})m_0\end{align*} (A numerical check appears below.)
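The two kinetic energies agree to about ten significant digits when $|v|\leq 100$ m/s, so a naive floating-point subtraction loses the difference to rounding. The sketch below therefore uses Python's decimal module at high precision (with $m_0$ set to $1$ and $c=3\times 10^8$ m/s as above):

    from decimal import Decimal, getcontext

    getcontext().prec = 50                 # enough digits to resolve the tiny difference
    c = Decimal(3) * 10**8                 # speed of light (m/s)
    v = Decimal(100)                       # speed of the object (m/s)
    beta2 = (v / c) ** 2

    K_rel = c**2 * (1 / (1 - beta2).sqrt() - 1)   # relativistic K per unit rest mass
    K_classical = v**2 / 2                        # classical K per unit rest mass

    print(K_rel - K_classical)             # about 4.17E-10, consistent with the bound
    print(3 * c**2 * beta2**2 / 8)         # leading correction term (3/8) v^4 / c^2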

Derivatives of Logarithmic and Exponential Functions

In this note, we study derivatives of logarithmic and exponential functions.

Derivatives of Logarithmic Functions

First recall that \begin{equation}\label{eq:euler}\lim_{t\to 0}(1+t)^{\frac{1}{t}}=e\end{equation}\begin{align*}\frac{d}{dx}\ln x&=\lim_{h\to 0}\frac{\ln(x+h)-\ln x}{h}\\&=\lim_{h\to 0}\frac{1}{h}\ln\left(\frac{x+h}{x}\right)\\&=\frac{1}{x}\lim_{h\to 0}\ln\left(1+\frac{h}{x}\right)^{\frac{x}{h}}\\&=\frac{1}{x}\lim_{t\to 0}\ln(1+t)^{\frac{1}{t}}\\&=\frac{1}{x}\end{align*} with $t=\frac{h}{x}$.\begin{equation}\label{eq:dln}\frac{d}{dx}\ln x=\frac{1}{x}\end{equation} Using the change of base formula $\log_ax=\frac{\ln x}{\ln a}$, we obtain \begin{equation}\label{eq:dlog}\frac{d}{dx}\log_ax=\frac{1}{x\ln a}\end{equation}
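As a quick numerical sanity check of \eqref{eq:dln} (plain Python; the step size $h$ below is an arbitrary small number):

    import math

    def dln(x, h=1e-6):
        # central-difference approximation of the derivative of ln at x
        return (math.log(x + h) - math.log(x - h)) / (2 * h)

    for x in (0.5, 1.0, 2.0, 10.0):
        print(x, dln(x), 1 / x)   # the last two columns agree closely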

Derivatives of Exponential Functions

We can find the derivative of the natural exponential function $y=e^x$ using the relationship $x=\ln y$ and implicit differentiation. Differentiating $x=\ln y$ with respect to $x$ we obtain $1=\frac{1}{y}\frac{dy}{dx}$, i.e. $\frac{dy}{dx}=y=e^x$. Hence \begin{equation}\label{eq:dnatexp}\frac{d}{dx}e^x=e^x\end{equation} Note that $a^x=e^{x\ln a}$. So by the chain rule we find $$\frac{d}{dx}a^x=\frac{d}{dx}e^{x\ln a}=e^{x\ln a}\ln a=a^x\ln a$$ Hence \begin{equation}\label{eq:dexp}\frac{d}{dx}a^x=a^x\ln a\end{equation}

The Power Rule (General Form)

Let us consider $x^n$ for any $x>0$ and any real number $n$. As we have seen above $x^n=e^{n\ln x}$ so by the chain rule $$\frac{d}{dx}x^n=\frac{d}{dx}e^{n\ln x}=e^{n\ln x}\frac{n}{x}=nx^{n-1}$$ This completes the proof of the general power rule.

Logarithmic Differentiation

The derivatives of functions involving products, quotients, and powers can often be found more quickly by taking the natural logarithm of the function before differentiating. This allows us to break a complicated function into simpler pieces using the properties of the natural logarithm. The process is called logarithmic differentiation.

Example. Use logarithmic differentiation to find the derivative of $y=\frac{x\sqrt{x^2+1}}{(x+1)^{\frac{2}{3}}}$.

Solution. \begin{align*}\ln y&=\ln \frac{x\sqrt{x^2+1}}{(x+1)^{\frac{2}{3}}}\\&=\ln x+\frac{1}{2}\ln(x^2+1)-\frac{2}{3}\ln(x+1)\end{align*} Differentiating with respect to $x$, $$\frac{1}{y}\frac{dy}{dx}=\frac{1}{x}+\frac{x}{x^2+1}-\frac{2}{3(x+1)}$$ Therefore, $$\frac{dy}{dx}=\left[\frac{1}{x}+\frac{x}{x^2+1}-\frac{2}{3(x+1)}\right]\frac{x\sqrt{x^2+1}}{(x+1)^{\frac{2}{3}}}$$
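Assuming SymPy is available, the computation can be double-checked symbolically; the sketch below compares SymPy's derivative with the one obtained by logarithmic differentiation:

    import sympy as sp

    x = sp.symbols('x', positive=True)
    y = x * sp.sqrt(x**2 + 1) / (x + 1) ** sp.Rational(2, 3)

    # derivative obtained above by logarithmic differentiation
    dy = (1 / x + x / (x**2 + 1) - sp.Rational(2, 3) / (x + 1)) * y

    print(sp.simplify(sp.diff(y, x) - dy))        # expected: 0
    print(sp.N((sp.diff(y, x) - dy).subs(x, 2)))  # numerical spot check, ~0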

Example. Let $y=x^x$, $x>0$. Find $\frac{dy}{dx}$.

Solution 1. $y=x^x=e^{x\ln x}$ and by the chain rule we obtain $$\frac{dy}{dx}=x^x(1+\ln x)$$

Solution 2. Use logarithmic differentiation. $\ln y=x\ln x$ and differentiating this with respect to $x$, we have $$\frac{1}{y}\frac{dy}{dx}=1+\ln x$$ Hence, $$\frac{dy}{dx}=x^x(1+\ln x)$$

Alternative Approach

In the earlier approach we started out with $e^x$ and regarded $\ln x$ as its inverse function. It can also be done the other way around, namely we first define $\ln x$ and regard $e^x$ as its inverse function. The natural logarithmic function $\ln x$ can be defined by \begin{equation}\label{eq:natlog}\ln x=\int_1^x\frac{1}{t}dt,\ x>0\end{equation} The number $x$ that satisfies the equation $\ln x=1$ is denoted by $e$. All properties of natural logarithm can be derived from definition \eqref{eq:natlog}. Also from definition \eqref{eq:natlog}, we obtain \eqref{eq:dln} by the Fundamental Theorem of Calculus. Using \eqref{eq:dln} one can show the limit $$\lim_{x\to 0}(1+x)^{\frac{1}{x}}=e$$

Proof. Let $f(x)=\ln x$. Then $f'(x)=\frac{1}{x}$ and so $f'(1)=1$. On the other hand, \begin{align*}f'(1)&=\lim_{x\to 0}\frac{f(1+x)-f(1)}{x}\\&=\lim_{x\to 0}\frac{\ln(1+x)}{x}\\&=\lim_{x\to 0}\ln(1+x)^{\frac{1}{x}}\\&=\ln\left[\lim_{x\to 0}(1+x)^{\frac{1}{x}}\right]\end{align*} where the last equality holds by the continuity of $\ln x$. Since $f'(1)=1$, we have $\ln\left[\lim_{x\to 0}(1+x)^{\frac{1}{x}}\right]=1$, and therefore $$\lim_{x\to 0}(1+x)^{\frac{1}{x}}=e$$

Remark. By substituting $y=\frac{1}{x}$, $$e=\lim_{y\to\infty}\left(1+\frac{1}{y}\right)^y$$

Remark. An alternative definition of $e$ is as an infinite series $$e=1+\frac{1}{1!}+\frac{1}{2!}+\frac{1}{3!}+\cdots$$ For details see here.

Derivatives of Trigonometric Functions

In this note, we study derivatives of the trigonometric functions $y=\sin x$, $y=\cos x$, $y=\sec x$, $y=\csc x$, $y=\tan x$, and $y=\cot x$. First we calculate the derivative of $y=\sin x$. \begin{align*}\frac{d}{dx}\sin x&=\lim_{h\to 0}\frac{\sin(x+h)-\sin x}{h}\\&=\lim_{h\to 0}\frac{\sin x\cos h+\cos x\sin h-\sin x}{h}\\&=\lim_{h\to 0}\left[\sin x\frac{\cos h-1}{h}+\cos x\frac{\sin h}{h}\right]\end{align*} Recall that $\lim_{h\to 0}\frac{\cos h -1}{h}=0$ and $\lim_{h\to 0}\frac{\sin h}{h}=1$. Hence we obtain \begin{equation}\label{eq:dsin}\frac{d}{dx}\sin x=\cos x\end{equation} In a similar manner we can also obtain \begin{equation}\label{eq:dcos}\frac{d}{dx}\cos x=-\sin x\end{equation} Using the reciprocal rule (baby quotient rule) along with \eqref{eq:dsin} and \eqref{eq:dcos}, we find the derivatives of $y=\sec x$, $y=\csc x$ as \begin{align}\label{eq:dsec}\frac{d}{dx}\sec x&=\sec x\tan x\\\label{eq:dcsc}\frac{d}{dx}\csc x&=-\csc x\cot x\end{align} Finally using the quotient rule along with \eqref{eq:dsin} and \eqref{eq:dcos}, we find the derivatives of $y=\tan x$, $y=\cot x$ as \begin{align}\label{eq:dtan}\frac{d}{dx}\tan x&=\sec^2 x\\\label{eq:dcot}\frac{d}{dx}\cot x&=-\csc^2 x\end{align}

Alternating Series, Absolute and Conditional Convergence

The Alternating Series Test

The alternating series $\sum_{k=1}^\infty (-1)^{k+1}a_k$ converges provided:

  1. $0<a_{k+1}\leq a_k$ for all $k=1,2,3,\cdots$ i.e. $\{a_k\}$ is a decreasing sequence.
  2. $\lim_{k\to\infty}a_k=0$.

Example. The alternating harmonic series $\sum_{k=1}^\infty\frac{(-1)^{k+1}}{k}$ converges.

Example. The alternating series $\sum_{k=1}^\infty(-1)^{k+1}\frac{k+1}{k}$ diverges because $\lim_{k\to\infty}\frac{k+1}{k}=1\ne 0$.

Example. The alternating series $\sum_{k=2}^\infty(-1)^k\frac{\ln k}{k}$ converges because

  1. $\left\{\frac{\ln k}{k}\right\}$ is decreasing for all $k\geq 3$. (Let $f(x)=\frac{\ln x}{x}$. Then $f'(x)=\frac{1-\ln x}{x^2}<0$ for all $x>e=2.7182818284590\cdots$.)
  2. $\lim_{k\to\infty}\frac{\ln k}{k}=0$.

Remainder in Alternating Series

Let $S=\sum_{k=1}^\infty(-1)^{k+1}a_k=a_1-a_2+a_3-a_4+\cdots$. Then we see that the distribution of its partial sums would be like the following figure.

From the figure we see that $S$ always lies between any two consecutive partial sums $S_n$ and $S_{n+1}$, so we obtain the inequality \begin{equation}\label{eq:altser}|R_n|=|S-S_n|\leq|S_{n+1}-S_n|=a_{n+1}\end{equation} The inequality \eqref{eq:altser} serves as an estimate for the error (remainder) $|R_n|$, whose bound is given by $a_{n+1}$.

Example.

  1. How many terms of the series $$\ln 2=1-\frac{1}{2}+\frac{1}{3}-\frac{1}{4}+\cdots=\sum_{k=1}^\infty\frac{(-1)^{k+1}}{k}$$ are required to approximate the value of the series with a remainder less than $10^{-6}$?
  2. If $n=9$ terms of the series $\sum_{k=1}^\infty\frac{(-1)^k}{k!}=e^{-1}-1$ are summed, what is the maximum error committed in approximating the value of the series?

Solution.

  1. $|R_n|\leq a_{n+1}=\frac{1}{n+1}<10^{-6}$ so $n+1>1000000$ i.e. $n\geq 1000000$.
  2. $|R_9|\leq\frac{1}{10!}\approx 2.8\times 10^{-7}$.
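The remainder estimate \eqref{eq:altser} is easy to observe numerically; here is a small Python sketch (standard library only) for the alternating harmonic series:

    import math

    def partial_sum(n):
        # S_n for the alternating harmonic series
        return sum((-1) ** (k + 1) / k for k in range(1, n + 1))

    for n in (10, 100, 1000):
        error = abs(math.log(2) - partial_sum(n))
        print(n, error, 1 / (n + 1))   # |R_n| is indeed at most a_{n+1} = 1/(n+1)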

Absolute and Conditional Convergence

Assume that $\sum_{k=1}^\infty a_k$ converges. $\sum_{k=1}^\infty a_k$ is said to converge absolutely if $\sum_{k=1}^\infty |a_k|$ converges. Otherwise, $\sum_{k=1}^\infty a_k$ converges conditionally.

Example. The alternating harmonic series $\sum_{k=1}^\infty\frac{(-1)^{k+1}}{k}$ converges conditionally.

Theorem. If $\sum_{k=1}^\infty |a_k|$ converges, then so does $\sum_{k=1}^\infty a_k$. That is, absolute convergence implies convergence. However, the converse need not be true as seen in the example above.

Proof. \begin{align*}\sum_{k=1}^\infty a_k&=\sum_{k=1}^\infty(a_k+|a_k|-|a_k|)\\&=\sum_{k=1}^\infty(a_k+|a_k|)-\sum_{k=1}^\infty|a_k|\end{align*} Since $0\leq a_k+|a_k|\leq 2|a_k|$ and $\sum_{k=1}^\infty 2|a_k|$ converges, $\sum_{k=1}^\infty(a_k+|a_k|)$ converges by the comparison test. Therefore, $\sum_{k=1}^\infty a_k$ converges, being the difference of two convergent series.

Example. Determine whether each of the following series diverges, converges absolutely, or converges conditionally.

  1. $\sum_{k=1}^\infty\frac{(-1)^{k+1}}{\sqrt{k}}$
  2. $\sum_{k=1}^\infty\frac{(-1)^{k+1}}{\sqrt{k^3}}$
  3. $\sum_{k=1}^\infty\frac{\sin k}{k^2}$
  4. $\sum_{k=1}^\infty\frac{(-1)^kk}{k+1}$

Solution.

  1. By the alternating series test, the series converges. However, $\sum_{k=1}^\infty\frac{1}{\sqrt{k}}$ is a $p$-series with $p=\frac{1}{2}<1$, so it diverges. Hence, $\sum_{k=1}^\infty\frac{(-1)^{k+1}}{\sqrt{k}}$ converges conditionally.
  2. $\sum_{k=1}^\infty\frac{1}{\sqrt{k^3}}$ is a $p$-series with $p=\frac{3}{2}>1$, so it converges. Therefore, $\sum_{k=1}^\infty\frac{(-1)^{k+1}}{\sqrt{k^3}}$ converges absolutely.
  3. $|\sin k|\leq 1$, so $\frac{|\sin k|}{k^2}\leq\frac{1}{k^2}$. Since $\sum_{k=1}^\infty\frac{1}{k^2}$ converges, so does $\sum_{k=1}^\infty\frac{|\sin k|}{k^2}$ by the comparison test. Therefore, $\sum_{k=1}^\infty\frac{\sin k}{k^2}$ converges absolutely.
  4. $\lim_{k\to\infty}\frac{k}{k+1}=1\ne 0$, so the terms do not approach $0$ and the series diverges by the divergence test.

The Ratio, Root and Comparison Tests

d’Alembert-Cauchy Ratio Test

The following d’Alembert-Cauchy ratio test is one of the easiest to apply and is widely used.

Theorem (d’Alembert-Cauchy Ratio Test). Suppose that $\sum_{n=1}^\infty a_n$ is a series with positive terms.

  1. If $\lim_{n\to\infty}\frac{a_{n+1}}{a_n}<1$ then $\sum_{n=1}^\infty a_n$ converges.
  2. If $\lim_{n\to\infty}\frac{a_{n+1}}{a_n}>1$ then $\sum_{n=1}^\infty a_n$ diverges.
  3. If $\lim_{n\to\infty}\frac{a_{n+1}}{a_n}=1$ then the test is inconclusive, i.e., the ratio test provides no information regarding the convergence of the series $\sum_{n=1}^\infty a_n$.

Example. Test $\sum_{n=1}^\infty\frac{n}{2^n}$ for convergence.

Solution. \begin{align*}\lim_{n\to\infty}\frac{a_{n+1}}{a_n}&=\lim_{n\to\infty}\frac{\frac{n+1}{2^{n+1}}}{\frac{n}{2^n}}\\&=\lim_{n\to\infty}\frac{n+1}{2n}\\&=\frac{1}{2}<1\end{align*} Hence by the ratio test the series converges.
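Numerically (a plain Python sketch), the ratios approach $\frac{1}{2}$ and the partial sums settle down to a finite value:

    def a(n):
        return n / 2 ** n

    for n in (1, 5, 10, 50):
        print(n, a(n + 1) / a(n))            # the ratios tend to 1/2

    print(sum(a(n) for n in range(1, 60)))   # the partial sums level off near 2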

Example. Test the convergence of the series $\sum_{n=1}^\infty\frac{n^n}{n!}$.

Solution.
\begin{align*}
\lim_{n\to\infty}\frac{a_{n+1}}{a_n}&=\lim_{n\to\infty}\frac{(n+1)^{n+1}}{(n+1)!}\cdot\frac{n!}{n^n}\\
&=\lim_{n\to\infty}\left(\frac{n+1}{n}\right)^n\\
&=\lim_{n\to\infty}\left(1+\frac{1}{n}\right)^n\\
&=e>1.
\end{align*}
Hence, the series diverges.

Remark. There is an easier way to show the divergence of the series $\sum_{n=1}^\infty\frac{n^n}{n!}$.

Note that
$$a_n=\frac{n^n}{n!}=\frac{n\cdot n\cdot n\cdots n}{1\cdot 2\cdot 3\cdots n}\geq n.$$
This implies that $\lim_{n\to\infty}a_n=\infty$. Hence by the divergence test the series diverges.

Cauchy Root Test

Theorem (Cauchy Root Test). Suppose that $\sum_{n=1}^\infty a_n$ is a series with positive terms.

  1. If $\lim_{n\to\infty}\root n\of{a_n}=r<1$ then $\sum_{n=1}^\infty a_n$ converges.
  2. If $\lim_{n\to\infty}\root n\of{a_n}=r> 1$ then $\sum_{n=1}^\infty a_n$ diverges.
  3. If $\lim_{n\to\infty}\root n\of{a_n}=r=1$ then the test fails, i.e., the root test is inconclusive.

Example. Test the convergence of the series $\sum_{n=1}^\infty\left(\frac{2n+3}{3n+2}\right)^n$.

Solution. \begin{align*}\lim_{n\to\infty}\root n\of{a_n}&=\lim_{n\to\infty}\root n\of{\left(\frac{2n+3}{3n+2}\right)^n}\\&=\lim_{n\to\infty}\frac{2n+3}{3n+2}\\&=\frac{2}{3}<1\end{align*}Hence by the root test the series converges.
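Numerically (plain Python), the $n$-th roots indeed approach $\frac{2}{3}$:

    for n in (1, 10, 100, 1000):
        a_n = ((2 * n + 3) / (3 * n + 2)) ** n
        print(n, a_n ** (1 / n))   # tends to 2/3 = 0.666...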

Comparison Test

Theorem (Comparison Test). Suppose that $\sum_{n=1}^\infty a_n$ and $\sum_{n=1}^\infty b_n$ are series with positive terms.

  1. If $\sum_{n=1}^\infty b_n$ converges and $a_n\leq b_n$ for all $n$, then $\sum_{n=1}^\infty a_n$ also converges.
  2. If $\sum_{n=1}^\infty b_n$ diverges and $b_n\leq a_n$ for all $n$, then $\sum_{n=1}^\infty a_n$ also diverges.

Remark. For a known convergent series we may use a convergent geometric series, whereas the harmonic series serves as a standard divergent series. As other series are identified as either convergent or divergent, they too may be used as the known series in this comparison test.

Example. Determine whether the series $\sum_{n=1}^\infty\frac{5}{2n^2+4n+3}$ converges.

Solution. Notice that $\frac{5}{2n^2+4n+3}<\frac{5}{n^2}$ for all $n$. Since $\sum_{n=1}^\infty\frac{5}{n^2}=5\sum_{n=1}^\infty\frac{1}{n^2}$ converges ($\sum_{n=1}^\infty\frac{1}{n^2}$ is a $p$-series with $p=2$), by the comparison test the series converges.

Example. Test the series $\sum_{n=1}^\infty\frac{n^3}{2n^4-1}$ for convergence or divergence.

Solution. $2n^4-1<2n^4$ so $\frac{n^3}{2n^4-1}>\frac{n^3}{2n^4}=\frac{1}{2n}$. Since the harmonic series $\sum_{n=1}^\infty\frac{1}{n}$ diverges, so does $\sum_{n=1}^\infty\frac{1}{2n}$, and hence by the comparison test the series diverges.

Example. Test the series $\sum_{n=2}^\infty\frac{\ln n}{n}$ for convergence or divergence.

Solution. For $n\geq 3$, $\ln n>1$, so $\frac{\ln n}{n}>\frac{1}{n}$. Since $\sum_{n=1}^\infty\frac{1}{n}$ diverges (the harmonic series, i.e. the $p$-series with $p=1$), by the comparison test the series diverges.

Example. Test the series $\sum_{n=2}^\infty\frac{\ln n}{n^3}$ for convergence or divergence.

Solution. As seen in the following figure, $\ln n<n$ for all $n\geq 2$.

The graphs of y=ln(x) (in red) and y=x (in blue).

So $\frac{\ln n}{n^3}<\frac{n}{n^3}=\frac{1}{n^2}$. Since $\sum_{n=1}^\infty\frac{1}{n^2}$ converges ($p$-series with $p=2>1$), $\sum_{n=2}^\infty\frac{\ln n}{n^3}$ also converges.

Example (The $p$-series with $p\leq 1$). Let $p\leq 1$. Then
$\frac{1}{n}\leq\frac{1}{n^p}$ for all $n$. Since the harmonic series $\sum_{n=1}^\infty\frac{1}{n}$ diverges, by the comparison test
$\sum_{n=1}^\infty\frac{1}{n^p}$ is divergent for all $p\leq 1$.

The Limit Comparison Test

The limit comparison test is a variation of the comparison test.

Theorem (The Limit Comparison Test). Suppose that $\sum_{n=1}^\infty a_n$ (the series being tested) and $\sum_{n=1}^\infty b_n$ (a series whose convergence or divergence is already known) are series with positive terms. Let $L=\lim_{n\to\infty}\frac{a_n}{b_n}$. Then the following holds.

  1. If $0<L<\infty$, then either both series converge or both diverge.
  2. If $L=0$ and $\sum_{n=1}^\infty b_n$ converges, then $\sum_{n=1}^\infty a_n$ converges.
  3. If $L=\infty$ and $\sum_{n=1}^\infty b_n$ diverges, then $\sum_{n=1}^\infty a_n$ diverges.

The limit comparison test is inconclusive otherwise.

Remark. Just like the comparison test, the hardest part of using the limit comparison test is choosing a suitable series for $\sum_{n=1}^\infty b_n$, and unfortunately there is no systematic way of choosing one; it depends on the given series. It could be a geometric series, as you will see in an example below. For certain types of series, a good candidate for $b_n$ is $\frac{1}{n^p}$ from the $p$-series with an appropriate $p$-value.

Example. Test the series $\sum_{n=1}^\infty\frac{1}{2^n-1}$ for convergence or divergence.

Solution. Considering that $a_n=\frac{1}{2^n-1}$ and the geometric series $\sum_{n=1}^\infty\frac{1}{2^n}$ converges, it would be reasonable to try $b_n=\frac{1}{2^n}$. \begin{align*}\lim_{n\to\infty}\frac{a_n}{b_n}&=\lim_{n\to\infty}\frac{2^n}{2^n-1}\\&=\lim_{n\to\infty}\frac{1}{1-\frac{1}{2^n}}\\&=1\end{align*} Since $\sum_{n=1}^\infty\frac{1}{2^n}$ converges, so does $\sum_{n=1}^\infty\frac{1}{2^n-1}$ by the limit comparison test.

Example. Test the series $\sum_{n=1}^\infty\frac{1}{\sqrt{n^2+1}}$ for convergence or divergence.

Solution. The dominant part of $a_n=\frac{1}{\sqrt{n^2+1}}$ is $\frac{1}{\sqrt{n^2}}=\frac{1}{n}$, so we choose $b_n=\frac{1}{n}$. Then \begin{align*}\lim_{n\to\infty}\frac{a_n}{b_n}&=\lim_{n\to\infty}\frac{n}{\sqrt{n^2+1}}\\&=\lim_{n\to\infty}\frac{1}{\sqrt{1+\frac{1}{n^2}}}\\&=1\end{align*} Since $\sum_{n=1}^\infty\frac{1}{n}$ diverges, so does $\sum_{n=1}^\infty\frac{1}{\sqrt{n^2+1}}$ by the limit comparison test.

Example. Test the series $\sum_{n=1}^\infty\frac{n^4-2n^2+3}{2n^6-n+5}$ for convergence or divergence.

Solution. The dominant part of $a_n$ is $\frac{n^4}{n^6}=\frac{1}{n^2}$ so we choose $b_n=\frac{1}{n^2}$. Then $$\frac{\frac{n^4-2n^2+3}{2n^6-n+5}}{\frac{1}{n^2}}=\frac{n^6-2n^4+3n^2}{2n^6-n+5}\to\frac{1}{2}$$ as $n\to\infty$. Since $\sum_{n=1}^\infty\frac{1}{n^2}$ converges, so does the given series by the limit comparison test.
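A quick numerical look (plain Python) at the ratio $\frac{a_n}{b_n}$ with $b_n=\frac{1}{n^2}$:

    def a(n):
        return (n**4 - 2 * n**2 + 3) / (2 * n**6 - n + 5)

    for n in (10, 100, 1000, 10000):
        print(n, a(n) * n**2)   # a_n / b_n with b_n = 1/n^2; tends to 1/2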

Example. Test the series $\sum_{n=1}^\infty\frac{\ln n}{n^2}$ for convergence or divergence.

Solution. In this case, we try the $p$-series but we don’t know what $p$-value may work. To figure it out, let $b_n=\frac{1}{n^p}$. Then $\frac{a_n}{b_n}=\frac{\frac{\ln n}{n^2}}{\frac{1}{n^p}}=\frac{\ln n}{n^{2-p}}$. If $p\geq 2$ then $\lim_{n\to\infty}\frac{a_n}{b_n}=\infty$ but $\sum_{n=1}^\infty b_n$ converges, so the test is inconclusive. This means that $p<2$. Now, using L’Hôpital’s rule we get $$\lim_{n\to\infty}\frac{a_n}{b_n}=\lim_{n\to\infty}\frac{1}{(2-p)n^{2-p}}=0$$ If $p\leq 1$ then $\sum_{n=1}^\infty b_n$ diverges, so the test would be inconclusive. This leaves us with the condition $1<p<2$ for the limit comparison test to work. In other words, for any value of $1<p<2$ the limit comparison test will tell us that the series $\sum_{n=1}^\infty\frac{\ln n}{n^2}$ converges. For instance, let us choose $p=\frac{3}{2}$. Then $$\frac{a_n}{b_n}=\frac{\ln n}{\sqrt{n}}\to 0$$ as $n\to\infty$. Since $\sum_{n=1}^\infty\frac{1}{n^{\frac{3}{2}}}$ converges, the series converges.

Remark. Doing the same analysis we did in the example above, we can also see why using the dominant part of $a_n$ worked in some earlier examples. For instance, consider the series $\sum_{n=1}^\infty\frac{n^4-2n^2+3}{2n^6-n+5}$ that we discussed earlier. Again $a_n=\frac{n^4-2n^2+3}{2n^6-n+5}$ and let $b_n=\frac{1}{n^p}$, where an appropriate $p$-value is yet to be determined. Now $\frac{a_n}{b_n}=\frac{n^{p-2}-2n^{p-4}+3n^{p-6}}{2-\frac{1}{n^5}+\frac{5}{n^6}}$. First, if $p\leq 1$, the $p$-series $\sum_{n=1}^\infty\frac{1}{n^p}$ diverges but $\lim_{n\to\infty}\frac{a_n}{b_n}=0$, so the test is inconclusive; hence $p>1$, in which case the $p$-series converges. If $p>2$ then $\lim_{n\to\infty}\frac{a_n}{b_n}=\infty$, which makes the test inconclusive. Therefore we see that $1<p\leq 2$. $p=2$ is what we get from the dominant part $\frac{n^4}{n^6}$ of $a_n$, but that is not the only choice. You can choose any $1<p\leq 2$ for the test to work; for example, you could have chosen $p=\frac{3}{2}$, in which case $\lim_{n\to\infty}\frac{a_n}{b_n}=0$. The limit comparison test then says the series converges.