The Ratio, Root and Comparison Tests

d’Alembert-Cauchy Ratio Test

The following d’Alembert-Cauchy ratio test is one of the easiest to apply and is widely used.

Theorem (d’Alembert-Cauchy Ratio Test). Suppose that $\sum_{n=1}^\infty a_n$ is a series with positive terms.

  1. If $\lim_{n\to\infty}\frac{a_{n+1}}{a_n}<1$ then $\sum_{n=1}^\infty a_n$ converges.
  2. If $\lim_{n\to\infty}\frac{a_{n+1}}{a_n}>1$ then $\sum_{n=1}^\infty a_n$ diverges.
  3. If $\lim_{n\to\infty}\frac{a_{n+1}}{a_n}=1$ then the test is inconclusive, i.e., the ratio test provides no information regarding the convergence of the series $\sum_{n=1}^\infty a_n$.

Example. Test $\sum_{n=1}^\infty\frac{n}{2^n}$ for convergence.

Solution. \begin{align*}\lim_{n\to\infty}\frac{a_{n+1}}{a_n}&=\lim_{n\to\infty}\frac{\frac{n+1}{2^{n+1}}}{\frac{n}{2^n}}\\&=\lim_{n\to\infty}\frac{n+1}{2n}\\&=\frac{1}{2}<1\end{align*} Hence by the Ratio test the series converges.
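For readers who like to experiment, here is a quick numerical illustration (a sanity check, not a proof) of the ratio $\frac{a_{n+1}}{a_n}$ settling near $\frac{1}{2}$:

```python
# Numerical illustration of the Ratio Test for a_n = n/2^n.
def a(n):
    return n / 2**n

# The exact ratio a(n+1)/a(n) simplifies to (n+1)/(2n), which tends to 1/2.
ratios = {n: a(n + 1) / a(n) for n in (10, 100, 1000)}
print(ratios)
```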

Example. Test the convergence of the series $\sum_{n=1}^\infty\frac{n^n}{n!}$.

Solution. \begin{align*}\lim_{n\to\infty}\frac{a_{n+1}}{a_n}&=\lim_{n\to\infty}\frac{\frac{(n+1)^{n+1}}{(n+1)!}}{\frac{n^n}{n!}}\\&=\lim_{n\to\infty}\frac{(n+1)^n}{n^n}\\&=\lim_{n\to\infty}\left(1+\frac{1}{n}\right)^n\\&=e>1\end{align*} Hence, by the Ratio Test the series diverges.

Remark. There is an easier way to show the divergence of the series $\sum_{n=1}^\infty\frac{n^n}{n!}$.

Note that
$$a_n=\frac{n^n}{n!}=\frac{n\cdot n\cdot n\cdots n}{1\cdot 2\cdot 3\cdots n}\geq n.$$
This implies that $\lim_{n\to\infty}a_n=\infty$. Hence by the Divergence Test the series diverges.
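Both observations are easy to check numerically; the following sketch verifies $a_n\geq n$ term by term and evaluates the ratio $\left(1+\frac{1}{n}\right)^n$, which tends to $e>1$:

```python
import math

# a_n = n^n/n! dominates n, so a_n cannot tend to 0 (Divergence Test).
terms = [n**n / math.factorial(n) for n in range(1, 21)]
for n, a_n in zip(range(1, 21), terms):
    assert a_n >= n

# The ratio a_{n+1}/a_n = (1 + 1/n)^n tends to e ≈ 2.718 > 1.
ratio_1000 = (1 + 1 / 1000) ** 1000
print(terms[-1], ratio_1000)
```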

Cauchy Root Test

Theorem (Cauchy Root Test). Suppose that $\sum_{n=1}^\infty a_n$ is a series with positive terms.

  1. If $\lim_{n\to\infty}\root n\of{a_n}=r<1$ then $\sum_{n=1}^\infty a_n$ converges.
  2. If $\lim_{n\to\infty}\root n\of{a_n}=r> 1$ then $\sum_{n=1}^\infty a_n$ diverges.
  3. If $\lim_{n\to\infty}\root n\of{a_n}=r=1$ then the test fails, i.e., the root test is inconclusive.

Example. Test the convergence of the series $\sum_{n=1}^\infty\left(\frac{2n+3}{3n+2}\right)^n$.

Solution. \begin{align*}\lim_{n\to\infty}\root n\of{a_n}&=\lim_{n\to\infty}\root n\of{\left(\frac{2n+3}{3n+2}\right)^n}\\&=\lim_{n\to\infty}\frac{2n+3}{3n+2}\\&=\frac{2}{3}<1\end{align*}Hence by the Root Test the series converges.
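As a numerical sketch (not a proof), one can compute $\root n\of{a_n}$ directly for a few $n$ and watch it settle near $\frac{2}{3}$:

```python
# Numerical illustration of the Root Test for a_n = ((2n+3)/(3n+2))^n.
def nth_root(n):
    a_n = ((2 * n + 3) / (3 * n + 2)) ** n
    return a_n ** (1 / n)

for n in (10, 100, 500):
    print(n, nth_root(n))  # settles near 2/3
```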

Comparison Test

Theorem (Comparison Test). Suppose that $\sum_{n=1}^\infty a_n$ and $\sum_{n=1}^\infty b_n$ are series with positive terms.

  1. If $\sum_{n=1}^\infty b_n$ converges and $a_n\leq b_n$ for all $n$, then $\sum_{n=1}^\infty a_n$ also converges.
  2. If $\sum_{n=1}^\infty b_n$ diverges and $b_n\leq a_n$ for all $n$, then $\sum_{n=1}^\infty a_n$ also diverges.

Remark. For a convergent series we have the geometric series, whereas the harmonic series will serve as a divergent series. As other series are identified as either convergent or divergent, they may be used for the known series in this comparison test.

Example. Determine whether the series $\sum_{n=1}^\infty\frac{5}{2n^2+4n+3}$ converges.

Solution. Notice that $\frac{5}{2n^2+4n+3}<\frac{5}{n^2}$ for all $n$. Since $\sum_{n=1}^\infty\frac{1}{n^2}$ converges (it is a $p$-series with $p=2$), so does $\sum_{n=1}^\infty\frac{5}{n^2}$, and hence by the Comparison Test the given series converges.
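A small numerical sketch confirms the termwise comparison and shows the partial sums staying below those of the convergent dominating series:

```python
# Verify 5/(2n^2+4n+3) < 5/n^2 termwise; the partial sums of the smaller
# series stay below those of the convergent dominating series.
s = t = 0.0
for n in range(1, 10_001):
    a_n = 5 / (2 * n**2 + 4 * n + 3)
    b_n = 5 / n**2
    assert a_n < b_n
    s += a_n
    t += b_n
print(s, t)
```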

Example. Test the series $\sum_{n=1}^\infty\frac{\ln n}{n}$ for convergence or divergence.

Solution. Since $\ln n>1$ for $n\geq 3$, we have $\frac{\ln n}{n}>\frac{1}{n}$ for all $n\geq 3$. Since $\sum_{n=1}^\infty\frac{1}{n}$ diverges (it is the harmonic series, also the $p$-series with $p=1$), by the Comparison Test the series diverges.

Example (The $p$-series). Let $p\leq 1$. Then
$\frac{1}{n}\leq\frac{1}{n^p}$ for all $n$, so by the Comparison Test
$\sum_{n=1}^\infty\frac{1}{n^p}$ is divergent for all $p\leq 1$.

The Limit Comparison Test

The Limit Comparison Test is a variation of the Comparison Test.

Theorem (The Limit Comparison Test). Suppose that $\sum_{n=1}^\infty a_n$ and $\sum_{n=1}^\infty b_n$ are series with positive terms. If $$\lim_{n\to\infty}\frac{a_n}{b_n}=c$$ where $c$ is a finite number and $c>0$, then either both series converge or both diverge.

Example. Test the series $\sum_{n=1}^\infty\frac{1}{2^n-1}$ for convergence or divergence.

Solution. Let $a_n=\frac{1}{2^n-1}$ and $b_n=\frac{1}{2^n}$. Then \begin{align*}\lim_{n\to\infty}\frac{a_n}{b_n}&=\lim_{n\to\infty}\frac{2^n}{2^n-1}\\&=\lim_{n\to\infty}\frac{1}{1-\frac{1}{2^n}}\\&=1>0\end{align*} Since $\sum_{n=1}^\infty\frac{1}{2^n}$ converges (it is a geometric series with $r=\frac{1}{2}$), so does $\sum_{n=1}^\infty\frac{1}{2^n-1}$ by the Limit Comparison Test.
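For a quick check, the ratio $\frac{a_n}{b_n}=\frac{2^n}{2^n-1}$ can be evaluated numerically and seen to tend to $1$:

```python
# The limit-comparison ratio a_n/b_n = 2^n/(2^n - 1) tends to 1.
for n in (1, 5, 10, 30):
    ratio = (1 / (2**n - 1)) / (1 / 2**n)
    print(n, ratio)
```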

Example. Test the series $\sum_{n=1}^\infty\frac{1}{\sqrt{n^2+1}}$ for convergence or divergence.

Solution. Let $a_n=\frac{1}{\sqrt{n^2+1}}$ and $b_n=\frac{1}{n}$. Then \begin{align*}\lim_{n\to\infty}\frac{a_n}{b_n}&=\lim_{n\to\infty}\frac{n}{\sqrt{n^2+1}}\\&=\lim_{n\to\infty}\frac{1}{\sqrt{1+\frac{1}{n^2}}}\\&=1>0\end{align*} Since $\sum_{n=1}^\infty\frac{1}{n}$ diverges, so does $\sum_{n=1}^\infty\frac{1}{\sqrt{n^2+1}}$ by the Limit Comparison Test.

Cauchy-Maclaurin Integral Test

Theorem (Cauchy-Maclaurin Integral Test)

Let $f(x)$ be a continuous, positive, decreasing function on $[1,\infty)$ in which $f(n)=a_n$. Then $\sum_{n=1}^\infty a_n$ converges if $\int_1^\infty f(x)dx$ is finite and diverges if the integral is infinite.

Proof. Using the left-end point method as seen in Figure 1

Figure 1. Integral Test

we see that $$a_1+a_2+\cdots+a_{n-1}\geq \int_1^nf(x)dx$$ This means that if $\int_1^{\infty}f(x)dx$ is infinite, $\sum_{n=1}^\infty a_n$ diverges. Now using the right-end point method as seen in Figure 2

Figure 2. Integral Test

we see that $$a_2+a_3+\cdots+a_n\leq\int_1^n f(x)dx$$ This means that if $\int_1^\infty f(x)dx$ is finite, then $\sum_{n=1}^\infty a_n$ converges. This completes the proof.

Example (The $p$-series).
For what values of $p$ is the series $\sum_{n=1}^\infty\frac{1}{n^p}$ convergent?

Solution. If $p<0$ then $\lim_{n\to\infty}\frac{1}{n^p}=\infty$. If $p=0$ then $\lim_{n\to\infty}\frac{1}{n^p}=1$. In either case, $\lim_{n\to\infty}\frac{1}{n^p}\ne 0$, so the series diverges. If $p>0$ then the function $f(x)=\frac{1}{x^p}$ is continuous, positive and decreasing on $[1,\infty)$, and
$$\int_1^\infty\frac{1}{x^p}dx=\begin{cases}\left.\frac{x^{-p+1}}{-p+1}\right|_1^\infty & {\rm if}\ p\ne 1,\\ \left.\ln x\right|_1^\infty & {\rm if}\ p=1.\end{cases}$$
This integral is finite if and only if $p>1$. Therefore the series converges if $p>1$ and diverges if $p\leq 1$.
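The dichotomy is easy to see numerically: for $p=2$ the partial sums stay bounded, while for $p=1$ they keep growing roughly like $\ln N$. A minimal sketch:

```python
import math

def partial_sum(p, N):
    return sum(1 / n**p for n in range(1, N + 1))

# p = 2: bounded partial sums (the sum is pi^2/6 ≈ 1.645)
s2 = partial_sum(2, 100_000)
# p = 1: partial sums grow without bound, roughly like ln N
s1 = partial_sum(1, 100_000)
print(s2, s1, math.log(100_000))
```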

Example. Test the series $\sum_{n=1}^\infty\frac{1}{n^2+1}$ for convergence or divergence.

Solution. $f(x)=\frac{1}{x^2+1}$ is continuous, positive and decreasing on $[1,\infty)$. \begin{align*}\int_1^\infty\frac{1}{x^2+1}dx&=\left.\arctan x\right|_1^\infty\\&=\frac{\pi}{2}-\frac{\pi}{4}\\&=\frac{\pi}{4}\end{align*} Therefore, by the Integral Test the series converges.

Example. Determine whether $\sum_{n=1}^\infty\frac{\ln n}{n}$ converges or diverges.

Solution. $f(x)=\frac{\ln x}{x}$ is continuous, positive and decreasing on $[3,\infty)$. (One can easily check $f(x)$ is decreasing on $(e,\infty)$ by its derivative $f'(x)$.) \begin{align*}\int_3^\infty\frac{\ln x}{x}dx&=\frac{1}{2}\left.(\ln x)^2\right|_3^\infty\\&=\infty\end{align*} Therefore, $\sum_{n=1}^\infty \frac{\ln n}{n}$ diverges.

Theorem (Remainder Estimate for the Integral Test)
If $\sum_{n=1}^\infty a_n$ converges by the Integral Test and $R_n=s-s_n$, where $s$ is the sum of the series and $s_n$ its $n$-th partial sum, then
\begin{equation}\label{eq:remest}\int_{n+1}^\infty f(x)dx\leq R_n\leq\int_n^\infty f(x)dx\end{equation}

Proof. Using the left-end point method we obtain $$R_n=a_{n+1}+a_{n+2}+\cdots\geq\int_{n+1}^\infty f(x)dx$$ as seen in Figure 3.

Figure 3. Remainder Estimate

Now using the right-end point method we obtain $$R_n=a_{n+1}+a_{n+2}+\cdots\leq\int_n^\infty f(x)dx$$ as seen in Figure 4.

Figure 4. Remainder Estimate

This proves \eqref{eq:remest}.


Example.

  1. Approximate the sum of the series $\sum_{n=1}^\infty\frac{1}{n^3}$ by using the sum of the first 10 terms. Estimate the error involved in this approximation.
  2. How many terms are required to ensure that the sum is accurate to within $0.0005$?

Solution. First we calculate $$\int_n^\infty\frac{1}{x^3}dx=\frac{1}{2n^2}$$

  1. $s_{10}=\frac{1}{1^3}+\frac{1}{2^3}+\cdots+\frac{1}{10^3}\approx 1.197532$. By the remainder estimate \eqref{eq:remest} $$R_{10}\leq\int_{10}^\infty\frac{1}{x^3}dx=\frac{1}{200}=0.005$$ So the size of the error is at most 0.005.
  2. $R_n\leq\int_n^\infty\frac{1}{x^3}dx=\frac{1}{2n^2}$.  Suppose $\frac{1}{2n^2}<0.0005$. Then we find $n>\sqrt{1000}\approx 31.6$. This means we need 32 terms to guarantee accuracy to within 0.0005.
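These bounds can be verified numerically; here a long partial sum stands in for the true sum so that the actual remainder $R_{10}$ can be compared with the integral bounds (a sketch, not a proof):

```python
# Check the remainder estimate for sum 1/n^3 with n = 10 against a
# high-precision partial sum standing in for the true sum.
s10 = sum(1 / n**3 for n in range(1, 11))
s_ref = sum(1 / n**3 for n in range(1, 1_000_001))  # ≈ true sum
R10 = s_ref - s10
lower = 1 / (2 * 11**2)   # integral of x^-3 from 11 to infinity
upper = 1 / (2 * 10**2)   # integral of x^-3 from 10 to infinity = 0.005
print(s10, R10, lower, upper)
```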

Corollary. \begin{equation}\label{eq:sumest}s_n+\int_{n+1}^\infty f(x)dx\leq s\leq s_n+\int_n^\infty f(x)dx\end{equation}

Proof. Add $s_n$ to each side of the inequalities in \eqref{eq:remest}.

Example. Use the inequality \eqref{eq:sumest} with $n=10$ to estimate the sum of the series $\sum_{n=1}^\infty\frac{1}{n^3}$.

Solution. Using \eqref{eq:sumest} for $n=10$ we have $$s_{10}+\int_{11}^\infty\frac{1}{x^3}dx\leq s\leq s_{10}+\int_{10}^\infty\frac{1}{x^3}dx$$ i.e. $$s_{10}+\frac{1}{2(11)^2}\leq s\leq s_{10}+\frac{1}{2(10)^2}$$ Hence we get $$1.201664\leq s\leq 1.202532$$ We can approximate $s$ by taking the midpoint of this interval (i.e. the average of the endpoints), which gives $s\approx 1.2021$. The error is then at most half the length of the interval, i.e. the error is smaller than 0.0005. Recall that we had to use 32 terms to make the error smaller than 0.0005 in the previous example, but here we needed only 10 terms. So \eqref{eq:sumest} gives a much better estimate than $s_n$ alone.
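The numbers in this computation are easy to reproduce; the following sketch recomputes the interval, the midpoint estimate, and the half-width error bound:

```python
# Reproduce the improved estimate for sum 1/n^3 using the two-sided bound
# s_10 + 1/(2*11^2) <= s <= s_10 + 1/(2*10^2).
s10 = sum(1 / n**3 for n in range(1, 11))
lo = s10 + 1 / (2 * 11**2)
hi = s10 + 1 / (2 * 10**2)
estimate = (lo + hi) / 2       # midpoint of the interval
half_width = (hi - lo) / 2     # bound on the error of the midpoint
print(round(lo, 6), round(hi, 6), round(estimate, 4), half_width)
```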

Infinite Series

Definition. Let $a_1,a_2,\cdots,a_n,\cdots$ be any sequence of quantities. Then the symbol
\begin{equation}\label{eq:series}\sum_{n=1}^\infty a_n=a_1+a_2+\cdots+a_n+\cdots\end{equation}
is called an infinite series. Let
$$s_n=\sum_{k=1}^n a_k=a_1+a_2+\cdots+a_n.$$
The number $s_n$ is called the $n$-th partial sum of the series \eqref{eq:series}.

Definition. An infinite series $\sum_{n=1}^\infty a_n$ is said to converge if the sequence of partial sums $\{s_n\}$ converges, i.e. there is a number $s$ such that for any $\epsilon>0$ there exists a positive integer $N$ such that
$$|s_n-s|<\epsilon\ \mbox{for all}\ n\geq N.$$ In this case we write $\sum_{n=1}^\infty a_n=s$. A series which does not converge is said to diverge.

Remark. It should be noted that there is no unique way to define the sum of an infinite series. While the definition we use is the conventional one, there are other ways to define the sum of an infinite series, and some series that diverge under the conventional definition converge under a different one. Although it may seem outrageous, it can be shown in such a generalized sense that $1+2+3+\cdots=-\frac{1}{12}$. This was first derived by the genius Indian mathematician Srinivasa Ramanujan. If you have a Netflix account, you can watch a biographical movie about Ramanujan, The Man Who Knew Infinity, which is based on Robert Kanigel’s biography The Man Who Knew Infinity: A Life of the Genius Ramanujan. I find divergent series fascinating. In case you are interested, I wrote about divergent series in blog articles here and here.

Proposition. If $\sum_{n=1}^\infty a_n$ converges, then $\lim_{n\to\infty}a_n=0.$

Note that the converse of the proposition is not necessarily true. See the example on harmonic series below. The proposition, more precisely its contrapositive

If $\lim_{n\to\infty}a_n\ne 0$, then $\sum_{n=1}^\infty a_n$ diverges.

can be used as a divergence test for series. For example, the series $\sum_{n=1}^\infty\frac{n}{n+1}$ diverges because $\lim_{n\to\infty}\frac{n}{n+1}=1\ne 0$.

Theorem (Cauchy’s Criterion for the Convergence of a Sequence).
A necessary and sufficient condition for the convergence of a sequence $\{a_n\}$ is that for any $\epsilon>0$ there exists a positive integer $N$ such that
$$|a_n-a_m|<\epsilon\ \mbox{for all}\ n,m\geq N.$$

Corollary (Cauchy’s Criterion for the Convergence of a Series).
A necessary and sufficient condition for the convergence of a series $\sum_{n=1}^\infty a_n$ is that for any $\epsilon>0$ there exists a positive integer $N$ such that
$$|s_n-s_m|<\epsilon\ \mbox{for all}\ n,m\geq N.$$

Example (The Geometric Series).
The geometric sequence, starting with $a$ and ratio $r$, is given by
$$a, ar,ar^2,\cdots,ar^{n-1},\cdots.$$
The $n$th partial sum is given by
$$s_n=a+ar+ar^2+\cdots+ar^{n-1}=\frac{a(1-r^n)}{1-r},\quad r\ne 1.$$
Taking the limit as $n\to\infty$,
$$\lim_{n\to\infty}s_n=\frac{a}{1-r}\ \mbox{for}\ -1<r<1.$$
Hence the infinite geometric series converges for $-1<r<1$ and is given by
$$\sum_{n=1}^\infty ar^{n-1}=\frac{a}{1-r}.$$
On the other hand, if $r\leq -1$ or $r\geq 1$ then the infinite series diverges.

Example (The Harmonic Series).
Consider the harmonic series
$$\sum_{n=1}^\infty\frac{1}{n}=1+\frac{1}{2}+\frac{1}{3}+\frac{1}{4}+\cdots.$$
Group the terms as
$$1+\frac{1}{2}+\left(\frac{1}{3}+\frac{1}{4}\right)+\left(\frac{1}{5}+\frac{1}{6}+\frac{1}{7}+\frac{1}{8}\right)+\left(\frac{1}{9}+\cdots+\frac{1}{16}\right)+\cdots.$$
Then each pair of parentheses encloses $p$ terms of the form
$$\frac{1}{p+1}+\frac{1}{p+2}+\cdots+\frac{1}{2p}>\frac{p}{2p}=\frac{1}{2}.$$
Forming partial sums by adding the parenthetical groups one by one, we obtain
$$s_1=1, s_2=1+\frac{1}{2}, s_4>1+\frac{2}{2}, s_8>1+\frac{3}{2}, s_{16}>1+\frac{4}{2},\cdots, s_{2^n}>1+\frac{n}{2},\cdots.$$
This shows that $\lim_{n\to\infty}s_{2^n}=\infty$ and so $\{s_n\}$ diverges. Therefore, the harmonic series diverges.
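The grouping bound $s_{2^n}>1+\frac{n}{2}$ can be checked numerically for the first several values of $n$; a minimal sketch:

```python
# Verify the grouping bound s_{2^n} > 1 + n/2 for the harmonic series
# (equality holds at n = 1, so start at n = 2).
def s(N):
    return sum(1 / k for k in range(1, N + 1))

for n in range(2, 15):
    assert s(2**n) > 1 + n / 2
print(s(2**14))
```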

Example. The following series are examples of what are called telescoping series. #2 is left as an exercise.

  1. Show that
    $$\sum_{n=1}^\infty\frac{1}{(2n-1)(2n+1)}=\frac{1}{2}$$ Solution. \begin{align*}s_n&=\sum_{k=1}^n\frac{1}{(2k-1)(2k+1)}\\&=\frac{1}{2}\sum_{k=1}^n\left(\frac{1}{2k-1}-\frac{1}{2k+1}\right)\\&=\frac{1}{2}\left(1-\frac{1}{2n+1}\right)\\&=\frac{n}{2n+1}\end{align*} and $\lim_{n\to\infty}s_n=\frac{1}{2}$.
  2. Show that $$\sum_{n=1}^\infty\frac{1}{n(n+1)}=1$$
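The closed form $s_n=\frac{n}{2n+1}$ found in part 1 is easy to confirm numerically:

```python
# Partial sums of the telescoping series 1/((2k-1)(2k+1)) match n/(2n+1).
def s(n):
    return sum(1 / ((2 * k - 1) * (2 * k + 1)) for k in range(1, n + 1))

for n in (1, 10, 1000):
    print(n, s(n), n / (2 * n + 1))  # the two columns agree; both tend to 1/2
```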


Definition. A succession of real numbers $$a_1,a_2,\cdots,a_n,\cdots$$ in a definite order is called a sequence. $a_n$ is called the $n$-th term or the general term. The sequence $\{a_1,a_2,\dots,a_n,\cdots\}$ is denoted by $\{a_n\}$ or $\{a_n\}_{n=1}^\infty$.


Example.

  1. The set of natural numbers $1,2,3,4,\cdots,n,\cdots$
  2. $1,-2,3,-4,\cdots,(-1)^{n-1}n,\cdots$
  3. $\frac{1}{2},-\frac{1}{4},\frac{1}{8},-\frac{1}{16},\cdots,(-1)^{n-1}\frac{1}{2^n},\cdots$
  4. $0,1,0,1,\cdots,\frac{1}{2}[1+(-1)^n],\cdots$
  5. $2,3,5,7,11,\cdots,p_n,\cdots$

It is not essential that the general term of a sequence be given by some simple formula, as is the case in the first four examples above. The sequence in 5 represents the succession of prime numbers, where $p_n$ stands for the $n$-th prime number. No simple formula is available for the determination of $p_n$.

The following is the precise definition of the limit of a sequence, due to Augustin-Louis Cauchy.

Definition. A sequence $\{a_n\}$ has a limit $L$ and we write $\lim_{n\to\infty}a_n=L$ or $a_n\to L$ as $n\to\infty$ if for any $\epsilon>0$ there exists a positive integer $N$ such that $$|a_n-L|<\epsilon\ \mbox{for all}\ n\geq N.$$


Example.

  1. Show that $\lim_{n\to\infty}\frac{1}{n}=0$
  2. Show that $\lim_{n\to\infty}\frac{1}{10^n}=0$
  3. Let $\{a_n\}$ be a sequence defined by $$a_1=0.3, a_2=0.33, a_3=0.333,\cdots,$$ show that $\lim_{n\to\infty}a_n=\frac{1}{3}$

Proof. I will prove 1; 2 and 3 are left as exercises. Let $\epsilon>0$ be given. For $|a_n-0|=\frac{1}{n}<\epsilon$ it suffices that $n>\frac{1}{\epsilon}$. Choose a positive integer $N>\frac{1}{\epsilon}$. Then $|a_n-0|=\frac{1}{n}\leq\frac{1}{N}<\epsilon$ for all $n\geq N$.

The following limit laws allow us to break a complicated limit into simpler ones.

Theorem. Let $\lim_{n\to\infty}a_n=L$ and $\lim_{n\to\infty}b_n=M$. Then

  1. $\lim_{n\to\infty}(a_n\pm b_n)=L\pm M$
  2. $\lim_{n\to\infty}ca_n=cL$ where $c$ is a constant.
  3. $\lim_{n\to\infty}a_nb_n=LM$
  4. $\lim_{n\to\infty}\frac{a_n}{b_n}=\frac{L}{M}$ provided $M\ne 0$.

Example. Find $\lim_{n\to\infty}\frac{n}{n+1}$.

Solution. \begin{align*}\lim_{n\to\infty}\frac{n}{n+1}&=\lim_{n\to\infty}\frac{1}{1+\frac{1}{n}}=1\end{align*} since $\lim_{n\to\infty}\frac{1}{n}=0$.

The following theorem is also an important tool for calculating limits of certain sequences.

Theorem (Squeeze Theorem). If $a_n\leq b_n\leq c_n$ for $n\geq n_0$ and $\lim_{n\to\infty}a_n=\lim_{n\to\infty}c_n=L$ then $\lim_{n\to\infty}b_n=L$.

Corollary. If $\lim_{n\to\infty}|a_n|=0$ then $\lim_{n\to\infty}a_n=0$.

Proof. It follows from the inequality $-|a_n|\leq a_n\leq |a_n|$ for all $n$ and the Squeeze Theorem.

Example. Use the Squeeze Theorem to show $$\lim_{n\to\infty}\frac{n!}{n^n}=0$$

Solution. It follows from $$0\leq\frac{n!}{n^n}=\frac{1\cdot 2\cdot 3\cdots n}{n\cdot n\cdot n\cdots n}\leq\frac{1}{n}$$ for all $n$.
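The squeeze $0\leq\frac{n!}{n^n}\leq\frac{1}{n}$ can be observed numerically for a few values of $n$:

```python
import math

# The squeeze 0 <= n!/n^n <= 1/n in action: the terms collapse to 0
# much faster than 1/n does.
for n in (5, 20, 60):
    a_n = math.factorial(n) / n**n
    assert 0 <= a_n <= 1 / n
    print(n, a_n)
```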

The following theorem enables you to use a cool formula you learned in Calculus II, L’Hôpital’s rule!

Theorem. If $\lim_{x\to\infty}f(x)=L$ and $f(n)=a_n$, then $\lim_{n\to\infty}a_n=L$.

Example. Calculate $\lim_{n\to\infty}\frac{\ln n}{n}$.

Solution. Let $f(x)=\frac{\ln x}{x}$. Then $\lim_{x\to\infty}f(x)$ is an indeterminate form of type $\frac{\infty}{\infty}$. So by L’Hôpital’s rule \begin{align*}\lim_{x\to\infty}\frac{\ln x}{x}&=\lim_{x\to\infty}\frac{(\ln x)'}{x'}\\&=\lim_{x\to\infty}\frac{1}{x}\\&=0\end{align*} Hence by the Theorem above $\lim_{n\to\infty}\frac{\ln n}{n}=0$.

Example. Calculate $\lim_{n\to\infty}\root n\of{n}$.

Solution. Let $f(x)=x^{\frac{1}{x}}$. Then $\lim_{x\to\infty}f(x)$ is an indeterminate form of type $\infty^0$. As you learned in Calculus II, you will have to convert the limit into an indeterminate form of type $\frac{\infty}{\infty}$ or type $\frac{0}{0}$ so that you can apply L’Hôpital’s rule to evaluate the limit. Let $y=x^{\frac{1}{x}}$. Then $\ln y=\frac{\ln x}{x}$. As we calculated in the previous example, $\lim_{x\to\infty}\ln y=0$. Since $\ln y$ is continuous on $(0,\infty)$, $$\lim_{x\to\infty}\ln y=\ln(\lim_{x\to\infty} y)$$ Hence, $$\lim_{x\to\infty}x^{\frac{1}{x}}=e^0=1$$ i.e. $\lim_{n\to\infty}\root n\of{n}=1$.
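The slow approach of $\root n\of{n}$ to $1$ is visible numerically:

```python
# n^(1/n) creeps down toward 1 as n grows.
values = {n: n ** (1 / n) for n in (10, 1000, 10**6)}
print(values)
```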

Theorem. $\lim_{n\to\infty}\root n\of{a}=1$ for $a>0$.

Example. $\lim_{n\to\infty}\frac{1}{\root n\of{2}}=1$.

Definition. A sequence $\{a_n\}$ is said to diverge if it fails to converge. Divergent sequences include sequences that tend to infinity or negative infinity, for example $1,2,3,\cdots,n,\cdots$, and sequences that oscillate, such as $1,-1,1,-1,\cdots$.

Definition. A sequence $\{a_n\}$ is said to be bounded if there exists $M>0$ such that $|a_n|<M$ for every $n$.

Theorem. A convergent sequence is bounded but the converse need not be true.

Definition. A sequence $\{a_n\}$ is said to be monotone if it satisfies either $$a_n\leq a_{n+1}\ \mbox{for all}\ n$$ or $$a_n\geq a_{n+1}\ \mbox{for all}\ n$$

Equivalently, one can show that a sequence $\{a_n\}$ is monotone increasing by checking whether it satisfies $$\frac{a_{n+1}}{a_n}\geq 1\ \mbox{for all}\ n$$ (when the terms are positive) or $$a_{n+1}-a_n\geq 0\ \mbox{for all}\ n$$

The following theorem is called the Monotone Sequence Theorem.

Theorem. A monotone sequence which is bounded is convergent.


  1. Show that the sequence $$\frac{1}{2},\frac{1}{3}+\frac{1}{4},\frac{1}{4}+\frac{1}{5}+\frac{1}{6},\cdots,\frac{1}{n+1}+\frac{1}{n+2}+\cdots+\frac{1}{2n},\cdots$$ is convergent.
  2. Show that the sequence $$1,1+\frac{1}{2},1+\frac{1}{2}+\frac{1}{4},\cdots,1+\frac{1}{2}+\frac{1}{4}+\cdots+\frac{1}{2^{n-1}},\cdots$$ is convergent.

Solution. 2 is left as an exercise. $a_{n+1}-a_n=\frac{1}{2n+1}+\frac{1}{2n+2}-\frac{1}{n+1}=\frac{1}{(2n+1)(2n+2)}>0$ for all $n$. So it is monotone increasing. $$a_n=\frac{1}{n+1}+\cdots+\frac{1}{n+n}\leq\frac{1}{n}+\cdots+\frac{1}{n}=\frac{n}{n}=1$$ for all $n$. So it is bounded. Therefore, it is convergent by the Monotone Sequence Theorem.
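A short numerical check of both properties in part 1 (monotone increasing and bounded above by 1):

```python
# a_n = 1/(n+1) + ... + 1/(2n): monotone increasing and bounded by 1.
def a(n):
    return sum(1 / k for k in range(n + 1, 2 * n + 1))

prev = a(1)
for n in range(2, 200):
    cur = a(n)
    assert prev < cur <= 1   # strictly increasing and bounded
    prev = cur
print(a(199))  # the limit is ln 2 ≈ 0.693, though that is not needed here
```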

Why Can’t Speeds Exceed $c$?

This is a guest post by Dr. Lawrence R. Mead. Dr. Mead is a Professor Emeritus of Physics at the University of Southern Mississippi.

It is often stated in elementary books, and repeated by many, that the reason an object with mass cannot reach or exceed the speed of light in vacuum is that its “mass becomes infinite”, or that “time stops”, or even that “the object has zero size”. There are correct explanations of why no matter, energy, or signal of any kind can exceed light speed, and these reasons have little to do directly with mass changes, time dilation, or Lorentz contraction.

Reason Number One

Consider a signal of any kind (mass, energy or just information) which travels at speed $u=\alpha c$ beginning at point $x_A$ at time $t=0$ and arriving at position $x_B$ at later time $t>0$. From elementary kinematics,
$$t=(x_B -x_A)/u = {\Delta x \over \alpha c}.$$
Now suppose the signal travels at a speed exceeding $c$, that is $\alpha > 1$. Let us calculate the elapsed time as measured by a frame going by at speed $V<c$. According to the Lorentz transformation,
\begin{equation}\label{eqno1}t’ = \gamma (t-{Vx \over c^2}),\end{equation} where $\gamma=\frac{1}{\sqrt{1-\frac{V^2}{c^2}}}$.
There are two events: the signal leaves $x=x_A$ at $t=0$, and the signal arrives at $x=x_B$ at time $t=\Delta t$. According to \eqref{eqno1}, these events in the moving frame happen at times,
$$t’_A=\gamma ( 0 -Vx_A/c^2)$$ and
$$t’_B=\gamma (\Delta t - Vx_B/c^2).$$
Thus, the interval between events in the moving frame is, \begin{equation}\begin{aligned}\Delta t’ &= t’_B-t’_A\\
&=\gamma \Delta t -\gamma \frac{V}{c^2}(x_B-x_A)\\
&=\gamma \Delta t ( 1-\alpha V/ c).\end{aligned}\label{eqno2}\end{equation}
Now suppose that $\alpha V/c > 1$, which implies that,
$$ c > V > c/\alpha .$$ Then for moving frames within that range of speeds it follows from \eqref{eqno2} that,
$$\Delta t’= t’_B-t’_A <0,$$ meaning physically that the signal arrived before it was sent! This is a logical paradox which is impossible on physical grounds; no one will argue that in any frame a person can be shot before the gun is fired, or you obtain the knowledge of the outcome of a horse race before the race has begun.
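A small numeric sketch makes the sign flip concrete. The specific values (units with $c=1$, a hypothetical signal at $u=2c$ crossing $\Delta x=1$) are made up purely for illustration:

```python
# Units with c = 1. A hypothetical signal at u = alpha*c with alpha = 2
# crosses dx = 1, so dt = dx/(alpha*c) = 0.5. For frames with
# c/alpha < V < c, the Lorentz-transformed interval dt' goes negative.
c, alpha, dx = 1.0, 2.0, 1.0
dt = dx / (alpha * c)

def dt_prime(V):
    gamma = 1.0 / (1.0 - V**2 / c**2) ** 0.5
    return gamma * (dt - V * dx / c**2)

print(dt_prime(0.4))  # V < c/alpha = 0.5: interval still positive
print(dt_prime(0.8))  # c/alpha < V < c: negative, arrival before departure
```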

Well now what if the two events at $x_A$ and $x_B$ are not causally connected but one (say at $x_B$ for definiteness) simply happened after the other? How does the above argument change? How does the above math “know” that there is or is not a causal connection between the events? Everything goes the same up to the second line of equation \eqref{eqno2}:
\begin{equation}\label{eqno3} \Delta t’ = \gamma \{ \Delta t – V(x_B-x_A)/c^2 \}. \end{equation}
Can there be a moving frame of speed $V<c$ for which the event at $x_B$ (the later one in S) happens before the event at $x_A$ (the earlier one in S)? If so, $\Delta t’ < 0$; from \eqref{eqno3} then we find,
$$\Delta t – V(x_B-x_A)/c^2 < 0, $$ or solving for $V$,
$$ c > V > c{c\over \Delta x/ \Delta t}.$$ In order for $V$ to be less
than $c$, it must therefore be that, ${c\over \Delta x /\Delta t} < 1$, or
$${\Delta x \over \Delta t} > c.$$ This is possible for sufficiently large $\Delta x$ and/or sufficiently small $\Delta t$ because the ratio ${\Delta x \over \Delta t}$ is not the velocity of any signal, though it has the units of speed.

What is the Speed of “Light” Anyway?

Note that the Lorentz transformation contains the speed $c$ in it. What is this speed? Without referencing Maxwell’s equations of Electromagnetism, one does not know that $c$ is in fact the speed of light itself. But the above analysis shows – without reference to Maxwell – that the speed $c$ cannot be exceeded. And what is the speed talked about in the previous discussion? Well, it is the maximum speed at which one event can influence another with given (fixed) separation – thus, $c$ above isn’t really the speed of light at all; rather it is the speed of causality!

Reason Number Two

Imagine, for example, a constant force $F$ acting on a particle of (rest) mass $m$. Newton’s second Law in its relativistic form gives,
\begin{equation}\begin{aligned} F &= {dp \over dt} \\
&= {d \over dt} \, mv\gamma \\
&= m \gamma^3 \, \dot v, \end{aligned}\label{eqno4}\end{equation}
where we have assumed straight line motion. This is an autonomous differential equation whose solution, assuming the object is initially at rest, is,
$$ v(t)=at/(1+a^2t^2/c^2)^{1/2}, $$
where $a=F/m$. It is clear that as $t \to \infty$, $v(t)$
approaches $c$ and not infinity. Moreover, the differential impulse at arbitrary time $t$ on the particle can be found from taking the derivative of $v(t)$ given in the last equation,
\begin{equation}\label{eqno5}m\, dv = { F \, dt \over (1+a^2t^2/c^2)^{3/2}}. \end{equation}
From equation \eqref{eqno5}, it is clear that the incremental speed increase $dv$ over time $dt$ approaches zero as $t \to \infty$. Thus, from this point of view we see that while the force still does work, the increase in speed for a given interval of time and incremental amount of work, is less and less as time goes on which is why the speed never reaches $c$ over any finite time interval.
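The behavior of $v(t)=at/(1+a^2t^2/c^2)^{1/2}$ is easy to tabulate; in dimensionless units with $a=c=1$ (chosen here only for illustration), the speed climbs toward $c$ but never reaches it:

```python
# v(t) = a*t / sqrt(1 + (a*t/c)^2): monotone increasing, always below c.
def v(t, a=1.0, c=1.0):
    return a * t / (1.0 + (a * t / c) ** 2) ** 0.5

speeds = [v(t) for t in (1.0, 10.0, 1000.0)]
print(speeds)  # approaches c = 1 from below
```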

Reason Number Three

In the interval of time $dt$ as measured in some inertial frame observing a moving body, the clock attached to the body ticks off proper time
\begin{equation}\label{eqno6}d\tau = \sqrt{1-v^2/c^2}\, dt. \end{equation}
However, for light $v\equiv c$, and therefore $d\tau\equiv 0$. Light takes no proper time to go between two points however distantly separated in space. Thus, no object could travel faster than taking no time. This is the oft-repeated mantra of textbooks, and, while the mathematics verifies it, there are far more fundamental reasons, the best being causality as outlined above.