Let $\gamma(t)=(x(t),y(t))$ be a positively-oriented simple closed curve in $\mathbb{R}^2$ with period $a$. The area of its interior is given by

$$A(\mathrm{int}(\gamma))=\iint_{\mathrm{int}(\gamma)}dx\,dy=\frac{1}{2}\int_0^a(x\dot y-y\dot x)\,dt$$

The last line integral is obtained by applying Green's Theorem to the double integral. A rigid motion can be given by $M=T_b\circ R_{\theta}$, where $R_{\theta}=\begin{pmatrix}\cos\theta & -\sin\theta\\ \sin\theta & \cos\theta\end{pmatrix}$ is the rotation by an angle $\theta$ and $T_b$ is the translation by a constant vector $b=(b_1,b_2)$. So $\tilde\gamma=M(\gamma)$ is given by

$$\tilde\gamma=(\tilde x,\tilde y)=(x\cos\theta-y\sin\theta+b_1,x\sin\theta+y\cos\theta+b_2)$$

and

\begin{align*}
\tilde x\dot{\tilde y}-\tilde y\dot{\tilde x}&=x\dot y-y\dot x+b_1(\dot x\sin\theta+\dot y\cos\theta)-b_2(\dot x\cos\theta-\dot y\sin\theta)\\
&=x\dot y-y\dot x+b\cdot\begin{pmatrix}\sin\theta & \cos\theta\\ -\cos\theta & \sin\theta\end{pmatrix}\begin{pmatrix}\dot x\\ \dot y\end{pmatrix}\\
&=x\dot y-y\dot x+b\cdot R_{\theta-\frac{\pi}{2}}(\dot\gamma)
\end{align*}

\begin{align*}
\int_0^a b\cdot R_{\theta-\frac{\pi}{2}}(\dot\gamma(t))\,dt&=b\cdot R_{\theta-\frac{\pi}{2}}\left(\int_0^a\dot\gamma(t)\,dt\right)\\
&=0
\end{align*}

since $\int_0^a\dot\gamma(t)\,dt=\gamma(a)-\gamma(0)=0$ ($\gamma(t)$ is a simple closed curve with period $a$). Therefore, $A(\mathrm{int}(\gamma))=A(\mathrm{int}(M(\gamma)))$.
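The invariance can be checked numerically. The sketch below (plain Python; the ellipse, the angle $\theta$, and the translation vector $b$ are arbitrary choices for illustration) approximates $\frac{1}{2}\int_0^a(x\dot y-y\dot x)\,dt$ by a Riemann sum before and after a rigid motion:

```python
import math

def area(curve, dcurve, a, n=10000):
    """Approximate (1/2) * integral_0^a (x y' - y x') dt by a Riemann sum."""
    s = 0.0
    for i in range(n):
        t = a * i / n
        x, y = curve(t)
        dx, dy = dcurve(t)
        s += (x * dy - y * dx) * (a / n)
    return 0.5 * s

# Ellipse x = 2 cos t, y = sin t (period 2*pi); its exact area is 2*pi.
ellipse = lambda t: (2 * math.cos(t), math.sin(t))
d_ellipse = lambda t: (-2 * math.sin(t), math.cos(t))

# An arbitrary rigid motion: rotation by theta followed by translation by b.
theta, b1, b2 = 0.7, 3.0, -1.5

def moved(t):
    x, y = ellipse(t)
    return (x * math.cos(theta) - y * math.sin(theta) + b1,
            x * math.sin(theta) + y * math.cos(theta) + b2)

def d_moved(t):
    # the translation drops out upon differentiation
    dx, dy = d_ellipse(t)
    return (dx * math.cos(theta) - dy * math.sin(theta),
            dx * math.sin(theta) + dy * math.cos(theta))

A1 = area(ellipse, d_ellipse, 2 * math.pi)
A2 = area(moved, d_moved, 2 * math.pi)
```

Both values agree with the exact area $2\pi$ of the ellipse, up to floating-point error.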

# MAT 101: Zeros of Polynomials

As we have studied, once you find one rational zero of a polynomial, you can use long division or synthetic division to find the rest of its zeros. In this note, we study how to find a rational zero of a polynomial if there is one. Let $P(x)=a_nx^n+\cdots +a_1x+a_0$ and suppose that $P(x)$ has a rational zero $\frac{p}{q}$ (in lowest terms). By the Factor Theorem, $P(x)$ has a factor $x-\frac{p}{q}$, or equivalently a factor $qx-p$. That is, $P(x)=(qx-p)Q(x)$ where $Q(x)$ is a polynomial of degree $n-1$. Write $Q(x)=b_{n-1}x^{n-1}+\cdots+b_1x+b_0$. Comparing coefficients, $a_n=qb_{n-1}$ and $a_0=-pb_0$. This means that $q$ is a factor of the leading coefficient $a_n$ of $P(x)$ and $p$ is a factor of the constant term $a_0$ of $P(x)$. Hence we have the *Rational Zero Theorem*.

*Rational Zero Theorem*. Let $P(x)=a_nx^n+\cdots +a_1x+a_0$ be a polynomial with integer coefficients where $a_n\ne 0$ and $a_0\ne 0$. If $P(x)$ has a rational zero $\frac{p}{q}$ in lowest terms, then $q$ is a factor of $a_n$ and $p$ is a factor of $a_0$.

Here is the strategy to find a rational zero of a polynomial $P(x)$.

STEP 1. Use the Rational Zero Theorem to find all the candidates for a rational zero of $P(x)$.

STEP 2. Test each candidate from STEP 1 to see if it is a rational zero using the Factor Theorem. Once you find one, say $\frac{p}{q}$, stop and move to STEP 3.

STEP 3. Use long division or synthetic division (easier) to divide $P(x)$ by $x-\frac{p}{q}$ to find the rest of the zeros.

STEP 4. If necessary (in the event the quotient $Q(x)$ from STEP 3 still has degree greater than 2), repeat the process with $Q(x)$ from STEP 1.
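The strategy above can be sketched in code. This is a minimal illustration (the function name and representation are my own, not from the notes), using exact `Fraction` arithmetic so the zero test in STEP 2 is not subject to rounding:

```python
from fractions import Fraction

def rational_zeros(coeffs):
    """All rational zeros of a polynomial with integer coefficients.
    coeffs lists a_n, ..., a_1, a_0 from the highest degree down."""
    def divisors(n):
        n = abs(n)
        return [d for d in range(1, n + 1) if n % d == 0]

    a_n, a_0 = coeffs[0], coeffs[-1]
    # STEP 1: every candidate has the form +/- p/q with p | a_0 and q | a_n
    candidates = {Fraction(s * p, q)
                  for p in divisors(a_0) for q in divisors(a_n) for s in (1, -1)}

    # STEP 2: test each candidate by evaluating P (Horner's method)
    def P(x):
        v = Fraction(0)
        for c in coeffs:
            v = v * x + c
        return v

    return sorted(r for r in candidates if P(r) == 0)
```

For the first example below, `rational_zeros([2, 1, -13, 6])` returns the three rational zeros $-3,\frac12,2$.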

*Example*. Find all zeros of $P(x)=2x^3+x^2-13x+6$.

*Solution*. $a_n=2$ has factors $\pm 1$ and $\pm 2$. $a_0=6$ has factors $\pm 1,\pm 2,\pm 3,\pm 6$. Thus all the candidates for a rational zero are

$$\pm 1,\pm 2,\pm 3,\pm 6,\pm\frac{1}{2},\pm\frac{3}{2}$$

(the fractions $\pm\frac{2}{2}=\pm 1$ and $\pm\frac{6}{2}=\pm 3$ duplicate earlier entries).

Since $P(2)=0$, 2 is a rational zero. Using long division or synthetic division we find $Q(x)=2x^2+5x-3=(2x-1)(x+3)$. Therefore, all zeros of $P(x)$ are $-3,\frac{1}{2},2$.

*Example*. Find all zeros of $P(x)=x^4-5x^3-5x^2+23x+10$.

*Solution*. $a_n=1$ has factors $\pm 1$ and $a_0=10$ has factors $\pm 1, \pm 2, \pm 5, \pm10$. So all the candidates for a rational zero are

$$\pm 1, \pm 2, \pm 5, \pm10$$

Since $P(5)=0$, 5 is a rational zero. Using long division or synthetic division we find $Q(x)=x^3-5x-2$. We cannot factor this cubic polynomial readily, so we repeat the process. The leading coefficient 1 has factors $\pm 1$ and the constant term $-2$ has factors $\pm 1,\pm 2$, so all the candidates for a rational zero of $Q(x)$ are $\pm 1,\pm 2$. $Q(-2)=0$, so $-2$ is a rational zero of $Q(x)$ (and hence of $P(x)$ as well). Using one's favorite division we find the quotient $x^2-2x-1$, which has two real zeros $1\pm\sqrt{2}$. Therefore, all zeros of $P(x)$ are

$5, -2, 1\pm\sqrt{2}$.

It would be convenient if we could estimate how many positive and how many negative real zeros a polynomial has without actually factoring it. Here is a piece of machinery just for that.

**Descartes’ Rule of Signs**

Let $P(x)$ be a polynomial with real coefficients.

- The number of positive real zeros of $P(x)$ is either equal to the number of variations in sign in $P(x)$ or is less than that by an even number.
- The number of negative real zeros of $P(x)$ is either equal to the number of variations in sign in $P(-x)$ or is less than that by an even number.

*Example*. $P(x)=3x^6+4x^5+3x^3-x-3$ has one variation in sign so there is one positive real zero. $P(-x)=3x^6-4x^5-3x^3+x-3$ has three variations in sign so there can be either three negative zeros or one negative zero.
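Counting sign variations amounts to scanning the nonzero coefficients. The snippet below (a hypothetical helper, not part of the notes) reproduces the counts 1 and 3 for the example above:

```python
def sign_variations(coeffs):
    """Count sign changes in a coefficient list; zero coefficients are skipped."""
    signs = [1 if c > 0 else -1 for c in coeffs if c != 0]
    return sum(1 for a, b in zip(signs, signs[1:]) if a != b)

# P(x) = 3x^6 + 4x^5 + 3x^3 - x - 3, coefficients from the highest degree down
P = [3, 4, 0, 3, 0, -1, -3]
# P(-x): negate the coefficients of the odd-degree terms
P_neg = [c if (len(P) - 1 - i) % 2 == 0 else -c for i, c in enumerate(P)]
```

Here `sign_variations(P)` gives 1 and `sign_variations(P_neg)` gives 3, matching the example.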

**Upper and Lower Bounds for Real Zeros**

Let $P(x)$ be a polynomial with real coefficients.

- If we divide $P(x)$ by $x-a$ ($a>0$) using synthetic division and if the row that contains the quotient and remainder has no negative entry, then $a$ is an upper bound for the real zeros of $P(x)$.
- If we divide $P(x)$ by $x-b$ ($b<0$) using synthetic division and if the row that contains the quotient and remainder has entries that alternate in sign (alternately nonnegative and nonpositive, a 0 entry counting as either), then $b$ is a lower bound for the real zeros of $P(x)$.

*Example*. If we divide $P(x)=3x^6+4x^5+3x^3-x-3$ by $x-1$ (remember to include a $0$ for each missing term: the coefficients are $3, 4, 0, 3, 0, -1, -3$) then

$$1|\begin{array}{ccccccc}
3 & 4 & 0 & 3 & 0 & -1 & -3\\
& 3 & 7 & 7 & 10 & 10 & 9\\
\hline
3 & 7 & 7 & 10 & 10 & 9 & 6
\end{array}$$

Since the row that contains the quotient and remainder has no negative entries, 1 is an upper bound for the real zeros of $P(x)$. If we divide $P(x)$ by $x-(-2)$ then

$$-2|\begin{array}{ccccccc}
3 & 4 & 0 & 3 & 0 & -1 & -3\\
& -6 & 4 & -8 & 10 & -20 & 42\\
\hline
3 & -2 & 4 & -5 & 10 & -21 & 39
\end{array}$$

The entries of the row that contains the quotient and remainder alternate in sign, so $-2$ is a lower bound for the real zeros of $P(x)$. In fact, $P(x)$ does not have any integer zeros, but the upper and lower bounds help us graphically locate the real zeros of $P(x)$. They can also be used as initial estimates for *Newton's method*, a method that finds approximations to the real zeros of a polynomial. Figure 1 shows that there is one positive real zero and one negative real zero of $P(x)$.
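The two bound tests are easy to mechanize with synthetic division. The sketch below (helper names are my own) encodes the rules stated above:

```python
def synthetic_division(coeffs, c):
    """Bottom row of the synthetic division of P(x) by (x - c):
    the quotient coefficients followed by the remainder."""
    row = [coeffs[0]]
    for a in coeffs[1:]:
        row.append(a + c * row[-1])
    return row

def is_upper_bound(coeffs, a):
    # a > 0 and no negative entry in the bottom row
    return a > 0 and all(e >= 0 for e in synthetic_division(coeffs, a))

def is_lower_bound(coeffs, b):
    # b < 0 and bottom-row entries alternate in sign (zeros count as either)
    row = synthetic_division(coeffs, b)
    return b < 0 and (all(e * (-1) ** i >= 0 for i, e in enumerate(row))
                      or all(e * (-1) ** i <= 0 for i, e in enumerate(row)))

# P(x) = 3x^6 + 4x^5 + 3x^3 - x - 3
P = [3, 4, 0, 3, 0, -1, -3]
```

With these, `is_upper_bound(P, 1)` and `is_lower_bound(P, -2)` both hold, as in the worked tables.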

# MAT 101: Dividing Polynomials

Polynomials are nice in the sense that they behave like numbers. The *Division Algorithm* works for polynomials as well: given polynomials $P(x)$ and $D(x)\ne 0$, there exist unique polynomials $Q(x)$ and $R(x)$, with $R(x)=0$ or $\deg R(x)<\deg D(x)$, such that

$$P(x)=D(x)Q(x)+R(x)$$

or

$$\frac{P(x)}{D(x)}=Q(x)+\frac{R(x)}{D(x)}$$

$P(x)$, $D(x)$, $Q(x)$, and $R(x)$ are called, respectively, the dividend, divisor, quotient, and remainder. There are two ways to divide a polynomial by another polynomial. The first one is the familiar long division, and it works the same way as it does with numbers.

*Example*. Let $P(x)=8x^4+6x^2-3x+1$ and $D(x)=2x^2-x+2$. Find polynomials $Q(x)$ and $R(x)$ such that $P(x)=D(x)Q(x)+R(x)$.

*Solution*. Divide the leading term $8x^4$ by the leading term $2x^2$ to get $4x^2$; subtracting $4x^2(2x^2-x+2)=8x^4-4x^3+8x^2$ from $P(x)$ leaves $4x^3-2x^2-3x+1$. Divide $4x^3$ by $2x^2$ to get $2x$; subtracting $2x(2x^2-x+2)=4x^3-2x^2+4x$ leaves $-7x+1$, whose degree is less than that of $D(x)$, so we stop.

Hence $Q(x)=4x^2+2x$ and $R(x)=-7x+1$.

The other method is called *synthetic division*. This method cannot be used for arbitrary polynomial division; however, it works great when the divisor is a linear polynomial, and it is easier than long division. Synthetic division uses only the coefficients, without writing the variables, as shown in the following example.

*Example*. Using synthetic division divide $2x^3-7x^2+5$ by $x-3$.

*Solution*. The coefficients of $2x^3-7x^2+0x+5$ are $2, -7, 0, 5$ (note the $0$ for the missing $x$ term).

$$3|\begin{array}{cccc}
2 & -7 & 0 & 5\\
& 6 & -3 & -9\\
\hline
2 & -1 & -3 & -4
\end{array}$$

Hence we have $Q(x)=2x^2-x-3$ and $R=-4$.

If a polynomial $P(x)$ is divided by a linear polynomial $x-c$, by the Division Algorithm $P(x)$ can be written as

$$P(x)=(x-c)Q(x)+R$$

for some $Q(x)$ and constant $R$. Substituting $x=c$ gives $P(c)=(c-c)Q(c)+R=R$, and hence we obtain the *Remainder Theorem*.

*Remainder Theorem*. If a polynomial $P(x)$ is divided by $x-c$, then the remainder is $P(c)$.

*Example*. Let $P(x)=3x^5+5x^4-4x^3+7x+3$. Use the remainder theorem to find the remainder when $P(x)$ is divided by $x+2$.

*Solution*. $R=P(-2)=5$.
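Evaluating $P(-2)$ by Horner's method is the same computation as synthetic division, so it yields the remainder directly. A small sketch (the helper name is my own):

```python
def remainder(coeffs, c):
    """Evaluate P(c) by Horner's method; by the Remainder Theorem this is
    the remainder on dividing P(x) by x - c."""
    v = 0
    for a in coeffs:
        v = v * c + a
    return v

# P(x) = 3x^5 + 5x^4 - 4x^3 + 7x + 3  (note the 0 for the missing x^2 term)
R = remainder([3, 5, -4, 0, 7, 3], -2)
```

This gives `R == 5`, as in the solution.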

As a corollary of the remainder theorem we have

*Factor Theorem*. $c$ is a zero of $P(x)$ if and only if $x-c$ is a factor of $P(x)$.

*Example*. Let $P(x)=x^3-7x+6$. Show that $P(1)=0$ and use this information to factor $P(x)$ completely.

*Solution*. $P(1)=1-7+6=0$, so by the Factor Theorem $x-1$ is a factor of $P(x)$. Dividing $P(x)$ by $x-1$ (using long division or synthetic division) we find $Q(x)=x^2+x-6$ and $R=0$ (as expected). So,

\begin{align*}
P(x)&=(x-1)(x^2+x-6)\\
&=(x-1)(x-2)(x+3)
\end{align*}

*Example*. Find a polynomial of degree 4 that has zeros $-3$, 0, 1, and 5.

*Solution*. Such a polynomial would have $x+3$, $x$, $x-1$, and $x-5$ for its factors by the factor theorem. So the simplest one is

$$P(x)=(x+3)x(x-1)(x-5)=x^4-3x^3-13x^2+15x$$
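Expanding a product of linear factors back into coefficients can also be done mechanically. The sketch below (a hypothetical helper) rebuilds the polynomial of the example from its zeros:

```python
def poly_from_roots(roots):
    """Expand prod(x - r) into coefficients, highest degree first."""
    coeffs = [1]
    for r in roots:
        # multiply the current polynomial by (x - r):
        # shift for the x part, subtract r times the original
        coeffs = [a - r * b for a, b in zip(coeffs + [0], [0] + coeffs)]
    return coeffs
```

For the zeros $-3, 0, 1, 5$ this returns `[1, -3, -13, 15, 0]`, i.e. $x^4-3x^3-13x^2+15x$.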

# The Proof of the Chain Rule

In this note, we introduce two versions of the proof of the Chain Rule. The first one comes from [1]. Let $y=f(u)$ and $u=g(x)$ be differentiable functions. We claim that

$$\frac{dy}{dx}=f'(u)g'(x)$$

The finite difference $\frac{f(g(x+h))-f(g(x))}{h}$ can be written as $\frac{f(u+k)-f(u)}{h}$ where $k=g(x+h)-g(x)$. Define $\varphi(t)=\frac{f(u+t)-f(u)}{t}-f'(u)$ if $t\ne 0$. Multiplying by $t$ and rearranging terms, we obtain

\begin{equation}
\label{eq:chainpf}
f(u+t)-f(u)=t[\varphi(t)+f'(u)]
\end{equation}

Since $\lim_{t\to 0}\varphi(t)=0$, we define $\varphi(0)=0$; then \eqref{eq:chainpf} holds for all $t$. Now replace $t$ in \eqref{eq:chainpf} by $k$ and divide by $h$.

\begin{equation}
\label{eq:chainpf2}
\frac{f(u+k)-f(u)}{h}=\frac{k}{h}[\varphi(k)+f'(u)]
\end{equation}

\eqref{eq:chainpf2} is valid even if $k=0$. As $h\to 0$, $\frac{k}{h}\to g'(x)$ and $k\to 0$ (by the continuity of $g$), so $\varphi(k)\to 0$. Hence the RHS of \eqref{eq:chainpf2} approaches $f'(u)g'(x)$. This completes the proof.

Another version of the proof of the Chain Rule is from [2], given as a guided exercise (#99 on p. 559). Here we suppose that $y=f(u)$ is differentiable at $u_0=g(x_0)$ and $u=g(x)$ is differentiable at $x_0$. Then we claim that $y=f(g(x))$ is differentiable at $x=x_0$ and $$\left[\frac{dy}{dx}\right]_{x=x_0}=f'(u_0)g'(x_0)$$

Since $g'(x_0)$ exists, $\Delta u$ can be written as

$$\Delta u=g'(x_0)\Delta x+\rho(x)$$

where $\lim_{\Delta x\to 0}\frac{\rho(x)}{\Delta x}=0$. Similarly, if $\Delta u\ne 0$ (it could be 0), then $\Delta y$ can be written as

\begin{equation}
\label{eq:chainpf3}
\Delta y=f'(u_0)\Delta u+\sigma(u)
\end{equation}

where $\lim_{\Delta u\to 0}\frac{\sigma(u)}{\Delta u}=0$.

Substituting the expression for $\Delta u$,

\begin{align*}
\Delta y&=f'(u_0)[g'(x_0)\Delta x+\rho(x)]+\sigma(g(x))\\
&=f'(u_0)g'(x_0)\Delta x+f'(u_0)\rho(x)+\sigma(g(x))
\end{align*}

As $\Delta u\to 0$, $\Delta y\to 0$ and accordingly $\sigma(u)\to 0$. So one can define $\sigma(u)=0$ if $\Delta u=0$ (that is, one can define $\sigma(u_0)=\sigma(g(x_0))=0$). Then \eqref{eq:chainpf3} is still valid if $\Delta u=0$.

$$\frac{\sigma(g(x))}{\Delta x}=\left\{\begin{array}{ccc}
\frac{\sigma(g(x))}{\Delta u}\cdot\frac{\Delta u}{\Delta x} & \mbox{if} & \Delta u\ne 0\\
0 & \mbox{if} & \Delta u=0\end{array}\right.\to 0$$

as $\Delta x\to 0$. Therefore,

$$\frac{\Delta y}{\Delta x}=f'(u_0)g'(x_0)+f'(u_0)\frac{\rho(x)}{\Delta x}+\frac{\sigma(g(x))}{\Delta x}$$

approaches

$$\frac{dy}{dx}=f'(u_0)g'(x_0)$$

as $\Delta x\to 0$.

*References*:

[1] Tom M. Apostol, *Calculus, Volume I: One-Variable Calculus with an Introduction to Linear Algebra*, 2nd edition, John Wiley & Sons, Inc., 1967

[2] Jerrold Marsden and Alan Weinstein, *Calculus II*, Springer-Verlag, 1985

# MAT 101: One-to-One Functions and Inverse Functions

A function $y=f(x)$ is said to be *one-to-one* if it satisfies the property

$$f(x_1)=f(x_2) \Longrightarrow x_1=x_2$$

or equivalently

$$x_1\ne x_2 \Longrightarrow f(x_1)\ne f(x_2)$$

for all $x_1,x_2$ in the domain. In plain English, this says that no two numbers in the domain correspond to the same number in the range. Figure 1 is the graph of $f(x)=x^2$. It is not one-to-one.

For example, $-1\ne 1$ but $f(-1)=1=f(1)$.

Figure 2 is the graph of $f(x)=x^3$. It is one-to-one, as seen clearly from the graph. But let us pretend that we don't know the graph and want to prove that it is one-to-one following the definition. Here we go. Suppose that $f(x_1)=f(x_2)$. Then $x_1^3=x_2^3$, or $x_1^3-x_2^3=(x_1-x_2)(x_1^2+x_1x_2+x_2^2)=0$. Since $x_1^2+x_1x_2+x_2^2=\left(x_1+\frac{x_2}{2}\right)^2+\frac{3}{4}x_2^2$ vanishes only when $x_1=x_2=0$, in either case we conclude $x_1=x_2$, which completes the proof.

Why do we care about one-to-one functions? The reason is that if $y=f(x)$ is one-to-one, it has an *inverse function* $y=f^{-1}(x)$.

\begin{align*}
x&\stackrel{f}{\longrightarrow} y\\
x&\stackrel{f^{-1}}{\longleftarrow} y
\end{align*}

Given a one-to-one function $y=f(x)$, here is how to find its inverse function $y=f^{-1}(x)$.

STEP 1. Swap $x$ and $y$ in $y=f(x)$. The reason we are doing this is that $\mathrm{Dom}(f)=\mathrm{Range}(f^{-1})$ and $\mathrm{Dom}(f^{-1})=\mathrm{Range}(f)$.

STEP 2. Solve the resulting expression $x=f(y)$ for $y$. That is the inverse function $y=f^{-1}(x)$.

*Example*. Find the inverse function of $f(x)=\frac{2x+3}{x-1}$. (It is a one-to-one function.)

*Solution*. STEP 1. Let $y=\frac{2x+3}{x-1}$ and swap $x$ and $y$. Then we have

$$x=\frac{2y+3}{y-1}$$

STEP 2. Let us solve $x=\frac{2y+3}{y-1}$ for $y$. First multiply $x=\frac{2y+3}{y-1}$ by $y-1$. Then we have $x(y-1)=2y+3$ or $xy-x=2y+3$. Isolating the terms that contain $y$ in the LHS, we get $xy-2y=x+3$ or $(x-2)y=x+3$. Finally we find $y=\frac{x+3}{x-2}$. This is the inverse function.
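The computed inverse can be spot-checked numerically. The sketch below (sample points chosen arbitrarily, avoiding the excluded values $x=1$ for $f$ and $x=2$ for $f^{-1}$) verifies $f(f^{-1}(x))=x$ and $f^{-1}(f(x))=x$:

```python
def f(x):
    return (2 * x + 3) / (x - 1)

def f_inv(x):
    return (x + 3) / (x - 2)

# sample points, avoiding x = 1 (for f) and x = 2 (for f_inv)
samples = [-4.0, 0.0, 0.5, 3.0, 7.0]
checks = ([abs(f(f_inv(x)) - x) for x in samples]
          + [abs(f_inv(f(x)) - x) for x in samples])
```

Every entry of `checks` is zero up to floating-point error, confirming the two composition properties below.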

$y=f(x)$ and its inverse $y=f^{-1}(x)$ satisfy the following properties.

$$(f\circ f^{-1})(x)=x,\ (f^{-1}\circ f)(x)=x$$

The reason for these properties to hold is clear from the definition of an inverse function. We can check the properties using the above example. I will do $(f\circ f^{-1})(x)=x$ and leave the other for an exercise.

\begin{align*}
(f\circ f^{-1})(x)&=f(f^{-1}(x))\\
&=f\left(\frac{x+3}{x-2}\right)\\
&=\frac{2\left(\frac{x+3}{x-2}\right)+3}{\left(\frac{x+3}{x-2}\right)-1}\\
&=x
\end{align*}

The graph of $y=f(x)$ and the graph of its inverse $y=f^{-1}(x)$ enjoy a nice symmetry: they are symmetric about the line $y=x$. This symmetry helps us obtain the graph of $y=f^{-1}(x)$ when an explicit expression for $f^{-1}(x)$ is not available. You will see such a case later when you study the logarithmic functions. Figure 3 shows the symmetry with $y=x^2$ ($x\geq 0$) and its inverse $y=\sqrt{x}$.