Scalar Products

Let $V$ be a vector space. A scalar product is a map $\langle\ ,\ \rangle: V\times V\longrightarrow\mathbb{R}$ such that

SP 1. $\langle v,w\rangle=\langle w,v\rangle$

SP 2. $\langle u,v+w\rangle=\langle u,v\rangle+\langle u,w\rangle$

SP 3. If $x$ is a number, then
$$\langle xu,v\rangle=x\langle u,v\rangle=\langle u,xv\rangle$$

Additionally, we also assume the condition

SP 4. $\langle v,v\rangle>0$ if $v\ne O$

A scalar product with SP 4 is said to be positive definite.

Remark. If $v=O$, then $\langle v,w\rangle=0$ for any vector $w$ in $V$. This follows immediately from SP 3, since $\langle O,w\rangle=\langle 0\,O,w\rangle=0\langle O,w\rangle=0$.

Example. Let $V=\mathbb{R}^n$ and define
$$\langle X,Y\rangle=X\cdot Y.$$
Then $\langle\ ,\ \rangle$ is a positive definite scalar product.

Example. Let $V$ be the function space of all continuous real-valued functions on $[-\pi,\pi]$. For $f,g\in V$, we define
$$\langle f,g\rangle=\int_{-\pi}^{\pi}f(t)g(t)dt.$$
Then $\langle\ ,\ \rangle$ is a positive definite scalar product.
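In Sage, this scalar product can be computed symbolically. Here is a minimal sketch; the functions $f$ and $g$ below are chosen arbitrarily for illustration:

var('t')
f = sin(t); g = cos(2*t)        # arbitrary continuous functions on [-pi, pi]
integrate(f*g, t, -pi, pi)      # <f, g>; evaluates to 0
integrate(f*f, t, -pi, pi)      # <f, f>; evaluates to pi, positive as SP 4 requires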

Using a scalar product, we can introduce the notion of orthogonality of vectors. Two vectors $v,w$ are said to be orthogonal or perpendicular if $\langle v,w\rangle=0$.

Let $S$ be a subspace of $V$. Let $S^{\perp}=\{w\in V: \langle v,w\rangle=0\ \mbox{for all}\ v\in S\}$. Then $S^{\perp}$ is also a subspace of $V$. (Check for yourself.) It is called the orthogonal space of $S$.

Define the length or the norm of $v\in V$ by
$$||v||=\sqrt{\langle v,v\rangle}.$$
It follows from SP 3 that
$$||cv||=|c|\,||v||$$
for any number $c$.

The Projection of a Vector onto Another Vector

For vectors in $\mathbb{R}^2$ or $\mathbb{R}^3$, the vector projection of a vector $v$ onto another vector $w$ is
$$||v||\cos\theta\frac{w}{||w||}=\langle v,w\rangle\frac{w}{||w||^2}=\frac{\langle v,w\rangle}{\langle w,w\rangle}w,$$
where $\theta$ is the angle between $v$ and $w$, as seen in the figure above. The number $c=\frac{\langle v,w\rangle}{\langle w,w\rangle}$ is called the component of $v$ along $w$. Note that the vector projection of $v$ onto $w$
$$\frac{\langle v,w\rangle}{\langle w,w\rangle}w$$
can still be defined in any vector space with a scalar product.

Proposition. The vector $v-cw$ is perpendicular to $w$.

Proof. \begin{align*}
\langle v-cw,w\rangle&=\langle v,w\rangle-c\langle w,w\rangle\\
&=\langle v,w\rangle-\langle v,w\rangle\\
&=0.
\end{align*}
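The projection and the proposition can be checked numerically in Sage. A small sketch, with $v$ and $w$ chosen arbitrarily in $\mathbb{R}^3$:

v = vector(QQ, [2, 1, 3]); w = vector(QQ, [1, 0, 1])   # arbitrary vectors in R^3
c = v.dot_product(w) / w.dot_product(w)                 # component of v along w: 5/2
c*w                                                     # vector projection of v onto w: (5/2, 0, 5/2)
(v - c*w).dot_product(w)                                # 0, so v - cw is perpendicular to w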

Example. Let $V=\mathbb{R}^n$. Then the component of $X=(x_1,\cdots,x_n)$ along $E_i$ is
$$X\cdot E_i=x_i.$$

Example. Let $V$ be the space of continuous functions on $[-\pi,\pi]$. Let $f(x)=\sin kx$, where $k$ is a positive integer. Then
$||f||=\sqrt{\pi}$. The component of $g(x)$ along $f(x)$ is
$$\frac{\langle g,f\rangle}{\langle f,f\rangle}=\frac{1}{\pi}\int_{-\pi}^{\pi}g(x)\sin kxdx.$$
It is called the Fourier coefficient of $g$ along $f$.
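Since this Fourier coefficient is just a definite integral, Sage can compute it symbolically. A small sketch, with $g$ and $k$ chosen arbitrarily:

var('x')
k = 3                                        # any positive integer
g = x                                        # an arbitrary continuous function on [-pi, pi]
(1/pi)*integrate(g*sin(k*x), x, -pi, pi)     # Fourier coefficient of g along sin(kx); evaluates to 2/3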

The following two inequalities are well-known for vectors in $\mathbb{R}^n$. They still hold in any vector space with a positive definite scalar product.

Theorem [Schwarz Inequality] Let $V$ be a vector space with a positive definite scalar product. For any $v,w\in V$,
$$|\langle v,w\rangle|\leq ||v||\,||w||.$$

Theorem [Triangle Inequality] Let $V$ be a vector space with a positive definite scalar product. For any $v,w\in V$,
$$||v+w||\leq ||v||+||w||.$$
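Both inequalities are easy to spot-check in Sage for particular vectors (a sanity check, not a proof); the vectors below are arbitrary:

v = vector([1, -2, 3]); w = vector([4, 0, -1])        # arbitrary vectors in R^3
bool(abs(v.dot_product(w)) <= v.norm()*w.norm())      # True (Schwarz inequality)
bool((v + w).norm() <= v.norm() + w.norm())           # True (triangle inequality)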

Sage: Basic Matrix Operations

A note on using Sage: Sage is open-source mathematics software whose interface is a web browser (Firefox, in particular). You don’t have to install Sage on your computer to use it. You can access any Sage server, including the main Sage server. I am running a Sage server at sage.st.usm.edu. If you are a student at the University of Southern Mississippi, you are more than welcome to create an account at sage.st.usm.edu and use it.

Matrix Constructions

In Sage, the $2\times 3$ matrix
$$\begin{pmatrix}
1 & 1 & -2\\
-1 & 4 & -5
\end{pmatrix}$$ can be created as follows. Let us say we want to call the matrix $A$. Type the following command in a blank line of your Sage worksheet:

A=matrix([[1,1,-2],[-1,4,-5]])

If you are familiar with Maple, note that unlike in Maple, you will not see the matrix $A$ as output when you click on “evaluate”. To see your matrix, you need to type

A

in the next blank line and click on “evaluate” again:

[ 1  1 -2]
[-1  4 -5]

Scalar Multiplication

If you want to multiply the matrix $A$ by the number 5, type the command

5*A

and click on “evaluate”. The output is

[  5   5 -10]
[ -5  20 -25]

Matrix Addition

To perform addition of two matrices:
$$\begin{pmatrix}
1 & 1 & -2\\
-1 & 4 & -5
\end{pmatrix}+\begin{pmatrix}
2 & 1 & 5\\
1 & 3 & 2
\end{pmatrix}$$, first call the second matrix $B$:

B=matrix([[2,1,5],[1,3,2]])

and do

A+B

the output is

[ 3  2  3]
[ 0  7 -3]

The linear combination $3A+2B$ can be calculated by the command

3*A+2*B

and the output is

[  7   5   4]
[ -1  18 -11]

Transpose of a Matrix

To find the transpose of the matrix $A$ do

A.transpose()

and the output is

[ 1 -1]
[ 1  4]
[-2 -5]

Matrix Multiplication

An $m\times n$ matrix can be multiplied by a $p\times q$ matrix as long as $n=p$. The resulting product is an $m\times q$ matrix. Let $C=\begin{pmatrix}
3 & 4\\
-1 &2\\
2 &1
\end{pmatrix}$. The number of columns of $A$ and the number of rows of $C$ are both 3, so we can form the product $AC$. In Sage, this is done as:

A*C

The output is

[ -2   4]
[-17  -1]

Inverses

Let $F: V\longrightarrow W$ be a mapping. $F$ is said to be invertible if there exists a map $G: W\longrightarrow V$ such that
$$G\circ F=I_V,\ F\circ G=I_W,$$
where $I_V: V\longrightarrow V$ and $I_W: W\longrightarrow W$ are the identity maps on $V$ and $W$, respectively. The map $G$ is called the inverse of $F$ and is denoted by $F^{-1}$.

Theorem. A map $F: V\longrightarrow W$ has an inverse if and only if it is one-to-one (injective) and onto (surjective).

Remark. If a map $F: V\longrightarrow W$ has an inverse, it is unique.

Example. The inverse of the translation $T_u$ is the translation $T_{-u}$.

Example. Let $A$ be a square matrix. Then $L_A: \mathbb{R}^n\longrightarrow\mathbb{R}^n$ is invertible if and only if $A$ is invertible.

Proof. If $A$ is invertible, then one can immediately see that $L_A\circ L_{A^{-1}}=L_{A^{-1}}\circ L_A=I$. On the other hand, any linear map from $\mathbb{R}^n$ to $\mathbb{R}^n$ may be written as $L_B:\mathbb{R}^n\longrightarrow\mathbb{R}^n$ for some $n\times n$ square matrix $B$ as seen here. Suppose that $L_B:\mathbb{R}^n\longrightarrow\mathbb{R}^n$ is an inverse of $L_A: \mathbb{R}^n\longrightarrow\mathbb{R}^n$. Then $L_A\circ L_B=L_B\circ L_A=I$ i.e. for any $X\in\mathbb{R}^n$,
$$(AB)X=(BA)X=IX=X.$$
This implies that $AB=BA=I$. (Why?) Therefore, $A$ is invertible and $B=A^{-1}$.

Theorem. Let $F: U\longrightarrow V$ be a linear map, and assume that this map has an inverse map $G:V\longrightarrow U$. Then $G$ is a linear map.

Proof. The proof is straightforward and is left for an exercise.

Recall that a linear map $F: V\longrightarrow W$ is injective if and only if $\ker F=\{O\}$ as seen here.

Theorem. Let $V$ be a vector space of $\dim n$. If $W\subset V$ is a subspace of $\dim n$, then $W=V$.

Proof. It follows from the fact that a basis of a vector space is a maximal set of linearly independent vectors.

Theorem. Let $F: V\longrightarrow W$ be a linear map. Assume that $\dim V=\dim W$. Then

(i) If $\ker F=\{O\}$ then $F$ is invertible.

(ii) If $F$ is surjective then $F$ is invertible.

Proof. It follows from the formula
$$\dim V=\dim\ker F+\dim\mathrm{Im}F.$$
If $\ker F=\{O\}$, then $\dim\mathrm{Im}F=\dim V=\dim W$, so $F$ is also surjective; if $F$ is surjective, then $\dim\ker F=\dim V-\dim W=0$, so $F$ is also injective. In either case $F$ is one-to-one and onto, hence invertible.

Example. Let $F:\mathbb{R}^2\longrightarrow\mathbb{R}^2$ be the linear map such that
$$F(x,y)=(3x-y,4x+2y).$$
Show that $F$ is invertible.

Solution. Suppose that $(x,y)\in\ker F$. Then
$$\left\{\begin{aligned}
3x-y&=0,\\
4x+2y&=0.
\end{aligned}\right.$$
This system of linear equations has only the trivial solution $(x,y)=(0,0)$, so $\ker F=\{O\}$. Therefore, $F$ is invertible.
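The same conclusion can be checked in Sage using the matrix of $F$ with respect to the standard basis (a quick sketch):

A = matrix(QQ, [[3, -1], [4, 2]])   # matrix of F with respect to the standard basis
A.right_kernel()                    # the zero subspace, so ker F = {O}
A.is_invertible()                   # True; indeed det(A) = 10 is nonzero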

A linear map $F: V\longrightarrow W$ which is invertible is called an isomorphism. If there is an isomorphism from $V$ onto $W$, we say $V$ is isomorphic to $W$ and write $V\cong W$ or $V\stackrel{F}{\cong}W$ in case we want to specify the isomorphism $F$. If $V\cong W$, as vector spaces they are identical and we do not distinguish them.

The following theorem says that there is essentially only one vector space of dimension $n$, namely $\mathbb{R}^n$.

Theorem. Let $V$ be a vector space of dimension $n$. Then $\mathbb{R}^n\cong V$.

Proof. Let $\{v_1,\cdots,v_n\}$ be a basis of $V$. Define a map $L:\mathbb{R}^n\longrightarrow V$ by
$$L(x_1,\cdots,x_n)=x_1v_1+\cdots+x_nv_n$$
for each $(x_1,\cdots,x_n)\in\mathbb{R}^n$. Then $L$ is an isomorphism.

Theorem. A square matrix $A$ is invertible if and only if its columns $A^1,\cdots,A^n$ are linearly independent.

Proof. Let $L_A:\mathbb{R}^n\longrightarrow\mathbb{R}^n$ be the map such that $L_AX=AX$. For $X=\begin{pmatrix}
x_1\\
\vdots\\
x_n
\end{pmatrix}\in\mathbb{R}^n$,
$$L_AX=x_1A^1+\cdots+x_nA^n.$$
Hence, $\ker L_A=\{O\}$ if and only if $A^1,\cdots,A^n$ are linearly independent.
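In Sage, the linear independence of the columns can be read off from the rank. A small sketch with an arbitrary matrix whose columns are dependent:

A = matrix(QQ, [[1, 2], [2, 4]])    # second column is twice the first
A.rank()                            # 1, so the columns are linearly dependent
A.is_invertible()                   # False, as the theorem predicts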

Composition of Linear Maps

Let $F: U\longrightarrow V$ and $G: V\longrightarrow W$ be two maps. The composite map $G\circ F: U\longrightarrow W$ is defined by
$$G\circ F(v)=G(F(v))$$
for each $v\in U$.

Example. Let $A$ be an $m\times n$ matrix and let $B$ be a $q\times m$ matrix. Let $L_A: \mathbb{R}^n\longrightarrow\mathbb{R}^m$ be the linear map such that $L_AX=AX$ for each $X\in\mathbb{R}^n$, and let $L_B:\mathbb{R}^m\longrightarrow\mathbb{R}^q$ be the linear map such that $L_BY=BY$ for each $Y\in\mathbb{R}^m$. Then
\begin{align*}
L_B\circ L_A(X)&=L_B(L_A(X))\\
&=B(AX)\\
&=(BA)X\\
&=L_{BA}(X)
\end{align*}
for each $X\in\mathbb{R}^n$. Therefore, $L_B\circ L_A=L_{BA}$.
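This identity is easy to verify in Sage for particular matrices. A small sketch reusing the matrix $A$ from the Sage examples above, with $B$ and $X$ chosen arbitrarily:

A = matrix([[1, 1, -2], [-1, 4, -5]])           # a 2 x 3 matrix (m = 2, n = 3)
B = matrix([[2, 0], [1, 3], [0, -1], [4, 2]])   # an arbitrary 4 x 2 matrix (q = 4)
X = vector([1, 2, 3])                           # an arbitrary vector in R^3
B*(A*X) == (B*A)*X                              # True: L_B(L_A(X)) = L_{BA}(X)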

Example. Let $A$ be an $m\times n$ matrix, and let $L_A:\mathbb{R}^n\longrightarrow\mathbb{R}^m$ be the linear map such that $L_A(X)=AX$ for each $X\in\mathbb{R}^n$. Let $C$ be a vector in $\mathbb{R}^m$ and let $T_C:\mathbb{R}^m\longrightarrow\mathbb{R}^m$ be the translation by $C$
$$T_C(Y)=Y+C$$
for each $Y\in\mathbb{R}^m$. Then for each $X\in\mathbb{R}^n$,
\begin{align*}
T_C\circ L_A(X)&=T_C(AX)\\
&=AX+C.
\end{align*}

Example. Let $V$ be a vector space. For each element $w$ of $V$, let $T_w: V\longrightarrow V$ be the translation by $w$, i.e.
$$T_w(v)=v+w$$
for each $v\in V$. Then
\begin{align*}
T_{w_1}\circ T_{w_2}(v)&=T_{w_1}(T_{w_2}(v))\\
&=T_{w_1}(v+w_2)\\
&=v+w_2+w_1
\end{align*}
for each $v\in V$. Therefore, $T_{w_1}\circ T_{w_2}=T_{w_1+w_2}$ i.e. the composite of translations is again a translation.

Remark. Note that translations are not linear. One easy way to see this is that $T_w(O)=O+w=w$, while a linear map must send $O$ to $O$. So $T_w$ is not linear unless $w=O$, in which case $T_w$ is the identity map.

Example. [Rotations] Let $\theta$ be a number, and $A(\theta)$ the matrix
$$A(\theta)=\begin{pmatrix}
\cos\theta & -\sin\theta\\
\sin\theta & \cos\theta
\end{pmatrix}.$$
Let $R_\theta:\mathbb{R}^2\longrightarrow\mathbb{R}^2$ be the rotation by angle $\theta$ i.e.
$$R_\theta(X)=A(\theta)X$$
for each $X\in\mathbb{R}^2$. Clearly, rotations are linear.
Now,
\begin{align*}
R_{\theta_1}\circ R_{\theta_2}(X)&=R_{\theta_1}(R_{\theta_2}(X))\\
&=R_{\theta_1}(A(\theta_2)X)\\
&=A(\theta_1)A(\theta_2)X\\
&=A(\theta_1+\theta_2)X\\
&=R_{\theta_1+\theta_2}(X).
\end{align*}
The identity $A(\theta_1)A(\theta_2)=A(\theta_1+\theta_2)$ used in the second-to-last step follows from the angle addition formulas. Therefore, $R_{\theta_1}\circ R_{\theta_2}=R_{\theta_1+\theta_2}$.
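The matrix identity $A(\theta_1)A(\theta_2)=A(\theta_1+\theta_2)$ can also be verified symbolically in Sage. A sketch (relying on Sage's trigonometric simplification):

var('t1 t2')
A1 = matrix([[cos(t1), -sin(t1)], [sin(t1), cos(t1)]])   # A(t1)
A2 = matrix([[cos(t2), -sin(t2)], [sin(t2), cos(t2)]])   # A(t2)
(A1*A2).simplify_full()                                  # should reduce to A(t1 + t2)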

Theorem. Let $U,V,W$ be vector spaces. Let $F:U\longrightarrow V$ and $G:V\longrightarrow W$ be linear maps. Then the composite map $G\circ F:U\longrightarrow W$ is also linear.

Proof. The proof is straightforward and is left for an exercise.

The Matrix Associated with a Linear Map

Given an $m\times n$ matrix $A$, there is an associated linear map $L_A: \mathbb{R}^n\longrightarrow\mathbb{R}^m$ as seen here. Conversely, given a linear map $L: \mathbb{R}^n\longrightarrow\mathbb{R}^m$ there exists an $m\times n$ matrix $A$ such that $L=L_A$. To see this, consider the unit column vectors $E^1,\cdots,E^n$ of $\mathbb{R}^n$. For each $j=1,\cdots,n$, let $L(E^j)=A^j$, where $A^j$ is a column vector in $\mathbb{R}^m$. For each $X\in\mathbb{R}^n$,
$$X=x_1E^1+\cdots+x_nE^n=\begin{pmatrix}
x_1\\
\vdots\\
x_n
\end{pmatrix}$$
and hence
\begin{align*}
LX&=x_1L(E^1)+\cdots+x_nL(E^n)\\
&=x_1A^1+\cdots+x_nA^n\\
&=AX
\end{align*}
where $A$ is the matrix whose column vectors are $A^1,\cdots,A^n$. The matrix $A$ is called the matrix associated with the linear map $L$.

Example. Let $L:\mathbb{R}^3\longrightarrow\mathbb{R}^2$ be the projection given by $$L\begin{pmatrix}
x\\
y\\
z
\end{pmatrix}=\begin{pmatrix}
x\\
y
\end{pmatrix}.$$
Find the matrix associated with $L$.

Solution. $$L(E^1)=\begin{pmatrix}
1\\
0\end{pmatrix},\ L(E^2)=\begin{pmatrix}
0\\
1
\end{pmatrix},\ L(E^3)=\begin{pmatrix}
0\\
0
\end{pmatrix}.$$
Thus, the matrix associated with $L$ is
$$A=\begin{pmatrix}
1 & 0 & 0\\
0 & 1 & 0
\end{pmatrix}.$$
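As a quick check in Sage, multiplying this matrix by a column vector reproduces the projection (the vector below is arbitrary):

A = matrix([[1, 0, 0], [0, 1, 0]])   # matrix associated with the projection L
X = vector([3, 7, -2])               # an arbitrary vector in R^3
A*X                                  # (3, 7): the z-coordinate is dropped, as expected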

Let us now consider a more general case. Let $V$ be a vector space and $\{v_1,\cdots,v_n\}$ a given basis of $V$. Let $L:V\longrightarrow V$ be a linear map. Then there exist numbers $c_{ij}$, $i,j=1,\cdots,n$ such that
\begin{align*}
Lv_1&=c_{11}v_1+\cdots+c_{1n}v_n,\\
&\vdots\\
Lv_n&=c_{n1}v_1+\cdots+c_{nn}v_n.
\end{align*}
Let $v=x_1v_1+\cdots+x_nv_n$. Then
\begin{align*}
Lv&=\sum_{i=1}^nx_iLv_i\\
&=\sum_{i=1}^nx_i\sum_{j=1}^nc_{ij}v_j\\
&=\sum_{j=1}^n(\sum_{i=1}^nx_ic_{ij})v_j.
\end{align*}
Hence, we have the following theorem.

Theorem. If $C=(c_{ij})$ is the matrix such that $Lv_i=\sum_{j=1}^nc_{ij}v_j$ and $X=\begin{pmatrix}
x_1\\
\vdots\\
x_n
\end{pmatrix}$ is the coordinate vector of $v$, then the coordinate vector of $Lv$ is ${}^tCX$ i.e. the matrix associated with $L$ is ${}^tC$ with respect to the basis $\{v_1,\cdots,v_n\}$.

Example. Let $L: V\longrightarrow V$ be a linear map. Let $\{v_1,v_2,v_3\}$ be a basis of $V$ such that
\begin{align*}
L(v_1)&=2v_1-v_2,\\
L(v_2)&=v_1+v_2-4v_3,\\
L(v_3)&=5v_1+4v_2+2v_3.
\end{align*}
The matrix associated with $L$ is
$$\begin{pmatrix}
2 & 1 & 5\\
-1 & 1 & 4\\
0 & -4 & 2
\end{pmatrix}.$$
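In Sage, the associated matrix can be obtained by transposing the coefficient matrix $C$, as the following sketch shows:

C = matrix([[2, -1, 0], [1, 1, -4], [5, 4, 2]])   # rows: coefficients of L(v_1), L(v_2), L(v_3)
C.transpose()                                     # the matrix associated with L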