Let $F: V\longrightarrow W$ be a mapping. $F$ is said to be *invertible* if there exists a map $G: W\longrightarrow V$ such that

$$G\circ F=I_V,\ F\circ G=I_W,$$

where $I_V: V\longrightarrow V$ and $I_W: W\longrightarrow W$ are the identity maps on $V$ and $W$, respectively. The map $G$ is called the *inverse* of $F$ and is denoted by $F^{-1}$.

*Theorem*. A map $F: V\longrightarrow W$ has an inverse if and only if it is one-to-one (injective) and onto (surjective).

*Remark*. If a map $F: V\longrightarrow W$ has an inverse, it is unique.

*Example*. The inverse map of $T_u$ is $T_{-u}$.

*Example*. Let $A$ be a square matrix. Then $L_A: \mathbb{R}^n\longrightarrow\mathbb{R}^n$ is invertible if and only if $A$ is invertible.

*Proof*. If $A$ is invertible, then one immediately sees that $L_A\circ L_{A^{-1}}=L_{A^{-1}}\circ L_A=I$. Conversely, any linear map from $\mathbb{R}^n$ to $\mathbb{R}^n$ may be written as $L_B:\mathbb{R}^n\longrightarrow\mathbb{R}^n$ for some $n\times n$ matrix $B$, as seen earlier. Suppose that $L_B:\mathbb{R}^n\longrightarrow\mathbb{R}^n$ is an inverse of $L_A: \mathbb{R}^n\longrightarrow\mathbb{R}^n$. Then $L_A\circ L_B=L_B\circ L_A=I$, i.e., for any $X\in\mathbb{R}^n$,

$$(AB)X=(BA)X=IX=X.$$

This implies that $AB=BA=I$. (Why?) Therefore, $A$ is invertible and $B=A^{-1}$.
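The first direction of the proof can be checked numerically. The following NumPy sketch (with an arbitrary invertible matrix chosen for illustration, not taken from the text) verifies that $L_A$ and $L_{A^{-1}}$ compose to the identity:

```python
import numpy as np

# A hypothetical 2x2 invertible matrix (illustration only).
A = np.array([[2.0, 1.0],
              [1.0, 1.0]])
A_inv = np.linalg.inv(A)

X = np.array([3.0, -5.0])  # an arbitrary vector in R^2

# L_A(L_{A^{-1}}(X)) and L_{A^{-1}}(L_A(X)) should both return X.
assert np.allclose(A @ (A_inv @ X), X)
assert np.allclose(A_inv @ (A @ X), X)

# (AB)X = A(BX): composing the maps corresponds to multiplying the matrices.
assert np.allclose((A @ A_inv) @ X, A @ (A_inv @ X))
```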

*Theorem*. Let $F: U\longrightarrow V$ be a linear map, and assume that this map has an inverse map $G:V\longrightarrow U$. Then $G$ is a linear map.

*Proof*. The proof is straightforward and is left as an exercise.

Recall that a linear map $F: V\longrightarrow W$ is injective if and only if $\ker F=\{O\}$, as seen earlier.

*Theorem*. Let $V$ be a vector space of dimension $n$. If $W\subset V$ is a subspace of dimension $n$, then $W=V$.

*Proof*. It follows from the fact that a basis of a vector space is a maximal set of linearly independent vectors: a basis of $W$ consists of $n$ linearly independent vectors in $V$, and hence is also a basis of $V$.

*Theorem*. Let $F: V\longrightarrow W$ be a linear map. Assume that $\dim V=\dim W$. Then

(i) If $\ker F=\{O\}$ then $F$ is invertible.

(ii) If $F$ is surjective then $F$ is invertible.

*Proof*. It follows from the formula

$$\dim V=\dim\ker F+\dim\mathrm{Im}F.$$

If $\ker F=\{O\}$, then $\dim\mathrm{Im}F=\dim V=\dim W$, so $\mathrm{Im}F=W$ by the previous theorem; hence $F$ is bijective, proving (i). If $F$ is surjective, then $\dim\ker F=\dim V-\dim\mathrm{Im}F=\dim V-\dim W=0$, so $\ker F=\{O\}$ and (ii) follows from (i).
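The dimension formula can be checked numerically. Here is a NumPy sketch (the matrix is a hypothetical example, chosen to have a nontrivial kernel) that computes the rank and nullity of $L_A$ from the singular values of $A$:

```python
import numpy as np

# Check dim V = dim ker F + dim Im F for a sample matrix map L_A on R^3.
A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0],   # = 2 * first row, so the kernel is nontrivial
              [1.0, 0.0, 1.0]])

n = A.shape[1]                      # dim V
sv = np.linalg.svd(A, compute_uv=False)
dim_im = int(np.sum(sv > 1e-10))    # rank = number of nonzero singular values
dim_ker = int(np.sum(sv <= 1e-10))  # nullity (valid here since A is square)

assert dim_ker + dim_im == n
```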

*Example*. Let $F:\mathbb{R}^2\longrightarrow\mathbb{R}^2$ be the linear map such that

$$F(x,y)=(3x-y,4x+2y).$$

Show that $F$ is invertible.

*Solution*. Suppose that $(x,y)\in\ker F$. Then

$$\left\{\begin{aligned}

3x-y&=0,\\

4x+2y&=0.

\end{aligned}\right.$$

This system of linear equations has only the trivial solution $(x,y)=(0,0)$, so $\ker F=\{O\}$. Therefore, $F$ is invertible.
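The computation above can be mirrored numerically (a NumPy sketch of this example):

```python
import numpy as np

# F(x, y) = (3x - y, 4x + 2y) is the map L_A for this matrix:
A = np.array([[3.0, -1.0],
              [4.0,  2.0]])

# det A = 3*2 - (-1)*4 = 10 != 0, so the homogeneous system has only
# the trivial solution; hence ker F = {O} and F is invertible.
assert abs(np.linalg.det(A) - 10.0) < 1e-9

A_inv = np.linalg.inv(A)
x = np.array([1.0, 2.0])                # an arbitrary test vector
assert np.allclose(A_inv @ (A @ x), x)  # F^{-1} composed with F is the identity
```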

A linear map $F: V\longrightarrow W$ that is invertible is called an *isomorphism*. If there is an isomorphism from $V$ onto $W$, we say $V$ is isomorphic to $W$ and write $V\cong W$, or $V\stackrel{F}{\cong}W$ in case we want to specify the isomorphism $F$. If $V\cong W$, then $V$ and $W$ are structurally identical as vector spaces, and we do not distinguish them.

The following theorem says that there is essentially only one vector space of dimension $n$, namely $\mathbb{R}^n$.

*Theorem*. Let $V$ be a vector space of dimension $n$. Then $\mathbb{R}^n\cong V$.

*Proof*. Let $\{v_1,\cdots,v_n\}$ be a basis of $V$. Define a map $L:\mathbb{R}^n\longrightarrow V$ by

$$L(x_1,\cdots,x_n)=x_1v_1+\cdots+x_nv_n$$

for each $(x_1,\cdots,x_n)\in\mathbb{R}^n$. Then $L$ is linear, injective (since $v_1,\cdots,v_n$ are linearly independent), and surjective (since $v_1,\cdots,v_n$ span $V$), hence an isomorphism.
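A minimal numerical sketch of this coordinate isomorphism, taking $V=\mathbb{R}^2$ with a hypothetical basis chosen for illustration:

```python
import numpy as np

# A hypothetical basis {v1, v2} of V = R^2 (any basis works):
v1 = np.array([1.0, 1.0])
v2 = np.array([1.0, -1.0])

def L(x):
    """L(x1, x2) = x1*v1 + x2*v2 -- the coordinate isomorphism."""
    return x[0] * v1 + x[1] * v2

# The inverse map recovers the coordinates of a vector w.r.t. the basis:
B = np.column_stack([v1, v2])     # matrix whose columns are the basis vectors
w = L(np.array([2.0, 3.0]))
coords = np.linalg.solve(B, w)    # solve B @ coords = w
assert np.allclose(coords, [2.0, 3.0])
```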

*Theorem*. A square matrix $A$ is invertible if and only if its columns $A^1,\cdots,A^n$ are linearly independent.

*Proof*. Let $L_A:\mathbb{R}^n\longrightarrow\mathbb{R}^n$ be the map such that $L_AX=AX$. For $X=\begin{pmatrix}x_1\\ \vdots\\ x_n\end{pmatrix}\in\mathbb{R}^n$,

$$L_AX=x_1A^1+\cdots+x_nA^n.$$

Hence, $\ker L_A=\{O\}$ if and only if $A^1,\cdots,A^n$ are linearly independent. Since $\ker L_A=\{O\}$ if and only if $L_A$ is invertible (by the theorem above), and $L_A$ is invertible if and only if $A$ is invertible (by the earlier example), the claim follows.
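This equivalence can be illustrated numerically (a NumPy sketch; the two matrices are hypothetical examples):

```python
import numpy as np

def columns_independent(A, tol=1e-10):
    """The columns of A are linearly independent iff rank(A) equals the
    number of columns."""
    return np.linalg.matrix_rank(A, tol=tol) == A.shape[1]

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])     # det = -2, invertible
B = np.array([[1.0, 2.0],
              [2.0, 4.0]])     # second column = 2 * first column, singular

# Independent columns correspond to a nonzero determinant, and vice versa.
assert columns_independent(A) and abs(np.linalg.det(A)) > 1e-10
assert not columns_independent(B) and abs(np.linalg.det(B)) < 1e-10
```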