From here on, a map from one vector space into another will be called an operator.
Definition. A linear operator $T$ is an operator such that
- $T(x+y)=Tx+Ty$ for any two vectors $x$ and $y$.
- $T(\alpha x)=\alpha Tx$ for any vector $x$ and any scalar $\alpha$.
Proposition. An operator $T$ is a linear operator if and only if
$$T(\alpha x+\beta y)=\alpha Tx+\beta Ty$$
for any vectors $x,y$ and scalars $\alpha,\beta$.
Denote by $\mathcal{D}(T)$, $\mathcal{R}(T)$ and $\mathcal{N}(T)$, the domain, the range and the null space, respectively, of a linear operator $T$. The null space $\mathcal{N}(T)$ is the kernel of $T$ i.e.
$$\mathcal{N}(T)=T^{-1}(0)=\{x\in \mathcal{D}(T): Tx=0\}.$$
Since the term kernel is reserved for something else in functional analysis, we call it the null space of $T$.
Example. [Differentiation] Let $X$ be the space of all polynomials on $[a,b]$. Define an operator $T: X\longrightarrow X$ by
$$Tx(t)=x'(t)$$
for each $x(t)\in X$. Then $T$ is linear; it is also onto, since every polynomial is the derivative of any of its antiderivatives, which are again polynomials.
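This example can be checked concretely. Below is a minimal Python sketch (not from the text) that represents a polynomial by its coefficient list and verifies both defining properties of linearity for the differentiation operator:

```python
# Sketch (illustrative, not from the text): polynomials on [a, b] represented
# as coefficient lists [c0, c1, c2, ...], so p(t) = sum_k c_k t^k.

def deriv(p):
    """The operator T: (Tp)(t) = p'(t), acting on coefficient lists."""
    return [k * p[k] for k in range(1, len(p))] or [0]

def add(p, q):
    """Coefficient-wise sum of two polynomials."""
    n = max(len(p), len(q))
    return [(p[k] if k < len(p) else 0) + (q[k] if k < len(q) else 0)
            for k in range(n)]

def scale(a, p):
    """Scalar multiple a*p."""
    return [a * c for c in p]

# Check T(x + y) = Tx + Ty and T(a x) = a Tx on sample polynomials.
x = [1, 2, 3]        # 1 + 2t + 3t^2
y = [0, 5, 0, 7]     # 5t + 7t^3
assert deriv(add(x, y)) == add(deriv(x), deriv(y))
assert deriv(scale(4, x)) == scale(4, deriv(x))
```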
Example. [Integration] Recall that $\mathcal{C}[a,b]$ denotes the space of all continuous functions on the closed interval $[a,b]$. Define an operator $T:\mathcal{C}[a,b]\longrightarrow\mathcal{C}[a,b]$ by
$$Tx(t)=\int_a^tx(\tau)d\tau$$
for each $x(t)\in\mathcal{C}[a,b]$. Then $T$ is linear.
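The integration operator can likewise be checked numerically. The sketch below (an illustration, not from the text) approximates $(Tx)(t)=\int_a^t x(\tau)\,d\tau$ with the trapezoid rule, which is itself linear in $x$, so the condition $T(\alpha x+\beta y)=\alpha Tx+\beta Ty$ holds up to floating-point rounding:

```python
# Sketch (assumption: a simple trapezoid rule suffices for illustration):
# T maps x to its running integral (Tx)(t) = integral of x from a to t.

def T(x, a=0.0, n=2000):
    """Return an approximation of t -> integral of x from a to t."""
    def Tx(t):
        h = (t - a) / n
        s = 0.5 * (x(a) + x(t)) + sum(x(a + i * h) for i in range(1, n))
        return s * h
    return Tx

x = lambda t: t ** 2
y = lambda t: 3.0 * t + 1.0
alpha, beta, t0 = 2.0, -1.5, 1.0

lhs = T(lambda u: alpha * x(u) + beta * y(u))(t0)
rhs = alpha * T(x)(t0) + beta * T(y)(t0)
assert abs(lhs - rhs) < 1e-9   # T(ax + by) = a Tx + b Ty, up to rounding
```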
Example. Let $A=(a_{jk})$ be an $r\times n$ matrix of real entries. Define an operator $T: \mathbb{R}^n\longrightarrow\mathbb{R}^r$ by
$$Tx=Ax=(a_{jk})(\xi_k)=\left(\sum_{k=1}^na_{jk}\xi_k\right)$$
for each $n\times 1$ column vector $x=(\xi_k)\in\mathbb{R}^n$. Then $T$ is linear, as seen in linear algebra.
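A plain-Python sketch of this example (the specific matrix and vectors are illustrative, not from the text):

```python
# Sketch: T x = A x for an r x n real matrix A, with no external libraries.

def matvec(A, x):
    """Apply T: R^n -> R^r, with (Tx)_j = sum_k a_jk * xi_k."""
    return [sum(a_jk * xi_k for a_jk, xi_k in zip(row, x)) for row in A]

A = [[1.0, 2.0, 0.0],
     [0.0, 1.0, -1.0]]          # r = 2, n = 3
x = [1.0, 2.0, 3.0]
y = [0.0, -1.0, 4.0]
a, b = 2.0, 3.0

# Verify T(a x + b y) = a Tx + b Ty.
axby = [a * u + b * v for u, v in zip(x, y)]
lhs = matvec(A, axby)
rhs = [a * u + b * v for u, v in zip(matvec(A, x), matvec(A, y))]
assert lhs == rhs
```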
Theorem. Let $T$ be a linear operator. Then
- The range $\mathcal{R}(T)$ is a vector space.
- If $\dim\mathcal{D}(T)=n<\infty$, then $\dim\mathcal{R}(T)\leq n$.
- The null space $\mathcal{N}(T)$ is a vector space.
Proof. Parts 1 and 3 are straightforward. We prove part 2. Choose any $y_1,\cdots,y_{n+1}\in\mathcal{R}(T)$. Then $y_1=Tx_1,\cdots,y_{n+1}=Tx_{n+1}$ for some $x_1,\cdots,x_{n+1}\in\mathcal{D}(T)$. Since $\dim\mathcal{D}(T)=n$, the vectors $x_1,\cdots,x_{n+1}$ are linearly dependent, so there exist scalars $\alpha_1,\cdots,\alpha_{n+1}$, not all zero, such that $\alpha_1x_1+\cdots+\alpha_{n+1}x_{n+1}=0$. Applying $T$ gives $\alpha_1y_1+\cdots+\alpha_{n+1}y_{n+1}=T(\alpha_1x_1+\cdots+\alpha_{n+1}x_{n+1})=0$, so $y_1,\cdots,y_{n+1}$ are linearly dependent. Since these were arbitrary, $\mathcal{R}(T)$ has no linearly independent subset of $n+1$ elements, and hence $\dim\mathcal{R}(T)\leq n$.
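For the matrix operator $Tx=Ax$ with $A$ an $r\times n$ matrix, part 2 says that the rank of $A$ (which equals $\dim\mathcal{R}(T)$) cannot exceed $n$. A minimal numeric check (the `rank` helper is an illustrative sketch via Gaussian elimination, not from the text):

```python
# Sketch: dim R(T) = rank(A) <= n = dim D(T) for T x = A x, A an r x n matrix.

def rank(A, tol=1e-12):
    """Rank of A via Gaussian elimination with partial pivoting."""
    M = [row[:] for row in A]
    rows, cols = len(M), len(M[0])
    r = 0
    for c in range(cols):
        pivot = max(range(r, rows), key=lambda i: abs(M[i][c]), default=None)
        if pivot is None or abs(M[pivot][c]) < tol:
            continue                      # no usable pivot in this column
        M[r], M[pivot] = M[pivot], M[r]
        for i in range(r + 1, rows):
            f = M[i][c] / M[r][c]
            for j in range(c, cols):
                M[i][j] -= f * M[r][j]
        r += 1
    return r

A = [[1.0, 2.0],
     [2.0, 4.0],
     [0.0, 1.0]]         # r = 3 rows, n = 2 columns
assert rank(A) <= 2      # dim R(T) <= dim D(T) = n
```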
Theorem. A linear operator $T$ is one-to-one if and only if $\mathcal{N}(T)=\{0\}$.
Proof. Suppose that $T$ is one-to-one. Let $a\in\mathcal{N}(T)$. Then $Ta=0=T0$. Since $T$ is one-to-one, $a=0$ and hence $\mathcal{N}(T)=\{0\}$. Conversely, suppose that $\mathcal{N}(T)=\{0\}$ and let $Ta=Tb$. By linearity $T(a-b)=0$, so $a-b\in\mathcal{N}(T)=\{0\}$ and $a=b$. Thus, $T$ is one-to-one.
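In finite dimensions this criterion is easy to test numerically. Below is a small sketch (the $2\times 2$ matrix is a hypothetical example, not from the text): a nonzero null-space vector immediately produces two distinct inputs with the same image, so the operator fails to be one-to-one.

```python
# Sketch: a nonzero vector in N(T) breaks injectivity, since x and x + n
# have the same image whenever T n = 0.

def matvec(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

A = [[1.0, 2.0],
     [2.0, 4.0]]                          # singular: second row = 2 * first
n = [2.0, -1.0]                           # A n = 0, so n is in N(T) \ {0}
assert matvec(A, n) == [0.0, 0.0]

x = [1.0, 1.0]
x2 = [x[0] + n[0], x[1] + n[1]]           # x2 = x + n, and x2 != x
assert matvec(A, x) == matvec(A, x2)      # same image: T is not one-to-one
assert x != x2
```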
Theorem.
- $T^{-1}: \mathcal{R}(T)\longrightarrow\mathcal{D}(T)$ exists if and only if $\mathcal{N}(T)=\{0\}$ if and only if $T$ is one-to-one.
- If $T^{-1}$ exists, it is linear.
- If $\dim\mathcal{D}(T)=n<\infty$ and $T^{-1}$ exists, then $\dim\mathcal{R}(T)=\dim\mathcal{D}(T)$.
Proof. Part 1 follows from the previous theorem, since $T^{-1}:\mathcal{R}(T)\longrightarrow\mathcal{D}(T)$ exists precisely when $T$ is one-to-one. For part 3, apply part 2 of the earlier theorem twice: to $T$, giving $\dim\mathcal{R}(T)\leq\dim\mathcal{D}(T)$, and to $T^{-1}$, which maps $\mathcal{R}(T)$ onto $\mathcal{D}(T)$, giving $\dim\mathcal{D}(T)\leq\dim\mathcal{R}(T)$. Let us prove part 2. Let $y_1,y_2\in\mathcal{R}(T)$. Then there exist $x_1,x_2\in\mathcal{D}(T)$ such that $y_1=Tx_1$, $y_2=Tx_2$. Now,
\begin{align*}
\alpha y_1+\beta y_2&=\alpha Tx_1+\beta Tx_2\\
&=T(\alpha x_1+\beta x_2).
\end{align*}
So,
\begin{align*}
T^{-1}(\alpha y_1+\beta y_2)&=T^{-1}(T(\alpha x_1+\beta x_2))\\
&=\alpha x_1+\beta x_2\\
&=\alpha T^{-1}y_1+\beta T^{-1}y_2.
\end{align*}
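As a sanity check on part 2, the linearity of $T^{-1}$ can be verified numerically for an invertible $2\times 2$ matrix (a hypothetical example; `inv2` is an illustrative helper, not from the text):

```python
# Sketch: for an invertible matrix A, the inverse map inherits linearity,
# T^{-1}(a y1 + b y2) = a T^{-1} y1 + b T^{-1} y2.

def matvec(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def inv2(A):
    """Inverse of a 2x2 matrix (assumes nonzero determinant)."""
    (p, q), (r, s) = A
    d = p * s - q * r
    return [[s / d, -q / d], [-r / d, p / d]]

A = [[2.0, 1.0],
     [1.0, 1.0]]                          # determinant 1, so invertible
Ainv = inv2(A)
y1, y2 = [3.0, 5.0], [-1.0, 2.0]
a, b = 4.0, -2.0

ayby = [a * u + b * v for u, v in zip(y1, y2)]
lhs = matvec(Ainv, ayby)
rhs = [a * u + b * v for u, v in zip(matvec(Ainv, y1), matvec(Ainv, y2))]
assert all(abs(l - r) < 1e-12 for l, r in zip(lhs, rhs))
```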