
Differentiable Manifolds and Tangent Spaces

In $\mathbb{R}^n$, there is a globally defined orthonormal frame
$$E_{1p}=(1,0,\cdots,0)_p,\ E_{2p}=(0,1,0,\cdots,0)_p,\cdots,E_{np}=(0,\cdots,0,1)_p.$$
For any tangent vector $X_p\in T_p(\mathbb{R}^n)$, $X_p=\sum_{i=1}^n\alpha^iE_{ip}$. Note that the coefficients $\alpha^i$ are the ones that distinguish tangent vectors in $T_p(\mathbb{R}^n)$. For a differentiable function $f$, the directional derivative $X_p^\ast f$ of $f$ with respect to $X_p$ is given by
$$X_p^\ast f=\sum_{i=1}^n\alpha^i\left(\frac{\partial f}{\partial x_i}\right)_p.$$
We identify each $X_p$ with the differential operator
$$X_p^\ast=\sum_{i=1}^n\alpha^i\frac{\partial}{\partial x_i}:C^\infty(p)\longrightarrow\mathbb{R}.$$
Then the frame fields $E_{1p},E_{2p},\cdots,E_{np}$ are identified with
$$\left(\frac{\partial}{\partial x_1}\right)_p,\left(\frac{\partial}{\partial x_2}\right)_p,\cdots,\left(\frac{\partial}{\partial x_n}\right)_p$$
respectively. Unlike $\mathbb{R}^n$, we cannot always have a globally defined frame on a differentiable manifold. So it is necessary for us to use local coordinate neighborhoods that are homeomorphic to $\mathbb{R}^n$ and the associated frames $\frac{\partial}{\partial x_1},\frac{\partial}{\partial x_2},\cdots,\frac{\partial}{\partial x_n}$.
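For a concrete instance of the identification $X_p\leftrightarrow X_p^\ast$, here is a small sympy sketch (the function $f$, the point $p$, and the coefficients $\alpha^i$ are made up for illustration):

```python
# Sketch: the directional derivative X_p^* f = sum_i alpha^i (df/dx_i)(p),
# computed symbolically; f, p, and the coefficients alpha are made-up examples.
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3')
f = x1**2 * x2 + sp.sin(x3)          # an arbitrary smooth function
alpha = [2, -1, 3]                   # coefficients of X_p in the frame E_ip
p = {x1: 1, x2: 2, x3: 0}            # the point p

# X_p^* f = sum_i alpha^i * (partial f / partial x_i) evaluated at p
Xpf = sum(a * sp.diff(f, xi).subs(p) for a, xi in zip(alpha, (x1, x2, x3)))
print(Xpf)
```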

Example. The points $(x,y,z)$ of the unit sphere $S^2$ are represented in terms of the spherical coordinates $(\phi,\theta)$ as
$$x=\sin\phi\cos\theta,\ y=\sin\phi\sin\theta,\ z=\cos\phi,\ 0\leq\phi\leq\pi,\ 0\leq\theta\leq 2\pi.$$
By the chain rule, one finds the standard basis $\frac{\partial}{\partial\phi},\frac{\partial}{\partial\theta}$ for $T_\ast S^2$:
\begin{align*}
\frac{\partial}{\partial\phi}&=\cos\phi\cos\theta\frac{\partial}{\partial x}+\cos\phi\sin\theta\frac{\partial}{\partial y}-\sin\phi\frac{\partial}{\partial z},\\
\frac{\partial}{\partial\theta}&=-\sin\phi\sin\theta\frac{\partial}{\partial x}+\sin\phi\cos\theta\frac{\partial}{\partial y}.
\end{align*}
The frame field is not globally defined on $S^2$ since $\frac{\partial}{\partial\theta}$ vanishes at $\phi=0,\pi$. More generally, the following theorem holds.

[Figure: Frame field on the 2-sphere]

Theorem. [Hairy Ball Theorem] If $n$ is even, there is no non-vanishing $C^\infty$ vector field on $S^n$, i.e. every $C^\infty$ vector field on $S^n$ must vanish at some point of $S^n$.

The Hairy Ball Theorem tells us why we have bald spots (cowlicks) on our heads. It can also be stated as “you cannot comb a hairy ball flat.” There may also be a meteorological implication of this theorem: it implies that there must be at least one spot on earth where there is no wind at all. Such a no-wind spot may be the eye of a hurricane. So, as long as there is wind on earth (and there always is), there must be a hurricane somewhere at all times.

It is known that every odd-dimensional sphere has at least one non-vanishing $C^\infty$ vector field, and that only the spheres $S^1, S^3, S^7$ admit a global $C^\infty$ frame field. For instance, there are three mutually perpendicular unit vector fields on $S^3\subset\mathbb{R}^4$, i.e. a frame field: Let $S^3=\{(x^1,x^2,x^3,x^4)\in\mathbb{R}^4: \sum_{i=1}^4(x^i)^2=1\}$. Then
\begin{align*}
X&=-x^2\frac{\partial}{\partial x^1}+x^1\frac{\partial}{\partial x^2}+x^4\frac{\partial}{\partial x^3}-x^3\frac{\partial}{\partial x^4},\\
Y&=-x^3\frac{\partial}{\partial x^1}-x^4\frac{\partial}{\partial x^2}+x^1\frac{\partial}{\partial x^3}+x^2\frac{\partial}{\partial x^4},\\
Z&=-x^4\frac{\partial}{\partial x^1}+x^3\frac{\partial}{\partial x^2}-x^2\frac{\partial}{\partial x^3}+x^1\frac{\partial}{\partial x^4}
\end{align*}
form an orthonormal frame field of $C^\infty$ vector fields on $S^3$.
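One can verify these claims directly. Here is a sympy sketch checking that $X,Y,Z$ are tangent to $S^3$, mutually orthogonal, and of unit length on the sphere:

```python
# Sketch verifying symbolically that X, Y, Z above are tangent to S^3,
# mutually orthogonal, and of unit length where sum_i (x^i)^2 = 1.
import sympy as sp

x1, x2, x3, x4 = sp.symbols('x1 x2 x3 x4', real=True)
p = sp.Matrix([x1, x2, x3, x4])      # position vector; also the normal to S^3
r2 = x1**2 + x2**2 + x3**2 + x4**2

X = sp.Matrix([-x2,  x1,  x4, -x3])
Y = sp.Matrix([-x3, -x4,  x1,  x2])
Z = sp.Matrix([-x4,  x3, -x2,  x1])

for V in (X, Y, Z):
    assert sp.simplify(V.dot(p)) == 0          # tangent to the sphere
    assert sp.simplify(V.dot(V) - r2) == 0     # |V|^2 = 1 when r^2 = 1
assert X.dot(Y) == 0 and X.dot(Z) == 0 and Y.dot(Z) == 0
print("X, Y, Z form an orthonormal frame field on S^3")
```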

Fibre Bundles

A fibre bundle is an object $(E,M,F,\pi)$ consisting of

  1. The total space $E$;
  2. The base space $M$ with an open covering $\mathcal{U}=\{U_\alpha\}_{\alpha\in\mathcal{A}}$;
  3. The fibre $F$ and the projection map $E\stackrel{\pi}{\longrightarrow}M$.

The simplest case is $E=M\times F$. In this case, the bundle is called a trivial bundle. In general the total space may be too complicated for us to understand, so it would be nice if we could always find smaller parts that are simple enough to understand, such as trivial bundles. For this reason, we want the fibre bundle to have the additional property: for each $U_\alpha\in\mathcal{U}$, there exists a homeomorphism $h_\alpha : \pi^{-1}(U_\alpha)\longrightarrow U_\alpha\times F$. Such a homeomorphism $h_\alpha$ is called a local trivialization. For each $x\in M$, $F_x:=\pi^{-1}(x)$ is homeomorphic to $\{x\}\times F$. $F_x$ is called the fibre over $x$.

Let $x\in U_\alpha\cap U_\beta$. Then $F_x^\alpha\subset\pi^{-1}(U_\alpha)$ and $F_x^\beta\subset\pi^{-1}(U_\beta)$ may not be the same. However, the two fibres are homeomorphic. For each $x\in M$, denote by $h_{\alpha\beta}(x)$ the homeomorphism from $F_x^\alpha$ to $F_x^\beta$. Then for each $x\in M$, $h_{\alpha\beta}(x)\in\mathrm{Aut}(F)$ where $\mathrm{Aut}(F)$ is the group of homeomorphisms from $F$ to itself i.e. the automorphism group of $F$. The map $h_{\alpha\beta}: U_\alpha\cap U_\beta\longrightarrow\mathrm{Aut}(F)$ is called a transition map. Note that for $U_\alpha,U_\beta\in\mathcal{U}$ with $U_\alpha\cap U_\beta\ne\emptyset$, $h_\alpha\circ h_\beta^{-1}:(U_\alpha\cap U_\beta)\times F\longrightarrow (U_\alpha\cap U_\beta)\times F$ satisfies
$$h_\alpha\circ h_\beta^{-1}(x,f)=(x,h_{\alpha\beta}(x)f)$$
for any $x\in U_\alpha\cap U_\beta$, $f\in F$.
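As a toy illustration (not from the discussion above), consider the Möbius band as a bundle over $S^1$ with fibre $\mathbb{R}$: two charts cover the circle, and on the two components of their overlap the transition map is either the identity or the flip $f\mapsto -f$ in $\mathrm{Aut}(\mathbb{R})$. A minimal sketch checking consistency of the transition maps:

```python
# Toy sketch (not from the text): the Mobius band as a fibre bundle over S^1 with
# fibre R, two charts U_1, U_2, and transition maps h_12(x) = +/-1 in Aut(R).
# We check the consistency condition h_12(x) * h_21(x) = identity on overlaps.
import math

def h12(t):
    """Transition map on the two components of the overlap U_1 and U_2
    (angle t on the circle): identity on one component, the flip on the other."""
    return 1.0 if 0 < t < math.pi else -1.0

def h21(t):
    return 1.0 / h12(t)   # the inverse automorphism of the fibre (here just +/-1)

# h12(x) composed with h21(x) must be the identity at every overlap point
for t in (0.5, 1.0, 4.0, 5.0):
    assert h12(t) * h21(t) == 1.0
print("transition maps are consistent on the overlaps")
```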

Structural Equations

Definition. The dual 1-forms $\theta_1,\theta_2,\theta_3$ of a frame $E_1,E_2,E_3$ on $\mathbb{E}^3$ are defined by
$$\theta_i(v)=v\cdot E_i(p),\ v\in T_p\mathbb{E}^3.$$
Clearly $\theta_i$ is linear.

Example. The dual 1-forms of the natural frame $U_1,U_2,U_3$ are $dx_1$, $dx_2$, $dx_3$ since
$$dx_i(v)=v_i=v\cdot U_i(p)$$
for each $v\in T_p\mathbb{E}^3$.

For any vector field $V$ on $\mathbb{E}^3$,
$$V=\sum_i\theta_i(V)E_i.$$
To see this, let us calculate for each $V(p)\in T_p\mathbb{E}^3$
\begin{align*}
\sum_i\theta_i(V(p))E_i(p)&=\sum_i(V(p)\cdot E_i(p))E_i(p)\\
&=\sum_iV_i(p)E_i(p)\\
&=V(p).
\end{align*}
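This expansion is easy to check numerically. A sketch with a made-up orthonormal frame (the columns of a random orthogonal matrix):

```python
# Numeric check of V = sum_i theta_i(V) E_i at a point, for a made-up orthonormal
# frame: theta_i(v) = v . E_i, so the right-hand side just re-expands v in the frame.
import numpy as np

rng = np.random.default_rng(0)
M = rng.normal(size=(3, 3))
E, _ = np.linalg.qr(M)               # columns of E: an orthonormal frame E_1, E_2, E_3
v = rng.normal(size=3)               # an arbitrary tangent vector V(p)

theta = E.T @ v                      # theta_i(v) = v . E_i
reconstructed = E @ theta            # sum_i theta_i(v) E_i
assert np.allclose(reconstructed, v)
print("V = sum_i theta_i(V) E_i holds")
```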

Lemma. Let $\theta_1,\theta_2,\theta_3$ be the dual 1-forms of a frame $E_1, E_2, E_3$. Then any 1-form $\phi$ on $\mathbb{E}^3$ has a unique expression
$$\phi=\sum_i\phi(E_i)\theta_i.$$

Proof. Let $V$ be any vector field on $\mathbb{E}^3$. Then
\begin{align*}
\sum_i\phi(E_i)\theta_i(V)&=\phi\left(\sum_i\theta_i(V)E_i\right)\ \mbox{by linearity of $\phi$}\\
&=\phi(V).
\end{align*}
Let $A=(a_{ij})$ be the attitude matrix of a frame field $E_1$, $E_2$, $E_3$, i.e.
$$E_i=\sum_ja_{ij}U_j,\ i=1,2,3.\ \ \ \ \ \mbox{(1)}$$
Clearly $\theta_i=\sum_j\theta_i(U_j)dx_j$. On the other hand,
$$\theta_i(U_j)=E_i\cdot U_j=\left(\sum_ka_{ik}U_k\right)\cdot U_j=a_{ij}.$$ Hence the dual formulation of (1) is
$$\theta_i=\sum_ja_{ij}dx_j.\ \ \ \ \ \mbox{(2)}$$

Theorem. [Cartan Structural Equations] Let $E_1$, $E_2$, $E_3$ be a frame field on $\mathbb{E}^3$ with dual 1-forms $\theta_1$, $\theta_2$, $\theta_3$ and connection forms $\omega_{ij}$, $i,j=1,2,3$. Then

  1. The First Structural Equations: $$d\theta_i=\sum_j\omega_{ij}\wedge\theta_j.$$
  2. The Second Structural Equations: $$d\omega_{ij}=\sum_k\omega_{ik}\wedge\omega_{kj}.$$

Proof. The exterior derivative of (2) is
$$d\theta_i=\sum_jda_{ij}\wedge dx_j.$$ Since $\omega=dA\cdot{}^tA$ and ${}^tA=A^{-1}$ (recall that $A$ is an orthogonal matrix), $dA=\omega\cdot A$, i.e.
$$da_{ij}=\sum_k\omega_{ik}a_{kj}.$$
So,
\begin{align*}
d\theta_i&=\sum_j\left\{\left(\sum_k\omega_{ik}a_{kj}\right)\wedge dx_j\right\}\\
&=\sum_k\left\{\omega_{ik}\wedge\sum_j a_{kj}dx_j\right\}\\
&=\sum_k\omega_{ik}\wedge\theta_k.
\end{align*}

From $\omega=dA\cdot{}^tA$,
$$\omega_{ij}=\sum_kda_{ik}a_{jk}.\ \ \ \ \ \mbox{(3)}$$
The exterior derivative of (3) is
\begin{align*}
d\omega_{ij}&=\sum_k da_{jk}\wedge da_{ik}\\
&=-\sum_k da_{ik}\wedge da_{jk},
\end{align*}
i.e.
\begin{align*}
d\omega&=-dA\wedge{}^t(dA)\\
&=-(\omega\cdot A)\cdot({}^tA\cdot{}^t\omega)\\
&=-\omega\cdot (A\cdot{}^tA)\cdot{}^t\omega\\
&=-\omega\cdot{}^t\omega\ \ \ (A\cdot{}^tA=I)\\
&=\omega\cdot\omega.\ \ \ (\mbox{$\omega$ is skew-symmetric.})
\end{align*}
This is equivalent to the second structural equations.

Example. [Structural Equations for the Spherical Frame Field] Let us first calculate the dual forms and connection forms.

From the spherical coordinates
\begin{align*}
x_1&=\rho\cos\varphi\cos\theta,\\
x_2&=\rho\cos\varphi\sin\theta,\\
x_3&=\rho\sin\varphi,
\end{align*}
we obtain differentials
\begin{align*}
dx_1&=\cos\varphi\cos\theta d\rho-\rho\sin\varphi\cos\theta d\varphi-\rho\cos\varphi\sin\theta d\theta,\\
dx_2&=\cos\varphi\sin\theta d\rho-\rho\sin\varphi\sin\theta d\varphi+\rho\cos\varphi\cos\theta d\theta,\\
dx_3&=\sin\varphi d\rho+\rho\cos\varphi d\varphi.
\end{align*}
From the spherical frame field $F_1$, $F_2$, $F_3$ discussed here, we find its attitude matrix
$$A=\begin{pmatrix}
\cos\varphi\cos\theta & \cos\varphi\sin\theta & \sin\varphi\\
-\sin\theta & \cos\theta & 0\\
-\sin\varphi\cos\theta & -\sin\varphi\sin\theta & \cos\varphi
\end{pmatrix}.$$
Thus by (2) we find the dual 1-forms
\begin{align*}
\begin{pmatrix}
\theta_1\\
\theta_2\\
\theta_3
\end{pmatrix}&=\begin{pmatrix}
\cos\varphi\cos\theta & \cos\varphi\sin\theta & \sin\varphi\\
-\sin\theta & \cos\theta & 0\\
-\sin\varphi\cos\theta & -\sin\varphi\sin\theta & \cos\varphi
\end{pmatrix}\begin{pmatrix}
dx_1\\
dx_2\\
dx_3
\end{pmatrix}\\
&=\begin{pmatrix}
d\rho\\
\rho\cos\varphi d\theta\\
\rho d\varphi
\end{pmatrix}.
\end{align*}
\begin{align*}
&dA=\\
&\begin{pmatrix}
-\sin\varphi\cos\theta d\varphi-\cos\varphi\sin\theta d\theta & -\sin\varphi\sin\theta d\varphi+\cos\varphi\cos\theta d\theta & \cos\varphi d\varphi\\
-\cos\theta d\theta & -\sin\theta d\theta & 0\\
-\cos\varphi\cos\theta d\varphi+\sin\varphi\sin\theta d\theta & -\cos\varphi\sin\theta d\varphi-\sin\varphi\cos\theta d\theta & -\sin\varphi d\varphi
\end{pmatrix}\end{align*}
and so,
\begin{align*}
\omega&=\begin{pmatrix}
0 & \omega_{12} & \omega_{13}\\
-\omega_{12} & 0 & \omega_{23}\\
-\omega_{13} & -\omega_{23} & 0
\end{pmatrix}\\
&=dA\cdot{}^tA\\
&=\begin{pmatrix}
0 & \cos\varphi d\theta & d\varphi\\
-\cos\varphi d\theta & 0 & \sin\varphi d\theta\\
-d\varphi & -\sin\varphi d\theta & 0
\end{pmatrix}.
\end{align*}
From these dual 1-forms and connection forms one can immediately verify the first and the second structural equations.
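For instance, one can let a computer algebra system verify $\omega=dA\cdot{}^tA$. A sympy sketch, expanding $dA=A_\rho\,d\rho+A_\varphi\,d\varphi+A_\theta\,d\theta$ and checking each coefficient matrix:

```python
# Sympy sketch verifying omega = dA . tA for the spherical attitude matrix:
# expand dA into its drho, dphi, dtheta parts and check each coefficient matrix.
import sympy as sp

rho, phi, theta = sp.symbols('rho varphi theta')
A = sp.Matrix([
    [sp.cos(phi)*sp.cos(theta),   sp.cos(phi)*sp.sin(theta),  sp.sin(phi)],
    [-sp.sin(theta),              sp.cos(theta),              0],
    [-sp.sin(phi)*sp.cos(theta), -sp.sin(phi)*sp.sin(theta),  sp.cos(phi)],
])

omega_rho   = (A.diff(rho)   * A.T).applyfunc(sp.simplify)   # coefficient of drho
omega_phi   = (A.diff(phi)   * A.T).applyfunc(sp.simplify)   # coefficient of dphi
omega_theta = (A.diff(theta) * A.T).applyfunc(sp.simplify)   # coefficient of dtheta

assert omega_rho == sp.zeros(3, 3)
assert omega_phi == sp.Matrix([[0, 0, 1], [0, 0, 0], [-1, 0, 0]])
assert omega_theta == sp.Matrix([[0, sp.cos(phi), 0],
                                 [-sp.cos(phi), 0, sp.sin(phi)],
                                 [0, -sp.sin(phi), 0]])
print("omega_12 = cos(phi) dtheta, omega_13 = dphi, omega_23 = sin(phi) dtheta")
```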

Tensors I

Tensors may be considered as a generalization of vectors and covectors. They are extremely important quantities for studying differential geometry and physics.

Let $M^n$ be an $n$-dimensional differentiable manifold. For each $x\in M^n$, let $E_x=T_xM^n$, i.e. the tangent space to $M^n$ at $x$. We denote the canonical basis of $E$ by $\partial=\left(\frac{\partial}{\partial x^1},\cdots,\frac{\partial}{\partial x^n}\right)$ and its dual basis by $\sigma=dx=(dx^1,\cdots,dx^n)$, where $x^1,\cdots,x^n$ are local coordinates. The canonical basis $\frac{\partial}{\partial x^1},\cdots,\frac{\partial}{\partial x^n}$ is also simply denoted by $\partial_1,\cdots,\partial_n$.

Covariant Tensors

Definition. A covariant tensor of rank $r$ is a multilinear real-valued function
$$Q:E\times E\times\cdots\times E\longrightarrow\mathbb{R}$$
of $r$-tuples of vectors. A covariant tensor of rank $r$ is also called a tensor of type $(0,r)$ or shortly a $(0,r)$-tensor. Note that the values of $Q$ must be independent of the basis in which the components of the vectors are expressed. A covariant vector (also called a covector or a 1-form) is a covariant tensor of rank 1. An important example of a covariant tensor of rank 2 is the metric tensor $G$:
$$G(v,w)=\langle v,w\rangle=\sum_{i,j}g_{ij}v^iw^j.$$

In components, by multilinearity
\begin{align*}
Q(v_1,\cdots,v_r)&=Q\left(\sum_{i_1}v_1^{i_1}\partial_{i_1},\cdots,\sum_{i_r}v_r^{i_r}\partial_{i_r}\right)\\
&=\sum_{i_1,\cdots,i_r}v_1^{i_1}\cdots v_r^{i_r}Q(\partial_{i_1},\cdots,\partial_{i_r}).
\end{align*}
Denote $Q(\partial_{i_1},\cdots,\partial_{i_r})$ by $Q_{i_1,\cdots,i_r}$. Then
$$Q(v_1,\cdots,v_r)=\sum_{i_1,\cdots,i_r}Q_{i_1,\cdots,i_r}v_1^{i_1}\cdots v_r^{i_r}.\ \ \ \ \ \mbox{(1)}$$
Using the Einstein summation convention, (1) can be written compactly as
$$Q(v_1,\cdots,v_r)=Q_{i_1,\cdots,i_r}v_1^{i_1}\cdots v_r^{i_r}.$$
The set of all covariant tensors of rank $r$ forms a vector space over $\mathbb{R}$. The number of components in such a tensor is $n^r$. The vector space of all covariant $r$-th rank tensors is denoted by
$$E^\ast\otimes E^\ast\otimes\cdots\otimes E^\ast=\otimes^r E^\ast.$$
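As a quick numerical illustration of (1) (with made-up components), the contraction $Q_{ij}v^iw^j$ for a rank 2 covariant tensor can be written as an einsum:

```python
# Numeric illustration (made-up data) of formula (1) for r = 2:
# Q(v, w) = Q_ij v^i w^j, written as an einsum contraction.
import numpy as np

n = 3
rng = np.random.default_rng(1)
Q = rng.normal(size=(n, n))          # components Q_ij = Q(partial_i, partial_j)
v, w = rng.normal(size=n), rng.normal(size=n)

value = np.einsum('ij,i,j->', Q, v, w)      # Q_ij v^i w^j (Einstein convention)
assert np.isclose(value, v @ Q @ w)         # the same contraction in matrix form
print(value)
```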

If $\alpha,\beta\in E^\ast$, i.e. covectors, we can form the 2nd rank covariant tensor, the tensor product $\alpha\otimes\beta$ of $\alpha$ and $\beta$: Define $\alpha\otimes\beta: E\times E\longrightarrow\mathbb{R}$ by
$$\alpha\otimes\beta(v,w)=\alpha(v)\beta(w).$$
If we write $\alpha=a_idx^i$ and $\beta=b_jdx^j$, then
$$(\alpha\otimes\beta)_{ij}=\alpha\otimes\beta(\partial_i,\partial_j)=\alpha(\partial_i)\beta(\partial_j)=a_ib_j.$$
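Numerically, the components $(\alpha\otimes\beta)_{ij}=a_ib_j$ are just the outer product of the coefficient arrays; a short numpy sketch (made-up coefficients):

```python
# Sketch: the tensor product of covectors alpha = a_i dx^i, beta = b_j dx^j has
# components (alpha (x) beta)_ij = a_i b_j, i.e. the outer product of a and b.
import numpy as np

a = np.array([1.0, 2.0, 3.0])        # coefficients a_i of alpha (made up)
b = np.array([4.0, 0.0, -1.0])       # coefficients b_j of beta (made up)

T = np.outer(a, b)                   # (alpha (x) beta)_ij = a_i b_j

# (alpha (x) beta)(v, w) = alpha(v) beta(w) for any vectors v, w
rng = np.random.default_rng(2)
v, w = rng.normal(size=3), rng.normal(size=3)
assert np.isclose(v @ T @ w, (a @ v) * (b @ w))
```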

Contravariant Tensors

A contravariant vector, i.e. an element of $E$ can be considered as a linear functional $v: E^\ast\longrightarrow\mathbb{R}$ defined by
$$v(\alpha)=\alpha(v)=a_iv^i,\ \alpha=a_idx^i\in E^\ast.$$

Definition. A contravariant tensor of rank $s$ is a multilinear real-valued function $T$ on $s$-tuples of covectors
$$T:E^\ast\times E^\ast\times\cdots\times E^\ast\longrightarrow\mathbb{R}.$$ A contravariant tensor of rank $s$ is also called a tensor of type $(s,0)$ or shortly $(s,0)$-tensor.
For 1-forms $\alpha_1,\cdots,\alpha_s$
$$T(\alpha_1,\cdots,\alpha_s)=a_{1_{i_1}}\cdots a_{s_{i_s}}T^{i_1\cdots i_s}$$
where
$$T^{i_1\cdots i_s}:=T(dx^{i_1},\cdots,dx^{i_s}).$$
The space of all contravariant tensors of rank $s$ is denoted by
$$E\otimes E\otimes\cdots\otimes E:=\otimes^s E.$$
Contravariant vectors are contravariant tensors of rank 1. An example of a contravariant tensor of rank 2 is the inverse of the metric tensor $G^{-1}=(g^{ij})$:
$$G^{-1}(\alpha,\beta)=g^{ij}a_ib_j.$$

Given a pair $v,w$ of contravariant vectors, we can form the tensor product $v\otimes w$ in the same manner as we did for covariant vectors. It is the 2nd rank contravariant tensor with components $(v\otimes w)^{ij}=v^iw^j$. The metric tensor $G$ and its inverse $G^{-1}$ may be written as
$$G=g_{ij}dx^i\otimes dx^j\ \mbox{and}\ G^{-1}=g^{ij}\partial_i\otimes\partial_j.$$

Mixed Tensors

Definition. A mixed tensor, $r$ times covariant and $s$ times contravariant, is a real multilinear function $W$
$$W: E^\ast\times E^\ast\times\cdots\times E^\ast\times E\times E\times\cdots\times E\longrightarrow\mathbb{R}$$
on $s$-tuples of covectors and $r$-tuples of vectors. It is also called a tensor of type $(s,r)$ or simply $(s,r)$-tensor. By multilinearity
$$W(\alpha_1,\cdots,\alpha_s, v_1,\cdots, v_r)=a_{1_{i_1}}\cdots a_{s_{i_s}}W^{i_1\cdots i_s}{}_{j_1\cdots j_r}v_1^{j_1}\cdots v_r^{j_r}$$
where
$$W^{i_1\cdots i_s}{}_{j_1\cdots j_r}:=W(dx^{i_1},\cdots,dx^{i_s},\partial_{j_1},\cdots,\partial_{j_r}).$$

A 2nd rank mixed tensor may arise from a linear operator $A: E\longrightarrow E$. Define $W_A: E^\ast\times E\longrightarrow\mathbb{R}$ by $W_A(\alpha,v)=\alpha(Av)$. Let $A=(A^i{}_j)$ be the matrix associated with $A$, i.e. $A(\partial_j)=\partial_i A^i{}_j$. Let us calculate the component of $W_A$:
$$W_A^i{}_j=W_A(dx^i,\partial_j)=dx^i(A(\partial_j))=dx^i(\partial_kA^k{}_j)=\delta^i_kA^k{}_j=A^i{}_j.$$
So the matrix of the mixed tensor $W_A$ is just the matrix associated with $A$. Conversely, given a mixed tensor $W$, once covariant and once contravariant, we can define a linear transformation $A$ such that $W(\alpha,v)=\alpha(Av)$. We do not distinguish between a linear transformation $A$ and its associated mixed tensor $W_A$. In components, $W(\alpha,v)$ is written as
$$W(\alpha,v)=a_iA^i{}_jv^j=aAv.$$
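A quick numeric check (with a made-up matrix) that the components of $W_A$ recover the matrix of $A$:

```python
# Numeric check (made-up matrix) that the mixed tensor W_A(alpha, v) = alpha(Av)
# has components W_A^i_j = A^i_j: feeding in the basis covector dx^i and basis
# vector partial_j just picks out the (i, j) entry of A.
import numpy as np

A = np.array([[1.0, 2.0], [3.0, 4.0]])

def W_A(alpha, v):
    return alpha @ (A @ v)           # W_A(alpha, v) = alpha(Av) = a_i A^i_j v^j

I = np.eye(2)                        # rows: basis covectors dx^i / vectors partial_j
components = np.array([[W_A(I[i], I[j]) for j in range(2)] for i in range(2)])
assert np.allclose(components, A)
```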

The tensor product $w\otimes\beta$ of a vector and a covector is the mixed tensor defined by
$$(w\otimes\beta)(\alpha,v)=\alpha(w)\beta(v).$$ The associated transformation can be written as
$$A=A^i{}_j\partial_i\otimes dx^j=\partial_i\otimes A^i{}_jdx^j.$$

For math undergraduates, the different ways of writing indices (raised, lowered, and mixed) in tensor notation can be very confusing. The main reason is that in standard math courses such as linear algebra or elementary differential geometry (classical differential geometry of curves and surfaces in $\mathbb{E}^3$) the matrix of a linear transformation is usually written as $A_{ij}$. Physics undergraduates don’t usually get a chance to learn tensors in undergraduate physics courses. In order to study more advanced differential geometry or physics, such as the theory of special and general relativity and field theory, one must be able to distinguish three different ways of writing matrices: $A_{ij}$, $A^{ij}$, and $A^i{}_j$. To summarize, $A_{ij}$ and $A^{ij}$ are bilinear forms on $E$ and $E^\ast$, respectively, defined by
$$A_{ij}v^iw^j\ \mbox{and}\ A^{ij}a_ib_j\ (\mbox{respectively}).$$ $A^i{}_j$ is the matrix of a linear transformation $A: E\longrightarrow E$.

Let $(E,\langle\ ,\ \rangle)$ be an inner product space. Given a linear transformation $A: E\longrightarrow E$ (i.e. a mixed tensor), one can associate a covariant bilinear form $A'$ by
$$A'(v,w):=\langle v,Aw\rangle=v^ig_{ij}A^j{}_k w^k.$$ So we see that the matrix of $A'$ is
$$A'_{ik}=g_{ij}A^j{}_k.$$ This process is described as “we lower the index $j$, making it a $k$, by means of the metric tensor $g_{ij}$.” In tensor analysis one uses the same letter, i.e. instead of $A'$, one writes
$$A_{ik}:=g_{ij}A^j{}_k.$$ This is clearly a covariant tensor. In general, the components of the associated covariant tensor $A_{ik}$ differ from those of the mixed tensor $A^i{}_j$. But if the basis is orthonormal, i.e. $g_{ij}=\delta_{ij}$, then they coincide. That is the reason why we simply write $A_{ij}$ without making any distinction in linear algebra or in elementary differential geometry.
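The index-lowering $A_{ik}=g_{ij}A^j{}_k$ is a single contraction; a numpy sketch with a made-up non-orthonormal metric, also checking that with $g_{ij}=\delta_{ij}$ the lowered and mixed components coincide:

```python
# Sketch of index lowering A_ik = g_ij A^j_k with a made-up non-orthonormal metric;
# with g = I the lowered components coincide with the mixed ones, as noted above.
import numpy as np

g = np.array([[2.0, 1.0], [1.0, 3.0]])       # a symmetric, positive-definite metric
A_mixed = np.array([[0.0, 1.0], [5.0, 2.0]]) # A^i_j, a linear transformation

A_lowered = np.einsum('ij,jk->ik', g, A_mixed)   # A_ik = g_ij A^j_k
assert not np.allclose(A_lowered, A_mixed)       # differs for a general metric

# with the orthonormal metric g_ij = delta_ij the two coincide
assert np.allclose(np.einsum('ij,jk->ik', np.eye(2), A_mixed), A_mixed)
```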

Similarly, one may associate to the linear transformation $A$ a contravariant bilinear form
$$\bar A(\alpha,\beta)=a_iA^i{}_jg^{jk}b_k$$ whose matrix components can be written as
$$A^{ik}=A^i{}_jg^{jk}.$$

Note that the metric tensor $g_{ij}$ represents a linear map from $E$ to $E^\ast$, sending the vector with components $v^j$ into the covector with components $g_{ij}v^j$. In quantum mechanics, the covector $g_{ij}v^j$ is denoted by $\langle v|$ and called a bra vector, while the vector $v^j$ is denoted by $|v\rangle$ and called a ket vector. Usually the inner product on $E$
$$\langle\ ,\ \rangle:E\times E\longrightarrow\mathbb{R};\ \langle v,w\rangle=g_{ij}v^iw^j$$ is considered as a covariant tensor of rank 2. But in quantum mechanics $\langle v,w\rangle$ is not considered as a covariant tensor $g_{ij}$ of rank 2 acting on a pair of vectors $(v,w)$, rather it is regarded as the braket $\langle v|w\rangle$, a bra vector $\langle v|$ acting on a ket vector $|w\rangle$.

Connection Forms

Let $E_1, E_2, E_3$ be an arbitrary frame field on $\mathbb{E}^3$. For each $v\in T_p\mathbb{E}^3$, $\nabla_v E_i\in T_p\mathbb{E}^3$, $i=1,2,3$. So, there exist unique 1-forms $\omega_{ij}:T_p\mathbb{E}^3\longrightarrow\mathbb{R}$, $i,j=1,2,3$ such that
\begin{align*}
\nabla_vE_1&=\omega_{11}(v)E_1(p)+\omega_{12}(v)E_2(p)+\omega_{13}(v)E_3(p),\\
\nabla_vE_2&=\omega_{21}(v)E_1(p)+\omega_{22}(v)E_2(p)+\omega_{23}(v)E_3(p),\\
\nabla_vE_3&=\omega_{31}(v)E_1(p)+\omega_{32}(v)E_2(p)+\omega_{33}(v)E_3(p)
\end{align*}
for each $v\in T_p\mathbb{E}^3$. These equations are called the connection equations of the frame field $E_1$, $E_2$, $E_3$. One can clearly see that $\omega_{ij}$ is determined by
$$\omega_{ij}(v)=\nabla_v E_i\cdot E_j(p).\ \ \ \ \ \mbox{(1)}$$ The 1-forms $\omega_{ij}$ are called the connection forms of the frame field $E_1,E_2,E_3$. Often the matrix $\omega=(\omega_{ij})$ is called the connection 1-form of the frame field $E_1,E_2,E_3$. The linearity of $\omega_{ij}$ is due to the linearity of the covariant derivative $\nabla E_i$.

Proposition. The matrix $\omega$ is a skew symmetric matrix, i.e. $\omega+{}^t\omega=0$.

Proof. Since $E_i\cdot E_j=\delta_{ij}$ is constant, the directional derivative $v[E_i\cdot E_j]=0$. On the other hand, by the Leibniz rule,
\begin{align*}
v[E_i\cdot E_j]&=\nabla_vE_i\cdot E_j(p)+E_i(p)\cdot \nabla_vE_j\\
&=\omega_{ij}(v)+\omega_{ji}(v).
\end{align*}
Hence,
$$\omega_{ij}+\omega_{ji}=0.\ \ \ \ \ \mbox{(2)}$$

If $i=j$ in (2), we get $\omega_{ii}=0$. So, the connection 1-form $\omega$ is written as
$$\omega=\begin{pmatrix}
0 & \omega_{12} & \omega_{13}\\
-\omega_{12} & 0 &\omega_{23}\\
-\omega_{13} & -\omega_{23} & 0
\end{pmatrix}.\ \ \ \ \ \mbox{(3)}$$

Remark. The set of all $3\times 3$ skew symmetric matrices is denoted by $\mathfrak{o}(3)$. It is the Lie algebra of the orthogonal group $\mathrm{O}(3)$. The orthogonal group $\mathrm{O}(3)$ is the set of all $3\times 3$ orthogonal matrices and it is a Lie group. Recall that a square matrix $A$ is orthogonal if and only if $A\cdot{}^tA=I$, i.e. $A^{-1}={}^tA$.

The connection equations of the frame field $E_1$, $E_2$, $E_3$
$$\nabla_VE_i=\sum_j\omega_{ij}(V)E_j,\ i=1,2,3\ \ \ \ \ \mbox{(4)}$$
where $V$ is a vector field on $\mathbb{E}^3$ become
$$\begin{array}{ccccccc}
\nabla_VE_1&=&&&\omega_{12}(V)E_2&+&\omega_{13}(V)E_3,\\
\nabla_VE_2&=&-\omega_{12}(V)E_1& & &+&\omega_{23}(V)E_3,\\
\nabla_VE_3&=&-\omega_{13}(V)E_1&-&\omega_{23}(V)E_2.
\end{array}
$$
The connection equations are in fact a generalization of the Frenet-Serret formulas.

Let $Y$ be a vector field defined on a region containing a curve $\alpha(t)$. Then $Y_\alpha(t):=Y(\alpha(t))$ defines a vector field on the curve $\alpha(t)$, and one can easily see that
$$\nabla_{\dot\alpha(t)}Y=\frac{d}{dt}Y_\alpha(t).$$
Let $\alpha(t)$ be a curve with unit speed. Let $E_1=T$, $E_2=N$, $E_3=B$. Then
\begin{align*}
\omega_{12}&=\nabla_{\dot\alpha(t)}E_1\cdot E_2=\dot T\cdot N=(\kappa N)\cdot N=\kappa,\\
\omega_{13}&=\nabla_{\dot\alpha(t)}E_1\cdot E_3=\dot T\cdot B=0,\\
\omega_{23}&=\nabla_{\dot\alpha(t)}E_2\cdot E_3=\dot N\cdot B=(-\kappa T+\tau B)\cdot B=\tau.
\end{align*}
The connection equations (4) are then nothing but the Frenet-Serret formulas
$$\begin{array}{ccccccc}
\dot T&=&&&\kappa N&&\\
\dot N&=&-\kappa T& & &+&\tau B\\
\dot B&=&&-&\tau N.
\end{array}
$$
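One can confirm this on a concrete curve. A sympy sketch for the unit-speed helix $\alpha(t)=(a\cos(t/c), a\sin(t/c), bt/c)$, $c=\sqrt{a^2+b^2}$, here with $a=3$, $b=4$ (so $\kappa=a/c^2=3/25$ and $\tau=b/c^2=4/25$):

```python
# Sympy check on a unit-speed helix (a concrete example, a = 3, b = 4, c = 5)
# that omega_12 = kappa, omega_13 = 0, omega_23 = tau as derived above.
import sympy as sp

t = sp.symbols('t', real=True)
a, b, c = 3, 4, 5                    # c = sqrt(a^2 + b^2)
alpha = sp.Matrix([a*sp.cos(t/c), a*sp.sin(t/c), sp.Rational(b, c)*t])

T = alpha.diff(t)                    # unit tangent (the helix is unit speed)
Tp = T.diff(t)
kappa = sp.sqrt(sp.simplify(Tp.dot(Tp)))      # |T'| = kappa = a/c^2
N = (Tp / kappa).applyfunc(sp.simplify)
B = T.cross(N)

omega12 = sp.simplify(Tp.dot(N))     # omega_12(alpha') = T' . N = kappa
omega13 = sp.simplify(Tp.dot(B))     # T' . B = 0
omega23 = sp.simplify(N.diff(t).dot(B))       # N' . B = tau

assert kappa == sp.Rational(3, 25)
assert omega12 == sp.Rational(3, 25) and omega13 == 0
assert omega23 == sp.Rational(4, 25)
```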

The frame $E_1,E_2,E_3$ can be written in terms of the natural frame $U_1,U_2,U_3$ as
\begin{align*}
E_1&=a_{11}U_1+a_{12}U_2+a_{13}U_3,\\
E_2&=a_{21}U_1+a_{22}U_2+a_{23}U_3,\\
E_3&=a_{31}U_1+a_{32}U_2+a_{33}U_3.
\end{align*}
Each real-valued function $a_{ij}:\mathbb{E}^3\longrightarrow\mathbb{R}$ is uniquely determined by $a_{ij}=E_i\cdot U_j$. The matrix $A=(a_{ij})$ is called the attitude matrix (also called rotation matrix or orientation matrix) of the frame field $E_1,E_2,E_3$. One can clearly see that the attitude matrix $A$ is an orthogonal matrix. In the above remark, I mentioned that the set of all $3\times 3$ skew symmetric matrices is the Lie algebra $\mathfrak{o}(3)$. The Lie algebra $\mathfrak{g}$ of a Lie group $G$ is defined to be the tangent space $T_e G$ to $G$ at the identity element $e$. (A Lie group is a differentiable manifold, so it makes sense to talk about tangent spaces to $G$.)

Let us define a curve $\gamma: \mathbb{R}\longrightarrow\mathrm{O}(3)$ by
$$\gamma(t)=A(t)\cdot{}^tA(0).$$
Then $\gamma(0)=I$.
Hence $\dot{\gamma}(0)=\frac{dA(t)}{dt}|_{t=0}\cdot{}^tA(0)$ is a tangent vector to $\mathrm{O}(3)$ at the identity matrix $I$. That is, $\dot{\gamma}(0)\in\mathfrak{o}(3)$. Hence one can easily expect that the following theorem holds.

Theorem. If $A=(a_{ij})$ is the attitude matrix and $\omega=(\omega_{ij})$ the connection 1-form of a frame field $E_1, E_2, E_3$, then
$$\omega=dA\cdot{}^tA\ \ \ \ \ \mbox{(5)}$$
or equivalently
$$\omega_{ij}=\sum_k da_{ik} \cdot a_{jk}\ \mbox{for}\ i,j=1,2,3.$$

Proof. For each $v\in T_p\mathbb{E}^3$,
$$\omega_{ij}(v)=\nabla_vE_i\cdot E_j(p).$$
In terms of the natural field $U_i$, $i=1,2,3$,
$$E_i=\sum_ka_{ik}U_k,\ i=1,2,3.$$
So,
\begin{align*}
\nabla_vE_i&=\sum_k v[a_{ik}]U_k(p)\\
&=\sum_k da_{ik}(v)\, U_k(p).
\end{align*}
Hence,
$$\omega_{ij}=\sum_k da_{ik}a_{jk},$$
i.e.
$$\omega=dA\cdot{}^tA.$$

Remark. In general, if $G$ is a Lie group then its Lie algebra $\mathfrak{g}$ is given by the set of differential $1$-forms
$$\mathfrak{g}=\{g^{-1}dg:\ g\in G\}=\{(dg^{-1})g:\ g\in G\}.$$

Example. Let us compute the connection forms of the cylindrical frame field. The attitude matrix is
$$A=\begin{pmatrix}
\cos\theta & \sin\theta & 0\\
-\sin\theta & \cos\theta & 0\\
0 & 0 & 1
\end{pmatrix}.$$ Thus
$$dA=\begin{pmatrix}
-\sin\theta d\theta & \cos\theta d\theta & 0\\
-\cos\theta d\theta & -\sin\theta d\theta & 0\\
0 & 0 & 0
\end{pmatrix}.$$
Hence,
\begin{align*}
\omega&=dA\cdot{}^tA\\
&=\begin{pmatrix}
-\sin\theta d\theta & \cos\theta d\theta & 0\\
-\cos\theta d\theta & -\sin\theta d\theta & 0\\
0 & 0 & 0
\end{pmatrix}\begin{pmatrix}
\cos\theta & -\sin\theta & 0\\
\sin\theta & \cos\theta & 0\\
0 & 0 & 1\end{pmatrix}\\
&=\begin{pmatrix}
0 & d\theta & 0\\
-d\theta & 0 & 0\\
0 & 0 & 0
\end{pmatrix}.
\end{align*}
The connection equations of the cylindrical frame field are then
\begin{align*}
\nabla_VE_1&=d\theta(V)E_2=V[\theta]E_2,\\
\nabla_VE_2&=-d\theta(V)E_1=-V[\theta]E_1,\\
\nabla_VE_3&=0
\end{align*}
for all vector fields $V$. As expected the vector field $E_3$ is parallel.
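The computation above is also easy to verify with sympy: $dA$ has only a $d\theta$ part, so $\omega$ is the single coefficient matrix $(dA/d\theta)\cdot{}^tA$:

```python
# Sympy check of the cylindrical computation above: omega = dA . tA reduces to
# the single connection form omega_12 = dtheta (coefficient matrix of dtheta).
import sympy as sp

r, theta, z = sp.symbols('r theta z')
A = sp.Matrix([[sp.cos(theta),  sp.sin(theta), 0],
               [-sp.sin(theta), sp.cos(theta), 0],
               [0, 0, 1]])

# dA has only a dtheta part, so omega = (dA/dtheta) . tA is the dtheta coefficient
omega_theta = (A.diff(theta) * A.T).applyfunc(sp.simplify)
assert omega_theta == sp.Matrix([[0, 1, 0], [-1, 0, 0], [0, 0, 0]])
assert omega_theta + omega_theta.T == sp.zeros(3, 3)   # skew symmetric, as expected
print("omega_12 = dtheta, omega_13 = omega_23 = 0")
```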