Category Archives: Quantum Computing

No $\sqrt{\mathrm{NOT}}$ in PT-Symmetric Quantum Mechanics

In PT-symmetric quantum mechanics, the inner product is given by
\begin{equation}
\label{eq:pt-product}
\langle v_1|P|v_2\rangle
\end{equation}
where $P=\begin{bmatrix}
0 & 1\\
1 & 0
\end{bmatrix}$.
A PT-symmetric operator $H$ is a $2\times 2$ symmetric matrix that satisfies
$$\langle Hv_1|P|v_2\rangle=\langle v_1|P|Hv_2\rangle$$
which is equivalent to
$$\bar HP=PH$$
So, PT-symmetric operators are self-adjoint operators with respect to the inner product \eqref{eq:pt-product}. PT-symmetric operators take the form
$$\begin{bmatrix}
a & b\\
b & \bar a
\end{bmatrix}$$
where $a$ is complex and $b$ is real.
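One can check symbolically that matrices of this form satisfy $\bar HP=PH$; here is a minimal sketch using SymPy (the variable names are mine):

```python
from sympy import Matrix, symbols, conjugate

a = symbols('a')                 # a general complex entry
b = symbols('b', real=True)      # the off-diagonal entry is real

H = Matrix([[a, b], [b, conjugate(a)]])
P = Matrix([[0, 1], [1, 0]])

# PT-symmetry condition: conjugate(H) * P - P * H should vanish
print(H.conjugate() * P - P * H)   # Matrix([[0, 0], [0, 0]])
```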
In PT-symmetric quantum mechanics, the $\mathrm{NOT}$ gate is given by
$$\mathrm{NOT}=\begin{bmatrix}
1 & 0\\
0 &-1
\end{bmatrix}$$
The reason for this is that in PT-symmetric quantum mechanics the qubits are
$$|0\rangle=\frac{1}{\sqrt{2}}\begin{bmatrix}
1\\
1
\end{bmatrix},\ |1\rangle=\frac{1}{\sqrt{2}}\begin{bmatrix}
1\\
-1
\end{bmatrix}$$
The eigenvectors of $\mathrm{NOT}$ corresponding to the eigenvalues $\pm 1$ are, respectively, $\begin{bmatrix}
1\\
0
\end{bmatrix}$ and $\begin{bmatrix}
0\\
1
\end{bmatrix}$ and their norms are zero with respect to the inner product \eqref{eq:pt-product}. This means that the spectral decomposition of $\mathrm{NOT}$ in terms of its eigenvectors does not exist in PT-symmetric quantum mechanics. This also hints that $\sqrt{\mathrm{NOT}}$ likely does not exist. In PT-symmetric quantum mechanics, an operator $U$ is called $P$-unitary if $U^\dagger PU=P$ and $P$-antiunitary if $U^\dagger PU=-P$.
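These facts are easy to verify numerically. Here is a minimal sketch using NumPy (the array names are mine):

```python
import numpy as np

P   = np.array([[0, 1], [1, 0]])
NOT = np.array([[1, 0], [0, -1]])   # the PT-symmetric NOT gate above

# Eigenvectors of NOT for eigenvalues +1 and -1
e1 = np.array([1, 0])
e2 = np.array([0, 1])

# Both have zero norm with respect to <v|P|v>
print(e1 @ P @ e1, e2 @ P @ e2)                      # 0 0

# NOT is P-antiunitary: NOT^dagger P NOT = -P
print(np.array_equal(NOT.conj().T @ P @ NOT, -P))    # True
```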

For the same physical reason as in standard quantum mechanics, gates in PT-symmetric quantum mechanics are required to be $P$-unitary or $P$-antiunitary. $\mathrm{NOT}$ is $P$-antiunitary. From this it follows that $\sqrt{\mathrm{NOT}}$ does not exist: if $\sqrt{\mathrm{NOT}}$ existed, it would be either $P$-unitary or $P$-antiunitary. If $\sqrt{\mathrm{NOT}}$ were $P$-unitary, then
\begin{align*} \mathrm{NOT}^\dagger P\mathrm{NOT}&=\sqrt{\mathrm{NOT}}^\dagger(\sqrt{\mathrm{NOT}}^\dagger P\sqrt{\mathrm{NOT}})\sqrt{\mathrm{NOT}}\\ &=\sqrt{\mathrm{NOT}}^\dagger P\sqrt{\mathrm{NOT}}\\ &=P \end{align*}
This contradicts the fact that $\mathrm{NOT}$ is $P$-antiunitary. If $\sqrt{\mathrm{NOT}}$ were $P$-antiunitary, then
\begin{align*} \mathrm{NOT}^\dagger P\mathrm{NOT}&=\sqrt{\mathrm{NOT}}^\dagger(\sqrt{\mathrm{NOT}}^\dagger P\sqrt{\mathrm{NOT}})\sqrt{\mathrm{NOT}}\\ &=-\sqrt{\mathrm{NOT}}^\dagger P\sqrt{\mathrm{NOT}}\\ &=P \end{align*}
Again, this is a contradiction. Therefore, $\sqrt{\mathrm{NOT}}$ does not exist. As seen in the post Square Roots of Operators below, standard Hermitian quantum mechanics does admit a $\sqrt{\mathrm{NOT}}$ gate. This implies that standard Hermitian quantum mechanics and PT-symmetric quantum mechanics cannot be consistent with each other. While no one has yet come up with a physical implementation of the $\sqrt{\mathrm{NOT}}$ gate, it is interesting to see that the physical viability of a quantum theory hinges on the physical realizability of a quantum gate.

Square Roots of Operators

Mathematics is full of weird stuff. One example is the square root of an operator. So far, I have seen only two books that discuss square roots of operators. They are listed in the references below.

Definition. An operator $R$ is called a square root of an operator $T$ if $R^2=T$.

Example. In quantum computing, the $\sqrt{\mathrm{NOT}}$ gate is given by
$$\sqrt{\mathrm{NOT}}=\begin{bmatrix}
\frac{1+i}{2} & \frac{1-i}{2}\\
\frac{1-i}{2} & \frac{1+i}{2}
\end{bmatrix}$$
and
$$\sqrt{\mathrm{NOT}}\cdot\sqrt{\mathrm{NOT}}=\mathrm{NOT}=\begin{bmatrix}
0 & 1\\
1 & 0
\end{bmatrix}$$
As required by quantum mechanics, quantum gates are unitary matrices, and one can check that $\sqrt{\mathrm{NOT}}$ is indeed unitary. There is no counterpart of the $\sqrt{\mathrm{NOT}}$ gate in classical computing, so the $\sqrt{\mathrm{NOT}}$ gate is a truly quantum gate. As far as I know, no one has come up with a physical implementation of the $\sqrt{\mathrm{NOT}}$ gate yet.
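This multiplication, and the unitarity of $\sqrt{\mathrm{NOT}}$, can be checked numerically; a minimal sketch using NumPy:

```python
import numpy as np

sqrt_not = 0.5 * np.array([[1 + 1j, 1 - 1j],
                           [1 - 1j, 1 + 1j]])
NOT = np.array([[0, 1], [1, 0]])

print(np.allclose(sqrt_not @ sqrt_not, NOT))                  # True
print(np.allclose(sqrt_not @ sqrt_not.conj().T, np.eye(2)))   # True: sqrt(NOT) is unitary
```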

Example. An operator does not necessarily have a square root. For example, define $T:\mathbb{C}^3\longrightarrow\mathbb{C}^3$ by
$$T(z_1,z_2,z_3)=(z_2,z_3,0)$$
Then one can easily show that $T$ is linear. Suppose that $T$ has a square root $R$. Let $\begin{bmatrix}
r & s & t\\
u & v & w\\
x & y & z
\end{bmatrix}$ be the matrix associated with $R$. Since the matrix associated with $T$ is $\begin{bmatrix}
0 & 1 & 0\\
0 & 0 & 1\\
0 & 0 & 0
\end{bmatrix}$, we have the equation
$$\begin{bmatrix}
r & s & t\\
u & v & w\\
x & y & z
\end{bmatrix}\cdot\begin{bmatrix}
r & s & t\\
u & v & w\\
x & y & z
\end{bmatrix}=\begin{bmatrix}
0 & 1 & 0\\
0 & 0 & 1\\
0 & 0 & 0
\end{bmatrix}$$
This is a system of 9 scalar equations in the 9 unknowns. One can attempt to solve it using a CAS (computer algebra system), for example Maple, and see that the system has no solution, i.e. $R$ does not exist. Alternatively, note that $T^3=0$, so $R^6=(R^2)^3=T^3=0$; hence $R$ is nilpotent on a 3-dimensional space, which forces $R^3=0$ and therefore $T^2=R^4=0$, contradicting $T^2(z_1,z_2,z_3)=(z_3,0,0)\neq 0$.
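For the CAS route, here is a minimal sketch using SymPy instead of Maple (it may take a moment to run); it reports an empty solution set:

```python
from sympy import Matrix, symbols, solve

r, s, t, u, v, w, x, y, z = symbols('r s t u v w x y z')
R = Matrix([[r, s, t], [u, v, w], [x, y, z]])
T = Matrix([[0, 1, 0], [0, 0, 1], [0, 0, 0]])

# The nine scalar equations coming from R*R = T
equations = list(R * R - T)
print(solve(equations, [r, s, t, u, v, w, x, y, z], dict=True))   # [] -- no solution
```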

So, what kind of operators have square roots?

Theorem. If $N$ is nilpotent, then $I+N$ has a square root.

Proof. Let $N$ be nilpotent. Then $N^m=0$ for some positive integer $m$. Consider the Taylor series for $\sqrt{1+x}$:
$$\sqrt{1+x}=1+a_1x+a_2x^2+a_3x^3+\cdots$$
Using the nilpotency of $N$, we may guess that $\sqrt{I+N}$ takes the form
$$I+a_1N+a_2N^2+\cdots+a_{m-1}N^{m-1}$$
Now,
\begin{align*} (I+a_1N+a_2N^2+\cdots+a_{m-1}N^{m-1})^2=I&+2a_1N+(2a_2+a_1^2)N^2\\&+(2a_3+2a_1a_2)N^3+\cdots\\ &+(2a_{m-1}+\cdots)N^{m-1}=I+N \end{align*}
and by comparing the coefficients
$2a_1=1$, so $a_1=\frac{1}{2}$; $2a_2+a_1^2=0$, so $a_2=-\frac{1}{8}$; $2a_3+2a_1a_2=0$, so $a_3=\frac{1}{16}$; and so forth.
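For instance, take the $3\times 3$ nilpotent matrix from the example above, for which $N^3=0$; the series then terminates after the $N^2$ term, and one can check numerically that the truncated series squares to $I+N$:

```python
import numpy as np

# A nilpotent matrix with N^3 = 0 (the matrix of T from the earlier example)
N = np.array([[0., 1., 0.],
              [0., 0., 1.],
              [0., 0., 0.]])
I = np.eye(3)

# Truncated series: sqrt(I + N) = I + N/2 - N^2/8, since all higher powers of N vanish
R = I + N / 2 - (N @ N) / 8
print(np.allclose(R @ R, I + N))   # True
```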

Theorem. Suppose that $T: V\longrightarrow V$ is invertible. Then $T$ has a square root.

Proof. Let $\lambda_1,\lambda_2,\cdots,\lambda_m$ be the distinct eigenvalues of $T$. Then for each $j$, there exists a nilpotent operator $N_j: G(\lambda_j,T)\longrightarrow G(\lambda_j,T)$ such that $T|_{G(\lambda_j,T)}=\lambda_jI+N_j$. (See [1], p. 252, Theorem 8.21 for a proof.) Here, $G(\lambda_j,T)$ is the generalized eigenspace of $T$ corresponding to the eigenvalue $\lambda_j$. Since $T$ is invertible, no $\lambda_j$ is equal to 0, so we can write $$T|_{G(\lambda_j,T)}=\lambda_j\left(I+\frac{N_j}{\lambda_j}\right)$$
Since $\frac{N_j}{\lambda_j}$ is nilpotent, $I+\frac{N_j}{\lambda_j}$ has a square root. Let $R_j$ be $\sqrt{\lambda_j}$ times the square root of $I+\frac{N_j}{\lambda_j}$. Any vector $v\in V$ can be written uniquely in the form
$$v=u_1+u_2+\cdots+u_m,$$
where each $u_j$ is in $G(\lambda_j,T)$. Using this decomposition, define an operator $R:V\longrightarrow V$ by
$$Rv=R_1u_1+R_2 u_2+\cdots+R_m u_m$$
Then $R^2u_j=R_j^2 u_j=T|_{G(\lambda_j,T)}u_j$ and so \begin{align*} R^2v&=R_1^2u_1+R_2^2u_2+\cdots+R_m^2u_m\\ &= T|_{G(\lambda_1,T)}u_1+T|_{G(\lambda_2,T)}u_2+\cdots+T|_{G(\lambda_m,T)}u_m=Tv
\end{align*}
Therefore, $R$ is a square root of $T$.

Example. The $\mathrm{NOT}$ gate is invertible; it is its own inverse. By the theorem above, it therefore has a square root.

The proof of the above theorem suggests that an operator $T$ has a square root if there is a spectral decomposition of $T$ with respect to the standard inner product of $\mathbb{C}^n$. A normal operator is such an operator. Recall that an operator $T: V\longrightarrow V$ is normal if $T^\ast T=TT^\ast$. The following theorem guarantees the existence of such a spectral decomposition of a normal operator.

Theorem. A matrix $T$ is normal if and only if there exist a diagonal matrix $\Lambda$ and a unitary matrix $U$ such that $T=U\Lambda U^\ast$.

Here, the diagonal entries of $\Lambda$ are the eigenvalues of $T$ and the columns of $U$ are the eigenvectors of $T$. The matching eigenvalues in $\Lambda$ come in the same order as the eigenvectors are ordered as columns of $U$.

Let $T: V\longrightarrow V$ be a normal operator and write it as its spectral decomposition, using Dirac’s braket notation,
$$T=\sum_j\lambda_j|v_j\rangle\langle v_j|$$
Define
$$f(T)=\sum_j f(\lambda_j)|v_j\rangle\langle v_j|$$
Using this definition, one can define, for example, the exponential, square root, and logarithmic operator of $T$.
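Here is a minimal numerical sketch of this definition (the helper name `apply_function` is mine), assuming NumPy and a normal matrix with distinct eigenvalues:

```python
import numpy as np

def apply_function(T, f):
    """Compute f(T) = sum_j f(lambda_j) |v_j><v_j| for a normal matrix T."""
    eigvals, eigvecs = np.linalg.eig(np.asarray(T, dtype=complex))
    return sum(f(lam) * np.outer(v, v.conj()) for lam, v in zip(eigvals, eigvecs.T))

X = np.array([[0, 1], [1, 0]])
print(apply_function(X, np.sqrt))   # should reproduce the sqrt(NOT) matrix from the example below
print(apply_function(X, np.exp))    # the matrix exponential of X
```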

Example. Let $T=\begin{bmatrix}
0 & 1\\
1 & 0
\end{bmatrix}$. Find the square root of $T$.

Solution. $T$ is Hermitian (it is real and symmetric), so it is normal. The spectral decomposition of $T$ is given by
$$T=|v_1\rangle\langle v_1|-|v_2\rangle\langle v_2|,$$
where
$$|v_1\rangle=\frac{1}{\sqrt{2}}\begin{bmatrix}
1\\
1
\end{bmatrix}\ \mathrm{and}\ |v_2\rangle=\frac{1}{\sqrt{2}}\begin{bmatrix}
1\\
-1
\end{bmatrix}$$
Now,
\begin{align*} \sqrt{T}&=|v_1\rangle\langle v_1|+i|v_2\rangle\langle v_2|\\ &=\frac{1}{2}\begin{bmatrix} 1 & 1\\ 1 & 1 \end{bmatrix}+\frac{i}{2}\begin{bmatrix} 1 & -1\\ -1 & 1 \end{bmatrix}\\ &=\begin{bmatrix} \frac{1+i}{2} & \frac{1-i}{2}\\ \frac{1-i}{2} & \frac{1+i}{2} \end{bmatrix} \end{align*}
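One can confirm this spectral-decomposition computation numerically; a minimal sketch using NumPy:

```python
import numpy as np

v1 = np.array([1,  1]) / np.sqrt(2)
v2 = np.array([1, -1]) / np.sqrt(2)

sqrtT = np.outer(v1, v1) + 1j * np.outer(v2, v2)
T = np.array([[0, 1], [1, 0]])

print(sqrtT)                          # [[0.5+0.5j, 0.5-0.5j], [0.5-0.5j, 0.5+0.5j]]
print(np.allclose(sqrtT @ sqrtT, T))  # True
```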

References:

  1. Sheldon Axler, Linear Algebra Done Right, Third Edition, Springer, 2015
  2. Michael A. Nielsen and Isaac L. Chuang, Quantum Computation and Quantum Information, Cambridge University Press, 2000

Bell’s Theorem

Suppose that Boris and Natasha are in different locations far away from each other. Their mutual friend Victor prepares a pair of particles and sends one each to Boris and Natasha. Boris chooses to perform one of two possible measurements, say $A_0$ and $A_1$, associated with physical properties $P_{A_0}$ and $P_{A_1}$ of the particle he received. Each of $A_0$ and $A_1$ has $+1$ or $-1$ as its measurement outcome. When Natasha receives her particle, she likewise chooses to perform one of two possible measurements, $B_0$ or $B_1$, each of which also has outcome $+1$ or $-1$. Let us consider the following quantity built from the measurements $A_0$, $A_1$, $B_0$, $B_1$:
$$A_0B_0+A_0B_1+A_1B_0-A_1B_1=(A_0+A_1)B_0+(A_0-A_1)B_1$$
Since $A_0=\pm 1$ and $A_1=\pm 1$, either one of $A_0+A_1$ and $A_0-A_1$ is zero and the other is $\pm 2$. If the experiment is repeated over many trials with Victor preparing new pairs of particles, the expected value of all the outcomes satisfies the inequality
\begin{equation}
\label{eq:bell}
\langle A_0B_0+A_0B_1+A_1B_0-A_1B_1\rangle\leq 2
\end{equation}
Proof of \eqref{eq:bell}:
\begin{align*}\langle A_0B_0+A_0B_1+A_1B_0-A_1B_1\rangle&=\sum_{A_0,A_1,B_0,B_1}p(A_0,A_1,B_0,B_1)(A_0B_0+A_0B_1+A_1B_0-A_1B_1)\\&\leq \sum_{A_0,A_1,B_0,B_1}2p(A_0,A_1,B_0,B_1)\\&=2 \end{align*}
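The bound can also be checked by brute force: enumerating all $2^4$ deterministic assignments of $\pm 1$ values shows that the quantity never exceeds $2$. A minimal sketch in Python:

```python
from itertools import product

values = [
    a0 * b0 + a0 * b1 + a1 * b0 - a1 * b1
    for a0, a1, b0, b1 in product([1, -1], repeat=4)
]
print(max(values))   # 2
```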
The inequality \eqref{eq:bell} is a variant of the Bell inequality called the CHSH inequality. CHSH stands for John Clauser, Michael Horne, Abner Shimony and Richard Holt. The derivation of the CHSH inequality \eqref{eq:bell} depends on two assumptions:

  1. The physical properties $P_{A_0}$, $P_{A_1}$, $P_{B_0}$, $P_{B_1}$ have definite values $A_0$, $A_1$, $B_0$, $B_1$ which exist independently of observation or measurement. This is called the assumption of realism.
  2. Boris performing his measurement does not influence the result of Natasha’s measurement. This is called the assumption of locality.

These two assumptions together are known as the assumptions of local realism. Surprisingly this intuitively innocuous inequality can be violated in quantum mechanics. Here is an example. Let $|0\rangle=\begin{pmatrix}
1\\
0
\end{pmatrix}$ and $|1\rangle=\begin{pmatrix}
0\\
1
\end{pmatrix}$. Then $|0\rangle$ and $|1\rangle$ are the eigenstates of
$$\sigma_z=\begin{pmatrix}
1 & 0\\
0 & -1
\end{pmatrix}$$
Victor prepares a quantum system of two qubits in the state
\begin{align*}|\psi\rangle&=\frac{|0\rangle\otimes |1\rangle-|1\rangle\otimes |0\rangle}{\sqrt{2}}\\ &=\frac{|01\rangle - |10\rangle}{\sqrt{2}} \end{align*}
He passes the first qubit to Boris, and the second qubit to Natasha. Boris measures either of the observables
$$A_0=\sigma_z,\ A_1=\sigma_x=\begin{pmatrix}
0 & 1\\
1 & 0
\end{pmatrix}$$
and Natasha measures either of the observables
$$B_0=-\frac{\sigma_x+\sigma_z}{\sqrt{2}},\ B_1=\frac{\sigma_x-\sigma_z}{\sqrt{2}}$$
Since the system is in the state $|\psi\rangle$, the average value of $A_0\otimes B_0$ is
\begin{align*}\langle A_0\otimes B_0\rangle&=\langle\psi|A_0\otimes B_0|\psi\rangle\\ &=\frac{1}{\sqrt{2}} \end{align*}
Similarly, the average values of the other observables are given by
\begin{align*}\langle A_0\otimes B_1\rangle&=\frac{1}{\sqrt{2}},\ \langle A_1\otimes B_0\rangle=\frac{1}{\sqrt{2}},\ \mbox{and}\\ \langle A_1\otimes B_1\rangle&=-\frac{1}{\sqrt{2}} \end{align*}
Since the expected value is linear, we have
\begin{equation}
\begin{aligned}
\langle A_0B_0+A_0B_1+A_1B_0-A_1B_1\rangle&=\langle A_0B_0\rangle+\langle A_0B_1\rangle+\langle A_1B_0\rangle-\langle A_1B_1\rangle\\&=2\sqrt{2}\end{aligned}\label{eq:bell2}
\end{equation}
This means that the Bell inequality \eqref{eq:bell} is violated. Physicists have confirmed the prediction in \eqref{eq:bell2} by experiments using photons. It turns out that Mother Nature does not obey the Bell inequality. What this means is that one or both of the two assumptions used in the derivation of the Bell inequality \eqref{eq:bell} must be incorrect. There is no consensus among physicists as to which of the two assumptions needs to be dropped. An important lesson we learn from the Bell inequality is that Mother Nature (quantum mechanics) defies our intuitive common sense. This also raises a troubling question: if we cannot rely on our intuition to understand how the universe works, what else can we rely on? One thing is certain. The world is not locally realistic.
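As a sanity check, the quantum value $2\sqrt{2}$ in \eqref{eq:bell2} can be reproduced numerically for the singlet state and the observables above; a minimal sketch using NumPy:

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]])
sz = np.array([[1, 0], [0, -1]])

# Singlet state (|01> - |10>)/sqrt(2)
psi = (np.kron([1, 0], [0, 1]) - np.kron([0, 1], [1, 0])) / np.sqrt(2)

A0, A1 = sz, sx
B0 = -(sx + sz) / np.sqrt(2)
B1 = (sx - sz) / np.sqrt(2)

def expval(A, B):
    """<psi| A tensor B |psi>"""
    return psi @ np.kron(A, B) @ psi

chsh = expval(A0, B0) + expval(A0, B1) + expval(A1, B0) - expval(A1, B1)
print(chsh, 2 * np.sqrt(2))   # both approximately 2.8284
```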

References:

  1. Michael A. Nielsen and Isaac L. Chuang, Quantum Computation and Quantum Information, Cambridge University Press, 2000

Quantum Gates: Single Qubit Gates

A classical computer is built from electrical circuits that consist of logic gates and wires. The logic gates perform the intended logical operations on information and the wires pass information between the logic gates. Similarly, a quantum computer would be built from quantum circuits that consist of quantum gates and wires. However, when it comes down to comparing how operations are performed at the elementary gate level, there is no similarity between classical logic gates and quantum gates. Unlike classical logic gates, quantum gates operate not according to logical rules but according to physical (quantum mechanical) laws. A single qubit gate acts not only on the qubits $|0\rangle=\begin{bmatrix}1\\0\end{bmatrix}$ and $|1\rangle=\begin{bmatrix}0\\1\end{bmatrix}$ but also on any superposition of $|0\rangle$ and $|1\rangle$, $\alpha|0\rangle+\beta|1\rangle$, where $\alpha$ and $\beta$ are complex numbers. For an obvious reason, single qubit gates are $2\times 2$ matrices. Furthermore, a quantum gate operation is performed through a unitary time evolution, and this means that quantum gates are required to be unitary matrices. Recall that a matrix $U$ with complex entries is unitary if $UU^\dagger=I$, where $\dagger$ denotes the conjugate transpose, i.e. $U^\dagger = \bar U^t$. If we write $U=\begin{bmatrix}a & b\\c & d\end{bmatrix}$, then $U$ is unitary if and only if $|a|^2+|b|^2=|c|^2+|d|^2=1$ and $a\bar c+b\bar d=0$. Any unitary matrix qualifies as a quantum gate, but not all unitary matrices are interesting or useful gates. For practical purposes, it is important to determine which gates, if any, are universal and which gates are physically implementable. The following are some examples of well-known single qubit gates.
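As an illustration of the unitarity condition above, here is a minimal NumPy sketch that checks it for the Hadamard gate introduced in item 3 of the list below (the helper `is_unitary` is mine):

```python
import numpy as np

def is_unitary(U):
    """Check that U U^dagger = I."""
    U = np.asarray(U, dtype=complex)
    return np.allclose(U @ U.conj().T, np.eye(U.shape[0]))

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # the Hadamard gate
a, b, c, d = H.ravel()

print(is_unitary(H))                                   # True
print(np.isclose(abs(a)**2 + abs(b)**2, 1),
      np.isclose(abs(c)**2 + abs(d)**2, 1),
      np.isclose(a * np.conj(c) + b * np.conj(d), 0))  # True True True
```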

  1. Pauli-X, Pauli-Y, Pauli-Z gates. A matrix $H$ is Hermitian if $H=H^\dagger$. Hermitian matrices play a crucial role in quantum mechanics as Hamiltonians are required to be Hermitian. The eigenvalues of a Hamiltonian are the energies of the corresponding quantum system, and of course they all have to be real. Hermitian matrices are guaranteed to have all real eigenvalues. A $2\times 2$ Hermitian matrix can be written in the form $H=\begin{bmatrix}x_0+x_3 & x_1-ix_2\\x_1+ix_2 & x_0-x_3\end{bmatrix}$ where the $x_i$ are all real. The matrices \begin{equation}\label{eq:pauli}\sigma_0=\begin{bmatrix}1 & 0\\0 & 1\end{bmatrix},\ \sigma_1=\begin{bmatrix}0 & 1\\1 & 0\end{bmatrix},\ \sigma_2=\begin{bmatrix}0 & -i\\i & 0\end{bmatrix},\ \sigma_3=\begin{bmatrix}1 & 0\\0 & -1\end{bmatrix}\end{equation} form a basis for the space of all $2\times 2$ Hermitian matrices. The $\sigma_i$ in \eqref{eq:pauli} are called the Pauli spin matrices in physics. $\sigma_0$ is just the identity matrix, but notice that $\sigma_1,\sigma_2,\sigma_3$ are also unitary matrices. They are called the Pauli-X, Pauli-Y, Pauli-Z gates, respectively. The Pauli-X gate is also called the NOT gate as it acts like the classical NOT gate: \begin{align*}\begin{bmatrix}0 & 1\\1 & 0\end{bmatrix}\begin{bmatrix}1\\0\end{bmatrix}&=\begin{bmatrix}0\\1\end{bmatrix}\\\begin{bmatrix}0 & 1\\1 & 0\end{bmatrix}\begin{bmatrix}0\\1\end{bmatrix}&=\begin{bmatrix}1\\0\end{bmatrix}\end{align*}
  2. Square root of NOT gate. The gate $$\sqrt{\mathrm{NOT}}=\frac{1}{2}\begin{bmatrix}1+i & 1-i\\1-i & 1+i\end{bmatrix}$$ is called square root of NOT gate because $$\sqrt{\mathrm{NOT}}\cdot\sqrt{\mathrm{NOT}}=\begin{bmatrix}0 & 1\\1 & 0\end{bmatrix}$$ $\sqrt{\mathrm{NOT}}$ is one of the gates that are truly quantum because there is no classical counterpart.
  3. Hadamard gate. $$H=\frac{1}{\sqrt{2}}\begin{bmatrix}1 & 1\\1 & -1\end{bmatrix}$$ is called the Hadamard gate. It turns $|0\rangle$ to $\frac{|0\rangle+|1\rangle}{\sqrt{2}}$ and $|1\rangle$ to $\frac{|0\rangle-|1\rangle}{\sqrt{2}}$.

References:

  1. Michael A. Nielsen and Isaac L. Chuang, Quantum Computation and Quantum Information, Cambridge University Press, 2000
  2. Colin P. Williams and Scott H. Clearwater, Explorations in Quantum Computing, Springer TELOS, 1998

The Tensor Product

Let $V$ and $W$ be two vector spaces of dimensions $m$ and $n$, respectively. The tensor product of $V$ and $W$ is a space $V\otimes W$ of dimension $mn$ together with a bilinear map $$\varphi: V\times W\longrightarrow V\otimes W;\ \varphi(v,w)=v\otimes w$$ which satisfies the following universal property: for any vector space $X$ and any bilinear map $\psi: V\times W\longrightarrow X$, there exists a unique linear map $\gamma : V\otimes W\longrightarrow X$ such that $\psi(v,w)=\gamma(v\otimes w)$ for all $v\in V$ and $w\in W$.

$$\begin{array}[c]{ccc}V\otimes W & & \\\uparrow\scriptstyle{\varphi} & \scriptstyle{\gamma}\searrow& \\V\times W & \stackrel{\psi}\rightarrow & X\end{array}\ \ \ \gamma\circ\varphi=\psi$$

Often, we use a more down-to-earth definition of the tensor product. Let $\{e_1,\cdots,e_m\}$ and $\{f_1,\cdots,f_n\}$ be bases of $V$ and $W$, respectively. The tensor product $V\otimes W$ is a vector space of dimension $mn$ spanned by the basis $\{e_i\otimes f_j: i=1,\cdots,m,\ j=1,\cdots,n\}$. Let $v\in V$ and $w\in W$. Then $$v=\sum_i v_ie_i\ \mbox{and}\ w=\sum_j w_jf_j$$ The tensor product of $v$ and $w$ is then given by $$v\otimes w=\sum_{i,j}v_iw_je_i\otimes f_j$$ It can be easily shown that this definition of the tensor product in terms of prescribed bases satisfies the universal property. Although this definition uses a choice of bases of $V$ and $W$, the tensor product $V\otimes W$ must not depend on a particular choice of bases, i.e. regardless of the choice of bases the resulting tensor product must be the same. This also can be shown easily using some basic properties from linear algebra. I will leave these as exercises for the reader.

The tensor product can be used to describe the state of a quantum memory register. A quantum memory register consists of many 2-state systems (Hilbert spaces of qubits). Let $|\psi^{(1)}\rangle$ and $|\psi^{(2)}\rangle$ be qubits associated with two different 2-state systems. In terms of the standard orthogonal basis $\begin{pmatrix}1\\0\end{pmatrix}$ and $\begin{pmatrix}0\\1\end{pmatrix}$ for each 2-state system, we have \begin{align*}|\psi^{(1)}\rangle&=\begin{pmatrix}\omega_0^{(1)}\\\omega_1^{(1)}\end{pmatrix}=\omega_0^{(1)}\begin{pmatrix}1\\0\end{pmatrix}+\omega_1^{(1)}\begin{pmatrix}0\\1\end{pmatrix}\\|\psi^{(2)}\rangle&=\begin{pmatrix}\omega_0^{(2)}\\\omega_1^{(2)}\end{pmatrix}=\omega_0^{(2)}\begin{pmatrix}1\\0\end{pmatrix}+\omega_1^{(2)}\begin{pmatrix}0\\1\end{pmatrix}\end{align*} Define $\otimes$ on the basis members as follows: \begin{align*}|00\rangle&=\begin{pmatrix}1\\0\end{pmatrix}\otimes\begin{pmatrix}1\\0\end{pmatrix}=\begin{pmatrix}1\\0\\0\\0\end{pmatrix},\ |01\rangle=\begin{pmatrix}1\\0\end{pmatrix}\otimes\begin{pmatrix}0\\1\end{pmatrix}=\begin{pmatrix}0\\1\\0\\0\end{pmatrix}\\|10\rangle&=\begin{pmatrix}0\\1\end{pmatrix}\otimes\begin{pmatrix}1\\0\end{pmatrix}=\begin{pmatrix}0\\0\\1\\0\end{pmatrix},\ |11\rangle=\begin{pmatrix}0\\1\end{pmatrix}\otimes\begin{pmatrix}0\\1\end{pmatrix}=\begin{pmatrix}0\\0\\0\\1\end{pmatrix}\end{align*} These four vectors form a basis for a 4-dimensional Hilbert space (a 2-qubit memory register). It follows that \begin{align*}|\psi^{(1)}\rangle\otimes|\psi^{(2)}\rangle&=\omega_0^{(1)}\omega_0^{(2)}|00\rangle+\omega_0^{(1)}\omega_1^{(2)}|01\rangle+\omega_1^{(1)}\omega_0^{(2)}|10\rangle+\omega_1^{(1)}\omega_1^{(2)}|11\rangle\\&=\begin{pmatrix}\omega_0^{(1)}\omega_0^{(2)}\\\omega_0^{(1)}\omega_1^{(2)}\\\omega_1^{(1)}\omega_0^{(2)}\\\omega_1^{(1)}\omega_1^{(2)}\end{pmatrix}\end{align*}Similarly, to describe the state of a 3-qubit memory register, one performs the tensor product $|\psi^{(1)}\rangle\otimes|\psi^{(2)}\rangle\otimes|\psi^{(3)}\rangle$.
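In NumPy, the tensor product of state vectors is implemented by `np.kron`; here is a minimal sketch (the sample amplitudes are chosen arbitrarily for illustration):

```python
import numpy as np

ket0 = np.array([1, 0])
ket1 = np.array([0, 1])

# Basis states of the 2-qubit register, e.g. |01> = |0> tensor |1>
print(np.kron(ket0, ket1))       # [0 1 0 0]

# Tensor product of two general qubit states
psi1 = np.array([0.6, 0.8])
psi2 = np.array([1, 1j]) / np.sqrt(2)
print(np.kron(psi1, psi2))       # entries w0(1)w0(2), w0(1)w1(2), w1(1)w0(2), w1(1)w1(2)
```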

Quantum memory registers can store an exponential amount of classical information in only a polynomial number of qubits using the quantum-mechanical principle of superposition. For example, consider two classical memory registers storing complementary sequences of bits $$\begin{array}{|c|c|c|c|c|c|c|}\hline1 & 0 & 1 & 1 & 0 & 0 &1\\\hline 0 & 1 & 0 & 0 & 1 & 1 & 0\\\hline\end{array}$$ A single quantum memory register can store both sequences simultaneously in an equally weighted superposition of the two states representing each 7-bit input $$\frac{1}{\sqrt{2}}(|1011001\rangle+|0100110\rangle)$$

A matrix can be considered as a vector. For example, a $2\times 2$ matrix $\begin{pmatrix}a & b\\c & d\end{pmatrix}$ can be identified with the vector $(a, b, c, d) \in \mathbb{R}^4$. Hence one can define the tensor product of two matrices, called the Kronecker product, in a similar manner to that of two vectors. For example, $$\begin{pmatrix}a_{11} & a_{12}\\a_{21} & a_{22}\end{pmatrix}\otimes\begin{pmatrix}b_{11} & b_{12}\\b_{21} & b_{22}\end{pmatrix}:=\begin{pmatrix}a_{11}\begin{pmatrix}b_{11} & b_{12}\\b_{21} & b_{22}\end{pmatrix} & a_{12}\begin{pmatrix}b_{11} & b_{12}\\b_{21} & b_{22}\end{pmatrix}\\a_{21}\begin{pmatrix}b_{11} & b_{12}\\b_{21} & b_{22}\end{pmatrix} & a_{22}\begin{pmatrix}b_{11} & b_{12}\\b_{21} & b_{22}\end{pmatrix}\end{pmatrix}\\=\begin{pmatrix}a_{11}b_{11} & a_{11}b_{12} & a_{12}b_{11} & a_{12}b_{12}\\a_{11}b_{21} & a_{11}b_{22} & a_{12}b_{21} & a_{12}b_{22}\\a_{21}b_{11} & a_{21}b_{12} & a_{22}b_{11} & a_{22}b_{12}\\a_{21}b_{21} & a_{21}b_{22} & a_{22}b_{21} & a_{22}b_{22}\end{pmatrix}$$
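The Kronecker product is also available in NumPy as `np.kron`, and one can check it against the block formula above; a minimal sketch:

```python
import numpy as np

A = np.array([[1, 2], [3, 4]])
B = np.array([[0, 5], [6, 7]])

# np.kron builds exactly the block matrix [[a11*B, a12*B], [a21*B, a22*B]]
blocks = np.block([[A[0, 0] * B, A[0, 1] * B],
                   [A[1, 0] * B, A[1, 1] * B]])
print(np.array_equal(np.kron(A, B), blocks))   # True
```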

References:

  1. A. Yu. Kitaev, A. H. Shen and M. N. Vyalyi, Classical and Quantum Computation, Graduate Studies in Mathematics Volume 47, American Mathematical Society, 2002
  2. Colin P. Williams and Scott H. Clearwater, Explorations in Quantum Computing, Springer TELOS, 1998