The Tensor Product

Let $V$ and $W$ be two vector spaces of dimensions $m$ and $n$, respectively. The tensor product of $V$ and $W$ is a space $V\otimes W$ of dimension $mn$ together with a bilinear map $$\varphi: V\times W\longrightarrow V\otimes W;\ \varphi(v,w)=v\otimes w$$ which satisfies the following universal property: for any vector space $X$ and any bilinear map $\psi: V\times W\longrightarrow X$, there exists a unique linear map $\gamma : V\otimes W\longrightarrow X$ such that $\psi(v,w)=\gamma(v\otimes w)$ for all $v\in V$ and $w\in W$.

$$\begin{array}[c]{ccc}V\otimes W & & \\\uparrow\scriptstyle{\varphi} & \scriptstyle{\gamma}\searrow& \\V\times W & \stackrel{\psi}\rightarrow & X\end{array}\ \ \ \gamma\circ\varphi=\psi$$

Often we use a more down-to-earth definition of the tensor product. Let $\{e_1,\cdots,e_m\}$ and $\{f_1,\cdots,f_n\}$ be bases of $V$ and $W$, respectively. The tensor product $V\otimes W$ is the vector space of dimension $mn$ spanned by the basis $\{e_i\otimes f_j: i=1,\cdots,m,\ j=1,\cdots,n\}$. Let $v\in V$ and $w\in W$. Then $$v=\sum_i v_ie_i\ \mbox{and}\ w=\sum_j w_jf_j$$ The tensor product of $v$ and $w$ is then given by $$v\otimes w=\sum_{i,j}v_iw_je_i\otimes f_j$$ It can easily be shown that this definition of the tensor product in terms of prescribed bases satisfies the universal property. Although this definition uses a choice of bases of $V$ and $W$, the tensor product $V\otimes W$ does not depend on that particular choice, i.e. regardless of the choice of bases the resulting tensor products are the same up to canonical isomorphism. This, too, can be shown using some basic facts from linear algebra. I will leave it as an exercise for the reader.

The tensor product can be used to describe the state of a quantum memory register. A quantum memory register consists of many 2-state systems (Hilbert spaces of qubits). Let $|\psi^{(1)}\rangle$ and $|\psi^{(2)}\rangle$ be qubits associated with two different 2-state systems. In terms of the standard orthogonal basis $\begin{pmatrix}1\\0\end{pmatrix}$ and $\begin{pmatrix}0\\1\end{pmatrix}$ for each 2-state system, we have \begin{align*}|\psi^{(1)}\rangle&=\begin{pmatrix}\omega_0^{(1)}\\\omega_1^{(1)}\end{pmatrix}=\omega_0^{(1)}\begin{pmatrix}1\\0\end{pmatrix}+\omega_1^{(1)}\begin{pmatrix}0\\1\end{pmatrix}\\|\psi^{(2)}\rangle&=\begin{pmatrix}\omega_0^{(2)}\\\omega_1^{(2)}\end{pmatrix}=\omega_0^{(2)}\begin{pmatrix}1\\0\end{pmatrix}+\omega_1^{(2)}\begin{pmatrix}0\\1\end{pmatrix}\end{align*} Define $\otimes$ on the basis members as follows: \begin{align*}|00\rangle&=\begin{pmatrix}1\\0\end{pmatrix}\otimes\begin{pmatrix}1\\0\end{pmatrix}=\begin{pmatrix}1\\0\\0\\0\end{pmatrix},\ |01\rangle=\begin{pmatrix}1\\0\end{pmatrix}\otimes\begin{pmatrix}0\\1\end{pmatrix}=\begin{pmatrix}0\\1\\0\\0\end{pmatrix}\\|10\rangle&=\begin{pmatrix}0\\1\end{pmatrix}\otimes\begin{pmatrix}1\\0\end{pmatrix}=\begin{pmatrix}0\\0\\1\\0\end{pmatrix},\ |11\rangle=\begin{pmatrix}0\\1\end{pmatrix}\otimes\begin{pmatrix}0\\1\end{pmatrix}=\begin{pmatrix}0\\0\\0\\1\end{pmatrix}\end{align*} These four vectors form a basis for a 4-dimensional Hilbert space (a 2-qubit memory register). 
It follows that \begin{align*}|\psi^{(1)}\rangle\otimes|\psi^{(2)}\rangle&=\omega_0^{(1)}\omega_0^{(2)}|00\rangle+\omega_0^{(1)}\omega_1^{(2)}|01\rangle+\omega_1^{(1)}\omega_0^{(2)}|10\rangle+\omega_1^{(1)}\omega_1^{(2)}|11\rangle\\&=\begin{pmatrix}\omega_0^{(1)}\omega_0^{(2)}\\\omega_0^{(1)}\omega_1^{(2)}\\\omega_1^{(1)}\omega_0^{(2)}\\\omega_1^{(1)}\omega_1^{(2)}\end{pmatrix}\end{align*}Similarly, to describe the state of a 3-qubit memory register, one performs the tensor product $|\psi^{(1)}\rangle\otimes|\psi^{(2)}\rangle\otimes|\psi^{(3)}\rangle$.
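If NumPy is available, the coordinate formula above is exactly what `np.kron` computes. Here is a minimal sanity check, with arbitrarily chosen example amplitudes:

```python
import numpy as np

# Two single-qubit states; the amplitudes are arbitrary example values
psi1 = np.array([0.6, 0.8])             # w0^(1), w1^(1)
psi2 = np.array([1, 1j]) / np.sqrt(2)   # w0^(2), w1^(2)

# np.kron implements the coordinate formula
# (w0^(1)w0^(2), w0^(1)w1^(2), w1^(1)w0^(2), w1^(1)w1^(2))
state = np.kron(psi1, psi2)

expected = np.array([psi1[0] * psi2[0], psi1[0] * psi2[1],
                     psi1[1] * psi2[0], psi1[1] * psi2[1]])
assert np.allclose(state, expected)

# The product state is again a unit vector when both factors are
assert np.isclose(np.linalg.norm(state), 1.0)
```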

Quantum memory registers can store an exponential amount of classical information in only a polynomial number of qubits, thanks to the quantum principle of superposition. For example, consider two classical memory registers storing complementary sequences of bits $$\begin{array}{|c|c|c|c|c|c|c|}\hline1 & 0 & 1 & 1 & 0 & 0 &1\\\hline 0 & 1 & 0 & 0 & 1 & 1 & 0\\\hline\end{array}$$ A single quantum memory register can store both sequences simultaneously in an equally weighted superposition of the two states representing each 7-bit input $$\frac{1}{\sqrt{2}}(|1011001\rangle+|0100110\rangle)$$
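As a sketch (the bit strings are the example above), one can build this 7-qubit state explicitly as an iterated Kronecker product of single-qubit basis vectors:

```python
import numpy as np

# Basis states of one qubit
ket = {'0': np.array([1.0, 0.0]), '1': np.array([0.0, 1.0])}

def basis_state(bits):
    """Build |b1 b2 ... bn> as an iterated Kronecker product."""
    v = ket[bits[0]]
    for b in bits[1:]:
        v = np.kron(v, ket[b])
    return v

# Equally weighted superposition of the two complementary 7-bit strings
psi = (basis_state('1011001') + basis_state('0100110')) / np.sqrt(2)

assert psi.shape == (2**7,)          # 7 qubits span a 128-dimensional space
assert np.isclose(np.linalg.norm(psi), 1.0)
```

Note that the register lives in a $2^7=128$-dimensional space, yet only 7 qubits are needed.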

A matrix can be considered as a vector. For example, a $2\times 2$ matrix $\begin{pmatrix}a & b\\c & d\end{pmatrix}$ can be identified with the vector $(a, b, c, d) \in \mathbb{R}^4$. Hence one can define the tensor product of two matrices in a similar manner to that of two vectors. For example, $$\begin{pmatrix}a_{11} & a_{12}\\a_{21} & a_{22}\end{pmatrix}\otimes\begin{pmatrix}b_{11} & b_{12}\\b_{21} & b_{22}\end{pmatrix}:=\begin{pmatrix}a_{11}b_{11} & a_{11}b_{12} & a_{11}b_{21} & a_{11}b_{22}\\a_{12}b_{11} & a_{12}b_{12} & a_{12}b_{21} & a_{12}b_{22}\\a_{21}b_{11} & a_{21}b_{12} & a_{21}b_{21} & a_{21}b_{22}\\a_{22}b_{11} & a_{22}b_{12} & a_{22}b_{21} & a_{22}b_{22}\end{pmatrix}$$
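For matrices, NumPy's `np.kron` arranges the same products $a_{ij}b_{kl}$ in the block form $(a_{ij}B)$, which differs from the row ordering above but is the arrangement compatible with the tensor product of vectors, in the sense that $(A\otimes B)(v\otimes w)=(Av)\otimes(Bw)$. A quick check with random matrices:

```python
import numpy as np

rng = np.random.default_rng(0)
A, B = rng.standard_normal((2, 2)), rng.standard_normal((2, 2))
v, w = rng.standard_normal(2), rng.standard_normal(2)

# np.kron(A, B) is the block matrix [[a11*B, a12*B], [a21*B, a22*B]]
AB = np.kron(A, B)

# Compatibility with the tensor product of vectors
assert np.allclose(AB @ np.kron(v, w), np.kron(A @ v, B @ w))

# Mixed-product property: (A⊗B)(C⊗D) = (AC)⊗(BD)
C, D = rng.standard_normal((2, 2)), rng.standard_normal((2, 2))
assert np.allclose(np.kron(A, B) @ np.kron(C, D), np.kron(A @ C, B @ D))
```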

References:

[1] A. Yu. Kitaev, A. H. Shen and M. N. Vyalyi, Classical and Quantum Computation, Graduate Studies in Mathematics Volume 47, American Mathematical Society, 2002

[2] Colin P. Williams and Scott H. Clearwater, Explorations in Quantum Computing, Springer TELOS, 1998

The Dirac Equation

The Schrödinger equation $$i\hbar\frac{\partial\psi}{\partial t}=\hat H\psi$$ is a non-relativistic approximation of what is supposed to be a more realistic, relativistic equation. The first place one would look for a relativistic generalization is the relativistic energy $$E=\sqrt{c^2p^2+m^2c^4}$$ Replacing $E$ and $p$ by the operators $i\hbar\frac{\partial}{\partial t}$ and $-i\hbar\nabla$, respectively, we obtain the square-root Klein-Gordon equation $$i\hbar\frac{\partial\psi(t,x)}{\partial t}=\sqrt{-c^2\hbar^2\nabla^2+m^2c^4}\,\psi(t,x)$$ This equation is, however, not a desirable one. Due to the radical on the right-hand side, it is impossible to include external electromagnetic fields in a relativistically invariant way.

P.A.M. Dirac considered a linearization of the relativistic energy by writing $$\label{eq:linenergy}E=c\sum_{i=1}^3\alpha_ip_i+\beta mc^2=c\alpha\cdot p+\beta mc^2$$ where $\alpha=(\alpha_1,\alpha_2,\alpha_3)$ and $\beta$ have to be determined by comparing it with the relativistic energy.

Squaring \eqref{eq:linenergy}, we have \begin{aligned}E^2=&c^2[\alpha_1^2p_1^2+\alpha_2^2p_2^2+\alpha_3^2p_3^2+(\alpha_1\alpha_2+\alpha_2\alpha_1)p_1p_2+\\&(\alpha_2\alpha_3+\alpha_3\alpha_2)p_2p_3+(\alpha_3\alpha_1+\alpha_1\alpha_3)p_3p_1]+\\&mc^3[(\alpha_1\beta+\beta\alpha_1)p_1+(\alpha_2\beta+\beta\alpha_2)p_2+(\alpha_3\beta+\beta\alpha_3)p_3]+\\&\beta^2m^2c^4\end{aligned}\label{eq:linenergy2} \eqref{eq:linenergy2} must coincide with $c^2p^2+m^2c^4$. For that to happen, we must require that \begin{align*}\alpha_1^2p_1^2+\alpha_2^2p_2^2+\alpha_3^2p_3^2&=p^2\\\alpha_i\alpha_j+\alpha_j\alpha_i&=0\ \mbox{for $i\ne j$}\\\alpha_i\beta+\beta\alpha_i&=0\\\beta^2m^2c^4&=m^2c^4\end{align*}If the $\alpha_i$’s and $\beta$ were numbers, we would have $\alpha_1=\alpha_2=\alpha_3=\beta=0$, which is not desirable. Since the $\alpha_i$’s and $\beta$ must anticommute, they cannot be ordinary numbers; we may assume instead that they are $n\times n$ matrices. Now the $\alpha_i$’s and $\beta$, as $n\times n$ matrices, are required to satisfy \begin{aligned}\alpha_i\alpha_j+\alpha_j\alpha_i&=2\delta_{ij}{\bf 1},\ i,j=1,2,3\\\alpha_i\beta+\beta\alpha_i&=0,\ i=1,2,3\\\beta^2&={\bf 1}\end{aligned}\label{eq:linenergy3}where ${\bf 1}$ denotes the $n\times n$ identity matrix. In order for the Hamiltonian to be Hermitian, the $\alpha_i$’s and $\beta$ are required to be Hermitian as well. From \eqref{eq:linenergy3}, $$\mathrm{tr}\,\alpha_i=\mathrm{tr}\,\beta^2\alpha_i=\mathrm{tr}\,\beta(\beta\alpha_i)=-\mathrm{tr}\,\beta\alpha_i\beta=-\mathrm{tr}\,\alpha_i$$ where the last equality uses the cyclic property of the trace. Thus, $\mathrm{tr}\,\alpha_i=0$. Since $\alpha_i^2={\bf 1}$, $\alpha_i$ has eigenvalues $\pm 1$, and since its trace vanishes, the eigenvalues $+1$ and $-1$ occur with equal multiplicity. Together, we see that $n$ has to be an even number. The smallest even number is $n=2$, but this can’t be right, as there are only three linearly independent anticommuting Hermitian $2\times 2$ matrices.
For example, the Pauli matrices $$\sigma_1=\begin{pmatrix}0 & 1\\1 & 0\end{pmatrix},\ \sigma_2=\begin{pmatrix}0 & -i\\i & 0\end{pmatrix},\ \sigma_3=\begin{pmatrix}1 & 0\\0 & -1\end{pmatrix}$$ together with ${\bf 1}$ form a basis for the space of $2\times 2$ Hermitian matrices. For $n=4$, if we choose $$\label{eq:diracmat}\beta=\begin{pmatrix}{\bf 1} & {\bf 0}\\{\bf 0} & -{\bf 1}\end{pmatrix},\ \alpha_i=\begin{pmatrix}{\bf 0} & \sigma_i\\\sigma_i & {\bf 0}\end{pmatrix},\ i=1,2,3$$ then \eqref{eq:linenergy3} is satisfied.
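These algebraic requirements are easy to verify numerically. The following NumPy sketch checks that the matrices \eqref{eq:diracmat} satisfy the relations \eqref{eq:linenergy3} and are Hermitian and traceless:

```python
import numpy as np

I2 = np.eye(2)
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]])
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
Z = np.zeros((2, 2))

# 4x4 Dirac matrices in the standard representation
beta = np.block([[I2, Z], [Z, -I2]])
alphas = [np.block([[Z, s], [s, Z]]) for s in (s1, s2, s3)]

anticomm = lambda X, Y: X @ Y + Y @ X
I4 = np.eye(4)

for i, ai in enumerate(alphas):
    for j, aj in enumerate(alphas):
        # alpha_i alpha_j + alpha_j alpha_i = 2 delta_ij 1
        assert np.allclose(anticomm(ai, aj), 2 * (i == j) * I4)
    assert np.allclose(anticomm(ai, beta), 0)   # anticommutes with beta
    assert np.allclose(ai, ai.conj().T)         # Hermitian
    assert np.isclose(np.trace(ai), 0)          # traceless
assert np.allclose(beta @ beta, I4)             # beta^2 = 1
```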

Now replacing $E$ and $p$ by the operators $i\hbar\frac{\partial}{\partial t}$ and $-i\hbar\nabla$, respectively, we obtain the Dirac equation $$i\hbar\frac{\partial\psi(t,x)}{\partial t}=H_0\psi(t,x)$$ where \begin{align*}H_0&=-i\hbar c\alpha\cdot\nabla+\beta mc^2\\&=\begin{pmatrix}mc^2{\bf 1} & -i\hbar c\sigma\cdot\nabla\\-i\hbar c\sigma\cdot\nabla & -mc^2{\bf 1}\end{pmatrix}\end{align*} Here, $\alpha=(\alpha_1,\alpha_2,\alpha_3)$ and $\sigma=(\sigma_1,\sigma_2,\sigma_3)$ are triplets of matrices. The Dirac equation acts on $\mathbb{C}^4$-valued wave functions $$\psi(t,x)=\begin{pmatrix}\psi_1(t,x)\\\psi_2(t,x)\\\psi_3(t,x)\\\psi_4(t,x)\end{pmatrix},\ \psi_i\in\mathbb{C},\ i=1,2,3,4$$
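As a quick numerical aside (with illustrative units $c=m=1$ and an arbitrarily chosen momentum), the momentum-space Hamiltonian $c\,\alpha\cdot p+\beta mc^2$ has eigenvalues $\pm\sqrt{c^2p^2+m^2c^4}$, each doubly degenerate: the positive- and negative-energy branches.

```python
import numpy as np

# Pauli and Dirac matrices as above
s = [np.array([[0, 1], [1, 0]], dtype=complex),
     np.array([[0, -1j], [1j, 0]]),
     np.array([[1, 0], [0, -1]], dtype=complex)]
Z = np.zeros((2, 2))
beta = np.block([[np.eye(2), Z], [Z, -np.eye(2)]])
alphas = [np.block([[Z, si], [si, Z]]) for si in s]

# Illustrative units: c = m = 1, arbitrary momentum p
c, m = 1.0, 1.0
p = np.array([0.3, -1.2, 0.7])

H = c * sum(pi * ai for pi, ai in zip(p, alphas)) + beta * m * c**2
E = np.sqrt(c**2 * p @ p + m**2 * c**4)

# Eigenvalues come in pairs -E, -E, E, E
eigs = np.sort(np.linalg.eigvalsh(H))
assert np.allclose(eigs, [-E, -E, E, E])
```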

If $m=0$, then only the three anticommuting matrices $\alpha_i$ are needed, so it suffices to use $2\times 2$ matrices. For example, one may choose $\alpha_i=\sigma_i$, $i=1,2,3$. Then we obtain the equation $$i\hbar\frac{\partial\psi(t,x)}{\partial t}=-i\hbar c\,\sigma\cdot\nabla\psi(t,x)$$ This equation is called the Weyl equation. The Weyl equation is thought to describe neutrinos. We will discuss more about this later.

If the space dimension is two, then we can also use Pauli matrices instead of Dirac matrices \eqref{eq:diracmat}. In this case, $H$ has the form $$H=-i\hbar c\left(\sigma_1\frac{\partial}{\partial x_1}+\sigma_2\frac{\partial}{\partial x_2}\right)+\sigma_3 mc^2$$

References:

[1] Walter Greiner, Relativistic Quantum Mechanics, 3rd Edition, Springer-Verlag, 2000

[2] Bernd Thaller, The Dirac Equation, Springer-Verlag, 1992

Lagrangian and Hamiltonian

In physics, two closely related objects play a crucial role in describing the motion of a particle: the Lagrangian and the Hamiltonian. It appears to be almost inconceivable for physicists to do physics without a Lagrangian, and there is a good reason for that: the Lagrangian is what gives rise to the equation of motion. Interestingly, I am currently working on a Lagrangian-free quantum theory, called geometric quantum theory, in which the equation of motion is obtained from geometric considerations; there is no need to introduce a Lagrangian to begin with. I will talk about it elsewhere as I make progress on it.

So what is a Lagrangian? In classical mechanics, a Lagrangian $L({\bf x},\dot{\bf x},t)$ is defined by \begin{aligned}L({\bf x},\dot{\bf x},t)&=T-V\\&=\frac{1}{2}m{\dot x_i}^2-V({\bf x},t)\end{aligned}\label{eq:lagrangian} where summation over the repeated index $i$ is understood. Note that a Lagrangian is a function of the three variables ${\bf x}$, $\dot{\bf x}$, and $t$. (Strictly speaking, it is the action, the time integral of the Lagrangian, that mathematicians call a functional, since it acts on functions.) The equation $$\label{eq:E-L}\frac{d}{dt}\frac{\partial L}{\partial\dot x_i}-\frac{\partial L}{\partial x_i}=0$$ with the Lagrangian in \eqref{eq:lagrangian} results in the familiar equation of motion $$m\frac{d^2{\bf x}}{dt^2}=-{\bf \nabla}V$$ The equation \eqref{eq:E-L} is called the Euler-Lagrange equation. In general, a Lagrangian is not necessarily given as $T-V$. For another example, one may consider the Lagrangian $$\label{eq:lagrangian2}L({\bf x},\dot{\bf x},t)=\frac{1}{2}m{\dot x_i}^2-e\phi({\bf x},t)+\frac{e}{c}\dot x_i A_i({\bf x},t)$$ where $\phi$ is a scalar potential and ${\bf A}$ is a vector potential such that \begin{align*}{\bf B}&={\bf\nabla}\times{\bf A}\\{\bf\nabla}\phi&=-{\bf E}-\frac{1}{c}\frac{\partial{\bf A}}{\partial t}\end{align*}The Euler-Lagrange equation \eqref{eq:E-L} then yields the familiar Lorentz force $$m\frac{d^2{\bf x}}{dt^2}=e{\bf E}+\frac{e}{c}{\bf v}\times{\bf B}$$
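The computation leading from \eqref{eq:E-L} to $m\ddot{\bf x}=-\nabla V$ can be reproduced symbolically. Here is a minimal 1-dimensional sketch using SymPy (the names `x` and `V` simply mirror the notation above):

```python
import sympy as sp

t, m = sp.symbols('t m', positive=True)
x = sp.Function('x')
V = sp.Function('V')

# L = T - V for a 1-dimensional particle
xdot = sp.diff(x(t), t)
L = sp.Rational(1, 2) * m * xdot**2 - V(x(t))

# Euler-Lagrange equation: d/dt (dL/d xdot) - dL/dx = 0
eq = sp.diff(sp.diff(L, xdot), t) - sp.diff(L, x(t))

# The left-hand side reduces to m x'' + V'(x), i.e. m x'' = -dV/dx
assert sp.simplify(eq - m * sp.diff(x(t), t, 2) - sp.diff(V(x(t)), x(t))) == 0
```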

A Lagrangian doesn’t have to be given in terms of rectangular coordinates; it can be written in terms of generalized coordinates ${\bf q}$ and $\dot{\bf q}$ whenever there is a functional relationship $$x_i=x_i(q_1,q_2,q_3)$$ One can easily show that $L({\bf q},\dot{\bf q},t)$ satisfies the Euler-Lagrange equation $$\label{eq:E-L2}\frac{d}{dt}\frac{\partial L}{\partial\dot q_i}-\frac{\partial L}{\partial q_i}=0$$

Example. [Cylindrical Coordinates] In terms of the cylindrical coordinates \begin{align*}x&=r\cos\theta\\y&=r\sin\theta\\z&=z\end{align*} the Lagrangian \eqref{eq:lagrangian} is written as $$L=\frac{1}{2}m({\dot r}^2+r^2{\dot\theta}^2+{\dot z}^2)-V$$ The Euler-Lagrange equation \eqref{eq:E-L2} yields the equations \begin{align*}m\ddot r-mr{\dot\theta}^2+\frac{\partial V}{\partial r}&=0\\mr^2\ddot\theta+2mr\dot r\dot \theta+\frac{\partial V}{\partial\theta}&=0\\m\ddot z+\frac{\partial V}{\partial z}&=0\end{align*} If $V=V(r)$ then $$mr^2\ddot\theta+2mr\dot r\dot \theta=\frac{d}{dt}(mr^2\dot\theta)=0$$ which implies that $mr^2\dot\theta$ is constant, i.e. we obtain the conservation of angular momentum.
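The conservation of $mr^2\dot\theta$ can also be seen numerically. The sketch below (using SciPy, with the illustrative choices $m=1$ and $V=V(r)=-1/r$) integrates the planar equations of motion obtained above and checks that $mr^2\dot\theta$ stays constant along the trajectory:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Planar motion in a central potential V(r) = -1/r (illustrative choice), m = 1.
# State y = (r, theta, rdot, thetadot); equations are the cylindrical
# Euler-Lagrange equations above with z suppressed and dV/dr = 1/r^2.
def rhs(t, y):
    r, th, rd, thd = y
    return [rd, thd, r * thd**2 - 1.0 / r**2, -2.0 * rd * thd / r]

sol = solve_ivp(rhs, (0, 10), [1.0, 0.0, 0.1, 1.1], rtol=1e-10, atol=1e-12)

# m r^2 thetadot should be conserved along the trajectory
L0 = 1.0**2 * 1.1
Lt = sol.y[0]**2 * sol.y[3]
assert np.allclose(Lt, L0, rtol=1e-6)
```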

If $\frac{\partial L}{\partial q_i}=0$, then the coordinate $q_i$ is called a cyclic coordinate. Since $\frac{d}{dt}\frac{\partial L}{\partial\dot q_i}=0$, $\frac{\partial L}{\partial\dot q_i}$ is conserved. The quantity $p_i:=\frac{\partial L}{\partial\dot q_i}$ is called a canonical momentum or a conjugate momentum.

Example. For the Lagrangian \eqref{eq:lagrangian}, the canonical momentum $p_i$ is $$p_i=m\dot x_i$$ and for the Lagrangian \eqref{eq:lagrangian2}, the canonical momentum is $$p_i=m\dot x_i+\frac{e}{c}A_i({\bf x},t)$$

Example. [Spherical Coordinates] In terms of spherical coordinates \begin{align*}x&=r\sin\theta\cos\varphi\\y&=r\sin\theta\sin\varphi\\z&=r\cos\theta\end{align*} the Lagrangian \eqref{eq:lagrangian} is written as $$L=\frac{1}{2}m({\dot r}^2+r^2{\dot\theta}^2+r^2{\dot\varphi}^2\sin^2\theta)-V$$ The Euler-Lagrange equation \eqref{eq:E-L2} then yields \begin{align*}\frac{d}{dt}(m\dot r)-mr{\dot\theta}^2-mr{\dot\varphi}^2\sin^2\theta+\frac{\partial V}{\partial r}&=0\\\frac{d}{dt}(mr^2\dot\theta)-mr^2{\dot\varphi}^2\sin\theta\cos\theta+\frac{\partial V}{\partial\theta}&=0\\\frac{d}{dt}(mr^2\dot\varphi\sin^2\theta)+\frac{\partial V}{\partial\varphi}&=0\end{align*}

Hamiltonians

Given a Lagrangian $L({\bf q},\dot{\bf q},t)$, it can be shown that $$\label{eq:legendre}d(p_i\dot q_i-L)=(dp_i)\dot q_i-\frac{\partial L}{\partial q_i}dq_i-\frac{\partial L}{\partial t}dt$$ \eqref{eq:legendre} is called the Legendre transformation. Let us denote $$H({\bf q},{\bf p},t):=\dot q_ip_i-L$$ and call it a Hamiltonian. Since $H$ is a function of $(q_i,p_i,t)$, we have $$\label{eq:hamiltonian}dH=\frac{\partial H}{\partial q_i}dq_i+\frac{\partial H}{\partial p_i}dp_i+\frac{\partial H}{\partial t}dt$$ Comparing \eqref{eq:legendre} and \eqref{eq:hamiltonian}, we obtain Hamilton’s equations \begin{aligned}\frac{\partial H}{\partial q_i}&=-\dot p_i\\\frac{\partial H}{\partial p_i}&=\dot q_i\\\frac{\partial H}{\partial t}&=-\frac{\partial L}{\partial t}\end{aligned}\label{eq:hamiltoneqn}

If $L$ does not depend on $t$, then $\frac{\partial L}{\partial t}=0$ and consequently $\frac{dH}{dt}=0$, i.e. $H$ is constant. If the kinetic energy is quadratic in the $\dot q_i$ and the potential is a function of the $q_i$ only, then by Euler’s theorem on homogeneous functions $$\dot q_i\frac{\partial T}{\partial\dot q_i}=2T=\dot q_i\frac{\partial L}{\partial\dot q_i}=\dot q_ip_i$$ and therefore \begin{align*}H&=\dot q_ip_i-L\\&=2T-(T-V)=T+V\end{align*} Hence, in this case the total energy is conserved.
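The identity $H=T+V$ for a quadratic kinetic energy can be verified symbolically; here is a 1-dimensional SymPy sketch:

```python
import sympy as sp

t, m = sp.symbols('t m', positive=True)
q = sp.Function('q')
V = sp.Function('V')

qdot = sp.diff(q(t), t)
L = sp.Rational(1, 2) * m * qdot**2 - V(q(t))

# Canonical momentum and Legendre transform H = qdot * p - L
p = sp.diff(L, qdot)           # p = m qdot
H = sp.simplify(qdot * p - L)

# For quadratic kinetic energy, H is the total energy T + V
T = sp.Rational(1, 2) * m * qdot**2
assert sp.simplify(H - T - V(q(t))) == 0
```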

Example. For the Lagrangian \eqref{eq:lagrangian} the Hamiltonian $H$ is given by $$H=\frac{p_i^2}{2m}+V$$ where $p_i=m\dot x_i$.

Example. For the Lagrangian \eqref{eq:lagrangian2}, the Hamiltonian $H$ is given by $$H=\frac{1}{2m}\left(p_i-\frac{e}{c}A_i\right)^2+e\phi$$ where $p_i=m\dot x_i+\frac{e}{c}A_i$. If the vector potential ${\bf A}$ depends only on the position vector ${\bf x}$, one can show that $H$ is constant in time.
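One can let SymPy carry out this Legendre transformation. The following one-component sketch (the full computation simply sums over the three components) recovers $H=\frac{1}{2m}\left(p-\frac{e}{c}A\right)^2+e\phi$:

```python
import sympy as sp

t = sp.symbols('t')
m, e, c = sp.symbols('m e c', positive=True)
x = sp.Function('x')
A = sp.Function('A')      # one component of the vector potential
phi = sp.Function('phi')  # scalar potential

xdot = sp.diff(x(t), t)
L = sp.Rational(1, 2) * m * xdot**2 - e * phi(x(t)) + (e / c) * xdot * A(x(t))

p = sp.diff(L, xdot)            # canonical momentum m xdot + (e/c) A
H = sp.simplify(xdot * p - L)   # Legendre transform, still in terms of xdot

# Rewrite H in terms of p; it should equal (p - eA/c)^2 / (2m) + e phi
ps = sp.symbols('p')
H_p = sp.simplify(H.subs(xdot, (ps - e * A(x(t)) / c) / m))
assert sp.simplify(H_p - (ps - e * A(x(t)) / c)**2 / (2 * m) - e * phi(x(t))) == 0
```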

References:

[1] H.-S. Song, Quantum Mechanics (in Korean)

Dirac Sea and Antiparticles

Here, I mentioned that one of the issues with the Klein-Gordon equation is that it admits solutions yielding negative energies. This issue persists with the Dirac equation, which is a relativistic generalization of the Schrödinger equation. (I will discuss the Dirac equation later in a different note.) The possibility that an electron can keep falling down to lower and lower negative energy levels indefinitely seems unphysical, and initially the suggestion faced a huge backlash from physicists including Wolfgang Pauli. P.A.M. Dirac came up with a brilliant idea based on the notion of an electron hole: every negative energy state is filled by an electron (remember that electrons are fermions, so no two electrons can occupy the same energy state due to Pauli’s exclusion principle). This picture is called the Dirac sea of infinite electrons. Since all negative energy states are already occupied by electrons, an electron cannot fall below the zero energy level. This may not, however, be definitively true: as David Hilbert’s paradox of the Grand Hotel illustrates, even after all the negative energy states are occupied by electrons, the sea might accommodate additional electrons. Dirac also suggested that all negative energy levels might be filled by electrons except for one. This would leave a hole with negative energy, and this hole was interpreted as a positron, the antiparticle of an electron. While it is a brilliant idea, the Dirac sea also appears to be unphysical. Regardless, the positron was discovered by Carl Anderson in 1932, and no one raised an issue about it afterwards. Move along, nothing to see here. Still, it seems that many physicists are not very comfortable with the notion of the Dirac sea and do not believe it is an actual physical reality. The Dirac sea is nowadays introduced more for pedagogical purposes than for the purpose of defining antiparticles.
In modern quantum field theory, antiparticles are defined by wave functions traveling backward in time. If I remember correctly, this definition of antiparticles is due to John Archibald Wheeler. Note that those wave functions traveling backward in time do have negative energies.

Here is a thought. Physically, an electron would have its minimum energy at rest, and the rest energy is given by $E_0=mc^2$. This can be obtained by putting ${\bf p}\cdot{\bf p}=0$ in the relativistic energy-momentum relation. In fact, since ${\bf p}\cdot{\bf p}\geq 0$, we have $$E^2\geq m^2c^4$$ which implies that either $E\geq mc^2$ or $E\leq -mc^2$. So we could say that the energy of an electron cannot be negative and that the negative energy branch is merely a mathematical fluke. But if that were the case, what would antiparticles be? That is a big question which seems to have no apparent answer within conventional quantum theory, and for this reason physicists still stick to the negative energies.

I am currently working on an unconventional quantum theory, and it might shed light on an alternative description of antiparticles. Those who are curious can read about its brief idea and motivation here. In that quantum theory, antiparticles, while having positive energies, are described by wave functions that have negative probabilities. Of course, the notion of negative probabilities sounds unphysical, but in this case it actually isn’t. According to the theory, antiparticles do not live in our universe but in its twin parallel universe, where the roles of the time coordinate and a spatial coordinate are switched from those in our universe. While the wave function of an antiparticle is seen to have a negative probability in our universe, it actually has a positive probability in its own universe. I will write more details about it elsewhere in the very near future.

Matter and Waves

Since Huygens and Newton, physicists have known that light can be described both by (electromagnetic) waves and by particles (photons). This peculiar nature of light is called wave-particle duality. What about material particles such as electrons? de Broglie proposed the bold hypothesis that what is true for light is also true for material particles, i.e. they too exhibit a wave nature. (This was indeed confirmed by experiments.) So how do we mathematically model such a wave? What physicists thought of using to study material particles was a complex plane wave called the de Broglie wave. In the 1-dimensional case it looks like $$\psi(x,t)=Ae^{i(kx-\omega t)}$$ In the 3-dimensional case, it is $$\label{eq:planewave}\psi({\bf r},t)=Ae^{i({\bf k}\cdot{\bf r}-\omega t)}$$ Before we continue, one may wonder how physicists came up with this kind of wave. I can only speculate, but such complex plane waves were already well known to physicists as solutions of Maxwell’s equations in electromagnetism. Since a complex plane wave describes an electromagnetic wave, it naturally became the first candidate for modeling material particles. In fact, it worked out well, as we shall see, and consequently complex numbers came to play a crucial role in building quantum mechanics.

Let us first study some properties of plane waves. The plane wave \eqref{eq:planewave} describes a free particle, more accurately a free particle in a definite momentum state. In order for a plane wave to behave like a particle, we want it to be localized, i.e. defined in a tiny region. (There is a more mathematically subtle reason why we require this.) We can achieve this by redefining $\psi({\bf r},t)$ as $$\psi({\bf r},t)=\left\{\begin{array}{ccc}Ae^{i({\bf k}\cdot{\bf r}-\omega t)} & \mbox{for} & {\bf r}\ \mbox{within a volume}\ V=L^3\\0 & \mbox{for} & {\bf r}\ \mbox{outside a volume}\ V=L^3\end{array}\right.$$ Physicists call this box normalization. Physically the state of a particle must not depend on the particular location of the tiny box, so we require the periodicity condition $$\psi(x,y,z,t)=\psi(x+L,y,z,t)=\psi(x,y+L,z,t)=\psi(x,y,z+L,t)$$ Here, $L$ is the side length of the box. The periodicity condition implies that ${\bf k}$ is quantized as $${\bf k}=\frac{2\pi}{L}{\bf n}$$ where ${\bf k}=(k_x,k_y,k_z)$, ${\bf n}=(n_x,n_y,n_z)$, and $n_i=0,\pm 1,\pm 2,\cdots$, $i=x,y,z$. The vector ${\bf k}$ is called the wave vector; in the 1-dimensional case, $k$ is called the wave number. If the wave is periodic in time, say $\psi(x,t)=\psi(x,t+T)$, then we obtain $e^{-i\omega T}=1$. The smallest nonzero value of $T$ is the period $T=\frac{2\pi}{\omega}$, and $\omega=\frac{2\pi}{T}$ is called the angular frequency. The quantity $kx-\omega t$ is called the phase, and if $kx-\omega t$ is held constant, the wave moves at the speed $v_p=\frac{dx}{dt}=\frac{\omega}{k}$. This $v_p$ is called the phase velocity. In the 3-dimensional case, \begin{align*}\psi({\bf r},t)&=Ae^{i({\bf k}\cdot{\bf r}-\omega t)}\\&=Ae^{i{\bf k}\cdot\left({\bf r}-\frac{\omega t}{|{\bf k}|^2}{\bf k}\right)}\\&=Ae^{i{\bf k}\cdot\left({\bf r}-\frac{\omega t}{|{\bf k}|}\hat{\bf k}\right)}\end{align*}So the phase velocity would be $${\bf v}_p=\frac{d{\bf r}}{dt}=\frac{\omega}{|{\bf k}|}\hat{\bf k}$$
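The quantization of ${\bf k}$ is easy to illustrate numerically: the periodicity condition holds exactly when $k=2\pi n/L$ with $n$ an integer, and fails otherwise. A 1-dimensional sketch with arbitrarily chosen $L$ and $n$:

```python
import numpy as np

# Box side L and an allowed wave number k = 2*pi*n / L (n integer)
L, n = 2.5, 3
k = 2 * np.pi * n / L

x = np.linspace(0, L, 100)
psi = np.exp(1j * k * x)   # spatial part of the plane wave at t = 0

# Periodicity psi(x + L) = psi(x) holds exactly for the quantized k
assert np.allclose(np.exp(1j * k * (x + L)), psi)

# For a non-quantized k the periodicity fails
k_bad = 1.17
assert not np.allclose(np.exp(1j * k_bad * (x + L)), np.exp(1j * k_bad * x))
```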

The image of the wave function $\psi(x,t)=Ae^{i(kx-\omega t)}$ in the complex plane is a circle of radius $|A|$. We are in fact quite familiar with this kind of wave. On a beautiful day, you go to a lake and are tempted to throw a rock into the calm water. When you do, you see circular water waves spreading out from the point of impact.

References:

[1] Walter Greiner, Quantum Mechanics, An Introduction, 4th Edition, Springer, 2001

[2] H.-S. Song, Quantum Mechanics (in Korean)