Propositional Logic

A proposition is a statement that is either true or false. We can combine one or more propositions to create more complicated propositions using logical connectives. Those connectives include:

  1. Negation: $\neg p$, meaning not $p$.
  2. Inclusive Or: $p\vee q$, meaning $p$ or $q$ (could be both).
  3. And: $p\wedge q$, meaning $p$ and $q$.
  4. Exclusive Or: $p\oplus q$, meaning either $p$ or $q$ but not both.
  5. Implies: $p\Longrightarrow q$, meaning if $p$ then $q$ ($p$ implies $q$). $p$ and $q$ are called the hypothesis and the conclusion, respectively.
  6. Equivalence: $p\Longleftrightarrow q$, meaning $p$ if and only if $q$. $p\Longleftrightarrow q$ is shorthand for $(p\Longrightarrow q)\wedge(p\Longleftarrow q)$.

Definition. A propositional formula is defined by the following.

PF1. Any propositional variable (such as $p$ and $q$) is a formula.

PF2. If $\alpha$ and $\beta$ are formulas, so are

  1. $\alpha\vee\beta$
  2. $\alpha\wedge\beta$
  3. $\neg\alpha$

For example $\neg(p\vee q)$ is a formula.
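
The recursive definition PF1–PF2 translates naturally into a recursive data structure. Here is a minimal Python sketch (the tuple encoding and the name `evaluate` are my own choices, not from the text) that represents a formula as nested tuples and evaluates it under a truth assignment.

```python
def evaluate(formula, assignment):
    """Evaluate a propositional formula under a truth assignment,
    given as a dict mapping variable names to True/False."""
    if isinstance(formula, str):        # PF1: a propositional variable
        return assignment[formula]
    op, *args = formula                 # PF2: a compound formula
    if op == "not":
        return not evaluate(args[0], assignment)
    if op == "or":
        return evaluate(args[0], assignment) or evaluate(args[1], assignment)
    if op == "and":
        return evaluate(args[0], assignment) and evaluate(args[1], assignment)
    raise ValueError(f"unknown connective: {op}")

# The formula ¬(p ∨ q) from the text:
formula = ("not", ("or", "p", "q"))
print(evaluate(formula, {"p": False, "q": False}))  # True
```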

Truth Tables

The truth value of a formula can be found by its truth table. A formula is said to be satisfiable if at least one truth assignment satisfies it. A formula is called a tautology if every truth assignment satisfies it. A formula is said to be unsatisfiable if no truth assignment satisfies it.

Here are some examples.

  1. The truth table of $\neg p$ $$\begin{array}{|c|c|}\hline p & \neg p\\\hline T & F\\\hline F & T\\\hline\end{array}$$
  2. The truth table of $p\wedge q$ $$\begin{array}{|c|c|c|}\hline p & q & p\wedge q\\\hline T & T & T\\\hline T & F & F\\\hline F & T & F\\\hline F & F & F\\\hline\end{array}$$
  3. The truth table of $p\wedge\neg p$. $$\begin{array}{|c|c|c|}\hline p & \neg p & p\wedge\neg p\\\hline T & F & F\\\hline F & T & F\\\hline\end{array}$$ $p\wedge\neg p$ is unsatisfiable.
  4. The truth table of $p\vee q$ $$\begin{array}{|c|c|c|}\hline p & q & p\vee q\\\hline T & T & T\\\hline T & F & T\\\hline F & T & T\\\hline F & F & F\\\hline\end{array}$$
  5. The truth table of $p\vee\neg p$. $$\begin{array}{|c|c|c|}\hline p & \neg p & p\vee\neg p\\\hline T & F & T\\\hline F & T & T\\\hline\end{array}$$ $p\vee\neg p$ is a tautology.
  6. The truth table of $p\oplus q$ $$\begin{array}{|c|c|c|}\hline p & q & p\oplus q\\\hline T & T & F\\\hline T & F & T\\\hline F & T & T\\\hline F & F & F\\\hline\end{array}$$
  7. The truth table of $p\Longrightarrow q$ $$\begin{array}{|c|c|c|}\hline p & q & p\Longrightarrow q\\\hline T & T & T\\\hline T & F & F\\\hline F & T & T\\\hline F & F & T\\\hline\end{array}$$ It may be puzzling that a false hypothesis can imply anything. Remember that we are doing propositional logic, so any statement of our concern must be either true or false. It therefore comes down to deciding which truth value to assign to a conditional statement with a false hypothesis. If we were to assign the truth value F, all hell would break loose. Here is why. Consider the conditional statement “if $p$ then $\neg p$” and assume that $p$ is false. That must mean $\neg p$ is true since $p$ is a proposition. However, we decided to assign the truth value F to the conditional statement, hence $\neg p$ must also be false. This is a contradiction! To avoid this sort of disastrous contradiction, we should assign the truth value T to a conditional statement with a false hypothesis. We have claimed, without proof, that the empty set is a subset of every set. The statement “$x\in\emptyset\Longrightarrow x\in A$” is always true because the hypothesis $x\in\emptyset$ is false. Hence $\emptyset\subseteq A$ for any set $A$.
  8. The truth table of $p\Longleftrightarrow q$. $$\begin{array}{|c|c|c|c|c|}\hline p & q & p\Longrightarrow q & p\Longleftarrow q & p\Longleftrightarrow q\\\hline T & T & T & T & T\\\hline T & F & F & T & F\\\hline F & T & T & F & F\\\hline F & F & T & T & T\\\hline\end{array}$$
  9. The truth table of $p\wedge q\Longrightarrow r$. $$\begin{array}{|c|c|c|c|c|}\hline p & q & r & p\wedge q & p\wedge q\Longrightarrow r\\\hline T & T & T & T & T\\\hline T & T & F & T & F\\\hline T & F & T & F & T\\\hline T & F & F & F & T\\\hline F & T & T & F & T\\\hline F & T & F & F & T\\\hline F & F & T & F & T\\\hline F & F & F & F & T\\\hline\end{array}$$
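
A formula over $n$ variables has only $2^n$ truth assignments, so a computer can build the entire truth table and classify the formula by brute force. Below is a small Python sketch (the helper name `classify` is mine); it treats a formula as a Python function of booleans and checks the definitions of tautology, satisfiable, and unsatisfiable.

```python
from itertools import product

def classify(formula, num_vars):
    """Return 'tautology', 'unsatisfiable', or 'satisfiable' by evaluating
    the formula on every truth assignment (every row of its truth table)."""
    values = [formula(*row) for row in product([True, False], repeat=num_vars)]
    if all(values):
        return "tautology"
    if not any(values):
        return "unsatisfiable"
    return "satisfiable"

implies = lambda p, q: (not p) or q                       # p ⟹ q
print(classify(lambda p: p and not p, 1))                 # unsatisfiable
print(classify(lambda p: p or not p, 1))                  # tautology
print(classify(lambda p, q, r: implies(p and q, r), 3))   # satisfiable
```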

When two formulae $\alpha$ and $\beta$ have the same truth table, we say $\alpha$ is logically equivalent to $\beta$ and write $\alpha\equiv\beta$.

Theorem. $p\Longrightarrow q\equiv\neg p\vee q$.

Proof. $$\begin{array}{|c|c|c|c|c|}\hline p & q & p\Longrightarrow q & \neg p & \neg p\vee q\\\hline T & T & T & F & T\\\hline T & F & F & F & F\\\hline F & T & T & T & T\\\hline F & F & T & T & T\\\hline\end{array}$$

Theorem. $p\oplus q\equiv(p\wedge\neg q)\vee(\neg p\wedge q)$.

Proof. Left to readers as an exercise.

$\vee$ and $\wedge$ can be considered as operations on formulae and the following laws hold:

Commutative Laws. For any formulae $\alpha$ and $\beta$, \begin{align*}\mbox{(CL1)}\ \alpha\vee\beta&\equiv\beta\vee\alpha\\\mbox{(CL2)}\ \alpha\wedge\beta&\equiv\beta\wedge\alpha\end{align*}

Associative Laws. For any formulae $\alpha$, $\beta$ and $\gamma$, \begin{align*}\mbox{(AL1)}\ (\alpha\vee\beta)\vee\gamma&\equiv\alpha\vee(\beta\vee\gamma)\\\mbox{(AL2)}\ (\alpha\wedge\beta)\wedge\gamma&\equiv\alpha\wedge(\beta\wedge\gamma)\end{align*}

Distributive Laws. For any formulae $\alpha$, $\beta$ and $\gamma$, \begin{align*}\mbox{(DL1)}\ \alpha\vee(\beta\wedge\gamma)&\equiv(\alpha\vee\beta)\wedge(\alpha\vee\gamma)\\\mbox{(DL2)}\ \alpha\wedge(\beta\vee\gamma)&\equiv(\alpha\wedge\beta)\vee(\alpha\wedge\gamma)\end{align*}

These laws can be used to show that two formulae are logically equivalent without using truth tables. A proof by truth table can be quite cumbersome when the formulae concerned are complicated.
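
Equivalences like these can also be confirmed mechanically: two formulae are logically equivalent exactly when they agree on every truth assignment. A brief sketch (the helper name `equivalent` is mine), spot-checking the exclusive-or theorem above and the first distributive law:

```python
from itertools import product

def equivalent(f, g, num_vars):
    """True if the formulas f and g (Python functions of booleans) agree on
    every truth assignment, i.e. have the same truth table."""
    return all(f(*row) == g(*row)
               for row in product([True, False], repeat=num_vars))

# p ⊕ q ≡ (p ∧ ¬q) ∨ (¬p ∧ q)
print(equivalent(lambda p, q: p != q,
                 lambda p, q: (p and not q) or (not p and q), 2))   # True

# (DL1)  α ∨ (β ∧ γ) ≡ (α ∨ β) ∧ (α ∨ γ)
print(equivalent(lambda a, b, c: a or (b and c),
                 lambda a, b, c: (a or b) and (a or c), 3))         # True
```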

References.

[1] Essential Discrete Mathematics for Computer Science, Harry Lewis and Rachel Zax, Princeton University Press, 2019

Graphs and Functions

A graph $G$ from a set $A$ to another set $B$ is a subset $G\subseteq A\times B$. If $A=B$ then $G$ is called a relation in $A$. The inverse of a graph $G\subseteq A\times B$ is defined by $G^{-1}=\{(y,x):(x,y)\in G\}\subseteq B\times A$.

A function $f$ from $A$ to $B$ is a graph $f\subseteq A\times B$ such that

F1. For any $x\in A$ there exists $y\in B$ such that $(x,y)\in f$.

F2. $(x,y_1)\in f$ and $(x,y_2)\in f$ $\Longrightarrow$ $y_1=y_2$.

If $(x,y)\in f$ then we write $y=f(x)$. If $f\subseteq A\times B$ is a function, we write $f: A\longrightarrow B$. We call $A$ and $B$ the domain and the codomain of $f$, respectively. The set \begin{align*}f(A)&=\{y\in B: (x,y)\in f\ \mbox{for some}\ x\in A\}\\&=\{f(x):x\in A\}\end{align*} is called the range of $f$. A function $f: A\longrightarrow B$ is onto or surjective if $f(A)=B$. A function $f: A\longrightarrow B$ is one-to-one or injective if $(x_1,y)\in f$ and $(x_2,y)\in f$ $\Longrightarrow$ $x_1=x_2$. A function $f: A\longrightarrow B$ is bijective if it is both injective and surjective. A bijective function is also called a one-to-one correspondence.
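
For finite sets these definitions can be checked directly from the graph of $f$, viewed as a set of ordered pairs. Here is a minimal Python sketch (the function names are mine) verifying F1, F2, injectivity, and surjectivity:

```python
def is_function(graph, A, B):
    """Check F1 and F2 for a graph (a set of ordered pairs) from A to B."""
    f1 = all(any(x == a for (x, y) in graph) for a in A)   # F1: every a in A appears
    f2 = all(y1 == y2 for (x1, y1) in graph
             for (x2, y2) in graph if x1 == x2)            # F2: at most one y per x
    return f1 and f2

def is_injective(graph):
    return all(x1 == x2 for (x1, y1) in graph for (x2, y2) in graph if y1 == y2)

def is_surjective(graph, B):
    return {y for (x, y) in graph} == set(B)

A, B = {1, 2, 3}, {"a", "b", "c"}
f = {(1, "a"), (2, "b"), (3, "c")}
print(is_function(f, A, B), is_injective(f), is_surjective(f, B))  # True True True
```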

Theorem. An inverse graph $f^{-1}\subseteq B\times A$ of a function $f\subseteq A\times B$ is a function if and only if $f$ is bijective.

Proof. ($\Longrightarrow$) Suppose that $f^{-1}\subseteq B\times A$ is a function. $\forall y\in B$, $\exists x\in A$ such that $(y,x)\in f^{-1}\Longrightarrow (x,y)\in f$. Thus $f$ is surjective. $(x_1,y)\in f$ and $(x_2,y)\in f$ $\Longrightarrow$ $(y,x_1)\in f^{-1}$ and $(y,x_2)\in f^{-1}$ $\Longrightarrow$ $x_1=x_2$ by F2 for $f^{-1}$. Thus $f$ is injective. Therefore, $f$ is bijective.

($\Longleftarrow$) Suppose that $f: A\longrightarrow B$ is bijective.

F1. Let $y\in B$. Since $f$ is surjective, $\exists x\in A$ such that $(x,y)\in f\Longrightarrow (y,x)\in f^{-1}$.

F2. $(y,x_1)\in f^{-1}$ and $(y,x_2)\in f^{-1}$ $\Longrightarrow$ $(x_1,y)\in f$ and $(x_2,y)\in f$ $\Longrightarrow$ $x_1=x_2$ since $f$ is injective.

Theorem. If $f: A\longrightarrow B$ and $g: B\longrightarrow C$ are bijective then the composition $g\circ f: A\longrightarrow C$ is also bijective.

Proof. F1. Let $x\in A$. Then $\exists y\in B$ such that $y=f(x)$. Also $\exists z\in C$ such that $g(y)=z$. Hence, $$g\circ f(x)=g(f(x))=g(y)=z$$

F2. $$x_1=x_2\Longrightarrow f(x_1)=f(x_2)\Longrightarrow g\circ f(x_1)=g\circ f(x_2)$$

Surjective: Let $z\in C$. Then $\exists y\in B$ such that $g(y)=z$. Also $\exists x\in A$ such that $f(x)=y$. Thus, $g\circ f(x)=z$.

Injective: \begin{align*}g\circ f(x_1)=g\circ f(x_2)&\Longrightarrow g(f(x_1))=g(f(x_2))\\&\Longrightarrow f(x_1)=f(x_2)\\&\Longrightarrow x_1=x_2\end{align*}
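
Both theorems are easy to illustrate on small finite sets: the inverse graph swaps each ordered pair, and the composition $g\circ f$ pairs $x$ with $z$ whenever $(x,y)\in f$ and $(y,z)\in g$. A short sketch (the names `inverse` and `compose` are mine):

```python
def inverse(graph):
    """The inverse graph: swap each ordered pair."""
    return {(y, x) for (x, y) in graph}

def compose(g, f):
    """The graph of g ∘ f: pair x with z whenever (x, y) ∈ f and (y, z) ∈ g."""
    return {(x, z) for (x, y1) in f for (y2, z) in g if y1 == y2}

A, B, C = {1, 2}, {"a", "b"}, {10, 20}
f = {(1, "a"), (2, "b")}     # a bijection from A to B
g = {("a", 10), ("b", 20)}   # a bijection from B to C

print(inverse(f))            # the pairs ('a', 1) and ('b', 2): a function from B to A
print(compose(g, f))         # the pairs (1, 10) and (2, 20): a bijection from A to C
```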

References.

[1] Essential Discrete Mathematics for Computer Science, Harry Lewis and Rachel Zax, Princeton University Press, 2019

[2] Set Theory, Charles C. Pinter, Addison Wesley Publishing Company, 1971

Sets

We have already introduced the notion of a set here. Recall that a set is an unordered collection of distinct objects, which are called members or elements. We use $\{\ \}$ to denote a set. For example, the set containing 2, 3, 5 is written as $\{2,3,5\}$. While $\{2,3,5\}=\{3,5,2\}$, the following sets are different.

  1. $\{2,3,5\}$ contains 3 members 2, 3, 5.
  2. $\{\{2\},\{3\},\{5\}\}$ contains three singleton sets $\{2\}$, $\{3\}$, $\{5\}$ as members.
  3. $\{\{2,3,5\}\}$ contains a single element $\{2,3,5\}$.

Example. Some Important Examples of Sets

  1. $\emptyset$=$\{\ \}$=$\Box$=the empty set, the set containing no object. If I remember correctly, the notation $\Box$ for the empty set was introduced by Nicolas Bourbaki, a secretive group of mathematicians.
  2. $\mathbb{N}=\{0,1,2,3,\cdots\}$, the set of all natural numbers.
  3. $\mathbb{Z}=\{\cdots,-3,-2,-1,0,1,2,3,\cdots\}$, the set of all integers.
  4. $\mathbb{Q}$=the set of all rational numbers.
  5. $\mathbb{R}$=the set of all real numbers.

The notation $\in$ is used to express membership. For example, $a\in S$ says $a$ is a member of the set $S$. The set of elements of $A$ that satisfy the predicate $P$ is written as$$\{x\in A: P(x)\}$$

Example. The set of all even integers can be written as $$\{n\in\mathbb{Z}: n\ \mbox{is even}\},$$ $$\{n\in\mathbb{Z}:n=2m\ \mbox{for some}\ m\in\mathbb{Z}\},$$ or $$\{2m: m\in\mathbb{Z}\}$$
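
Set-builder notation corresponds directly to set comprehensions in Python. For instance, a finite slice of the even integers (finite because a computer cannot hold all of $\mathbb{Z}$; the bounds are my choice):

```python
# {n ∈ Z : n is even}, restricted to -10, ..., 10 so the set is finite
evens = {n for n in range(-10, 11) if n % 2 == 0}

# {2m : m ∈ Z}, restricted to m = -5, ..., 5
evens_too = {2 * m for m in range(-5, 6)}

print(evens == evens_too)   # True: the two descriptions give the same set
```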

A subset $A$ of a set $B$, written $A\subseteq B$, is defined by $$A\subseteq B\ \mbox{iff}\ [x\in A\Longrightarrow x\in B]$$ If $A\subseteq B$, $B$ is called a superset of $A$. Two sets $A$ and $B$ are said to be equal, written $A=B$, if $A\subseteq B$ and $B\subseteq A$. If $A\subseteq B$ and $A\ne B$, we write $A\subsetneq B$ and call $A$ a proper subset of $B$. The notation $A\subset B$ is also frequently used in mathematics to mean that $A$ is a subset of $B$.

Example.

  1. $\emptyset$ is a subset of every set.
  2. Every set is a subset of itself.
  3. $\mathbb{N}\subseteq\mathbb{Z}\subseteq\mathbb{Q}\subseteq\mathbb{R}$.

The power set of a set $A$, denoted by $\wp(A)$ or $2^A$, is the set of all subsets of $A$. For example, if $A=\{3,17\}$, then $\wp(A)=\{\emptyset,\{3\},\{17\},\{3,17\}\}$. If $|A|=n$, then $|\wp(A)|=2^n$.
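
The power set can also be generated programmatically, which is handy for experimenting with small examples; here is a short sketch using `itertools.combinations` (the function name `power_set` is mine):

```python
from itertools import combinations

def power_set(A):
    """All subsets of A, returned as a list of frozensets."""
    elements = list(A)
    return [frozenset(c) for r in range(len(elements) + 1)
            for c in combinations(elements, r)]

subsets = power_set({3, 17})
print(len(subsets))                # 4 = 2**2
print([set(s) for s in subsets])   # the four subsets: ∅, {3}, {17}, {3, 17}
```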

Now let us discuss set operations. First, we define $A\cup B$, the union of two sets $A$ and $B$, by $$A\cup B=\{x:x\in A\ \mbox{or}\ x\in B\}$$ Next, $A\cap B$, the intersection of two sets $A$ and $B$, is defined by $$A\cap B=\{x:x\in A\ \mbox{and}\ x\in B\}$$ Clearly, $\cup$ and $\cap$ are commutative. $\cup$ and $\cap$ are also associative, namely \begin{align*}(A\cup B)\cup C&=A\cup (B\cup C)\\(A\cap B)\cap C&=A\cap (B\cap C)\end{align*} Lastly, $\bar A=A^c$, the complement of a set $A$, is defined by $$\bar A=\{x:x\notin A\}$$ Note that while $\cup$ and $\cap$ are binary operations, $\bar{}$ is a unary operation. The empty set $\emptyset$ is an identity element for $\cup$, i.e. for any set $A$, $A\cup\emptyset=A$. $\emptyset$ is also regarded as an operation, called a nullary operation. Those of you who are interested in learning more details about operations may refer to [2] below. Using the complement, we define $A-B$, the difference of $A$ and $B$, by $$A-B=A\cap\bar B$$ This set minus is not an operation but the composite of the two operations $\cap$ and $\bar{}$. If $A\subseteq X$, $X-A$ is also frequently written as $X\setminus A$.

Theorem. Distributive Laws \begin{align*}A\cap(B\cup C)&=(A\cap B)\cup(A\cap C)\\A\cup(B\cap C)&=(A\cup B)\cap(A\cup C)\end{align*}

Proof. We prove only the first one. The proof of the second one is left as an exercise. \begin{align*}x\in A\cap(B\cup C)&\Longleftrightarrow x\in A\ \mbox{and}\ x\in B\cup C\\&\Longleftrightarrow x\in A\ \mbox{and}\ [x\in B\ \mbox{or}\ x\in C]\\&\Longleftrightarrow[x\in A\ \mbox{and}\ x\in B]\ \mbox{or}\ [x\in A\ \mbox{and}\ x\in C]\\&\Longleftrightarrow x\in A\cap B\ \mbox{or}\ x\in A\cap C\\&\Longleftrightarrow x\in(A\cap B)\cup(A\cap C)\end{align*}

The Cartesian product of two sets $A$ and $B$, denoted by $A\times B$, is defined as the set of ordered pairs with the first component taken from $A$ and the second component taken from $B$: $$A\times B=\{(a,b):a\in A\ \mbox{and}\ b\in B\}$$ Note that $\times$ is not an operation.
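
Python's built-in `set` type implements these operations directly (`|` for union, `&` for intersection, `-` for difference), and the Cartesian product is a comprehension over ordered pairs. The sketch below also spot-checks the first distributive law on one example (the particular sets are my choice):

```python
A, B, C = {1, 2, 3}, {2, 3, 4}, {3, 4, 5}

print(A | B)   # union:        {1, 2, 3, 4}
print(A & B)   # intersection: {2, 3}
print(A - B)   # difference:   {1}

# First distributive law: A ∩ (B ∪ C) = (A ∩ B) ∪ (A ∩ C)
print(A & (B | C) == (A & B) | (A & C))   # True

# Cartesian product A × B as a set of ordered pairs
AxB = {(a, b) for a in A for b in B}
print(len(AxB))   # 9 = |A| * |B|
```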

References.

[1] Essential Discrete Mathematics for Computer Science, Harry Lewis and Rachel Zax, Princeton University Press, 2019

[2] A Course in Universal Algebra, Stanley N. Burris and H. P. Sankappanavar, Graduate Texts in Mathematics, Springer-Verlag, 1981. The book is available online for free at the first named author’s web page here.

Strong Induction

The Strong Principle of Mathematical Induction

To prove that a predicate $P(n)$ holds for every number $n\geq n_0$,

Base Case. Prove $P(m)$ for all $m$ such that $n_0\leq m\leq n_1$.

Induction Hypothesis. Let $n\geq n_1$ be arbitrary but fixed, and assume that $P(m)$ holds whenever $n_0\leq m\leq n$.

Induction Step. Assuming the induction hypothesis, show that $P(n+1)$ holds.

Example. Consider the sequence $a_1=3$, $a_2=5$, and for all $n>2$, $a_n=3a_{n-1}-2a_{n-2}$. Prove by strong induction that $a_n=2^n+1$ for all integers $n\geq 1$.

Proof. Let $P(n)$ be the predicate $a_n=2^n+1$.

Base Case. \begin{align*}P(1): a_1=3=2^1+1\\P(2): a_2=5=2^2+1\end{align*}

Induction Hypothesis. Fix $n\geq 2$, and assume that for all $1\leq m\leq n$, $P(m)$ holds, i.e. $a_m=2^m+1$.

Induction Step. Show that $P(n+1)$, i.e. $a_{n+1}=2^{n+1}+1$, holds. \begin{align*}a_{n+1}&=3a_n-2a_{n-1}\\&=3(2^n+1)-2(2^{n-1}+1)\\&=3\cdot 2^n+3-2^n-2\\&=2\cdot 2^n+1\\&=2^{n+1}+1\end{align*}
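
The closed form is easy to spot-check by computing the recurrence directly and comparing it with $2^n+1$; a quick sketch:

```python
def a(n):
    """a_1 = 3, a_2 = 5, and a_n = 3a_{n-1} - 2a_{n-2} for n > 2."""
    seq = [None, 3, 5]                 # 1-indexed; seq[0] is unused
    for k in range(3, n + 1):
        seq.append(3 * seq[k - 1] - 2 * seq[k - 2])
    return seq[n]

print(all(a(n) == 2**n + 1 for n in range(1, 21)))   # True
```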

Well-Ordering Principle

Any nonempty set of nonnegative integers has a smallest element.

The Well-Ordering Principle may appear to be unrelated to the Principle of Mathematical Induction; however, the two are in fact equivalent to each other.

We now prove, with the aid of the Well-Ordering Principle, the Fundamental Theorem of Arithmetic (every integer $n$ greater than 1 has a unique prime decomposition).

Proof. We prove this by strong induction on $n$.

Base Case. $n_0=2$ is prime, so it has the prime decomposition $2=2^1$. We are done.

Induction Hypothesis. Fix $n\geq 2$, and suppose that we know that every $m$, $2\leq m\leq n$, has a unique prime decomposition (factorization).

Induction Step. We prove that the statement is true for $n+1$. If $n+1$ is a prime, we are done. Suppose that $n+1$ is not a prime. Let $S$ be the set of all its factors greater than 1. By the Well-Ordering Principle, $S$ has a smallest element $p$, which must be prime (otherwise $p$ would not be the smallest factor of $n+1$ greater than 1). Let $q=\frac{n+1}{p}$. Then by the induction hypothesis ($2\leq q\leq n$), $q$ has the unique prime decomposition $$q=p_1^{e_1}p_2^{e_2}\cdots p_k^{e_k}\ (p_1<p_2<\cdots<p_k)$$ Now $p$ is either $p_1$ or a prime smaller than $p_1$. If $p=p_1$ then $$n+1=p_1^{e_1+1}p_2^{e_2}\cdots p_k^{e_k}$$ If $p$ is a prime smaller than $p_1$, say $p_0$, then $$n+1=p_0^1p_1^{e_1}p_2^{e_2}\cdots p_k^{e_k}$$ It is easy to show that no number can have more than one prime decomposition. This is left as an exercise for readers.
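
The existence half of this proof is effectively an algorithm: repeatedly split off the smallest factor greater than 1, which is necessarily prime. A small sketch (the function name is mine; trial division is used only for illustration):

```python
def prime_factorization(n):
    """Prime factors of n > 1, with multiplicity, obtained by repeatedly
    dividing out the smallest factor greater than 1, as in the proof."""
    assert n > 1
    factors = []
    while n > 1:
        p = next(d for d in range(2, n + 1) if n % d == 0)  # smallest factor; it is prime
        factors.append(p)
        n //= p
    return factors

print(prime_factorization(360))   # [2, 2, 2, 3, 3, 5], i.e. 360 = 2^3 * 3^2 * 5
```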

References.

[1] Essential Discrete Mathematics for Computer Science, Harry Lewis and Rachel Zax, Princeton University Press, 2019

Proof by Mathematical Induction

The Principle of Mathematical Induction

To prove that a predicate $P(n)$ is true for all $n\geq n_0$:

Base Case. Prove that the predicate $P(n)$ holds for a particular value $n_0$.

Induction Hypothesis. Suppose that $P(n)$ holds for some fixed value $n\geq n_0$.

Induction Step. Prove that if the induction hypothesis is true then $P(n+1)$ holds as well.

Example. Prove that for any $n\geq 1$, \begin{align*}\sum_{i=0}^{n-1}2^i&=1+2+2^2+2^3+\cdots+2^{n-1}\\&=2^n-1\end{align*}

Proof. $P(n)$: $\sum_{i=0}^{n-1}2^i=2^n-1$.

Base Case. $n_0=1$. $P(1)$: $\sum_{i=0}^02^i=2^1-1$. $P(1)$ is true since the LHS = the RHS = 1.

Induction Hypothesis. Suppose that $P(n)$ is true for a fixed $n\geq 1$, that is, $$\sum_{i=0}^{n-1}2^i=2^n-1$$

Induction Step. Show that $P(n+1)$ is true. \begin{align*}\sum_{i=0}^n2^i&=\sum_{i=0}^{n-1}2^i+2^n\\&=2^n-1+2^n\\&=2\cdot 2^n-1\\&=2^{n+1}-1\end{align*} Therefore $P(n+1)$ is true.

Example. Prove that for any $n\geq 1$, $$\sum_{i=1}^ni=\frac{n(n+1)}{2}$$

Proof. $P(n)$: $\sum_{i=1}^ni=\frac{n(n+1)}{2}$.

Base Case. $n_0=1$. $P(1)$: $\sum_{i=1}^1i=\frac{1(1+1)}{2}$ is true.

Induction Hypothesis. Suppose that $P(n)$ is true for some fixed $n\geq 1$.

Induction Step. Show that $P(n+1)$ is true. \begin{align*}\sum_{i=1}^{n+1}i&=\sum_{i=1}^ni+n+1\\&=\frac{n(n+1)}{2}+n+1\\&=\frac{n(n+1)+2(n+1)}{2}\\&=\frac{(n+1)(n+2)}{2}\end{align*} Therefore $P(n+1)$ is true.

Example. Prove that for any $n\geq 1$, $$\prod_{i=1}^n\left(1+\frac{1}{i}\right)=\left(1+\frac{1}{1}\right)\cdot\left(1+\frac{1}{2}\right)\cdots\left(1+\frac{1}{n}\right)=n+1$$

Proof. $P(n)$: $\prod_{i=1}^n\left(1+\frac{1}{i}\right)=n+1$.

Base Case. $n_0=1$. $P(1)$: $\prod_{i=1}^1\left(1+\frac{1}{i}\right)=1+1$ is true.

Induction Hypothesis. Suppose that $P(n)$ is true for some fixed $n\geq 1$.

Induction Step. Show that $P(n+1)$ is true. \begin{align*}\prod_{i=1}^{n+1}\left(1+\frac{1}{i}\right)&=\prod_{i=1}^n\left(1+\frac{1}{i}\right)\cdot\left(1+\frac{1}{n+1}\right)\\&=(n+1)\cdot\left(1+\frac{1}{n+1}\right)\\&=n+1+1=n+2\end{align*} Therefore $P(n+1)$ is true as well.
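
Before (or after) writing an induction proof, it is often reassuring to spot-check the identity numerically for small $n$. Here all three identities above are checked for $n=1,\dots,20$ (exact rational arithmetic is used for the product to avoid floating-point error):

```python
from fractions import Fraction
from math import prod

N = 20
checks = [
    all(sum(2**i for i in range(n)) == 2**n - 1 for n in range(1, N + 1)),
    all(sum(range(1, n + 1)) == n * (n + 1) // 2 for n in range(1, N + 1)),
    all(prod(1 + Fraction(1, i) for i in range(1, n + 1)) == n + 1
        for n in range(1, N + 1)),
]
print(checks)   # [True, True, True]
```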

In computer science, 0 and 1 are called bits. A sequence of bits, such as 100001001, is called a binary string, bit string, or string of bits. For example, 10001001 is a bit string of length 8. The complement of a bit string is the result of replacing all the 0s by 1s and vice versa. For example, the complement of 10001001 is 01110110. The concatenation of two bit strings is the result of writing one after the other. For example, the concatenation of 10001001 and 111 is 10001001111.

Let us define a sequence of bit strings as follows: \begin{align*}T_0&=0\ \mbox{and for any}\ n\geq 0\\T_{n+1}&=\mbox{the concatenation of}\ T_n\ \mbox{and the complement of}\ T_n\end{align*}

This is an example of an inductive definition or a recursive definition. The resulting sequence is called the Thue sequence, named after the Norwegian mathematician Axel Thue. The first few terms of the Thue sequence are: \begin{align*}T_0&=0\\T_1&=01\\T_2&=0110\\T_3&=01101001\\T_4&=0110100110010110\end{align*}
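
The inductive definition translates directly into a short program. A sketch using Python strings of '0' and '1' (the helper names are mine):

```python
def complement(bits):
    """Replace every 0 by 1 and every 1 by 0 in a bit string."""
    return bits.translate(str.maketrans("01", "10"))

def thue(n):
    """T_0 = '0', and T_{k+1} is T_k followed by the complement of T_k."""
    t = "0"
    for _ in range(n):
        t = t + complement(t)
    return t

for n in range(5):
    print(n, thue(n))   # reproduces T_0, ..., T_4 listed above
```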

Example. Prove that for any $n\geq 0$, the length of $T_n$ is $2^n$.

Proof. We prove this by induction.

Base Case. $T_0$ has length $1=2^0$.

Induction Hypothesis. Assume that $T_n$ has length $2^n$ for some fixed $n\geq 0$.

Induction Step. $T_{n+1}$ is the result of concatenating $T_n$ with its complement, so its length is $$2^n+2^n=2\cdot2^n=2^{n+1}$$

Example. Prove that for every $n\geq 1$, $T_n$ begins with 01, and ends with 01 if $n$ is odd or with 10 if $n$ is even.

Proof. Base Case. $T_1=01$ so the statement is true.

Induction Hypothesis. Suppose that for some fixed $n\geq 1$, $T_n$ begins with 01, and ends with 01 if $n$ is odd or with 10 if $n$ is even.

Induction Step. $T_{n+1}$ begins with $T_n$, so it begins with 01 by the induction hypothesis. $T_{n+1}$ ends with the complement of the last two bits of $T_n$. Since $n+1$ is even if $n$ is odd and odd if $n$ is even, $T_{n+1}$ ends with 01 if $n+1$ is odd or with 10 if $n+1$ is even.

Example. Prove that for every $n\geq 0$, $T_n$ never has more than two 0s or two 1s in a row.

Proof. $T_0$ has only one bit, so the statement is true for $T_0$. We prove the statement by induction starting from $n_0=1$.

Base Case. $T_1=01$ has no more than two identical bits in a row.

Induction Hypothesis. Assume that for some fixed $n\geq 1$, $T_n$ has no more than two 0s or two 1s in a row.

Induction Step. $T_{n+1}$ consists of $T_n$ in the first half and its complement in the second half. By the induction hypothesis, neither half has more than two 0s or two 1s in a row. By the statement in the previous example, the four bits at the junction are either 0110 (if $n$ is odd) or 1010 (if $n$ is even). Neither of these contains more than two 0s or two 1s in a row. This completes the proof.
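
All three properties proved above (the length $2^n$, the first and last two bits, and the absence of three identical bits in a row) can be checked by a program for the first several terms; a self-contained sketch:

```python
t = "0"                                                  # T_0
for n in range(1, 12):
    t = t + t.translate(str.maketrans("01", "10"))       # T_n from T_{n-1}
    assert len(t) == 2**n                                # length is 2^n
    assert t.startswith("01")                            # begins with 01
    assert t.endswith("01" if n % 2 == 1 else "10")      # ending depends on parity
    assert "000" not in t and "111" not in t             # never three in a row
print("all checks passed")
```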

Mathematical induction can also be used to prove the Pigeonhole Principle.

Example. Prove that if $f:X\longrightarrow Y$ and $|X|>|Y|$, then there exists $x_1, x_2\in X$ with $x_1\ne x_2$ such that $f(x_1)=f(x_2)$.

Proof. We prove the statement by induction on the size of the set $X$. Let $f: X\longrightarrow Y$ be a function and $|X|>|Y|$.

Base Case. $n_0=2$. If $|X|=2$ and $|X|>|Y|$, then $|Y|=1$ and the two elements in $X$ map to the unique element of $Y$.

Induction Hypothesis. Assume that the statement is true whenever $|X|=n$ for some fixed $n\geq 2$, i.e., for any function $f: X\longrightarrow Y$ with $|X|=n>|Y|$ there exist $x_1,x_2\in X$ with $x_1\ne x_2$ such that $f(x_1)=f(x_2)$.

Induction Step. Let $|X|=n+1$. Pick an element $x\in X$. If there exists $x'\in X$ with $x'\ne x$ such that $f(x)=f(x')$, then we are done. Suppose not. Then the restriction of $f$ to $X\setminus\{x\}$ maps into $Y\setminus\{f(x)\}$, with $|X\setminus\{x\}|=n$ and $|X\setminus\{x\}|>|Y\setminus\{f(x)\}|$. By the induction hypothesis, there exist $x_1,x_2\in X\setminus\{x\}$ with $x_1\ne x_2$ such that $f(x_1)=f(x_2)$. This completes the proof.
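
The argument is constructive enough to run: given a function on a finite set $X$ whose image is strictly smaller than $X$, scanning the values must turn up a repeat. A minimal sketch (the name `find_collision` is mine):

```python
def find_collision(f, X):
    """Return (x1, x2) with x1 != x2 and f(x1) == f(x2); such a pair must
    exist whenever f maps X into a strictly smaller set (pigeonhole)."""
    seen = {}                      # value -> the first x that produced it
    for x in X:
        y = f(x)
        if y in seen:
            return seen[y], x
        seen[y] = x
    return None                    # unreachable when |X| > |f(X)|

print(find_collision(lambda x: x % 3, range(5)))   # (0, 3), since 0 % 3 == 3 % 3
```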

References.

[1] Essential Discrete Mathematics for Computer Science, Harry Lewis and Rachel Zax, Princeton University Press, 2019