
Normal Forms

In this note, we study some standard ways (normal forms) of representing formulas. The rules are:

  1. Only $\wedge$, $\vee$, and $\neg$ are used.
  2. Any negations are attached to propositional variables, not to larger expressions.
  3. The $\vee$’s and the $\wedge$’s are organized in the manner described below: A literal is either a propositional variable (such as $p$) or the negation of a single variable (such as $\neg q$). If a formula is an $\wedge$ of $\vee$’s of literals, we say it is in CNF (Conjunctive Normal Form). For example, \begin{align*}(p\vee q)\wedge(\neg p\vee q)\wedge(p\vee\neg q)\wedge(\neg p\vee \neg q)\\p\wedge\neg p\\(p\vee\neg q\vee r)\wedge(\neg s\vee t)\end{align*} are all in CNF. One may wonder why $p\wedge\neg p$ is in CNF. That’s because it is logically equivalent to $(p\vee p)\wedge(\neg p\vee\neg p)$. A formula, on the other hand, is in DNF (Disjunctive Normal Form) if it is an $\vee$ of $\wedge$’s of literals. For example, $$(p\wedge\neg q\wedge r)\vee(\neg r\wedge t)$$ is in DNF. $p\wedge\neg p$ is also in DNF because it is logically equivalent to $(p\wedge\neg p)\vee (p\wedge\neg p)$. Similarly, $p\vee q$ is in both CNF and DNF.

Example. $(p\vee\neg q\vee r)\wedge(\neg q\vee t)$ is in CNF. The formulae $p\vee\neg q\vee r$ and $\neg q\vee t$ are the clauses of the formula. The clauses of a CNF formula are disjunctions of literals, while the clauses of a DNF formula are conjunctions of literals. That is to say, a CNF formula is a conjunction of disjunctions of literals and a DNF formula is a disjunction of conjunctions of literals.

A CNF formula is true if and only if at least one literal from each clause is true. Similarly, a DNF formula is true if and only if there is at least one clause (conjunct) in which all the literals are true. For example, the DNF formula $$(\neg p\wedge q)\vee(p\wedge r)$$ is true when $p$ is false and $q$ is true.
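For finite formulas this criterion translates directly into code. Here is a minimal Python sketch (my own illustration, not from the reference), where a literal is encoded as a pair (variable name, polarity):

```python
# A literal is a pair (variable, polarity); a clause is a list of literals.
# A CNF formula is a list of clauses (a conjunction of disjunctions);
# a DNF formula is a list of clauses (a disjunction of conjunctions).

def eval_cnf(clauses, assignment):
    """True iff every clause contains at least one true literal."""
    return all(any(assignment[v] == pol for v, pol in clause) for clause in clauses)

def eval_dnf(clauses, assignment):
    """True iff some clause has all of its literals true."""
    return any(all(assignment[v] == pol for v, pol in clause) for clause in clauses)

# (~p & q) | (p & r), evaluated at p = False, q = True, r = False
dnf = [[("p", False), ("q", True)], [("p", True), ("r", True)]]
print(eval_dnf(dnf, {"p": False, "q": True, "r": False}))  # True
```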

Theorem. (De Morgan’s Law) \begin{align*}\neg(p\wedge q)&\equiv\neg p\vee\neg q\\\neg(p\vee q)&\equiv\neg p\wedge\neg q\end{align*}

Proof. It can be easily proved by a truth table. We prove only the first one here; the proof of the second one is left as an exercise. $$\begin{array}{|c|c|c|c|c|c|c|}\hline p & q & p\wedge q &\neg(p\wedge q) & \neg p & \neg q & \neg p\vee\neg q\\\hline T & T & T & F & F & F &F\\\hline T & F & F & T &F & T & T\\\hline F & T & F & T & T & F & T\\\hline F & F & F & T & T & T & T\\\hline\end{array}$$

Example. Using the commutative, associative, and distributive laws we learned here, along with De Morgan’s law, we can rewrite a given formula as an equivalent formula in a normal form. To see this, let us consider the formula $$p\vee(q\wedge(r\vee(s\wedge t)))$$ It can be written as an equivalent formula in conjunctive normal form. \begin{align*}p\vee(q\wedge(r\vee(s\wedge t)))&\equiv (p\vee q)\wedge(p\vee(r\vee(s\wedge t)))\\&\equiv(p\vee q)\wedge(p\vee((r\vee s)\wedge(r\vee t)))\\&\equiv(p\vee q)\wedge(p\vee(r\vee s))\wedge(p\vee(r\vee t))\\&\equiv(p\vee q)\wedge(p\vee r\vee s)\wedge(p\vee r\vee t)\end{align*} It can also be written as an equivalent formula in disjunctive normal form. \begin{align*}p\vee(q\wedge(r\vee(s\wedge t)))&\equiv p\vee((q\wedge r)\vee(q\wedge(s\wedge t)))\\&\equiv p\vee(q\wedge r)\vee(q\wedge s\wedge t)\end{align*} In general, the following theorem holds.

Theorem. Every formula is equivalent to one in conjunctive normal form, and also to one in disjunctive normal form.

We are not going to prove the theorem here, but interested readers can find a proof in the reference [1] below.
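The conversions in the example above can also be checked mechanically. If the Python library sympy is available, the following sketch (my own, not from [1]) converts the formula to CNF and to DNF; the exact ordering of clauses in the output may differ.

```python
from sympy import symbols
from sympy.logic.boolalg import to_cnf, to_dnf

p, q, r, s, t = symbols('p q r s t')
formula = p | (q & (r | (s & t)))

print(to_cnf(formula))  # expected: (p | q) & (p | r | s) & (p | r | t), up to ordering
print(to_dnf(formula))  # expected: p | (q & r) | (q & s & t), up to ordering
```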

Example. Write $(\neg p\wedge q)\Longrightarrow\neg p$ in disjunctive normal form.

Solution. \begin{align*}(\neg p\wedge q)\Longrightarrow\neg p&\equiv\neg(\neg p\wedge q)\vee\neg p\\&\equiv(\neg\neg p\vee\neg q)\vee\neg p\\&\equiv p\vee\neg q\vee\neg p\\&\equiv p\vee\neg p\vee\neg q\end{align*} Since $p\vee\neg p$ is a tautology, $(\neg p\wedge q)\Longrightarrow\neg p$ is a tautology.

As shown in the above example, any expression that can be simplified to a tautology must be a tautology itself. Determining whether a formula is a tautology is a fundamental problem of computer science. Using truth tables is, however, not an efficient method for tackling the problem, because the truth table of a formula with a large number of variables becomes too big to handle. For instance, the truth table for a formula with 30 variables would have $2^{30}=1,073,741,824$ rows, more than a billion rows! It is not yet known whether there is an efficient algorithm for determining satisfiability, or in particular for determining whether a formula is a tautology.
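For formulas with only a handful of variables, the brute-force approach is still viable. The sketch below (my own illustration, not from [1]) checks every truth assignment; the function name and the encoding of formulas as Python lambdas are assumptions of the sketch.

```python
from itertools import product

def is_tautology(formula, num_vars):
    """Brute force: a formula is a tautology iff it is true under
    every one of the 2**num_vars truth assignments."""
    return all(formula(*values)
               for values in product([True, False], repeat=num_vars))

# (~p and q) => ~p, written using  A => B  ==  (not A) or B
print(is_tautology(lambda p, q: (not (not p and q)) or (not p), 2))  # True
print(is_tautology(lambda p, q: p or q, 2))                          # False
```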

References.

[1] Essential Discrete Mathematics for Computer Science, Harry Lewis and Rachel Zax, Princeton University Press, 2019

Propositional Logic

A proposition is a statement that is either true or false. We can combine one or more propositions to create more complicated propositions using connectives. These connectives include:

  1. Negation: $\neg p$, meaning not $p$.
  2. Inclusive Or: $p\vee q$, meaning $p$ or $q$ (could be both).
  3. And: $p\wedge q$, meaning $p$ and $q$.
  4. Exclusive Or: $p\oplus q$, meaning either $p$ or $q$, but not both.
  5. Implies: $p\Longrightarrow q$, meaning if $p$ then $q$ ($p$ implies $q$). $p$ and $q$ are called the hypothesis and the conclusion, respectively.
  6. Equivalence: $p\Longleftrightarrow q$, meaning $p$ if and only if $q$. $p\Longleftrightarrow q$ is shorthand for $(p\Longrightarrow q)\wedge(p\Longleftarrow q)$.

Definition. A propositional formula is defined by the following.

PF1. Any propositional variable (such as $p$ and $q$) is a formula.

PF2. If $\alpha$ and $\beta$ are formulas, so are

  1. $\alpha\vee\beta$
  2. $\alpha\wedge\beta$
  3. $\neg\alpha$

For example $\neg(p\vee q)$ is a formula.
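The inductive definition PF1 and PF2 corresponds directly to a recursive data structure. Below is a minimal Python sketch (the class names and the evaluator are my own illustration, not part of the reference):

```python
from dataclasses import dataclass

@dataclass
class Var:            # PF1: a propositional variable is a formula
    name: str

@dataclass
class Not:            # PF2(3): the negation of a formula is a formula
    arg: object

@dataclass
class Or:             # PF2(1)
    left: object
    right: object

@dataclass
class And:            # PF2(2)
    left: object
    right: object

def evaluate(formula, assignment):
    """Recursively compute the truth value of a formula under a truth assignment."""
    if isinstance(formula, Var):
        return assignment[formula.name]
    if isinstance(formula, Not):
        return not evaluate(formula.arg, assignment)
    if isinstance(formula, Or):
        return evaluate(formula.left, assignment) or evaluate(formula.right, assignment)
    if isinstance(formula, And):
        return evaluate(formula.left, assignment) and evaluate(formula.right, assignment)
    raise TypeError("not a propositional formula")

# The formula ~(p v q), evaluated at p = True, q = False
print(evaluate(Not(Or(Var("p"), Var("q"))), {"p": True, "q": False}))  # False
```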

Truth Tables

The truth value of a formula under a truth assignment (an assignment of T or F to each of its variables) can be found from its truth table. A formula is said to be satisfiable if at least one truth assignment satisfies it, i.e. makes it true. A formula is called a tautology if every truth assignment satisfies it. A formula is said to be unsatisfiable if no truth assignment satisfies it.

Here are some examples.

  1. The truth table of $\neg p$ $$\begin{array}{|c|c|}\hline p & \neg p\\\hline T & F\\\hline F & T\\\hline\end{array}$$
  2. The truth table of $p\wedge q$ $$\begin{array}{|c|c|c|}\hline p & q & p\wedge q\\\hline T & T & T\\\hline T & F & F\\\hline F & T & F\\\hline F & F & F\\\hline\end{array}$$
  3. The truth table of $p\wedge\neg p$. $$\begin{array}{|c|c|c|}\hline p & \neg p & p\wedge\neg p\\\hline T & F & F\\\hline F & T & F\\\hline\end{array}$$ $p\wedge\neg p$ is unsatisfiable.
  4. The truth table of $p\vee q$ $$\begin{array}{|c|c|c|}\hline p & q & p\vee q\\\hline T & T & T\\\hline T & F & T\\\hline F & T & T\\\hline F & F & F\\\hline\end{array}$$
  5. The truth table of $p\vee\neg p$. $$\begin{array}{|c|c|c|}\hline p & \neg p & p\vee\neg p\\\hline T & F & T\\\hline F & T & T\\\hline\end{array}$$ $p\vee\neg p$ is a tautology.
  6. The truth table of $p\oplus q$ $$\begin{array}{|c|c|c|}\hline p & q & p\oplus q\\\hline T & T & F\\\hline T & F & T\\\hline F & T & T\\\hline F & F & F\\\hline\end{array}$$
  7. The truth table of $p\Longrightarrow q$ $$\begin{array}{|c|c|c|}\hline p & q & p\Longrightarrow q\\\hline T & T & T\\\hline T & F & F\\\hline F & T & T\\\hline F & F & T\\\hline\end{array}$$ It may be puzzling to readers why a false hypothesis can imply anything. Remember that we are doing propositional logic, so any statement of our concern must be either true or false. It thus comes down to deciding what truth value to assign to a conditional statement with a false hypothesis. If we were to assign the truth value F, all hell would break loose. Here is why. Consider the conditional statement “if $p$ then $\neg p$” and assume that $p$ is false. Then $\neg p$ is true since $p$ is a proposition. However, if we assigned the truth value F to the conditional statement, we would in effect be denying its conclusion, so $\neg p$ would have to be false as well. This is a contradiction! To avoid this sort of disastrous contradiction, we assign the truth value T to a conditional statement with a false hypothesis. We claimed here, without proof, that the empty set is a subset of every set. The statement “$x\in\emptyset\Longrightarrow x\in A$” is always true because the hypothesis $x\in\emptyset$ is false. Hence $\emptyset\subseteq A$ for any set $A$.
  8. The truth table of $p\Longleftrightarrow q$. $$\begin{array}{|c|c|c|c|c|}\hline p & q & p\Longrightarrow q & p\Longleftarrow q & p\Longleftrightarrow q\\\hline T & T & T & T & T\\\hline T & F & F & T & F\\\hline F & T & T & F & F\\\hline F & F & T & T & T\\\hline\end{array}$$
  9. The truth table of $p\wedge q\Longrightarrow r$. $$\begin{array}{|c|c|c|c|c|}\hline p & q & r & p\wedge q & p\wedge q\Longrightarrow r\\\hline T & T & T & T & T\\\hline T & T & F & T & F\\\hline T & F & T & F & T\\\hline T & F & F & F & T\\\hline F & T & T & F & T\\\hline F & T & F & F & T\\\hline F & F & T & F & T\\\hline F & F & F & F & T\\\hline\end{array}$$

When two formulae $\alpha$ and $\beta$ have the same truth table, i.e. the same truth value under every truth assignment, we say $\alpha$ is logically equivalent to $\beta$ and write $\alpha\equiv\beta$.

Theorem. $p\Longrightarrow q\equiv\neg p\vee q$.

Proof. $$\begin{array}{|c|c|c|c|c|}\hline p & q & p\Longrightarrow q & \neg p & \neg p\vee q\\\hline T & T & T & F & T\\\hline T & F & F & F & F\\\hline F & T & T & T & T\\\hline F & F & T & T & T\\\hline\end{array}$$

Theorem. $p\oplus q\equiv(p\wedge\neg q)\vee(\neg p\wedge q)$.

Proof. Left to readers as an exercise.
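Both equivalences above (including the one left as an exercise) can be confirmed by brute force over all truth assignments. A short Python sketch of this check (my own illustration):

```python
from itertools import product

def equivalent(f, g, num_vars):
    """f and g (Python functions of num_vars booleans) are logically
    equivalent iff they agree on every truth assignment."""
    return all(f(*vals) == g(*vals)
               for vals in product([True, False], repeat=num_vars))

# p => q  is equivalent to  ~p or q
print(equivalent(lambda p, q: (q if p else True),
                 lambda p, q: (not p) or q, 2))                     # True

# p (+) q  is equivalent to  (p and ~q) or (~p and q)
print(equivalent(lambda p, q: p != q,
                 lambda p, q: (p and not q) or (not p and q), 2))   # True
```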

$\vee$ and $\wedge$ can be considered as operations on formulae and the following laws hold:

Commutative Laws. For any formulae $\alpha$ and $\beta$, \begin{align*}\mbox{(CL1)}\ \alpha\vee\beta&\equiv\beta\vee\alpha\\\mbox{(CL2)}\ \alpha\wedge\beta&\equiv\beta\wedge\alpha\end{align*}

Associative Laws. For any formulae $\alpha$, $\beta$ and $\gamma$, \begin{align*}\mbox{(AL1)}\ (\alpha\vee\beta)\vee\gamma&\equiv\alpha\vee(\beta\vee\gamma)\\\mbox{(AL2)}\ (\alpha\wedge\beta)\wedge\gamma&\equiv\alpha\wedge(\beta\wedge\gamma)\end{align*}

Distributive Laws. For any formulae $\alpha$, $\beta$ and $\gamma$, \begin{align*}\mbox{(DL1)}\ \alpha\vee(\beta\wedge\gamma)&\equiv(\alpha\vee\beta)\wedge(\alpha\vee\gamma)\\\mbox{(DL2)}\ \alpha\wedge(\beta\vee\gamma)&\equiv(\alpha\wedge\beta)\vee(\alpha\wedge\gamma)\end{align*}

These laws can be used to show that two formulae are logically equivalent without using truth tables. A proof by truth table can be quite cumbersome when the formulae involved are complicated.

References.

[1] Essential Discrete Mathematics for Computer Science, Harry Lewis and Rachel Zax, Princeton University Press, 2019

Graphs and Functions

A graph $G$ from a set $A$ to another set $B$ is a subset $G\subseteq A\times B$. If $A=B$ then $G$ is called a relation in $A$. The inverse of a graph $G\subseteq A\times B$ is defined by $G^{-1}=\{(y,x):(x,y)\in G\}\subseteq B\times A$.

A function $f$ from $A$ to $B$ is a graph $f\subseteq A\times B$ such that

F1. For any $x\in A$ there exists $y\in B$ such that $(x,y)\in f$.

F2. $(x,y_1)\in f$ and $(x,y_2)\in f$ $\Longrightarrow$ $y_1=y_2$.

If $(x,y)\in f$ then we write $y=f(x)$. If $f\subseteq A\times B$ is a function, we write $f: A\longrightarrow B$. We call $A$ and $B$ the domain and the codomain of $f$, respectively. The set \begin{align*}f(A)&=\{y\in B: (x,y)\in f\ \mbox{for some}\ x\in A\}\\&=\{f(x):x\in A\}\end{align*} is called the range of $f$. A function $f: A\longrightarrow B$ is onto or surjective if $f(A)=B$. A function $f: A\longrightarrow B$ is one-to-one or injective if $(x_1,y)\in f$ and $(x_2,y)\in f$ $\Longrightarrow$ $x_1=x_2$. A function $f: A\longrightarrow B$ is bijective if it is both injective and surjective. A bijective function is also called a one-to-one correspondence.

Theorem. The inverse graph $f^{-1}\subseteq B\times A$ of a function $f\subseteq A\times B$ is a function if and only if $f$ is bijective.

Proof. ($\Longrightarrow$) Suppose that $f^{-1}\subseteq B\times A$ is a function. $\forall y\in B$, $\exists x\in A$ such that $(y,x)\in f^{-1}\Longrightarrow (x,y)\in f$. Thus $f$ is surjective. $(x_1,y)\in f$ and $(x_2,y)\in f$ $\Longrightarrow$ $(y,x_1)\in f^{-1}$ and $(y,x_2)\in f^{-1}$ $\Longrightarrow$ $x_1=x_2$, since $f^{-1}$ is a function. Thus $f$ is injective. Therefore, $f$ is bijective.

($\Longleftarrow$) Suppose that $f: A\longrightarrow B$ is bijective.

F1. Let $y\in B$. Since $f$ is surjective, $\exists x\in A$ such that $(x,y)\in f\Longrightarrow (y,x)\in f^{-1}$.

F2. $(y,x_1)\in f^{-1}$ and $(y,x_2)\in f^{-1}$ $\Longrightarrow$ $(x_1,y)\in f$ and $(x_2,y)\in f$ $\Longrightarrow$ $x_1=x_2$, since $f$ is injective.
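For finite sets, the conditions F1 and F2, injectivity, surjectivity, and the theorem just proved can all be checked by direct enumeration. A small Python sketch (my own illustration), where a graph is a set of ordered pairs:

```python
def is_function(graph, A, B):
    """F1: every x in A is paired with some y; F2: that y is unique."""
    return all(any(x == a for (x, y) in graph) for a in A) and \
           all(y1 == y2 for (x1, y1) in graph for (x2, y2) in graph if x1 == x2)

def is_injective(graph):
    """(x1, y) in f and (x2, y) in f imply x1 == x2."""
    return all(x1 == x2 for (x1, y1) in graph for (x2, y2) in graph if y1 == y2)

def is_surjective(graph, B):
    """The range {f(x) : x in A} equals the codomain B."""
    return {y for (_, y) in graph} == set(B)

f = {(1, "a"), (2, "b"), (3, "c")}
A, B = {1, 2, 3}, {"a", "b", "c"}
print(is_function(f, A, B), is_injective(f), is_surjective(f, B))   # True True True

f_inverse = {(y, x) for (x, y) in f}      # the inverse graph
print(is_function(f_inverse, B, A))       # True, as f is bijective
```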

Theorem. If $f: A\longrightarrow B$ and $g: B\longrightarrow C$ are bijective then the composition $g\circ f: A\longrightarrow C$ is also bijective.

Proof. F1. Let $x\in A$. Then $\exists y\in B$ such that $y=f(x)$. Also $\exists z\in C$ such that $g(y)=z$. Hence, $$g\circ f(x)=g(f(x))=g(y)=z$$

F2. $$x_1=x_2\Longrightarrow f(x_1)=f(x_2)\Longrightarrow g\circ f(x_1)=g\circ f(x_2)$$

Surjective: Let $z\in C$. Then $\exists y\in B$ such that $g(y)=z$. Also $\exists x\in A$ such that $f(x)=y$. Thus, $g\circ f(x)=z$.

Injective: \begin{align*}g\circ f(x_1)=g\circ f(x_2)&\Longrightarrow g(f(x_1))=g(f(x_2))\\&\Longrightarrow f(x_1)=f(x_2)\\&\Longrightarrow x_1=x_2\end{align*}

References.

[1] Essential Discrete Mathematics for Computer Science, Harry Lewis and Rachel Zax, Princeton University Press, 2019

[2] Set Theory, Charles C. Pinter, Addison Wesley Publishing Company, 1971

Sets

We have already introduced the notion of a set here. Recall that a set is an unordered collection of distinct objects, which are called members or elements. We use $\{\ \}$ to denote a set. For example, the set containing 2, 3, 5 is written as $\{2,3,5\}$. While $\{2,3,5\}=\{3,5,2\}$, the following sets are different.

  1. $\{2,3,5\}$ contains 3 members 2, 3, 5.
  2. $\{\{2\},\{3\},\{5\}\}$ contains three singleton sets $\{2\}$, $\{3\}$, $\{5\}$ as members.
  3. $\{\{2,3,5\}\}$ contains a single element $\{2,3,5\}$.

Example. Some Important Examples of Sets

  1. $\emptyset$=$\{\ \}$=$\varnothing$=the empty set, the set containing no object. If I remember correctly, the notation $\varnothing$ for the empty set was introduced by Nicolas Bourbaki, a secretive group of mathematicians.
  2. $\mathbb{N}=\{0,1,2,3,\cdots\}$, the set of all natural numbers.
  3. $\mathbb{Z}=\{\cdots,-3,-2,-1,0,1,2,3,\cdots\}$, the set of all integers.
  4. $\mathbb{Q}$=the set of all rational numbers.
  5. $\mathbb{R}$=the set of all real numbers.

The notation $\in$ is used to express membership. For example, $a\in S$ says $a$ is a member of the set $S$. The set of elements of $A$ that satisfy the predicate $P$ is written as$$\{x\in A: P(x)\}$$

Example. The set of all even integers can be written as $$\{n\in\mathbb{Z}: n\ \mbox{is even}\},$$ $$\{n\in\mathbb{Z}:n=2m\ \mbox{for some}\ m\in\mathbb{Z}\},$$ or $$\{2m: m\in\mathbb{Z}\}$$

A subset $A$ of a set $B$, written $A\subseteq B$, is defined by $$A\subseteq B\ \mbox{iff}\ [x\in A\Longrightarrow x\in B]$$ If $A\subseteq B$, $B$ is called a superset of $A$. Two sets $A$ and $B$ are said to be equal, written $A=B$, if $A\subseteq B$ and $B\subseteq A$. If $A\subseteq B$ and $A\ne B$, we write $A\subsetneq B$ and call $A$ a proper subset of $B$. The notation $A\subset B$ is also frequently used in mathematics to mean that $A$ is a subset of $B$, though some authors reserve it for proper subsets.

Example.

  1. $\emptyset$ is a subset of every set.
  2. Every set is a subset of itself.
  3. $\mathbb{N}\subseteq\mathbb{Z}\subseteq\mathbb{Q}\subseteq\mathbb{R}$.

The power set of a set $A$, denoted by $\wp(A)$ or $2^A$, is the set of all subsets of $A$. For example, if $A=\{3,17\}$, then $\wp(A)=\{\emptyset,\{3\},\{17\},\{3,17\}\}$. If $|A|=n$, then $|\wp(A)|=2^n$.
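A quick Python sketch (my own illustration) that builds the power set and confirms the $2^n$ count:

```python
from itertools import combinations

def power_set(A):
    """All subsets of A, returned as a list of frozensets (2**|A| of them)."""
    elems = list(A)
    return [frozenset(c) for r in range(len(elems) + 1)
                         for c in combinations(elems, r)]

P = power_set({3, 17})
print(len(P))                  # 4 == 2**2
print([set(s) for s in P])     # [set(), {3}, {17}, {3, 17}]
```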

Now let us discuss set operations. First, we define $A\cup B$, the union of two sets $A$ and $B$, by $$A\cup B=\{x:x\in A\ \mbox{or}\ x\in B\}$$ Next, $A\cap B$, the intersection of two sets $A$ and $B$, is defined by $$A\cap B=\{x:x\in A\ \mbox{and}\ x\in B\}$$ Clearly, $\cup$ and $\cap$ are commutative. $\cup$ and $\cap$ are also associative, namely \begin{align*}(A\cup B)\cup C&=A\cup (B\cup C)\\(A\cap B)\cap C&=A\cap (B\cap C)\end{align*} Lastly, $\bar A=A^c$, the complement of a set $A$ (taken relative to a fixed universal set), is defined by $$\bar A=\{x:x\notin A\}$$ Note that while $\cup$ and $\cap$ are binary operations, $\bar{}$ is a unary operation. The empty set $\emptyset$ is an identity element for $\cup$, i.e. for any set $A$, $A\cup\emptyset=A$. $\emptyset$ is also regarded as an operation, called a nullary operation. Those of you who are interested in learning more details about operations may refer to [2] below. Using the complement, we define $A-B$, the difference of $A$ and $B$, by $$A-B=A\cap\bar B$$ This set minus is not an operation but the composite of the two operations $\cap$ and $\bar{}$. If $A\subseteq X$, $X-A$ is also frequently written as $X\setminus A$.

Theorem. Distributive Laws \begin{align*}A\cap(B\cup C)&=(A\cap B)\cup(A\cap C)\\A\cup(B\cap C)&=(A\cup B)\cap(A\cup C)\end{align*}

Proof. We prove only the first one. The proof of the second one is left as an exercise. \begin{align*}x\in A\cap(B\cup C)&\Longleftrightarrow x\in A\ \mbox{and}\ x\in B\cup C\\&\Longleftrightarrow x\in A\ \mbox{and}\ [x\in B\ \mbox{or}\ x\in C]\\&\Longleftrightarrow[x\in A\ \mbox{and}\ x\in B]\ \mbox{or}\ [x\in A\ \mbox{and}\ x\in C]\\&\Longleftrightarrow x\in A\cap B\ \mbox{or}\ x\in A\cap C\\&\Longleftrightarrow x\in(A\cap B)\cup(A\cap C)\end{align*}

The Cartesian product of two sets $A$ and $B$, denoted by $A\times B$, is the set of ordered pairs whose first component is taken from $A$ and whose second component is taken from $B$. $$A\times B=\{(a,b):a\in A\ \mbox{and}\ b\in B\}$$ Note that $\times$ is not an operation.
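For finite sets, Python's built-in set type provides all of these constructions directly; in the sketch below (my own illustration) the universal set X is an assumption, needed in order to form a complement:

```python
A = {1, 2, 3}
B = {3, 4}
X = {1, 2, 3, 4, 5}            # a universal set, needed for the complement

print(A | B)                    # union: {1, 2, 3, 4}
print(A & B)                    # intersection: {3}
print(X - A)                    # complement of A relative to X: {4, 5}
print(A - B)                    # difference A - B = A intersect complement(B): {1, 2}
print({(a, b) for a in A for b in B})   # Cartesian product A x B
```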

References.

[1] Essential Discrete Mathematics for Computer Science, Harry Lewis and Rachel Zax, Princeton University Press, 2019

[2] A Course in Universal Algebra, Stanley N. Burris and H. P. Sankappanavar, Graduate Texts in Mathematics, Springer-Verlag, 1981. The book is available online for free at the first named author’s web page here.

Strong Induction

The Strong Principle of Mathematical Induction

To prove that a predicate $P(n)$ holds for every number $n\geq n_0$,

Base Case. Prove $P(m)$ for all $m$ such that $n_0\leq m\leq n_1$.

Induction Hypothesis. Let $n\geq n_1$ be arbitrary but fixed, and assume that $P(m)$ holds whenever $n_0\leq m\leq n$.

Induction Step. Assuming the induction hypothesis, show that $P(n+1)$ holds.

Example. Consider the sequence $a_1=3$, $a_2=5$, and for all $n>2$, $a_n=3a_{n-1}-2a_{n-2}$. Prove by strong induction that $a_n=2^n+1$ for all integers $n\geq 1$.

Proof. Let $P(n)$ be the predicate $a_n=2^n+1$.

Base Case. \begin{align*}P(1): a_1=3=2^1+1\\P(2): a_2=5=2^2+1\end{align*}

Induction Hypothesis. Fix $n\geq 2$, and assume that $P(m)$ holds for all $m$ with $1\leq m\leq n$, i.e. $a_m=2^m+1$.

Induction Step. Show that $P(n+1)$, i.e. $a_{n+1}=2^{n+1}+1$, holds. \begin{align*}a_{n+1}&=3a_n-2a_{n-1}\\&=3(2^n+1)-2(2^{n-1}+1)\\&=3\cdot 2^n+3-2^n-2\\&=2\cdot 2^n+1\\&=2^{n+1}+1\end{align*}
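The claim can also be spot-checked numerically. A brief Python sketch (my own illustration):

```python
def a(n):
    """The sequence a_1 = 3, a_2 = 5, a_n = 3*a_(n-1) - 2*a_(n-2)."""
    values = [3, 5]
    while len(values) < n:
        values.append(3 * values[-1] - 2 * values[-2])
    return values[n - 1]

# spot-check the claim a_n = 2**n + 1 for small n
print(all(a(n) == 2**n + 1 for n in range(1, 21)))   # True
```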

Well-Ordering Principle

Any nonempty set of nonnegative integers has a smallest element.

The Well-Ordering Principle may appear to be unrelated to the Principle of Mathematical Induction; however, the two are in fact equivalent to each other.

We now prove, with the aid of the Well-Ordering Principle, the Fundamental Theorem of Arithmetic: every integer $n$ greater than 1 has a unique prime decomposition.

Proof. We proceed by strong induction on $n$.

Base Case. $n_0=2$ is prime, so $2=2^1$ is its own prime decomposition.

Induction Hypothesis. Fix $n\geq 2$, and suppose that we know that every $m$, $2\leq m\leq n$, has a unique prime decomposition (factorization).

Induction Step. We prove that the statement is true for $n+1$. If $n+1$ is a prime, we are done. Suppose that $n+1$ is not a prime. Let $S$ be the set of all its factors greater than 1. By the Well-Ordering Principle, $S$ has a smallest element $p$, which must be prime (otherwise a factor of $p$ greater than 1 would be a smaller member of $S$). Let $q=\frac{n+1}{p}$. Then by the induction hypothesis (since $2\leq q\leq n$), $q$ has the unique prime decomposition $$q=p_1^{e_1}p_2^{e_2}\cdots p_k^{e_k}\ (p_1<p_2<\cdots<p_k)$$ Now $p$ is either $p_1$ or a prime smaller than $p_1$, since $p_1$ divides $n+1$ and $p$ is the smallest factor of $n+1$ greater than 1. If $p=p_1$ then $$n+1=p_1^{e_1+1}p_2^{e_2}\cdots p_k^{e_k}$$ If $p$ is a prime smaller than $p_1$, say $p_0$, then $$n+1=p_0^1p_1^{e_1}p_2^{e_2}\cdots p_k^{e_k}$$ It is easy to show that no number can have more than one prime decomposition; this is left as an exercise for readers.
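The existence half of the argument is effectively an algorithm: repeatedly split off the smallest factor greater than 1, which is always prime. A short Python sketch (my own illustration):

```python
def prime_factorization(n):
    """Return the prime decomposition of n >= 2 as a list of prime factors,
    found by repeatedly removing the smallest factor greater than 1."""
    factors = []
    while n > 1:
        p = next(d for d in range(2, n + 1) if n % d == 0)  # smallest factor > 1, always prime
        factors.append(p)
        n //= p
    return factors

print(prime_factorization(360))   # [2, 2, 2, 3, 3, 5]
```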

References.

[1] Essential Discrete Mathematics for Computer Science, Harry Lewis and Rachel Zax, Princeton University Press, 2019