What if?
An Open Introduction to Non-Classical Logics
Fall 2020
The Open Logic Project
Instigator
Richard Zach, University of Calgary
Editorial Board
Aldo Antonelli,† University of California, Davis
Andrew Arana, Université de Lorraine
Jeremy Avigad, Carnegie Mellon University
Tim Button, University College London
Walter Dean, University of Warwick
Gillian Russell, Dianoia Institute of Philosophy
Nicole Wyatt, University of Calgary
Audrey Yap, University of Victoria
Contributors
Samara Burns, Columbia University
Dana Hägg, University of Calgary
Zesen Qian, Carnegie Mellon University
The Open Logic Project would like to acknowledge the generous support of the Taylor Institute of Teaching and Learning of the University of Calgary, and the Alberta Open Educational Resources (ABOER) Initiative, which is made possible through an investment from the Alberta government.
Contents

Introduction

2 Axiomatic Derivations
  2.1 Introduction
  2.2 Axiomatic Derivations
  2.3 Rules and Derivations
  2.4 Axiom and Rules for the Propositional Connectives
  2.5 Examples of Derivations
  2.6 Proof-Theoretic Notions
  2.7 The Deduction Theorem
  Problems

3 Sequent Calculus
  3.1 The Sequent Calculus
  3.2 Rules and Derivations
  3.3 Propositional Rules
  3.4 Structural Rules
  3.5 Derivations
  3.6 Examples of Derivations
  3.7 Proof-Theoretic Notions
  Problems

5 Three-valued Logics
  5.1 Introduction
  5.2 Łukasiewicz logic
  5.3 Kleene logics
  5.4 Gödel logics
  5.5 Designating not just T
  Problems

6 Sequent Calculus
  6.1 Introduction
  6.2 Rules and Derivations
  6.3 Structural Rules
  6.4 Propositional Rules for Selected Logics

7 Infinite-valued Logics
  7.1 Introduction
  7.2 Łukasiewicz logic
  7.3 Gödel logics
  Problems

9 Axiomatic Derivations
  9.1 Introduction
  9.2 Proofs in K
  9.3 Derived Rules
  9.4 More Proofs in K
  Problems

17 Introduction
  17.1 The Material Conditional
  17.2 Paradoxes of the Material Conditional
  17.3 The Strict Conditional
  17.4 Counterfactuals
  Problems

19 Introduction
  19.1 Constructive Reasoning
  19.2 Syntax of Intuitionistic Logic
  19.3 The Brouwer-Heyting-Kolmogorov Interpretation
  19.4 Natural Deduction
  19.5 Axiomatic Derivations
  Problems

20 Semantics
  20.1 Introduction
  20.2 Relational models
  20.3 Semantic Notions
  20.4 Topological Semantics
  Problems

X Appendices

A Sets
  A.1 Extensionality
  A.2 Subsets and Power Sets
  A.3 Some Important Sets
  A.4 Unions and Intersections
  A.5 Pairs, Tuples, Cartesian Products
  A.6 Russell’s Paradox
  Problems

B Relations
  B.1 Relations as Sets
  B.2 Special Properties of Relations
  B.3 Equivalence Relations
  B.4 Orders
  B.5 Graphs
  B.6 Operations on Relations
  Problems

C Proofs
  C.1 Introduction
  C.2 Starting a Proof
  C.3 Using Definitions
  C.4 Inference Patterns
  C.5 An Example
  C.6 Another Example
  C.7 Proof by Contradiction
  C.8 Reading Proofs
  C.9 I Can’t Do It!

D Induction
  D.1 Introduction
  D.2 Induction on N
  D.3 Strong Induction
  D.4 Inductive Definitions
  D.5 Structural Induction
  D.6 Relations and Functions
  Problems

Bibliography
Introduction
Classical logic is very useful, widely used, has a long history,
and is relatively simple. But it has limitations: for instance, it
does not (and cannot) deal well with certain locutions of natural
language such as tense and subjunctive mood, nor with certain
constructions such as “Audrey knows that p.” It makes certain
assumptions, for instance that every sentence is either true or
false and never both. It pronounces some formulas tautologies
and some arguments as valid, even though these tautologies and
arguments formalize arguments in English which some do not
consider true or valid, at least not obviously. Thus it seems there
are examples where classical logic is not expressive enough, or
even where classical logic gets things wrong.
This book discusses some alternative, non-classical logics.
These non-classical logics are either more expressive than clas-
sical logic or have different tautologies or valid arguments. For
instance, temporal logic extends classical logic by operators that
express tense; conditional logics have an additional, different con-
ditional (“if—then”) that does not suffer from the so-called para-
doxes of the material conditional. All of these logics extend classi-
cal logic by new operators or connectives, and fall into the broad
category of intensional logics. Other logics such as many-valued, intuitionistic, and paraconsistent logics have the same basic connectives as classical logic, but different inferences count as valid. In many-valued and intuitionistic logic, for instance, the law of excluded middle A ∨ ¬A fails to hold; in paraconsistent logic the
that,” “it ought to be the case that,” or “it will always be true that.”
These are epistemic, doxastic, deontic, and temporal modalities,
respectively. Different interpretations of □ will make different for-
mulas logically true, and pronounce different inferences as valid.
For instance, everything necessary and everything known is true,
so □A →A is a logical truth on the alethic and epistemic interpre-
tations. By contrast, not everything believed nor everything that
ought to be the case actually is the case, so □A → A is not a log-
ical truth on the doxastic or deontic interpretations. We discuss
modal logics in general in parts III and IV and epistemic logics
in particular in part V.
In order to deal with different interpretations of the modal op-
erators, the semantics is extended by a relation between worlds,
the so-called accessibility relation. Then M,w ⊩ □A if M,v ⊩ A
for all worlds v which are accessible from w. The resulting se-
mantics is very versatile and powerful, and the basic idea can be
used to provide semantic interpretations for logics based on other
intensional operators. One such logic is a close relative of modal
logic called temporal logic. Instead of having just one modality
□ (plus its dual ◇), it has temporal operators such as “always P ,”
“p will be true”, etc. We study these in part VI.
Whereas the material conditional is best read as an English
indicative conditional (“If p is true then q is true”), subjunctive
conditionals are in the subjunctive mood: “if p were true then
q would be true.” While a material conditional with a false an-
tecedent is true, a subjunctive conditional need not be, e.g., “if
humans had tails, they would be able to fly.” In part VII, we
discuss logics of counterfactual conditionals.
Intuitionistic logic is a constructive logic based on L. E.
J. Brouwer’s branch of constructive mathematics. Intuitionistic
logic is philosophically interesting for this reason—it plays an
important role in constructive accounts of mathematics—but was
also proposed as a logic superior to classical logic by the influen-
tial English philosopher Michael Dummett in the 20th century.
As mentioned above, intuitionistic logic is non-classical because
it has fewer valid inferences and theorems, e.g., A ∨ ¬A and
Remind me, how does logic work again?
CHAPTER 1
Syntax and
Semantics
1.1 Introduction
Propositional logic deals with formulas that are built from propo-
sitional variables using the propositional connectives ¬, ∧, ∨, →,
and ↔. Intuitively, a propositional variable p stands for a sen-
tence or proposition that is true or false. Whenever the “truth
value” of the propositional variable in a formula is determined,
so is the truth value of any formulas formed from them using
propositional connectives. We say that propositional logic is truth
functional, because its semantics is given by functions of truth val-
ues. In particular, in propositional logic we leave out of consider-
ation any further determination of truth and falsity, e.g., whether
something is necessarily true rather than just contingently true,
or whether something is known to be true, or whether something
is true now rather than was true or will be true. We only consider
two truth values true (T) and false (F), and so exclude from dis-
cussion the possibility that a statement may be neither true nor
false, or only half true. We also concentrate only on connectives
where the truth value of a formula built from them is completely
determined by the truth values of its parts (and not, say, on its
meaning). In particular, whether the truth value of conditionals
1. ⊥ is an atomic formula.
1. ⊤ abbreviates ¬⊥.
2. A ↔ B abbreviates (A → B) ∧ (B → A).
1.3 Preliminaries
Theorem 1.4 (Principle of induction on formulas). If some
property P holds for all the atomic formulas and is such that
1. ⊥.
v(⊥) = F;
v(pn) = v(pn);
v(¬A) = T if v(A) = F, and F otherwise.
v(A ∧ B) = T if v(A) = T and v(B) = T, and F if v(A) = F or v(B) = F.
v(A ∨ B) = T if v(A) = T or v(B) = T, and F if v(A) = F and v(B) = F.
v(A → B) = T if v(A) = F or v(B) = T, and F if v(A) = T and v(B) = F.
A   ¬A
T   F
F   T

A B   A ∧ B
T T   T
T F   F
F T   F
F F   F

A B   A ∨ B
T T   T
T F   T
F T   T
F F   F

A B   A → B
T T   T
T F   F
F T   T
F F   T
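Truth-functionality can be made concrete in a few lines of code. The sketch below is only an illustration (the tuple representation of formulas and the function names are assumptions, not notation from the text): it computes the truth value of a formula under a valuation and checks a simple tautology by running through all valuations.

```python
# A small sketch of classical truth-functional evaluation.
# Formulas: 'bot', a variable name like 'p', or tuples
# ('not', A), ('and', A, B), ('or', A, B), ('imp', A, B).

def value(v, A):
    """Compute the truth value (True/False) of formula A under valuation v."""
    if A == 'bot':
        return False
    if isinstance(A, str):          # propositional variable
        return v[A]
    op = A[0]
    if op == 'not':
        return not value(v, A[1])
    if op == 'and':
        return value(v, A[1]) and value(v, A[2])
    if op == 'or':
        return value(v, A[1]) or value(v, A[2])
    if op == 'imp':
        return (not value(v, A[1])) or value(v, A[2])
    raise ValueError(f"unknown connective {op}")

# Example: p -> (q -> p) is true under every valuation of p and q.
from itertools import product
taut = ('imp', 'p', ('imp', 'q', 'p'))
print(all(value(dict(zip(['p', 'q'], row)), taut)
          for row in product([True, False], repeat=2)))   # True
```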
Proof. By induction on A. □
1. A ≡ ⊥: v ⊭ A.
2. A ≡ pi : v ⊨ A iff v(pi ) = T.
3. A ≡ ¬B: v ⊨ A iff v ⊭ B.
4. A ≡ (B ∧ C ): v ⊨ A iff v ⊨ B and v ⊨ C .
Proof. By induction on A. □
2. If 𝛤 ⊨ A and 𝛤 ⊨ A → B then 𝛤 ⊨ B;
Proof. Exercise. □
Proof. Exercise. □
Proof. Exercise. □
Problems
Problem 1.1. Prove Proposition 1.5
Axiomatic
Derivations
2.1 Introduction
Logics commonly have both a semantics and a derivation system.
The semantics concerns concepts such as truth, satisfiability, va-
lidity, and entailment. The purpose of derivation systems is to
provide a purely syntactic method of establishing entailment and
validity. They are purely syntactic in the sense that a derivation
in such a system is a finite syntactic object, usually a sequence
(or other finite arrangement) of sentences or formulas. Good
derivation systems have the property that any given sequence or
arrangement of sentences or formulas can be verified mechani-
cally to be “correct.”
The simplest (and historically first) derivation systems for
first-order logic were axiomatic. A sequence of formulas counts
as a derivation in such a system if each individual formula in it
is either among a fixed set of “axioms” or follows from formulas
coming before it in the sequence by one of a fixed number of “in-
ference rules”—and it can be mechanically verified if a formula
is an axiom and whether it follows correctly from other formulas
by one of the inference rules. Axiomatic derivation systems are
easy to describe—and also easy to handle meta-theoretically—
1. ⊢ A if and only if ⊨ A
2. 𝛤 ⊢ A if and only if 𝛤 ⊨ A
1. A is an axiom, or
A → (B → A) B → (B ∨ C ) (B ∧ C ) → B
1. Ai ∈ 𝛤; or
2. Ai is an axiom; or
1. Ai ∈ 𝛤; or
2. Ai is an axiom; or
(A ∧ B) → A (2.1)
(A ∧ B) → B (2.2)
A → (B → (A ∧ B)) (2.3)
A → (A ∨ B) (2.4)
A → (B ∨ A) (2.5)
(A → C ) → ((B → C ) → ((A ∨ B) → C )) (2.6)
A → (B → A) (2.7)
(A → (B → C )) → ((A → B) → (A → C )) (2.8)
(A → B) → ((A → ¬B) → ¬A) (2.9)
¬A → (A → B) (2.10)
⊤ (2.11)
⊥→A (2.12)
(A → ⊥) → ¬A (2.13)
¬¬A → A (2.14)
D → (D → D)
(A → (B → C )) → ((A → B) → (A → C ))
1. A→B Hyp
2. B →C Hyp
3. (B → C ) → (A → (B → C )) eq. (2.7)
4. A → (B → C ) 2, 3, mp
5. (A → (B → C )) →
((A → B) → (A → C )) eq. (2.8)
6. ((A → B) → (A → C )) 4, 5, mp
7. A →C 1, 6, mp
The lines labelled “Hyp” (for “hypothesis”) indicate that the for-
mula on that line is an element of 𝛤.
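The requirement that derivations be mechanically checkable can itself be illustrated in code. The following sketch is not part of the text and its step format is an assumption: it checks a derivation from 𝛤 given as a list of steps, each a hypothesis, an axiom (trusted here via a flag), or an application of mp to two earlier lines.

```python
# Sketch of a checker for axiomatic (Hilbert-style) derivations.
# A step is ('hyp', A), ('ax', A), or ('mp', A, i, j), meaning A follows
# from lines i and j (0-based) by modus ponens.  Formulas are nested
# tuples; ('imp', B, C) stands for B -> C.

def check(derivation, hypotheses):
    proved = []
    for step in derivation:
        kind, A = step[0], step[1]
        if kind == 'hyp':
            assert A in hypotheses, "not a hypothesis"
        elif kind == 'ax':
            pass  # a full checker would test A against the axiom schemas
        elif kind == 'mp':
            i, j = step[2], step[3]
            # line j must be (line i) -> A
            assert proved[j] == ('imp', proved[i], A), "bad mp step"
        proved.append(A)
    return proved[-1]  # the formula derived

# The derivation of B from {A, A -> B} given above:
A, B = 'A', 'B'
deriv = [('hyp', A), ('hyp', ('imp', A, B)), ('mp', B, 0, 1)]
print(check(deriv, {A, ('imp', A, B)}))   # B
```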
Proof. Exercise. □
1. A Hyp.
2. A→B Hyp.
3. B 1, 2, MP
By Proposition 2.16, 𝛤 ⊢ B. □
The most important result we’ll use in this context is the de-
duction theorem:
𝛤 ⊢ A → (C → B);
𝛤 ⊢ A → C.
But also
Notice how eq. (2.7) and eq. (2.8) were chosen precisely so
that the Deduction Theorem would hold.
The following are some useful facts about derivability, which
we leave as exercises.
5. If 𝛤 ⊢ ¬¬A then 𝛤 ⊢ A;
Problems
Problem 2.1. Show that the following hold by exhibiting deriva-
tions from the axioms:
1. (A ∧ B) → (B ∧ A)
2. ((A ∧ B) → C ) → (A → (B → C ))
3. ¬(A ∨ B) → ¬A
Sequent
calculus
3.1 The Sequent Calculus
While many derivation systems operate with arrangements of sen-
tences, the sequent calculus operates with sequents. A sequent is
an expression of the form
A1, . . . , Am ⇒ B1, . . . , Bn,
A sequent 𝛤 ⇒ 𝛥 expresses that the corresponding formula

(A1 ∧ · · · ∧ Am) → (B1 ∨ · · · ∨ Bn)
holds. There are two special cases: where 𝛤 is empty and when
𝛥 is empty. When 𝛤 is empty, i.e., m = 0, ⇒ 𝛥 holds iff B 1 ∨· · ·∨
Bn holds. When 𝛥 is empty, i.e., n = 0, 𝛤 ⇒ holds iff ¬(A1 ∧
· · · ∧ Am ) does. We say a sequent is valid iff the corresponding
sentence is valid.
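Because validity of a sequent reduces to validity of its corresponding sentence, propositional sequents can be checked by brute force over valuations. The following is a minimal sketch, assuming the same tuple representation of formulas as in the earlier sketch; it is an illustration, not part of the text.

```python
from itertools import product

OPS = {'not': lambda a: not a,
       'and': lambda a, b: a and b,
       'or':  lambda a, b: a or b,
       'imp': lambda a, b: (not a) or b}

def value(v, A):
    if A == 'bot':
        return False
    if isinstance(A, str):
        return v[A]
    return OPS[A[0]](*(value(v, part) for part in A[1:]))

def variables(A, acc=None):
    acc = set() if acc is None else acc
    if isinstance(A, str):
        if A != 'bot':
            acc.add(A)
    else:
        for part in A[1:]:
            variables(part, acc)
    return acc

def sequent_valid(Gamma, Delta):
    """Gamma => Delta is valid iff no valuation makes every formula in
    Gamma true and every formula in Delta false."""
    vs = sorted(set().union(*(variables(A) for A in list(Gamma) + list(Delta))) or {'p'})
    for row in product([True, False], repeat=len(vs)):
        v = dict(zip(vs, row))
        if all(value(v, A) for A in Gamma) and not any(value(v, B) for B in Delta):
            return False
    return True

# The initial sequent A => A and the sequent ¬A ∨ B, A => B are valid:
print(sequent_valid(['p'], ['p']))                              # True
print(sequent_valid([('or', ('not', 'p'), 'q'), 'p'], ['q']))   # True
```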
If 𝛤 is a sequence of sentences, we write 𝛤,A for the result
of appending A to the right end of 𝛤 (and A, 𝛤 for the result of
appending A to the left end of 𝛤). If 𝛥 is a sequence of sentences
also, then 𝛤, 𝛥 is the concatenation of the two sequences.
1. A ⇒ A
2. ⊥ ⇒
Rules for ∧

   A, 𝛤 ⇒ 𝛥                 B, 𝛤 ⇒ 𝛥
  ------------- ∧L         ------------- ∧L
  A ∧ B, 𝛤 ⇒ 𝛥             A ∧ B, 𝛤 ⇒ 𝛥

  𝛤 ⇒ 𝛥,A    𝛤 ⇒ 𝛥,B
  --------------------- ∧R
      𝛤 ⇒ 𝛥,A ∧ B

Rules for ∨

  A, 𝛤 ⇒ 𝛥    B, 𝛤 ⇒ 𝛥
  ----------------------- ∨L
      A ∨ B, 𝛤 ⇒ 𝛥

   𝛤 ⇒ 𝛥,A                  𝛤 ⇒ 𝛥,B
  -------------- ∨R        -------------- ∨R
  𝛤 ⇒ 𝛥,A ∨ B              𝛤 ⇒ 𝛥,A ∨ B

Rules for →

  𝛤 ⇒ 𝛥,A    B, 𝛱 ⇒ 𝛬               A, 𝛤 ⇒ 𝛥,B
  ------------------------- →L      --------------- →R
    A → B, 𝛤, 𝛱 ⇒ 𝛥, 𝛬              𝛤 ⇒ 𝛥,A → B

Weakening

   𝛤 ⇒ 𝛥                𝛤 ⇒ 𝛥
  ---------- WL        ---------- WR
  A, 𝛤 ⇒ 𝛥             𝛤 ⇒ 𝛥,A

Contraction

  A,A, 𝛤 ⇒ 𝛥            𝛤 ⇒ 𝛥,A,A
  ------------ CL       ------------ CR
   A, 𝛤 ⇒ 𝛥              𝛤 ⇒ 𝛥,A

Exchange

  𝛤,A,B, 𝛱 ⇒ 𝛥           𝛤 ⇒ 𝛥,A,B, 𝛬
  --------------- XL     --------------- XR
  𝛤,B,A, 𝛱 ⇒ 𝛥           𝛤 ⇒ 𝛥,B,A, 𝛬

Cut

  𝛤 ⇒ 𝛥,A    A, 𝛱 ⇒ 𝛬
  ----------------------- Cut
       𝛤, 𝛱 ⇒ 𝛥, 𝛬
3.5 Derivations
We’ve said what an initial sequent looks like, and we’ve given
the rules of inference. Derivations in the sequent calculus are
inductively generated from these: each derivation either is an
initial sequent on its own, or consists of one or two derivations
followed by an inference.
We can now apply another rule, say XL, which allows us to switch
two sentences on the left. So, the following is also a correct
derivation:
C ⇒C
WL
D,C ⇒ C
XL
C ,D ⇒ C
In our case, the premises must match the last sequents of the
derivations ending in the premises. That means that 𝛤 is C ,D, 𝛥
is empty, A is C and B is D. So the conclusion, if the inference
should be correct, is C ,D ⇒ C ∧ D.
C ⇒C
WL
D,C ⇒ C D ⇒ D
XL WL
C ,D ⇒ C C ,D ⇒ D
∧R
C ,D ⇒ C ∧ D
A∧B ⇒ A
There are two options for what could have been the upper sequent
of the ∧L inference: we could have an upper sequent of A ⇒ A,
or of B ⇒ A. Clearly, A ⇒ A is an initial sequent (which is a
good thing), while B ⇒ A is not derivable in general. We fill in
the upper sequent:
A ⇒ A
∧L
A∧B ⇒ A
¬A ∨ B ⇒ A → B
A, ¬A ∨ B ⇒ B
→R
¬A ∨ B ⇒ A → B
¬A,A ⇒ B B,A ⇒ B
∨L
¬A ∨ B,A ⇒ B
XR
A, ¬A ∨ B ⇒ B
→R
¬A ∨ B ⇒ A→B
A ⇒ A
WR
A ⇒ A,B B ⇒ B
XR WL
A ⇒ B,A A,B ⇒ B
¬L XL
¬A,A ⇒ B B,A ⇒ B
∨L
¬A ∨ B,A ⇒ B
XR
A, ¬A ∨ B ⇒ B
→R
¬A ∨ B ⇒ A→B
¬A ∨ ¬B ⇒ ¬(A ∧ B)
A ∧ B, ¬A ∨ ¬B ⇒
¬R
¬A ∨ ¬B ⇒ ¬(A ∧ B)
A, ¬A ∨ ¬B ⇒
∧L
A ∧ B, ¬A ∨ ¬B ⇒
¬R
¬A ∨ ¬B ⇒ ¬(A ∧ B)
?
A ⇒ A A ⇒ B
¬L ¬L
¬A,A ⇒ ¬B,A ⇒
∨L
¬A ∨ ¬B,A ⇒
XL
A, ¬A ∨ ¬B ⇒
∧L
A ∧ B, ¬A ∨ ¬B ⇒
¬R
¬A ∨ ¬B ⇒ ¬(A ∧ B)
The top of the right branch cannot be reduced any further, and
it cannot be brought by way of structural inferences to an initial
sequent, so this is not the right path to take. So clearly, it was a
mistake to apply the ∧L rule above. Going back to what we had
before and carrying out the ∨L rule instead, we get
¬A,A ∧ B ⇒ ¬B,A ∧ B ⇒
∨L
¬A ∨ ¬B,A ∧ B ⇒
XL
A ∧ B, ¬A ∨ ¬B ⇒
¬R
¬A ∨ ¬B ⇒ ¬(A ∧ B)
(We could have carried out the ∧ rules lower than the ¬ rules in
these steps and still obtained a correct derivation).
A ⇒
¬R
⇒ A ⇒ ¬A
∨R ∨R
⇒ A ∨ ¬A ⇒ A ∨ ¬A
⇒ A ∨ ¬A,A
∨R
⇒ A ∨ ¬A,A ∨ ¬A
CR
⇒ A ∨ ¬A
Now we can apply ∨R a second time, and also get ¬A, which
leads to a complete derivation.
A ⇒ A
¬R
⇒ A, ¬A
∨R
⇒ A,A ∨ ¬A
XR
⇒ A ∨ ¬A,A
∨R
⇒ A ∨ ¬A,A ∨ ¬A
CR
⇒ A ∨ ¬A
B,B,C ⇒ A
CL
B,C ⇒ A
XL
C ,B ⇒ A
WL
C ,C ,B ⇒ A
𝜋0 𝜋1
𝛤0 ⇒ A A, 𝛥0 ⇒ B
Cut
𝛤0 , 𝛥0 ⇒ B
Proof. Exercise. □
Problems
Problem 3.1. Give derivations of the following sequents:
1. ⇒ ¬(A → B) → (A ∧ ¬B)
2. (A ∧ B) → C ⇒ (A → C ) ∨ (B → C )
Does everything have to be true or false?
CHAPTER 4
Syntax and
Semantics
4.1 Introduction
In classical logic, we deal with formulas that are built from propo-
sitional variables using the propositional connectives ¬, ∧, ∨, →,
and ↔. When we define a semantics for classical logic, we do so
using the two truth values T and F. We interpret propositional
variables in a valuation v, which assigns these truth values T, F
to the propositional variables. Any valuation then determines a
truth value v(A) for any formula A, and a formula A is satisfied in
a valuation v, v ⊨ A, iff v(A) = T.
Many-valued logics are generalizations of classical two-valued
logic by allowing more truth values than just T and F. So in
many-valued logic, a valuation v is a function assigning to every
propositional variable p one of a range of possible truth values.
We’ll generally call the set of allowed truth values V . Classical
logic is a many-valued logic where V = {T, F}, and the truth
value v(A) is computed using the familiar characteristic truth
tables for the connectives.
Once we add additional truth values, we have more than one
natural option for how to compute v(A) for the connectives we
read as “and,” “or,” “not,” and “if—then.” So a many-valued
logic is determined not just by the set of truth values, but also
by the truth functions we decide to use for each connective. Once
these are selected for a many-valued logic L, however, the truth
value vL (A) is uniquely determined by the valuation, just like in
classical logic. Many-valued logics, like classical logic, are truth
functional.
With these semantic building blocks in hand, we can go on to
define the analogs of the semantic concepts of tautology, entail-
ment, and satisfiability. In classical logic, a formula is a tautology
if its truth value v(A) = T for any v. In many-valued logic, we
have to generalize this a bit as well. First of all, there is no re-
quirement that the set of truth values V contains T. For instance,
some many-valued logics use numbers, such as all rational num-
bers between 0 and 1 as their set of truth values. In such a case,
1 usually plays the role of T. In other logics, not just one but sev-
eral truth values do. So, we require that every many-valued logic
have a set V + of designated values. We can then say that a formula
is satisfied in a valuation v, v ⊨L A, iff vL (A) ∈ V + . A formula A
is a tautology of the logic, ⊨L A, iff v(A) ∈ V + for any v. And,
finally, we say that A is entailed by a set of formulas, 𝛤 ⊨L A, if
every valuation that satisfies all the formulas in 𝛤 also satisfies A.
4.3 Formulas
4.4 Matrices
A many-valued logic is defined by its language, its set of truth
values V , a subset of designated truth values, and truth functions
for its connectives. Together, these elements are called a matrix.
  ¬̃           ∧̃   T  F        ∨̃   T  F        →̃   T  F
T  F          T   T  F        T   T  T        T   T  F
F  T          F   F  F        F   T  F        F   T  T
1. v(pn) = v(pn);
2. v(★(A1, . . . , An)) = ★̃L(v(A1), . . . , v(An)).
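A matrix in this sense can be represented directly: a set of truth values, a set of designated values, and one truth function per connective. The following sketch is an illustration (the class and method names are my own, not the book's); it evaluates formulas in an arbitrary finite matrix and decides tautologyhood and entailment by enumerating valuations.

```python
from itertools import product

class Matrix:
    """A finite matrix: truth values, designated values, truth functions."""
    def __init__(self, values, designated, functions):
        self.values = list(values)
        self.designated = set(designated)
        self.functions = functions        # dict: connective -> function

    def value(self, v, A):
        if isinstance(A, str):            # propositional variable
            return v[A]
        op, *parts = A
        return self.functions[op](*(self.value(v, p) for p in parts))

    def valuations(self, variables):
        variables = sorted(variables)
        for row in product(self.values, repeat=len(variables)):
            yield dict(zip(variables, row))

    def is_tautology(self, A, variables):
        return all(self.value(v, A) in self.designated
                   for v in self.valuations(variables))

    def entails(self, Gamma, A, variables):
        return all(self.value(v, A) in self.designated
                   for v in self.valuations(variables)
                   if all(self.value(v, B) in self.designated for B in Gamma))

# Classical logic as a matrix with V = {T, F} and V+ = {T}:
C = Matrix(['T', 'F'], ['T'], {
    'not': lambda a: 'T' if a == 'F' else 'F',
    'and': lambda a, b: 'T' if a == b == 'T' else 'F',
    'or':  lambda a, b: 'T' if 'T' in (a, b) else 'F',
    'imp': lambda a, b: 'T' if a == 'F' or b == 'T' else 'F',
})
print(C.is_tautology(('or', 'p', ('not', 'p')), {'p'}))   # True
```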
Proof. Exercise. □
In classical logic we can connect entailment and the condi-
tional. For instance, we have the validity of modus ponens: If 𝛤 ⊨ A
and 𝛤 ⊨ A → B then 𝛤 ⊨ B. Another important relationship be-
tween ⊨ and → in classical logic is the semantic deduction theo-
rem: 𝛤 ⊨ A → B if and only if 𝛤 ∪ {A} ⊨ B. These results do not
always hold in many-valued logics. Whether they do depends on
the truth function →̃.
1. ¬̃L(x) = ¬̃C(x) if x = T or x = F;
2. ∧̃L(x, y) = ∧̃C(x, y),
3. ∨̃L(x, y) = ∨̃C(x, y),
4. →̃L(x, y) = →̃C(x, y), if x, y ∈ {T, F}.
Then, for any valuation v into V such that v(p) ∈ {T, F}, vL (A) =
vC (A).
Proof. By induction on A.
1. If A ≡ p is atomic, we have vL (A) = v(p) = vC (A).
2. If A ≡ ¬B, we have

   vL(A) = ¬̃L(vL(B))    by Definition 4.8
         = ¬̃L(vC(B))    by inductive hypothesis
         = ¬̃C(vC(B))    by assumption (1), since vC(B) ∈ {T, F}
         = vC(A)         by Definition 4.8.

3. If A ≡ (B ∧ C), we have

   vL(A) = ∧̃L(vL(B), vL(C))    by Definition 4.8
Problems
Problem 4.1. Prove Proposition 4.11
CHAPTER 5
Three-valued
Logics
5.1 Introduction
If we just add one more value U to T and F, we get a three-
valued logic. Even though there is only one more truth value, the
possibilities for defining the truth-functions for ¬, ∧, ∨, and →
are quite numerous. Then a logic might use any combination of
these truth functions, and you also have a choice of making only
T designated, or both T and U.
We present here a selection of the most well-known three-
valued logics, their motivations, and some of their properties.
∧̃(T, U) = ∧̃(U, T) = U.
∧̃(F, U) = ∧̃(U, F) = F.
1 Łukasiewicz here uses “possible” in a way that is uncommon today, namely
to mean possible but not necessary.
The other values (if the arguments are settled truth values, T or F) are as in classical logic.
For the conditional, the situation is a little trickier. Suppose q is a future contingent statement. If p is false, then p → q will be true, regardless of how q turns out, so we should set →̃(F, U) = T. And if p is true, then q → p will be true, regardless of what q turns out to be, so →̃(U, T) = T. If p is true, then p → q might turn out to be true or false, so →̃(T, U) = U. Similarly, if p is false, then q → p might turn out to be true or false, so →̃(U, F) = U. This leaves the case where p and q are both future contingents. On the basis of the motivation, we should really assign U in this case. However, this would make A → A not a tautology. Łukasiewicz had no trouble giving up A ∨ ¬A and ¬(A ∧ ¬A), but balked at giving up A → A. So he stipulated →̃(U, U) = T.
  ¬̃            ∧̃Ł3  T  U  F        ∨̃Ł3  T  U  F        →̃Ł3  T  U  F
T  F           T    T  U  F        T    T  T  T        T    T  U  F
U  U           U    U  U  F        U    T  U  U        U    T  T  U
F  T           F    F  F  F        F    T  U  F        F    T  T  T
A formula containing only the connectives ¬, ∧, and ∨ will take the truth value U if all its propositional variables
are assigned U. So for instance, the classical tautologies p ∨ ¬p
and ¬(p ∧¬p) are not tautologies in Ł 3 , since v(A) = U whenever
v(p) = U.
On valuations where v(p) = T or F, v(A) will coincide with
its classical truth value.
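The Ł3 truth functions can be read off the tables directly. Here is a small illustrative Python sketch (not part of the text) that tabulates them as dictionaries and confirms the observation just made: p ∨ ¬p and ¬(p ∧ ¬p) take the value U whenever p does.

```python
# Łukasiewicz three-valued connectives, as dictionaries keyed by input values.
NEG  = {'T': 'F', 'U': 'U', 'F': 'T'}
CONJ = {(x, y): min(x, y, key='FUT'.index) for x in 'TUF' for y in 'TUF'}
DISJ = {(x, y): max(x, y, key='FUT'.index) for x in 'TUF' for y in 'TUF'}
IMPL = {('T', 'T'): 'T', ('T', 'U'): 'U', ('T', 'F'): 'F',
        ('U', 'T'): 'T', ('U', 'U'): 'T', ('U', 'F'): 'U',
        ('F', 'T'): 'T', ('F', 'U'): 'T', ('F', 'F'): 'T'}

for p in 'TUF':
    lem  = DISJ[(p, NEG[p])]          # p ∨ ¬p
    ncon = NEG[CONJ[(p, NEG[p])]]     # ¬(p ∧ ¬p)
    print(p, lem, ncon)
# T T T
# U U U
# F T T
```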
p q ¬ p → (p → q)
T T F T T T T T
T U F T T T U U
T F F T T T F F
U T U U T U T T
U U U U T U T U
U F U U T U U F
F T T F T F T T
F U T F T F T U
F F T F T F T F
One might therefore perhaps think that although not all clas-
sical tautologies are tautologies in Ł 3 , they should at least take
either the value T or the value U on every valuation. This is not
the case. A counterexample is given by

¬(p ↔ ¬p),

which is F if p is U.
Łukasiewicz hoped to build a logic of possibility on the basis
of his three-valued system, by introducing a one-place connec-
tive ◇A (for “A is possible”) and a corresponding □A (for “A is
necessary”):
  ◇̃            □̃
T  T           T  T
U  T           U  F
F  F           F  F
In other words, p is possible iff it is not already settled as false;
and p is necessary iff it is already settled as true.
However, the shortcomings of this proposed modal logic soon
became evident: However things turn out, p ∧ ¬p can never turn
out to be true. So even if it is not now settled (and therefore unde-
termined), it should count as impossible, i.e., ¬◇(p ∧ ¬p) should
be a tautology. However, if v(p) = U, then v(¬◇(p ∧ ¬p)) = F.
Although Łukasiewicz was correct that two truth values will not be enough to accommodate modal distinctions such as possibility and necessity, introducing a third truth value is also not enough.
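The failure just described is easy to check mechanically. The sketch below (again only an illustration) adds the ◇ and □ tables to the Ł3 connectives and computes ¬◇(p ∧ ¬p) for each value of p; the formula comes out F rather than T when p is U, so it is not a tautology.

```python
# Łukasiewicz's proposed modal operators on top of Ł3.
NEG  = {'T': 'F', 'U': 'U', 'F': 'T'}
CONJ = {(x, y): min(x, y, key='FUT'.index) for x in 'TUF' for y in 'TUF'}
POSS = {'T': 'T', 'U': 'T', 'F': 'F'}   # ◇
NEC  = {'T': 'T', 'U': 'F', 'F': 'F'}   # □

for p in 'TUF':
    print(p, NEG[POSS[CONJ[(p, NEG[p])]]])   # value of ¬◇(p ∧ ¬p)
# T T
# U F
# F T
```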
  ¬̃            ∧̃Ks  T  U  F        ∨̃Ks  T  U  F        →̃Ks  T  U  F
T  F           T    T  U  F        T    T  T  T        T    T  U  F
U  U           U    U  U  F        U    T  U  U        U    T  U  U
F  T           F    F  F  F        F    T  U  F        F    T  T  T
  ¬̃            ∧̃Kw  T  U  F        ∨̃Kw  T  U  F        →̃Kw  T  U  F
T  F           T    T  U  F        T    T  U  T        T    T  U  F
U  U           U    U  U  U        U    U  U  U        U    U  U  U
F  T           F    F  U  F        F    T  U  F        F    T  U  T
¬̃(U) = ∨̃(U, U) = ∧̃(U, U) = →̃(U, U) = U
  ¬̃G           ∧̃G  T  U  F        ∨̃G  T  U  F        →̃G  T  U  F
T  F           T   T  U  F        T   T  T  T        T   T  U  F
U  F           U   U  U  F        U   T  U  U        U   T  T  F
F  T           F   F  F  F        F   T  U  F        F   T  T  T
You’ll notice that the truth tables for ∧ and ∨ are the same as in Łukasiewicz and strong Kleene logic, but the truth tables for ¬ and → differ for each. In Gödel logic, ¬̃(U) = F. In contrast to Łukasiewicz logic and Kleene logic, →̃(U, F) = F; in contrast to Kleene logic (but as in Łukasiewicz logic), →̃(U, U) = T.
As the connection to intuitionistic logic alluded to above sug-
gests, G3 is close to intuitionistic logic. All intuitionistic truths
are tautologies in G3 , and many classical tautologies that are not
valid intuitionistically also fail to be tautologies in G3. For instance:

p ∨ ¬p
¬¬p → p
(p → q) → (¬p ∨ q)
¬(p ∧ q) → (¬p ∨ ¬q)
((p → q) → p) → p
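Failures of this kind can be confirmed by running through the nine valuations of two variables. The following is an illustrative sketch with the Gödel tables, checking p ∨ ¬p, ¬¬p → p, and Peirce's law.

```python
from itertools import product

# Gödel three-valued truth functions (ordering F < U < T).
ORDER = 'FUT'.index
NEG  = {'T': 'F', 'U': 'F', 'F': 'T'}
IMPL = {(x, y): 'T' if ORDER(x) <= ORDER(y) else y for x in 'TUF' for y in 'TUF'}
DISJ = {(x, y): max(x, y, key=ORDER) for x in 'TUF' for y in 'TUF'}

def lem(p, q):    return DISJ[(p, NEG[p])]                      # p ∨ ¬p
def dne(p, q):    return IMPL[(NEG[NEG[p]], p)]                 # ¬¬p → p
def peirce(p, q): return IMPL[(IMPL[(IMPL[(p, q)], p)], p)]     # ((p→q)→p)→p

for name, f in [('p ∨ ¬p', lem), ('¬¬p → p', dne), ('Peirce', peirce)]:
    counter = [(p, q) for p, q in product('TUF', repeat=2) if f(p, q) != 'T']
    print(name, 'fails at', counter[0])
# p ∨ ¬p fails at ('U', 'T')
# ¬¬p → p fails at ('U', 'T')
# Peirce fails at ('U', 'F')
```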
4. Truth functions are the same as weak Kleene logic, plus the
“is meaningless” operator:
  +̃
T  F
U  T
F  F
2. A ≡ ¬B.
a) Suppose vKs(¬B) = F. By the definition of ¬̃Ks, vKs(B) = T. By inductive hypothesis, case (b), we get v′C(B) = T, so v′C(¬B) = F.

b) Suppose vKs(¬B) = T. By the definition of ¬̃Ks, vKs(B) = F. By inductive hypothesis, case (a), we get v′C(B) = F, so v′C(¬B) = T.

3. A ≡ (B ∧ C).

a) Suppose vKs(B ∧ C) = F. By the definition of ∧̃Ks, vKs(B) = F or vKs(C) = F. By inductive hypothesis, case (a), we get v′C(B) = F or v′C(C) = F, so v′C(B ∧ C) = F.

b) Suppose vKs(B ∧ C) = T. By the definition of ∧̃Ks, vKs(B) = T and vKs(C) = T. By inductive hypothesis, case (b), we get v′C(B) = T and v′C(C) = T, so v′C(B ∧ C) = T.
The other two cases are similar, and left as exercises. Alter-
natively, the proof above establishes the result for all formulas
only containing ¬ and ∧. One may now appeal to the facts that
in both Ks and C, for any v, v(B ∨ C ) = v(¬(¬B ∧ ¬C )) and
v(B → C ) = v(¬(B ∧ ¬C )). □
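The two facts appealed to at the end of the proof can be verified by enumerating all pairs of values. A quick illustrative sketch with the strong Kleene tables:

```python
from itertools import product

# Strong Kleene truth functions (ordering F < U < T).
ORDER = 'FUT'.index
NEG  = {'T': 'F', 'U': 'U', 'F': 'T'}
CONJ = {(x, y): min(x, y, key=ORDER) for x in 'TUF' for y in 'TUF'}
DISJ = {(x, y): max(x, y, key=ORDER) for x in 'TUF' for y in 'TUF'}
IMPL = {('T', 'T'): 'T', ('T', 'U'): 'U', ('T', 'F'): 'F',
        ('U', 'T'): 'T', ('U', 'U'): 'U', ('U', 'F'): 'U',
        ('F', 'T'): 'T', ('F', 'U'): 'T', ('F', 'F'): 'T'}

# In Ks, B ∨ C agrees with ¬(¬B ∧ ¬C), and B → C with ¬(B ∧ ¬C):
print(all(DISJ[(b, c)] == NEG[CONJ[(NEG[b], NEG[c])]]
          for b, c in product('TUF', repeat=2)))        # True
print(all(IMPL[(b, c)] == NEG[CONJ[(b, NEG[c])]]
          for b, c in product('TUF', repeat=2)))        # True
```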
→.
Proof. Exercise. □
Problems
Problem 5.1. Suppose we define v(A ↔ B) = v((A → B) ∧ (B →
A)) in Ł 3 . What truth table would ↔ have?
1. p → (q → p)
2. ¬(p ∧ q ) ↔ (¬p ∨ ¬q )
3. ¬(p ∨ q ) ↔ (¬p ∧ ¬q )
1. (¬p ∧ p) → q
2. ((p → q ) → p) → p
3. (p → (p → q )) → (p → q )
1. p, p → q ⊨ q
2. ¬¬p ⊨ p
3. p ∧ q ⊨ p
4. p ⊨ p ∧ p
5. p ⊨ p ∨ q
1. p, p → q ⊨ q
2. p ∨ q , ¬p ⊨ q
3. p ∧ q ⊨ p
4. p ⊨ p ∧ p
5. p ⊨ p ∨ q
Problem 5.9. Give truth tables that show that the following are
not tautologies of G3
(p → q ) → (¬p ∨ q )
¬(p ∧ q ) → (¬p ∨ ¬q )
((p → q ) → p) → p
1. p, p → q ⊨ q
2. p ∨ q , ¬p ⊨ q
3. p ∧ q ⊨ p
4. p ⊨ p ∧ p
5. p ⊨ p ∨ q
1. p, p → q ⊨ q
2. ¬q , p → q ⊨ ¬p
3. p ∨ q , ¬p ⊨ q
4. ¬p, p ⊨ q
5. p ⊨ p ∨ q
6. p → q ,q → r ⊨ p → r
1. p, p → q ⊨ q
2. p ∨ q , ¬p ⊨ q
3. ¬p, p ⊨ q
4. p ⊨ p ∨ q
Sequent
Calculus
6.1 Introduction
The sequent calculus for classical logic is an efficient and simple
derivation system. If a many-valued logic is defined by a matrix
with finitely many truth values, i.e., V is finite, it is possible to
provide a sequent calculus for it. The idea for how to do this
comes from considering the meanings of sequents and the form
of inference rules in the classical case.
Now recall that a sequent

A1, . . . , Am ⇒ B1, . . . , Bn

is satisfied in a valuation iff the corresponding formula

(A1 ∧ · · · ∧ Am) → (B1 ∨ · · · ∨ Bn)

is. In other words, a valuation v satisfies a sequent 𝛤 ⇒ 𝛥 iff either v(A) = F for some A ∈ 𝛤 or v(A) = T for some A ∈ 𝛥. On this interpretation, initial sequents A ⇒ A are always satisfied, because either v(A) = T or v(A) = F.
Here are the inference rules for the conditional in LK, with
side formulas 𝛤, 𝛥 left out:
  ⇒ A    B ⇒             A ⇒ B
  ------------ →L        ---------- →R
    A → B ⇒               ⇒ A → B

  A,B, 𝛤 ⇒ 𝛥             𝛤 ⇒ 𝛥,A,B
  ------------ ∧L        ------------ ∨R
  A ∧ B, 𝛤 ⇒ 𝛥           𝛤 ⇒ 𝛥,A ∨ B
𝛬1 | . . . | 𝛬n
𝛤1 | . . . | 𝛤i | . . . | 𝛤n
Wi
𝛤1 | . . . | A, 𝛤i | . . . | 𝛤n
𝛤1 | . . . | A,A, 𝛤i | . . . | 𝛤n
Ci
𝛤1 | . . . | A, 𝛤i | . . . | 𝛤n
𝛤1 | . . . | 𝛤i ,A,B, 𝛤i′ | . . . | 𝛤n
Xi
𝛤1 | . . . | 𝛤i ,B,A, 𝛤i′ | . . . | 𝛤n
𝛤1 | . . . | A, 𝛤i | . . . | 𝛤n 𝛥1 | . . . | A, 𝛥 j | . . . | 𝛥n
Cuti , j
𝛤1 , 𝛥1 | . . . | 𝛤n , 𝛥n
Rules for ¬
The following rules for ¬ apply to Łukasiewicz and Kleene logics,
and their variants.
𝛤 | 𝛱 | 𝛥,A
¬F
¬A, 𝛤 | 𝛱 | 𝛥
𝛤 | A, 𝛱 | 𝛥
¬U
𝛤 | ¬A, 𝛱 | 𝛥
A, 𝛤 | 𝛱 | 𝛥
¬T
𝛤 | 𝛱 | 𝛥, ¬A
𝛤 | A, 𝛱 | 𝛥,A A, 𝛤 | 𝛱 | 𝛥
¬G F ¬G T
¬A, 𝛤 | 𝛱 | 𝛥 𝛤 | 𝛱 | 𝛥, ¬A
Rules for ∧
These are the rules for ∧ in Łukasiewicz, strong Kleene, and
Gödel logic.
A,B, 𝛤 | 𝛱 | 𝛥
∧F
A ∧ B, 𝛤 | 𝛱 | 𝛥
𝛤 | A, 𝛱 | A, 𝛥 𝛤 | B, 𝛱 | B, 𝛥 𝛤 | A,B, 𝛱 | 𝛥
∧U
𝛤 | A ∧ B, 𝛱 | 𝛥
𝛤 | 𝛱 | 𝛥,A 𝛤 | 𝛱 | 𝛥,B
∧T
𝛤 | 𝛱 | 𝛥,A ∧ B
Rules for ∨
These are the rules for ∨ in Łukasiewicz, strong Kleene, and
Gödel logic.
A, 𝛤 | 𝛱 | 𝛥 B, 𝛤 | 𝛱 | 𝛥
∨F
A ∨ B, 𝛤 | 𝛱 | 𝛥
A, 𝛤 | A, 𝛱 | 𝛥 B, 𝛤 | B, 𝛱 | 𝛥 𝛤 | A,B, 𝛱 | 𝛥
∨U
𝛤 | A ∨ B, 𝛱 | 𝛥
𝛤 | 𝛱 | 𝛥,A,B
∨T
𝛤 | 𝛱 | 𝛥,A ∨ B
Rules for →
These are the rules for → in Łukasiewicz logic.
𝛤 | 𝛱 | 𝛥,A B, 𝛤 | 𝛱 | 𝛥
→Ł3 F
A → B, 𝛤 | 𝛱 | 𝛥
𝛤 | A,B, 𝛱 | 𝛥 B, 𝛤 | 𝛱 | 𝛥,A
→Ł 3 U
𝛤 | A → B, 𝛱 | 𝛥
A, 𝛤 | B, 𝛱 | 𝛥,B A, 𝛤 | A, 𝛱 | 𝛥,B
→Ł 3 T
𝛤 | 𝛱 | 𝛥,A → B
𝛤 | 𝛱 | 𝛥,A B, 𝛤 | 𝛱 | 𝛥
→Ks F
A → B, 𝛤 | 𝛱 | 𝛥
B, 𝛤 | B, 𝛱 | 𝛥 𝛤 | A,B, 𝛱 | 𝛥 𝛤 | A, 𝛱 | 𝛥,A
→Ks U
𝛤 | A → B, 𝛱 | 𝛥
A, 𝛤 | 𝛱 | 𝛥,B
→Ks T
𝛤 | 𝛱 | 𝛥,A → B
𝛤 | A, 𝛱 | 𝛥,A B, 𝛤 | 𝛱 | 𝛥
→G 3 F
A → B, 𝛤 | 𝛱 | 𝛥
𝛤 | B, 𝛱 | 𝛥 𝛤 | 𝛱 | 𝛥,A
→G 3 U
𝛤 | A → B, 𝛱 | 𝛥
A, 𝛤 | B, 𝛱 | 𝛥,B A, 𝛤 | A, 𝛱 | 𝛥,B
→G 3 T
𝛤 | 𝛱 | 𝛥,A → B
B |B |B
WU
B | A,B | B
XU
A|A|A A|A|A B | B,A | B A|A|A
WT WT WU WT
A | A | B,A A | A | A,A B | A,B,A | B A | A | B,A
WU WT WF WF
A | B,A | B,A A | A | B,A,A A,B | A,B,A | B B,A | A | B,A
WU WF XF WF
Infinite-valued
Logics
7.1 Introduction
The number of truth values of a matrix need not be finite. An
obvious choice for a set of infinitely many truth values is the set
of rational numbers between 0 and 1, V∞ = [0, 1] ∩ Q, i.e.,
n
V∞ = { : n,m ∈ N and n ≤ m}.
m
When considering this infinite truth value set, it is often useful to
also consider the subsets
Vm = {n/(m − 1) : n ∈ N and n ≤ m − 1}
For instance, V5 is the set with 5 evenly spaced truth values,
V5 = {0, 1/4, 1/2, 3/4, 1}.
In logics based on these truth value sets, usually only 1 is des-
ignated, i.e., V + = {1}. In other words, we let 1 play the role of
(absolute) truth and 0 that of absolute falsity, but formulas may take any
intermediate value in V .
One can also consider the set V [0,1] = [0, 1] of all real num-
bers between 0 and 1, or other infinite subsets of [0, 1], however.
Logics with this truth value set are often called fuzzy.
¬̃Ł(x) = 1 − x
∧̃Ł(x, y) = min(x, y)
∨̃Ł(x, y) = max(x, y)
→̃Ł(x, y) = min(1, 1 − (x − y)) = 1 if x ≤ y, and 1 − (x − y) otherwise.
Proof. This can be seen by comparing the truth tables for the con-
nectives given in Definition 5.1 with the truth tables determined
by the equations in Definition 7.1:
  ¬̃              ∧̃Ł3  1    1/2  0          ∨̃Ł3  1    1/2  0          →̃Ł3  1    1/2  0
1    0           1    1    1/2  0          1    1    1    1          1    1    1/2  0
1/2  1/2         1/2  1/2  1/2  0          1/2  1    1/2  1/2        1/2  1    1    1/2
0    1           0    0    0    0          0    1    1/2  0          0    1    1    1

□
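The comparison in the proof can also be carried out mechanically. Here is an illustrative Python sketch of the infinite-valued Łukasiewicz truth functions (using Fractions to avoid rounding errors), checked against the three-valued conditional on the values 0, 1/2, 1.

```python
from fractions import Fraction
from itertools import product

ZERO, HALF, ONE = Fraction(0), Fraction(1, 2), Fraction(1)

def neg(x):     return 1 - x
def conj(x, y): return min(x, y)
def disj(x, y): return max(x, y)
def impl(x, y): return min(ONE, 1 - (x - y))   # 1 if x <= y, else 1 - (x - y)

# The three-valued conditional, with T = 1, U = 1/2, F = 0.
IMPL3 = {(ONE, ONE): ONE,  (ONE, HALF): HALF, (ONE, ZERO): ZERO,
         (HALF, ONE): ONE, (HALF, HALF): ONE, (HALF, ZERO): HALF,
         (ZERO, ONE): ONE, (ZERO, HALF): ONE, (ZERO, ZERO): ONE}

print(all(impl(x, y) == IMPL3[(x, y)]
          for x, y in product([ZERO, HALF, ONE], repeat=2)))    # True
```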
Proof. Exercise. □
⊥̃ = 0

¬̃G(x) = 1 if x = 0, and 0 otherwise
∧̃G(x, y) = min(x, y)
∨̃G(x, y) = max(x, y)
→̃G(x, y) = 1 if x ≤ y, and y otherwise.
Proof. This can be seen by comparing the truth tables for the con-
nectives given in Definition 5.6 with the truth tables determined
by the equations in Definition 7.4:
  ¬̃G3            ∧̃G  1    1/2  0          ∨̃G  1    1/2  0          →̃G  1    1/2  0
1    0           1   1    1/2  0          1   1    1    1          1   1    1/2  0
1/2  0           1/2 1/2  1/2  0          1/2 1    1/2  1/2        1/2 1    1    0
0    1           0   0    0    0          0   1    1/2  0          0   1    1    1

□
Proof. Exercise. □
p ∨ ¬p
¬¬p → p
(p → q) → (¬p ∨ q)
¬(p ∧ q) → (¬p ∨ ¬q)
((p → q) → p) → p
Problems
Problem 7.1. Prove Proposition 7.3.
But isn’t truth relative (to a world)?
CHAPTER 8
Syntax and
Semantics
8.1 Introduction
Modal logic deals with modal propositions and the entailment re-
lations among them. Examples of modal propositions are the
following:
1. It is necessary that 2 + 2 = 4.
Possibility and necessity are not the only modalities: other unary
connectives are also classified as modalities, for instance, “it
ought to be the case that A,” “It will be the case that A,” “Dana
knows that A,” or “Dana believes that A.”
Modal logic makes its first appearance in Aristotle’s De Inter-
pretatione: he was the first to notice that necessity implies possi-
bility, but not vice versa; that possibility and necessity are inter-
definable; that if A ∧ B is possibly true then A is possibly true
and B is possibly true, but not conversely; and that if A → B is
necessary, then if A is necessary, so is B.
1. ⊥ is an atomic formula.
1. ⊤ abbreviates ¬⊥.
2. A ↔ B abbreviates (A → B) ∧ (B → A).
1. A ≡ ⊥: A[D 1 /p 1 , . . . ,D n /p n ] is ⊥.
3. A ≡ pi : A[D 1 /p 1 , . . . ,D n /p n ] is D i .
4. A ≡ ¬B: A[D 1 /p 1 , . . . ,D n /p n ] is ¬B [D 1 /p 1 , . . . ,D n /p n ].
5. A ≡ (B ∧ C ): A[D 1 /p 1 , . . . ,D n /p n ] is
(B [D 1 /p 1 , . . . ,D n /p n ] ∧ C [D 1 /p 1 , . . . ,D n /p n ]).
6. A ≡ (B ∨ C ): A[D 1 /p 1 , . . . ,D n /p n ] is
(B [D 1 /p 1 , . . . ,D n /p n ] ∨ C [D 1 /p 1 , . . . ,D n /p n ]).
7. A ≡ (B → C ): A[D 1 /p 1 , . . . ,D n /p n ] is
(B [D 1 /p 1 , . . . ,D n /p n ] → C [D 1 /p 1 , . . . ,D n /p n ]).
8. A ≡ (B ↔ C ): A[D 1 /p 1 , . . . ,D n /p n ] is
(B [D 1 /p 1 , . . . ,D n /p n ] ↔ C [D 1 /p 1 , . . . ,D n /p n ]).
9. A ≡ □B: A[D 1 /p 1 , . . . ,D n /p n ] is □B [D 1 /p 1 , . . . ,D n /p n ].
while A[D 2 /p 1 ,D 1 /p 2 ] is
Figure 8.1: A model with three worlds: w1, where p is true and q false; w2, where p and q are both true; and w3, where both are false.
1. A ≡ ⊥: Never M,w ⊩ ⊥.
8.7 Validity
Formulas that are true in all models, i.e., true at every world in
every model, are particularly interesting. They represent those
modal propositions which are true regardless of how □ and ◇ are
Proof. By induction on A.
2. A ≡ pi :
v ⊨ pi ⇔ v(pi ) = T
by definition of v ⊨ pi
⇔ M,w ⊩ D i
by assumption
⇔ M,w ⊩ pi [D 1 /p 1 , . . . ,D n /p n ]
since pi [D 1 /p 1 , . . . ,D n /p n ] ≡ D i .
3. A ≡ ¬B:
v ⊨ ¬B ⇔ v ⊭ B
by definition of v ⊨;
⇔ M,w ⊮ B [D 1 /p 1 , . . . ,D n /p n ]
by induction hypothesis
⇔ M,w ⊩ ¬B [D 1 /p 1 , . . . ,D n /p n ]
by definition of v ⊨.
4. A ≡ (B ∧ C ):
v ⊨ B ∧ C ⇔ v ⊨ B and v ⊨ C
by definition of v ⊨
⇔ M,w ⊩ B [D 1 /p 1 , . . . ,D n /p n ] and
M,w ⊩ C [D 1 /p 1 , . . . ,D n /p n ]
by induction hypothesis
⇔ M,w ⊩ (B ∧ C ) [D 1 /p 1 , . . . ,D n /p n ]
by definition of M,w ⊩.
5. A ≡ (B ∨ C ):
v ⊨ B ∨ C ⇔ v ⊨ B or v ⊨ C
by definition of v ⊨;
⇔ M,w ⊩ B [D 1 /p 1 , . . . ,D n /p n ] or
M,w ⊩ C [D 1 /p 1 , . . . ,D n /p n ]
by induction hypothesis
⇔ M,w ⊩ (B ∨ C ) [D 1 /p 1 , . . . ,D n /p n ]
by definition of M,w ⊩.
6. A ≡ (B → C ):
v ⊨ B → C ⇔ v ⊭ B or v ⊨ C
by definition of v ⊨
⇔ M,w ⊮ B [D 1 /p 1 , . . . ,D n /p n ] or
M,w ⊩ C [D 1 /p 1 , . . . ,D n /p n ]
by induction hypothesis
⇔ M,w ⊩ (B → C ) [D 1 /p 1 , . . . ,D n /p n ]
by definition of M,w ⊩.
{B : ∃D1, . . . , ∃Dn (B = C[D1/p1, . . . , Dn/pn])}.
Proof. We need to show that all instances of the schema are true
at every world in every model. So let M = ⟨W,R,V ⟩ and w ∈ W
be arbitrary. To show that a conditional is true at a world we
assume the antecedent is true to show that consequent is true as
well. In this case, let M,w ⊩ □(A → B) and M,w ⊩ □A. We
need to show M,w ⊩ □B. So let w′ be arbitrary such that Rww′.
Then by the first assumption M,w ′ ⊩ A → B and by the second
assumption M,w ′ ⊩ A. It follows that M,w ′ ⊩ B. Since w ′ was
arbitrary, M,w ⊩ □B. □
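Validity arguments like this one can be complemented by concrete model checking. The sketch below is an illustration (the representation of models and formulas is my own): it evaluates modal formulas in a relational model given by worlds, an accessibility relation, and a valuation, and confirms an instance of K at every world of a small model.

```python
# A relational (Kripke) model: worlds W, accessibility R ⊆ W×W, valuation V.
W = {'w1', 'w2', 'w3'}
R = {('w1', 'w2'), ('w1', 'w3'), ('w2', 'w3')}
V = {'p': {'w2', 'w3'}, 'q': {'w3'}}

def holds(w, A):
    """M, w ⊩ A for formulas given as nested tuples."""
    if isinstance(A, str):
        return w in V[A]
    op = A[0]
    if op == 'not':  return not holds(w, A[1])
    if op == 'and':  return holds(w, A[1]) and holds(w, A[2])
    if op == 'or':   return holds(w, A[1]) or holds(w, A[2])
    if op == 'imp':  return (not holds(w, A[1])) or holds(w, A[2])
    if op == 'box':  return all(holds(v, A[1]) for v in W if (w, v) in R)
    if op == 'dia':  return any(holds(v, A[1]) for v in W if (w, v) in R)
    raise ValueError(op)

# The K-instance □(p → q) → (□p → □q) is true at every world of this model:
K = ('imp', ('box', ('imp', 'p', 'q')), ('imp', ('box', 'p'), ('box', 'q')))
print(all(holds(w, K) for w in W))   # True
```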
◇A ↔ ¬□¬A. (dual)
Proof. Exercise. □
8.10 Entailment
With the definition of truth at a world, we can define an entail-
ment relation between formulas. A formula B entails A iff, when-
ever B is true, A is true as well. Here, “whenever” means both
“whichever model we consider” as well as “whichever world in
that model we consider.”
Figure: a model in which p is true at w2 and w3, and false at w1.
Problems
Problem 8.1. Consider the model of Figure 8.1. Which of the
following hold?
1. M,w 1 ⊩ q ;
2. M,w 3 ⊩ ¬q ;
3. M,w 1 ⊩ p ∨ q ;
4. M,w 1 ⊩ □(p ∨ q );
5. M,w 3 ⊩ □q ;
6. M,w 3 ⊩ □⊥;
7. M,w 1 ⊩ ◇q ;
8. M,w 1 ⊩ □q ;
9. M,w 1 ⊩ ¬□□¬q .
Figure: a model with worlds w1 (p1, ¬p2, ¬p3), w2 (p1, p2, ¬p3), and w3 (p1, p2, p3).
1. p → ◇p (for p atomic);
2. A → ◇A (for A arbitrary);
3. □p → p (for p atomic);
1. ⊨ □p → □(q → p);
2. ⊨ □¬⊥;
3. ⊨ □p → (□q → □p).
Problem 8.9. Prove the claim in the “only if” part of the proof
of Proposition 8.22. (Hint: use induction on A.)
D: □p → ◇p;
T: □p → p;
B: p → □◇p;
4: □p → □□p;
5: ◇p → □◇p.
Problem 8.11. Prove that the schemas in the first column of Ta-
ble 8.1 are valid and those in the second column are not valid.
1. p → ◇◇p;
2. ◇p → □p.
Axiomatic
Derivations
9.1 Introduction
We have a semantics for the basic modal language in terms of
modal models, and a notion of a formula being valid—true at
all worlds in all models—or valid with respect to some class of
models or frames—true at all worlds in all models in the class, or
based on the frame. Logic usually connects such semantic charac-
terizations of validity with a proof-theoretic notion of derivability.
The aim is to define a notion of derivability in some system such
that a formula is derivable iff it is valid.
The simplest and historically oldest derivation systems are
so-called Hilbert-type or axiomatic derivation systems. Hilbert-
type derivation systems for many modal logics are relatively easy
to construct: they are simple as objects of metatheoretical study
(e.g., to prove soundness and completeness). However, they are
much harder to use to prove formulas in than, say, natural deduc-
tion systems.
In Hilbert-type derivation systems, a derivation of a formula is
a sequence of formulas leading from certain axioms, via a handful
of inference rules, to the formula in question. Since we want the
derivation system to match the semantics, we have to guarantee
that the set of derivable formulas are true in all models (or true in
all models in which all axioms are true). We’ll first isolate some
properties of modal logics that are necessary for this to work: the
“normal” modal logics. For normal modal logics, there are only
two inference rules that need to be assumed: modus ponens and
necessitation. As axioms we take all (substitution instances) of
tautologies, and, depending on the modal logic we deal with, a
number of modal axioms. Even if we are just interested in the
class of all models, we must also count all substitution instances
of K and Dual as axioms. This alone generates the minimal nor-
mal modal logic K.
ponens, or
With this definition, it will turn out that the set of derivable
formulas forms a normal modal logic, and that any derivable for-
mula is true in every model in which every axiom is true. This
property of derivations is called soundness. The converse, com-
pleteness, is harder to prove.
9.2 Proofs in K
In order to practice proofs in the smallest modal system, we show
the valid formulas on the left-hand side of Table 8.1 can all be
given K-proofs.
Proof.
1. A → (B → A) taut
2. □(A → (B → A)) nec, 1
3. □(A → (B → A)) → (□A → □(B → A)) K
4. □A → □(B → A) mp, 2, 3 □
Proof.
1. (A ∧ B) → A taut
2. □((A ∧ B) → A) nec
3. □((A ∧ B) → A) → (□(A ∧ B) → □A) K
4. □(A ∧ B) → □A mp, 2, 3
5. (A ∧ B) → B taut
6. □((A ∧ B) → B) nec
7. □((A ∧ B) → B) → (□(A ∧ B) → □B) K
8. □(A ∧ B) → □B mp, 6, 7
9. (□(A ∧ B) → □A) →
((□(A ∧ B) → □B) →
(□(A ∧ B) → (□A ∧ □B))) taut
10. (□(A ∧ B) → □B) →
(□(A ∧ B) → (□A ∧ □B)) mp, 4, 9
11. □(A ∧ B) → (□A ∧ □B) mp, 8, 10.
Note that the formula on line 9 is an instance of the tautology
(p → q ) → ((p → r ) → (p → (q ∧ r ))). □
Proof.
1. A → (B → (A ∧ B)) taut
2. □(A → (B → (A ∧ B))) nec, 1
3. □(A → (B → (A ∧ B))) → (□A → □(B → (A ∧ B))) K
4. □A → □(B → (A ∧ B)) mp, 2, 3
5. □(B → (A ∧ B)) → (□B → □(A ∧ B)) K
6. (□A → □(B → (A ∧ B))) →
(□(B → (A ∧ B)) → (□B → □(A ∧ B))) →
(□A → (□B → □(A ∧ B)))) taut
7. (□(B → (A ∧ B)) → (□B → □(A ∧ B))) →
(□A → (□B → □(A ∧ B))) mp, 4, 6
8. □A → (□B → □(A ∧ B))) mp, 5, 7
9. (□A → (□B → □(A ∧ B)))) →
((□A ∧ □B) → □(A ∧ B)) taut
10. (□A ∧ □B) → □(A ∧ B) mp, 8, 9
(p → q ) → ((q → r ) → (p → r ))
(p → (q → r )) → ((p ∧ q ) → r ) □
Proof.
(p → q ) → (¬q → ¬p)
(p → q ) → ((q → r ) → (p → r )). □
Proof.
1. K ⊢ A → (B → (A ∧ B)) taut
2. K ⊢ □A → (□B → □(A ∧ B))) rk, 1
3. K ⊢ (□A ∧ □B) → □(A ∧ B) pl, 2 □
Proof. Exercise. □
Proof.
1. K ⊢ ◇¬p ↔ ¬□¬¬p dual
2. K ⊢ ¬□¬¬p → ◇¬p pl, 1
3. K ⊢ ¬□p → ◇¬p p for ¬¬p □
In the above derivation, the final step “p for ¬¬p” is short for
K ⊢ ¬□¬¬p → ◇¬p
K ⊢ ¬¬p ↔ p taut
K ⊢ ¬□p → ◇¬p by Proposition 9.11
The roles of C (q ), A, and B in Proposition 9.11 are played here,
respectively, by ¬□q → ◇¬p, ¬¬p, and p.
When a formula contains a sub-formula ¬◇A, we can replace
it by □¬A using Proposition 9.11, since K ⊢ ¬◇A ↔ □¬A. We’ll
indicate this and similar replacements simply by “□¬ for ¬◇.”
The following proposition justifies that we can establish deriv-
ability results schematically. E.g., the previous proposition does
not just establish that K ⊢ ¬□p → ◇¬p, but K ⊢ ¬□A → ◇¬A for
arbitrary A.
Proof.
1. K ⊢ (A → B) → (¬B → ¬A) pl
2. K ⊢ □(A → B) → (□¬B → □¬A) rk, 1
3. K ⊢ (□¬B → □¬A) → (¬□¬A → ¬□¬B) taut
4. K ⊢ □(A → B) → (¬□¬A → ¬□¬B) pl, 2, 3
5. K ⊢ □(A → B) → (◇A → ◇B) ◇ for ¬□¬. □
Proof.
Proof.
1. K ⊢ ¬(A ∨ B) → ¬A taut
2. K ⊢ □¬(A ∨ B) → □¬A rk, 1
3. K ⊢ ¬□¬A → ¬□¬(A ∨ B) pl, 2
4. K ⊢ ◇A → ◇(A ∨ B) ◇ for ¬□¬
5. K ⊢ ◇B → ◇(A ∨ B) similarly
6. K ⊢ (◇A ∨ ◇B) → ◇(A ∨ B) pl, 4, 5. □
Proof.
Problems
Problem 9.1. Find derivations in K for the following formulas:
1. □¬p → □(p → q )
2. (□p ∨ □q ) → □(p ∨ q )
3. ◇p → ◇(p ∨ q )
Modal
Tableaux
10.1 Introduction
Tableaux are certain (downward-branching) trees of signed for-
mulas, i.e., pairs consisting of a truth value sign (T or F) and a
sentence
T A or F A.
A tableau begins with a number of assumptions. Each further
signed formula is generated by applying one of the inference
rules. Some inference rules add one or more signed formulas
to a tip of the tree; others add two new tips, resulting in two
branches. Rules result in signed formulas where the formula is
less complex than that of the signed formula to which it was ap-
plied. When a branch contains both T A and F A, we say the
branch is closed. If every branch in a tableau is closed, the entire
tableau is closed. A closed tableau constitutes a derivation that
shows that the set of signed formulas which were used to begin
the tableau are unsatisfiable. This can be used to define a ⊢ rela-
tion: 𝛤 ⊢ A iff there is some finite set 𝛤0 = {B 1 , . . . ,Bn } ⊆ 𝛤 such
that there is a closed tableau for the assumptions
{F A, T B 1 , . . . , T Bn }.
1.2 T □A → A
  𝜎 T ¬A              𝜎 F ¬A
  -------- ¬T         -------- ¬F
  𝜎 F A                𝜎 T A

  𝜎 T A ∧ B            𝜎 F A ∧ B
  ---------- ∧T        --------------- ∧F
  𝜎 T A                𝜎 F A | 𝜎 F B
  𝜎 T B

  𝜎 T A ∨ B            𝜎 F A ∨ B
  --------------- ∨T   ---------- ∨F
  𝜎 T A | 𝜎 T B        𝜎 F A
                        𝜎 F B

  𝜎 T A → B            𝜎 F A → B
  --------------- →T   ---------- →F
  𝜎 F A | 𝜎 T B        𝜎 T A
                        𝜎 F B
B 1 , . . . ,Bn ⊢ A
1 T B 1 , . . . , 1 T Bn , 1 F A.
  𝜎 T □A                  𝜎 F □A
  ---------- □T           ---------- □F
  𝜎.n T A                 𝜎.n F A
  (𝜎.n is used)           (𝜎.n is new)

  𝜎 T ◇A                  𝜎 F ◇A
  ---------- ◇T           ---------- ◇F
  𝜎.n T A                 𝜎.n F A
  (𝜎.n is new)            (𝜎.n is used)
1. 1T □A Assumption
2. 1F ◇A Assumption
3. 1.1 T A □T 1
4. 1.1 F A ◇F 2
⊗
1. 1T ◇A Assumption
2. 1F □A Assumption
3. 1.1 T A ◇T 1
4. 1.1 F A □F 2
⊗
7. 1.1 F A 1.1 F B ∧F 6
8. 1.1 T A 1.1 T B □T 4; □T 5
⊗ ⊗
7. 1.1 T A 1.1 T B ∨T 6
8. 1.1 F A 1.1 F B ◇F 4; ◇F 5
⊗ ⊗
R f (𝜎) f (𝜎.n).
Relative to an interpretation of prefixes P we can define:
1. 1 F □A → □◇A Assumption
2. 1 T □A →F 1
3. 1 F □◇A →F 1
4. 1.1 F ◇A □F 3
5. 1 F ◇A 4r◇ 4
6. 1.1 F A ◇F 5
7. 1.1 T A □T 2
⊗
  𝜎 T □A            𝜎 F ◇A
  -------- T□       -------- T◇
  𝜎 T A             𝜎 F A

  𝜎 T □A            𝜎 F ◇A
  -------- D□       -------- D◇
  𝜎 T ◇A            𝜎 F □A

  𝜎.n T □A          𝜎.n F ◇A
  ---------- B□     ---------- B◇
  𝜎 T A             𝜎 F A

  𝜎 T □A            𝜎 F ◇A
  ---------- 4□     ---------- 4◇
  𝜎.n T □A          𝜎.n F ◇A
Logic R is . . . Rules
T = KT reflexive T□, T◇
D = KD serial D□, D◇
K4 transitive 4□, 4◇
B = KTB reflexive, T□, T◇
symmetric B□, B◇
S4 = KT4 reflexive, T□, T◇,
transitive 4□, 4◇
S5 = KT4B reflexive, T□, T◇,
transitive, 4□, 4◇,
euclidean 4r□, 4r◇
Proposition 10.14. 4r□ and 4r◇ are sound for euclidean models.
Corollary 10.15. The tableau systems given in Table 10.4 are sound
for the respective classes of models.
1. 1 F ◇A → □◇A Assumption
2. 1 T ◇A →F 1
3. 1 F □◇A →F 1
4. 2 F ◇A □F 3
5. 3T A ◇T 2
6. 3F A ◇F 4
⊗
  n T □A               n F □A
  -------- □T          -------- □F
  m T A                m F A
  (m is used)          (m is new)

  n T ◇A               n F ◇A
  -------- ◇T          -------- ◇F
  m T A                m F A
  (m is new)           (m is used)
and
V (p) = {𝜎 : 𝜎 T p ∈ 𝛥}.
We show by induction on A that if 𝜎 T A ∈ 𝛥 then M( 𝛥), 𝜎 ⊩ A,
and if 𝜎 F A ∈ 𝛥 then M( 𝛥), 𝜎 ⊮ A.
3. A ≡ B ∧ C : Exercise.
4. A ≡ B ∨ C : If 𝜎 T A ∈ 𝛥, then either 𝜎 T B ∈ 𝛥 or 𝜎 T C ∈
𝛥 since the branch is complete. By induction hypothesis,
either M( 𝛥), 𝜎 ⊩ B or M( 𝛥), 𝜎 ⊩ C . Thus M( 𝛥), 𝜎 ⊩ A.
5. A ≡ B → C : Exercise.
7. A ≡ ◇B: Exercise.
Since 𝛤 ⊆ 𝛥, M( 𝛥) ⊩ 𝛤. □
and ◇T rules, and also only have to be applied once (and produce
a single new prefix). □T and ◇F have to be applied potentially
multiple times, but only once per prefix, and only finitely many
new prefixes are generated. So the construction either results in
a closed branch or a complete branch after finitely many stages.
Once a tableau with an open complete branch is constructed,
the proof of Theorem 10.19 gives us an explicit model that satisfies the original set of prefixed formulas. So completeness gives us more than the bare fact that if 𝛤 ⊨ A then a closed tableau exists and 𝛤 ⊢ A: if we look for the closed tableau in the right way and end up with a “complete” tableau instead, we not only know that 𝛤 ⊭ A but can actually construct a countermodel.
Figure 10.1: The countermodel constructed from the complete open branch: world 1 (¬p, ¬q), world 1.1 (¬p, q), and world 1.2 (p, ¬q).

tains 1.1 T q). The model is pictured in Figure 10.1, and you can verify that it is a countermodel to □(p ∨ q) → (□p ∨ □q).
Problems
Problem 10.1. Find closed tableaux in K for the following for-
mulas:
1. □¬p → □(p → q )
2. (□p ∨ □q ) → □(p ∨ q )
3. ◇p → ◇(p ∨ q )
4. □(p ∧ q ) → □p
1. KT5 ⊢ B;
2. KT5 ⊢ 4;
3. KDB4 ⊢ T;
4. KB4 ⊢ 5;
5. KB5 ⊢ 4;
6. KT ⊢ D.
Is this really necessary?
CHAPTER 11
Frame
Definability
11.1 Introduction
One question that interests modal logicians is the relationship be-
tween the accessibility relation and the truth of certain formulas
in models with that accessibility relation. For instance, suppose
the accessibility relation is reflexive, i.e., for every w ∈ W , Rww.
In other words, every world is accessible from itself. That means
that when □A is true at a world w, w itself is among the accessible
worlds at which A must therefore be true. So, if the accessibility
relation R of M is reflexive, then whatever world w and formula
A we take, □A → A will be true there (in other words, the schema
□p → p and all its substitution instances are true in M).
The converse, however, is false. It’s not the case, e.g., that if
□p → p is true in M, then R is reflexive. For we can easily find
a non-reflexive model M where □p → p is true at all worlds: take
the model with a single world w, not accessible from itself, but
with w ∈ V (p). By picking the truth value of p suitably, we can
make □A → A true in a model that is not reflexive.
The solution is to remove the variable assignment V from the
equation. If we require that □p → p is true at all worlds in M,
regardless of which worlds are in V (p), then it is necessary that
Proof. Here is the case for B: to show that the schema is true in
a model we need to show that all of its instances are true at all
worlds in the model. So let A → □◇A be a given instance of B,
If R is . . .                                      then . . . is true in M:
serial: ∀u∃v Ruv                                   □p → ◇p    (D)
reflexive: ∀w Rww                                  □p → p     (T)
symmetric: ∀u∀v(Ruv → Rvu)                         p → □◇p    (B)
transitive: ∀u∀v∀w((Ruv ∧ Rvw) → Ruw)              □p → □□p   (4)
euclidean: ∀w∀u∀v((Rwu ∧ Rwv) → Ruv)               ◇p → □◇p   (5)

Table 11.1: Five correspondence facts.
Figure: worlds w and w′ with w ⊩ A, w′ ⊩ ◇A, and hence w ⊩ □◇A.
Since M is not reflexive (it is, in fact, irreflexive), the converse of Theorem 11.1 fails in the case of T (similar arguments can be given for some—though not all—of the other schemas mentioned in Theorem 11.1).
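For small frames, the difference between truth in a model and validity on a frame can be tested exhaustively: a frame validates □p → p iff the schema comes out true at every world under every choice of V(p). The following sketch is only an illustration of that test.

```python
from itertools import chain, combinations

def subsets(s):
    s = list(s)
    return chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))

def frame_validates_T(W, R):
    """Does the frame (W, R) validate □p → p, i.e., is the schema true at
    every world under every valuation of p?"""
    for Vp in subsets(W):                      # every possible V(p)
        Vp = set(Vp)
        for w in W:
            box_p = all(v in Vp for v in W if (w, v) in R)
            if box_p and w not in Vp:          # □p true but p false at w
                return False
    return True

W = {'w1', 'w2'}
print(frame_validates_T(W, {('w1', 'w1'), ('w2', 'w2')}))   # True: reflexive
print(frame_validates_T(W, {('w1', 'w2'), ('w2', 'w1')}))   # False: not reflexive
```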
If R is . . .                                                  then . . . is true in M:
partially functional: ∀w∀u∀v((Rwu ∧ Rwv) → u = v)              ◇p → □p
functional: ∀w∃v∀u(Rwu ↔ u = v)                                ◇p ↔ □p
weakly dense: ∀u∀v(Ruv → ∃w(Ruw ∧ Rwv))                        □□p → □p
weakly connected: ∀w∀u∀v((Rwu ∧ Rwv) →
  (Ruv ∨ u = v ∨ Rvu))                                         □((p ∧ □p) → q) ∨ □((q ∧ □q) → p)   (L)
weakly directed: ∀w∀u∀v((Rwu ∧ Rwv) → ∃t(Rut ∧ Rvt))           ◇□p → □◇p   (G)

Table 11.2: Five more correspondence facts.
11.3 Frames
Theorem 11.6. If the formula on the right side of Table 11.1 is valid
in a frame F, then F has the property on the left side.
Corollary 11.8. Each formula on the right side of Table 11.1 defines
the class of frames which have the property on the left side.
It turns out that the properties and modal formulas that define
them considered so far are exceptional. Not every formula defines
a first-order definable class of frames, and not every first-order
definable class of frames is definable by a modal formula.
A counterexample to the first is given by the Löb formula □(□p → p) → □p.
1. R is an equivalence relation;
Proof. Exercise. □
1. w ∈ [w];
1. W ′ = [w];
2. R ′ is universal on W ′;
3. V ′ (p) = V (p) ∩ W ′.
Figure: the partition of W into the equivalence classes [w], [z], [u], and [v].
1. A ≡ ⊥: STx (A) = ⊥.
Proof. By induction on A. □
F ⊨ A iff F ′ ⊨ A ′
′
Proof. F ′ ⊨ A ′ iff for every structure M ′ where PiM ⊆ W for
i = 1, . . . , n, and for every s with s (x) ∈ W , M ′,s ⊨ STx (A). By
Proposition 11.16, that is the case iff for all models M based on F
and every world w ∈ W , M,w ⊩ A, i.e., F ⊨ A. □
Problems
Problem 11.1. Complete the proof of Theorem 11.1.
Problem 11.4. Show that if the formula on the right side of Ta-
ble 11.2 is valid in a frame F, then F has the property on the left
side. To do this, consider a frame that does not satisfy the prop-
erty on the left, and define a suitable V such that the formula on
the right is false at some world.
2. If R is reflexive, it is serial.
Explain why this suffices for the proof that the conditions are
equivalent.
CHAPTER 12
More Axiomatic
Derivations
12.1 Normal Modal Logics
Not every set of modal formulas can easily be characterized as
those formulas derivable from a set of axioms. We want modal
logics to be well-behaved. First of all, everything we can derive in
classical propositional logic should still be derivable, of course
taking into account that the formulas may now contain also □
and ◇. To this end, we require that a modal logic contain all
tautological instances and be closed under modus ponens.
A[D 1 /p 1 , . . . ,D n /p n ] ∈ 𝛴 ,
Proposition 12.3. Every normal modal logic is closed under rule rk,
A1 → (A2 → · · · (An−1 → An ) · · · )
rk
□A1 → (□A2 → · · · (□An−1 → □An ) · · · ).
A1 → (A2 → · · · (An−1 → An ) · · · ) ∈ 𝛴
4. We have KA1 . . . An ⊢ K, so K ∈ 𝛴 .
p → ◇p (T◇ )
◇□p → p (B◇ )
◇◇p → ◇p (4◇ )
◇□p → □p (5◇ )
1. KT5 ⊢ B;
2. KT5 ⊢ 4;
3. KDB4 ⊢ T;
4. KB4 ⊢ 5;
5. KB5 ⊢ 4;
6. KT ⊢ D.
1. KT5 ⊢ B:
1. KT5 ⊢ ◇A → □◇A 5
2. KT5 ⊢ A → ◇A T◇
3. KT5 ⊢ A → □◇A pl.
2. KT5 ⊢ 4:
3. KDB4 ⊢ T:
1. KDB4 ⊢ ◇□A → A B◇
2. KDB4 ⊢ □□A → ◇□A D with □A for p
3. KDB4 ⊢ □□A → A pl1, 2
4. KDB4 ⊢ □A → □□A 4
5. KDB4 ⊢ □A → A pl, 1, 4.
4. KB4 ⊢ 5:
5. KB5 ⊢ 4:
6. KT ⊢ D:
1. KT ⊢ □A → A T
2. KT ⊢ A → ◇A T◇
3. KT ⊢ □A → ◇A pl, 1, 2 □
Proof. Exercise. □
12.5 Soundness
A derivation system is called sound if everything that can be de-
rived is valid. When considering modal systems, i.e., derivations
where in addition to K we can use instances of some formulas A1 ,
. . . , An , we want every derivable formula to be true in any model
in which A1 , . . . , An are true.
Proposition 12.14. KD ⊊ KT
Figure 12.2: The model for Theorem 12.16. Worlds: w1 (p), w2 (p), w3 (¬p); at w1: ⊩ □p, ⊮ □□p, ⊮ ◇¬p; at w2: ⊩ ◇¬p, ⊮ □◇¬p.
Figure 12.3: The model for Theorem 12.17. Worlds: w1 (¬p), w2 (p), w3 (p), w4 (¬p); at w1: ⊩ □p, ⊮ □□p.
2. Reflexivity: If A ∈ 𝛤 then 𝛤 ⊢ 𝛴 A;
12.9 Consistency
Consistency is an important property of sets of formulas. A set
of formulas is inconsistent if a contradiction, such as ⊥, is deriv-
able from it; and otherwise consistent. If a set is inconsistent, its
formulas cannot all be true in a model at a world. For the com-
pleteness theorem we prove the converse: every consistent set is
true at a world in a model, namely in the “canonical model.”
Problems
Problem 12.1. Prove Proposition 12.4.
Completeness
and Canonical
Models
13.1 Introduction
If 𝛴 is a modal system, then the soundness theorem establishes
that if 𝛴 ⊢ A, then A is valid in any class C of models in which all
instances of all formulas in 𝛴 are valid. In particular that means
that if K ⊢ A then A is true in all models; if KT ⊢ A then A is
true in all reflexive models; if KD ⊢ A then A is true in all serial
models, etc.
Completeness is the converse of soundness: that K is com-
plete means that if a formula A is valid, ⊢ A, for instance. Prov-
ing completeness is a lot harder to do than proving soundness.
It is useful, first, to consider the contrapositive: K is complete iff
whenever ⊬ A, there is a countermodel, i.e., a model M such that
M ⊮ A. Equivalently (negating A), we could prove that whenever
⊬ ¬A, there is a model of A. In the construction of such a model,
we can use information contained in A. When we find models
for specific formulas we often do the same: e.g., if we want to
1. 𝛤 is deductively closed in 𝛴 .
2. 𝛴 ⊆ 𝛤.
3. ⊥ ∉ 𝛤
4. ¬A ∈ 𝛤 if and only if A ∉ 𝛤.
5. A ∧ B ∈ 𝛤 iff A ∈ 𝛤 and B ∈ 𝛤
6. A ∨ B ∈ 𝛤 iff A ∈ 𝛤 or B ∈ 𝛤
7. A → B ∈ 𝛤 iff A ∉ 𝛤 or B ∈ 𝛤
5. Exercise.
7. Exercise.
□𝛤 = {□B : B ∈ 𝛤 }
◇𝛤 = {◇B : B ∈ 𝛤 }
and
□−1 𝛤 = {B : □B ∈ 𝛤 }
◇−1 𝛤 = {B : ◇B ∈ 𝛤 }
The canonical model M^𝛴 for 𝛴 is defined as follows:

1. W^𝛴 = {𝛥 : 𝛥 is complete 𝛴-consistent}.

2. R^𝛴 = {⟨𝛥, 𝛥′⟩ : □⁻¹𝛥 ⊆ 𝛥′}.

3. V^𝛴(p) = {𝛥 : p ∈ 𝛥}.
Proof. By induction on A.
4. A ≡ B ∧ C : Exercise.
5. A ≡ B ∨ C : M 𝛴 , 𝛥 ⊩ B ∨ C iff M 𝛴 , 𝛥 ⊩ B or M 𝛴 , 𝛥 ⊩ C (by
Definition 8.7) iff B ∈ 𝛥 or C ∈ 𝛥 (by inductive hypothesis)
iff B ∨ C ∈ 𝛥 (by Proposition 13.2(6)).
6. A ≡ B → C : Exercise.
8. A ≡ ◇B: Exercise. □
𝛤 = □⁻¹𝛥1 ∪ ◇𝛥2.

A1, . . . , An, ◇B1, . . . , ◇Bm ⊢𝛴 ⊥.
Problems
Problem 13.1. Complete the proof of Proposition 13.2.
Modal Sequent Calculus
14.1 Introduction
The sequent calculus for propositional logic can be extended by
additional rules that deal with □ and ◇. For instance, for K, we
have LK plus:
𝛤 ⇒ 𝛥, A
─────────────── □
□𝛤 ⇒ ◇𝛥, □A

A, 𝛤 ⇒ 𝛥
─────────────── ◇
◇A, □𝛤 ⇒ ◇𝛥
For extensions of K, additional rules have to be added as well.
Not every modal logic has such a sequent calculus. Even S5,
which is semantically simple (it can be defined without using ac-
cessibility relations at all) is not known to have a sequent calcu-
lus that results from LK which is complete without the rule Cut.
However, it has a cut-free complete hypersequent calculus.
For instance, the following derivation shows that (□A ∧ □B) → □(A ∧ B) is derivable in K; it is written here as a list of sequents, from the initial sequents down to the end-sequent:

1. A ⇒ A                           initial sequent
2. B ⇒ B                           initial sequent
3. B, A ⇒ A                        WL, 1
4. B, A ⇒ B                        WL, 2
5. B, A ⇒ A ∧ B                    ∧R, 3, 4
6. □B, □A ⇒ □(A ∧ B)               □, 5
7. □A ∧ □B, □A ⇒ □(A ∧ B)          ∧L, 6
8. □A, □A ∧ □B ⇒ □(A ∧ B)          XL, 7
9. □A ∧ □B, □A ∧ □B ⇒ □(A ∧ B)     ∧L, 8
10. □A ∧ □B ⇒ □(A ∧ B)             CL, 9
11. ⇒ (□A ∧ □B) → □(A ∧ B)         →R, 10
The next derivation shows that □A ↔ ¬◇¬A is derivable in K. We derive each direction separately and combine the results by ∧R:

1. A ⇒ A                           initial sequent
2. ¬A, A ⇒                         ¬L, 1
3. ◇¬A, □A ⇒                       ◇, 2
4. □A ⇒ ¬◇¬A                       ¬R, 3
5. ⇒ □A → ¬◇¬A                     →R, 4

6. A ⇒ A                           initial sequent
7. ⇒ A, ¬A                         ¬R, 6
8. ⇒ ¬A, A                         XR, 7
9. ⇒ ◇¬A, □A                       □, 8
10. ⇒ □A, ◇¬A                      XR, 9
11. ¬◇¬A ⇒ □A                      ¬L, 10
12. ⇒ ¬◇¬A → □A                    →R, 11

13. ⇒ □A ↔ ¬◇¬A                    ∧R, 5, 12
A, 𝛤 ⇒ 𝛥
─────────────── T□
□A, 𝛤 ⇒ 𝛥

𝛤 ⇒ 𝛥, A
─────────────── T◇
𝛤 ⇒ 𝛥, ◇A

𝛤 ⇒ 𝛥
─────────────── D
□𝛤 ⇒ ◇𝛥

□𝛤 ⇒ ◇𝛥, A
─────────────── 4□
□𝛤 ⇒ ◇𝛥, □A

A, □𝛤 ⇒ ◇𝛥
─────────────── 4◇
◇A, □𝛤 ⇒ ◇𝛥
Logic       R is . . .                          Rules
T  = KT     reflexive                           □, T□, T◇
D  = KD     serial                              □, D
K4          transitive                          □, 4□, 4◇
B  = KTB    reflexive, symmetric                □, T□, T◇, B□, B◇
S4 = KT4    reflexive, transitive               □, T□, T◇, 4□, 4◇
S5 = KT5    reflexive, transitive, euclidean    □, T□, T◇, 5□, 5◇
Problems
Problem 14.1. Find sequent calculus proofs in K for the follow-
ing formulas:
1. □¬p → □(p → q )
2. (□p ∨ □q ) → □(p ∨ q )
3. ◇p → ◇(p ∨ q )
4. □(p ∧ q ) → □p
Problem 14.2. Using the appropriate rules for each system, give sequent calculus derivations establishing the following:

1. KT5 ⊢ B;
2. KT5 ⊢ 4;
3. KDB4 ⊢ T;
4. KB4 ⊢ 5;
5. KB5 ⊢ 4;
6. KT ⊢ D.
PART V
But you can’t tell me what to think!
CHAPTER 15
Epistemic Logics
15.1 Introduction
Just as modal logic deals with modal propositions and the entail-
ment relations among them, epistemic logic deals with epistemic
propositions and the entailment relations among them. Rather
than interpreting the modal operators as representing possibility
and necessity, the unary connectives are interpreted in epistemic
or doxastic ways, to model knowledge and belief. For example,
we might want to express claims like the following:
1. ⊥ is an atomic formula.
The mechanics are just like the mechanics for normal modal
logic, just with more accessibility relations added in. For a given
agent, we will generally interpret their accessibility relation as
representing something about their informational states. For ex-
ample, we often treat Raww′ as expressing that w′ is consistent
with a’s information at w. Or to put it another way, at w, they
cannot tell the difference between world w and world w ′.
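To make this concrete, here is a small illustrative sketch (not from the text): a two-agent model whose worlds, relations, and valuation are made up for the example. The function knows checks the usual clause that an agent knows A at w iff A holds at every world that agent considers possible from w.

    # A made-up multi-agent Kripke model: one accessibility relation per
    # agent and a valuation. knows(a, p, w) is true iff p holds at every
    # world agent a can access from w.
    W = {"w1", "w2", "w3"}
    R = {
        "a": {("w1", "w1"), ("w1", "w2"), ("w2", "w2"), ("w3", "w3")},
        "b": {("w1", "w1"), ("w1", "w3"), ("w2", "w2"), ("w3", "w3")},
    }
    V = {"p": {"w1", "w2"}}  # p is true at w1 and w2, false at w3

    def knows(agent, prop, w):
        successors = {v for (u, v) in R[agent] if u == w}
        return all(v in V[prop] for v in successors)

    print(knows("a", "p", "w1"))  # True: a only considers w1 and w2 possible
    print(knows("b", "p", "w1"))  # False: b cannot rule out w3, where p fails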
1. A ≡ ⊥: Never M,w ⊩ ⊥.
[Figure: an epistemic model for agents a and b, with worlds w1 (p, ¬q), w2 (p, q), and w3 (¬p, ¬q), and accessibility arrows labeled a and b.]
where

R⁰ = R and
Rⁿ⁺¹ = {⟨x, z⟩ : ∃y (Rⁿxy ∧ Ryz)}.
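For finite relations, Rⁿ can be computed directly; the following sketch (illustrative only, with a made-up relation R) mirrors the definition above.

    def compose(r, s):
        """Relational composition: {(x, z) : ∃y (r x y and s y z)}."""
        return {(x, z) for (x, y1) in r for (y2, z) in s if y1 == y2}

    def power(r, n):
        """R⁰ = R, Rⁿ⁺¹ = compose(Rⁿ, R)."""
        result = r
        for _ in range(n):
            result = compose(result, r)
        return result

    R = {(1, 2), (2, 3), (3, 4)}
    print(power(R, 1))  # {(1, 3), (2, 4)}: pairs reachable in exactly two R-steps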
If R is . . .                                  then . . . is true in M:

(any R)                                        K(p → q) → (Kp → Kq)    (Closure)
reflexive: ∀w Rww                              Kp → p                  (Veridicality)
transitive: ∀u∀v∀w ((Ruv ∧ Rvw) → Ruw)         Kp → KKp                (Positive Introspection)
euclidean: ∀w∀u∀v ((Rwu ∧ Rwv) → Ruv)          ¬Kp → K¬Kp              (Negative Introspection)

Table 15.1: Four epistemic principles.
15.6 Bisimulations
One remaining question that we might have about the expressive
power of our epistemic language has to do with the relationship
between models and the formulas that hold in them. We have
seen from our frame correspondence results that when certain
formulas are valid in a frame, they will also ensure that those
frames satisfy certain properties. But does our modal language,
for example, allow us to distinguish between a world at which
there is a reflexive arrow, and an infinite chain of worlds, each
of which leads to the next? That is, is there any formula A that
might hold at only one of these two worlds?
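The standard answer uses bisimulations: relations between the worlds of two models that respect the propositional variables and satisfy the usual back-and-forth conditions. The following sketch (not part of the text; the two models and the relation Z are made up) checks these conditions for finite models. Here a single reflexive world and a two-world cycle come out bisimilar, so no modal formula distinguishes them.

    # M1: one reflexive world; M2: a two-world cycle. Both satisfy p everywhere.
    M1 = {"R": {("w", "w")}, "V": {"p": {"w"}}}
    M2 = {"R": {("u", "v"), ("v", "u")}, "V": {"p": {"u", "v"}}}

    def is_bisimulation(Z, m1, m2):
        for (x, y) in Z:
            # atomic harmony: x and y satisfy the same propositional variables
            for prop in set(m1["V"]) | set(m2["V"]):
                if (x in m1["V"].get(prop, set())) != (y in m2["V"].get(prop, set())):
                    return False
            # forth: every successor of x is matched by some successor of y
            for (a, x2) in m1["R"]:
                if a == x and not any(b == y and (x2, y2) in Z for (b, y2) in m2["R"]):
                    return False
            # back: every successor of y is matched by some successor of x
            for (b, y2) in m2["R"]:
                if b == y and not any(a == x and (x2, y2) in Z for (a, x2) in m1["R"]):
                    return False
        return True

    Z = {("w", "u"), ("w", "v")}
    print(is_bisimulation(Z, M1, M2))  # True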
[Figure: two models, one with worlds w1, w2, w3 and one with worlds v1, v2, all accessibility arrows labeled a.]
1. ⊥ is an atomic formula.
1. A ≡ ⊥: Never M,w ⊩ ⊥.
[Figure: a model M with worlds w1 (p, ¬q), w2 (¬p, ¬q), and w3 (p, q), and the model M|p resulting from the announcement of p, with worlds w1′ (p, ¬q) and w3′ (p, q).]
Is this going to go on forever?
CHAPTER 16
Temporal Logics
16.1 Introduction
Temporal logics deal with claims about things that will or have
been the case. Arthur Prior is credited as the originator of tem-
poral logic, which he called tense logic. Our treatment of tempo-
ral logic here will largely follow Prior’s original modal treatment
of introducing temporal operators into the basic framework of
propositional logic, which treats claims as generally lacking in
tense.
For example, in propositional logic, I might talk about a dog,
Beezie, who sometimes sits and sometimes doesn’t sit, as dogs
are wont to do. It would be contradictory in classical logic to
claim that Beezie is sitting and also that Beezie is not sitting. But
obviously both can be true, just not at the same time; adding
temporal operators to the language can allow us to express that
claim relatively easily. The addition of temporal operators also
allows us to account for the validity of inferences like the one
from “Beezie will get a treat or a ball” to “Beezie will get a treat
or Beezie will get a ball.”
However, a lot of philosophical issues arise with temporal
logic that might lead us to adopt one framework of temporal
1. ⊥ is an atomic formula.
2. ≺ is a binary relation on T .
For now, you will notice that we do not impose any conditions
on our precedence relation ≺. This means that at present, there
1. A ≡ ⊥: Never M,t ⊩ ⊥.
If ≺ is . . .                                     then . . . is true in M:

transitive: ∀u∀v∀w ((u ≺ v ∧ v ≺ w) → u ≺ w)      FFp → Fp
linear: ∀w∀v (w ≺ v ∨ w = v ∨ v ≺ w)              (FPp ∨ PFp) → (Pp ∨ p ∨ Fp)
dense: ∀w∀v (w ≺ v → ∃u (w ≺ u ∧ u ≺ v))          Fp → FFp
unbounded (past): ∀w∃v (v ≺ w)                    Hp → Pp
unbounded (future): ∀w∃v (w ≺ v)                  Gp → Fp

Table 16.1: Some temporal frame correspondence properties.
G(p → q) → (Gp → Gq) (KG)
H(p → q) → (Hp → Hq) (KH)
that t ≺𝜎 t ′.
What if things were different?
CHAPTER 17
Introduction
17.1 The Material Conditional
In its simplest form in English, a conditional is a sentence of the
form “If . . . then . . . ,” where the . . . are themselves sentences,
such as “If the butler did it, then the gardener is innocent.” In
introductory logic courses, we learn to symbolize conditionals us-
ing the → connective: symbolize the parts indicated by . . . , e.g.,
by formulas A and B, and the entire conditional is symbolized by
A → B.
The connective → is truth-functional, i.e., the truth value—T
or F—of A → B is determined by the truth values of A and B:
A → B is true iff A is false or B is true, and false otherwise.
Relative to a truth value assignment v, we define v ⊨ A → B iff
v ⊭ A or v ⊨ B. The connective → with this semantics is called
the material conditional.
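As a small aside (not part of the text), the truth function of → can be tabulated directly:

    # The material conditional as a truth function: A → B is false only
    # when A is true and B is false.
    def material_conditional(a, b):
        return (not a) or b

    for a in (True, False):
        for b in (True, False):
            print(a, b, material_conditional(a, b))
    # True True True / True False False / False True True / False False True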
This definition results in a number of elementary logical facts.
First of all, the deduction theorem holds for the material conditional:

𝛤, A ⊨ B iff 𝛤 ⊨ A → B (17.1)

It is also truth-functionally equivalent to ¬A ∨ B:

A → B ⊨ ¬A ∨ B (17.2)
¬A ∨ B ⊨ A → B (17.3)
B ⊨ A → B (17.4)
¬A ⊨ A → B (17.5)
¬(A → B) ⊨ A ∧ ¬B (17.6)
A ∧ ¬B ⊨ ¬(A → B) (17.7)
A,A → B ⊨ B (17.8)
A → B,A → C ⊨ A → (B ∧ C ) (17.9)
A → B ⊨ (A ∧ C ) → B (17.10)
A → B,B → C ⊨ A → C (17.11)
A → B ⊨ ¬B → ¬A (17.12)
¬B → ¬A ⊨ A → B (17.13)
A ⥽ B ⊨ ¬A ∨ B but: (17.14)
¬A ∨ B ⊭ A ⥽ B (17.15)
B ⊭ A ⥽ B (17.16)
¬A ⊭ A ⥽ B (17.17)
¬(A ⥽ B) ⊭ A ∧ ¬B but: (17.18)
A ∧ ¬B ⊨ ¬(A ⥽ B) (17.19)
A,A ⥽ B ⊨ B (17.20)
A ⥽ B,A ⥽ C ⊨ A ⥽ (B ∧ C ) (17.21)
A ⥽ B ⊨ (A ∧ C ) ⥽ B (17.22)
A ⥽ B,B ⥽ C ⊨ A ⥽ C (17.23)
A ⥽ B ⊨ ¬B ⥽ ¬A (17.24)
¬B ⥽ ¬A ⊨ A ⥽ B (17.25)
However, the strict conditional still has its own “paradoxes.”
Just as a material conditional with a false antecedent or a true
consequent is true, a strict conditional with a necessarily false an-
tecedent or a necessarily true consequent is true. Moreover, any
true strict conditional is necessarily true, and any false strict con-
ditional is necessarily false. In other words, we have
□¬A ⊨ A ⥽ B (17.26)
□B ⊨ A ⥽ B (17.27)
A ⥽ B ⊨ □(A ⥽ B) (17.28)
¬(A ⥽ B) ⊨ □¬(A ⥽ B) (17.29)
These are not problems if you think of ⥽ as “implies.” Logical
entailment relationships are, after all, mathematical facts and so
can’t be contingent. But they do raise issues if you want to use
⥽ as a logical connective that is supposed to capture “if . . . then
. . . ,” especially the last two. For surely there are “if . . . then . . . ”
statements that are contingently true or contingently false—in
fact, they generally are neither necessary nor impossible.
17.4 Counterfactuals
A very common and important form of “if . . . then . . . ” construc-
tions in English are built using the past subjunctive form of to
be: “if it were the case that . . . then it would be the case that . . . ”
Because usually the antecedent of such a conditional is false, i.e.,
counter to fact, they are called counterfactual conditionals (and
because they use the subjunctive form of to be, also subjunctive
conditionals). They are distinguished from indicative conditionals
which take the form of “if it is the case that . . . then it is the
case that . . . ” Counterfactual and indicative conditionals differ
in truth conditions. Consider Adams’s famous example:
Problems
Problem 17.1. Give S5-counterexamples to the entailment rela-
tions which do not hold for the strict conditional, i.e., for:
1. ¬p ⊭ □(p → q )
2. q ⊭ □(p → q )
3. ¬□(p → q ) ⊭ p ∧ ¬q
4. ⊭ □(p → q ) ∨ □(q → p)
Problem 17.2. Show that the valid entailment relations hold for
the strict conditional by giving S5-proofs of:
1. □(A → B) ⊨ ¬A ∨ B
2. A ∧ ¬B ⊨ ¬□(A → B)
3. A, □(A → B) ⊨ B
5. □(A → B) ⊨ □((A ∧ C ) → B)
Problem 17.3. Give S5-proofs of the paradoxes of the strict conditional, i.e., of:

1. □¬A ⊨ A ⥽ B

2. A ⥽ B ⊨ □(A ⥽ B)

3. ¬(A ⥽ B) ⊨ □¬(A ⥽ B)
Minimal Change Semantics
18.1 Introduction
Stalnaker and Lewis proposed accounts of counterfactual condi-
tionals such as “If the match were struck, it would light.” Their
accounts were proposals for how to properly understand the truth
conditions for such sentences. The idea behind both proposals is
this: to evaluate whether a counterfactual conditional is true, we
have to consider those possible worlds which are minimally dif-
ferent from the way the world actually is to make the antecedent
true. If the consequent is true in these possible worlds, then the
counterfactual is true. For instance, suppose I hold a match and
a matchbook in my hand. In the actual world I only look at them
and ponder what would happen if I were to strike the match. The
minimal change from the actual world where I strike the match
is that where I decide to act and strike the match. It is minimal
in that nothing else changes: I don’t also jump in the air, striking
the match doesn’t also light my hair on fire, I don’t suddenly lose
1. Ow is centered on w: {w} ∈ Ow.
2. For some S ∈ Ow,
18.5 Transitivity

For the material conditional, the chain rule holds: A → B, B → C ⊨ A → C. In other words, the material conditional is transitive. Is the same true for counterfactuals? Consider the following example due to Stalnaker:

1. If Hoover had been born in Russia, he would have been a Communist.

2. If Hoover were a Communist, he would have been a traitor.

3. So, if Hoover had been born in Russia, he would have been a traitor.
If Hoover had been born (at the same time he actually did), not
in the United States, but in Russia, he would have grown up in
the Soviet Union and become a Communist (let’s assume). So
the first premise is true. Likewise, the second premise, consid-
ered in isolation, is true. The conclusion, however, is false: in all
likelihood, Hoover would have been a fervent Communist if he
had been born in the USSR, and not been a traitor (to his coun-
try). The intuitive assignment of truth values is borne out by the
Stalnaker-Lewis account. The closest possible world to ours with
the only change being Hoover’s place of birth is the one where
Hoover grows up to be a good citizen of the USSR. This is the
closest possible world where the antecedent of the first premise
and of the conclusion is true, and in that world Hoover is a loyal
member of the Communist party, and so not a traitor. To eval-
uate the second premise, we have to look at a different world,
however: the closest world where Hoover is a Communist, which
is one where he was born in the United States, turned, and thus
became a traitor.
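The same point can be made in a small computational sketch (illustrative only; the three worlds and the similarity ordering are made up to mirror the Hoover example, and selecting a unique closest world is a simplifying, Stalnaker-style assumption):

    # Worlds as dicts of facts; 'order' lists worlds from most to least similar
    # to the actual world. A □→ B is evaluated at the closest antecedent-world.
    actual = {"russia": False, "communist": False, "traitor": False}
    w_b    = {"russia": False, "communist": True,  "traitor": True}
    w_c    = {"russia": True,  "communist": True,  "traitor": False}
    order = [actual, w_b, w_c]

    def counterfactual(antecedent, consequent):
        closest = next((w for w in order if w[antecedent]), None)
        return closest is None or closest[consequent]

    print(counterfactual("russia", "communist"))   # True
    print(counterfactual("communist", "traitor"))  # True
    print(counterfactual("russia", "traitor"))     # False: transitivity fails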
18.6 Contraposition
Material and strict conditionals are equivalent to their contra-
positives. Counterfactuals are not. Here is an example due to
Kratzer:
Problems
Problem 18.1. Find a convincing, intuitive example for the fail-
ure of transitivity of counterfactuals.
How can it be true if you can’t prove it?
CHAPTER 19
Introduction
19.1 Constructive Reasoning
In contrast to extensions of classical logic by modal operators or
second-order quantifiers, intuitionistic logic is “non-classical” in
that it restricts classical logic. Classical logic is non-constructive in
various ways. Intuitionistic logic is intended to capture a more
“constructive” kind of reasoning characteristic of a kind of con-
structive mathematics. The following examples may serve to il-
lustrate some of the underlying motivations.
Suppose someone claimed that they had determined a natu-
ral number n with the property that if n is even, the Riemann
hypothesis is true, and if n is odd, the Riemann hypothesis is
false. Great news! Whether the Riemann hypothesis is true or
not is one of the big open questions of mathematics, and they
seem to have reduced the problem to one of calculation, that is,
to the determination of whether a specific number is even or not.
What is the magic value of n? They describe it as follows: n is
the natural number that is equal to 2 if the Riemann hypothesis
is true, and 3 otherwise.
Angrily, you demand your money back. From a classical point
of view, the description above does in fact determine a unique
value of n; but what you really want is a value of n that is given
explicitly.
To take another, perhaps less contrived example, consider
since 3^(log₃ x) = x.
Intuitionistic logic is designed to capture a kind of reasoning
where moves like the one in the first proof are disallowed. Proving
the existence of an x satisfying A(x) means that you have to give a
specific x, and a proof that it satisfies A, like in the second proof.
Proving that A or B holds requires that you can prove one or the
other.
Formally speaking, intuitionistic logic is what you get if you
restrict a derivation system for classical logic in a certain way.
From the mathematical point of view, these are just formal deduc-
tive systems, but, as already noted, they are intended to capture
a kind of mathematical reasoning. One can take this to be the
kind of reasoning that is justified on a certain philosophical view
of mathematics (such as Brouwer’s intuitionism); one can take it
to be a kind of mathematical reasoning which is more “concrete”
and satisfying (along the lines of Bishop’s constructivism); and
one can argue about whether or not the formal description cap-
tures the informal motivation. But whatever philosophical posi-
tions we may hold, we can study intuitionistic logic as a formally
presented logic; and for whatever reasons, many mathematical
logicians find it interesting to do so.
1. ⊥ is an atomic formula.
1. ¬A abbreviates A → ⊥.
2. A ↔ B abbreviates (A → B) ∧ (B → A).
p1(⟨N1, N2⟩) = N1
p2(⟨N1, N2⟩) = N2
Conjunction

A    B
─────────── ∧Intro
A ∧ B

A ∧ B
─────────── ∧Elim
A

A ∧ B
─────────── ∧Elim
B

Conditional

[A]ᵘ
  ⋮
  B
─────────── →Intro, discharging u
A → B

A → B    A
─────────── →Elim
B

Disjunction

A
─────────── ∨Intro
A ∨ B

B
─────────── ∨Intro
A ∨ B

           [A]ⁿ    [B]ⁿ
             ⋮       ⋮
A ∨ B        C       C
──────────────────────── ∨Elim, discharging n
C

Absurdity

⊥
─────────── ⊥I
A

Rules for ¬

Since ¬A is defined as A → ⊥, we strictly speaking do not need
rules for ¬. But if we did, this is what they’d look like:

¬A    A
─────────── ¬Elim
⊥

[A]ⁿ
  ⋮
  ⊥
─────────── ¬Intro, discharging n
¬A
Examples of Derivations
1. ⊢ A → (¬A → ⊥), i.e., ⊢ A → ((A → ⊥) → ⊥)
[A] 2 [A → ⊥] 1
⊥ →Elim
1 →Intro
(A → ⊥) → ⊥
2 →Intro
A → (A → ⊥) → ⊥
2. ⊢ ((A ∧ B) → C ) → (A → (B → C ))
                  [A]²    [B]¹
                  ───────────── ∧Intro
[(A ∧ B) → C]³    A ∧ B
─────────────────────────────── →Elim
C
─────────────────────────────── →Intro, discharging 1
B → C
─────────────────────────────── →Intro, discharging 2
A → (B → C)
─────────────────────────────── →Intro, discharging 3
((A ∧ B) → C) → (A → (B → C))
3. ⊢ (A ∧ (A → ⊥)) → ⊥, i.e., ⊢ ¬(A ∧ ¬A)

[A ∧ (A → ⊥)]¹           [A ∧ (A → ⊥)]¹
────────────── ∧Elim     ────────────── ∧Elim
A → ⊥                    A
──────────────────────────────────────── →Elim
⊥
──────────────────────────────────────── →Intro, discharging 1
(A ∧ (A → ⊥)) → ⊥
4. ⊢ ((A ∨ (A → ⊥)) → ⊥) → ⊥, i.e., ⊢ ¬¬(A ∨ ¬A)

                          [A]¹
                          ──────────── ∨Intro
[(A ∨ (A → ⊥)) → ⊥]²      A ∨ (A → ⊥)
────────────────────────────────────── →Elim
⊥
────────────────────────────────────── →Intro, discharging 1
A → ⊥
────────────────────────────────────── ∨Intro
A ∨ (A → ⊥)       [(A ∨ (A → ⊥)) → ⊥]²
────────────────────────────────────── →Elim
⊥
────────────────────────────────────── →Intro, discharging 2
((A ∨ (A → ⊥)) → ⊥) → ⊥
1. Ai ∈ 𝛤; or

2. Ai is an axiom; or

3. Ai follows from two earlier formulas Aj and Aj → Ai by modus ponens.
(A ∧ B) → A (19.1)
(A ∧ B) → B (19.2)
A → (B → (A ∧ B)) (19.3)
A → (A ∨ B) (19.4)
A → (B ∨ A) (19.5)
(A → C ) → ((B → C ) → ((A ∨ B) → C )) (19.6)
A → (B → A) (19.7)
(A → (B → C )) → ((A → B) → (A → C )) (19.8)
⊥→A (19.9)
Problems
Problem 19.1. Give derivations in intuitionistic logic of the fol-
lowing.

1. (¬A ∨ B) → (A → B)

2. ¬¬¬A → ¬A

4. ¬(A ∨ B) ↔ (¬A ∧ ¬B)
Semantics
20.1 Introduction
No logic is satisfactorily described without a semantics, and in-
tuitionistic logic is no exception. Whereas for classical logic, the
semantics based on valuations is canonical, there are several com-
peting semantics for intuitionistic logic. None of them is completely
satisfactory, in the sense that none gives an intuitionistically
acceptable account of the meanings of the connectives.
The semantics based on relational models, similar to the se-
mantics for modal logics, is perhaps the most popular one. In
this semantics, propositional variables are assigned to worlds,
and these worlds are related by an accessibility relation. That re-
lation is always a partial order, i.e., it is reflexive, antisymmetric,
and transitive.
Intuitively, you might think of these worlds as states of knowl-
edge or “evidentiary situations.” A state w ′ is accessible from w
iff, for all we know, w ′ is a possible (future) state of knowledge,
i.e., one that is compatible with what’s known at w. Once a propo-
sition is known, it can’t become un-known, i.e., whenever A is
known at w and Rww ′, A is known at w ′ as well. So “knowledge”
is monotonic with respect to the accessibility relation.
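As an illustrative sketch (not the book’s official definition, which comes later), the standard relational clauses can be evaluated on a small finite model. The two-world model below, with p true only at the later world, is made up for the example, and already shows that p ∨ ¬p can fail at a world:

    # A made-up model: R is a partial order and the valuation is monotone
    # (p, once true, stays true at all later worlds). A → B holds at w iff
    # at every w' with R w w', A at w' implies B at w'.
    R = {("w0", "w0"), ("w0", "w1"), ("w1", "w1")}
    V = {"p": {"w1"}}  # p becomes known only at the later world w1

    def holds(formula, w):
        kind = formula[0]
        if kind == "var":
            return w in V[formula[1]]
        if kind == "bot":
            return False
        if kind == "and":
            return holds(formula[1], w) and holds(formula[2], w)
        if kind == "or":
            return holds(formula[1], w) or holds(formula[2], w)
        if kind == "imp":
            return all(not holds(formula[1], v) or holds(formula[2], v)
                       for (u, v) in R if u == w)
        raise ValueError(kind)

    neg_p = ("imp", ("var", "p"), ("bot",))
    lem = ("or", ("var", "p"), neg_p)
    print(holds(lem, "w0"))  # False: p ∨ ¬p fails at w0 in this model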
If we define “A is known” as in epistemic logic as “true in all
epistemic alternatives,” then A∧B is known at w if in all epistemic
alternatives, both A and B are known. But since knowledge is
1. W is a non-empty set,
2. A ≡ ⊥: not M,w ⊩ A.
Proof. Exercise. □
2. If M ⊩ 𝛤 and 𝛤 ⊨ A, then M ⊩ A.
Ww = {u ∈ W : Rwu},
Rw = R ∩ (Ww)², and
Vw(p) = V(p) ∩ Ww.
1. [[⊥]]X = ∅

2. [[p]]X = V(p)
Problems
Problem 20.1. Show that according to Definition 20.2, M,w ⊩
¬A iff M,w ⊩ A → ⊥.
Wait, hear me out: what if it’s both true and false?
CHAPTER 21
Paraconsistent logics
To p and to ¬p, that is the question.
PART X
Appendices
APPENDIX A
Sets
A.1 Extensionality
A set is a collection of objects, considered as a single object. The
objects making up the set are called elements or members of the
set. If x is an element of a set a, we write x ∈ a; if not, we write
x ∉ a. The set which has no elements is called the empty set and
denoted “∅”.
It does not matter how we specify the set, or how we order
its elements, or indeed how many times we count its elements.
All that matters is what its elements are. We codify this in the
following principle.
We read the notation on the right as “the set of x’s such that x
is perfect and 0 ≤ x ≤ 10”. The identity here confirms that,
when we consider sets, we don’t care about how they are spec-
ified. And, more generally, extensionality guarantees that there
is always only one set of x’s such that 𝜑(x). So, extensionality
justifies calling {x : 𝜑(x)} the set of x’s such that 𝜑(x).
℘(A) = {B : B ⊆ A}
N = {0, 1, 2, 3, . . .}
the set of natural numbers
Z = {. . . , −2, −1, 0, 1, 2, . . .}
These are all infinite sets, that is, they each have infinitely many
elements.
As we move through these sets, we are adding more numbers
to our stock. Indeed, it should be clear that N ⊆ Z ⊆ Q ⊆ R:
after all, every natural number is an integer; every integer is a
rational; and every rational is a real. Equally, it should be clear
that N ⊊ Z ⊊ Q, since −1 is an integer but not a natural number,
and 1/2 is rational but not an integer. It is less obvious that Q ⊊ R,
i.e., that there are some real numbers which are not rational.
We’ll sometimes also use the set of positive integers Z+ =
{1, 2, 3, . . . } and the set containing just the first two natural num-
bers B = {0, 1}.
Figure A.1: The union A ∪ B of two sets is the set of elements of A together with
those of B.
Figure A.2: The intersection A ∩ B of two sets is the set of elements they have
in common.
The union of a set with the empty set is identical to the set:
{a,b,c } ∪ ∅ = {a,b,c }.
A ∩ B = {x : x ∈ A ∧ x ∈ B }
Figure A.3: The difference A \ B of two sets is the set of those elements of A
which are not also elements of B.
abbreviations:
⋃_{i ∈ I} Ai = ⋃{Ai : i ∈ I}

⋂_{i ∈ I} Ai = ⋂{Ai : i ∈ I}
A \ B = {x : x ∈ A and x ∉ B }.
A × B = {⟨x, y⟩ : x ∈ A and y ∈ B }.
A¹ = A
Aᵏ⁺¹ = Aᵏ × A
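These operations have direct counterparts for finite sets in a programming language; the following Python sketch (an aside, not part of the text) uses made-up sets A and B:

    import itertools

    A = {1, 2, 3}
    B = {3, 4}

    print(A | B)   # union: {1, 2, 3, 4}
    print(A & B)   # intersection: {3}
    print(A - B)   # difference A \ B: {1, 2}
    print(set(itertools.product(A, B)))         # Cartesian product A × B
    print(set(itertools.product(A, repeat=2)))  # A² = A × A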
Bx1 = {⟨x1, y1⟩, ⟨x1, y2⟩, . . . , ⟨x1, ym⟩}
Bx2 = {⟨x2, y1⟩, ⟨x2, y2⟩, . . . , ⟨x2, ym⟩}
 ⋮
Bxn = {⟨xn, y1⟩, ⟨xn, y2⟩, . . . , ⟨xn, ym⟩}
Since the x i are all different, and the y j are all different, no two of
the pairs in this grid are the same, and there are n · m of them.□
A∗ = {∅} ∪ A ∪ A² ∪ A³ ∪ . . .
define sets. If they all did, then we would run into outright contra-
dictions. The most famous example of this is Russell’s Paradox.
Sets may be elements of other sets—for instance, the power
set of a set A is made up of sets. And so it makes sense to ask or
investigate whether a set is an element of another set. Can a set
be a member of itself? Nothing about the idea of a set seems to
rule this out. For instance, if all sets form a collection of objects,
one might think that they can be collected into a single set—the
set of all sets. And it, being a set, would be an element of the set
of all sets.
Russell’s Paradox arises when we consider the property of not
having itself as an element, of being non-self-membered. What if we
suppose that there is a set of all sets that do not have themselves
as an element? Does
R = {x : x ∉ x }
Problems
Problem A.1. Prove that there is at most one empty set, i.e.,
show that if A and B are sets without elements, then A = B.
Relations
B.1 Relations as Sets
In appendix A.3, we mentioned some important sets: N, Z, Q, R.
You will no doubt remember some interesting relations between
the elements of some of these sets. For instance, each of these sets
has a completely standard order relation on it. There is also the
relation is identical with that every object bears to itself and to no
other thing. There are many more interesting relations that we’ll
encounter, and even more possible relations. Before we review
them, though, we will start by pointing out that we can look at
relations as a special sort of set.
For this, recall two things from appendix A.5. First, recall
the notion of an ordered pair: given a and b, we can form ⟨a,b⟩.
Importantly, the order of elements does matter here. So if a ≠ b
then ⟨a,b⟩ ≠ ⟨b,a⟩. (Contrast this with unordered pairs, i.e., 2-
element sets, where {a,b } = {b,a}.) Second, recall the notion of
a Cartesian product: if A and B are sets, then we can form A × B,
the set of all pairs ⟨x, y⟩ with x ∈ A and y ∈ B. In particular,
A2 = A × A is the set of all ordered pairs from A.
Now we will consider a particular relation on a set: the <-
relation on the set N of natural numbers. Consider the set of all
pairs of numbers ⟨n,m⟩ where n < m, i.e.,
L = {⟨0, 1⟩, ⟨0, 2⟩, . . . , ⟨1, 2⟩, ⟨1, 3⟩, . . . , ⟨2, 3⟩, ⟨2, 4⟩, . . .},
is the less than relation, i.e., Lnm iff n < m. The subset of pairs
below the diagonal, i.e.,
G = {⟨1, 0⟩, ⟨2, 0⟩, ⟨2, 1⟩, ⟨3, 0⟩, ⟨3, 1⟩, ⟨3, 2⟩, . . . },
B.4 Orders
Many of our comparisons involve describing some objects as be-
ing “less than”, “equal to”, or “greater than” other objects, in a
certain respect. These involve order relations. But there are differ-
ent kinds of order relations. For instance, some require that any
two objects be comparable, others don’t. Some include identity
(like ≤) and some exclude it (like <). It will help us to have a
taxonomy here.
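As a concrete aside (not part of the text), the properties that distinguish these kinds of orders can be tested mechanically on a small finite example; the set X and the relation below are made up:

    # Check properties of a relation given as a set of pairs over a finite set X.
    X = {0, 1, 2, 3}
    leq = {(x, y) for x in X for y in X if x <= y}

    reflexive = all((x, x) in leq for x in X)
    antisymmetric = all(not ((x, y) in leq and (y, x) in leq) or x == y
                        for x in X for y in X)
    transitive = all(not ((x, y) in leq and (y, z) in leq) or (x, z) in leq
                     for x in X for y in X for z in X)
    linear = all((x, y) in leq or (y, x) in leq for x in X for y in X)

    print(reflexive, antisymmetric, transitive, linear)  # True True True True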
B.5 Graphs
A graph is a diagram in which points—called “nodes” or “ver-
tices” (plural of “vertex”)—are connected by edges. Graphs are
a ubiquitous tool in discrete mathematics and in computer sci-
ence. They are incredibly useful for representing, and visualizing,
relationships and structures, from concrete things like networks
of various kinds to abstract structures such as the possible out-
comes of decisions. There are many different kinds of graphs in
Problems
Problem B.1. List the elements of the relation ⊆ on the set
℘({a,b,c }).
Proofs
C.1 Introduction
Based on your experiences in introductory logic, you might be
comfortable with a derivation system—probably a natural de-
duction or Fitch style derivation system, or perhaps a proof-tree
system. You probably remember doing proofs in these systems,
either proving a formula or showing that a given argument is valid.
In order to do this, you applied the rules of the system until you
got the desired end result. In reasoning about logic, we also prove
things, but in most cases we are not using a derivation system. In
fact, most of the proofs we consider are done in English (perhaps,
with some symbolic language thrown in) rather than entirely in
the language of first-order logic. When constructing such proofs,
you might at first be at a loss—how do I prove something without
a derivation system? How do I start? How do I know if my proof
is correct?
Before attempting a proof, it’s important to know what a proof
is and how to construct one. As implied by the name, a proof is
meant to show that something is true. You might think of this in
terms of a dialogue—someone asks you if something is true, say,
if every prime other than two is an odd number. To answer “yes”
is not enough; they might want to know why. In this case, you’d
give them a proof.
In everyday discourse, it might be enough to gesture at an
Using a Conjunction
Perhaps the simplest inference pattern is that of drawing as con-
clusion one of the conjuncts of a conjunction. In other words:
if we have assumed or already proved that p and q , then we’re
entitled to infer that p (and also that q ). This is such a basic
inference that it is often not mentioned. For instance, once we’ve
unpacked the definition of D = E we’ve established that every
element of D is an element of E and vice versa. From this we can
conclude that every element of E is an element of D (that’s the
“vice versa” part).
Proving a Conjunction
Sometimes what you’ll be asked to prove will have the form of a
conjunction; you will be asked to “prove p and q .” In this case,
you simply have to do two things: prove p, and then prove q . You
could divide your proof into two sections, and for clarity, label
them. When you’re making your first notes, you might write “(1)
Prove p” at the top of the page, and “(2) Prove q ” in the middle of
the page. (Of course, you might not be explicitly asked to prove
a conjunction but find that your proof requires that you prove a
conjunction. For instance, if you’re asked to prove that D = E
you will find that, after unpacking the definition of =, you have to
prove: every element of D is an element of E and every element
of E is an element of D).
Proving a Disjunction
When what you are proving takes the form of a disjunction (i.e., it
is a statement of the form “p or q ”), it is enough to show that one
of the disjuncts is true. However, it basically never happens that
either disjunct just follows from the assumptions of your theorem.
More often, the assumptions of your theorem are themselves dis-
junctive, or you’re showing that all things of a certain kind have
one of two properties, but some of the things have the one and
others have the other property. This is where proof by cases is
useful (see below).
Conditional Proof
Many theorems you will encounter are in conditional form (i.e.,
show that if p holds, then q is also true). These cases are nice and
easy to set up—simply assume the antecedent of the conditional
(in this case, p) and prove the conclusion q from it. So if your
theorem reads, “If p then q ,” you start your proof with “assume
p” and at the end you should have proved q .
Conditionals may be stated in different ways. So instead of “If
p then q ,” a theorem may state that “p only if q ,” “q if p,” or “q ,
provided p.” These all mean the same and require assuming p
and proving q from that assumption. Recall that a biconditional
(“p if and only if (iff) q ”) is really two conditionals put together:
if p then q , and if q then p. All you have to do, then, is two
instances of conditional proof: one for the first conditional and
another one for the second. Sometimes, however, it is possible
to prove an “iff” statement by chaining together a bunch of other
“iff” statements so that you start with “p” and end with “q ”—but
in that case you have to make sure that each step really is an “iff.”
Universal Claims
Using a universal claim is simple: if something is true for any-
thing, it’s true for each particular thing. So if, say, the hypothesis
of your proof is A ⊆ B, that means (unpacking the definition
Proof by Cases
Suppose you have a disjunction as an assumption or as an already
established conclusion—you have assumed or proved that p or q
is true. You want to prove r . You do this in two steps: first you
assume that p is true, and prove r , then you assume that q is true
and prove r again. This works because we assume or know that
one of the two alternatives holds. The two steps establish that
either one is sufficient for the truth of r . (If both are true, we
have not one but two reasons for why r is true. It is not neces-
sary to separately prove that r is true assuming both p and q .)
To indicate what we’re doing, we announce that we “distinguish
cases.” For instance, suppose we know that x ∈ B ∪ C . B ∪ C is
defined as {x : x ∈ B or x ∈ C }. In other words, by definition,
x ∈ B or x ∈ C . We would prove that x ∈ A from this by first
assuming that x ∈ B, and proving x ∈ A from this assumption,
and then assume x ∈ C , and again prove x ∈ A from this. You
would write “We distinguish cases” under the assumption, then
“Case (1): x ∈ B” underneath, and “Case (2): x ∈ C ” halfway
down the page. Then you’d proceed to fill in the top half and the
bottom half of the page.
Proof by cases is especially useful if what you’re proving is
itself disjunctive. Here’s a simple example:
Since x ∈ A, A ≠ ∅.
Let a ∈ A.
It’s maybe good practice to keep bound variables like “x” sep-
arate from hypothetical names like a, like we did. In practice,
however, we often don’t and just use x, like so:
Can you spot where the incorrect step occurs and explain why
the result does not hold?
C.5 An Example
Our first example is the following simple fact about unions and in-
tersections of sets. It will illustrate unpacking definitions, proofs
of conjunctions, of universal claims, and proof by cases.
So, if z ∈ A ∪ (B ∩ C ) then z ∈ (A ∪ B) ∩ (A ∪ C ).
So, if z ∈ (A ∪ B) ∩ (A ∪ C ) then z ∈ A ∪ (B ∩ C ). □
show that some claim p is false, i.e., you want to show ¬p. The
most promising strategy is to (a) suppose that p is true, and (b)
show that this assumption leads to something you know to be
false. “Something known to be false” may be a result that con-
flicts with—contradicts—p itself, or some other hypothesis of the
overall claim you are considering. For instance, a proof of “if q
then ¬p” involves assuming that q is true and proving ¬p from
it. If you prove ¬p by contradiction, that means assuming p in
addition to q . If you can prove ¬q from p, you have shown that
the assumption p leads to something that contradicts your other
assumption q , since q and ¬q cannot both be true. Of course,
you have to use other inference patterns in your proof of the con-
tradiction, as well as unpacking definitions. Let’s consider an
example.
A has no elements iff it’s not the case that there is an x such that
x ∈ A.
Since A ⊆ B, x ∈ B.
Proposition C.11. A ⊆ A ∪ B.
A ∩ (A ∪ B) = A
Motivational Videos
Feel like you have no motivation to do your homework? Feeling
down? These videos might help!
• https://www.youtube.com/watch?v=ZXsQAXx_ao0
• https://www.youtube.com/watch?v=BQ4yd2W50No
• https://www.youtube.com/watch?v=StTqXEQ2l-Y
Problems
Problem C.1. Suppose you are asked to prove that A ∩ B ≠ ∅.
Unpack all the definitions occurring here, i.e., restate this in a way
that does not mention “∩”, “=”, or “∅”.
Proof. If z ∈ A ∪ (A ∩ B) then z ∈ A or z ∈ A ∩ B. If z ∈ A ∩ B,
z ∈ A. Any z ∈ A is also ∈ A ∪ (A ∩ B). □
APPENDIX D
Induction
D.1 Introduction
Induction is an important proof technique which is used, in dif-
ferent forms, in almost all areas of logic, theoretical computer
science, and mathematics. It is needed to prove many of the re-
sults in logic.
Induction is often contrasted with deduction, and character-
ized as the inference from the particular to the general. For in-
stance, if we observe many green emeralds, and nothing that we
would call an emerald that’s not green, we might conclude that
all emeralds are green. This is an inductive inference, in that it
proceeds from many particular cases (this emerald is green, that
emerald is green, etc.) to a general claim (all emeralds are green).
Mathematical induction is also an inference that concludes a gen-
eral claim, but it is of a very different kind than this “simple in-
duction.”
Very roughly, an inductive proof in mathematics concludes
that all mathematical objects of a certain sort have a certain prop-
erty. In the simplest case, the mathematical objects an inductive
proof is concerned with are natural numbers. In that case an in-
ductive proof is used to establish that all natural numbers have
some property, and it does this by showing that
D.2 Induction on N
In its simplest form, induction is a technique used to prove results
for all natural numbers. It uses the fact that by starting from 0
and repeatedly adding 1 we eventually reach every natural num-
ber. So to prove that something is true for every number, we can
(1) establish that it is true for 0 and (2) show that whenever it is
true for a number n, it is also true for the next number n +1. If we
abbreviate “number n has property P ” by P (n) (and “number k
has property P ” by P (k ), etc.), then a proof by induction that
P (n) for all n ∈ N consists of:

1. a proof of P (0) (the induction basis), and

2. a proof that, for any n, if P (n) then P (n + 1) (the inductive step).
To make this crystal clear, suppose we have both (1) and (2).
Then (1) tells us that P (0) is true. If we also have (2), we know
in particular that if P (0) then P (0 + 1), i.e., P (1). This follows
from the general statement “for any k , if P (k ) then P (k + 1)” by
putting 0 for k . So by modus ponens, we have that P (1). From (2)
again, now taking 1 for n, we have: if P (1) then P (2). Since we’ve
Theorem D.1. With n dice one can throw all 5n + 1 possible values
between n and 6n.
Proof. Let P (n) be the claim: “It is possible to throw any number
between n and 6n using n dice.” To use induction, we prove:
1. The induction basis P (1), i.e., with just one die, you can
throw any number between 1 and 6.
s0 = 0
sn+1 = sn + (n + 1)

s0 = 0,
s1 = s0 + 1 = 1,
s2 = s1 + 2 = 1 + 2 = 3,
s3 = s2 + 3 = 1 + 2 + 3 = 6, etc.
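As an aside (not part of the text), the recursive definition can be checked against the familiar closed form n(n + 1)/2 for small values; of course, checking finitely many cases is no substitute for the inductive proof:

    def s(n):
        """The recursively defined sum: s(0) = 0, s(n + 1) = s(n) + (n + 1)."""
        return 0 if n == 0 else s(n - 1) + n

    for n in range(6):
        assert s(n) == n * (n + 1) // 2   # the closed form n(n+1)/2
        print(n, s(n))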
thereby established that P holds for all numbers less than 1. And
if we know that if P (l ) for all l < k , then P (k ), we know this
in particular for k = 1. So we can conclude P (1). With this we
have proved P (0) and P (1), i.e., P (l ) for all l < 2, and since we
have also the conditional, if P (l ) for all l < 2, then P (2), we can
conclude P (2), and so on.
In fact, if we can establish the general conditional “for all k ,
if P (l ) for all l < k , then P (k ),” we do not have to establish P (0)
anymore, since it follows from it. For remember that a general
claim like “for all l < k , P (l )” is true if there are no l < k . This
is a case of vacuous quantification: “all As are Bs” is true if there
are no As, ∀x (A(x) → B (x)) is true if no x satisfies A(x). In this
case, the formalized version would be “∀l (l < k → P (l ))”—and
that is true if there are no l < k . And if k = 0 that’s exactly the
case: no l < 0, hence “for all l < 0, P (l )” is true, whatever P is.
A proof of “if P (l ) for all l < k , then P (k )” thus automatically
establishes P (0).
This variant is useful if establishing the claim for k can’t be
made to just rely on the claim for k − 1 but may require the
assumption that it is true for one or more l < k .
closing “]” (if there are any at all), and for any ◦ we can find “nice”
expressions on either side, surrounded by a pair of parentheses.
We would like to precisely specify what counts as a “nice
term.” First of all, every letter by itself is nice. Anything that’s
not just a letter by itself should be of the form “[t ◦ s ]” where s
and t are themselves nice. Conversely, if t and s are nice, then we
can form a new nice term by putting a ◦ between them and sur-
round them by a pair of brackets. We might use these operations
to define the set of nice terms. This is an inductive definition.
a, b, c, d
Note that we have not yet proved that every sequence of sym-
bols that “feels” nice is nice according to this definition. However,
it should be clear that everything we can construct does in fact
“feel nice”: brackets are balanced, and ◦ connects parts that are
themselves nice.
The key feature of inductive definitions is that if you want
to prove something about all nice terms, the definition tells you
which cases you must consider. For instance, if you are told that
t is a nice term, the inductive definition tells you what t can look
like: t can be a letter, or it can be [s 1 ◦ s 2 ] for some pair of
nice terms s 1 and s2 . Because of clause (3), those are the only
possibilities.
When proving claims about all of an inductively defined set,
the strong form of induction becomes particularly important. For
instance, suppose we want to prove that for every nice term of
length n, the number of [ in it is < n/2. This can be seen as a
claim about all n: for every n, the number of [ in any nice term
of length n is < n/2.
m1 + m2 + 1 < l1/2 + l2/2 + 1 = (l1 + l2 + 2)/2 < (l1 + l2 + 3)/2 = k/2.
o(s1, s2) = [s1 ◦ s2]
Proof. By induction on t :
1. t is a letter by itself: Then t has no proper initial segments.
2. If t2 is [s1 ◦ s2], then t1 ⊑ t2 iff t1 = t2, t1 ⊑ s1, or t1 ⊑ s2.
s 1 = b and s 2 = a ◦ b.
r 1 = b ◦ a and r 2 = b.
ductively as follows:

f (t) = 0 if t is a letter
f (t) = max(f (s1), f (s2)) + 1 if t = [s1 ◦ s2].

For instance,

f ([a ◦ b]) = max(f (a), f (b)) + 1 = max(0, 0) + 1 = 1, and
f ([[a ◦ b] ◦ c]) = max(f ([a ◦ b]), f (c)) + 1 = max(1, 0) + 1 = 2.
Here, of course, we assume that s1 and s2 are nice terms, and
make use of the fact that every nice term is either a letter or of
the form [s 1 ◦ s 2 ]. It is again important that it can be of this form
in only one way. To see why, consider again the bracketless terms
we defined earlier. The corresponding “definition” would be:

g (t) = 0 if t is a letter
g (t) = max(g (s1), g (s2)) + 1 if t = s1 ◦ s2.

Consider the bracketless term a ◦ b ◦ c ◦ d. It can be read as s1 ◦ s2 with s1 = a and s2 = b ◦ c ◦ d, or as r1 ◦ r2 with r1 = a ◦ b and r2 = c ◦ d.
Calculating g according to the first way of reading it would give

g (s1 ◦ s2) = max(g (a), g (b ◦ c ◦ d)) + 1 = max(0, 2) + 1 = 3,

while according to the second it would give

g (r1 ◦ r2) = max(g (a ◦ b), g (c ◦ d)) + 1 = max(1, 1) + 1 = 2.
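The role of brackets, i.e., of unique readability, can also be seen in a small computational sketch (not part of the text): with brackets present, there is exactly one way to split a nice term, so a recursive computation of f is well defined. The representation of terms as strings below is a made-up convention for the example.

    def depth(t):
        """f(t): 0 for a letter, max of the two parts plus 1 for [s1 ◦ s2]."""
        if len(t) == 1:              # a single letter, e.g. "a"
            return 0
        # t has the form "[s1◦s2]"; find the ◦ that splits it at bracket depth 1
        level = 0
        for i, ch in enumerate(t):
            if ch == "[":
                level += 1
            elif ch == "]":
                level -= 1
            elif ch == "◦" and level == 1:
                s1, s2 = t[1:i], t[i + 1:-1]
                return max(depth(s1), depth(s2)) + 1
        raise ValueError("not a nice term: " + t)

    print(depth("[a◦b]"))      # 1
    print(depth("[[a◦b]◦c]"))  # 2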
Problems
Problem D.1. Define the set of supernice terms by
The Greek Alphabet
Alpha 𝛼 A Nu 𝜈 N
Beta 𝛽 B Xi 𝜉 𝛯
Gamma 𝛾 𝛤 Omicron o O
Delta 𝛿 𝛥 Pi 𝜋 𝛱
Epsilon 𝜀 E Rho 𝜌 P
Zeta 𝜁 Z Sigma 𝜎 𝛴
Eta 𝜂 H Tau 𝜏 T
Theta 𝜃 𝛩 Upsilon 𝜐 𝛶
Iota 𝜄 I Phi 𝜑 𝛷
Kappa 𝜅 K Chi 𝜒 X
Lambda 𝜆 𝛬 Psi 𝜓 𝛹
Mu 𝜇 M Omega 𝜔 𝛺
About the Open Logic Project
The Open Logic Text is an open-source, collaborative textbook of
formal meta-logic and formal methods, starting at an intermedi-
ate level (i.e., after an introductory formal logic course). Though
aimed at a non-mathematical audience (in particular, students of
philosophy and computer science), it is rigorous.
Coverage of some topics currently included may not yet be
complete, and many sections still require substantial revision.
We plan to expand the text to cover more topics in the future.
We also plan to add features to the text, such as a glossary, a
list of further reading, historical notes, pictures, better explana-
tions, sections explaining the relevance of results to philosophy,
computer science, and mathematics, and more problems and ex-
amples. If you find an error, or have a suggestion, please let the
project team know.
The project operates in the spirit of open source. Not only
is the text freely available, we provide the LaTeX source un-
der the Creative Commons Attribution license, which gives any-
one the right to download, use, modify, re-arrange, convert, and
re-distribute our work, as long as they give appropriate credit.
Please see the Open Logic Project website at openlogicproject.org
for additional information.