- Contents
- Foreword to the first edition
- Preface to the second edition
- Our motivation for (re)writing this book
- What’s new and what’s gone
- The interdependence of chapters and prerequisites
- Acknowledgements
- Added for second edition
- 1 Propositional logic
- 1.1 Declarative sentences
- 1.2 Natural deduction
- 1.2.1 Rules for natural deduction
- 1.2.2 Derived rules
- 1.2.3 Natural deduction in summary
- 1.2.4 Provable equivalence
- 1.2.5 An aside: proof by contradiction
- 1.3 Propositional logic as a formal language
- 1.4 Semantics of propositional logic
- 1.4.1 The meaning of logical connectives
- 1.4.2 Mathematical induction
- 1.4.3 Soundness of propositional logic
- 1.4.4 Completeness of propositional logic
- 1.5 Normal forms
- 1.5.1 Semantic equivalence, satisfiability and validity
- 1.5.2 Conjunctive normal forms and validity
- 1.5.3 Horn clauses and satisfiability
- 1.6 SAT solvers
- 1.6.1 A linear solver
- 1.6.2 A cubic solver
- 1.7 Exercises
- 1.8 Bibliographic notes
- 2 Predicate logic
- 2.1 The need for a richer language
- 2.2 Predicate logic as a formal language
- 2.2.1 Terms
- 2.2.2 Formulas
- 2.2.3 Free and bound variables
- 2.2.4 Substitution
- 2.3 Proof theory of predicate logic
- 2.3.1 Natural deduction rules
- 2.3.2 Quantifier equivalences
- 2.4 Semantics of predicate logic
- 2.4.1 Models
- 2.4.2 Semantic entailment
- 2.4.3 The semantics of equality
- 2.5 Undecidability of predicate logic
- 2.6 Expressiveness of predicate logic
- 2.6.1 Existential second-order logic
- 2.6.2 Universal second-order logic
- 2.7 Micromodels of software
- 2.7.1 State machines
- 2.7.2 Alma – re-visited
- 2.7.3 A software micromodel
- 2.8 Exercises
- 2.9 Bibliographic notes
- 3 Verification by model checking
- 3.1 Motivation for verification
- 3.2 Linear-time temporal logic
- 3.2.1 Syntax of LTL
- 3.2.2 Semantics of LTL
- 3.2.3 Practical patterns of specifications
- 3.2.4 Important equivalences between LTL formulas
- 3.2.5 Adequate sets of connectives for LTL
- 3.3 Model checking: systems, tools, properties
- 3.3.1 Example: mutual exclusion
- 3.3.2 The NuSMV model checker
- 3.3.3 Running NuSMV
- 3.3.4 Mutual exclusion revisited
- 3.3.5 The ferryman
- 3.3.6 The alternating bit protocol
- 3.4 Branching-time logic
- 3.4.1 Syntax of CTL
- 3.4.2 Semantics of computation tree logic
- 3.4.3 Practical patterns of specifications
- 3.4.4 Important equivalences between CTL formulas
- 3.4.5 Adequate sets of CTL connectives
- 3.5.1 Boolean combinations of temporal formulas in CTL
- 3.5.2 Past operators in LTL
- 3.6 Model-checking algorithms
- 3.6.1 The CTL model-checking algorithm
- 3.6.2 CTL model checking with fairness
- 3.6.3 The LTL model-checking algorithm
- 3.7 The fixed-point characterisation of CTL
- 3.7.1 Monotone functions
- 3.7.2 The correctness of SATEG
- 3.7.3 The correctness of SATEU
- 3.8 Exercises
- 3.9 Bibliographic notes
- 4 Program verification
- 4.1 Why should we specify and verify code?
- 4.2 A framework for software verification
- 4.2.1 A core programming language
- 4.2.2 Hoare triples
- 4.2.3 Partial and total correctness
- 4.2.4 Program variables and logical variables
- 4.3 Proof calculus for partial correctness
- 4.3.1 Proof rules
- 4.3.2 Proof tableaux
- 4.3.3 A case study: minimal-sum section
- 4.4 Proof calculus for total correctness
- 4.5 Programming by contract
- 4.6 Exercises
- 4.7 Bibliographic notes
- 5 Modal logics and agents
- 5.1 Modes of truth
- 5.2 Basic modal logic
- 5.2.1 Syntax
- 5.2.2 Semantics
- Equivalences between modal formulas
- Valid formulas
- 5.3 Logic engineering
- 5.3.1 The stock of valid formulas
- 5.3.2 Important properties of the accessibility relation
- 5.3.3 Correspondence theory
- 5.3.4 Some modal logics
- 5.4 Natural deduction
- 5.5 Reasoning about knowledge in a multi-agent system
- 5.5.1 Some examples
- 5.5.2 The modal logic KT45n
- 5.5.3 Natural deduction for KT45n
- 5.5.4 Formalising the examples
- 5.6 Exercises
- 5.7 Bibliographic notes
- 6 Binary decision diagrams
- 6.1 Representing boolean functions
- 6.1.1 Propositional formulas and truth tables
- 6.1.2 Binary decision diagrams
- 6.1.3 Ordered BDDs
- 6.2 Algorithms for reduced OBDDs
- 6.2.1 The algorithm reduce
- 6.2.2 The algorithm apply
- 6.2.3 The algorithm restrict
- 6.2.4 The algorithm exists
- 6.2.5 Assessment of OBDDs
- 6.3 Symbolic model checking
- 6.3.1 Representing subsets of the set of states
- 6.3.2 Representing the transition relation
- 6.3.4 Synthesising OBDDs
- 6.4 A relational mu-calculus
- 6.4.1 Syntax and semantics
- 6.5 Exercises
- 6.6 Bibliographic notes
- Bibliography
- Index
2.3 Proof theory of predicate logic
The last step introducing y is not the bad one; that step is fine. The bad one is the second from last, concluding Q(x0) by ∃x e and violating the side condition that x0 may not leave the scope of its box. You can try a few other ways of ‘proving’ this sequent, but none of them should work (assuming that our proof system is sound with respect to semantic entailment, which we define in the next section). Without this side condition, we would also be able to prove that ‘all x satisfy the property P as soon as one of them does so,’ a semantic disaster of biblical proportions!
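The unsoundness at stake can be made concrete on a tiny model. The following sketch (my own illustration, not from the text) evaluates ∃x P(x) and ∀x P(x) in a two-element model in which P holds of only one element, confirming that the first must not entail the second:

```python
# Hypothetical two-element model: P holds of element 1 but not of element 2.
domain = {1, 2}
P = {1}  # interpretation of the unary predicate P

exists_P = any(x in P for x in domain)  # semantics of "exists x. P(x)"
forall_P = all(x in P for x in domain)  # semantics of "forall x. P(x)"

# Some element satisfies P, yet not all of them do, so a proof system that
# could derive "forall x. P(x)" from "exists x. P(x)" would be unsound.
print(exists_P, forall_P)
```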
2.3.2 Quantifier equivalences
We have already hinted at semantic equivalences between certain forms of quantification. Now we want to provide formal proofs for some of the most commonly used quantifier equivalences. Quite a few of them involve several quantifications over more than just one variable. Thus, this topic is also good practice for using the proof rules for quantifiers in a nested fashion.
For example, the formula ∀x ∀y φ should be equivalent to ∀y ∀x φ since both say that φ should hold for all values of x and y. What about (∀x φ) ∧ (∀x ψ) versus ∀x (φ ∧ ψ)? A moment’s thought reveals that they should have the same meaning as well. But what if the second conjunct does not start with ∀x? So what if we are looking at (∀x φ) ∧ ψ in general and want to compare it with ∀x (φ ∧ ψ)? Here we need to be careful, since x might be free in ψ and would then become bound in the formula ∀x (φ ∧ ψ).
Example 2.12 We may specify ‘Not all birds can fly.’ as ¬∀x (B(x) → F(x)) or as ∃x (B(x) ∧ ¬F(x)). The former formal specification is closer to the structure of the English specification, but the latter is logically equivalent to the former. Quantifier equivalences help us in establishing that specifications that ‘look’ different are really saying the same thing.
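One can spot-check this particular equivalence mechanically. The sketch below (an illustration of mine, not part of the book) enumerates every interpretation of B and F over a three-element domain and confirms that ¬∀x (B(x) → F(x)) and ∃x (B(x) ∧ ¬F(x)) always receive the same truth value:

```python
from itertools import product

domain = range(3)

def formulas_agree():
    # A model assigns each element a pair of truth values: (bird?, flies?).
    for model in product(product([False, True], repeat=2), repeat=len(domain)):
        B = {x for x in domain if model[x][0]}  # extension of B
        F = {x for x in domain if model[x][1]}  # extension of F
        not_all_fly = not all((x not in B) or (x in F) for x in domain)
        flightless_bird = any((x in B) and (x not in F) for x in domain)
        if not_all_fly != flightless_bird:
            return False
    return True

print(formulas_agree())
```

Of course, agreement over one finite domain is only a sanity check; the natural deduction proofs below establish the equivalence for all models.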
Here are some quantifier equivalences which you should become familiar with. As in Chapter 1, we write φ1 ⊣⊢ φ2 as an abbreviation for the validity of φ1 ⊢ φ2 and φ2 ⊢ φ1.
Theorem 2.13 Let φ and ψ be formulas of predicate logic. Then we have the following equivalences:

1. (a) ¬∀x φ ⊣⊢ ∃x ¬φ
   (b) ¬∃x φ ⊣⊢ ∀x ¬φ.
2. Assuming that x is not free in ψ:
   (a) ∀x φ ∧ ψ ⊣⊢ ∀x (φ ∧ ψ)³
   (b) ∀x φ ∨ ψ ⊣⊢ ∀x (φ ∨ ψ)
   (c) ∃x φ ∧ ψ ⊣⊢ ∃x (φ ∧ ψ)
   (d) ∃x φ ∨ ψ ⊣⊢ ∃x (φ ∨ ψ)
   (e) ∀x (ψ → φ) ⊣⊢ ψ → ∀x φ
   (f) ∃x (φ → ψ) ⊣⊢ ∀x φ → ψ
   (g) ∀x (φ → ψ) ⊣⊢ ∃x φ → ψ
   (h) ∃x (ψ → φ) ⊣⊢ ψ → ∃x φ.
3. (a) ∀x φ ∧ ∀x ψ ⊣⊢ ∀x (φ ∧ ψ)
   (b) ∃x φ ∨ ∃x ψ ⊣⊢ ∃x (φ ∨ ψ).
4. (a) ∀x ∀y φ ⊣⊢ ∀y ∀x φ
   (b) ∃x ∃y φ ⊣⊢ ∃y ∃x φ.
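Before proving these equivalences syntactically, it may help to see a couple of them checked semantically. The sketch below (mine, not the book’s) brute-forces 1(a) and 3(a) over all interpretations of two unary predicates P (standing in for φ) and Q (for ψ) on a three-element domain; a fixed finite domain gives only a sanity check, not a proof:

```python
from itertools import product

D = range(3)

def interpretations():
    # every pair of unary predicates P, Q over the domain D
    for bits in product([False, True], repeat=2 * len(D)):
        P = {x for x in D if bits[x]}
        Q = {x for x in D if bits[len(D) + x]}
        yield P, Q

# 1(a): not (forall x. P)  iff  exists x. not P
ok_1a = all((not all(x in P for x in D)) == any(x not in P for x in D)
            for P, Q in interpretations())

# 3(a): (forall x. P) and (forall x. Q)  iff  forall x. (P and Q)
ok_3a = all((all(x in P for x in D) and all(x in Q for x in D))
            == all((x in P) and (x in Q) for x in D)
            for P, Q in interpretations())

print(ok_1a, ok_3a)
```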
PROOF: We will prove most of these sequents; the proofs for the remaining ones are straightforward adaptations and are left as exercises. Recall that we sometimes write ⊥ to denote any contradiction.
1. (a) We will lead up to this by proving the validity of two simpler sequents first: ¬(p1 ∧ p2) ⊢ ¬p1 ∨ ¬p2 and then ¬∀x P(x) ⊢ ∃x ¬P(x). The reason for proving the first of these is to illustrate the close relationship between ∧ and ∀ on the one hand and ∨ and ∃ on the other – think of a model with just two elements 1 and 2 such that pi (i = 1, 2) stands for P(x) evaluated at i. The idea is that proving this propositional sequent should give us inspiration for proving the second one of predicate logic. The reason for proving the latter sequent is that it is a special case (in which φ equals P(x)) of the one we are really after, so again it should be simpler while providing some inspiration. So, let’s go.
1  ¬(p1 ∧ p2)                              premise
2      ¬(¬p1 ∨ ¬p2)                        assumption
3          ¬p1        assumption   |   ¬p2        assumption
4          ¬p1 ∨ ¬p2  ∨i1 3        |   ¬p1 ∨ ¬p2  ∨i2 3
5          ⊥          ¬e 4, 2      |   ⊥          ¬e 4, 2
6      p1             PBC 3−5      |   p2         PBC 3−5
7      p1 ∧ p2                             ∧i 6, 6
8      ⊥                                   ¬e 7, 1
9  ¬p1 ∨ ¬p2                               PBC 2−8

(Lines 3−6 consist of two parallel boxes, shown here side by side.)
³ Remember that ∀x φ ∧ ψ is implicitly bracketed as (∀x φ) ∧ ψ, by virtue of the binding priorities.
You have seen this sort of proof before, in Chapter 1. It is an example of something which requires proof by contradiction, or ¬¬e, or LEM (meaning that it simply cannot be proved in the reduced natural deduction system which discards these three rules) – in fact, the proof above used the rule PBC three times.
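Although the proof needs classical rules, the propositional sequent itself is easy to confirm semantically. A minimal truth-table check (my own sketch, not part of the book):

```python
from itertools import product

# For every valuation, if the premise ¬(p1 ∧ p2) holds,
# then the conclusion ¬p1 ∨ ¬p2 holds as well.
entails = all(
    (not premise) or conclusion
    for p1, p2 in product([False, True], repeat=2)
    for premise in [not (p1 and p2)]
    for conclusion in [(not p1) or (not p2)]
)
print(entails)
```

The sequent is semantically valid either way; the point of the discussion above is proof-theoretic, namely which rules are indispensable for deriving it.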
Now we prove the validity of ¬∀x P(x) ⊢ ∃x ¬P(x) similarly, except that where the rules for ∧ and ∨ were used we now use those for ∀ and ∃:
1   ¬∀x P(x)                premise
2       ¬∃x ¬P(x)           assumption
3           x0
4               ¬P(x0)      assumption
5               ∃x ¬P(x)    ∃x i 4
6               ⊥           ¬e 5, 2
7           P(x0)           PBC 4−6
8       ∀x P(x)             ∀x i 3−7
9       ⊥                   ¬e 8, 1
10  ∃x ¬P(x)                PBC 2−9
You will really benefit from spending time understanding the way this proof mimics the one above it. This insight is very useful for constructing predicate logic proofs: first construct a similar propositional proof and then mimic it.
Next we prove that ¬∀x φ ⊢ ∃x ¬φ is valid:
1   ¬∀x φ                   premise
2       ¬∃x ¬φ              assumption
3           x0
4               ¬φ[x0/x]    assumption
5               ∃x ¬φ       ∃x i 4
6               ⊥           ¬e 5, 2
7           φ[x0/x]         PBC 4−6
8       ∀x φ                ∀x i 3−7
9       ⊥                   ¬e 8, 1
10  ∃x ¬φ                   PBC 2−9
Proving that the reverse ∃x ¬φ ⊢ ¬∀x φ is valid is more straightforward, for it does not involve proof by contradiction, ¬¬e, or LEM. Unlike its converse, it has a constructive proof which the intuitionists do accept. We could again prove the corresponding propositional sequent, but we leave that as an exercise.
1   ∃x ¬φ               premise
2       ∀x φ            assumption
3           x0
4           ¬φ[x0/x]    assumption
5           φ[x0/x]     ∀x e 2
6           ⊥           ¬e 5, 4
7       ⊥               ∃x e 1, 3−6
8   ¬∀x φ               ¬i 2−7
2. (a) Validity of ∀x φ ∧ ψ ⊢ ∀x (φ ∧ ψ) can be proved thus:
1   (∀x φ) ∧ ψ           premise
2   ∀x φ                 ∧e1 1
3   ψ                    ∧e2 1
4       x0
5       φ[x0/x]          ∀x e 2
6       φ[x0/x] ∧ ψ      ∧i 5, 3
7       (φ ∧ ψ)[x0/x]    identical to 6, since x not free in ψ
8   ∀x (φ ∧ ψ)           ∀x i 4−7
The argument for the reverse validity can go like this:

1   ∀x (φ ∧ ψ)           premise
2       x0
3       (φ ∧ ψ)[x0/x]    ∀x e 1
4       φ[x0/x] ∧ ψ      identical to 3, since x not free in ψ
5       ψ                ∧e2 3
6       φ[x0/x]          ∧e1 3
7   ∀x φ                 ∀x i 2−6
8   (∀x φ) ∧ ψ           ∧i 7, 5
Notice that the use of ∧i in the last line is permissible, because ψ was obtained for any instantiation of the formula in line 1; although a formal tool for proof support may complain about such practice.
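The equivalence 2(a) just proved can likewise be spot-checked semantically. In the sketch below (my own illustration), φ is read as a unary predicate P, and since x is not free in ψ, ψ is modelled as a single truth value q that does not vary with x:

```python
from itertools import product

D = range(3)  # a nonempty three-element domain

def check_2a():
    for bits in product([False, True], repeat=len(D)):
        P = {x for x in D if bits[x]}       # extension of the predicate P
        for q in (False, True):             # truth value of psi
            lhs = all(x in P for x in D) and q     # (forall x. P) and psi
            rhs = all((x in P) and q for x in D)   # forall x. (P and psi)
            if lhs != rhs:
                return False
    return True

print(check_2a())
```

The nonempty domain matters here: over an empty domain the right-hand side would be vacuously true while the left-hand side would still depend on q, which is one reason predicate logic semantics assumes nonempty models.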
3. (b) The sequent (∃x φ) ∨ (∃x ψ) ⊢ ∃x (φ ∨ ψ) is proved valid using the rule ∨e; so we have two principal cases, each of which requires the rule ∃x i:

1   (∃x φ) ∨ (∃x ψ)                                              premise
2       ∃x φ                  assumption  |  ∃x ψ                  assumption
3           x0 φ[x0/x]        assumption  |  x0 ψ[x0/x]            assumption
4           φ[x0/x] ∨ ψ[x0/x]  ∨i1 3      |  φ[x0/x] ∨ ψ[x0/x]     ∨i2 3
5           (φ ∨ ψ)[x0/x]      identical  |  (φ ∨ ψ)[x0/x]         identical
6           ∃x (φ ∨ ψ)         ∃x i 5     |  ∃x (φ ∨ ψ)            ∃x i 5
7       ∃x (φ ∨ ψ)             ∃x e 2, 3−6 | ∃x (φ ∨ ψ)            ∃x e 2, 3−6
8   ∃x (φ ∨ ψ)                                                   ∨e 1, 2−7

(Lines 2−7 consist of two parallel boxes, shown here side by side.)
The converse sequent has ∃x (φ ∨ ψ) as premise, so its proof has to use ∃x e as its last rule; for that rule, we need φ ∨ ψ as a temporary assumption and need to conclude (∃x φ) ∨ (∃x ψ) from those data; of course, the assumption φ ∨ ψ requires the usual case analysis:
1   ∃x (φ ∨ ψ)                            premise
2       x0 (φ ∨ ψ)[x0/x]                  assumption
3       φ[x0/x] ∨ ψ[x0/x]                 identical to 2
4           φ[x0/x]     assumption  |  ψ[x0/x]     assumption
5           ∃x φ        ∃x i 4      |  ∃x ψ        ∃x i 4
6           ∃x φ ∨ ∃x ψ  ∨i1 5      |  ∃x φ ∨ ∃x ψ  ∨i2 5
7       ∃x φ ∨ ∃x ψ                       ∨e 3, 4−6
8   ∃x φ ∨ ∃x ψ                           ∃x e 1, 2−7

(Lines 4−6 consist of two parallel boxes, shown here side by side.)
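As with the earlier equivalences, 3(b) can be confirmed on small finite models. A brute-force sketch of mine, reading φ and ψ as unary predicates P and Q:

```python
from itertools import product

D = range(3)

def check_3b():
    for bits in product([False, True], repeat=2 * len(D)):
        P = {x for x in D if bits[x]}           # extension of phi
        Q = {x for x in D if bits[len(D) + x]}  # extension of psi
        lhs = any(x in P for x in D) or any(x in Q for x in D)
        rhs = any((x in P) or (x in Q) for x in D)
        if lhs != rhs:
            return False
    return True

print(check_3b())
```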
4. (b) Given the premise ∃x ∃y φ, we have to nest ∃x e and ∃y e to conclude ∃y ∃x φ. Of course, we have to obey the format of these elimination rules as done below: