
14 Proofs and Completeness

Introductory textbooks in logic devote much space to developing one or another kind of proof procedure, enabling one to recognize that a sentence D is implied by a set of sentences Γ, with different textbooks favoring different procedures. In this chapter we introduce the kind of proof procedure, called a Gentzen system or sequent calculus, that is used in more advanced work, where in contrast to introductory textbooks the emphasis is on general theoretical results about the existence of proofs, rather than practice in constructing specific proofs. The details of any particular procedure, ours included, are less important than some features shared by all procedures, notably the features that whenever there is a proof of D from Γ, D is a consequence of Γ, and conversely, whenever D is a consequence of Γ, there is a proof of D from Γ. These features are called soundness and completeness, respectively. (Another feature is that definite, explicit rules can be given for determining in any given case whether a purported proof or deduction really is one or not; but we defer detailed consideration of this feature to the next chapter.) Section 14.1 introduces our version or variant of sequent calculus. Section 14.2 presents proofs of soundness and completeness. The former is easy; the latter is not so easy, but all the hard work for it has been done in the previous chapter. Section 14.3, which is optional, comments briefly on the relationship of our formal notion to other such formal notions, as might be found in introductory textbooks or elsewhere, and of any formal notion to the unformalized notion of a deduction of a conclusion from a set of premisses, or proof of a theorem from a set of axioms.

14.1 Sequent Calculus

The idea in setting up a proof procedure is that even when it is not obvious that Γ implies D, we may hope to break the route from Γ to D down into a series of small steps that are obvious, and thus render the implication relationship recognizable. Every introductory textbook develops some kind of formal notion of proof or deduction. Though these take different shapes in different books, in every case a formal deduction is some kind of finite array of symbols, and there are definite, explicit rules for determining whether a given finite array of symbols is or is not a formal deduction. The notion of deduction is ‘syntactic’ in the sense that these rules mention the internal structure of formulas, but do not mention interpretations. In the end, though, the condition that there exists a deduction of D from Γ turns out to be equivalent to the condition that every interpretation making all sentences in Γ true makes the sentence D true, which was the original ‘semantic’ definition of consequence.


This equivalence has two directions. The result that whenever D is deducible from Γ, D is a consequence of Γ, is the soundness theorem. The result that whenever D is a consequence of Γ, then D is deducible from Γ, is the Gödel completeness theorem.

Our goal in this chapter will be to present a particular system of deduction for which soundness and completeness can be established. The proof of completeness uses the main lemma from the preceding chapter. Our system, which is of the general sort used in more advanced, theoretical studies, will be different from that used in virtually any introductory textbook—or to put a positive spin on it, virtually no reader will have an advantage over any other reader of previous acquaintance with the particular kind of system we are going to be using. Largely for the benefit of readers who have been or will be looking at other books, in the final section of the chapter we briefly indicate the kinds of variations that are possible and are actually to be met with in the literature. But as a matter of fact, it is not the details of any particular system that really matter, but rather the common features shared by all such systems, and except for a brief mention at the end of the next chapter (in a section that itself is optional reading), we will when this chapter is over never again have occasion to mention the details of our particular system or any other. The existence of some proof procedure or other with the properties of soundness and completeness will be the result that will matter.

[Let us indicate one consequence of the existence of such a procedure that will be looked at more closely in the next chapter. It is known that the consequence relation is not effectively decidable: that there cannot be a procedure, governed by definite and explicit rules, whose application would, in every case, in principle enable one to determine in a finite amount of time whether or not a given finite set of sentences Γ implies a given sentence D. Two proofs of this fact appear in sections 11.1 and 11.2, with another to come in chapter 17. But the existence of a sound and complete proof procedure shows that the consequence relation is at least (positively) effectively semidecidable. There is a procedure whose application would, in case Γ does imply D, in principle enable one to determine in a finite amount of time that it does so. The procedure is simply to search systematically through all finite objects of the appropriate kind, determining for each whether or not it constitutes a deduction of D from Γ. For it is part of the notion of a proof procedure that there are definite and explicit rules for determining whether a given finite object of the appropriate sort does or does not constitute such a deduction. If Γ does imply D, then checking through all possible deductions one by one, one would by completeness eventually find one that is a deduction of D from Γ, thus by soundness showing that Γ does imply D; but if Γ does not imply D, checking through all possible deductions would go on forever without result. As we said, these matters will be further discussed in the next chapter.]
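In outline, and not as anything from the text, the search just described can be pictured in code as follows. Here alphabet stands for the finite stock of symbols out of which derivations are written, and is_deduction for the assumed mechanical check that a given string constitutes a deduction of D from Γ; both are supplied from outside, so the sketch only exhibits the enumerate-and-check shape of the procedure.

    from itertools import count, product

    def semidecide(alphabet, is_deduction):
        # Enumerate every finite string over the alphabet, shortest first.
        # Halt as soon as one passes the mechanical check; if none ever does,
        # the search runs forever, which is exactly semidecidability.
        for n in count(1):
            for chars in product(alphabet, repeat=n):
                candidate = "".join(chars)
                if is_deduction(candidate):
                    return candidate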

At the same time one looks for a syntactic notion of deduction to capture and make recognizable the semantic notion of consequence, one would like to have also a syntactic notion of refutation to capture the semantic notion of unsatisfiability, and a syntactic notion of demonstration to capture the semantic notion of validity. At the cost of some very slight artificiality, the three notions of consequence, unsatisfiability, and validity can be subsumed as special cases under a single, more general notion. We say that one set of sentences Γ secures another set of sentences Δ if every interpretation that makes all sentences in Γ true makes some sentence in Δ true.


(Note that when the sets are finite, Γ = {C1, . . . , Cm} and Δ = {D1, . . . , Dn}, this amounts to saying that every interpretation that makes C1 & . . . & Cm true makes D1 ∨ · · · ∨ Dn true: the elements of Γ are being taken jointly as premisses, but the elements of Δ are being taken alternatively as conclusions, so to speak.) When a set contains but a single sentence, then of course making some sentence in the set true and making every sentence in the set true come to the same thing, namely, making the sentence in the set true; and in this case we naturally speak of the sentence as doing the securing or as being secured. When the set is empty, then of course the condition that some sentence in it is made true is not fulfilled, since there is no sentence in it to be made true; and we count the condition that every sentence in the set is made true as being ‘vacuously’ fulfilled. (After all, there is no sentence in the set that is not made true.) With this understanding, consequence, unsatisfiability, and validity can be seen to be special cases of security in the way listed in Table 14-1.

Table 14-1. Metalogical notions

D is a consequence of Γ        if and only if        Γ secures {D}
Γ is unsatisfiable             if and only if        Γ secures ∅
D is valid                     if and only if        ∅ secures {D}
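To make the definition of security concrete, here is a small brute-force illustration, not from the text, restricted to sentences of propositional logic built from sentence letters with ∼ and ∨; the representation of sentences as nested tuples is an ad hoc choice for the sketch.

    from itertools import product

    # ('atom', 'A') is a sentence letter, ('not', S) is ~S, ('or', S, T) is (S v T).

    def atoms(s):
        if s[0] == 'atom':
            return {s[1]}
        if s[0] == 'not':
            return atoms(s[1])
        return atoms(s[1]) | atoms(s[2])

    def value(s, v):
        if s[0] == 'atom':
            return v[s[1]]
        if s[0] == 'not':
            return not value(s[1], v)
        return value(s[1], v) or value(s[2], v)

    def secures(gamma, delta):
        # True iff every assignment making all of gamma true makes some of delta true.
        letters = sorted(set().union(*[atoms(s) for s in gamma | delta]))
        for bits in product([True, False], repeat=len(letters)):
            v = dict(zip(letters, bits))
            if all(value(s, v) for s in gamma) and not any(value(s, v) for s in delta):
                return False
        return True

    A, B = ('atom', 'A'), ('atom', 'B')
    print(secures({A}, {('or', A, B)}))              # True:  A v B is a consequence of {A}
    print(secures({A, ('not', A)}, set()))           # True:  {A, ~A} is unsatisfiable
    print(secures(set(), {('or', A, ('not', A))}))   # True:  A v ~A is valid

The three printed lines correspond to the three rows of Table 14-1.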

Correspondingly, our approach to deductions will subsume them along with refutations and demonstrations under a more general notion of derivation. Thus for us the soundness and completeness theorems will be theorems relating a syntactic notion of derivability to a semantic notion of security, from which relationship various other relationships between syntactic and semantic notions will follow as special cases. The objects with which we are going to work in this chapter—the objects of which derivations will be composed—are called sequents. A sequent Γ ⇒ Δ consists of a finite set of sentences Γ on the left, the symbol ⇒ in the middle, and a finite set of sentences Δ on the right. We call the sequent secure if its left side Γ secures its right side Δ. The goal will be to define a notion of derivation so that there will be a derivation of a sequent if and only if it is secure.
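For instance, where A and B are distinct atomic sentences, the sequent {A} ⇒ {A ∨ B} is secure, since any interpretation making A true makes A ∨ B true; the sequent {A ∨ B} ⇒ {A} is not, since an interpretation making B true and A false makes everything on its left true without making anything on its right true.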

Deliberately postponing the details of the definition, we just for the moment say that a derivation will be a kind of finite sequence of sequents, called the steps (or lines) of the derivation, subject to certain syntactic conditions or rules that remain to be stated. A derivation will be a derivation of a sequent if and only if that sequent is its last step (or bottom line). A sequent will be derivable if and only if there is some derivation of it. It is in terms of this notion of derivation that we will define other syntactic notions of interest, as in Table 14-2.

Table 14-2. Metalogical notions

A deduction of D from Γ        is a        derivation of Γ ⇒ {D}
A refutation of Γ              is a        derivation of Γ ⇒ ∅
A demonstration of D           is a        derivation of ∅ ⇒ {D}


We naturally say that D is deducible from Γ if there is a deduction of D from Γ, that Γ is refutable if there is a refutation of Γ, and that D is demonstrable if there is a demonstration of D, where deduction, refutation, and demonstration are defined in terms of derivation as in Table 14-2. An irrefutable set of sentences is also called consistent, and a refutable one inconsistent. Our main goal will be so to define the notion of derivation that we can prove the following two theorems.

14.1 Theorem (Soundness theorem). Every derivable sequent is secure.

14.2 Theorem (Gödel completeness theorem). Every secure sequent is derivable.

It will then immediately follow (on comparing Tables 14-1 and 14-2) that there is an exact coincidence between two parallel sets of metalogical notions, the semantic and the syntactic, as shown in Table 14-3.

Table 14-3. Correspondences between metalogical notions

D is deducible from Γ          if and only if        D is a consequence of Γ
Γ is inconsistent              if and only if        Γ is unsatisfiable
D is demonstrable              if and only if        D is valid

To generalize to the case of infinite sets of sentences, we simply define Δ to be derivable from Γ if and only if some finite subset Δ0 of Δ is derivable from some finite subset Γ0 of Γ, and define deducibility and inconsistency in the infinite case similarly. As an easy corollary of the compactness theorem, Γ secures Δ if and only if some finite subset Γ0 of Γ secures some finite subset Δ0 of Δ. Thus Theorems 14.1 and 14.2 will extend to the infinite case: Δ will be derivable from Γ if and only if Δ is secured by Γ, even when Γ and Δ are infinite.
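(In outline: Γ secures Δ if and only if there is no interpretation making every sentence in Γ true and every sentence in Δ false, that is, if and only if Γ ∪ {∼D: D in Δ} is unsatisfiable. By the compactness theorem this holds if and only if some finite subset of that set is unsatisfiable, and any such finite subset is included in some Γ0 ∪ {∼D: D in Δ0} with Γ0 and Δ0 finite, which is unsatisfiable if and only if Γ0 secures Δ0.)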

So much by way of preamble. It remains, then, to specify what conditions a sequence of sequents must fulfill in order to count as a derivation. In order for a sequence of steps to qualify as a derivation, each step must either be of the form {A} ⇒ {A} or must follow from earlier steps according to one or another of several rules of inference permitting passage from one or more sequents taken as premisses to some other sequent taken as conclusion. The usual way of displaying rules is to write the premiss or premisses of the rule, a line below them, and the conclusion of the rule. The provision that a step may be of the form {A} ⇒ {A} may itself be regarded as a special case of a rule of inference with zero premisses; and in listing the rules of inference, we in fact list this one first. In general, in the case of any rule, any sentence that appears in a premiss but not the conclusion of a rule is said to be exiting, any that appears in the conclusion but not the premisses is said to be entering, and any that appears in both a premiss and the conclusion is said to be standing. In the special case of the zero-premiss rule and steps of the form {A} ⇒ {A}, the sentence A counts as entering. It will be convenient in this chapter to work as in the preceding chapter with a version of first-order logic in which the only logical symbols are ∼, ∨, ∃, =, that is, in which & and ∀ are treated as unofficial abbreviations. (If we admitted & and ∀, there would be a need for four more rules, two for each. Nothing would be harder, but everything would be more tedious.) With this understanding, the rules are those given in Table 14-4.


Table 14-4. Rules of sequent calculus

(R0)      -----------------------
          {A} ⇒ {A}

(R1)      Γ ⇒ Δ
          -----------------------     Γ a subset of Γ′, Δ a subset of Δ′
          Γ′ ⇒ Δ′

(R2a)     Γ ∪ {A} ⇒ Δ
          -----------------------
          Γ ⇒ Δ ∪ {∼A}

(R2b)     Γ ⇒ Δ ∪ {A}
          -----------------------
          Γ ∪ {∼A} ⇒ Δ

(R3)      Γ ⇒ Δ ∪ {A, B}
          -----------------------
          Γ ⇒ Δ ∪ {(A ∨ B)}

(R4)      Γ ∪ {A} ⇒ Δ
          Γ ∪ {B} ⇒ Δ
          -----------------------
          Γ ∪ {A ∨ B} ⇒ Δ

(R5)      Γ ⇒ Δ ∪ {A(s)}
          -----------------------
          Γ ⇒ Δ ∪ {∃x A(x)}

(R6)      Γ ∪ {A(c)} ⇒ Δ
          -----------------------     c not in Γ or Δ or A(x)
          Γ ∪ {∃x A(x)} ⇒ Δ

(R7)      Γ ∪ {s = s} ⇒ Δ
          -----------------------
          Γ ⇒ Δ

(R8a)     Γ ⇒ Δ ∪ {A(t)}
          -----------------------
          Γ ∪ {s = t} ⇒ Δ ∪ {A(s)}

(R8b)     Γ ∪ {A(t)} ⇒ Δ
          -----------------------
          Γ ∪ {s = t, A(s)} ⇒ Δ

(R9a)     Γ ⇒ Δ ∪ {∼A}
          -----------------------
          Γ ∪ {A} ⇒ Δ

(R9b)     Γ ∪ {∼A} ⇒ Δ
          -----------------------
          Γ ⇒ Δ ∪ {A}


These rules roughly correspond to patterns of inference used in unformalized deductive argument, and especially mathematical proof. (R2a) or right negation introduction corresponds to ‘proof by contradiction’, where an assumption A is shown to be inconsistent with background assumptions and it is concluded that those background assumptions imply its negation. (R2b) or left negation introduction corresponds to the inverse form of inference. (R3) or right disjunction introduction, together with (R1), allows us to pass from Γ ⇒ Δ ∪ {A} or Γ ⇒ Δ ∪ {B} by way of Γ ⇒ Δ ∪ {A, B} to Γ ⇒ Δ ∪ {(A ∨ B)}, which corresponds to inferring a disjunction from one disjunct. (R4) or left disjunction introduction corresponds to ‘proof by cases’, where something that has been shown to follow from each disjunct is concluded to follow from a disjunction. (R5) or right existential quantifier introduction corresponds to inferring an existential generalization from a particular instance. (R6) or left existential-quantifier introduction is a bit subtler: it corresponds to a common procedure in mathematical proof where, assuming there is something for which a condition A holds, we ‘give it a name’ and say ‘let c be something for which the condition A holds’, where c is some previously unused name, and thereafter proceed to count whatever statements not mentioning c that can be shown to follow from the assumption that condition A holds for c as following from the original assumption that there is something for which condition A holds. (R8a, b) correspond to two forms of ‘substituting equals for equals’.



A couple of trivial examples will serve to show how derivations are written.

14.3 Example. The deduction of a disjunction from a disjunct.

(1)   A ⇒ A            (R0)
(2)   A ⇒ A, B         (R1), (1)
(3)   A ⇒ A ∨ B        (R3), (2)

The first thing to note here is that though officially what occur on the left and right sides of the double arrow in a sequent are sets, and sets have no intrinsic order among their elements, in writing a sequent, we do have to write those elements in some order or other. {A, B} and {B, A} and for that matter {A, A, B} are the same set, and therefore {A} ⇒ {A, B} and {A} ⇒ {B, A} and for that matter {A} ⇒ {A, A, B} are the same sequent, but we have chosen to write the sequent the first way. Actually, we have not written the braces at all, nor will they be written in future when writing out derivations. [For that matter, we have also been writing A ∨ B for (A ∨ B), and will be writing Fx for F(x) below.] An alternative approach would be to have sequences rather than sets of formulas on both sides of a sequent, and introduce additional ‘structural’ rules allowing one to reorder the sentences in a sequence, and for that matter, to introduce or eliminate repetitions.

The second thing to note here is that the numbering of the lines on the left, and the annotations on the right, are not officially part of the derivation. In practice, their presence makes it easier to check that a purported derivation really is one; but in principle it can be checked whether a string of symbols constitutes a derivation even without such annotation. For there are, after all, at each step only finitely many rules that could possibly have been applied to get that step from earlier steps, and only finitely many earlier steps any rule could possibly have been applied to, and in principle we need only check through these finitely many possibilities to find whether there is a justification for the given step.
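As an illustration of how such a mechanical check might go, and not as anything from the text, here is a small Python sketch covering just (R0), (R1), and (R3). A sequent is taken to be a pair of frozensets, and the only sentence structure the checker needs to look inside is disjunction, represented here as a tuple ('or', A, B).

    def is_R0(concl):
        left, right = concl
        return len(left) == 1 and left == right

    def is_R1(prem, concl):
        # thinning: both sides may only grow
        return prem[0] <= concl[0] and prem[1] <= concl[1]

    def is_R3(prem, concl):
        # right disjunction introduction: premiss G => D u {A, B}, conclusion G => D u {(A v B)}
        if prem[0] != concl[0]:
            return False
        for d in concl[1]:
            if isinstance(d, tuple) and d[0] == 'or':
                a, b = d[1], d[2]
                for delta in (concl[1] - {d}, concl[1]):   # D may or may not already contain (A v B)
                    if prem[1] == delta | {a, b}:
                        return True
        return False

    # Checking the steps of Example 14.3 without their annotations:
    line1 = (frozenset({'A'}), frozenset({'A'}))
    line2 = (frozenset({'A'}), frozenset({'A', 'B'}))
    line3 = (frozenset({'A'}), frozenset({('or', 'A', 'B')}))
    print(is_R0(line1))          # True: line (1) is an (R0) step
    print(is_R1(line1, line2))   # True: line (2) follows from (1) by (R1)
    print(is_R3(line2, line3))   # True: line (3) follows from (2) by (R3)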

14.4 Example. The deduction of a conjunct from a conjunction

(1)   A ⇒ A                     (R0)
(2)   A, B ⇒ A                  (R1), (1)
(3)   B ⇒ A, ∼A                 (R2a), (2)
(4)   ⇒ A, ∼A, ∼B               (R2a), (3)
(5)   ⇒ A, ∼A ∨ ∼B              (R3), (4)
(6)   ∼(∼A ∨ ∼B) ⇒ A            (R2b), (5)
(7)   A & B ⇒ A                 abbreviation, (6)

Here the last step, reminding us that ∼(∼A ∨ ∼B) is what A & B abbreviates, is unofficial, so to speak. We omit the word ‘abbreviation’ in such cases in the future.


It is because & is not in the official notation, and we do not directly have rules for it, that the derivation in this example needs more steps than that in the preceding example.

Since the two examples so far have both been of derivations constituting deductions, let us give two equally short examples of derivations constituting refutations and demonstrations.

14.5 Example. Demonstration of a tautology

(1)   A ⇒ A            (R0)
(2)   ⇒ A, ∼A          (R2a), (1)
(3)   ⇒ A ∨ ∼A         (R3), (2)

14.6 Example. Refutation of a contradiction

 

(1)   ∼A ⇒ ∼A                   (R0)
(2)   ⇒ ∼A, ∼∼A                 (R2a), (1)
(3)   ⇒ ∼A ∨ ∼∼A                (R3), (2)
(4)   ∼(∼A ∨ ∼∼A) ⇒             (R2b), (3)
(5)   A & ∼A ⇒                  (4)

The remarks above about the immateriality of the order in which sentences are written are especially pertinent to the next example.

14.7 Example. Commutativity of disjunction

(1)   A ⇒ A            (R0)
(2)   A ⇒ B, A         (R1), (1)
(3)   A ⇒ B ∨ A        (R3), (2)
(4)   B ⇒ B            (R0)
(5)   B ⇒ B, A         (R1), (4)
(6)   B ⇒ B ∨ A        (R3), (5)
(7)   A ∨ B ⇒ B ∨ A    (R4), (3), (6)

The commutativity of conjunction would be obtained similarly, though there would be more steps, for the same reason that there are more steps in Examples 14.4 and 14.6 than in Examples 14.3 and 14.5. Next we give a couple of somewhat more substantial examples, illustrating how the quantifier rules are to be used, and a couple of counter-examples to show how they are not to be used.

14.8 Example. Use of the first quantifier rule

(1)   Fc ⇒ Fc                   (R0)
(2)   ⇒ Fc, ∼Fc                 (R2a), (1)
(3)   ⇒ ∃xFx, ∼Fc               (R5), (2)
(4)   ⇒ ∃xFx, ∃x∼Fx             (R5), (3)
(5)   ∼∃x∼Fx ⇒ ∃xFx             (R2b), (4)
(6)   ∀xFx ⇒ ∃xFx               (5)
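Here the final, unofficial step again merely records an abbreviation: in the notation of this chapter ∀xFx abbreviates ∼∃x∼Fx, just as A & B abbreviates ∼(∼A ∨ ∼B) in Examples 14.4 and 14.6.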

 


14.9 Example. Proper use of the two quantifier rules

 

(1)   Fc ⇒ Fc                              (R0)
(2)   Fc ⇒ Fc, Gc                          (R1), (1)
(3)   Gc ⇒ Gc                              (R0)
(4)   Gc ⇒ Fc, Gc                          (R1), (3)
(5)   Fc ∨ Gc ⇒ Fc, Gc                     (R4), (2), (4)
(6)   Fc ∨ Gc ⇒ ∃xFx, Gc                   (R5), (5)
(7)   Fc ∨ Gc ⇒ ∃xFx, ∃xGx                 (R5), (6)
(8)   Fc ∨ Gc ⇒ ∃xFx ∨ ∃xGx                (R3), (7)
(9)   ∃x(Fx ∨ Gx) ⇒ ∃xFx ∨ ∃xGx            (R6), (8)

14.10 Example. Improper use of the second quantifier rule

 

(1)   Fc ⇒ Fc                   (R0)
(2)   Fc, ∼Fc ⇒                 (R2b), (1)
(3)   ∃xFx, ∼Fc ⇒               (R6), (2)
(4)   ∃xFx, ∃x∼Fx ⇒             (R6), (3)
(5)   ∃xFx ⇒ ∼∃x∼Fx             (R2a), (4)
(6)   ∃xFx ⇒ ∀xFx               (5)

Since ∃xFx does not imply ∀xFx, there must be something wrong in this last example, either with our rules, or with the way they have been deployed in the example. In fact, it is the deployment of (R6) at line (3) that is illegitimate. Specifically, the side condition ‘c not in Γ’ in the official statement of the rule is not met, since the relevant Γ in this case would be {∼Fc}, which contains c. Contrast this with a legitimate application of (R6) as at the last line in the preceding example. Ignoring the side condition ‘c not in Δ’ can equally lead to trouble, as in the next example. (Trouble can equally arise from ignoring the side condition ‘c not in A(x)’, but we leave it to the reader to provide an example.)

14.11 Example. Improper use of the second quantifier rule

(1)   Fc ⇒ Fc                   (R0)
(2)   ∃xFx ⇒ Fc                 (R6), (1)
(3)   ∃xFx, ∼Fc ⇒               (R2b), (2)
(4)   ∃xFx, ∃x∼Fx ⇒             (R6), (3)
(5)   ∃xFx ⇒ ∼∃x∼Fx             (R2a), (4)
(6)   ∃xFx ⇒ ∀xFx               (5)

Finally, let us illustrate the use of the identity rules.

14.12 Example. Reflexivity of identity

(1)   c = c ⇒ c = c             (R0)
(2)   ⇒ c = c                   (R7), (1)
(3)   ∼c = c ⇒                  (R2b), (2)
(4)   ∃x ∼x = x ⇒               (R6), (3)
(5)   ⇒ ∼∃x ∼x = x              (R2a), (4)
(6)   ⇒ ∀x x = x                (5)
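One further illustration, not an example from the text: assuming (R8a) in the form given in Table 14-4 above, the symmetry of identity can be obtained in the same style.

(1)   d = d ⇒ d = d             (R0)
(2)   ⇒ d = d                   (R7), (1)
(3)   c = d ⇒ d = c             (R8a), (2)

Here, at step (3), A(x) is taken to be d = x, s to be c, and t to be d, so that A(t) is d = d and A(s) is d = c.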