
No 0–1 variable is necessary at the top level since we have a predicate of the form (3.135). Therefore the full condition (3.144) is modelled by (3.150) to (3.153) together with (3.156) to (3.159) and (3.161) and (3.163).

3.6 References and Further Work

Williams [117] presents the relationship between connectives in the propositional calculus and IP constraints. This is also covered in Williams [112, 115, 119]. Other authors have also written on the subject, such as Barth [12], Hürlimann [58], Dietricht et al. [33], Chandru and Hooker [22], Hooker [54, 56] and Wilson [121, 122].

Balas [5, 7, 9] was the first to develop the theory of disjunctive programming and the resultant cutting planes in [6]. Sherali and Shetty [100] also cover the subject.

The theory of MIP representability was developed by Meyer [82] and Jeroslow [62, 63]. Jeroslow also (stemming from the work of Balas) produced the idea of the splitting of variables to lead to convex formulations (under the slightly misleading title ‘convex formulations’). Jeroslow and Lowe [64] describe modelling techniques based on these methods. Martin [78] obtains a number of IP reformulations by means of new variables.

Improved (tighter) IP formulations, by means of splitting variables, exist for a number of applications. Sometimes it is possible to (fictitiously) introduce ‘multiple commodities’ into a problem where there is only one commodity to increase tightness. Wolsey [126] does this for the lot-sizing problem and Wong [127] and Claus [25] for the network flow formulation of the travelling salesman problem.

Williams [116] gives the duality-based proof to show that the convex formulation is always sharp. Minimax algebra was developed by Carré [21] and Cuninghame-Green [28].

Modelling languages have been developed by a number of authors such as Fourer et al. [37] for the AMPL system and Day and Williams [32]. McKinnon and Williams [80] developed the modelling system based on the recursive use of the ‘greater-than-or-equal-to’ predicate. They also show how to avoid introducing a component for every variable in every disjunction. Hadjiconstantinou et al. [50] present another way of systematically converting a logical expression into an IP model. Greenberg and Murphy [48] compare different modelling systems.

3.7 Exercises

3.7.1 Model the disjunction of r constraints using log2 r 0–1 variables as described in Sect. 3.1. Is the LP relaxation weaker than that for the formulation using r constraints?


3.7.2 Show that if the open facets of a disjunction of polyhedra contain at least as many extreme rays as the number of variables then they represent parallel facets.

3.7.3 Give an example of a set of polyhedra with the same recession directions whose open facets are not parallel. Give a MIP representation of the disjunction of these polyhedra without splitting the variables.

3.7.4 Show that statement (3.54) is equivalent to (3.55).

3.7.5 Reformulate Example 3.5 in CNF and solve the resultant IP.

3.7.6 Formulate the dual of Example 3.5.

3.7.7 Show that the dual of the dual of a disjunctive programme is the primal.

3.7.8 Convert the predicates in Sect. 3.5 into ‘ge’ form.

3.7.9 Solve Example 3.2 with objective (3.20) using the method described in Example 2.3.

3.7.10 Solve Example 3.5 using the method described in Example 2.3.

Chapter 4

The Satisfiability Problem and Its Extensions

Given a statement in logic, can it ever be true? This is the satisfiability problem. We will confine our attention to the propositional calculus. However, the problem also arises in the predicate calculus, where it is necessary to consider whether there are instantiations of the variables which make a statement true. The problem is equivalent to the inference and consistency problems mentioned in Chapter 1.

Suppose we want to decide if

P ⊨ Q

(4.1)

i.e. can we infer Q from P, where P and Q are logical statements or sets of logical statements. Equation (4.1) is equivalent to the logical statement

P −→ Q

(4.2)

(which could also be written as P̄ ∨ Q).

We simply need to test if (4.2) is always true, i.e. if its negation can never be true.

The negation can be represented by

 

P · Q̄

(4.3)

using De Morgan’s laws.

Hence, is (4.3) ever true, i.e. is it satisfiable? If it is, then the inference (4.1) does not hold; otherwise it does hold.

Conversely, if we have a satisfiability problem of testing whether a statement, or set of statements, P is satisfiable, we can (trivially) convert it to the inference problem

P ⊨ F

(4.4)

If this inference is not true then P is sometimes true, i.e. it is satisfiable.
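To make the equivalence concrete, the following is a minimal sketch (not taken from the book) of checking an inference by testing the satisfiability of P · Q̄ through straightforward enumeration of truth assignments. The helper names is_satisfiable and infers, and the example statements, are illustrative assumptions introduced here.

```python
# A hedged sketch of the inference/satisfiability equivalence (4.1)-(4.4):
# P |= Q holds exactly when P together with not-Q can never be true.
from itertools import product

def is_satisfiable(formula, variables):
    """Return a satisfying assignment of `formula` (a Boolean-valued
    function of the named variables), or None if no assignment works."""
    for values in product([True, False], repeat=len(variables)):
        assignment = dict(zip(variables, values))
        if formula(**assignment):
            return assignment
    return None

def infers(P, Q, variables):
    """Check P |= Q by testing that P . (not Q) is unsatisfiable."""
    return is_satisfiable(lambda **v: P(**v) and not Q(**v), variables) is None

# Example (illustrative): from X1 and (X1 -> X2) we may infer X2.
P = lambda X1, X2: X1 and ((not X1) or X2)
Q = lambda X1, X2: X2
print(infers(P, Q, ["X1", "X2"]))  # True: the inference holds
```

The consistency question discussed below reduces to the same routine applied to the conjunction of the statements.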

 

If we wish to check if a set of statements

 

P1, P2, P3, . . . , Pn

(4.5)


is consistent then we simply need to test if the statement

 

 

P1 · P2 · P3 · · · Pn

(4.6)

is satisfiable.

Conversely, saying P is satisfiable is the same as saying it is consistent. Satisfiability problems with a small number of literals could be solved by the use of truth tables as follows.

Example 4.1 Is the following statement satisfiable?

(X1 −→ (X2 ∨ X3)) −→ (X̄1 · X̄3)

(4.7)

We can successively fill in the columns of the following truth table (Table 4.1).

Table 4.1 A Truth Table for Satisfiability

X1  X2  X3 | X2 ∨ X3 | X1 −→ (X2 ∨ X3) | X̄1 · X̄3 | (X1 −→ (X2 ∨ X3)) −→ (X̄1 · X̄3)
T   T   T  |    T    |        T        |    F     |    F
T   T   F  |    T    |        T        |    F     |    F
T   F   T  |    T    |        T        |    F     |    F
T   F   F  |    F    |        F        |    F     |    T
F   T   T  |    T    |        T        |    F     |    F
F   T   F  |    T    |        T        |    T     |    T
F   F   T  |    T    |        T        |    F     |    F
F   F   F  |    F    |        T        |    T     |    T

This demonstrates that the statement is satisfiable. It is satisfiable for the three truth table settings

X1 = T, X2 = X3 = F
X1 = F, X2 = T, X3 = F
X1 = X2 = X3 = F

It can be seen that the number of rows in such a truth table will be 2^n, where n is the number of literals. 2^n grows exponentially, rendering such an approach impractical for other than small problems. We present practical alternatives (although, in the worst case, satisfiability problems are of exponential complexity, as discussed in Section 2.4).
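For completeness, the exhaustive approach can be written in a few lines; this is a minimal sketch (not from the book), with illustrative function names, that enumerates the 2^n assignments for statement (4.7) and recovers the three satisfying settings listed above.

```python
# Truth-table enumeration for statement (4.7):
# (X1 -> (X2 v X3)) -> (not X1 . not X3)
from itertools import product

def implies(a, b):
    return (not a) or b

def statement_4_7(X1, X2, X3):
    return implies(implies(X1, X2 or X3), (not X1) and (not X3))

satisfying = [row for row in product([True, False], repeat=3)
              if statement_4_7(*row)]
print(satisfying)
# [(True, False, False), (False, True, False), (False, False, False)]
```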

4.1 Resolution and Absorption

In order to carry out a more efficient procedure we write a logical statement in conjunctive normal form (CNF). Therefore we consider statements written as a conjunction of disjunctive clauses. We will use the following, larger, example.


Example 4.2 Is the following statement satisfiable?

[Statement (4.8)–(4.16): a conjunction of nine disjunctive clauses in the literals X1, X2, X3, X4 and their negations. The individual clauses are garbled in this copy and are not reproduced here.]

i.e. we have a conjunction of the (disjunctive) clauses which are written on each line. If it turns out that a conjunction of clauses, such as that above, is not satisfiable then there is also interest in finding the maximum number of clauses which are, together, satisfiable. This is known as the maximum satisfiability problem. We will show how to solve it as an IP model in Sect. 4.5.

Before solving Example 4.2, in Sect. 4.2, we define the two procedures needed.

Resolution consists of choosing two clauses which have exactly one literal that is negated in one clause and unnegated in the other (they may or may not have other literals in common, but if so they must have the same sign).

An example is provided by clauses (4.9) and (4.12):

X2 ∨ X3 ∨ X4 and X1 ∨ X3 ∨ X̄4

(4.17)

Two such clauses can be combined by forming their disjunction and deleting the common literal (in this case X4) which is negated in one and unnegated in the other. The resultant clause is referred to as their resolvent. In the case of (4.17) we obtain the resolvent

X1 ∨ X2 ∨ X3

(4.18)

Clearly the resolvent is implied by the original two clauses since the eliminated common literal (X4) is either true or false. If it is true the remaining part of one of the clauses is true and if it is false the remaining part of the other clause is true. Hence either way the disjunction of them is true giving the resolvent, i.e. the resolvent is an implication of the original statement.
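Written out as a small derivation (with A and B standing for the parts of the two clauses other than the clashing literal, names introduced here purely for illustration), the case argument is:

```latex
% Soundness of resolution, by cases on the clashing literal X.
% A and B denote the remaining parts of the two clauses.
\text{If } X = T:\quad (B \vee \bar{X}) \text{ forces } B,\ \text{so } A \vee B \text{ holds.}\\
\text{If } X = F:\quad (A \vee X) \text{ forces } A,\ \text{so } A \vee B \text{ holds.}\\
\text{Hence } (A \vee X)\cdot(B \vee \bar{X}) \;\models\; A \vee B.
```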

Absorption involves looking for clauses that contain all the literals of another clause, where the common literals have the same sign. The smaller clause is said to absorb the larger. For example, (4.16) absorbs (4.8). Clearly the smaller clause implies the larger, which may therefore be removed (obviously, if there are duplicate clauses, either one can be removed).


To simplify a logical statement in CNF we repeatedly apply absorption and resolution, appending the resolvent to the set of clauses and removing absorbed clauses, until no further simplification is possible. The result is a ‘simpler’ but equivalent statement.
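The procedure just described can be sketched in a few lines of code. This is a hedged illustration rather than the book's own algorithm statement: clauses are represented as sets of (variable, sign) pairs, and the function names resolve and prime_implications are choices made here.

```python
# Iterated resolution and absorption on a CNF clause set.
# A clause is a frozenset of literals; a literal is (variable, sign),
# e.g. ("X4", False) stands for the negation of X4.

def resolve(c1, c2):
    """Return the resolvent of two clauses, or None if they do not clash
    on exactly one literal (the condition for resolution given above)."""
    clashes = [(v, s) for (v, s) in c1 if (v, not s) in c2]
    if len(clashes) != 1:
        return None
    v, s = clashes[0]
    return (c1 - {(v, s)}) | (c2 - {(v, not s)})

def prime_implications(clauses):
    """Repeatedly add resolvents and delete absorbed clauses until no
    change; what remains is the complete sum of prime implications."""
    clauses = {frozenset(c) for c in clauses}
    while True:
        resolvents = {r for a in clauses for b in clauses
                      if a != b and (r := resolve(a, b)) is not None}
        new = clauses | resolvents
        # absorption: drop any clause that properly contains another clause
        kept = {c for c in new if not any(d < c for d in new)}
        if kept == clauses:
            return clauses
        clauses = kept

# Tiny illustration (not Example 4.2): (X1 v X2) . (~X2 v X3)
cls = [{("X1", True), ("X2", True)}, {("X2", False), ("X3", True)}]
print(prime_implications(cls))
# three prime implications: X1 v X2, ~X2 v X3 and the resolvent X1 v X3
```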

Applying these procedures to the example we finally obtain the conjunction

X2 · X4 · (X1 ∨ X3) · (X̄1 ∨ X̄3)

(4.19)

The individual clauses obtained (in this example X2, X4, X1 ∨ X3, X̄1 ∨ X̄3) are known as prime implications. They are the smallest (i.e. least number of literals) disjunctive clauses that are implied by the original statement.

This procedure delivers us the complete sum of prime implications, i.e. every possible minimal clausal prime implication. All other clausal implications would be absorbed by a prime implication (and therefore not be prime).

In order to show this suppose we were to have a disjunctive clause C which is implied by the original statement and is the longest clause not absorbed by any of the prime implications. We can, without loss of generality, assume that all the literals in C are unnegated (by, if necessary, replacing the negated literals throughout by unnegated ones). Therefore let

C = X1 ∨ X2 ∨ · · · ∨ Xr

(4.20)

In order for C to be false, X1 = X2 = · · · = Xr = F. Therefore C cannot contain all the literals in the original statement, as the only way for the original statement to be false would be for all the literals in the prime implications to be unnegated. This would cause them each to absorb C. Therefore C must not contain some variable, say Xi. In this case, since C is the longest clause not absorbed, Xi ∨ C and X̄i ∨ C, which are both implied, must be absorbed by prime implications. The prime implication which absorbs Xi ∨ C must contain Xi, otherwise it would absorb C. The prime implication which absorbs X̄i ∨ C must contain X̄i for a similar reason. The resolvent of these two implications would be a prime implication which absorbs C, contradicting the definition of C.

The complete sum of prime implications is not, however, necessarily the simplest equivalent statement. There might be a proper subset of the prime implications, the conjunction of which is still an equivalent statement. The finding of the minimal equivalent set of prime implications is, however, another (difficult) problem. We address this problem in Sect. 4.6. For the small example above, the four derived prime implications also form the minimal set. However, we will now use resolution and absorption, together with a branching procedure, for the more modest aim of determining if a statement in CNF is satisfiable.