
298 DESIGN AND ANALYSIS OF PROPOSITIONAL-LOGIC RULE-BASED SYSTEMS

of the n rules should be fired and can check in polynomial time whether t1 + t2 + · · · + tn ≤ T and R ≥ C. This time-budgeting problem can be shown to be NP-complete by an easy reduction from the NP-complete knapsack problem. The knapsack problem consists of a finite set U, a size s(u) and a value v(u) for each u ∈ U, a size constraint T, and a value objective C. All values s(u), v(u), T, and C are positive integers. The issue is to determine whether a subset U1 ⊆ U exists such that the sum of the sizes s(u) over u ∈ U1 is at most T and the sum of the values v(u) over u ∈ U1 is at least C. To transform the knapsack problem into the time-budgeting problem, let each item ui ∈ U correspond to a unique rule i such that

qi(t̄) = { v(ui)   if ti ≥ s(ui)
        { 0       if ti < s(ui)

Obviously, the knapsack problem has a solution iff it is possible to schedule a subset of the rules to fire a total of at most T times so that R ≥ C.

The time-budgeting problem captures the property of an important class of real-time applications in which the precision and/or certainty of a computational result can be traded for computation time. Solution methods for this problem are therefore of practical interest. For the case in which the total reward is the sum of the value functions of the subsystems, the problem can be solved by a well-known pseudo-polynomial-time algorithm based on the dynamic programming solution to the knapsack problem. Since this computation is done off-line, computation time is usually not critical. However, if the total reward is a more complex function than the sum, the dynamic programming approach may not apply. We shall propose another approach that is suboptimal but can handle complex total reward functions. The idea is to use a continuous function to interpolate and bound each reward function and then apply the method of Lagrange multipliers to maximize the total reward, subject to the given timing constraint. This approach will be explored in the next section.
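As an illustration of the sum-of-rewards case, the standard knapsack-style dynamic program can be adapted to allocate a firing budget across rules. The following is a minimal sketch, not from the text; the function name and the encoding of reward tables as lists are our own assumptions.

```python
def budget_rewards(rewards, T):
    """Pseudo-polynomial-time DP for the time-budgeting problem when the
    total reward is the sum of the per-rule rewards.

    rewards[i][t] = total reward for firing rule i exactly t times
                    (with rewards[i][0] == 0); T = total firing budget.
    Returns (maximum total reward, firing counts t_1..t_n).
    """
    n = len(rewards)
    NEG = float("-inf")
    # dp[i][b] = best reward using rules 1..i and at most b firings
    dp = [[0.0] * (T + 1)] + [[NEG] * (T + 1) for _ in range(n)]
    choice = [[0] * (T + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        q = rewards[i - 1]
        for b in range(T + 1):
            for t in range(min(b, len(q) - 1) + 1):
                cand = dp[i - 1][b - t] + q[t]
                if cand > dp[i][b]:
                    dp[i][b], choice[i][b] = cand, t
    counts, b = [0] * n, T
    for i in range(n, 0, -1):          # recover the firing counts
        counts[i - 1] = choice[i][b]
        b -= counts[i - 1]
    return dp[n][T], counts
```

With the two reward tables of Example 8 below and T = 10, this dynamic program attains the maximum total reward of 19 reported there.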

10.8.2 The Method of Lagrange Multipliers for Solving the Time-Budgeting Problem

Given that the reward for firing the ith rule ti times is qi(ti), and T is the maximum number of iterations allowed, the time-budgeting problem can be formulated as a combinatorial optimization problem whose objective is to maximize R subject to the constraint t1 + · · · + tn − T = 0. For the above program, R(t̄) = q1(t1) + · · · + qn(tn).

Other than the requirement that the ti's must be integral, this problem is in a form that can be solved by the method of Lagrange multipliers. To maximize (or minimize) a reward function f(t̄) subject to the side condition g(t̄) = 0 (the response-time constraint in our case), we solve for t̄ in ∇H(t̄, λ) = 0, where λ is the Lagrange multiplier and

H(t̄, λ) = f(t̄) − λ · g(t̄).

Example 8. Consider the following EQL program, which is an instance of the time-budgeting problem with two rules.


initially: R = 0, t1 = t2 = 0
input: read(C)

1. R, t1 := R + q1(t̄), t1 + 1 IF R < C
2. [] R, t2 := R + q2(t̄), t2 + 1 IF R < C

Let T = 10. The reward functions, q1 and q2, for these two rules are given below.

Discrete reward function q1

t1 | 1 | 2 | 3 | 4 | 5 | 6 | 7  | 8  | 9  | 10
q1 | 4 | 5 | 7 | 8 | 9 | 9 | 10 | 11 | 12 | 12

Discrete reward function q2

t2 | 1 | 2 | 3 | 4 | 5  | 6  | 7  | 8  | 9  | 10
q2 | 6 | 8 | 9 | 9 | 10 | 10 | 10 | 10 | 10 | 10

The Lagrange multipliers method can be applied as follows. First, we interpolate and bound the two sets of data points with two continuous and differentiable functions f1 and f2, obtaining f1(t1) = 4 · t1^(1/2) and f2(t2) = 10 · (1 − e^(−t2)). The graph below (Figure 10.7) shows the plots of the two discrete reward functions and their

[Plot: "Maximizing the Total Quality" — q1(t1) and its interpolant f1(t1) = 4 · t1^(1/2) (dotted), q2(t2) and its interpolant f2(t2) = 10 · (1 − e^(−t2)) (solid); quality q from 0 to 14 versus t from 0 to 10.]

Figure 10.7 Continuous functions f1 and f2 approximating the discrete functions q1 and q2.


respective approximate continuous functions. The discrete reward function q1 and its corresponding approximate function f1 are plotted in dotted lines. The discrete reward function q2 and its corresponding approximate function f2 are plotted in solid lines.

The side constraint of this problem is t1 + t2 = T = 10. Both t1 and t2 must be non-negative because a rule cannot fire a negative number of times. We have:

H(t1, t2, λ) = f(t̄) − λ · g(t̄)
             = f1(t1) + f2(t2) − λ · (t1 + t2 − T)
             = 4 · t1^(1/2) + 10 · (1 − e^(−t2)) − λ · (t1 + t2 − 10).

Differentiating H (t1, t2, λ) with respect to t1, t2, and λ, and then setting each derivative equal to 0, we obtain the following three equations:

∂H/∂t1 = 2 · t1^(−1/2) − λ = 0,          (1)

∂H/∂t2 = 10 · e^(−t2) − λ = 0,           (2)

∂H/∂λ = −(t1 + t2) + 10 = 0.             (3)

Combining the first two equations, we obtain two equations in two unknowns. Solving for t1 and t2, we get

2 · t1^(−1/2) − 10 · e^(−t2) = 0
t1 + t2 = 10.

The values for t1 and t2 are 7.391 and 2.609, respectively. Because these optimal values are not integral, we first truncate to obtain t1 = 7 and t2 = 2. We are then left with one extra time unit, which can be used to fire a rule once. We allocate this extra time unit to the rule that adds the largest marginal reward to R; ties are broken arbitrarily. In our example, the marginal reward for firing rule 1 or rule 2 is 1 in either case. We select rule 2 to fire one more time, obtaining a total reward of 19, with t1 = 7 and t2 = 3. For programs with more rules, an integral solution is obtained by truncating the Lagrange-multiplier solution and using a greedy algorithm to select rules to fire so as to maximize the marginal reward. In this example, this also turns out to be the optimal solution to the integer optimization problem.
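The computation above can be carried out numerically as follows. This is a stdlib-only sketch: we use bisection in place of a general root finder, and our greedy tie-break happens to pick rule 1 rather than rule 2, which yields the same total reward of 19.

```python
import math

def solve_continuous(T=10.0):
    """Solve the stationarity conditions 2*t1**(-0.5) = 10*exp(-t2)
    with t1 + t2 = T by bisection on t1 (the left side decreases and
    the right side increases in t1, so the root is unique)."""
    def g(t1):
        return 2.0 / math.sqrt(t1) - 10.0 * math.exp(-(T - t1))
    lo, hi = 1e-9, T - 1e-9
    for _ in range(200):
        mid = (lo + hi) / 2.0
        if g(mid) > 0.0:
            lo = mid
        else:
            hi = mid
    return lo, T - lo            # (t1, t2) ~ (7.391, 2.609)

def integer_allocation(tables, T=10):
    """Truncate the continuous optimum, then greedily give each leftover
    time unit to the rule with the largest marginal reward."""
    t1, t2 = solve_continuous(float(T))
    counts = [int(t1), int(t2)]
    while sum(counts) < T:
        gains = [tables[i][counts[i] + 1] - tables[i][counts[i]]
                 for i in range(len(tables))]
        counts[gains.index(max(gains))] += 1
    return counts, sum(tables[i][counts[i]] for i in range(len(tables)))
```

For the example's reward tables this returns firing counts summing to 10 and the total reward 19, matching the hand computation (assuming, as here, that the budget T never exceeds the length of each reward table).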

It should be noted that it is unclear whether the quality of the solutions obtained by the Lagrange multiplier approach is in general better than that of a greedy algorithm for solving the knapsack problem. However, this approach can handle more general reward functions, and more importantly, it lends itself to parameterizing the solution with respect to the response-time constraint T and the reward objective C. For example, we may use a quadratic B-spline interpolation algorithm to interpolate


and bound each set of discrete reward values to obtain n quadratic functions. After taking the partial derivatives, as required by the Lagrange multiplier method, we have n + 1 linear equations. Given the values of T and C at run time, these equations can be efficiently solved, for example, by the Gaussian elimination algorithm. The use of a continuous function to bound the rewards also gives us a better handle on guaranteeing that an equational rule-based program can meet some minimum performance index in bounded time than ad hoc greedy algorithms, which must be analyzed for individual reward functions. Such guarantees are of great importance for safety-critical applications.
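To make the parameterized variant concrete, consider our own simplifying assumption of quadratic interpolants fi(t) = ai·t^2 + bi·t + ci with ai < 0. The stationarity conditions 2·ai·ti − λ = −bi together with the budget constraint t1 + · · · + tn = T are exactly the n + 1 linear equations mentioned above, and a sketch of solving them by Gaussian elimination looks like this:

```python
def gauss_solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]   # augmented matrix
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):                 # back substitution
        x[r] = (M[r][n] - sum(M[r][c] * x[c]
                              for c in range(r + 1, n))) / M[r][r]
    return x

def budget_quadratic(coeffs, T):
    """coeffs[i] = (ai, bi) of fi(t) = ai*t**2 + bi*t + ci (ci drops out
    when differentiating; ai < 0 for a maximum).
    Unknowns: t1..tn, lambda.  Rows: 2*ai*ti - lambda = -bi; sum ti = T."""
    n = len(coeffs)
    A = [[0.0] * (n + 1) for _ in range(n + 1)]
    rhs = [0.0] * (n + 1)
    for i, (a, b) in enumerate(coeffs):
        A[i][i] = 2.0 * a
        A[i][n] = -1.0
        rhs[i] = -b
    A[n][:n] = [1.0] * n                           # budget constraint row
    rhs[n] = float(T)
    sol = gauss_solve(A, rhs)
    return sol[:n], sol[n]                         # budgets and multiplier
```

Because the system is linear, it can be re-solved cheaply whenever T changes at run time, which is the point of the parameterization.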

10.9 SPECIFYING TERMINATION CONDITIONS IN ESTELLA

So far we have introduced the basic features of real-time expert systems and a framework for analysis. Now we describe a comprehensive analysis approach and a language for specifying termination conditions of rule-based systems. We have seen that determining how fast an expert system can respond under all possible situations is a difficult and, in general, an undecidable problem [Browne, Cheng, and Mok, 1988].

The focus here is on determining whether a rule-based EQL program has bounded response time. The verification of whether a rule-based program satisfies the specification, that is, checking logical correctness, has been studied extensively by non-real-time system researchers and developers. Earlier, we described an efficient analysis methodology for analyzing a large class of rule-based EQL programs to determine whether a program in this class has bounded response time. In particular, we identified several sets of primitive behavioral constraint assertions called “special forms” of rules with the following property: an EQL program that satisfies all constraints in one of these sets of constraint assertions is guaranteed to have bounded response time. Once a rule set is found to have bounded response time, efficient algorithms reported in [Cheng, 1992b] can be used to compute tight response-time bounds for this rule set.

Since the verification of these constraint assertions is based on static analysis of the EQL rules and does not require checking the state-space graph corresponding to all execution sequences of these rules, our analysis methodology makes the analysis of programs with a large number of rules and variables feasible. A suite of computer-aided software engineering tools based on this analysis approach has been implemented and has been used successfully to analyze several real-time expert systems developed by Mitre and NASA for the Space Shuttle and the planned Space Station.

Unlike the design and analysis of non-time-critical systems and software, the design and analysis of real-time systems and software often require specialized knowledge about the application under consideration. General techniques applicable to all, or even to a large class of, real-time systems and software either incur a large performance penalty or work only for very small systems. Here, we enhance the applicability of our analysis technique by introducing a new facility with
