
74 REAL-TIME SCHEDULING AND SCHEDULABILITY ANALYSIS

and run on a Windows 95 machine. The evaluation copy does not provide a “save” capability, so the X-38 task system is not modeled; instead, we analyze example programs and a complete tutorial to assess the capabilities of this tool. We draw analogies between the example projects provided with the tool and the X-38 to analyze how the tool might represent the X-38 task structure. The tool provides a tabular, rather than graphical, interface for specifying the system, so the system under study is more difficult to visualize. PerfoRMAx utilizes an event-action-resource model for task scheduling. Because tasks in the X-38 system are not sporadic, but rather cyclic and based mainly on a minor frame time, the event-action model used by this tool is less intuitive than the task-resource model provided by PERTS. To use this tool, the user must first define the events, actions, and resources in the system, along with attributes for each. Events are triggers that start a particular action (task).

In the case of the X-38, there are no sporadic events, only timed events to start task execution. In order to model the X-38 system, a triggering event for each task either needs to be artificially defined, or one event that sets off an entire list of tasks, based on a timer, needs to be defined. If the first method is chosen in which each task is triggered by a different event, the tool provides no intuitive way to capture dependencies between events/tasks to ensure deterministic task sequencing. In the latter case, if one event is used to set off a chain of actions, the tool limits chained actions to a number smaller than this project requires. Since the tool produces a timeline based on events only, a single event triggering a number of tasks would show up on the timeline as one item.

We need the ability to view each task and its start/end times on a timeline. Modeling the system as a set of one or more chained actions is a less intuitive representation for the project and appears to prohibit the per-action (per-task) timing analysis the system needs. The tool does not provide an easy representation of order dependencies across multiple resources, so the requirement for schedulability analysis of the system is not satisfied. Because a particular task execution order is desired in the X-38 and the tool accepts no phasing information as input, it is quite difficult to predetermine task start times and arrive at the desired schedule even if each task is represented as an event-action pair. A smaller set of scheduling algorithms and event-action specification attributes is provided, and task timing interrelationships are not as easily captured. Hardcopy output of the timeline display does not seem to be available beyond an entire screen capture. Robust online help and status messages are provided, and extensibility is advertised.

3.4.3 TimeWiz

We examine an evaluation copy of TimeWiz requested from TimeSys Corporation at:

http://www.timesys.com

and received and installed on a Windows 95 machine. The evaluation copy does not provide a “save” capability, so, as with the PerfoRMAx tool, the X-38 system is not modeled. Instead, we analyze example programs, a complete tutorial, and robust online help to assess the capabilities of this tool. In addition, a representative from TimeSys has visited the NASA Johnson Space Center, where the X-38 is being designed, and provided an informative consultation on how best to model the X-38 system.

Similar to PerfoRMAx, TimeWiz utilizes an event-action-resource model for task scheduling, which seems a less intuitive model for this project. TimeWiz provides a more extensive and robust user interface and some graphical representation capabilities, but the majority of information is entered and viewed tabularly. A rich set of single-node scheduling methodologies and resource-sharing paradigms is provided, though only a simple user-defined priority scheduling algorithm is needed. Many object attributes are provided to record aspects of the system. Because, as in PerfoRMAx, the timeline display depicts events rather than actions versus time, each task must be artificially paired with a similarly named event to produce the required timeline display. Unlike PerfoRMAx, however, this tool has begun to provide capabilities to capture dependencies and/or precedence relations between events, called “user-defined internal events.” However, since these dependencies are not yet integrated with the scheduling engine, the requirement to provide schedulability analysis for this system is not met.

Without a way to model dependencies between actions on the same and different resources, the scheduler can devise a valid schedule yet still place the actions out of the desired sequence. Start times (phases) and priorities can be entered manually, so it is possible to generate the desired deterministic schedule. A system timeline showing event start/stop times on all resources in the system is not provided, but is required by the X-38 project. The timeline display is on a per-resource basis, does not contain deadline/start-time annotations, and shows events (triggers) rather than actions, and hardcopy capabilities for the timeline are currently limited. This product may be the most preliminary of those evaluated, but it is rapidly evolving to meet particular customer needs and seems extremely well supported. Other features of the tool include an Application Programming Interface (API) for extensibility and some integrated reporting capabilities.

3.5 AVAILABLE REAL-TIME OPERATING SYSTEMS

The goals of conventional, non-real-time operating systems are to provide a convenient interface between the user and the computer hardware while attempting to maximize average throughput, to minimize average waiting time for tasks, and to ensure the fair and correct sharing of resources. However, meeting task deadlines is not an essential objective in non-real-time operating systems, since their schedulers usually do not consider the deadlines of individual tasks when making scheduling decisions.

For real-time applications in which task deadlines must be satisfied, a real-time operating system (RTOS) with an appropriate scheduler for scheduling tasks with timing constraints must be used. Since the late 1980s, several experimental as well as commercial RTOSs have been developed, most of which are extensions and modifications of existing operating systems such as UNIX. Most current RTOSs conform
to the IEEE POSIX standard and its real-time extensions [Gallmeister and Lanier, 1991; Gallmeister, 1995; Posix]. Commercial RTOSs include LynxOS, RTMX O/S, VxWorks, and pSOSystem.

LynxOS is LynuxWorks’s hard RTOS based on the Linux operating system. It is scalable, Linux-compatible, and highly deterministic, and is available from

http://www.lynx.com/

LynuxWorks also offers BlueCat Linux, an open-source Linux for fast embedded system development and deployment.

RTMX O/S has support for X11 and Motif on M68K, MIPS, SPARC and PowerPC processors, and is available from

http://www.rtmx.com/

VxWorks and pSOSystem are Wind River’s RTOSs with a flexible, scalable, and reliable architecture, and are available for most CPU platforms. Details can be found in

http://www.windriver.com/products/html/os.html

3.6 HISTORICAL PERSPECTIVE AND RELATED WORK

Scheduling real-time tasks has been extensively studied. The fact that the deadline of every task must be satisfied distinguishes real-time scheduling from non-real-time scheduling, in which the entire task set is treated as an aggregate and an overall performance measure such as throughput is more important. One of the first fundamental works in the field of real-time scheduling is the seminal paper by Liu and Layland [Liu and Layland, 1973], which laid the groundwork for uniprocessor scheduling using the RM algorithm as well as deadline-based techniques.

Lehoczky, Sha, and Ding give an exact characterization of the RM scheduler and present the necessary and sufficient conditions for RM schedulability in [Lehoczky, Sha, and Ding, 1989]. Lehoczky then provides in [Lehoczky, 1990] an approach for RM scheduling of periodic tasks whose deadlines do not equal their corresponding periods. Mok [Mok, 1984] presents the deterministic rendezvous model and the kernelized monitor model for scheduling periodic tasks.

Lehoczky, Sha, and Strosnider present the deferred server (DS) algorithm [Lehoczky, Sha, and Strosnider, 1987] for scheduling sporadic tasks in a system with periodic tasks. For the special case in which the DS has the shortest period among all tasks (so the DS has the highest priority), a schedulable utilization is presented in [Lehoczky, Sha, and Strosnider, 1987; Strosnider, Lehoczky, and Sha, 1995]. An approach for checking for EDF schedulability is given in [Baruah, Mok, and Rosier, 1990]. Xu and Parnas [Xu and Parnas, 1990] describe scheduling algorithms for tasks with precedence constraints. Sha et al. [Sha, Rajkumar, and Lehoczky, 1990] present a framework for solving the priority inversion problem.

Lin et al. [Lin, Liu, and Natarajan, 1987] propose the concept of imprecise computations, which allows a task to trade the quality of its output against the processor time allocated to it. Wang and Cheng [Wang and Cheng, 2002] recently introduced a new schedulability test and compensation strategy for imprecise computation. Applications of this imprecise-computation approach include the work on video transmission by Huang and Cheng [Huang and Cheng, 1995] and by Cheng and Rao [Cheng and Rao, 2002], and the work on TIFF image transmission in ATM networks by Wong and Cheng [Wong and Cheng, 1997].

Several general textbooks on real-time scheduling are available. Liu’s book [Liu, 2000] describes many scheduling algorithms and presents real-time communication and operating systems issues. Krishna and Shin’s book [Krishna and Shin, 1997] also discusses scheduling but is shorter and more tutorial. It also covers real-time programming languages, databases, fault tolerance, and reliability issues. Burns and Wellings’ book [Burns and Wellings, 1990; Burns and Wellings, 1996] focuses on programming languages for real-time systems. Shaw’s book [Shaw, 2001] describes real-time software design, operating systems, programming languages, and techniques for predicting execution time.

3.7 SUMMARY

Scheduling a set of tasks (processes) means determining when to execute which task, thus fixing the execution order of these tasks and, in the case of a multiprocessor or distributed system, also determining an assignment of these tasks to specific processors. This task assignment is analogous to assigning work items to specific people in a team. Scheduling is a central activity of a computer system, usually performed by the operating system. Scheduling is also necessary in many non-computer systems such as assembly lines.

In non-real-time systems, the typical goal of scheduling is to maximize average throughput (number of tasks completed per unit time) and/or to minimize average waiting time of the tasks. In the case of real-time scheduling, the goal is to meet the deadline of every task by ensuring that each task can be completed by its specified deadline. This deadline is derived from environmental constraints imposed by the application.

Schedulability analysis determines whether a specific set of tasks, or a set of tasks satisfying certain constraints, can be successfully scheduled (that is, every task completes execution by its specified deadline). A schedulability test is used to validate that a given application can satisfy its specified deadlines when scheduled according to a specific scheduling algorithm. This schedulability test is often done at compile time, before the computer system and its tasks begin execution. If the test can be performed efficiently, it can instead be done at run time as an on-line test. A schedulable utilization is the maximum utilization allowed for a set of tasks that still guarantees a feasible scheduling of this task set.

A hard real-time system requires that every task or task instance completes its execution by its specified deadline, and failure to do so even for a single task or task
instance may lead to catastrophic consequences. A soft real-time system allows some tasks or task instances to miss their deadlines, but a task or task instance that misses a deadline may yield a lower-quality output. Basically two types of schedulers exist: compile-time (static) and run-time (on-line or dynamic).

Optimal scheduler: An optimal scheduler is one that may fail to meet a deadline of a task only if no other scheduler can meet it.

Note that optimal in real-time scheduling does not necessarily mean fastest average response time or smallest average waiting time. A task Ti is characterized by the following parameters:

S: start, release, or arrival time,

c: (maximum) computation time,

d: relative deadline (deadline relative to the task’s start time),

D: absolute deadline (wall-clock deadline).

Three main types of tasks exist. A single-instance task executes only once. A periodic task has many instances or iterations, with a fixed period between two consecutive releases of the same task. A sporadic task has zero or more instances, with a minimum separation between two consecutive releases of the same task. An aperiodic task is a sporadic task with either a soft deadline or no deadline. If a task has more than one instance (sometimes called a job), we also have the following parameter:

p: period (for periodic tasks); minimum separation (for sporadic tasks).
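The task parameters above map naturally onto a small record type. The following sketch (the class and field names are our own, not from any particular RTOS API) also shows how the absolute deadline D is derived from S and d:

```python
from dataclasses import dataclass

@dataclass
class Task:
    """A task characterized by the parameters defined above."""
    s: int      # S: start (release, arrival) time
    c: int      # c: (maximum) computation time
    d: int      # d: relative deadline
    p: int = 0  # p: period / minimum separation; 0 for a single-instance task

    @property
    def absolute_deadline(self) -> int:
        # D = S + d: the wall-clock deadline of the (first) instance
        return self.s + self.d

t = Task(s=4, c=2, d=5, p=10)
print(t.absolute_deadline)  # 9
```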

The following are additional constraints that may complicate scheduling:

1. Frequency of tasks requesting service periodically,

2. Precedence relations among tasks and subtasks,

3. Resources shared by tasks, and

4. Whether task preemption is allowed.

If tasks are preemptable, we assume that a task can be interrupted only at discrete (integer) time instants unless otherwise indicated.

The application and the environment in which the application is embedded are main factors determining the start (release) time, deadline, and period of a task. The computation (or execution) times of a task are dependent on its source code, object code, execution architecture, memory management policies, and actual number of page faults and I/O.

For real-time scheduling purposes, we use the worst-case execution (or computation) time (WCET) as c. This time is not simply an upper bound on the execution of the task code without interruption: it must also include the time the CPU spends executing non-task code on the task’s behalf, such as code for handling a page fault caused by this task, as well as the time an I/O request spends in the disk queue to bring in a missing page for this task.

Determining the computation time of a process is crucial to successfully scheduling it in a real-time system. An overly pessimistic estimate of the computation time would result in wasted CPU cycles, whereas an under-approximation could result in missed deadlines.

One way of approximating WCET is to perform testing and use the largest computation time seen during these tests. Another typical approach to determining a process’s computation time is analyzing the source code [Harmon, Baker, and Whalley, 1994; Park, 1992; Park, 1993; Park and Shaw, 1990; Shaw, 1989; Puschner and Koza, 1989; Nielsen, 1987; Chapman, Burns, and Wellings, 1996; Lundqvist and Stenström, 1999; Sun and Liu, 1996]. An alternative to the above methods is to use a probability model to model the WCET of a process as suggested by Burns and Edgar [Burns and Edgar, 2000]. The idea here is to model the distribution of the computation time and use it to compute a confidence level for any given computation time.
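As a toy illustration of the measurement-based approach, the sketch below times repeated runs of a function and inflates the largest observed value by a safety margin (the margin factor is our own choice, not a published recommendation). As the text notes, observed maxima can still underestimate the true WCET, which is why static analysis and probabilistic models exist:

```python
import time

def measured_wcet(func, runs=1000, margin=1.2):
    """Estimate WCET as the largest execution time observed over
    `runs` trials, inflated by `margin`.  An observed maximum can
    still underestimate the true WCET (caches, page faults), so
    this is a heuristic, not a safe bound."""
    worst = 0.0
    for _ in range(runs):
        start = time.perf_counter()
        func()
        worst = max(worst, time.perf_counter() - start)
    return worst * margin

estimate = measured_wcet(lambda: sum(range(1000)), runs=100)
print(f"estimated WCET: {estimate:.6f} s")
```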

A popular real-time scheduling strategy is the rate-monotonic (RM or RMS) scheduler, a fixed (static)-priority scheduler that uses each task’s (fixed) period to set its priority: the shorter the period, the higher the priority. At any time instant, RMS executes the ready task instance with the shortest period. If two or more tasks have the same period, RMS randomly selects one for execution next. The RM scheduling algorithm is not optimal in general, since schedulable task sets exist that are not RM-schedulable. However, there is a special class of periodic task sets for which the RM scheduler is optimal.

Schedulability Test 1: Given a set of n independent, preemptable, and periodic tasks on a uniprocessor such that their relative deadlines are equal to or larger than their respective periods and their periods are exact (integer) multiples of each other, let U be the total utilization of this task set. A necessary and sufficient condition for feasible scheduling of this task set is

U = \sum_{i=1}^{n} \frac{c_i}{p_i} \le 1.

Schedulability Test 2: Given a set of n independent, preemptable, and periodic tasks on a uniprocessor, let U be the total utilization of this task set. A sufficient condition for feasible scheduling of this task set is

U \le n (2^{1/n} - 1).
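Both utilization-based tests reduce to one-line computations over the (c_i, p_i) pairs. A minimal sketch (function names are ours):

```python
def utilization(tasks):
    """Total utilization U of a list of (c_i, p_i) pairs."""
    return sum(c / p for c, p in tasks)

def rm_harmonic_test(tasks):
    """Schedulability Test 1: exact when each period divides every
    longer period (harmonic periods)."""
    ps = sorted(p for _, p in tasks)
    if any(q % p != 0 for p, q in zip(ps, ps[1:])):
        raise ValueError("periods are not integer multiples of each other")
    return utilization(tasks) <= 1.0

def rm_utilization_bound_test(tasks):
    """Schedulability Test 2: sufficient (not necessary) condition
    U <= n(2^(1/n) - 1)."""
    n = len(tasks)
    return utilization(tasks) <= n * (2 ** (1.0 / n) - 1)

print(rm_harmonic_test([(1, 4), (1, 8), (2, 16)]))   # U = 0.5, schedulable
print(rm_utilization_bound_test([(1, 3), (2, 5)]))   # U ≈ 0.733 <= 0.828
```

Note that failing Test 2 does not mean the task set is unschedulable; it only means the sufficient bound gives no answer.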

Schedulability Test 3: Let

w_i(t) = \sum_{k=1}^{i} c_k \left\lceil \frac{t}{p_k} \right\rceil, \quad 0 < t \le p_i.

The inequality

w_i(t) \le t

holds for some time instant

t = k p_j, \quad j = 1, \ldots, i, \quad k = 1, \ldots, \left\lfloor \frac{p_i}{p_j} \right\rfloor,

iff task J_i is RM-schedulable. If d_i \ne p_i, we replace p_i by \min(d_i, p_i) in the above expression.
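Schedulability Test 3 only needs to be evaluated at the “scheduling points” t = k·p_j. A direct sketch (we assume integer periods with deadlines equal to periods; the function name is ours):

```python
import math

def rm_exact_test(tasks):
    """Schedulability Test 3 (exact).  tasks: list of (c_i, p_i) with
    deadlines equal to periods.  Task i is RM-schedulable iff
    w_i(t) <= t at some point t = k*p_j, j <= i, k <= floor(p_i/p_j)."""
    tasks = sorted(tasks, key=lambda t: t[1])   # RM priority: shortest period first
    for i in range(len(tasks)):
        p_i = tasks[i][1]
        points = {k * p_j
                  for _, p_j in tasks[:i + 1]
                  for k in range(1, p_i // p_j + 1)}
        def w(t):   # cumulative demand of tasks 1..i at time t
            return sum(c_k * math.ceil(t / p_k) for c_k, p_k in tasks[:i + 1])
        if not any(w(t) <= t for t in points):
            return False                        # task i misses its deadline
    return True

print(rm_exact_test([(1, 3), (1, 5), (1, 8)]))   # True
print(rm_exact_test([(2, 3), (2, 5)]))           # False: U > 1
```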

Another fixed-priority scheduler is the deadline-monotonic (DM) scheduling algorithm, which assigns higher priorities to tasks with shorter relative deadlines. If every task’s period is the same as its deadline, the RM and DM scheduling algorithms are equivalent. In general, the two algorithms are equivalent if every task’s deadline is the product of a single constant k and that task’s period, that is, d_i = k p_i.

An optimal, run-time scheduler is the earliest-deadline-first (EDF or ED) algorithm, which at every instant executes the ready task with the earliest (closest) absolute deadline. If more than one task has the same deadline, EDF randomly selects one for execution next. EDF is a dynamic-priority scheduler, since task priorities may change at run time depending on the nearness of the tasks’ absolute deadlines. Some authors [Krishna and Shin, 1997] call EDF a deadline-monotonic (DM) scheduling algorithm, whereas others [Liu, 2000] define the DM algorithm as a fixed-priority scheduler that assigns higher priorities to tasks with shorter relative deadlines.
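EDF can be illustrated with a small discrete-time simulation of periodic tasks (deadlines equal to periods, all tasks released at t = 0). This is a sketch for intuition, not an efficient schedulability test:

```python
import math

def edf_simulate(tasks):
    """Preemptive EDF over one hyperperiod for (c_i, p_i) tasks with
    deadline = period.  Returns True iff no instance misses its deadline."""
    horizon = math.lcm(*(p for _, p in tasks))
    jobs = []                                   # each job: [remaining, abs_deadline]
    for t in range(horizon):
        for c, p in tasks:
            if t % p == 0:
                jobs.append([c, t + p])         # release a new instance
        if any(rem > 0 and dl <= t for rem, dl in jobs):
            return False                        # a job reached its deadline unfinished
        jobs = [j for j in jobs if j[0] > 0]
        if jobs:
            min(jobs, key=lambda j: j[1])[0] -= 1   # run earliest absolute deadline
    return all(rem == 0 for rem, _ in jobs)

print(edf_simulate([(1, 3), (2, 5)]))   # U ≈ 0.73 <= 1: feasible
print(edf_simulate([(2, 3), (2, 4)]))   # U ≈ 1.17 > 1: a deadline is missed
```

`math.lcm` requires Python 3.9 or later.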

Another optimal, run-time scheduler is the least-laxity-first (LL) algorithm. Let c(i) denote the remaining computation time of a task at time i; at the arrival time of a task, c(i) is the task’s full computation time. Let d(i) denote the deadline of the task relative to the current time i. Then the laxity (or slack) of the task at time i is d(i) − c(i). Thus the laxity of a task is the maximum time the task can delay execution without missing its deadline. The LL scheduler executes at every instant the ready task with the smallest laxity. If more than one task has the same laxity, LL randomly selects one for execution next.
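Laxity-based selection is equally compact. In the sketch below the ready tasks are (remaining computation, relative deadline) pairs at the current instant (the tuple representation is ours):

```python
def laxity(remaining, relative_deadline):
    """Laxity (slack) at the current instant: d(i) - c(i)."""
    return relative_deadline - remaining

# Three ready tasks at the current instant: (remaining c(i), relative d(i))
ready = [(2, 9), (3, 5), (1, 4)]
chosen = min(ready, key=lambda task: laxity(*task))
print(chosen)   # (3, 5): its laxity 5 - 3 = 2 is the smallest
```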

For a uniprocessor, both ED and LL schedulers are optimal for preemptable tasks with no precedence, resource, or mutual exclusion constraints. A simple necessary and sufficient condition exists for scheduling a set of independent, preemptable periodic tasks [Liu and Layland, 1973].

Schedulability Test 4: Let c_i denote the computation time of task J_i. For a set of n periodic tasks such that the relative deadline d_i of each task is equal to or greater than its respective period p_i (d_i \ge p_i), a necessary and sufficient condition for feasible scheduling of this task set on a uniprocessor is that the utilization of the tasks is less than or equal to 1:

U = \sum_{i=1}^{n} \frac{c_i}{p_i} \le 1.

Schedulability Test 5: A sufficient condition for feasible scheduling of a set of independent, preemptable, and periodic tasks on a uniprocessor is

\sum_{i=1}^{n} \frac{c_i}{\min(d_i, p_i)} \le 1.

The term c_i / \min(d_i, p_i) is the density of task J_i.
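Test 5 is a one-line computation over (c_i, d_i, p_i) triples (function name ours):

```python
def density_test(tasks):
    """Schedulability Test 5 (sufficient): the total density
    sum(c_i / min(d_i, p_i)) must not exceed 1."""
    return sum(c / min(d, p) for c, d, p in tasks) <= 1.0

print(density_test([(1, 3, 4), (2, 6, 6)]))   # 1/3 + 1/3 = 2/3 <= 1: True
```

As with Test 2, failing this sufficient condition does not by itself prove the task set unschedulable.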

Schedulability Test 6: Given a set of n independent, preemptable, and periodic tasks on a uniprocessor, let U be the utilization as defined in Schedulability Test 4 (U = \sum_{i=1}^{n} c_i/p_i), let d_{max} be the maximum relative deadline among these tasks’ deadlines, let P be the least common multiple (LCM) of these tasks’ periods, and let s(t) be the sum of the computation times of the tasks with absolute deadlines less than t. This task set is not EDF-schedulable iff either of the following conditions is true:

U > 1

or

\exists t < \min\left( P + d_{max}, \; \frac{U}{1 - U} \max_{1 \le i \le n} (p_i - d_i) \right)

such that s(t) > t.
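Test 6 can be checked by evaluating the demand s(t) at the absolute deadlines below the stated bound. The sketch below assumes all tasks are released at t = 0 and counts an instance’s demand once its absolute deadline is reached (a common processor-demand formulation; the strictness of the inequality in s(t) varies between presentations):

```python
import math

def edf_infeasible(tasks):
    """Schedulability Test 6 sketch.  tasks: list of (c_i, d_i, p_i),
    all released at t = 0.  Returns True iff the task set is NOT
    EDF-schedulable."""
    U = sum(c / p for c, _, p in tasks)
    if U > 1:
        return True
    bound = math.lcm(*(p for _, _, p in tasks)) + max(d for _, d, _ in tasks)
    if U < 1:
        slack = max(p - d for _, d, p in tasks)
        if slack > 0:
            bound = min(bound, U / (1 - U) * slack)
    def s(t):   # demand of all instances with absolute deadline <= t
        return sum(c * ((t - d) // p + 1) for c, d, p in tasks if t >= d)
    # s(t) can first exceed t only at an absolute deadline d_i + k*p_i
    candidates = {d + k * p for _, d, p in tasks
                  for k in range(int(bound) // p + 1) if d + k * p <= bound}
    return any(s(t) > t for t in candidates)

print(edf_infeasible([(1, 2, 4), (2, 4, 6)]))   # False: schedulable
print(edf_infeasible([(3, 3, 4), (2, 4, 8)]))   # True: demand 5 by t = 4
```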

A fixed-priority scheduler assigns the same priority to all instances of the same task, thus the priority of each task is fixed with respect to other tasks. However, a dynamic-priority scheduler may assign different priorities to different instances of the same task, thus the priority of each task may change with respect to other tasks as new task instances arrive and complete.

In general, there is no optimal fixed-priority scheduling algorithm since given any fixed-priority algorithm, we can always find a schedulable task set that cannot be scheduled by this algorithm. On the other hand, both EDF and LL algorithms are optimal dynamic-priority schedulers.

Sporadic tasks may be released at any time instant but there is a minimum separation between releases of consecutive instances of the same sporadic task. To schedule preemptable sporadic tasks, we can transform the sporadic tasks into equivalent periodic tasks. This makes it possible to apply the scheduling strategies for periodic tasks introduced earlier.

A simple approach to schedule sporadic tasks is to treat them as periodic tasks with the minimum separation times as their periods. Then we schedule the periodic equivalents of these sporadic tasks using the scheduling algorithm described earlier.

Unlike periodic tasks, sporadic tasks are released irregularly or may not be released at all. Therefore, even though the scheduler (say the RM algorithm) allocates a time slice to the periodic equivalent of a sporadic task, this sporadic task may not actually be released. The processor remains idle during this time slice if this sporadic task does not request service. When this sporadic task does request service, it immediately runs if its release time is within its corresponding scheduled time slice. Otherwise, it waits for the next scheduled time slice for running its periodic equivalent.

The second approach to schedule sporadic tasks is to treat them as one periodic task Js with the highest priority and a period chosen to accommodate the minimum separations and computation requirements of this collection of sporadic tasks. Again, a scheduler is used to assign time slices on the processor to each task, including Js . Any sporadic task may run within the time slices assigned to Js , whereas the other (periodic) tasks run outside of these time slices.

The third approach, called deferred server (DS) [Lehoczky, Sha, and Strosnider, 1987], to schedule sporadic tasks, is the same as the second approach with the following modification. The periodic task corresponding to the collection of sporadic tasks is the DS. When there is no sporadic task waiting for service during a time slice assigned to sporadic tasks, the processor runs the other (periodic) tasks. If a sporadic task is released, then the processor preempts the currently running periodic tasks and runs the sporadic task for a time interval up to the total time slice assigned to sporadic tasks.

For a deferred server with an arbitrary priority in a system of tasks scheduled using the RM algorithm, no known schedulable utilization guarantees a feasible scheduling of this system. However, for the special case in which the DS has the shortest period among all tasks (so the DS has the highest priority), there is a schedulable utilization [Lehoczky, Sha, and Strosnider, 1987; Strosnider, Lehoczky, and Sha, 1995].

Schedulability Test 7: Let p_s and c_s be the period and allocated time of the deferred server, and let U_s = c_s / p_s be the utilization of the server. A set of n independent, preemptable, and periodic tasks (with relative deadlines equal to the corresponding periods) on a uniprocessor such that the periods satisfy p_s < p_1 < p_2 < \cdots < p_n < 2 p_s and p_n > p_s + c_s is RM-schedulable if the total utilization of this task set (including the DS) is at most

U(n) = U_s + (n - 1) \left[ \left( \frac{U_s + 2}{2 U_s + 1} \right)^{1/(n-1)} - 1 \right].
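One common statement of the deferred-server schedulable utilization, which we use below, is U(n) = U_s + (n−1)[((U_s+2)/(2U_s+1))^{1/(n−1)} − 1]; check the exact form against the cited papers before relying on it. The test itself is then a direct computation:

```python
def ds_rm_bound(n, u_s):
    """Schedulable-utilization bound for RM with a highest-priority
    deferred server of utilization u_s; n >= 2 tasks, server included
    in the total utilization being compared against the bound."""
    assert n >= 2
    return u_s + (n - 1) * (((u_s + 2) / (2 * u_s + 1)) ** (1 / (n - 1)) - 1)

# With a light server (U_s = 0.1) and n = 4, the bound is roughly 0.715
print(round(ds_rm_bound(4, 0.1), 3))
```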

Schedulability Test 8: Suppose we have a set M of tasks that is the union of a set M_p of periodic tasks and a set M_s of sporadic tasks. Let the nominal (or initial) laxity l_i of task T_i be d_i − c_i. Each sporadic task T_i = (c_i, d_i, p_i) is replaced by an equivalent periodic task T_i' = (c_i', d_i', p_i') as follows:

c_i' = c_i,
p_i' = \min(p_i, l_i + 1),
d_i' = c_i'.

If we can find a feasible schedule for the resulting set M' of periodic tasks (which includes the transformed sporadic tasks), then we can schedule the original set M of tasks without knowing in advance the start (release or request) times of the sporadic tasks in M_s.

In general, a sporadic task (c, d, p) can be transformed into and scheduled as a periodic task (c', d', p') if the following conditions hold: (1) d \ge d' \ge c, (2) c' = c, and (3) p' \le d - d' + 1.
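The transformation in Schedulability Test 8 is mechanical; a sketch (the tuple representation is ours):

```python
def sporadic_to_periodic(c, d, p):
    """Replace a sporadic task (c, d, p) by an equivalent periodic
    task (c', d', p') per Schedulability Test 8:
    c' = c, d' = c, p' = min(p, l + 1) with nominal laxity l = d - c."""
    l = d - c
    c2, d2, p2 = c, c, min(p, l + 1)
    # sanity: the general conditions (1) d >= d' >= c, (2) c' = c,
    # and (3) p' <= d - d' + 1 all hold by construction
    assert d >= d2 >= c and c2 == c and p2 <= d - d2 + 1
    return (c2, d2, p2)

print(sporadic_to_periodic(2, 8, 10))   # (2, 2, 7)
```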

A task precedence graph (also called a task graph or precedence graph) shows the required order of execution of a set of tasks. A node represents a task, and directed edges indicate the precedence relationships between tasks. The notation T_i → T_j means that T_i must complete execution before T_j can start to execute.
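A precedence graph can be stored as an edge list; any valid schedule must execute tasks in a topological order. A minimal sketch using Kahn's algorithm (names are ours):

```python
from collections import deque

def topological_order(tasks, edges):
    """One execution order consistent with precedence edges (a, b),
    meaning a must complete before b starts; raises on a cycle."""
    succ = {t: [] for t in tasks}
    indeg = {t: 0 for t in tasks}
    for a, b in edges:
        succ[a].append(b)
        indeg[b] += 1
    ready = deque(t for t in tasks if indeg[t] == 0)
    order = []
    while ready:
        u = ready.popleft()
        order.append(u)
        for v in succ[u]:
            indeg[v] -= 1
            if indeg[v] == 0:
                ready.append(v)
    if len(order) != len(tasks):
        raise ValueError("precedence graph contains a cycle")
    return order

print(topological_order(["T1", "T2", "T3", "T4"],
                        [("T1", "T2"), ("T1", "T3"), ("T2", "T4")]))
```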

Schedulability Test 9: For a multiprocessor system, if a schedule exists which meets the deadlines of a set of single-instance tasks whose start times are the same, then the same set of tasks can be scheduled at run time even if their start times are different and not known a priori. Knowledge of pre-assigned deadlines and computation times alone is enough for scheduling using the LL algorithm.

EDF-based scheduling algorithms are available for communicating tasks using the deterministic rendezvous model and for periodic tasks with critical sections using the kernelized monitor model.

Generalizing the scheduling problem from a uniprocessor to a multiprocessor system greatly increases its complexity, since we now must also assign tasks to specific processors. In fact, for two or more processors, no scheduling algorithm can be optimal without a priori knowledge of (1) the deadlines, (2) the computation times, and (3) the start times of the tasks.

Schedulability Test 10: Given a set of k independent, preemptable (at discrete time instants), and periodic tasks on a multiprocessor system with n processors such that

U = \sum_{i=1}^{k} \frac{c_i}{p_i} \le n,

let

T = \gcd(p_1, \ldots, p_k),

t = \gcd\left( T \frac{c_1}{p_1}, \ldots, T \frac{c_k}{p_k} \right).

A sufficient condition for feasible scheduling of this task set is that t is integral.
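Because T·c_i/p_i is generally a rational number, exact arithmetic is needed to decide whether t is integral. A sketch using Python's `fractions` module (the rational-GCD helper is ours):

```python
from fractions import Fraction
from math import gcd, lcm

def rational_gcd(values):
    """GCD of positive rationals: gcd of the numerators over the lcm
    of the denominators (each Fraction is already in lowest terms)."""
    num, den = 0, 1
    for v in values:
        num = gcd(num, v.numerator)
        den = lcm(den, v.denominator)
    return Fraction(num, den)

def multiprocessor_sufficient_test(tasks, n):
    """Schedulability Test 10: (c_i, p_i) tasks on n processors.
    Returns True when U <= n and t = gcd(T*c_1/p_1, ..., T*c_k/p_k)
    is integral, where T = gcd(p_1, ..., p_k)."""
    if sum(Fraction(c, p) for c, p in tasks) > n:
        return False
    T = 0
    for _, p in tasks:
        T = gcd(T, p)
    t = rational_gcd([Fraction(T * c, p) for c, p in tasks])
    return t.denominator == 1

print(multiprocessor_sufficient_test([(2, 4), (3, 6)], 2))   # True: t = 1
print(multiprocessor_sufficient_test([(1, 4), (1, 6)], 2))   # False: t = 1/6
```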
