
- 1. Introduction to computer architecture. Organizational and methodological instructions
- 2. Interrupts. Interrupts and the Instruction Cycle. Interrupt Processing. Multiprogramming.
- 3. Introduction to Operating Systems (OS), general architecture and organization.
- Introduction to OS Organization
- 4. Major Achievements of OS: the Process; Memory Management; Information Protection and Security; Scheduling and Resource Management; System Structure.
- 5. Process Description and Control
- 6. Process States. Process Models. The Creation and Termination of Processes. Suspended Processes.
- 7. Process Description. Operating System Control Structures. Process Switching.
- 8. Memory Management
- 10. Memory Management Requirements. Relocation. Protection. Sharing.
- 11. Memory Partitioning. Relocation.
- 12. Virtual Memory. Paging. Segmentation. Combined Paging and Segmentation.
- 13. Replacement Policy. First-Come-First-Served, Round-Robin algorithms
- 14. Replacement Policy. Shortest Process Next, Shortest Remaining Time algorithms.
- 15. File Organization and Access.
- 16. Secondary Storage Management.
- 17. Short history of Windows, general architecture, software configuration, registry, main administration tools; the boot process.
- 18. Mutual Exclusion and Synchronization
- 19. Programming tools for mutual exclusion: locks, semaphores
- 20. Deadlock: general principle, synchronization by events, monitors, the producer/consumer example, the dining philosophers’ problem
- 21. Disk scheduling. RAID arrays
- 22. Input-output management
- 23. Page replacement algorithms
- 24. Mutual exclusion and synchronization
- 25. Overview of computer hardware
- 26. Uniprocessor scheduling.
- 27. Implementation of disk scheduling algorithms SSTF, SCAN.
- 28. Implementation of disk scheduling algorithm LOOK.
- 29. Implementation of LRU replacement algorithm.
- 31. Briefly describe basic types of processor scheduling. Examples
- I/O scheduling
- 32. What is the difference between preemptive and nonpreemptive scheduling? Examples.
20. Deadlock: general principle, synchronization by events, monitors, the producer/consumer example, the dining philosophers’ problem
Monitors. Semaphores provide a primitive yet powerful and flexible tool for enforcing mutual exclusion and for coordinating processes.
The monitor is a programming-language construct that provides equivalent functionality to that of semaphores and that is easier to control. The monitor construct has been implemented in a number of programming languages, including Concurrent Pascal, Pascal-Plus, Modula-2, Modula-3, and Java. It has also been implemented as a program library. This allows programmers to put a monitor lock on any object. In particular, for something like a linked list, you may want to lock all linked lists with one lock, or have one lock for each list, or have one lock for each element of each list.
The chief characteristics of a monitor are the following:
The local data variables are accessible only by the monitor’s procedures and not by any external procedure.
A process enters the monitor by invoking one of its procedures.
Only one process may be executing in the monitor at a time; any other processes that have invoked the monitor are blocked, waiting for the monitor to become available.
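To make these characteristics concrete, here is a minimal sketch of how a monitor-like construct is often built in C with a pthread mutex and condition variable; the type and procedure names (counter_monitor, cm_increment, cm_decrement_when_positive) are illustrative, not taken from any particular library, and a language with native monitors (such as Java) would hide the locking entirely.

```c
/* A monitor-like construct sketched with POSIX threads.
 * The struct's data are touched only inside its "procedures",
 * and the mutex guarantees that only one thread is active
 * inside the monitor at a time. Names are illustrative. */
#include <pthread.h>

typedef struct {
    pthread_mutex_t lock;    /* enforces "one process inside the monitor" */
    pthread_cond_t  nonzero; /* lets a caller wait for a condition */
    int             count;   /* local data: accessed only via procedures */
} counter_monitor;

void cm_init(counter_monitor *m) {
    pthread_mutex_init(&m->lock, NULL);
    pthread_cond_init(&m->nonzero, NULL);
    m->count = 0;
}

/* Monitor procedure: enter, update local data, signal waiters, leave. */
void cm_increment(counter_monitor *m) {
    pthread_mutex_lock(&m->lock);
    m->count++;
    pthread_cond_signal(&m->nonzero);
    pthread_mutex_unlock(&m->lock);
}

/* Monitor procedure: block (releasing the monitor) until count > 0. */
void cm_decrement_when_positive(counter_monitor *m) {
    pthread_mutex_lock(&m->lock);
    while (m->count == 0)
        pthread_cond_wait(&m->nonzero, &m->lock);
    m->count--;
    pthread_mutex_unlock(&m->lock);
}
```

The `while` loop around `pthread_cond_wait` matters: the waiting thread re-checks the condition after being signalled, which mirrors how a monitor’s wait/signal pair is normally used.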
One of the most common problems faced in concurrent processing is the producer/consumer problem. The general statement is this: there are one or more producers generating some type of data and placing these in a buffer. There is a single consumer that is taking items out of the buffer one at a time. The system is to be constrained to prevent the overlap of buffer operations. That is, only one agent may access the buffer at any one time. The problem is to make sure that the producer won't try to add data into the buffer if it's full and that the consumer won't try to remove data from an empty buffer. We will look at a number of solutions to this problem to illustrate both the power and the pitfalls of semaphores.
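As a sketch of the classic semaphore solution for a bounded buffer (the buffer size and function names are illustrative): one counting semaphore tracks free slots, another tracks filled items, and a binary semaphore keeps buffer operations from overlapping.

```c
/* Bounded-buffer producer/consumer with POSIX semaphores:
 * "empty" counts free slots, "full" counts filled slots,
 * "mutex" prevents overlapping buffer operations. */
#include <semaphore.h>

#define N 8                  /* assumed buffer capacity */

static int buffer[N];
static int in = 0, out = 0;  /* next slot to fill / to empty */
static sem_t empty, full, mutex;

void buffer_init(void) {
    sem_init(&empty, 0, N);  /* N free slots initially */
    sem_init(&full, 0, 0);   /* no items yet */
    sem_init(&mutex, 0, 1);  /* binary semaphore for mutual exclusion */
}

void producer_put(int item) {
    sem_wait(&empty);        /* block if the buffer is full */
    sem_wait(&mutex);        /* only one agent touches the buffer */
    buffer[in] = item;
    in = (in + 1) % N;
    sem_post(&mutex);
    sem_post(&full);         /* announce a new item */
}

int consumer_get(void) {
    sem_wait(&full);         /* block if the buffer is empty */
    sem_wait(&mutex);
    int item = buffer[out];
    out = (out + 1) % N;
    sem_post(&mutex);
    sem_post(&empty);        /* announce a free slot */
    return item;
}
```

One pitfall worth noting: if the producer acquired `mutex` before waiting on `empty`, a full buffer would leave it holding the mutex while the consumer blocks trying to acquire it, and the system deadlocks. The order of the waits is therefore essential.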
21. Disk scheduling. RAID arrays
Over the last 40 years, the increase in the speed of processors and main memory has far outstripped that for disk access, with processor and main memory speeds increasing by about two orders of magnitude compared to one order of magnitude for disk.
Disk Performance Parameters.
To read or write, the disk head must be positioned at the desired track and at the beginning of the desired sector
Seek time (time it takes to position the head at the desired track)
Rotational delay or rotational latency (the time it takes for the beginning of the sector to reach the head)
Access time (the sum of seek time and rotational delay; the time it takes to get into position to read or write)
Data transfer occurs as the sector moves under the head
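As a rough worked example (the figures are assumed for illustration, not taken from the lecture): with an average seek time of 4 ms and a 7,200 rpm spindle, one revolution takes 60,000/7,200 ≈ 8.3 ms, so the average rotational delay is about 4.2 ms, and the average access time before the transfer begins is roughly 4 + 4.2 ≈ 8.2 ms.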
Disk scheduling policies:
Seek time is the main reason for differences in performance
For a single disk there will be a number of pending I/O requests
If requests are selected randomly, we will get poor performance
FIFO (process requests sequentially. Fair to all processes. Approaches random scheduling in performance if there are many processes)
Priority (goal is not to optimize disk use but to meet other objectives. Short batch jobs may have higher priority. Provides good interactive response time)
LIFO (good for transaction processing systems. Possibility of starvation, since a job may never regain the head of the line)
Shortest Service Time First (select the disk I/O request that requires the least movement of the disk arm from its current position; always choose the minimum seek time. A code sketch follows this list)
SCAN (arm moves in one direction only, satisfying all outstanding requests until it reaches the last track in that direction. Direction is reversed)
C-SCAN (restricts scanning to one direction only. When the last track has been visited in one direction, the arm is returned to the opposite end of the disk and the scan begins again)
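As a sketch of how SSTF can be implemented (the request list, starting head position, and function names are illustrative):

```c
/* Shortest-Service-Time-First (SSTF) selection: from the set of pending
 * requests, always pick the track closest to the current head position. */
#include <stdio.h>
#include <stdlib.h>

/* Returns the index of the pending request nearest to `head`,
 * or -1 if there are no pending requests. */
int sstf_pick(const int *pending, int n, int head) {
    int best = -1, best_dist = 0;
    for (int i = 0; i < n; i++) {
        int dist = abs(pending[i] - head);
        if (best == -1 || dist < best_dist) {
            best = i;
            best_dist = dist;
        }
    }
    return best;
}

int main(void) {
    int pending[] = {55, 58, 39, 18, 90, 160, 150, 38, 184};
    int n = sizeof pending / sizeof pending[0];
    int head = 100;

    /* Service every request, always choosing the closest one next. */
    while (n > 0) {
        int i = sstf_pick(pending, n, head);
        printf("service track %d (seek %d)\n", pending[i], abs(pending[i] - head));
        head = pending[i];
        pending[i] = pending[n - 1];  /* remove the serviced request */
        n--;
    }
    return 0;
}
```

SCAN differs only in the selection rule: the arm keeps moving in its current direction and services the nearest request in that direction, reversing only when the last track has been reached.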
RAID arrays
redundant array of independent disks
set of physical disk drives viewed by the operating system as a single logical drive
data are distributed across the physical drives of an array
redundant disk capacity is used to store parity information (a parity sketch follows the level list below)
RAID 0 (non-redundant)
RAID 1 (mirrored)
RAID 2 (redundancy through Hamming code)
RAID 3 (bit-interleaved parity)
RAID 4 (block-level parity)
RAID 5 (block-level distributed parity)
RAID 6 (dual redundancy)
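The parity-based levels (RAID 3 through RAID 6) all rest on the same idea: the parity block is the bytewise XOR of the data blocks in a stripe, so any single lost block can be rebuilt from the parity and the surviving blocks. A minimal sketch, with assumed block size, assumed stripe width, and illustrative function names:

```c
/* Parity computation and single-block reconstruction, as used by RAID 3-5. */
#include <string.h>

#define BLOCK 8   /* assumed block size in bytes */
#define DISKS 4   /* assumed number of data disks in a stripe */

/* Compute parity = d0 ^ d1 ^ ... ^ d(n-1), byte by byte. */
void compute_parity(unsigned char data[DISKS][BLOCK], unsigned char parity[BLOCK]) {
    memset(parity, 0, BLOCK);
    for (int d = 0; d < DISKS; d++)
        for (int b = 0; b < BLOCK; b++)
            parity[b] ^= data[d][b];
}

/* Rebuild one failed data block from the parity and the surviving blocks. */
void rebuild_block(unsigned char data[DISKS][BLOCK], const unsigned char parity[BLOCK],
                   int failed) {
    memcpy(data[failed], parity, BLOCK);
    for (int d = 0; d < DISKS; d++)
        if (d != failed)
            for (int b = 0; b < BLOCK; b++)
                data[failed][b] ^= data[d][b];
}
```

RAID 6 extends this with a second, independently computed check block, which is why it can tolerate two simultaneous disk failures.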