Friday 7 March 2014

PROCESSOR SCHEDULING

We shall start our discussion on processor scheduling with an example.
· Assume we have two processes: A, B
· Assume each process executes for one second and then does I/O for one second, and this pattern is repeated 10 times.
 1. With no multiprogramming:

A is run first for 2*10 = 20 seconds, then B is run for 2*10 = 20 seconds.
So, it takes 40 seconds to run processes A and B.
However, actual execution time is only half of this: 20 seconds. The other 20 seconds is idle time for the processor.

Processor utilization can be calculated as follows :
Processor utilization = ( 20 / 40 ) * 100 = 50%
 2. If we have multiprogramming:
This time, while one process does I/O the other one can execute, so the total elapsed time is 21 seconds.
Processor utilization is ( 20 / 21 ) * 100 ≈ 95%.
However, note that this is an extreme case. Also note that process A still finishes at the same time as before, but process B now finishes in about half the previous time.
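The utilization figures for the two-process example above can be checked with a few lines (a sketch; the busy time of 20 seconds is 10 seconds of execution for each of A and B):

```python
# Processor utilization = busy time / (busy time + idle time).
busy = 20            # seconds of actual execution (10 s for A + 10 s for B)
no_multi_total = 40  # no multiprogramming: A runs 20 s, then B runs 20 s
multi_total = 21     # multiprogramming: A and B overlap execution and I/O

print(f"no multiprogramming: {busy / no_multi_total:.0%}")  # prints 50%
print(f"multiprogramming:    {busy / multi_total:.1%}")     # prints 95.2%
```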
The processor executes a large number of processes or tasks belonging to users or system programs. A process is a program in execution.
Processes execute through a cycle of repetitive processor and I/O bursts.
Processor-bound program: has a few very long processor bursts.
I/O-bound program: has many very short processor bursts.
Knowing the distribution of processor-bound vs. I/O-bound processes can be very important in selecting an appropriate processor scheduling algorithm.
Process State Diagram

START : the process has just arrived.
READY : the process is waiting to grab the processor.
RUNNING : the process has been allocated the processor.
WAITING : the process is doing I/O.
HALTED : the process has finished execution and is about to leave the system.
Each process is represented in the OS by its process control block (PCB).
The number of PCB's will change with time. The designer must decide whether to predeclare the maximum no. of processes and to allocate memory for that many PCB's, or to have a dynamic PCB memory management policy.
If we have a single-processor system, there will never be more than one running process at a time. The other processes will be waiting for the processor. We keep the processes which are ready on a list called the READY queue.
The ready queue is generally a linked list. The list header will contain pointers to the first and last PCB's in the list. The ready queue is not necessarily a FIFO queue.
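A minimal sketch of a PCB and the ready queue (the field names here are illustrative, not from any particular OS; Python's deque stands in for a linked list whose header holds first and last pointers):

```python
from collections import deque
from dataclasses import dataclass, field

@dataclass
class PCB:
    """Process control block (illustrative fields only)."""
    pid: int
    state: str = "READY"   # START / READY / RUNNING / WAITING / HALTED
    pc: int = 0            # saved program counter
    registers: dict = field(default_factory=dict)  # saved register contents

ready_queue = deque()      # both ends reachable, like first/last PCB pointers
ready_queue.append(PCB(pid=1))
ready_queue.append(PCB(pid=2))

running = ready_queue.popleft()  # dispatch the PCB at the head of the queue
running.state = "RUNNING"
print(running.pid)               # prints 1
```

Note the ready queue need not be FIFO; a scheduler is free to pick any PCB from it.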
When a process is allocated the processor, it executes and after some time it either finishes or makes an I/O request for an I/O device like a disk. The disk may be busy with the I/O of another process so the new requesting process will have to wait for the disk. The list of processes waiting for a particular I/O device is called a DEVICE queue. Each device has its own device queue.
In multiprogrammed systems, the processor can be switched from one process to another. Note that when an interrupt occurs, PC and register contents for the running process (which is being interrupted) must be saved so that the process can be continued correctly afterwards.
For the purpose of discussing processor scheduling, we shall assume we have a single I/O server and a single device queue.

Long-term Scheduler, Short-term Scheduler, Medium-term Scheduler
If we consider batch systems, there will often be more processes submitted than the no. of processes that can be executed immediately. So incoming processes are spooled (to a disk). The long-term scheduler selects processes from this process pool and loads selected processes into memory for execution. The short-term scheduler selects the process to get the processor from among the processes which are already in memory.
The short-term scheduler will be executing frequently (often at least once every 10 milliseconds), so it has to be very fast in order to achieve good processor utilization. Since the short-term scheduler, like all other OS programs, must itself execute on the processor, if it takes 1 millisecond to choose a process then ( 1 / ( 10 + 1 )) ≈ 9% of the processor time is spent on short-term scheduling and only about 91% is left for the processes themselves.
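The overhead figure above is just the ratio of scheduler time to total cycle time:

```python
# Each cycle: 10 ms of process execution + 1 ms of short-term scheduling.
scheduler_time = 1
process_time = 10
overhead = scheduler_time / (process_time + scheduler_time)
print(f"{overhead:.0%} of processor time spent scheduling")  # prints 9%
```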
The long-term scheduler on the other hand executes much less frequently. It controls the degree of multiprogramming (no. of processes in memory at a time). If the degree of multiprogramming is to be kept stable (say 10 processes at a time), then the long-term scheduler may only need to be invoked when a process finishes execution.
The long-term scheduler must select a good process mix of I/O-bound and processor bound processes. If most of the processes selected are I/O-bound, then the ready queue will almost be empty while the device queue(s) will be very crowded. If most of the processes are processor-bound, then the device queue(s) will almost be empty while the ready queue is very crowded and that will cause the short-term scheduler to be invoked very frequently.
Time-sharing systems (mostly) have no long-term scheduler. The stability of these systems either depends upon a physical limitation (no. of available terminals) or the self-adjusting nature of users (if you can't get response, you quit).
It can sometimes be good to reduce the degree of multiprogramming by removing processes from memory and storing them on disk. These processes can then be reintroduced into memory by the medium-term scheduler. This operation is also known as swapping. Swapping may be necessary to improve the process mix or to free memory.
Processor Scheduling Algorithms:
Processor scheduling algorithms try to answer the following crucial question : Which process in the ready queue will get the processor? In order to answer this question, one may consider the relative importance of the following performance criteria :
1. Processor Utilization:
(processor busy time / (processor busy time+processor idle time))
We would like to keep the processor as busy as possible, that is, we want to reduce the idle time. (In real life, utilization typically ranges from 40% to 90%.)
2. Throughput :
( no. of processes completed / time unit )
It is the measure of work done in a unit time interval.
3. Turnaround time (tat) :
( t(process completed) - t(process submitted) )
It is the sum of time spent waiting to get into the ready queue, execution time and I/O time.
4. Waiting time :
( time spent waiting in the ready queue )
The processor scheduling algorithm affects only the time spent waiting in the ready queue; it doesn't affect the total execution time or I/O time. So considering only the waiting time for each process, instead of the turnaround time, is sufficient.
5. Response time :
( t(first response) - t(submission of request) )
It is the amount of time it takes to start responding to a request. This criterion is important for interactive systems.
We normally want to maximize the average processor utilization and throughput, and we want to minimize the average turnaround time, waiting time and response time. However it may sometimes be desirable to optimize the minimum or maximum values of our criteria instead of the average values. For example, we may want to minimize the maximum response time to ensure that all users get good service.
We shall see a number of processor scheduling algorithms. How do we select a processor scheduling algorithm for a particular system? We first decide on the relative importance of our selection criteria (processor utilization, throughput, waiting time, response time), together with some minimal or maximal or average values required.
For example,
· "maximize processor utilization but response time should not exceed 1 second"
· "maximize throughput, but average turnaround time should be linearly proportional to total execution time".
Now, we shall discuss a number of well-known processor scheduling algorithms.
 1: First-Come-First-Served (FCFS)
It is the simplest processor scheduling algorithm. The process which requests the processor first, gets it first.
FCFS is implemented with a FIFO queue. When the processor becomes free, the process whose PCB is at the head of the ready queue gets the processor and its PCB is removed from the ready queue.
Its performance may often be very poor compared to other processor scheduling methods.
In FCFS, the average turnaround time depends on the incoming order of processes and it may vary substantially.
FCFS may cause processes with short processor bursts to wait for a long time. If one process with a large processor burst gets hold of the processor, all the other processes will wait for it to release the processor and the ready queue will grow. This is called the convoy effect.
Example:
process   next processor burst   incoming order
-------   --------------------   --------------
A         16 msec                1
B         4 msec                 2
C         1 msec                 3
The timing (Gantt) chart is:

| A: 0-16 | B: 16-20 | C: 20-21 |
tat[A] = 16 msec, tat[B] = 20 msec, tat[C] = 21 msec
So, average tat is (16 + 20 + 21) / 3 = 19 msec.
If the incoming order is C, B, A :

| C: 0-1 | B: 1-5 | A: 5-21 |

tat[C] = 1 msec, tat[B] = 5 msec, tat[A] = 21 msec
So, average tat is (1 + 5 + 21) / 3 = 9 msec.
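The two orderings above can be checked with a short FCFS simulation (a sketch; all processes are assumed to arrive at time 0, so completion time equals turnaround time):

```python
def fcfs_turnaround(bursts):
    """Average turnaround time under FCFS for processes arriving at t=0.

    bursts: list of (name, burst) pairs in arrival order."""
    t, tat = 0, {}
    for name, burst in bursts:
        t += burst     # each process runs to completion, in arrival order
        tat[name] = t  # completion time == turnaround time (arrival at 0)
    return tat, sum(tat.values()) / len(tat)

print(fcfs_turnaround([("A", 16), ("B", 4), ("C", 1)]))  # average tat 19.0
print(fcfs_turnaround([("C", 1), ("B", 4), ("A", 16)]))  # average tat 9.0
```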

 2: Shortest-Process-First (SPF)
In this method, the processor is assigned to the process with the smallest next processor burst time. Therefore, we need to know, for each process, the length of its next processor burst.
Example:
process   next processor burst
-------   --------------------
A         7 msec
B         14 msec
C         3 msec
D         6 msec

| C: 0-3 | D: 3-9 | A: 9-16 | B: 16-30 |

tat[A] = 16 , tat[B] = 30 , tat[C] = 3 , tat[D] = 9 msec.
average tat = (16 + 30 + 3 + 9) / 4 = 14.5 msec
If we had used the FCFS policy:
| A: 0-7 | B: 7-21 | C: 21-24 | D: 24-30 |
tat[A] = 7, tat[B] = 21, tat[C] = 24, tat[D] = 30 msec,
average tat = (7 + 21 + 24 + 30) / 4 = 20.5 msec.
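Since SPF with simultaneous arrivals is just FCFS on a burst-sorted queue, the comparison above reduces to one sort (a sketch; all arrivals at t=0):

```python
def avg_tat(bursts):
    """Average turnaround time when bursts run back-to-back in list order."""
    t = total = 0
    for b in bursts:
        t += b       # completion time of this process
        total += t
    return total / len(bursts)

bursts = [7, 14, 3, 6]          # A, B, C, D
print(avg_tat(bursts))          # FCFS order: prints 20.5
print(avg_tat(sorted(bursts)))  # SPF order:  prints 14.5
```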
SPF gives the minimum average waiting time for a given set of processes. Moving a short process in front of a long one in the ready queue decreases the waiting time of the long process.
[Figure: two orderings compared. (1) Long job first: the short job waits for the whole long job, while the long job has no waiting time. (2) Short job first: the long job waits only for the short job, and the short job has no waiting time, so the total waiting time is smaller.]
One approach in implementing this algorithm may be to predict next processor burst times by using the previous processor burst times for the processes in the ready queue, and choose the process with the shortest predicted next processor burst time to get the processor.
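One common concrete form of such a prediction (an assumption here; the notes do not fix a formula) is an exponential average of the measured bursts, tau_{n+1} = alpha * t_n + (1 - alpha) * tau_n:

```python
def predict_next_burst(measured, alpha=0.5, initial=10.0):
    """Exponential average of past burst lengths.

    measured: the process's past processor bursts, oldest first.
    alpha:    weight of the most recent measurement (0..1).
    initial:  guess used before any burst has been observed."""
    tau = initial
    for t in measured:
        tau = alpha * t + (1 - alpha) * tau  # recent bursts weigh more
    return tau

print(predict_next_burst([6, 4, 6, 4]))  # prints 5.0
```

The scheduler would then pick the ready process with the smallest predicted value.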

3: Priority scheduling
A priority is associated with each process, and the processor is given to the process with the highest priority. Equal-priority processes are scheduled FCFS.
Note that SPF is a special case of the priority scheduling algorithm where priority = 1 / next processor burst time.
Priorities can be fixed externally or they may be calculated by the OS. Externally, if all users have to code time limits and maximum memory for their programs, priorities are known before execution. Internally, a next processor burst time prediction such as that of SPF can be used to determine priorities dynamically.
A priority scheduling algorithm can leave some low-priority processes in the ready queue indefinitely. (If the system is heavily loaded, there will always be higher-priority processes to grab the processor). This is called the starvation problem.
One solution for the starvation problem might be to gradually increase the priority of processes that stay in the system for a long time.
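This technique is known as aging; a periodic priority bump is one way to sketch it (the increment, the aging period, and "larger number means higher priority" are all arbitrary choices here):

```python
def age_priorities(ready_queue, increment=1):
    """Called once per aging period: every waiting process creeps upward,
    so no process can stay at the bottom of the ready queue forever."""
    for proc in ready_queue:
        proc["priority"] += increment  # assume larger == higher priority

queue = [{"name": "low", "priority": 1}, {"name": "high", "priority": 9}]
for _ in range(10):  # ten aging periods pass while both wait
    age_priorities(queue)
print(queue[0])      # prints {'name': 'low', 'priority': 11}
```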
 4: Preemptive scheduling
FCFS, SPF and the priority algorithms discussed so far are all non-preemptive algorithms. That is, once the processor is allocated to a process, the process keeps the processor until it terminates or requests I/O.
SPF and priority scheduling algorithms can be modified to be preemptive.
Consider SPF. While one process is executing on the processor, another process arrives. The new process may have a predicted next processor burst time shorter than what is left of the currently executing process.
If the SPF algorithm is preemptive, the currently executing process will be preempted from the processor and the new process will start executing. A preemptive SPF algorithm is also known as a Shortest-Remaining-Time-First (SRTF) algorithm.
Similarly a priority scheduling algorithm may be preemptive. If the process that is joining the ready queue has a higher-priority than the currently running process, it preempts the current one and gets the processor.

Example : ( preemptive SPF = SRTF )
process   arrival time   next processor burst
-------   ------------   --------------------
A         0              15 msec
B         2              2 msec
C         7              9 msec
D         10             3 msec

| A: 0-2 | B: 2-4 | A: 4-7 | C: 7-10 | D: 10-13 | C: 13-19 | A: 19-29 |

(Preemptions: A is preempted by B at t=2 and by C at t=7; C is preempted by D at t=10.)
tat[A] = 29, tat[B] = 2, tat[C] = 12, tat[D]= 3 msec,
average tat = 46/4 = 11.5 msec.
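The turnaround times above can be reproduced with a millisecond-by-millisecond SRTF sketch (ties are broken by dictionary order here, which happens not to matter for this example):

```python
def srtf(procs):
    """procs: dict name -> (arrival, burst). Returns name -> turnaround."""
    remaining = {n: b for n, (a, b) in procs.items()}
    finish, t = {}, 0
    while remaining:
        ready = [n for n in remaining if procs[n][0] <= t]
        if not ready:
            t += 1                 # processor idles until someone arrives
            continue
        n = min(ready, key=lambda p: remaining[p])  # shortest remaining time
        remaining[n] -= 1          # run it for one millisecond
        t += 1
        if remaining[n] == 0:
            del remaining[n]
            finish[n] = t
    return {n: finish[n] - procs[n][0] for n in finish}

tat = srtf({"A": (0, 15), "B": (2, 2), "C": (7, 9), "D": (10, 3)})
print(sorted(tat.items()), sum(tat.values()) / 4)
# prints [('A', 29), ('B', 2), ('C', 12), ('D', 3)] 11.5
```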
Same example, non-preemptive SPF:
| A: 0-15 | B: 15-17 | D: 17-20 | C: 20-29 |
Average tat is 62 / 4 = 15.5 msec.
 5: Round-Robin Scheduling (RRS)
RRS is a preemptive process scheduling algorithm suitable for time-sharing systems. It is based on a small unit of time called time slice (time quantum), generally ranging from 10 to 100 milliseconds.
The ready queue is treated as a FIFO circular queue. The RRS goes around the ready queue, allocating the processor to each process for a time interval of at most one time quantum.
The round-robin scheduler takes the first process from the ready queue, sets a timer to interrupt after one time quantum and gives the processor to that process. If the process has a processor burst < time slice, then it releases the processor voluntarily, either by terminating or by issuing an I/O request. The RRS then proceeds with the next process in the ready queue.
On the other hand, if the process has a processor burst > time slice, then the timer will go off after one time quantum expires and interrupt the OS. Registers, etc. are saved and the process is put at the end of the ready queue. The next process in the ready queue is then selected.

Example :
Process   Next processor burst time
-------   -------------------------
A         12 millisecs
B         5 "
C         8 "
D         2 "
and assume that time slice is 3 millisecs.
| A: 0-3 | B: 3-6 | C: 6-9 | D: 9-11 | A: 11-14 | B: 14-16 | C: 16-19 | A: 19-22 | C: 22-24 | A: 24-27 |
The performance of RRS depends heavily on the time quantum selected.
a. If the time quantum → ∞, RRS becomes FCFS.
b. If the time quantum is very small, RRS is called processor sharing. (It looks as if each of the n processes has its own processor running at 1/n'th the speed of the actual processor.)
At the end of each quantum we get an interrupt from the timer. The processor must be switched to another process. The PC and register values of the executing process must be saved, and the PC and other register values for the new process must be loaded. This is called context switching and it is an overhead. The context switching overhead varies from processor to processor. It depends on the memory speed, processor speed, number of registers and the existence of special instructions for loading / storing all registers. (Typically: 10 ---> 100 microseconds)
A smaller time quantum increases the number of context switches and so increases the overhead. In practice, about 80% of processor bursts should be shorter than the time quantum. Also, the time quantum must be much greater than the context-switching time.
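The quantum trade-off can be seen by re-running the example above with a round-robin sketch that also counts context switches (all processes assumed to arrive at t=0, and each switch between segments counted as one context switch):

```python
from collections import deque

def round_robin(bursts, quantum):
    """bursts: list of (name, burst), all arriving at t=0.
    Returns (completion order, number of context switches)."""
    q = deque(bursts)
    order, switches = [], 0
    while q:
        name, left = q.popleft()
        left -= min(quantum, left)  # run for at most one quantum
        if left:
            q.append((name, left))  # timer went off: back of the queue
        else:
            order.append(name)      # finished within its slice
        if q:
            switches += 1           # switching to the next process
    return order, switches

procs = [("A", 12), ("B", 5), ("C", 8), ("D", 2)]
print(round_robin(procs, 3))  # prints (['D', 'B', 'C', 'A'], 9)
print(round_robin(procs, 1))  # same order, but 26 context switches
```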
 6: Processor Scheduling Considering I/O:
Up to this point, the processor scheduling algorithms discussed have ignored I/O times. In fact, processes do I/O and execution one after the other, cyclically. Consider the following example, where it is assumed that execution and I/O burst durations for a process are fixed and known beforehand. In processor scheduling, at a given time, only the processes in the ready queue are considered.

Example:
process   arrival_time   exec_1   I/O_1   exec_2   I/O_2   exec_3   I/O_3
-------   ------------   ------   -----   ------   -----   ------   -----
A         0              5        10      5        10      5        10
B         5              5        12      5        12      5        12
Assume there is a single I/O device and context switching time is negligible.

[Gantt charts: one for the processor, showing bursts A1, B1, A2, B2, A3, B3 with idle gaps in between, and one for the single I/O device, showing the corresponding I/O bursts A1, B1, A2, B2, A3, B3; the time marks are 0, 5, 7, 10, 12, 15, 20, 27, 32, 37, 42, 49, 54, 59, 61.]
7: Multi-Level Queues

If processes can easily be classified into groups, we may use a multi-level queue scheduling (MQS) algorithm. We partition the ready queue into separate queues, and processes are assigned to one queue depending on their properties (e.g. memory size, process type). Each queue can then use a different process scheduling algorithm.
For example,
· Foreground processes: interactive
· Background processes: batch
These two groups have different response-time requirements.
a. schedule the foreground queue by RRS and the background queue by FCFS
b. in addition, there must be an interqueue scheduling policy (a scheduling choice between the queues): e.g. give the foreground queue absolute priority over the background queue, or do time-slicing among the queues.
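Option (a) with absolute foreground priority can be sketched as: serve the (round-robin) foreground queue whenever it is non-empty, and fall back to the (FCFS) background queue only otherwise. The process names and the quantum below are illustrative:

```python
from collections import deque

foreground = deque()  # interactive processes, scheduled round-robin
background = deque()  # batch processes, scheduled FCFS
QUANTUM = 3

def pick_next():
    """Foreground has absolute priority over background."""
    if foreground:
        name, left = foreground.popleft()
        run = min(QUANTUM, left)
        if left - run:                     # quantum expired: rotate
            foreground.append((name, left - run))
        return name, run
    if background:
        name, left = background.popleft()  # FCFS: runs to completion
        return name, left
    return None, 0                         # both queues empty: idle

foreground.extend([("editor", 5), ("shell", 2)])
background.append(("payroll", 40))
print(pick_next())  # prints ('editor', 3) -- the batch job keeps waiting
```

Note that with absolute priority, a busy foreground queue can starve the background queue; time-slicing between the queues avoids this.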


 QUESTIONS

1. Construct an example to compare the Shortest Job First strategy of processor scheduling with a Longest Job First strategy, in terms of processor utilization, turnaround time and throughput.

2. The following jobs are in the given order in the ready queue:
Job   CPU Burst (msec)   Priority
---   ----------------   --------
A     6                  3
B     1                  1
C     2                  3
D     1                  4
E     3                  2
None of these jobs have any I/O requirement.
a. What is the turnaround time of each job with First Come First Served, Shortest Job First, Round Robin (time quantum = 1) and non-preemptive priority scheduling? Assume that the operating system has a context switching overhead of 1 msec for saving and another 1 msec for loading the registers of a process.
b. What is the waiting time for each job with each of the four scheduling techniques and the assumption as in part a?
3. The following data is given for a computer system employing Round-Robin processor scheduling with time slice = 5. If two processes enter the ready queue at the same time, they enter in alphabetical order of their names:
process name   arrival time   CPU burst   I/O burst   CPU burst
------------   ------------   ---------   ---------   ---------
A              0              4           5           6
B              3              3           -           -
C              10             2           7           2
a. Assuming that context switch time is 0, draw the Gantt chart for the above processes, and calculate the average waiting time and CPU utilization.
b. Assuming context switch time is 1, repeat part a.
c. Discuss briefly how the time slice should be selected to increase the system performance, given average CPU burst time, average I/O time, and the context switch time.

4. Consider the following job arrival and CPU burst times given:
job arrival time CPU burst time
--- -------------- ------------------
A 0 msec 7 msec
B 2 msec 4 msec
C 3 msec 1 msec
D 6 msec 6 msec
a. Using the shortest job first scheduling method, draw the Gantt chart (showing the order of execution of these jobs), and calculate the average waiting time for this set of jobs.
b. Repeat a. using the shortest remaining time first scheduling method.
c. What is the main difference between these two methods ?

5. Explain the following briefly:
a. What is an I/O bound job?
b. What is CPU bound job?
c. Suppose there is one I/O bound job and one CPU bound job in a ready queue. Which one should get the processor in order to achieve a better CPU and I/O device utilization.

6. A processor scheduling algorithm determines an order of execution of active processes in a computer system.
a. If there are n processes to be scheduled on one processor, how many possible different schedules are there? Give a formula in terms of n.
b. Repeat part a. for n processes to be scheduled on m processors.


7. Explain the following terms :
a. Long-term scheduler b. Short-term scheduler
Which one controls the degree of multiprogramming?

8. a. Find and draw the Gantt chart for the following processes assuming a preemptive shortest remaining time first processor scheduling algorithm.
Process   Arrival time   Next Processor Burst Time
-------   ------------   -------------------------
A         0 msec         12 msec
B         2              9
C         5              2
D         5              7
E         9              3
F         10             1
Clearly indicate every preemption on your Gantt chart.
b. Calculate the turnaround times for each process, and find the average turnaround time for the above processes.

9. The UNIX OS has a priority-based processor scheduling algorithm. Priorities are computed as the ratio of processor execution time to real time (total elapsed time).
a. Does this algorithm give higher priority to processor bound processes, or to I/O bound processes? Explain.
b. Under which conditions can this algorithm increase the overall system performance? Discuss the effects of this algorithm on the system performance criteria.
c. If priorities are recomputed every second, and if there is no I/O, to which algorithm does this UNIX processor scheduling algorithm degenerate?


10. Show the execution of the following processes on Gantt charts for the processor and the I/O device if the Shortest Process First algorithm is used for the processor. Assume that the I/O queue serves processes in first-come-first-served manner. Also find the average waiting time in the ready queue.
CPU and I/O bursts:

Arrival   Process   1st CPU   1st I/O   2nd CPU   2nd I/O   3rd CPU
-------   -------   -------   -------   -------   -------   -------
0         A         4         4         4         4         4
2         B         8         1         8         1         -
3         C         2         1         2         1         -
7         D         1         1         1         1         1
11. Show the execution of the following processes on Gantt charts for the processor and the I/O device if the Shortest Remaining Time First algorithm is used for the processor. Also find the average waiting time in the ready queue. If processes enter the ready queue at the same time due to a. preemption of the processor, b. new submission, or c. completion of an I/O operation, then the process of case a. will be in front of the process of case b., which will be in front of the process of case c. Assume that the I/O queue works in first-come-first-served manner.
CPU and I/O bursts:

Arrival   Process   1st CPU   1st I/O   2nd CPU   2nd I/O   3rd CPU
-------   -------   -------   -------   -------   -------   -------
0         A         4         4         4         4         4
2         B         8         1         8         1         -
3         C         1         1         1         1         -
12. Consider a priority-based processor scheduling algorithm such that whenever there is a process in the ready queue with a priority higher than that of the one executing in the CPU, it forces the executing process to be preempted and itself grabs the CPU. If there is more than one such process with the highest priority, then among these the one that entered the ready queue first is chosen. Assume that all the processes have a single CPU burst and no I/O operation, and consider the following table:
Process id   Submit(i) msec   Burst(i) msec
----------   --------------   -------------
P1           0                8
P2           2                4
P3           3                2
where Submit(i) denotes when process Pi is submitted to the system (assume processes are submitted just before the tick of the clock) and Burst(i) denotes the length of the CPU burst of process Pi. Let Tnow denote the present time and Execute(i) the total execution time of process Pi on the CPU until Tnow. Assuming that priorities are recalculated every msec (just after the tick of the clock), draw the corresponding Gantt charts for the following priority formulas, which are valid from when a process is submitted until it terminates:
a. Priority(i)= Tnow-Submit(i)
b. Priority(i)= Burst(i)-Execute(i)
c. Priority(i) = (Burst(i) - Execute(i))^(-1)
13. Consider the following job arrival and CPU, I/O burst times. Assume that context switching time is negligible in the system and there is a single I/O device, which operates in first-come-first-served manner.
process   arrival t.   CPU-1   I/O-1   CPU-2
-------   ----------   -----   -----   -----
A         0            2       4       4
B         1            1       5       2
C         8            2       1       3
Draw the Gantt charts both for the CPU and the I/O device, and then find the average turnaround time and CPU utilization for the
a. first-come-first-served
b. round-robin
processor scheduling algorithms.
14. Consider the following job arrivals:

process   arrival time   next CPU burst
-------   ------------   --------------
A         0              7
B         1              5
C         2              3
D         5              1
a. Draw the Gantt chart for the CPU if nonpreemptive SPF scheduling is used
b. Repeat part a. if preemptive SPF scheduling is used
c. Repeat part a. if nonpreemptive priority-based CPU scheduling is used with
priority(i) = 10 + tnow - tr(i) - cpu(i)
where ts(i) = the time process i is submitted to the system,
tr(i) = the time process i last entered the ready queue,
cpu(i) = next CPU burst length of process i,
tnow = current time.
The process with the highest priority grabs the CPU. If more than one process has the same priority, the one that comes first in alphabetical order is chosen.
d. For each of the above cases, indicate whether starvation may occur.
