Sunday 23 June 2013

Process Scheduling

Parameters affecting Process Scheduling decisions

  • CPU utilization decreases with the overhead of context switching.
  • Throughput is the number of processes completed per unit time; it increases if the shortest tasks are executed first.
  • Turnaround time is the time between submission of a process and its completion. Average turnaround time is minimized by executing the shortest tasks first.
  • Waiting time is the time spent in the ready queue. It is also minimized when the shortest tasks are executed first.
  • Response time increases if context switching is infrequent (i.e. when the time quantum is long).
    • response time = (time of first result from the system) - (time of process submission)
    • if a process is submitted at time t0 and it is scheduled at time tn for a burst of time t, then its response time is (tn + t) - t0
  • run time of a process = turnaround time - waiting time
  • CPU utilization for n processes =
                    sum(run time of the n processes) / {sum(run time of the n processes) + sum(all context-switch overheads)}

  • Penalty rate = turnaround time / run time of the process
  • When calculating turnaround time, waiting time and response time, the context-switching time should be included; it may be negligible when there are few context switches, but it becomes significant as the number of context switches increases.
  • So by running the shortest jobs first, the following parameters are optimized (a small sketch of these metric definitions follows this list):
    • Throughput, turnaround time and waiting time, and also CPU utilization, because no reorganization of the process queue is required and the scheduling overhead is minimal.
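The relations above can be made concrete with a small Python sketch (the one referred to in the list). The Proc fields and the sample numbers are only illustrative assumptions, not taken from any particular system.

from dataclasses import dataclass

@dataclass
class Proc:
    arrival: float      # submission time t0
    first_run: float    # time the process first gets the CPU (tn)
    completion: float   # time the process finishes
    run_time: float     # total CPU time actually used

def turnaround(p):
    return p.completion - p.arrival

def waiting(p):
    # waiting time = turnaround time - run time (time spent in the ready queue)
    return turnaround(p) - p.run_time

def response(p, first_burst):
    # response time = (time of first result) - (time of submission) = (tn + t) - t0
    return (p.first_run + first_burst) - p.arrival

def penalty_rate(p):
    return turnaround(p) / p.run_time

def cpu_utilization(procs, total_switch_overhead):
    busy = sum(p.run_time for p in procs)
    return busy / (busy + total_switch_overhead)

p = Proc(arrival=0, first_run=4, completion=14, run_time=10)
print(turnaround(p), waiting(p), response(p, first_burst=3), penalty_rate(p))
print(cpu_utilization([p], total_switch_overhead=2))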

Schedulers
  • In preemptive scheduling a process can make a transition from the running state back to the ready state. Special hardware (a timer interrupt) is required for preemptive scheduling, and it affects the design of the kernel.
  • Scheduling algorithms that use prioritization or preemption may lead to starvation, because a high-priority process can always hog the CPU. There is no starvation in non-preemptive, non-prioritized scheduling such as FCFS.
First Come First Serve (FCFS) process scheduling
  • CPU utilization is optimal because there are the fewest possible context switches (only on process completion).
  • Throughput, turnaround time, average waiting time and response time are not optimal, since a long process can hog the CPU (see the sketch below).
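A minimal FCFS sketch (the process tuples and numbers are illustrative assumptions, and context-switch cost is ignored) showing how one long job in front delays all the later ones:

# FCFS: processes run to completion in arrival order.
# Each process is (name, arrival_time, cpu_burst).
def fcfs(procs):
    time, results = 0, []
    for name, arrival, burst in sorted(procs, key=lambda p: p[1]):
        start = max(time, arrival)
        time = start + burst
        results.append((name, time - arrival, start - arrival))  # (name, turnaround, waiting)
    return results

# The long first job makes p2 and p3 wait a long time:
print(fcfs([("p1", 0, 24), ("p2", 1, 3), ("p3", 2, 3)]))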

Shortest Job First (SJF) non-preemptive process scheduling
  • CPU utilization is optimal.
  • Average waiting time is optimal among all non-preemptive scheduling algorithms.
  • Throughput is optimal among all non-preemptive scheduling algorithms (a small sketch follows this list).
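A non-preemptive SJF sketch under the same illustrative assumptions (process tuples, no switch cost): among the processes that have arrived, the one with the shortest burst runs to completion.

# Non-preemptive SJF: pick the shortest available burst, run it to the end.
def sjf(procs):
    remaining = sorted(procs, key=lambda p: p[1])   # by arrival time
    time, done = 0, []
    while remaining:
        ready = [p for p in remaining if p[1] <= time]
        if not ready:                  # CPU idle until the next arrival
            time = remaining[0][1]
            continue
        job = min(ready, key=lambda p: p[2])
        remaining.remove(job)
        name, arrival, burst = job
        time += burst
        done.append((name, time - arrival, time - arrival - burst))  # (name, turnaround, waiting)
    return done

# Average waiting time (0+1+5)/3 is the best any non-preemptive order can achieve here:
print(sjf([("p1", 0, 7), ("p2", 0, 4), ("p3", 0, 1)]))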
Shortest Remaining Time First (SRTF) preemptive SJF process scheduling
  • Incurs the overhead of more context switching and also of positioning newly arrived processes in the queue.
  • CPU utilization is not optimal, since the number of context switches increases.
  • Maximum throughput if the CPU-burst estimates are correct and the context-switch overhead is negligible.
  • Average turnaround time is less than in SJF, because a newly arrived short-burst process is executed first, which reduces the waiting time of short-burst processes.
  • Starvation is possible (see the simulation sketch below).
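A preemptive SRTF sketch, simulated in 1 ms steps with illustrative arrivals and bursts and no switch cost. The short jobs preempt the long one, which is also how a long job can be starved if short jobs keep arriving.

# SRTF: at every tick, run the process with the least remaining CPU time.
def srtf(procs):
    remaining = {name: burst for name, _, burst in procs}
    arrival = {name: arr for name, arr, _ in procs}
    turnaround, time = {}, 0
    while remaining:
        ready = [n for n in remaining if arrival[n] <= time]
        if not ready:                  # nothing has arrived yet
            time += 1
            continue
        current = min(ready, key=lambda n: remaining[n])
        remaining[current] -= 1
        time += 1
        if remaining[current] == 0:
            del remaining[current]
            turnaround[current] = time - arrival[current]
    return turnaround

# p2 and p3 arrive while p1 is running and preempt it because they are shorter:
print(srtf([("p1", 0, 8), ("p2", 1, 4), ("p3", 2, 2)]))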
Round Robin (RR) scheduling
  • High context switching, hence low CPU utilization.
  • Optimal response time.
  • Throughput is between that of FCFS and SRTF.
  • No starvation.
  • Round robin is good for short jobs.
  • Average turnaround time is greater than in SJF, but average response time is less than in SJF.
  • If the time quantum q is increased,
    • average turnaround time will decrease,
    • average waiting time will increase.
  • FCFS is RR with an infinite time quantum (see the sketch below).
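A round-robin sketch (referred to above) with an explicit quantum q and switch overhead s, so the effect of a larger quantum can be seen directly. The job list and numbers are illustrative assumptions, and all processes are taken to arrive at time 0.

from collections import deque

# Round robin: each process runs for at most q, then goes to the back of the queue.
# s is the context-switch overhead paid after every slice.
def round_robin(procs, q, s=0):
    queue = deque(procs)               # items are (name, remaining_cpu)
    time, turnaround = 0, {}
    while queue:
        name, left = queue.popleft()
        run = min(q, left)
        time += run + s
        if left > run:
            queue.append((name, left - run))
        else:
            turnaround[name] = time    # finishes at the end of this slice
    return turnaround

jobs = [("p1", 20), ("p2", 5), ("p3", 10)]
print(round_robin(jobs, q=4, s=1))     # small quantum: many switches, more overhead
print(round_robin(jobs, q=100, s=1))   # huge quantum: behaves like FCFS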

Observations about Round Robin scheduling
  • If a process p1 has alternating bursts of t1 for CPU and t2 for I/O, the result of the computation is visible only after the I/O is done; that is, the response time is t1 + t2, not t1.
  • If all n processes are active, then response time = n*(q + s), assuming the time quantum q is no smaller than the CPU burst (s is the context-switch overhead).
  • Two I/O operations of processes p1 and p2 are performed in parallel if they wait on different devices (unless specified otherwise); that is, p2 does not have to wait for p1's I/O to finish.


Problem 1  Consider n processes sharing the CPU in a round-robin scheduling.  Assuming that each process switch takes s seconds, what must be the quantum size q such that the overhead resulting from process switching is minimized but, at the same time, each process is guaranteed to get its turn at the CPU at least every t seconds (i.e., it is idle for no more than t seconds between time slices)?

Solution
The next execution of each process should occur within t seconds of its previous turn.

Each process runs for a quantum q. If there are n processes scheduled as
p1 p2 p3 ..... pn p1 p2 ..., then p1's turn comes again once the remaining processes p2 to pn have each used their quantum, i.e. after at most (n-1)q time.
So each process in round robin gets its turn again after <= (n-1)q time if we ignore overheads.
If we include the context-switch overhead s (one switch before each of the n slices in the cycle), this becomes ns + (n-1)q.
So we need ns + (n-1)q <= t.
Overhead is minimized when the time quantum is the maximum allowable, i.e. q = (t - ns)/(n-1).
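A quick check of this result in Python, with illustrative numbers (n = 5 processes, s = 1 ms switch, t = 100 ms bound):

# q = (t - n*s)/(n - 1): the largest quantum that still guarantees every
# process gets the CPU at least once every t time units.
def max_quantum(n, s, t):
    return (t - n * s) / (n - 1)

n, s, t = 5, 1, 100                    # times in ms (sample values)
q = max_quantum(n, s, t)               # 23.75 ms
print(q, n * s + (n - 1) * q)          # the cycle length n*s + (n-1)*q equals t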


Problem 2  Consider 10 identical processes initiated at the same time, each making 15 identical requests.
Each request consumes 20 ms of CPU. A request is followed by I/O that consumes 10 ms. The scheduling overhead is 2 ms. Find the response time for the first and the last request when
1) the time quantum q >= 20 ms, and
2) the time quantum q = 10 ms.

Solution
Gantt chart
p1 s p2 s p3 s .................p10 s   p1 s p2 s ........p10 s   p1 s
      first requests                  second requests for each process
Assuming the output of a request is received after q + s:
response time for the first process's first request = 20 + 2 = 22 ms
response time for the last process's first request = (20 + 2) * 10 = 220 ms
In the second round each process first spends 10 ms waiting for its I/O and then makes its next request,
so its time quantum is spent as
10 ms I/O wait + 10 ms CPU
The second request is therefore made after the 10 ms I/O wait,
and the result of the second request appears on the third round, after the remaining 10 ms of CPU completes.
response time for the first process's second request = 10 (in the second round) + 2 + 9*(20+2) + 10 (in the third round) = 10*(20+2) = 220 ms
The same time is incurred for every other process as well.

When q = 10 ms:
now even the first request will not complete in the first round
(10+2)*10   // first round completes
10+2        // the remaining 10 ms of CPU for process 1's first request completes
that is, 11*(12) = 132 ms for completion of the first process's first request
For the last process it is (10+2)*10 + (10+2)*10 = 240 ms
For each subsequent request the process has to spend one cycle doing I/O, and then the times add up in the same way as above.
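The arithmetic above can be reproduced with a short Python sketch; it only restates the formulas already derived, with cpu = 20 ms, io = 10 ms, s = 2 ms and n = 10 processes from the problem statement.

cpu, io, s, n = 20, 10, 2, 10          # ms, from the problem statement

# Case 1: q >= 20 ms, each request fits in one quantum.
first_req_p1  = cpu + s                            # 22 ms
first_req_p10 = n * (cpu + s)                      # 220 ms
# second request: 10 ms CPU left in round 2, a switch, 9 other slices, 10 ms in round 3
second_req    = 10 + s + (n - 1) * (cpu + s) + 10  # 220 ms
print(first_req_p1, first_req_p10, second_req)

# Case 2: q = 10 ms, each 20 ms request needs two quanta.
q = 10
first_req_p1_q10  = n * (q + s) + (q + s)          # 11 * 12 = 132 ms
first_req_p10_q10 = 2 * n * (q + s)                # 240 ms
print(first_req_p1_q10, first_req_p10_q10)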

//pending
link

References
http://en.wikipedia.org/wiki/Scheduling_(computing)
http://read.cs.ucla.edu/111/notes/lec7?do=index
http://people.engr.ncsu.edu/efg/501/f97/solutions.html
