In the following, your answers should be functions of N (the number of processes), S (the context-switch time in msec), and T.
a) Find the maximum value of Q such that no process will ever go more than T msec between the starts of two of its consecutive quanta.
Each process consumes Q + S msec per turn: its quantum Q plus one context switch S.
Max wait time: T = N(Q + S)
T = NQ + NS
Q = (T - NS)/N
b) Find the maximum value of Q such that no process will ever go more than T msec between executing instructions on the CPU.
In the worst case, a process executes its last instruction at the very end of its quantum, so the gap to its next instruction is one full cycle minus its own quantum:
T = N(Q + S) - Q
T = Q(N - 1) + NS
Q = (T - NS)/(N - 1)
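As a sanity check, both formulas can be evaluated directly; the sample values N = 5, S = 1 msec, T = 30 msec below are made up for illustration.

```python
def max_quantum_a(n, s, t):
    # Part (a): T = N(Q + S)  =>  Q = (T - N*S) / N
    return (t - n * s) / n

def max_quantum_b(n, s, t):
    # Part (b): T = (N - 1)Q + N*S  =>  Q = (T - N*S) / (N - 1)
    return (t - n * s) / (n - 1)
```

With N = 5, S = 1, T = 30 these give Q = 5 for part (a) and Q = 6.25 for part (b); as expected, part (b) permits a slightly larger quantum because the waiting interval excludes the process's own quantum.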
2. Suppose that there are two processes, PH and PL, running in a system. Each process is single-threaded. The operating system’s scheduler is preemptive and uses round-robin scheduling with a quantum of q time units.
The scheduler supports two priority levels, HIGH and LOW. Processes at LOW priority will run only if there are no runnable HIGH priority processes. Process PH is a HIGH priority process.
It behaves as described in the following pseudo-code:
while (TRUE) do
    compute for tc time units
    block for tb time units to wait for a resource
That is, if this process were the only one running in the system, it would alternate between running for tc units of time and blocking for tb units of time. Assume that tc is less than q.
Process PL is a low priority process. This process runs forever, doing nothing but computation. That is, it never blocks waiting for a resource.
a. For what percentage of the time will the low priority process PL be running in this system? Express your answer in terms of tb and tc.
tb/(tb + tc). PL runs exactly when PH is blocked, so PL gets tb out of every tb + tc time units.
b. Repeat part (a), but this time under the assumption that there are two HIGH priority processes (PH1 and PH2) and one LOW priority process (PL). Assume that each HIGH priority process waits for a different resource. Again, express your answer in terms of tb and tc. Your answer should be correct for all tb greater than 0 and all 0 less than tc less than q.
(tb − tc)/(tb + tc) if tc < tb: the two HIGH processes together demand 2tc out of every tb + tc time units, leaving tb − tc for PL.
0 if tc ≥ tb: the two HIGH processes alone can keep the CPU saturated, since 2tc ≥ tb + tc.
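Both answers can be checked with a tick-level simulation sketch (the function name and tick granularity are my own; it assumes tc < q, so a HIGH process is never preempted mid-burst).

```python
def pl_fraction(tc, tb, n_high, total=120000):
    # HIGH processes alternate tc ticks of compute with tb ticks of blocking;
    # PL runs only in ticks when no HIGH process is runnable.
    compute = [tc] * n_high      # compute ticks left in current burst
    wake = [0] * n_high          # tick at which each HIGH process is runnable
    pl = 0
    for t in range(total):
        for i in range(n_high):
            if t >= wake[i]:
                compute[i] -= 1
                if compute[i] == 0:          # burst done: block for tb ticks
                    compute[i] = tc
                    wake[i] = t + 1 + tb
                break
        else:
            pl += 1                          # no HIGH process runnable
    return pl / total
```

For example, with tc = 2 and tb = 6 this yields 0.75 for one HIGH process (tb/(tb + tc)) and 0.5 for two ((tb − tc)/(tb + tc)); with tc = 4 and tb = 2 the two HIGH processes saturate the CPU and PL gets 0.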
Suppose a processor uses a prioritized round robin scheduling policy. New processes are assigned an initial quantum of length q. Whenever a process uses its entire quantum without blocking, its new quantum is set to twice its current quantum. If a process blocks before its quantum expires, its new quantum is reset to q. For the purposes of this question, assume that
every process requires a finite total amount of CPU time.
(a) Suppose the scheduler gives higher priority to processes that have larger quanta. Is starvation possible in this system? Why or why not?
No, starvation is not possible. Because every process needs only a finite amount of CPU time, the worst that can happen is that a CPU-bound process keeps the highest priority and runs until it completes. When it finishes, one of the lower-priority processes runs. The I/O-bound processes sit in the low-priority queue, but they steadily advance to the head of that queue, so they eventually run and do not starve.
(b) Suppose instead that the scheduler gives higher priority to processes that have smaller quanta. Is starvation possible in this system? Why or why not?
Yes, starvation is possible. Suppose a CPU-bound process runs on the processor, uses its entire quantum, and has its quantum doubled. If a steady stream of I/O-bound processes then enters the system, each of them always has a smaller quantum and is selected for execution before the process with the doubled quantum, so the original process starves.
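The quantum-update rule the problem describes is simple to state in code; the function names below are illustrative, not from the problem.

```python
def next_quantum(current, used_full_quantum, q):
    # Double the quantum when the process used its whole slice without
    # blocking; reset to the initial value q when it blocked early.
    return 2 * current if used_full_quantum else q

def pick_smallest(quanta):
    # Policy (b): favor the process with the smallest current quantum.
    # With this rule, a process whose quantum was doubled loses to every
    # freshly arrived or recently blocked process (quantum q).
    return min(range(len(quanta)), key=lambda i: quanta[i])
```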
Assume that 3 processes, each requiring 1 second of CPU time and performing no I/O, arrive at the same time.
a) What will be the average response time (i.e., average time to completion) for the processes under Round Robin (RR) scheduling, assuming a timeslice of 0.1 sec and no overhead for context switches (i.e., context switches are free)?
Answer: 2.9 seconds
Time to completion for process A = 2.8 s
Time to completion for process B = 2.9 s
Time to completion for process C = 3.0 s
Average time to completion = 2.9 s
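A short sketch confirms the arithmetic; it works in integer units of 0.1 s to avoid floating-point drift (the function name is my own).

```python
from collections import deque

def rr_finish_times(bursts, q):
    # Round robin with free context switches. bursts and q are in
    # integer time units (here, 1 unit = 0.1 s).
    left = list(bursts)
    queue = deque(range(len(bursts)))
    t, finish = 0, {}
    while queue:
        i = queue.popleft()
        run = min(q, left[i])       # run for a quantum or until done
        t += run
        left[i] -= run
        if left[i] > 0:
            queue.append(i)         # rejoin the tail of the ready queue
        else:
            finish[i] = t
    return finish
```

With bursts of 10 units each and q = 1, the finish times come out to 28, 29, and 30 units, i.e. 2.8, 2.9, and 3.0 seconds, averaging 2.9.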
Suppose that the operating system is running a round-robin scheduler with a 50 msec time quantum. There are three processes with the following characteristics:
* Process A runs for 60 msec, blocks for 100 msec, runs for 10 msec and terminates.
* Process B runs for 70 msec, blocks for 40 msec, runs for 20 msec, and terminates.
* Process C runs for 20 msec, blocks for 80 msec, runs for 60 msec, and terminates.
Process A enters the system at time 0. Process B enters at time 10 msec. Process C enters at time 20 msec. Trace the evolution of the system. You should ignore the time required for a context switch. The time required for process P to block is the actual clock time between the time that P blocks and the time that it unblocks, regardless of anything else that is happening.
Time Running process Events
0-50 A B enters at time 10. C enters at time 20.
50-100 B
100-120 C C blocks until time 200.
120-130 A A blocks until time 230.
130-150 B B blocks until time 190.
150-190 Idle B unblocks at time 190
190-210 B C unblocks at time 200. B terminates at time 210
210-260 C A unblocks at time 230.
260-270 A A terminates at time 270.
270-280 C C terminates at time 280.
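The whole trace can be reproduced with a millisecond-granularity sketch (the names and the process-description format are my own; context switches are free, as the problem states, and unblocked or newly arrived processes join the tail of the ready queue).

```python
from collections import deque

def trace_rr(procs, q):
    # procs maps a name to (arrival, [run, block, run, ...]) with
    # alternating CPU and block phases, in msec.
    ready = deque()
    phase = {p: 0 for p in procs}               # index into the phase list
    left = {p: procs[p][1][0] for p in procs}   # msec left in current burst
    wake = {}                                   # name -> unblock time
    finish = {}
    current, slice_left, t = None, 0, 0
    while len(finish) < len(procs):
        for p, (arr, _) in procs.items():       # arrivals join the queue
            if arr == t:
                ready.append(p)
        for p in [p for p, w in wake.items() if w == t]:
            del wake[p]                         # unblocked processes rejoin
            ready.append(p)
        if current is None and ready:
            current, slice_left = ready.popleft(), q
        if current is None:
            t += 1                              # CPU idle
            continue
        left[current] -= 1
        slice_left -= 1
        t += 1
        if left[current] == 0:                  # CPU burst complete
            phases = procs[current][1]
            phase[current] += 1
            if phase[current] == len(phases):
                finish[current] = t             # no phases left: terminate
            else:
                wake[current] = t + phases[phase[current]]   # start blocking
                phase[current] += 1
                left[current] = phases[phase[current]]
            current = None
        elif slice_left == 0:                   # quantum expired
            ready.append(current)
            current = None
    return finish
```

Running it on the three processes above returns termination times of 270 msec for A, 210 for B, and 280 for C, matching the trace.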
Consider a variant of the round-robin scheduling algorithm where the entries in the ready queue are pointers to process-control-blocks.
1. What would be the effect of putting two pointers to the same process in the ready queue?
The process would receive two quanta per cycle through the ready queue, effectively doubling its share of the CPU.
2. What would be the major advantages and disadvantages of this scheme?
It is a simple scheme that provides a crude form of priority with minimal modification to the scheduler. However, managing the duplicate pointers adds overhead: if the process blocks on I/O or terminates, both pointers must be removed from the ready queue. It can also increase overhead when the two entries end up adjacent, since the same process then runs back to back and the context switch between them was unnecessary.
3. How would you modify the basic round-robin algorithm to achieve the same effect without duplicate pointers?
Add a counter to each PCB recording how many quanta the process should receive per cycle; the scheduler then lets the process run for that many consecutive quanta.
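A minimal sketch of that idea (names are illustrative): each PCB carries a count of quanta owed per cycle, and one pass through the ready queue grants them back to back, giving the same CPU share as duplicate pointers without the bookkeeping or the extra context switches.

```python
def schedule_round(pcbs):
    # pcbs: list of (name, quanta_per_cycle) pairs in ready-queue order.
    # A process with count 2 gets the same share as a duplicated queue
    # entry, but runs its quanta consecutively.
    order = []
    for name, count in pcbs:
        order.extend([name] * count)
    return order
```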
For each of the following statements, indicate whether you think it is probably true (T) or probably false (F). Then give a brief (one sentence) reason. There is not necessarily a single correct answer to each question, so your one sentence explanation is the most important part of your answer.
1. Small time slices always improve the average completion time of a system.
Probably false: small time slices sometimes improve the average completion time of the system, but if the slice is too small, context-switching time starts to dominate useful computation time and everything (including completion time) suffers.
2. Using a round robin scheduler, a large time slice is bad for interactive users.
Probably true: large time slices can let non-interactive processes keep control of the CPU for longer periods of time, making the interactive processes less responsive.
3. Shortest Job First (SJF) or Shortest Completion Time First (SCTF) scheduling is difficult to build on a real operating system.
Probably true: SJF/SCTF scheduling requires knowing how much CPU time a process will take, which is future knowledge. One workaround is to require the user to specify a maximum run time for each process (killing it if it exceeds that amount) and then use a variant of SCTF based on those estimates.