Saturday 5 January 2013

Operating System Problems

Deadlock problems
Things to Remember
1)Using the Banker's algorithm you always work on NEED, not the MAX claim
2)When a process finishes we add its ALLOCATION, NOT its NEED, to Available
3)Available resources, when total resources are given: Available = Total - Allocated
4)When a new process enters the system, find its need and the new Available as
new Available = Available - Request for that process
5)A process may be in neither a deadlocked nor a safe state; then it is in an unsafe state. That is, a process might never request its maximum need and so may not actually deadlock
6)A deadlock can involve processes outside the circular chain if they request resources currently held by processes in the chain
7)When Available is greater than or equal to the need of every process, we have a safe sequence

Problems
Problem Consider p processes, each needing a maximum of m resources, with a total of r resources available. What relation between p, m, and r makes the system deadlock free?

Solution:
The worst case is when every process holds m-1 resources and each requests 1 more, which cannot be fulfilled. If we had just one more resource, we could allocate it to some process; that process could then finish and release its resources to fulfill the others' needs. So the total number of resources should satisfy

r > (m-1)*p

We can also prove that the system can safely reach the state where each process holds m-1 resources.


We can also rewrite the above condition as
mp < r + p
that is, the sum of the maximum resource needs of all processes must be less than r + p

Problem A system has 3 processes and 4 available resources, with a max need of 2 for each process.
Prove that the system is always deadlock free.
Solution We check the worst-case condition derived above, that is
r > (m-1)p
Here m=2 and p=3, so r must be greater than 3. Since r = 4 > 3, the system is deadlock free.


Detect Deadlock from Code
Whenever we are asked whether the system can deadlock, we look for
1)requests of the form below, and
2)processes p0 and p1 executing simultaneously

p0              p1
request Ri      request Rj
request Rj      request Ri

 Problem



Solution:
Even processes request
Ri
Ri+2
that is, resources i and i+2 counting from the beginning.

Odd processes request
R(n-i)
R(n-i-2)
that is, resources n-i and n-i-2, counting from the end.

Consider an even process with index i and an odd process with index j.
The system can be in deadlock when

1)Ri = R(n-j-2), that is
i = n-j-2
i+j = n-2
and
2)i+2 = n-j, that is
i+j = n-2

So we have the equation i+j = n-2.
Only if this is satisfied for some pair of processes is deadlock possible.
But we know that i is even and j is odd, so
1)i+j is odd, and the RHS n-2 is odd only when n is odd; and also
2)if the number of processes is k, then i and j can be at most k

Let's check:
1)n=41 and k=19
n-2 = 39; the maximum odd j is 19 and the maximum even i is 18, which together make at most 37, not 39, so no deadlock is possible.
2)n=21 and k=12
We have n-2 = 19; i can be at most 12 and j at most 11, which can reach the sum, so let's check:
when i=10 and j=9, i+j = 19 = n-2. Yes, we have a deadlock.

Scheduling Problems
Things to Remember
In round robin, overhead is minimized when the time quantum q is maximized
In round robin, processes are served in ready-queue (FIFO) order; a preempted process rejoins at the tail of the queue
Convoy effect: a process needs a resource for only a short time, but another process holds it for a long time. FCFS scheduling may suffer from the convoy effect.
I/O-bound processes do not use their entire CPU quantum. Giving high priority to I/O-bound processes improves utilization by avoiding the convoy effect.


Problem
Use a time quantum q = 5ms

Solution
Round robin
Shown below is gantt chart with queue


Waiting time
waiting time = completion time - arrival time - execution time (given in the table)
for P1: 55 - 20 = 35
for P2: (77-4) - 32 = 41
for P3: 59 - 14 - 9 = 36
for P4: 65 - 12 - 11 = 42
Average = 38.5


Shortest Job first
p1 p4 p3 p2
Average = 17.75


Shortest Remaining First




Problem Consider the exponential average used to predict the length of the next CPU burst: tau(n+1) = a*t(n) + (1-a)*tau(n)
1)a = 0
The system always uses past history to predict the CPU burst, never taking the recent observation into account.
2)a = 0.99
Very high weight is given to the recent observation and very little to past history, so almost no memory of history is required.


Problem 10 I/O-bound processes and 1 CPU-bound process. Each I/O-bound process issues an I/O operation once for every 1ms of CPU computing, and each I/O takes 10ms to complete. Context switch overhead = 0.1ms. What is the CPU utilization for
1)q = 1ms
Solution CPU utilization = (execution time) / (execution time + overhead time)
For each time quantum the system incurs a cost of 0.1ms, so
1/1.1 = 0.91
2)q = 10ms
The CPU-bound process runs for 10ms, then a context switch of 0.1ms: total 10.1ms.
Each I/O-bound process gives up the CPU for a context switch after 1ms of computing, a total of 1.1ms each, or 10*1.1ms for the 10 processes.
For 20ms of useful execution time we have
CPU utilization = 20/21.1 = 0.94

Multiprogramming
If a job spends a fraction p of its time waiting for I/O, then
with sequential execution CPU utilization is 1-p.
If p = 0.60, i.e. the job spends 60% of its time doing I/O, then CPU utilization is only 40%.
For multiprogramming with n jobs in memory,
the probability that all n processes are waiting for I/O is p^n,
so CPU utilization is 1 - p^n, the probability that at least one process is ready for the CPU.
This assumes no CPU overhead.

Problem  Suppose two jobs, each of which needs 10 minutes of CPU time, start simultaneously. Assume 50% I/O wait time.
a) How long will it take for both to complete if they run sequentially?
Solution Each process needs a total of 10 min of CPU + 10 min of I/O, that is 20 min.
Total time to complete T = 20min + 20min = 40min
b) How long if they run in parallel?
Let p be the fraction of time spent in I/O:
p = 1/2 //50% I/O wait
CPU utilization = 1 - p^2 = 0.75
The two jobs need 20 min of CPU work in total, so T*0.75 = 20, giving T = 26.7 min.


File System
Inode
A file's blocks may be scattered over the disk.
The inode keeps pointers to the data blocks.
Each inode has 15 pointers:
The first 12 point directly to data blocks.
The 13th points to a single indirect block, that is, a block containing pointers to data blocks.
The 14th points to a doubly indirect block, that is, a block containing pointers to single indirect blocks.
The 15th points to a triply indirect block, that is, a block containing pointers to doubly indirect blocks.
Consider a block size of 4K, as in the Unix file system:
the 12 direct pointers can address 12*4K = 48K
if each pointer is 4 bytes, a single block can hold 4K/4 = 1K pointers
one single indirect block can address 1K*4K = 4MB
one doubly indirect block, 1K*1K*4K = 4GB


Problem  Let there be 8 direct block pointers, and a singly, doubly, and triply indirect pointer in each inode and  that the system block size and the disk sector size are both 4K. If the disk block pointer is 32 bits, with 8 bits for identifying physical disk, and 24 bits to identify the physical block, then:
1)maximum file size?
Solution each block can hold 4K/4 = 1K pointers
so direct blocks = 8*4K
single indirect = 1K*4K
double indirect = 1K*1K*4K
triple indirect = 1K*1K*1K*4K
which sums to around 4TB

2)What is the maximum file system partition size?
Since the disk block pointer is 32 bits, it can address 2^32 blocks
(8 bits identify the physical disk and 24 bits the physical block within it).
Since each block is 4K, the total is 2^32 * 4K = 16TB.

3)Assuming the inode is in memory, how many disk accesses are required for byte position 2634012?
Since each block is 4K, the offset is 2634012 mod 4K = 284 and the block number is 2634012 / 4K = 643.
Block 643 falls in the range covered by the single indirect block (blocks 8 through 1031), so the number of accesses = 1 for the indirect block + 1 for the actual data block = 2.


Memory Management

Remember to always add the time to check the cache or TLB, even on a miss.

Problem Assuming a single-level page table, if a memory reference takes 100 nanoseconds, what is the time for a memory-mapped reference?
i)With no TLB:
100ns for accessing the page table + 100ns for accessing the actual data frame
ii)With a TLB of access time 15ns and a 0.9 hit ratio:
if the page number is present in the TLB: 15+100
if the page number is not found in the TLB: 15+100+100
Hence EMAT = 115*0.9 + 215*0.1 = 125ns

Problem Consider demand paging with the page table held in registers, and these service times:
8 msec for a page fault if an empty frame is available or the replaced page is not modified,
20 msec if the replaced page is modified,
70% of the time the page to be replaced is modified,
100 nsec memory access time.

What is the maximum acceptable page fault rate p for an effective access time of no more than 200 nsec?
Solution
Let p = page fault rate
and m = fraction of the time the replaced page is modified. Then

EMAT = (1-p)(memory access time) + p(m(time to write back the dirty page & service the fault) + (1-m)(time to service the fault))

Requiring EMAT <= 200 nsec and working in microseconds:
0.2 >= (1-p)(0.1) + p(0.70*20000 + 0.30*8000)
0.2 >= 0.1 + p(16400 - 0.1)
p <= 0.1/16399.9, approximately 6.1 * 10^-6

Problem Consider the average size of a process as p and the size of a page table entry as e. What page size s minimizes the space wasted by internal fragmentation and the page table?
Solution
The page table, used to store frame numbers and control bits, is pure overhead, as is internal fragmentation,
so wasted space = space used by the page table + space wasted by internal fragmentation.
Average number of pages = p/s,
so the size of the page table = (p/s)*e = pe/s.
Average space wasted by internal fragmentation = s/2 (the last page is half empty on average).

So total waste = pe/s + s/2
To find the minimum, set the derivative to zero:
-pe/s^2 + 1/2 = 0
pe/s^2 = 1/2
s = sqrt(2pe)

 
Problem Consider an average process size of 128 KB and a page table entry size of 8 bytes. What is an appropriate page size?
Solution
s = sqrt(2pe) = sqrt(2 * 128K * 8) = sqrt(2^21), approximately 1448 bytes, so a page size of 1KB or 2KB is appropriate.



Page Replacement
The page with the highest time since its last reference was referenced farthest in the past (leftmost in the reference string).
FIFO does not depend on the reference time but on the load time.
NRU replaces a page with r=0, w=0.
Second chance replaces the earliest loaded page with r=0.
If the page being replaced has been modified (written to, dirty), replacing it takes 2 I/Os:
1 to write this page back and 1 to bring in the new page.
Virtual-to-physical address binding takes place at run time.
Consider the following piece of code, which multiplies two matrices:
int a[1024][1024], b[1024][1024], c[1024][1024];
void multiply()
{
   unsigned i, j, k;
   for(i = 0; i < 1024; i++)
       for(j = 0; j < 1024; j++)
           for(k = 0; k < 1024; k++)
               c[i][j] += a[i][k] * b[k][j];
}
Assume that the binary for executing this function fits in one page, and the stack also fits in one page. Assume further that an integer requires 4 bytes for storage. Compute the number of TLB misses if the page size is 4096 and the TLB has 8 entries with a replacement policy consisting of LRU.
Solution:
1024*(2+1024*1024) = 1073743872
The binary and the stack each fit in one page, thus each takes one entry in the TLB. While the function is running, it is accessing the binary page and the stack page all the time. So the two TLB entries for these two pages would reside in the TLB all the time and the data can only take the remaining 6 TLB entries.
We assume the two entries are already in TLB when the function begins to run. Then we need only consider those data pages.
Since an integer requires 4 bytes for storage and the page size is 4096 bytes, each array requires 1024 pages. Suppose each row of an array is stored in one page. Then these pages can be represented as a[0..1023], b[0..1023], c[0..1023]: Page a[0] contains the elements a[0][0..1023], page a[1] contains the elements a[1][0..1023], etc.
For a fixed value of i, say 0, the function loops over j and k, giving the reference string:
a[0], b[0], c[0], a[0], b[1], c[0], ..., a[0], b[1023], c[0]
...
a[0], b[0], c[0], a[0], b[1], c[0], ..., a[0], b[1023], c[0]
For this reference string (1024 rows in total), a[0] and c[0] contribute two TLB misses. Since a[0] and c[0] are each accessed at least once in every four memory references, these two pages will not be replaced by the LRU algorithm. Each page in b[0..1023] incurs one TLB miss every time it is accessed. So the number of TLB misses for each iteration of the outer loop is
2+1024*1024 = 1048578
So the total number of TLB misses is 1024*1048578 = 1073743872
Partitions
A new block request can sometimes be satisfied from one of the external-fragmentation holes left between allocations.

Problem Consider holes in memory of the following sizes, in this order: 10KB, 4KB, 20KB, 18KB, 7KB, 9KB, 12KB. Which hole is taken for each successive segment
request of 12KB, 10KB, 9KB for
a. First fit?
20KB, 10KB, 18KB
b. Best fit?
12KB, 10KB, 9KB
c. Worst fit?
20KB, 18KB, 12KB
d. Next fit?
20KB, 18KB, 9KB

Secondary storage problems
Problem Consider
10000 RPM spindle speed,
300 Sectors per track,
512 Bytes per sector,
9801 cylinders (tracks per platter side),
24 tracks per cylinder (12 platters),

6ms average seek time,
0.6 ms track-to-track seek time.

Suppose you want to read a 4.608 MB file under the following two sets of assumptions:
a) Assume the file is allocated consecutive sectors and tracks on one surface of one of the platters. How long does it take to read the file’s data and what is the average throughput?

Solution
total time to read the file = avg seek time + avg rotational delay + transfer time
one revolution takes 60 s/min / 10000 rev/min = 6ms, so the average rotational delay is 3ms

Calculate bandwidth
bytes per track = sectors per track * bytes per sector = 300*512 = 153600 bytes/track
bandwidth = bytes per track / time per revolution = 153600 bytes / 6ms = 25.6 MB/s
transfer time for x bytes = x/bandwidth = 6x/153600 ms

For the file
number of blocks in the file = 4608000 / 512 = 9000 blocks
or a total of 9000/300 = 30 tracks

time for the first track T1 = avg seek + avg rotational delay + transfer time for one track
= 6ms + 3ms + 6ms = 15ms
each remaining track takes 6ms transfer time + 0.6ms track-to-track seek time
6.6ms * 29
total time = 15 + 6.6*29 = 206.4ms
average throughput = 4608000 bytes / 206.4ms, approximately 22.3 MB/s



b) If the individual blocks are randomly scattered over the disk, how long will it now
take to transfer this file, and what is the average transfer rate?

Each sector is read with the same average seek time and average rotational delay,
so each request uniformly takes 6ms avg seek + 3ms avg rotational delay + 6/300 ms to read one sector = 9.02ms.
There are 9000 such sectors, so the total time is 9000*9.02ms = 81.18s,
for an average transfer rate of 4608000 bytes / 81.18s, approximately 56.8 KB/s.
