Fixed Partitioning
1. If using equal-size partitions, main memory is divided into fixed, equal-size partitions at system initialization.
- A process whose size is <= the partition size can be loaded into any free partition.
- If a program is larger than the partition size, the programmer has to design it with overlays so that only the needed portion is in memory at a time; this keeps OS overhead low at the cost of programming effort.
- Can lead to internal fragmentation for small processes.
2. If using unequal-size partitions, a big process can be allocated a large partition and a small process a small partition, reducing internal fragmentation.
- A process is loaded into the smallest partition within which it fits (see the placement sketch below).
- A scheduling queue is maintained for each partition size, holding swapped-out processes waiting for that partition.
- This minimizes the memory wasted to internal fragmentation.
- But it can waste memory when a large partition remains unused because processes queue only for the partitions that are their smallest fit.
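A minimal sketch of the unequal-partition placement decision described above: pick the smallest free partition that fits the process. The struct and policy details are illustrative assumptions, not from the original post.

```c
#include <stddef.h>

/* Placement under unequal fixed partitions: pick the smallest free partition
   that fits the process. The struct and details are illustrative. */
typedef struct {
    size_t size;
    int    free;
} partition;

int place(partition *parts, int n, size_t proc_size)
{
    int best = -1;
    for (int i = 0; i < n; i++) {
        if (parts[i].free && parts[i].size >= proc_size &&
            (best == -1 || parts[i].size < parts[best].size))
            best = i;                  /* smallest partition that fits */
    }
    if (best != -1)
        parts[best].free = 0;          /* otherwise the process waits in a queue */
    return best;
}
```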
Dynamic Partitioning
Dynamic partitioning leads to external fragmentation, where portions of memory outside the variable-size partitions become fragmented. The solution to the external fragmentation problem is compaction, which moves the free space together. Compaction is a time-consuming process and pure overhead, and processes must be dynamically relocatable so they can be moved around in memory.
If we are to avoid compaction, we can make intelligent assignment of processes to memory so as to reduce external fragmentation.
The placement algorithms are best fit, first fit and next fit (a sketch of them follows this list). First fit is usually the best and fastest.
- First fit may leave holes at the front end of memory; the more fragments at the front end, the more scanning first fit has to do.
- Next fit may lead to more external fragmentation, increasing the need for compaction.
- Best fit leaves many small memory holes, leading to more frequent compaction.
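A minimal sketch of these placement algorithms over a list of free holes tracked as (start, size) records; the structure and names are illustrative assumptions.

```c
#include <stddef.h>

/* Each function returns the index of the chosen hole, or -1 if none is large enough. */
typedef struct { size_t start, size; } hole;

int first_fit(const hole *h, int n, size_t req)
{
    for (int i = 0; i < n; i++)
        if (h[i].size >= req)
            return i;                  /* first hole big enough: minimal scanning */
    return -1;
}

int next_fit(const hole *h, int n, size_t req, int *last)
{
    for (int k = 0; k < n; k++) {
        int i = (*last + k) % n;       /* resume scanning where the last search ended */
        if (h[i].size >= req) { *last = i; return i; }
    }
    return -1;
}

int best_fit(const hole *h, int n, size_t req)
{
    int best = -1;
    for (int i = 0; i < n; i++)
        if (h[i].size >= req && (best == -1 || h[i].size < h[best].size))
            best = i;                  /* smallest hole that fits: leaves tiny fragments */
    return best;
}
```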
Buddy System
A compromise between fixed and dynamic partitioning: memory is allocated in power-of-two block sizes, a free block is split into two equal "buddies" until the smallest block at least as large as the request is obtained, and adjacent free buddies are coalesced back into larger blocks.
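A small sketch of the two core buddy-system calculations; the minimum block size is an illustrative assumption and this is not a full allocator.

```c
#include <stddef.h>
#include <stdint.h>

/* Requests are rounded up to a power-of-two block size, and a block's buddy is
   found by flipping one address bit so free buddies can be detected and merged. */
#define MIN_BLOCK 64u                        /* smallest allocatable block (illustrative) */

static size_t round_up_pow2(size_t req)
{
    size_t s = MIN_BLOCK;
    while (s < req)
        s <<= 1;                             /* only power-of-two sizes are handed out */
    return s;
}

static uintptr_t buddy_of(uintptr_t block_offset, size_t size)
{
    return block_offset ^ (uintptr_t)size;   /* buddies of a block differ in one bit */
}
```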
Swapping
Consider that at time t0 we have some n processes taking up the complete memory, and at time t1 a new process arrives with priority higher than all n processes. If we are using priority scheduling, this process should get the CPU and should be brought into main memory, but since there is no space we need to swap out some process, bring the high-priority process into memory, and then, after it has finished or some other process swaps out, swap that low-priority process back in.
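A sketch of the swap-out decision just described: when a higher-priority arrival needs memory, the lowest-priority resident process is chosen as the victim. The structure and priority convention are illustrative assumptions.

```c
/* Pick the lowest-priority resident process as a swap victim, but only if the
   arriving process actually outranks it. Structures are illustrative. */
typedef struct {
    int id;
    int priority;      /* larger value = higher priority (assumed convention) */
    int resident;      /* 1 if currently in main memory                       */
} proc;

int pick_swap_victim(const proc *p, int n, int arriving_priority)
{
    int victim = -1;
    for (int i = 0; i < n; i++) {
        if (!p[i].resident)
            continue;
        if (victim == -1 || p[i].priority < p[victim].priority)
            victim = i;                              /* lowest-priority resident */
    }
    if (victim != -1 && p[victim].priority >= arriving_priority)
        victim = -1;                                 /* arrival does not outrank anyone */
    return victim;
}
```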
Observations
Swapping was used in old time-sharing systems where memory was enough to load only a single process, so at the end of each time quantum the new process was brought into memory by swapping out the present one. The context-switch time would then be the sum of swapping out the old process and swapping in the new process (or part of it) from the backing store.
For efficient utilization of the CPU, each process's execution time should be greater than the overhead, i.e. the swap time.
Swapping is very expensive compared to instruction execution.
Swap time depends mainly on the transfer time, so if we move only the portion of main memory the process is actually using, the swap time is reduced.
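A quick illustration of why transfer time dominates: swap time is roughly the resident size divided by the transfer rate, plus latency. All the numbers below are made up for the example.

```c
#include <stdio.h>

/* Rough swap-time estimate: swapping only the memory the process actually
   uses shrinks the transfer-time term. All numbers are made up. */
int main(void)
{
    double used_mb   = 16.0;        /* resident portion actually in use (MB) */
    double rate_mbps = 100.0;       /* backing-store transfer rate (MB/s)    */
    double latency_s = 0.008;       /* seek + rotational latency (s)         */

    double one_way = used_mb / rate_mbps + latency_s;
    printf("swap out %.3f s, swap out + swap in ~%.3f s\n", one_way, 2.0 * one_way);
    return 0;
}
```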
We never swap a process with pending I/O, or else we perform I/O operations only into operating-system buffers.
Unix uses a variant of swapping, demand paging.
Paging
Allows the physical address space of a process to be non-contiguous. Memory is divided into small fixed-size chunks called frames, and the process is divided into pages of the same size.
Memory wasted in paging is due only to internal fragmentation, and only in a fraction of the last page of the process.
Differs from fixed partitioning in that
- the partitions (frames) are small
- a process can be allocated more than one partition
- non-contiguous allocation for a process is possible
In simple partitioning, a logical address is a location relative to the beginning of the program, and the processor translates logical addresses to physical addresses.
If the page size is chosen as 2^n, then the low n bits of the relative address give the offset within a page, and the remaining bits give the page number, which decides the maximum number of pages allowed for a process.
In paging, special hardware is used for logical-to-physical address translation. The hardware looks up the page number of the logical address in the page table to find the physical frame.
Fig. logical address divided into offset and page number
Page table
Stores the mapping between logical pages and physical frames. Contains one entry for each page of the process.
The operating system maintains a page table for each process.
Context-switch time will increase because the page table reference must be changed.
If a page table entry is 4 bytes long, that is 32 bits, it can address 2^32 physical page frames, and if each frame is 4 KB then we can address (2^32)(2^12) = 2^44 bytes.
Fig. lookup for virtual address in page table
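A minimal sketch of that lookup, assuming 4 KB pages and a single-level page table that simply stores a frame number per page; all names and values here are illustrative.

```c
#include <stdint.h>
#include <stdio.h>

#define PAGE_SHIFT 12                 /* 4 KB pages: 12-bit offset */
#define PAGE_SIZE  (1u << PAGE_SHIFT)

/* Translate a logical address using a toy page table mapping page -> frame.
   Presence/protection checks are omitted. */
uint64_t translate(const uint32_t *page_table, uint32_t logical)
{
    uint32_t page   = logical >> PAGE_SHIFT;       /* high bits: page number   */
    uint32_t offset = logical & (PAGE_SIZE - 1);   /* low bits: offset in page */
    uint64_t frame  = page_table[page];            /* page table lookup        */
    return (frame << PAGE_SHIFT) | offset;         /* physical address         */
}

int main(void)
{
    uint32_t pt[4] = { 7, 3, 0, 5 };               /* page i -> frame pt[i]    */
    uint32_t logical = (1u << PAGE_SHIFT) | 0x2A;  /* page 1, offset 0x2A      */
    printf("physical = 0x%llx\n", (unsigned long long)translate(pt, logical));
    return 0;
}
```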
Page Table Entry (PTE)
Each page table entry contains the page frame address and property bits such as a present bit, a dirty (modified) bit, and address-space or process-ID information.
Assuming a 32-bit physical address with a page size of 2^8, we use 4 bytes for a PTE (a 24-bit physical page number with 8 bits left for control).
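A sketch of such a layout; the specific flag positions below are assumptions made for the illustration.

```c
#include <stdint.h>

/* Illustrative PTE layout matching the text above: a 24-bit frame number plus
   8 control bits. The specific flag positions are assumed for the sketch. */
#define PTE_PRESENT     (1u << 0)   /* page is resident in a physical frame */
#define PTE_DIRTY       (1u << 1)   /* page has been modified               */
#define PTE_RDONLY      (1u << 2)   /* writes must be rejected              */
#define PTE_FRAME_SHIFT 8           /* low 8 bits control, upper 24 = frame */

static inline uint32_t pte_make(uint32_t frame, uint32_t flags)
{
    return (frame << PTE_FRAME_SHIFT) | (flags & 0xFFu);
}

static inline uint32_t pte_frame(uint32_t pte)
{
    return pte >> PTE_FRAME_SHIFT;
}
```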
Sharing and Protection
Reentrant (non-self-modifying) code can be shared between different processes.
We map the reentrant portion of code for each process onto the same physical frames.
The protection bits in the PTE help perform validations, e.g. that a process is not trying to modify a read-only page and that each reference to memory is valid.
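A short sketch of those checks, reusing the illustrative PTE_PRESENT and PTE_RDONLY flags from the layout sketch above; the result names are made up for the example.

```c
#include <stdint.h>

/* PTE_PRESENT and PTE_RDONLY are the illustrative flags defined earlier. */
typedef enum { ACCESS_OK, FAULT_NOT_PRESENT, FAULT_PROTECTION } access_result;

access_result check_access(uint32_t pte, int is_write)
{
    if (!(pte & PTE_PRESENT))
        return FAULT_NOT_PRESENT;    /* page fault: OS must bring the page in */
    if (is_write && (pte & PTE_RDONLY))
        return FAULT_PROTECTION;     /* e.g. an attempt to modify shared code */
    return ACCESS_OK;
}
```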
Types of Page Tables
Multilevel Page Table
Since each process can occupy a huge amount of virtual memory, i.e. it can have a large number of pages, keeping that many entries in real memory for every process would be too costly. To overcome this, the page table is stored in virtual memory rather than real memory: the page table is divided into pages, and when a process is running only a part of its page table must be in memory, including the page-table page for the page that is executing.
Fig. two level page table organization
All the page tables at every level are themselves frames stored in memory.
Each entry in a page table is a page frame address, of PTE size.
The format of the logical address for a multilevel page table specifies the offsets into each level's page table.
Consider a three-level page table; the logical address is divided into four parts (a walk sketch follows the figure):
1. offset into the page directory, or first-level page table; each entry points to a page frame containing second-level entries
2. offset into the page middle directory, or second-level page table
3. offset into the third-level page table, which specifies the page frame
4. offset into the page frame
Fig. format of logical address for 3 level page table
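A minimal sketch of such a walk, borrowing the 2, 9, 9, 12 bit split from Problem 2 below; the table layout and names are illustrative, and presence checks are omitted.

```c
#include <stdint.h>

#define OFFSET_BITS 12   /* page offset        */
#define L3_BITS      9   /* third-level index  */
#define L2_BITS      9   /* second-level index */
#define L1_BITS      2   /* first-level index  */

/* Toy layout: upper levels hold pointers, the leaf holds frame numbers.
   In a real system every entry is a PTE with control bits. */
typedef struct { uint32_t frame[1u << L3_BITS]; } l3_table;
typedef struct { l3_table *l3[1u << L2_BITS]; } l2_table;
typedef struct { l2_table *l2[1u << L1_BITS]; } l1_table;

uint64_t walk(const l1_table *top, uint64_t vaddr)
{
    uint64_t off = vaddr & ((1u << OFFSET_BITS) - 1);
    uint64_t i3  = (vaddr >> OFFSET_BITS) & ((1u << L3_BITS) - 1);
    uint64_t i2  = (vaddr >> (OFFSET_BITS + L3_BITS)) & ((1u << L2_BITS) - 1);
    uint64_t i1  = (vaddr >> (OFFSET_BITS + L3_BITS + L2_BITS)) & ((1u << L1_BITS) - 1);

    const l2_table *mid  = top->l2[i1];     /* first-level (page directory) entry */
    const l3_table *leaf = mid->l3[i2];     /* second-level entry                 */
    uint64_t frame = leaf->frame[i3];       /* third-level entry: the page frame  */
    return (frame << OFFSET_BITS) | off;
}
```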
Problem 1
Consider a 32-bit physical address, 4 KB pages, and a 4 GB virtual address space.
Number of pages in the virtual address space = 4 GB / 4 KB = 2^20
Size of the page table = (PTE size) * (number of pages) = 4 bytes * 2^20 = 2^22 bytes, i.e. 4 MB
Number of pages needed to hold the page table itself = (size of page table) / (page size) = 4 MB / 4 KB = 2^10
So a top-level page table would have 2^10 page table entries with a total size of (2^10)(4 bytes) = 2^12 bytes, i.e. 4 KB. Hence a two-level page table needs 4 KB more than a single-level page table. However, not all the second-level tables need to be resident in memory, and page tables are not needed for unmapped virtual memory, so the total usage is actually lower; only the top-level table must be in memory at all times, requiring only 4 KB (exactly one page) of physical memory per process.
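A quick sanity check of this arithmetic, using only the values stated in the problem:

```c
#include <stdio.h>

int main(void)
{
    unsigned long long vspace = 4ULL << 30;     /* 4 GB virtual address space */
    unsigned long long page   = 4ULL << 10;     /* 4 KB pages                 */
    unsigned long long pte    = 4;              /* 4-byte PTE                 */

    unsigned long long pages     = vspace / page;   /* 2^20 pages               */
    unsigned long long pt_bytes  = pages * pte;     /* 4 MB single-level table  */
    unsigned long long pt_pages  = pt_bytes / page; /* 2^10 pages of page table */
    unsigned long long top_bytes = pt_pages * pte;  /* 4 KB top-level table     */

    printf("pages=%llu table=%llu B table pages=%llu top=%llu B\n",
           pages, pt_bytes, pt_pages, top_bytes);
    return 0;
}
```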
Problem 2
Consider a three-level page table with a 36-bit physical address and 2, 9, 9, 12 bits for the first level, second level, third level, and page offset respectively.
The page frame size is 4 KB and a PTE is 4 bytes.
What is the number of bits required to address each next-level table or page frame?
Solution
First, find the number of bits required to address a single page frame.
Number of frames in main memory = 2^36 / 2^12 = 2^24
So we need 24 bits for addressing a page frame.
Since each page table at each level is itself just a page frame addressed by a PTE, we need 24 bits to address each page frame at every level.
The first level is addressed by a per-process pointer; after that, each page table holds PTEs that address the next page frame.
Problem 3 Consider a three-level page table with
10, 8, 6, 8 bits for the first level, second level, third level, and page offset respectively.
1. What is the size of the page tables for a process of 256K?
Solution
Fig. three level page table organization for prob 3
Number of third-level page tables = (number of virtual pages) / (number of entries a third-level table can hold)
To calculate the number of third-level page tables required, we first find how many pages are required by the program, i.e. are present in its virtual address space:
number of pages in virtual address space = (size of program) / (page size) = 256K / 2^8 = 2^10
So there should be 2^10 entries across the third-level page tables, but each third-level page table can hold only 2^6 entries, so 2^10 / 2^6 = 16 third-level page tables will be required.
Number of second-level page tables = (number of third-level page tables) / (entries in each second-level page table) = 16 / 2^8 ~= 1
Number of first-level page tables = (number of second-level page tables) / (entries the first level can hold) = 1 / 2^10 ~= 1
So the size of the page tables (at 4 bytes per PTE) = 1024*4*1 + 256*4*1 + 64*4*16 = 9216 bytes
Note: if different sections such as data, code and stack are stored separately, two sections could get the same offset bits at the second level and would then need separate page tables; the number of second-level page tables changes, and the first level will have three different entries, one for each section.
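A quick check of these counts, assuming 4-byte PTEs as in the other problems:

```c
#include <stdio.h>

/* Bit split: 10 (first level), 8 (second), 6 (third), 8 (offset); 4-byte PTEs. */
int main(void)
{
    unsigned long proc = 256UL << 10;                 /* 256K process           */
    unsigned long page = 1UL << 8;                    /* 2^8-byte pages         */
    unsigned long pte  = 4;

    unsigned long pages = proc / page;                /* 2^10 = 1024 pages      */
    unsigned long l3 = (pages + (1UL << 6) - 1) >> 6; /* 16 third-level tables  */
    unsigned long l2 = (l3 + (1UL << 8) - 1) >> 8;    /* 1 second-level table   */
    unsigned long l1 = 1;                             /* one first-level table  */

    unsigned long bytes = l1 * (1UL << 10) * pte      /* 1024 entries * 4 bytes */
                        + l2 * (1UL << 8)  * pte      /*  256 entries * 4 bytes */
                        + l3 * (1UL << 6)  * pte;     /* 16 tables of 64 * 4    */
    printf("l3=%lu l2=%lu total=%lu bytes\n", l3, l2, bytes);
    return 0;
}
```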
Problem 4 Consider a system with a 36-bit virtual address space and 8K page size with 4-byte PTEs.
If the virtual address space of a process is 8G, analyze single-, two-, and three-level page table organizations.
Solution
For first level page table number of page table
//pending
Inverted Page Table (IPT) with hashing
A fixed-size page table, independent of the number of processes and the number of pages.
There is one entry for each physical frame in memory, so if the physical address is 32 bits with an 8-bit offset (2^8-byte frames), there are 2^24 entries in the page table.
The virtual address is still divided into page number + offset, but the translation differs:
The page-number portion is mapped by a hash function to a value that is a pointer into the inverted page table. Since more than one virtual page number can give the same hash value, a chaining technique is used.
Each page table entry holds the page number, a process identifier, control bits, and a chain pointer. The combination of PID and page number identifies a page in the virtual address space of a process; the chain pointer contains an index value addressing the same table.
Systems implementing an IPT have difficulty implementing shared pages.
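A minimal sketch of such a lookup with hashing and chaining; the table size, hash function, and field layout below are illustrative assumptions.

```c
#include <stdint.h>

#define IPT_ENTRIES (1u << 16)      /* one entry per physical frame (toy size) */
#define NO_CHAIN    0xFFFFFFFFu

typedef struct {
    uint32_t vpn;      /* virtual page number                 */
    uint32_t pid;      /* owning process                      */
    uint32_t chain;    /* index of next entry, or NO_CHAIN    */
    uint8_t  valid;
} ipt_entry;

static uint32_t ipt_hash(uint32_t pid, uint32_t vpn)
{
    return (vpn * 2654435761u ^ pid) & (IPT_ENTRIES - 1);   /* simple hash */
}

/* Returns the frame number (== table index) or -1 on a page fault. */
int ipt_lookup(const ipt_entry *ipt, uint32_t pid, uint32_t vpn)
{
    uint32_t i = ipt_hash(pid, vpn);
    while (i != NO_CHAIN) {
        if (ipt[i].valid && ipt[i].pid == pid && ipt[i].vpn == vpn)
            return (int)i;           /* entry index is the physical frame number */
        i = ipt[i].chain;            /* follow the collision chain               */
    }
    return -1;                       /* not resident: page fault                 */
}
```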